Tales From A Lazy Fat DBA

Den of an Oracle DB Performance freak & a fan of PostgreSQL & Cassandra …. \,,/

Prashant Dixit is the FatDBA
  • Disclaimer!

    FatDBA or Oracle ‘Ant’ is an independent web blog/site. The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned in this website may be the trademarks of their respective owners.

Posts Tagged ‘performance’

Collecting Exadata Cell Performance Statistics using ‘Cellperfdiag.sql’

Posted by FatDBA on July 22, 2018

Hi Guys,

Last week we finished a migration for one of our customers, who had recently purchased an Oracle Exadata machine (X6-2). Once that complex data movement completed successfully, we observed some great performance improvements without actually making any changes, simply the Exadata features in action.

But it's not that the Exa machines are infrangible or unbreakable. There are many times when you actually have to understand cell/compute node or system-wide performance, and the output from tools like Sundiag, ExaWatcher, Exachk, TFA, the SOS report etc. is not sufficient.

For cell performance diagnostics, Oracle provides a script to collect Exadata cell performance statistics, which you can use to interpret such issues.

The output of the script is divided into several sections, for example:
1. Cell-specific parameters
2. Top cell waits with wait times (worst times)
3. Cell configuration details

The script is documented in: Script to Collect Exadata Cell Performance Information (cellperfdiag.sql) (Doc ID 2038493.1).
Below is the output from one of the cell nodes of a lower environment.


CELLPERFDIAG DATA FOR DixitLab_July19_0257

IMPORTANT PARAMETERS RELATING TO CELL PERFORMANCE:

INST_ID NAME                                     VALUE
------- ---------------------------------------- ----------------------------------------
      1 cell_offload_compaction                  ADAPTIVE
      1 cell_offload_decryption                  TRUE
      1 cell_offload_plan_display                PDTO
      1 cell_offload_processing                  TRUE
      2 cell_offload_compaction                  ADAPTIVE
      2 cell_offload_decryption                  TRUE
      2 cell_offload_plan_display                PDTO
      2 cell_offload_processing                  TRUE

TOP 20 CURRENT CELL WAITS

This looks at current cell waits; it may not return any data.

ASH CELL PERFORMANCE SUMMARY

This query will look at the average cell wait times for each cell in ASH

CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------------------------ --------------- --------------
192.199.66.19;192.199.66.20       1185605081.0        96124.9
192.199.66.21;192.199.66.22       1148823479.0        88371.0
192.199.66.17;192.199.66.18       1048776677.0        82115.3
192.199.66.23;192.199.66.24        927836650.0        76604.7
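For reference, the query behind this section can be sketched roughly as below. This is my own approximation, not the exact text of the MOS script: for cell I/O wait events, P1 in ASH is the cell hash, so joining to V$CELL translates it into a cell path (TIME_WAITED is in microseconds).

```sql
-- Hedged sketch (not the exact MOS script): average cell wait time per cell.
-- For 'cell%' wait events, ASH P1 = CELLHASH#, resolved via V$CELL.
SELECT c.cell_path,
       SUM(a.time_waited)           AS total_wait_time,
       ROUND(AVG(a.time_waited), 1) AS avg_wait_time
FROM   gv$active_session_history a,
       v$cell c
WHERE  a.event LIKE 'cell%'
AND    a.time_waited > 0
AND    a.p1 = c.cell_hashval
GROUP  BY c.cell_path
ORDER  BY total_wait_time DESC;
```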

20 WORST CELL PERFORMANCE MINUTES IN ASH:

APPROACH: These are the minutes where the avg cell perf time
was the highest.  See which cell had the longest waits and
during what minute.

MINUTE       CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------ ------------------------------ --------------- --------------
July18_1308   192.199.66.21;192.199.66.22         28445866.0       536714.5
July18_1158   192.199.66.19;192.199.66.20         25685919.0       524202.4
July18_1218   192.199.66.23;192.199.66.24           496960.0       496960.0
July18_2008   192.199.66.21;192.199.66.22         21481465.0       488215.1
July18_1618   192.199.66.19;192.199.66.20         56796413.0       485439.4
July18_1638   192.199.66.17;192.199.66.18         23823342.0       467124.4
July18_2004   192.199.66.21;192.199.66.22         28935643.0       466703.9
July18_1207   192.199.66.17;192.199.66.18          1779234.0       444808.5
July19_0042   192.199.66.19;192.199.66.20           888334.0       444167.0
July18_1654   192.199.66.23;192.199.66.24          2619836.0       436639.3
July18_2008   192.199.66.19;192.199.66.20         11634865.0       430920.9
July18_1324   192.199.66.19;192.199.66.20         66537869.0       423808.1
July18_1157   192.199.66.21;192.199.66.22         22725628.0       420845.0
July19_0028   192.199.66.21;192.199.66.22           841081.0       420540.5
July18_1323   192.199.66.19;192.199.66.20         76790471.0       419620.1
July18_1157   192.199.66.19;192.199.66.20         28103203.0       401474.3
July18_1159   192.199.66.17;192.199.66.18         14050389.0       401439.7
July18_1158   192.199.66.23;192.199.66.24         10054891.0       386726.6
July18_1639   192.199.66.17;192.199.66.18         79265611.0       386661.5
July18_2009   192.199.66.21;192.199.66.22         20335679.0       383692.1
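The per-minute breakdown above can be approximated by grouping the same ASH data on a minute label; again a hedged sketch of my own, not the script's exact SQL (the MonDD_HH24MI format matches the MINUTE column shown above).

```sql
-- Hedged sketch: worst 20 minutes by average cell wait time in ASH.
SELECT *
FROM  (SELECT TO_CHAR(a.sample_time, 'MonDD_HH24MI') AS minute,
              c.cell_path,
              SUM(a.time_waited)                     AS total_wait_time,
              ROUND(AVG(a.time_waited), 1)           AS avg_wait_time
       FROM   gv$active_session_history a,
              v$cell c
       WHERE  a.event LIKE 'cell%'
       AND    a.time_waited > 0
       AND    a.p1 = c.cell_hashval
       GROUP  BY TO_CHAR(a.sample_time, 'MonDD_HH24MI'), c.cell_path
       ORDER  BY avg_wait_time DESC)
WHERE  ROWNUM <= 20;
```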

50 LONGEST CELL WAITS IN ASH ORDERED BY WAIT TIME

APPROACH: These are the top 50 individual cell waits in ASH
in wait time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
19/JULY/18 08:04:19.964 PM 192.199.66.21;192.199.66.22          2 cell smart incremental backup            2990700133          0          0     5560085
19/JULY/18 08:04:18.964 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     3333391
19/JULY/18 04:38:52.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2653079
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2486538
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2461036
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2447901
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2443358
19/JULY/18 11:57:14.811 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3718979034      32768     2433342
19/JULY/18 02:53:14.279 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431871
19/JULY/18 01:24:36.080 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2430882
19/JULY/18 01:36:57.174 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2429672
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2427505
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2427165
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2426099
19/JULY/18 01:24:40.011 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2417661
19/JULY/18 01:24:40.090 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2416163
19/JULY/18 01:24:47.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2396291
19/JULY/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370
19/JULY/18 01:23:36.971 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2370848
19/JULY/18 04:48:26.040 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213  842883291      32768     2359311
19/JULY/18 03:35:36.076 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2344972
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2330945
19/JULY/18 11:57:52.881 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2310010
19/JULY/18 04:38:55.395 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2304803
19/JULY/18 01:37:01.537 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2284322
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2275197
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2250389
19/JULY/18 11:58:11.911 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2256514768      32768     2176213
19/JULY/18 01:23:46.981 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2149317
19/JULY/18 01:24:54.380 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 1783777120      32768     2138193
19/JULY/18 01:24:14.051 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2107128
19/JULY/18 08:07:04.178 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2102692
19/JULY/18 08:04:28.984 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2098081
19/JULY/18 02:43:04.647 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2367494631      32768     2090428
19/JULY/18 01:23:47.921 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2077418
19/JULY/18 01:24:48.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2056411
19/JULY/18 01:24:48.021 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2043911
19/JULY/18 11:57:57.891 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2033684
19/JULY/18 08:03:38.870 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 2251483371      32768     2032547
19/JULY/18 01:37:01.174 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2031799
19/JULY/18 08:03:39.874 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  474454945      32768     2024898
19/JULY/18 11:58:31.931 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  474454945      32768     2013268
19/JULY/18 03:53:00.757 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     2004020
19/JULY/18 04:39:03.405 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     1997819
19/JULY/18 04:18:40.331 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1994866
19/JULY/18 01:23:34.861 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1988301
19/JULY/18 04:38:57.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     1985840
19/JULY/18 01:23:17.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     1983376
19/JULY/18 01:08:56.066 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     1965134
19/JULY/18 04:39:39.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1956491
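The individual-wait listing can be sketched in the same spirit; this is an approximation of mine, not the MOS script itself. For cell I/O events, P1/P2/P3 carry the cell hash, disk hash, and byte count, which is why those columns appear in the output above.

```sql
-- Hedged sketch: 50 longest individual cell waits in ASH,
-- with raw event parameters (P1 = cell hash, P2 = disk hash, P3 = bytes).
SELECT *
FROM  (SELECT a.sample_time, c.cell_path, a.inst_id, a.event,
              a.p1 AS cellhash#, a.p2 AS diskhash#, a.p3 AS bytes,
              a.time_waited
       FROM   gv$active_session_history a,
              v$cell c
       WHERE  a.event LIKE 'cell%'
       AND    a.time_waited > 0
       AND    a.p1 = c.cell_hashval
       ORDER  BY a.time_waited DESC)
WHERE  ROWNUM <= 50;
```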

100 LONGEST CELL WAITS IN ASH ORDERED BY SAMPLE TIME

APPROACH: These are the top 100 individual cell waits in ASH
in sample time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
19/JULY/18 11:57:14.811 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3718979034      32768     2433342
19/JULY/18 11:57:27.821 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  552535141      32768     1627154
19/JULY/18 11:57:49.881 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     1871961
19/JULY/18 11:57:52.881 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     1634342
19/JULY/18 11:57:52.881 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2310010
19/JULY/18 11:57:57.891 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2033684
19/JULY/18 11:58:11.911 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2256514768      32768     2176213
19/JULY/18 11:58:31.931 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  474454945      32768     2013268
19/JULY/18 11:58:56.001 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1433462179      32768     1665222
19/JULY/18 11:59:11.149 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3958490174      32768     1796201
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2426099
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2443358
19/JULY/18 12:02:25.487 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 1783777120      32768     1667222
19/JULY/18 12:36:22.441 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     1817473
19/JULY/18 01:08:11.966 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     1824501
19/JULY/18 01:08:53.056 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  758773684      32768     1633915
19/JULY/18 01:08:56.066 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     1965134
19/JULY/18 01:22:18.849 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1692084
19/JULY/18 01:22:35.760 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1404181515      32768     1927461
19/JULY/18 01:22:35.760 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1404181515      32768     1892900
19/JULY/18 01:22:35.871 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1817924
19/JULY/18 01:23:08.830 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     1798727
19/JULY/18 01:23:17.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     1983376
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2461036
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2275197
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1340519488      32768     1600315
19/JULY/18 01:23:21.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     1806676
19/JULY/18 01:23:32.971 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1916487
19/JULY/18 01:23:34.861 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1988301
19/JULY/18 01:23:36.971 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2370848
19/JULY/18 01:23:46.981 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2149317
19/JULY/18 01:23:47.921 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2077418
19/JULY/18 01:24:09.051 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1716730
19/JULY/18 01:24:14.051 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2107128
19/JULY/18 01:24:36.080 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2430882
19/JULY/18 01:24:40.011 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2417661
19/JULY/18 01:24:40.090 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2416163
19/JULY/18 01:24:47.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2396291
19/JULY/18 01:24:48.021 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2043911
19/JULY/18 01:24:48.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2056411
19/JULY/18 01:24:54.380 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 1783777120      32768     2138193
19/JULY/18 01:35:40.244 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2381222600      32768     1670138
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2330945
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2427165
19/JULY/18 01:36:57.174 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     1872516
19/JULY/18 01:36:57.174 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2429672
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2250389
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2427505
19/JULY/18 01:37:01.174 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2031799
19/JULY/18 01:37:01.174 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     1782826
19/JULY/18 01:37:01.537 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2284322
19/JULY/18 01:37:04.587 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     1609551
19/JULY/18 02:43:01.717 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1705580
19/JULY/18 02:43:04.647 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2367494631      32768     2090428
19/JULY/18 02:43:04.757 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1667922
19/JULY/18 02:43:11.657 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  552535141      32768     1700677
19/JULY/18 02:53:14.279 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431871

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
19/JULY/18 03:09:17.091 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     1718584
19/JULY/18 03:09:22.101 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1404181515      32768     1924659
19/JULY/18 03:09:23.820 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     1609377
19/JULY/18 03:35:36.076 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2344972
19/JULY/18 03:42:53.196 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1705247
19/JULY/18 03:43:09.256 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1340519488      32768     1621901
19/JULY/18 03:53:00.757 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     2004020
19/JULY/18 03:53:03.356 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2701596410      32768     1951623
19/JULY/18 03:53:04.767 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     1911016
19/JULY/18 04:02:42.355 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1122005524      32768     1600296
19/JULY/18 04:18:40.331 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1994866
19/JULY/18 04:18:41.331 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     1680223
19/JULY/18 04:18:47.341 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     1676801
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2447901
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2486538
19/JULY/18 04:38:52.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2653079
19/JULY/18 04:38:55.395 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2304803
19/JULY/18 04:38:57.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     1985840
19/JULY/18 04:39:03.405 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     1997819
19/JULY/18 04:39:26.743 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 1433462179      32768     1686065
19/JULY/18 04:39:39.455 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     1770436
19/JULY/18 04:39:39.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1956491
19/JULY/18 04:39:40.763 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     1719451
19/JULY/18 04:39:45.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     1669777
19/JULY/18 04:39:53.783 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     1606474
19/JULY/18 04:47:20.930 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 1467922874      32768     1598316
19/JULY/18 04:47:21.101 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1723089
19/JULY/18 04:48:26.040 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213  842883291      32768     2359311
19/JULY/18 05:17:36.317 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  125724152      32768     1743703
19/JULY/18 05:23:11.293 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  552535141      32768     1631948
19/JULY/18 05:23:19.727 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     1741381
19/JULY/18 06:23:57.158 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1374369774      32768     1769798
19/JULY/18 08:03:38.870 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     1622627
19/JULY/18 08:03:38.870 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 2251483371      32768     2032547
19/JULY/18 08:03:39.874 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  474454945      32768     2024898
19/JULY/18 08:03:39.874 PM 192.199.66.23;192.199.66.24          2 cell smart incremental backup            3802949213          0          0     1775912
19/JULY/18 08:04:18.964 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     3333391
19/JULY/18 08:04:19.964 PM 192.199.66.21;192.199.66.22          2 cell smart incremental backup            2990700133          0          0     5560085
19/JULY/18 08:04:28.984 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2098081
19/JULY/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370
19/JULY/18 08:07:04.178 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2102692
19/JULY/18 08:09:11.705 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  870191132      32768     1753415
19/JULY/18 10:13:12.946 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1777250

ASH HISTORY CELL PERFORMANCE SUMMARY

This query will look at the average cell wait times for each cell in ASH

CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------------------------ --------------- --------------
192.199.66.19;192.199.66.20       4320370317.0       144803.9
192.199.66.17;192.199.66.18       3720276327.0       126342.3
192.199.66.21;192.199.66.22       3302364789.0       116198.6
192.199.66.23;192.199.66.24       2214253857.0        86457.1
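The "history" sections differ from the earlier ones in that they read the AWR-retained ASH samples, so they cover a much longer window than the in-memory buffer. A hedged sketch of my own (not the script's exact SQL):

```sql
-- Hedged sketch: same per-cell summary, but against the AWR-retained
-- ASH history (DBA_HIST_ACTIVE_SESS_HISTORY) for a longer time window.
SELECT c.cell_path,
       SUM(h.time_waited)           AS total_wait_time,
       ROUND(AVG(h.time_waited), 1) AS avg_wait_time
FROM   dba_hist_active_sess_history h,
       v$cell c
WHERE  h.event LIKE 'cell%'
AND    h.time_waited > 0
AND    h.p1 = c.cell_hashval
GROUP  BY c.cell_path
ORDER  BY total_wait_time DESC;
```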

20 WORST CELL PERFORMANCE MINUTES IN ASH HISTORY:

APPROACH: These are the minutes where the avg cell perf time
was the highest.  See which cell had the longest waits and
during what time minute.

MINUTE       CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------ ------------------------------ --------------- --------------
July17_0458   192.199.66.21;192.199.66.22          2403872.0      2403872.0
July17_0454   192.199.66.23;192.199.66.24          2150347.0      2150347.0
July09_1416   192.199.66.17;192.199.66.18          1887469.0      1887469.0
July08_2209   192.199.66.21;192.199.66.22          1693688.0      1693688.0
July17_0200   192.199.66.23;192.199.66.24          1597102.0      1597102.0
July17_1655   192.199.66.19;192.199.66.20          1567615.0      1567615.0
July11_0400   192.199.66.19;192.199.66.20          1373801.0      1373801.0
July16_1343   192.199.66.17;192.199.66.18          1356319.0      1356319.0
July07_2001   192.199.66.21;192.199.66.22          6774340.0      1354868.0
July10_2006   192.199.66.23;192.199.66.24         10787174.0      1348396.8
July04_1431   192.199.66.19;192.199.66.20          5180228.0      1295057.0
July17_0454   192.199.66.19;192.199.66.20          2579203.0      1289601.5
July09_1901   192.199.66.19;192.199.66.20          1260900.0      1260900.0
July11_0400   192.199.66.23;192.199.66.24          1255291.0      1255291.0
July08_1437   192.199.66.19;192.199.66.20          2439786.0      1219893.0
July17_0457   192.199.66.23;192.199.66.24          1177614.0      1177614.0
July04_0906   192.199.66.23;192.199.66.24          1153969.0      1153969.0
July18_0123   192.199.66.17;192.199.66.18          1153542.0      1153542.0
July17_1957   192.199.66.21;192.199.66.22          1112349.0      1112349.0
July11_0958   192.199.66.23;192.199.66.24          4421561.0      1105390.3

50 LONGEST CELL WAITS IN ASH HISTORY ORDERED BY WAIT TIME

APPROACH: These are the top 50 individual cell waits in ASH
history in wait time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
07/July/18 08:01:45.575 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     4381405
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     3489839
19/July/18 08:24:30.057 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     3027146
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2994219
14/July/18 11:07:27.489 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2583166
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2572340
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2524157
09/July/18 10:30:37.201 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2511210
09/July/18 07:04:57.612 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     2508541
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2504764
15/July/18 02:23:59.797 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     2498919
09/July/18 07:15:15.200 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2496073
16/July/18 10:01:28.060 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2481597
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2481323
09/July/18 11:37:22.166 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2478192
16/July/18 12:18:57.851 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2476915
16/July/18 03:00:25.122 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2474114
16/July/18 07:54:01.716 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  758773684      32768     2473662
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2473365
15/July/18 01:42:56.817 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2256514768      32768     2472463
08/July/18 05:38:29.082 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3704102872      32768     2466691
17/July/18 02:28:27.460 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2464447
09/July/18 11:07:27.164 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1122005524      32768     2463370
17/July/18 02:43:03.640 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2460091
16/July/18 03:00:45.182 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2459368
16/July/18 07:37:59.702 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2457007
14/July/18 11:07:25.301 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4016617326      32768     2453596
17/July/18 02:40:02.464 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     2449781
09/July/18 02:20:37.918 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2449762
11/July/18 04:09:24.827 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2449147
14/July/18 09:45:22.608 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2447554
16/July/18 10:52:07.321 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3958490174      32768     2444207
14/July/18 04:49:22.691 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2441217
04/July/18 02:31:23.455 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2441112
14/July/18 10:03:34.449 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2440465
04/July/18 03:07:31.342 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2438825
14/July/18 03:32:54.840 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2437829
17/July/18 02:38:52.304 AM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2263141931      32768     2437640
16/July/18 02:29:45.206 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2437155
16/July/18 08:15:35.001 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2436056
16/July/18 10:38:49.813 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2435774
09/July/18 08:12:07.171 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  185394440      32768     2435748
15/July/18 02:44:02.345 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2435230
16/July/18 07:38:55.929 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431597
16/July/18 10:39:39.873 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2430666
10/July/18 03:01:42.766 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2430197
15/July/18 06:11:20.764 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2429871
16/July/18 10:16:11.295 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1019816052      32768     2429717
08/July/18 05:38:19.072 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2263141931      32768     2429692
16/July/18 10:01:38.100 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2428275

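For reference, the ASH listing above can be reproduced with a query along these lines. This is a minimal sketch (not the exact cellperfdiag.sql text), assuming Diagnostics Pack licensing for DBA_HIST_ACTIVE_SESS_HISTORY and the standard P1/P2/P3 parameter meanings for cell I/O wait events:

```sql
-- Sketch: top 50 individual cell waits recorded in ASH history,
-- ordered by wait time. CELL_PATH comes from joining P1 (cell hash)
-- to V$CELL.CELL_HASHVAL.
SELECT * FROM (
  SELECT ash.sample_time,
         c.cell_path,
         ash.instance_number AS inst_id,
         ash.event,
         ash.p1 AS cellhash#,   -- P1 = cell hash value
         ash.p2 AS diskhash#,   -- P2 = disk hash value
         ash.p3 AS bytes,       -- P3 = bytes requested
         ash.time_waited
  FROM   dba_hist_active_sess_history ash,
         v$cell c
  WHERE  ash.event LIKE 'cell%'
  AND    ash.time_waited > 0
  AND    c.cell_hashval = ash.p1
  ORDER BY ash.time_waited DESC
)
WHERE ROWNUM <= 50;
```

The "100 LONGEST" listing that follows is essentially the same query ordered by SAMPLE_TIME instead of TIME_WAITED.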
100 LONGEST CELL WAITS IN ASH HISTORY ORDERED BY SAMPLE TIME

APPROACH: These are the top 100 individual cell waits in ASH
history in sample time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
04/July/18 02:30:43.598 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1374369774      32768     2424167
04/July/18 03:07:31.342 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2438825
04/July/18 10:29:07.175 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2417864
04/July/18 10:29:47.246 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1233700156      32768     2339031
04/July/18 02:31:23.455 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2441112
07/July/18 08:01:45.575 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     4381405
08/July/18 01:36:44.865 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2420689
08/July/18 01:36:44.865 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2330502
08/July/18 01:36:53.062 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 1783777120      32768     2420230
08/July/18 04:23:25.390 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2410974
08/July/18 04:34:13.379 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2256514768      32768     2414668
08/July/18 05:17:46.103 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 1783777120      32768     2319802
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2481323
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2504764
08/July/18 05:38:19.072 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2263141931      32768     2429692
08/July/18 05:38:29.082 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3704102872      32768     2466691
08/July/18 05:38:29.082 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3616317741      32768     2325172
09/July/18 10:13:18.007 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2418070
09/July/18 10:13:18.007 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1404181515      32768     2293017
09/July/18 10:13:23.693 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2263141931      32768     2355873
09/July/18 10:13:28.017 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2359674
09/July/18 10:30:37.201 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2511210
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2524157
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2383155
09/July/18 11:07:27.164 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1122005524      32768     2463370
09/July/18 11:37:22.166 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2478192
09/July/18 01:57:19.921 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     2337878
09/July/18 02:20:37.918 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2449762
09/July/18 06:51:04.415 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3266942716      32768     2303697
09/July/18 07:04:57.612 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     2508541
09/July/18 07:15:15.200 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2496073
09/July/18 08:12:07.171 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  185394440      32768     2435748
10/July/18 03:01:42.766 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2430197
10/July/18 01:38:20.572 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1122206058      32768     2396284
10/July/18 08:06:28.912 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3616317741      32768     2425901
10/July/18 08:06:37.777 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 3704102872      32768     2423546
11/July/18 04:09:24.827 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2449147
14/July/18 09:45:22.608 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2447554
14/July/18 10:03:34.449 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2440465
14/July/18 10:03:54.460 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2401321
14/July/18 11:07:25.301 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4016617326      32768     2453596
14/July/18 11:07:27.489 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2583166
14/July/18 01:03:23.248 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3958490174      32768     2314095
14/July/18 03:32:54.840 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2437829
14/July/18 04:49:22.691 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2441217
15/July/18 02:23:59.797 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     2498919
15/July/18 02:28:01.245 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2418245
15/July/18 01:42:26.787 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2263141931      32768     2418862
15/July/18 01:42:56.817 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2256514768      32768     2472463
15/July/18 02:44:02.345 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2435230
15/July/18 05:23:22.759 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1404181515      32768     2424663
15/July/18 06:11:20.764 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2429871
15/July/18 06:12:20.859 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1340519488      32768     2299402
16/July/18 02:29:45.206 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2437155
16/July/18 02:31:14.605 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2425509
16/July/18 02:35:59.882 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3958490174      32768     2424458
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     2417036
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     3489839
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2994219
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2572340
16/July/18 03:00:25.122 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2474114
16/July/18 03:00:45.182 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2459368
16/July/18 03:01:07.779 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  870191132      32768     2338236
16/July/18 09:36:02.169 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2395522
16/July/18 10:01:28.060 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2481597
16/July/18 10:01:38.100 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2428275
16/July/18 10:16:11.295 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1019816052      32768     2429717
16/July/18 10:38:49.813 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2435774
16/July/18 10:39:39.873 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2430666
16/July/18 10:39:39.873 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2414021
16/July/18 10:52:07.321 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3958490174      32768     2444207
16/July/18 11:47:58.536 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2381222600      32768     2422526
16/July/18 12:18:57.851 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2476915
16/July/18 05:01:01.612 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     2351090
16/July/18 06:55:13.320 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2375834
16/July/18 06:55:13.320 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2414035
16/July/18 06:56:23.443 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2322248
16/July/18 07:37:59.702 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2457007
16/July/18 07:37:59.702 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2417497
16/July/18 07:38:55.929 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431597
16/July/18 07:54:01.716 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  758773684      32768     2473662
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2419493
16/July/18 08:15:35.001 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2436056
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2473365
17/July/18 02:03:05.401 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2425350
17/July/18 02:38:22.270 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2344538
17/July/18 02:38:49.166 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2423420
17/July/18 02:38:49.166 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3256856773      32768     2419184
17/July/18 02:38:52.304 AM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2263141931      32768     2437640
17/July/18 02:38:59.176 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1433462179      32768     2415119
17/July/18 02:40:02.464 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     2449781
17/July/18 02:43:03.640 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2460091
17/July/18 04:58:17.697 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     2403872
17/July/18 02:24:05.496 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2381222600      32768     2420104
17/July/18 02:28:27.460 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2464447
17/July/18 04:39:42.871 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     2297049
19/July/18 08:24:30.057 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     3027146
19/July/18 09:55:04.470 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1340519488      32768     2363513
19/July/18 04:38:55.395 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2304803
19/July/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370

AWR CELL DISK UTILIZATION

APPROACH: This query only works in 12.1 and above.  It looks in the
AWR history tables at cell disk utilization for some of the worst
periods.  Top 100 disk utils.
DISK_UTILIZATION_SUM: Sum of the per-minute disk utilization metrics.
IO_REQUESTS_SUM: Sum of the per-minute IOPS.
IO_MB_SUM: Sum of the per-minute I/O throughput metrics, in megabytes per second.

   SNAP_ID BEGIN               END                 CELL_NAME                      DISK_NAME                      DISK_UTILIZATION_SUM IO_REQUESTS_SUM  IO_MB_SUM
---------- ------------------- ------------------- ------------------------------ ------------------------------ -------------------- --------------- ----------
     22669 July13_1300          July13_1330          192.199.66.17;192.199.66.18    /dev/sda                                       1977           11302       1938
     22669 July13_1300          July13_1330          192.199.66.17;192.199.66.18    /dev/sdb                                       1977           11302       1938
     22669 July13_1300          July13_1330          192.199.66.17;192.199.66.18    /dev/sdm                                       1977           11302       1938
     22669 July13_1300          July13_1330          192.199.66.19;192.199.66.20    /dev/sda                                       1926           11343       1979
     22669 July13_1300          July13_1330          192.199.66.19;192.199.66.20    /dev/sdb                                       1926           11343       1979
     22669 July13_1300          July13_1330          192.199.66.19;192.199.66.20    /dev/sdm                                       1926           11343       1979
     22669 July13_1300          July13_1330          192.199.66.21;192.199.66.22    /dev/sda                                       1966           11227       1945
     22669 July13_1300          July13_1330          192.199.66.21;192.199.66.22    /dev/sdb                                       1966           11227       1945
     22669 July13_1300          July13_1330          192.199.66.21;192.199.66.22    /dev/sdm                                       1966           11227       1945
     22669 July13_1300          July13_1330          192.199.66.23;192.199.66.24    /dev/sda                                       1888           11353       1920
     22669 July13_1300          July13_1330          192.199.66.23;192.199.66.24    /dev/sdb                                       1888           11353       1920
     22669 July13_1300          July13_1330          192.199.66.23;192.199.66.24    /dev/sdm                                       1888           11353       1920
     22827 July16_2000          July16_2030          192.199.66.17;192.199.66.18    /dev/sda                                       2068           11683       1428
     22827 July16_2000          July16_2030          192.199.66.17;192.199.66.18    /dev/sdb                                       2068           11683       1428
     22827 July16_2000          July16_2030          192.199.66.17;192.199.66.18    /dev/sdm                                       2068           11683       1428
     22827 July16_2000          July16_2030          192.199.66.19;192.199.66.20    /dev/sda                                       2089           11588       1443
     22827 July16_2000          July16_2030          192.199.66.19;192.199.66.20    /dev/sdb                                       2089           11588       1443
     22827 July16_2000          July16_2030          192.199.66.19;192.199.66.20    /dev/sdm                                       2089           11588       1443
     22827 July16_2000          July16_2030          192.199.66.21;192.199.66.22    /dev/sda                                       1975           11095       1419
     22827 July16_2000          July16_2030          192.199.66.21;192.199.66.22    /dev/sdb                                       1975           11095       1419
     22827 July16_2000          July16_2030          192.199.66.21;192.199.66.22    /dev/sdm                                       1975           11095       1419
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    /dev/sda                                       3808            9676       3421
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    /dev/sdb                                       3808            9676       3421
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    /dev/sdm                                       3808            9676       3421
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_00_monkeynode11_adm                        1929            4811       1713
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_01_monkeynode11_adm                        1879            4623       1703
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_02_monkeynode11_adm                        1793            5186       1674
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_03_monkeynode11_adm                        1808            4887       1633
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_04_monkeynode11_adm                        1923            4612       1719
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_05_monkeynode11_adm                        1898            4812       1750
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_06_monkeynode11_adm                        1887            4748       1677
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_07_monkeynode11_adm                        1905            4917       1720
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_08_monkeynode11_adm                        1792            4997       1687
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_09_monkeynode11_adm                        1623            4926       1501
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_10_monkeynode11_adm                        1921            4466       1739
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_11_monkeynode11_adm                        1949            4469       1723
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    /dev/sda                                       4075           11529       5240
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    /dev/sdb                                       4075           11529       5240
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    /dev/sdm                                       4075           11529       5240
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_00_monkeynode12_adm                        2033            5673       2614
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_01_monkeynode12_adm                        2040            5626       2622
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_02_monkeynode12_adm                        1942            5636       2452
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_03_monkeynode12_adm                        1897            5623       2351
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_04_monkeynode12_adm                        2051            5586       2591
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_05_monkeynode12_adm                        1862            5838       2297
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_06_monkeynode12_adm                        2058            5355       2581
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_07_monkeynode12_adm                        2063            5668       2592
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_08_monkeynode12_adm                        2076            5506       2676
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_09_monkeynode12_adm                        2066            5829       2619
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_10_monkeynode12_adm                        2046            5322       2562
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_11_monkeynode12_adm                        1972            5429       2487
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    /dev/sda                                       3710            9738       3413
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    /dev/sdb                                       3710            9738       3413
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    /dev/sdm                                       3710            9738       3413
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_00_monkeynode13_adm                        1874            4751       1752
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_01_monkeynode13_adm                        1835            4755       1661
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_02_monkeynode13_adm                        1772            5021       1623

   SNAP_ID BEGIN               END                 CELL_NAME                      DISK_NAME                      DISK_UTILIZATION_SUM IO_REQUESTS_SUM  IO_MB_SUM
---------- ------------------- ------------------- ------------------------------ ------------------------------ -------------------- --------------- ----------
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_03_monkeynode13_adm                        1801            4860       1627
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_04_monkeynode13_adm                        1729            4834       1603
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_05_monkeynode13_adm                        2094            6325       2698
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_06_monkeynode13_adm                        1804            4935       1633
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_07_monkeynode13_adm                        1943            4356       1757
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_08_monkeynode13_adm                        1870            4385       1673
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_09_monkeynode13_adm                        1690            4848       1525
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_10_monkeynode13_adm                        1908            5668       1679
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_11_monkeynode13_adm                        1858            4699       1681
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    /dev/sda                                       3677            9714       3381
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    /dev/sdb                                       3677            9714       3381
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    /dev/sdm                                       3677            9714       3381
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_00_monkeynode14_adm                        1853            4752       1685
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_01_monkeynode14_adm                        1822            4750       1693
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_02_monkeynode14_adm                        1874            5117       1725
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_03_monkeynode14_adm                        1813            4593       1665
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_04_monkeynode14_adm                        1810            4736       1637
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_05_monkeynode14_adm                        1869            4575       1717
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_06_monkeynode14_adm                        1789            4769       1669
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_07_monkeynode14_adm                        1840            4634       1734
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_08_monkeynode14_adm                        1838            4480       1666
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_09_monkeynode14_adm                        1753            4656       1641
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_10_monkeynode14_adm                        1729            5019       1574
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_11_monkeynode14_adm                        1849            4579       1710
     22876 July17_2030          July17_2100          192.199.66.17;192.199.66.18    /dev/sda                                       2658           10231       2347
     22876 July17_2030          July17_2100          192.199.66.17;192.199.66.18    /dev/sdb                                       2658           10231       2347
     22876 July17_2030          July17_2100          192.199.66.17;192.199.66.18    /dev/sdm                                       2658           10231       2347
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    /dev/sda                                       3156           12112       4142
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    /dev/sdb                                       3156           12112       4142
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    /dev/sdm                                       3156           12112       4142
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_01_monkeynode12_adm                        1602            5917       2090
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_04_monkeynode12_adm                        1633            5773       2100
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_07_monkeynode12_adm                        1609            5877       2073
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_08_monkeynode12_adm                        1632            5778       2088
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_09_monkeynode12_adm                        1602            6016       2066
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_10_monkeynode12_adm                        1594            5964       2040
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    /dev/sda                                       2449           10793       2281
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    /dev/sdb                                       2449           10793       2281
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    /dev/sdm                                       2449           10793       2281
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    CD_05_monkeynode13_adm                        1569            6433       2025
     22876 July17_2030          July17_2100          192.199.66.23;192.199.66.24    /dev/sda                                       2657           10505       2409
     22876 July17_2030          July17_2100          192.199.66.23;192.199.66.24    /dev/sdb                                       2657           10505       2409
     22876 July17_2030          July17_2100          192.199.66.23;192.199.66.24    /dev/sdm                                       2657           10505       2409

DB_ID_FOR_CURRENT_DB
--------------------
          2490671309

CELL THREAD HISTORY - LAST FEW MINUTES

This query only works in 12.1 and above.

  COUNT(*) SQL_ID        CELL_NAME                      JOB_TYPE                         DATABASE_ID INSTANCE_ID
---------- ------------- ------------------------------ -------------------------------- ----------- -----------
      1598               192.199.66.19;192.199.66.20    UnidentifiedJob                            0           0
      1555               192.199.66.23;192.199.66.24    UnidentifiedJob                            0           0
      1237               192.199.66.17;192.199.66.18    UnidentifiedJob                            0           0
      1148               192.199.66.21;192.199.66.22    UnidentifiedJob                            0           0
       834               192.199.66.21;192.199.66.22    NetworkPoll                                0           0
       824               192.199.66.17;192.199.66.18    NetworkPoll                                0           0
       804               192.199.66.19;192.199.66.20    NetworkPoll                                0           0
       804               192.199.66.23;192.199.66.24    NetworkPoll                                0           0
       417               192.199.66.21;192.199.66.22    OCL System Message Thread                  0           0
       417               192.199.66.21;192.199.66.22    Storage index eviction                     0           0
       412               192.199.66.17;192.199.66.18    OCL System Message Thread                  0           0
       412               192.199.66.17;192.199.66.18    Storage index eviction                     0           0
       402               192.199.66.19;192.199.66.20    OCL System Message Thread                  0           0
       402               192.199.66.19;192.199.66.20    Storage index eviction                     0           0
       402               192.199.66.23;192.199.66.24    OCL System Message Thread                  0           0
       402               192.199.66.23;192.199.66.24    Storage index eviction                     0           0
       332               192.199.66.23;192.199.66.24    NetworkRead                                0           0
       281               192.199.66.21;192.199.66.22    NetworkRead                                0           0
       254               192.199.66.17;192.199.66.18    NetworkRead                                0           0
       193 9ujkk29vay3bz 192.199.66.23;192.199.66.24    PredicateDiskRead                  584354840           1
       168 9ujkk29vay3bz 192.199.66.17;192.199.66.18    PredicateDiskRead                  584354840           1
       165 9ujkk29vay3bz 192.199.66.19;192.199.66.20    PredicateDiskRead                  584354840           1
       152 9ujkk29vay3bz 192.199.66.21;192.199.66.22    PredicateDiskRead                  584354840           1
       135               192.199.66.19;192.199.66.20    NetworkRead                                0           0
       112               192.199.66.17;192.199.66.18    CacheGet                                   0           0
        86               192.199.66.19;192.199.66.20    CacheGet                                   0           0
        83               192.199.66.21;192.199.66.22    CacheGet                                   0           0
        66               192.199.66.23;192.199.66.24    CacheGet                                   0           0
        55 aym6pqm5uzd90 192.199.66.17;192.199.66.18    PredicateDiskRead                  584354840           1
        52 aym6pqm5uzd90 192.199.66.21;192.199.66.22    PredicateDiskRead                  584354840           1
        51 aym6pqm5uzd90 192.199.66.23;192.199.66.24    PredicateDiskRead                  584354840           1
        50 oj2tdhpjgnvus 192.199.66.23;192.199.66.24    CacheGet                           584354840           1
        47 aym6pqm5uzd90 192.199.66.19;192.199.66.20    PredicateDiskRead                  584354840           1
        37 oj2tdhpjgnvus 192.199.66.17;192.199.66.18    CacheGet                           584354840           1
        34 oj2tdhpjgnvus 192.199.66.19;192.199.66.20    CacheGet                           584354840           1
        29 oj2tdhpjgnvus 192.199.66.21;192.199.66.22    CacheGet                           584354840           1
        18               192.199.66.17;192.199.66.18    PredicateFilter                            0           0
        17               192.199.66.23;192.199.66.24    PredicateFilter                            0           0
        16               192.199.66.19;192.199.66.20    PredicateFilter                            0           0
        16 221fz5z4guyxu 192.199.66.17;192.199.66.18    PredicateDiskRead                 1377831170           1
        15               192.199.66.21;192.199.66.22    PredicateFilter                            0           0
        15               192.199.66.23;192.199.66.24    CachePut                                   0           0
        14 c10mc2tzkhbwb 192.199.66.23;192.199.66.24    PredicateDiskRead                 1377831170           1
        13               192.199.66.23;192.199.66.24    CachePut                          1377831170           1
        12               192.199.66.21;192.199.66.22    CachePut                                   0           0
        11               192.199.66.17;192.199.66.18    CachePut                                   0           0
        10               192.199.66.19;192.199.66.20    CachePut                                   0           0
        10               192.199.66.19;192.199.66.20    PredicateCacheGet                          0           0
         9               192.199.66.21;192.199.66.22    CachePut                           584354840           1
         9 221fz5z4guyxu 192.199.66.19;192.199.66.20    PredicateDiskRead                 1377831170           1
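
The thread-history summary above can be approximated with a query along these lines. This is only a sketch (the authoritative statement lives in the MOS script) built on the V$CELL_THREAD_HISTORY view, which is why the script reports it works only in 12.1 and above:

```sql
-- Sketch only: count recent cell server thread samples per cell and job type.
-- V$CELL_THREAD_HISTORY exists from 12.1 onwards.
SELECT COUNT(*), sql_id, cell_name, job_type, database_id, instance_id
  FROM v$cell_thread_history
 GROUP BY sql_id, cell_name, job_type, database_id, instance_id
 ORDER BY COUNT(*) DESC;
```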

CELL CONFIG


CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
192.199.66.17;192.199.66.18


  1.0
  1526659672886
  
  
    monkeynode11_adm
    remoteLoginEnabled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    1614NM70A0
    2
    ib0
    ib1
    0.016666666666666666
    192.199.66.17/22
    192.199.66.18/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 01"
    monkeynode11-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
    FALSE
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    18.0
    normal
    22 days, 1:03
    normal
  


192.199.66.19;192.199.66.20


CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------

  1.0
  1526662533109
  
  
    monkeynode12_adm
    remoteLoginEnabled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    1614NM705Y
    2
    ib0
    ib1
    0.0
    192.199.66.19/22
    192.199.66.20/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 02"
    monkeynode12-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    18.0
    normal
    22 days, 1:03
    normal
  


192.199.66.21;192.199.66.22


  1.0
  1526661149874

CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
  
  
    monkeynode13_adm
    remoteLoginEnabled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    9717PD519T
    2
    ib0
    ib1
    0.0
    192.199.66.21/22
    192.199.66.22/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 03"
    monkeynode13-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    18.0
    normal
    21 days, 23:03
    normal
  


192.199.66.23;192.199.66.24


  1.0
  1526660912516
  
  
    monkeynode14_adm

CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
    remoteLoginEnabled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    1635NM70Y9
    2
    ib0
    ib1
    0.016666666666666666
    192.199.66.23/22
    192.199.66.24/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 04"
    monkeynode14-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    17.0
    normal
    21 days, 22:09
    normal
  



IORM CONFIG


CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
192.199.66.17;192.199.66.18


  1.0
  1526659674006
  
  
    monkeynode11_adm_IORMPLAN
    
    
    basic
    active
  


192.199.66.19;192.199.66.20


  1.0
  1526662534081
  
  
    monkeynode12_adm_IORMPLAN
    
    
    basic
    active
  


192.199.66.21;192.199.66.22


  1.0
  1526661150877
  
  
    monkeynode13_adm_IORMPLAN
    
    
    basic
    active
  


192.199.66.23;192.199.66.24


  1.0
  1526660913655
  
  
    monkeynode14_adm_IORMPLAN
    
    

CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
    basic
    active
  



TIME
-----------------------
July19 02:57:21
1133011440 1122005524      32768     1719451
19/JULY/18 04:39:45.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     1669777
19/JULY/18 04:39:53.783 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     1606474
19/JULY/18 04:47:20.930 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 1467922874      32768     1598316
19/JULY/18 04:47:21.101 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1723089
19/JULY/18 04:48:26.040 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213  842883291      32768     2359311
19/JULY/18 05:17:36.317 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  125724152      32768     1743703
19/JULY/18 05:23:11.293 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  552535141      32768     1631948
19/JULY/18 05:23:19.727 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     1741381
19/JULY/18 06:23:57.158 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1374369774      32768     1769798
19/JULY/18 08:03:38.870 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     1622627
19/JULY/18 08:03:38.870 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 2251483371      32768     2032547
19/JULY/18 08:03:39.874 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  474454945      32768     2024898
19/JULY/18 08:03:39.874 PM 192.199.66.23;192.199.66.24          2 cell smart incremental backup            3802949213          0          0     1775912
19/JULY/18 08:04:18.964 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     3333391
19/JULY/18 08:04:19.964 PM 192.199.66.21;192.199.66.22          2 cell smart incremental backup            2990700133          0          0     5560085
19/JULY/18 08:04:28.984 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2098081
19/JULY/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370
19/JULY/18 08:07:04.178 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2102692
19/JULY/18 08:09:11.705 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  870191132      32768     1753415
19/JULY/18 10:13:12.946 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1777250

ASH HISTORY CELL PERFORMANCE SUMMARY

This query will look at the average cell wait times for each cell in ASH

CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------------------------ --------------- --------------
192.199.66.19;192.199.66.20       4320370317.0       144803.9
192.199.66.17;192.199.66.18       3720276327.0       126342.3
192.199.66.21;192.199.66.22       3302364789.0       116198.6
192.199.66.23;192.199.66.24       2214253857.0        86457.1
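
The per-cell summary above can be reproduced with a query roughly like the following. This is a sketch under assumptions, not the exact MOS script: it assumes the cell hash recorded in P1 of the 'cell%' wait events can be mapped back to a cell path via V$CELL:

```sql
-- Sketch only: total and average 'cell%' wait times per cell,
-- taken from AWR's ASH history (times are in microseconds).
SELECT c.cell_path,
       SUM(a.time_waited) AS total_wait_time,
       AVG(a.time_waited) AS avg_wait_time
  FROM dba_hist_active_sess_history a,
       v$cell c
 WHERE a.event LIKE 'cell%'
   AND a.time_waited > 0
   AND a.p1 = c.cell_hashval
 GROUP BY c.cell_path
 ORDER BY total_wait_time DESC;
```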

20 WORST CELL PERFORMANCE MINUTES IN ASH HISTORY:

APPROACH: These are the minutes where the avg cell perf time
was the highest.  See which cell had the longest waits and
during what time minute.

MINUTE       CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------ ------------------------------ --------------- --------------
July17_0458   192.199.66.21;192.199.66.22          2403872.0      2403872.0
July17_0454   192.199.66.23;192.199.66.24          2150347.0      2150347.0
July09_1416   192.199.66.17;192.199.66.18          1887469.0      1887469.0
July08_2209   192.199.66.21;192.199.66.22          1693688.0      1693688.0
July17_0200   192.199.66.23;192.199.66.24          1597102.0      1597102.0
July17_1655   192.199.66.19;192.199.66.20          1567615.0      1567615.0
July11_0400   192.199.66.19;192.199.66.20          1373801.0      1373801.0
July16_1343   192.199.66.17;192.199.66.18          1356319.0      1356319.0
July07_2001   192.199.66.21;192.199.66.22          6774340.0      1354868.0
July10_2006   192.199.66.23;192.199.66.24         10787174.0      1348396.8
July04_1431   192.199.66.19;192.199.66.20          5180228.0      1295057.0
July17_0454   192.199.66.19;192.199.66.20          2579203.0      1289601.5
July09_1901   192.199.66.19;192.199.66.20          1260900.0      1260900.0
July11_0400   192.199.66.23;192.199.66.24          1255291.0      1255291.0
July08_1437   192.199.66.19;192.199.66.20          2439786.0      1219893.0
July17_0457   192.199.66.23;192.199.66.24          1177614.0      1177614.0
July04_0906   192.199.66.23;192.199.66.24          1153969.0      1153969.0
July18_0123   192.199.66.17;192.199.66.18          1153542.0      1153542.0
July17_1957   192.199.66.21;192.199.66.22          1112349.0      1112349.0
July11_0958   192.199.66.23;192.199.66.24          4421561.0      1105390.3

50 LONGEST CELL WAITS IN ASH HISTORY ORDERED BY WAIT TIME

APPROACH: These are the top 50 individual cell waits in ASH
history in wait time order.
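
A hedged sketch of what such a top-50 query might look like (the authoritative version is in the MOS script); it relies on cell wait events recording the cell hash in P1, the disk hash in P2, and the byte count in P3:

```sql
-- Sketch only: the 50 individual cell waits with the longest wait times
-- recorded in AWR's ASH history.
SELECT * FROM (
    SELECT a.sample_time, c.cell_path, a.instance_number AS inst_id,
           a.event, a.p1 AS cellhash#, a.p2 AS diskhash#,
           a.p3 AS bytes, a.time_waited
      FROM dba_hist_active_sess_history a, v$cell c
     WHERE a.event LIKE 'cell%'
       AND a.time_waited > 0
       AND a.p1 = c.cell_hashval
     ORDER BY a.time_waited DESC
) WHERE ROWNUM <= 50;
```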

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
07/July/18 08:01:45.575 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     4381405
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     3489839
19/JULY/18 08:24:30.057 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     3027146
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2994219
14/July/18 11:07:27.489 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2583166
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2572340
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2524157
09/July/18 10:30:37.201 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2511210
09/July/18 07:04:57.612 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     2508541
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2504764
15/July/18 02:23:59.797 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     2498919
09/July/18 07:15:15.200 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2496073
16/July/18 10:01:28.060 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2481597
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2481323
09/July/18 11:37:22.166 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2478192
16/July/18 12:18:57.851 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2476915
16/July/18 03:00:25.122 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2474114
16/July/18 07:54:01.716 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  758773684      32768     2473662
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20  

Hope It Helps
Prashant Dixit



DBMS_XPLAN and its different options/arguments available

Posted by FatDBA on May 10, 2018

Hi Mates,

I am back after a long time to write about something that is a vital piece of information when you are about to start any troubleshooting task, and yes, that is the EXECUTION PLAN. There are many ways to generate CBO plans (e.g. AUTOTRACE, extended/debug traces, utlxpls.sql, V$SQL_PLAN, a few of the specialized SQL-specific AWR reports like awrsqrpt.sql, STS, etc.), but the most common and best way of doing things on this subject is to use DBMS_XPLAN and its function DISPLAY_CURSOR.

So, yes, today’s topic is to understand what options we have available to generate more interactive, detailed, and elaborate plans.

Okay, we will start with a very rudimentary (BASIC) styled plan and slowly work through the rest of the arguments/options available. The BASIC plan includes the operation, the options, and the object name (table, index, MV, etc.).


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0h79fq6vx4p99',0,'BASIC'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

------------------------------------
| Id  | Operation         | Name   |
------------------------------------
|   0 | SELECT STATEMENT  |        |
|   1 |  TABLE ACCESS FULL| DIXEMP |
------------------------------------
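
DISPLAY_CURSOR takes the SQL_ID and child number of a cursor still in the shared pool; when you don't know them, one quick way (a sketch, assuming the statement was run recently) is to look them up in V$SQL:

```sql
-- Find the SQL_ID and child number of the statement we just executed.
SELECT sql_id, child_number, sql_text
  FROM v$sql
 WHERE sql_text LIKE 'select /*+ GATHER_PLAN_STATISTICS */ * from dixemp%';
```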

Next is the most common way of generating plans, with the DISPLAY function, which allows us to display an execution plan stored in the plan table. First we explain a SQL statement, and then we use the DBMS_XPLAN.DISPLAY function to display its execution plan.


SQL> explain plan for select * from dixemp;
Explained.

SQL> SELECT * FROM TABLE(dbms_xplan.display);


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3980174453

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |    14 |   532 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DIXEMP |    14 |   532 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Next is the ‘ALLSTATS LAST’ option for the FORMAT parameter; let’s see how to use it.
ALLSTATS is a shortcut for IOSTATS and MEMSTATS (e.g. buffer information), and the additional keyword LAST can be specified to see only the statistics for the last execution. Keep in mind that these runtime statistics (Starts, A-Rows, A-Time, Buffers) are collected only when the statement runs with the GATHER_PLAN_STATISTICS hint or with STATISTICS_LEVEL set to ALL.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |     14 |00:00:00.01 |      10 |
--------------------------------------------------------------------------------------

But if you take a look at the plan above, you’ll see it is missing a few vital columns/statistics, such as the cost (COST) and bytes processed (BYTES). Those don’t come by default; you have to add a couple more modifiers, +cost and +bytes, to your FORMAT parameter.

Below is how you can get that missing info from the plan.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0h79fq6vx4p99',format=>'ALLSTATS LAST +cost +bytes'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |       |     3 (100)|     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |   532 |     3   (0)|     14 |00:00:00.01 |      10 |
-----------------------------------------------------------------------------------------------------------

And the ALL format option gives you the general plan plus the rest of the details, like query block names / object aliases, predicate information, and column projection details.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0h79fq6vx4p99',0,'ALL'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |       |       |     3 (100)|          |
|   1 |  TABLE ACCESS FULL| DIXEMP |    14 |   532 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DIXEMP@SEL$1

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "DIXEMP"."EMPNO"[NUMBER,22], "DIXEMP"."ENAME"[VARCHAR2,10],
       "DIXEMP"."JOB"[VARCHAR2,9], "DIXEMP"."MGR"[NUMBER,22],
       "DIXEMP"."HIREDATE"[DATE,7], "DIXEMP"."SAL"[NUMBER,22],
       "DIXEMP"."COMM"[NUMBER,22], "DIXEMP"."DEPTNO"[NUMBER,22]

ALLSTATS on its own (without LAST) is again a shortcut for IOSTATS and MEMSTATS (e.g. buffer information), but the statistics are accumulated over all executions of the cursor, as the Starts and Buffers columns below show.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0h79fq6vx4p99',0,'ALLSTATS'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      7 |        |     98 |00:00:00.01 |      70 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      7 |     14 |     98 |00:00:00.01 |      70 |
--------------------------------------------------------------------------------------

Now, if you want to view additional details of the plan, for example the set of hints needed to reproduce the statement (the outline), you can request them via the FORMAT parameter.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0h79fq6vx4p99',format=>'ALLSTATS LAST +outline'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |     14 |00:00:00.01 |      10 |
--------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "DIXEMP"@"SEL$1")
      END_OUTLINE_DATA
  */

Adding or removing any column/statistic from the plan is possible using a + or – sign followed by the option name.
Example: if you want to view cost and bytes information, use +cost, +bytes in your format; if you want to remove the same information, use -cost, -bytes.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0h79fq6vx4p99',format=>'ALLSTATS LAST +cost +bytes'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |       |     3 (100)|     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |   532 |     3   (0)|     14 |00:00:00.01 |      10 |
-----------------------------------------------------------------------------------------------------------

You can also combine several of these modifiers, as below.


SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(sql_id => '0h79fq6vx4p99', cursor_child_no => 0, FORMAT => 'TYPICAL -ROWS -BYTES +COST +PARALLEL +PARTITION +IOSTATS +MEMSTATS +ALIAS +PEEKED_BINDS +OUTLINE +PREDICATE -PROJECTION +REMOTE +NOTE'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      7 |        |     3 (100)|          |     98 |00:00:00.01 |      70 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      7 |     14 |     3   (0)| 00:00:01 |     98 |00:00:00.01 |      70 |
--------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DIXEMP@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "DIXEMP"@"SEL$1")
      END_OUTLINE_DATA
  */

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

Visualize your database performance statistics using Tableau.

Posted by FatDBA on January 28, 2018

Hi Everyone,

Today’s post is all about producing interactive visualizations of your system/database statistics using one of the most popular BI tools, Tableau.
There are lots of other tools that can be used for this kind of data analysis, e.g. Excel, MicroStrategy Analytics, Domo, and QlikView, but I always find Tableau the easiest way to do such things, especially during database audits, 360 health reviews, and troubleshooting task forces.

Tableau offers a suite of tools that includes online, desktop, and server versions. All of them provide an easy-to-use drag-and-drop interface that can help you quickly turn your data into business insights. Like many other data analytics and visualization tools, Tableau can connect to local or remote data in many different formats.

Okay, after that short introduction, it’s time to do some tests with the tool.
I have divided the process into three steps, discussed below.

First: Data Collection
Collect your AWR reports in TEXT format; they will later be parsed to create a CSV file.
There are many tools/scripts available online to generate multiple AWR reports from your database. Awesome work was already done by FlashDBA (Download); you can use his script to generate AWR reports in batch, and yes, in TEXT format only!

Example:

[oracle@dixitlab AWR]$ ls -ltr
total 12852
-rw-r--r--. 1 oracle oinstall 225031 Jan 27 21:25 awrrpt_1_445_446.txt
-rw-r--r--. 1 oracle oinstall 255010 Jan 27 21:26 awrrpt_1_446_447.txt
-rw-r--r--. 1 oracle oinstall 253217 Jan 27 21:26 awrrpt_1_447_448.txt
-rw-r--r--. 1 oracle oinstall 253217 Jan 27 21:26 awrrpt_1_448_449.txt
-rw-r--r--. 1 oracle oinstall 244229 Jan 27 21:26 awrrpt_1_449_450.txt
........
.................

Second: Data Fold or Compression.
Next you need a tool to fold your AWR reports into a single CSV. Here once again FlashDBA did a marvelous job: he wrote a fabulous script that parses the AWR text files and produces one consolidated CSV.
You can download the script from his GitHub (Download Link)

To run the script, pass the file pattern of your reports and redirect the output to a CSV, as shown below.

[oracle@dixitlab AWR]$ ./awr-parser.sh awr*.txt > tunedbperftests.csv

Info : Parsing file awrrpt_1_445_446.txt at 2018-01-27 21:32:49
Info : Parsing file awrrpt_1_446_447.txt at 2018-01-27 21:32:53
Info : Parsing file awrrpt_1_447_448.txt at 2018-01-27 21:32:56
Info : Parsing file awrrpt_1_448_449.txt at 2018-01-27 21:33:01
Info : Parsing file awrrpt_1_449_450.txt at 2018-01-27 21:33:07
Info : Parsing file awrrpt_1_450_451.txt at 2018-01-27 21:33:15
Info : Parsing file awrrpt_1_451_452.txt at 2018-01-27 21:33:21
....
........
Info : Parsing file awrrpt_1_499_500.txt at 2018-01-27 21:36:56
Info : No more files found
Info :
Info : ______SUMMARY______
Info : Files found : 55
Info : Files processed : 55
Info : Processing errors : 0
Info :
Info : Completed with 0 errors
[oracle@dixitlab AWR]$

With that, the parsing is done and you have the final CSV that we will use to play around in Tableau.
Here are the contents of the parsed file:

Filename	Database Name	Instance Number	Instance Name	Database Version	Cluster	Hostname	Host OS	Num CPUs	Server Memory (GB)	DB Block Size	Begin Snap	Begin Time	End Snap	End Time	Elapsed Time (mins)	DB Time (mins)	Average Active Sessions	Busy Flag	Logical Reads/sec
awrrpt_1_445_446.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	445	1/26/2018 21:57	446	1/26/2018 23:00	62.24	6.33	0.1	N	2629.5
awrrpt_1_446_447.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	446	1/26/2018 23:00	447	1/27/2018 0:00	60.19	12.18	0.2	N	13973.4
awrrpt_1_447_448.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	447	1/27/2018 0:00	448	1/27/2018 1:00	60.15	13.52	0.2	N	14055.8
awrrpt_1_448_449.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	448	1/27/2018 1:00	449	1/27/2018 2:00	60.15	10.13	0.1	N	11597.4
awrrpt_1_449_450.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	449	1/27/2018 2:00	450	1/27/2018 3:00	60.16	0.03	0	N	65.4
awrrpt_1_450_451.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	450	1/27/2018 3:00	451	1/27/2018 4:00	60.12	0.02	0	N	70.3
awrrpt_1_452_453.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	452	1/27/2018 5:00	453	1/27/2018 6:00	60.13	0.69	0	N	189.8
awrrpt_1_453_454.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	453	1/27/2018 6:00	454	1/27/2018 7:00	60.13	2.88	0	N	2439.1
awrrpt_1_454_455.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	454	1/27/2018 7:00	455	1/27/2018 8:00	60.14	12.57	0.2	N	14027.3
awrrpt_1_455_456.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	455	1/27/2018 8:00	456	1/27/2018 9:00	60.14	10.11	0.1	N	13916.6
awrrpt_1_456_457.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	456	1/27/2018 9:00	457	1/27/2018 10:00	60.14	10.26	0.1	N	13941.5
awrrpt_1_457_458.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	457	1/27/2018 10:00	458	1/27/2018 10:50	50.25	18.67	0.3	N	14118.9
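Before loading the CSV into Tableau, you can sanity-check it with a short script (a sketch only; the column names are taken from the sample above, and I recompute AAS as DB time divided by elapsed time to cross-check the parser's own "Average Active Sessions" column):

```python
import csv
import io

def summarize_awr_csv(f):
    """Read an awr-parser.sh style CSV and return per-snapshot AAS
    (DB time / elapsed time) keyed by begin time.
    Column names are assumed from the sample output above."""
    out = {}
    for row in csv.DictReader(f):
        elapsed = float(row["Elapsed Time (mins)"])
        db_time = float(row["DB Time (mins)"])
        out[row["Begin Time"]] = round(db_time / elapsed, 2)
    return out

# Quick check on a two-row sample (values from the table above)
sample = io.StringIO(
    "Begin Time,Elapsed Time (mins),DB Time (mins)\n"
    "1/26/2018 21:57,62.24,6.33\n"
    "1/26/2018 23:00,60.19,12.18\n"
)
print(summarize_awr_csv(sample))
# → {'1/26/2018 21:57': 0.1, '1/26/2018 23:00': 0.2}
```

Both values match the "Average Active Sessions" column in the sample rows, which is a quick way to confirm the parse went cleanly before building charts on top of it.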

Third: Data Representation using Tableau.
Okay, so here we have the final parsed CSV of all those TEXT AWR reports, named ‘tunedbperftests.csv’, and we are ready to play around and learn.

Immediately after launching Tableau you will see a couple of Data Source options on the left. Choose Text as the source type and browse to the CSV to load it.

Next you will see all rows of your data source (tunedbperftests.csv in our case) file.

Next click on Worksheet, your personal area to play.

Tableau then divides the data into two main types: dimensions and measures. Dimensions are usually fields that cannot be aggregated; measures, as the name suggests, are fields that can be measured, aggregated, or used in mathematical operations. Dimension fields are usually used for row or column headings; measures are usually used for plotting or for sizing markers.

             

The next tab is for analytical functions, e.g. if you want to add constants, averages, means, medians, or any reference lines to your graph/chart.

All good; we are now all set to plot performance charts from the historical DB information we collected in the CSV and loaded into Tableau. Let’s plot average hard parses per second, average DB time, average parses, and average transactions against time (the hourly BEGIN TIME as a measure).

So, using the area graph above, you have plotted the average metric usage on the database over the time period.

Next, I will visualize one of the most prominent DB wait events observed during the probe (data collection) period, ‘DPR’ or ‘direct path read’, and plot it as a bar graph against time (hourly BEGIN TIME).

Some more stats visualisations, this time ‘top 5 waits‘ and their hourly frequency.

There are lots of other things you can do with your statistics: you can plot your data as squares, side-by-side circles, polygons, pie charts, Gantt bars, line graphs, area graphs, box-and-whisker plots, highlight tables, and many more.

So, use your imagination: you can visualize almost any database statistic using Tableau!

Questions are welcome. Happy reading! 🙂 🙂

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »

AAS or Average Number Of Active Sessions – The first thing to look in an AWR & its Uses.

Posted by FatDBA on January 27, 2018

Hi All,

Today’s post answers the question ‘What is the very first thing one should look for in an AWR report?‘. I have been asked many times which statistic I prefer to look at first when troubleshooting a performance problem, so I thought of answering it in this post with some real-world examples.

And the answer is ‘AAS‘, or ‘Average Number of Active Sessions’: it is the first thing I always look at while reading an AWR report, because it gives you a quick idea of how busy the system is and how much work is happening.

Okay, so first let’s understand what an ‘Active Session’ is: in simple words, it is a session currently spending time in the database (i.e. from v$session where status=’ACTIVE’).
Now, what exactly is AAS? It is the rate of change of DB time over clock time. The value of this metric is calculated with the standard formula DB Time / Elapsed Time.

Let’s calculate the value for one of my systems.

Host Name	        Platform	       CPUs   
dixitLab1.fatdba.com	Linux x86 64-bit	16

AAS in this case: 1024.72 minutes of DB time / 60.04 minutes of elapsed time => 17.07 average active sessions during the one-hour snap interval.

Let’s further decode the magic figures of AAS.
We always use the CPU count as the yardstick when interpreting AAS. A few rules of thumb for the comparison are given below.
– If AAS is higher than the number of CPUs you have, there is a problem. In the example above we have an AAS of 17 against a CPU count of 16, so we could have performance problems and should investigate.
– If the value is far higher than the number of CPUs, there is a choke point in the database.
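The arithmetic above can be sketched as a tiny helper (an illustration only; the figures 1024.72 DB-time minutes, 60.04 elapsed minutes, and 16 CPUs come from the example in this post):

```python
def aas(db_time_mins, elapsed_mins):
    """Average Active Sessions: DB time divided by wall-clock elapsed time."""
    return db_time_mins / elapsed_mins

# Figures from the example above: 1024.72 mins of DB time in a 60.04 min snap
value = aas(1024.72, 60.04)
print(round(value, 2))   # → 17.07 average active sessions

# Rule of thumb: compare AAS against the CPU count (16 in this example)
cpus = 16
print("needs investigation" if value > cpus else "looks ok")   # → needs investigation
```

The same helper works for RAC: sum the DB time across instances for the interval and compare the result against the total CPU count of all nodes.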

You can also use AAS as one of the axes of your graphs, lines, and Manhattan charts, comparing it with CPU consumption to quickly pinpoint the pain areas and time slots. Let’s understand its use through a scenario.

Assume that one fine day you get a call from the monitoring team: they have observed huge spikes in system resource usage, and many of the metrics on the dashboard are red. As usual, lots of fingers and eyes start pointing at you and the DBA team.

Now, as the DBA, you quickly generate an AWR report for that specific time frame to understand the system’s behavior, and you observe a huge workload on the database: an AAS of 305 (on a 2-node RAC database with 128 CPUs in total), with big peaks for application-class waits (e.g. row lock contention), user-I/O waits (e.g. db file sequential read, read by other session), and some network-class waits (e.g. SQL*Net message from dblink).

Now you want to understand the trend of the wait classes in the database over the last few days. Here you can use the DBA_HIST_ACTIVE_SESSION_HISTORY view to collect historical statistics, which you can then use to plot charts in Excel, Tableau, etc.

I collected similar stats from the ASH view and plotted a graph with a few of my data representation tools to understand this transient variation in system performance.

Here you see a sudden spike in DB wait classes (especially User I/O, Cluster, Application, and Network) on March 5th, with the average number of active sessions (AAS) stacked across both RAC nodes close to 305, which, compared with the total number of CPUs (64+64=128), is extremely high.

After further investigation you find that the application-class wait ‘enq: TX – row lock contention‘ is the primary cause of this high system resource utilization.

The graph below shows AAS waiting on the application-class event ‘enq: TX – row lock contention’ per instance, where we can see the same thing happening: a steady baseline, then a sudden rise in row locking contention.

You have also identified the major sources contributing to this row locking over the probe period of the last 7 days. You can join dba_hist_active_sess_history with dba_hist_snapshot to get this historical information; read my previous article on how to pull such history from the AWR repository.

Now that you have narrowed down the problem and identified the problematic SQLs and their contributions, you can start the query optimization/tuning to fix the issue.
There are lots of other data representations you can build using AAS as one of the graph axes, e.g. AAS on CPU versus the top wait events; I will discuss those in further posts.

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | 2 Comments »

Parse CPU to Parse Elapsd % – Lets clear the Perplex!

Posted by FatDBA on January 26, 2018

Happy Republic Day!

 

Hi Mates,

I see a lot of confusion and mix-ups among DBAs about a few of the metrics under the ‘Instance Efficiency %’ section of AWR reports, especially the metric “Parse CPU to Parse Elapsd %”. In today’s post I will try to explain this metric in detail, which will help you understand it in depth and clear the muddiness.

Few of the lines you might have read about this metric i.e.
“we should always look for as low as possible numbers for this metric …”
“Try to achieve the impossible value of zero for this one “

– Even some of the MetaLink notes on the subject are misleading.

Scenario:

Below is the snippet from one of my test box.

Note: Judging an instance from the efficiency ratios alone can be very dangerous; I advise starting with the load profile, the top 5 waits, and so on …

Okay, so the real meaning of this statistic differs from what you might judge from its name: it signals delay/waiting during the parsing of SQL statements over the snap interval.
In our example the value is 1.37%, which means that for every CPU second spent parsing, we spent about 72.99 (100/1.37) seconds of wall-clock time. That can happen for various reasons, e.g. latch waits or other contention between sessions.

The ideal value for this statistic, like the rest of the ratios, is 100% (yes, that’s correct!).

Let’s see how this value was calculated and what goes into that final figure of 1.37%.
It is simply “parse time cpu / parse time elapsed * 100”.

 

Statistic	                              Total	per Second     per Trans
------------------------------------------------------------------------------------------------
parse time cpu                                  398	      0.11	0.01
parse time elapsed                           29,055	      8.07	0.81

 

So, in short, every drop in this metric below 100% means the database was waiting on something that slowed down parse times.
If you want to dig deeper, trace the session with a 10046 trace at level 8 to see where the other percentage of parse time is being spent; I will leave that for readers to test.
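The calculation can be verified in a couple of lines (an illustration only; the totals 398 and 29,055 are the ‘parse time cpu’ and ‘parse time elapsed’ figures from the statistics table above):

```python
def parse_cpu_to_parse_elapsed_pct(parse_time_cpu, parse_time_elapsed):
    """Instance-efficiency ratio: parse time cpu / parse time elapsed * 100."""
    return parse_time_cpu / parse_time_elapsed * 100

# Totals from the statistics section above
pct = parse_cpu_to_parse_elapsed_pct(398, 29055)
print(round(pct, 2))        # → 1.37 (percent)
print(round(100 / pct, 2))  # → 73.0 clock seconds per CPU second of parsing
```

So the 1.37% figure and the "about 73 seconds of clock time per CPU second" statement are two views of the same ratio.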

 

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »

Shared Pool Management in 12c, What’s new!

Posted by FatDBA on January 17, 2018

Hi Everyone,

A few days ago, while working on a shared pool issue where we were getting ORA-4031 on an 11gR2 database, I discovered something interesting and new about SGA duration management. I won’t discuss the problem itself or how we fixed it; instead I will show the architectural change in 12c that can now prevent these errors/issues.

Let me explain what ‘durations’ are first. The shared pool is made up of a number of granules. If the SGA is large enough, the shared pool is split into sub-pools, each consisting of a number of non-overlapping granules. In 11g, each sub-pool is further split into four sub-sub-pools known as durations.

What was there before 12c arrived
Starting with Oracle 10g, each sub-pool in the SGA was divided into four durations.
Let’s check the distribution by generating a heap dump of the shared pool; here I used oradebug with level 2 (this gives you the full SGA summary, or you can try level 2050 to get the full summary with contents).

SQL> oradebug setmypid
Statement processed.
SQL> oradebug dump heapdump 2
Statement processed.
SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8127.trc

.....
******************************************************
HEAP DUMP heap name="sga heap(1,0)"  desc=380030610
Total heap size    =218102664
Total free space   =  1066928
Total reserved free space   =  8439520
Unpinned space     = 38812528  rcr=11971 trn=17906
Permanent space    =208595160
HEAP DUMP heap name="sga heap(1,1)"  desc=380031e68
Total heap size    = 67108512
Total free space   =  2912528
Total reserved free space   =  1382816
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(1,2)"  desc=3800336c0
Total heap size    =167771280
Total free space   = 92743480
Total reserved free space   =  3852856
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(1,3)"  desc=380034f18
Total heap size    =268434048
Total free space   = 74547592
Total reserved free space   = 13497472
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(2,0)"  desc=380039e38
Total heap size    =201325536
Total free space   =    17200
Total reserved free space   =  8435920
Unpinned space     = 26474112  rcr=7934 trn=8094
Permanent space    =192871456
HEAP DUMP heap name="sga heap(2,1)"  desc=38003b690
Total heap size    = 83885640
Total free space   = 48723768
Total reserved free space   =  1035792
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(2,2)"  desc=38003cee8
Total heap size    =369096816
Total free space   =258674312
Total reserved free space   = 16982464
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(2,3)"  desc=38003e740
Total heap size    =218102664
Total free space   = 17202608
Total reserved free space   = 10966696
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(3,0)"  desc=380043660
Total heap size    =184548408
Total free space   =    13008
Total reserved free space   =  5061928
Unpinned space     = 26943408  rcr=4930 trn=9425
Permanent space    =179472608
HEAP DUMP heap name="sga heap(3,1)"  desc=380044eb8
Total heap size    = 67108512
Total free space   = 27568352
Total reserved free space   =     4744
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(3,2)"  desc=380046710
Total heap size    =352319688
Total free space   =233302736
Total reserved free space   = 15981216
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
HEAP DUMP heap name="sga heap(3,3)"  desc=380047f68
Total heap size    =385873944
Total free space   =143746536
Total reserved free space   = 19402616
Unpinned space     =        0  rcr=0 trn=0
Permanent space    =        0
.....
******************************************************

So the stats above show three sub-pools of the SGA heap, sga heap(1,0)…(1,3), (2,0)…(2,3), and (3,0)…(3,3), with four durations each, and every duration has its own size, free space, and reserved free space. This kind of partitioning can cause ORA-4031 even when there is plenty of free space in the other durations, and that was exactly the cause in my earlier case, though here we won’t discuss how we fixed it.
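If you need to compare free space across the many durations, the summary lines can be scraped from the trace file with a short script (a sketch only; the line format is assumed from the 11g heap dump shown above):

```python
import re

# Patterns for the "HEAP DUMP" summary lines as they appear in the trace above
HEAP = re.compile(r'HEAP DUMP heap name="sga heap\((\d+),(\d+)\)"')
FREE = re.compile(r'Total free space\s*=\s*(\d+)')

def free_space_by_duration(lines):
    """Return {(subpool, duration): total_free_bytes} from heapdump trace lines."""
    out, key = {}, None
    for line in lines:
        m = HEAP.search(line)
        if m:
            key = (int(m.group(1)), int(m.group(2)))
            continue
        m = FREE.search(line)
        if m and key is not None:
            out[key] = int(m.group(1))
            key = None  # only take the first "Total free space" per heap
    return out

trace = [
    'HEAP DUMP heap name="sga heap(1,0)"  desc=380030610',
    'Total free space   =  1066928',
    'HEAP DUMP heap name="sga heap(1,3)"  desc=380034f18',
    'Total free space   = 74547592',
]
print(free_space_by_duration(trace))   # → {(1, 0): 1066928, (1, 3): 74547592}
```

Run over the full trace, a table like this makes it obvious when one duration is nearly exhausted while its siblings still hold tens of megabytes free, which is precisely the ORA-4031 pattern described above.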

Okay, so now let’s do the same on a 12c database: generate the heap dump for the SGA and see the distribution of durations there. Once again we will use oradebug to dump a heapdump at level 2.

SQL> oradebug setmypid
Statement processed.
SQL> oradebug dump heapdump 2
Statement processed.
SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/tunedb/tunedb/trace/tunedb_ora_11054.trc


******************************************************
HEAP DUMP heap name="sga heap(1,0)"  desc=0x60103678
 extent sz=0xfe0 alt=304 het=32767 rec=9 flg=0x82 opc=0
 parent=(nil) owner=(nil) nex=(nil) xsz=0x1000000 heap=(nil)
 fl2=0x24, nex=(nil), idx=1, dsxvers=1, dsxflg=0x0
 dsx first ext=0x73000000
 dsx empty ext bytes=0  subheap rc link=0x730000c0,0x730000c0

******************************************************
HEAP DUMP heap name="sga heap(1,3)"  desc=0x60107f80
 extent sz=0xfe0 alt=304 het=32767 rec=9 flg=0x82 opc=0
 parent=(nil) owner=(nil) nex=(nil) xsz=0x1000000 heap=(nil)
 fl2=0x24, nex=(nil), idx=1, dsxvers=1, dsxflg=0x0
 dsx first ext=0x7e000000
 dsx empty ext bytes=0  subheap rc link=0x7e0000c0,0x7e0000c0

Alright, so here each sub-pool has only two durations, ‘sga heap(1,0)’ and ‘sga heap(1,3)’, which improves shareability of shared pool memory and helps avoid ORA-4031 errors.

Hope It helps
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

How to troubleshoot, understand HW events & measure performance using PERF (Linux Profiler)! – Part 2

Posted by FatDBA on January 13, 2018

Repeat: I think I’ve mistakenly deleted the post, so re-posting the same.

Hey Folks,

I’m back with the second edition of my previous post on troubleshooting performance issues using the Linux built-in profiler ‘perf’. Many users requested that I write about some use cases: when and how to use this tool.

So, this post is all about discussing some cases that I have faced while working on a few performance tuning projects, and a few of them are from tests that I’ve performed on my lab systems.

Okay, coming back to the reason why we need this – perf, I mean?
The answer is that sometimes the Oracle wait interface is not enough and you need to dig deeper into the system to understand the problem. At that point you have to use a dedicated third-party or built-in tool for the performance investigation, and there perf can help you understand what your resource-intensive query is doing at the OS layer.
So, using the tool you can monitor your process and see what it’s doing!

In case you are using perf on a virtualized system, you might get the error “perf.data file has no samples“.
To fix it, add the “-e cpu-clock” argument to collect the sample data and then interpret the file.

Okay, so one fine day you see a spike in the server’s CPU consumption via top, oratop, OEM, scheduled scripts or any other monitoring technique, and you identify the process and its other attributes.
Using the PID you reach the SID, SQL_ID, SQL_TEXT and other statistics. Now you might want to analyze and understand Oracle’s execution.
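As a quick sketch of that first identification step, the top CPU-consuming PIDs can be spotted with a portable shell pipeline before attaching perf (the column list is one choice of many; on Linux, `ps -eo pid,pcpu,comm --sort=-pcpu` gives the same ordering directly):

```shell
# List processes sorted by CPU usage, highest first (top 5 lines)
ps -eo pid,pcpu,comm | sort -k2 -rn | head -n 5
```

From the PID shown here you would then join v$process/v$session inside the database to get the SID and SQL_ID, as described above.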

 
Global Information
------------------------------
 Status              :  EXECUTING
 Instance ID         :  1
 Session             :  DIXIT (1:53089)
 SQL ID              :  71aa5ju8pwtf2
 SQL Execution ID    :  16777216
 Execution Started   :  01/09/2018 06:10:32
 First Refresh Time  :  01/09/2018 06:10:32
 Last Refresh Time   :  01/09/2018 06:13:03
 Duration            :  152s
 Module/Action       :  SQL*Plus/-
 Service             :  SYS$USERS
 Program             :  sqlplus@dixitlab.localdomain (TNS V1-V3)

Global Stats
========================================================
| Elapsed |   Cpu   |    IO    | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Gets  | Reqs | Bytes |
========================================================
|     163 |     150 |      115 |     5M | 648K |   5GB |
========================================================

SQL Plan Monitoring Details (Plan Hash Value=12102956)
===================================================================================================================================================================
| Id   |      Operation       | Name |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |       Activity Detail       | Progress |
|      |                      |      | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |         (# samples)         |          |
===================================================================================================================================================================
| -> 0 | SELECT STATEMENT     |      |         |       |       150 |     +2 |     1 |        0 |      |       |          |                             |          |
| -> 1 |   SORT AGGREGATE     |      |       1 |       |       150 |     +2 |     1 |        0 |      |       |          |                             |          |
| -> 2 |    TABLE ACCESS FULL | T294 |   96523 | 56397 |       151 |     +1 |     1 |      306 | 648K |   5GB |    99.34 | Cpu (69)                    |      31% |
|      |                      |      |         |       |           |        |       |          |      |       |          | db file sequential read (6) |          |
|      |                      |      |         |       |           |        |       |          |      |       |          | direct path read (126)      |          |
===================================================================================================================================================================

The above results point to high CPU time (150 seconds). Now let’s look deeper into things to understand what exactly the session is doing.

[root@dixitlab ~]#  perf top -e cpu-clock -p 3505 

   PerfTop:     349 irqs/sec  kernel:42.7%  exact:  0.0% [1000Hz cpu-clock],  (target_pid: 3505)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

             samples  pcnt function                    DSO
             _______ _____ ___________________________ __________________________________________________

             1981.00 46.2% _raw_spin_unlock_irqrestore [kernel.kallsyms]
              906.00 21.1% _intel_fast_memcmp          /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
              270.00  6.3% kole_simple_string_match    /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle 
               96.00  2.2% copy_user_generic_unrolled  [kernel.kallsyms]
               96.00  2.2% kcbgtcr                     /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               54.00  1.3% __intel_new_memset          /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle 
               37.00  0.9% __intel_ssse3_rep_memcpy    /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               30.00  0.7% kghfrf                      /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               28.00  0.7% kghalf                      /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               27.00  0.6% kcbldio                     /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               24.00  0.6% kdxbrs1                     /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               24.00  0.6% kspgvc                      /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
               21.00  0.5% kksMapCursor                /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle



The results show the system was mostly busy processing kernel calls ([kernel.kallsyms]), with the function ‘_raw_spin_unlock_irqrestore‘ accounting for 46% of the CPU samples. It shows up with a huge overhead and a large sample count, and in general irq_restore appears because re-enabling interrupts is costly; but this is not real CPU consumption – it is an artifact of how the tool, interrupts and the system work in some virtual systems. I was testing it on my lab VM.

Here I would like to thank Tanel Poder for reviewing the document and highlighting this issue with the perf top output when executed inside a VM environment.

…… perf top that concluded _raw_spin_unlock_irqrestore taking lots of CPU time. I’ve been through this myself in past – and this is likely not actual CPU usage but rather how perf, OS and interrupts work in some VMs (basically measurement bias/errors by these tools). I assume that you tested in a VM? ……..

Next in the list is the function ‘_intel_fast_memcmp‘, called by Oracle, with 21.1% – an optimized routine for fast memory comparisons.

So the conclusion of the analysis:
Most of the process’s CPU time was spent processing kernel calls and on fast memory comparisons.

Purpose revisited: this gives you a glimpse of what happens underneath a process’s calls, and shows how a CPU-bound database process passes its time.

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , | Leave a Comment »

Using Flame Graphs to analyze performance & workloads: Part 1

Posted by FatDBA on January 13, 2018

Hi Guys,

Now, after my last two posts on the perf tool/profiler, I guess this is the right time to move ahead with the interpretation of stack trace data.
So I’m back with another post, this time, as promised, about ‘Flame Graphs’!
I would like to start with the very standard question – what are Flame Graphs, and why use them?

Flame Graphs are a tool developed by kernel/system performance maestro Brendan Gregg to project or visualize stack traces – both user/all-process and kernel-level stacks – for CPU, Off-CPU analysis etc. You can use it on stacks/traces collected by various profilers, e.g. perf, DTrace, SystemTap.

In this post we will use perf for statistics collection, project the results using Flame Graphs, and explore the complexity and code paths of many of Oracle’s internal functions. This being the very first edition on the subject, we will start with the basics: CPU graphs, understanding why and where CPUs are busy using stack traces, and identifying hot code paths. This can be a really helpful tool for fast identification of performance problems where conventional Oracle performance tools fail.

Okay, let’s jump into how you do it. First you have to download the toolkit (Download Link).
The tool generates results in SVG format using the three steps below.

  • Stats Collection using any of the profiler e.g. Perf, Dtrace, Systemtap etc.
  • Compress or Fold your stats
  • Finally use the core (flamegraph.pl) script to generate the SVG file.

Okay, so now after enough description of the tool, let’s start our first case – “CPU Flame Graphs for an expensive SQL statement”.

Scenario: We have a resource-intensive query – a COUNT on one of the big tables – running from a session with SPID 19280.
So, we will collect the process’s stack traces while the SQL statement is running.

SQL> explain plan for select count(*) from dwh.**********_*** where ****TIMESTAMP > '23-JUN-16' and ******KEY > 789999999;

Explained.

SQL> SELECT * FROM TABLE(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1112565023

------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |     1 |    14 |  2604K  (1)| 08:40:53 |
|   1 |  SORT AGGREGATE    |                     |     1 |    14 |            |          |
|*  2 |   TABLE ACCESS FULL| **************_**** |    39M|   524M|  2604K  (1)| 08:40:53 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("*****KEY">789999999 AND "*****TIMESTAMP">'23-JUN-16')

14 rows selected.

SQL>

Now collecting perf statistics for the process at a sampling frequency of 997 Hertz.

[root@dselimw5862 perftest]# perf record -a -g -F997 -p 19280 

[ perf record: Woken up 3926 times to write data ]
[ perf record: Captured and wrote 981.809 MB perf.data (9751081 samples) ]

This will create the perf.data file under the same directory.

[root@dselimw5862 perftest]# ls -ltrh
-rw-------. 1 root root 982M Jan 12 14:58 perf.data

Next, collapse (fold) the stacks: perf script reads perf.data from the current directory, and stackcollapse-perf.pl folds its output into the one-line-per-stack format the graphing script expects.

[root@dselimw5862 perftest]# perf script | /root/perftest/FlameGraph-master/stackcollapse-perf.pl > out.folded 

Finally, it’s time to render the SVG file from the folded stacks.

[root@dselimw5862 perftest]# /root/perftest/FlameGraph-master/flamegraph.pl --title "Flame Graph: Selective data search on dwh.W6AUDITHISTORY_FULL Table" out.folded > ImageAuditHistory.svg 

Let’s open the SVG file in a browser and see what it’s got!

Important Note: The x-axis reflects stack profile population, and the y-axis represents the stack depth
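This becomes concrete in the folded file that stackcollapse-perf.pl emits: one line per unique stack (frames joined by semicolons, root first) followed by its sample count, and flamegraph.pl draws each frame’s width in proportion to those counts. A tiny hand-made sketch – the stack contents and counts here are invented for illustration:

```shell
# A minimal folded-stacks file: semicolon-joined stack, space, sample count
cat > /tmp/demo.folded <<'EOF'
main;opimai_real;opidrv;opitsk 12
main;opimai_real;opidrv;opiodr;kdsttgr 87
main;opimai_real;opidrv;opiodr;kdsttgr;kdstf0100101000km 251
EOF

# Rank stacks by sample count, hottest first - these become the widest boxes
sort -k2 -rn /tmp/demo.folded
```

The widest (most-sampled) stack ends in the table-scan row-fetch function, which is exactly the kind of hot path the rendered graph makes obvious at a glance.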

Here you will see many familiar Oracle-specific functions/processes: opiodr, opitsk, opiino, opidrv etc.
Okay, so if you take a look at the map, you will see a few Linux functions spawned first, which called Oracle internal functions, which moved on to the query execution (i.e. SELECT FETCH -> GROUP BY SORT -> TABLE ACCESS) and then to other functions. For example, you can see that execution time during the sampling was spent in the kdstf0100101000km function, which was called by kdsttgr – Kernel Data Scan Table Get Row.

So, now the question: which function is on CPU the most?
The top edge shows what is on CPU directly: the copy_user_generic_unrolled function, which is used when there is no optimization available at the CPU level for the copy.
Remember, Flame Graphs follow ancestry: copy_user_generic_unrolled was called by __pread_nocancel, __pread_nocancel was called by ksfd_skgfqio, and so on down the stack.

Next, look a little above the bottom of the graph at the three functions __libc_start_main, __pread_nocancel and kaf4reasrp1km. If we compare them visually, it is clear that __pread_nocancel was running more often than the other two. This Linux function was sampled a lot during the query execution and appears twice in the graph; it is used by libc when a cancellation point (a POSIX threads concept) is not permitted.

In short: Flame Graphs can help when the problem in hand is ‘my production database has poor performance’.
The database was a heavy CPU consumer, so I used a CPU profiler to see WHY – but reading those lengthy, messy raw traces is not easy. At that point you need a data representation tool to capture the problems and their sources and to visualize them, and here Flame Graphs come into the picture – the hero! The SVG is also interactive: move your cursor over any object to see what percentage of the sample time that object was present for.

Well, in later editions on this subject I will try to cover more scenarios. The sole purpose of this post is to give the audience an idea, a gist of the topic.

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

How to troubleshoot, understand HW events & measure performance using PERF (Linux Profiler)! – Part 1

Posted by FatDBA on January 8, 2018

Hi Pals,

Some time back I was part of a performance task force for one of our customers in the Czech Republic, where I did a complete 360-degree audit and troubleshooting of their full database stack. But you know, most of the time it is not just the software or the database – the real problem lies underneath, in the OS layer.

Okay, so back to the discussion. During the assignment, at one point I had to collect OS statistics to prove my point that the ‘problem is not with the database‘ but with something else, and I used a few tools like DTrace, the Linux Trace Toolkit (LTT), SystemTap, and one profiler which I had hardly used before: ‘perf‘.

Yes, so today’s topic of discussion is to give you some basic idea (I will cover deeper analysis using the tool in the future) of the ‘perf’ profiler, which helps to dissect and understand hardware events and measure performance.

Okay, let me take a real-time scenario where a sudden surge in CPU in a production environment raised questions.
During the initial probing we found that about 25% of the CPU was consumed by a SQL*Plus session with PID 6720; let’s see what it is doing, and why ….

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6720 oracle    20   0  113m  14m 9.8m R 25.8  0.5   0:20.32 sqlplus   as sysdba

Let’s first sample on-CPU functions for PID 6720 at 99 Hertz using the command below.

perf record -F 99 -p 6720 

This will produce a raw file named ‘perf.data’ under the directory from which you called it.
Let it run for some time; you can stop the collection by pressing CTRL+C.

[root@dixitlab perf]# ls -ltrh
total 12K
-rw-------. 1 root root 12K Jan  8 01:14 perf.data

Now let’s see what’s inside the raw file – let’s dump the raw contents of perf.data as hex for debugging.
For this we will use perf script with the -D flag; it automatically picks up the perf.data file present in the current directory.


[root@dixitlab perf]# perf script -D

0x110 [0x48]: event: 1
.
. ... raw event: size 72 bytes
.  0000:  01 00 00 00 01 00 48 00 ff ff ff ff 00 00 00 00  ......H.........
.  0010:  00 00 00 00 00 00 00 00 ff ff ff 9f ff ff ff ff  ................
.  0020:  00 00 00 81 ff ff ff ff 5b 6b 65 72 6e 65 6c 2e  ........[kernel.
.  0030:  6b 61 6c 6c 73 79 6d 73 5d 5f 74 65 78 74 00 00  kallsyms]_text..
.  0040:  00 00 00 00 00 00 00 00                          ........
.
0x110 [0x48]: PERF_RECORD_MMAP -1/0: [0(0xffffffff9fffffff) @ 0xffffffff81000000]: [kernel.kallsyms]_text

0x158 [0x78]: event: 1
.
. ... raw event: size 120 bytes
.  0000:  01 00 00 00 01 00 78 00 ff ff ff ff 00 00 00 00  ......x.........
.  0010:  00 00 00 a0 ff ff ff ff ff bf 01 00 00 00 00 00  ................
.  0020:  00 00 00 00 00 00 00 00 2f 6c 69 62 2f 6d 6f 64  ......../lib/mod
.  0030:  75 6c 65 73 2f 32 2e 36 2e 33 39 2d 34 30 30 2e  ules/2.6.39-400.
.  0040:  32 39 37 2e 33 2e 65 6c 36 75 65 6b 2e 78 38 36  297.3.el6uek.x86
.  0050:  5f 36 34 2f 6b 65 72 6e 65 6c 2f 64 72 69 76 65  _64/kernel/drive
.  0060:  72 73 2f 6d 64 2f 64 6d 2d 6d 6f 64 2e 6b 6f 00  rs/md/dm-mod.ko.
.  0070:  00 00 00 00 00 00 00 00                          ........
.


0x2bf8 [0x68]: PERF_RECORD_MMAP 4387/4387: [0x7f6062943000(0x4c000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libons.so

0x2c60 [0x68]: event: 1
.
. ... raw event: size 104 bytes
.  0000:  01 00 00 00 02 00 68 00 23 11 00 00 23 11 00 00  ......h.#...#...
.  0010:  00 80 ba 62 60 7f 00 00 00 90 4d 00 00 00 00 00  ...b`.....M.....
.  0020:  00 00 00 00 00 00 00 00 2f 75 30 31 2f 61 70 70  ......../u01/app
.  0030:  2f 6f 72 61 63 6c 65 2f 70 72 6f 64 75 63 74 2f  /oracle/product/
.  0040:  31 32 2e 32 2e 30 2f 64 62 68 6f 6d 65 5f 31 2f  12.2.0/dbhome_1/
.  0050:  6c 69 62 2f 6c 69 62 6e 6e 7a 31 32 2e 73 6f 00  lib/libnnz12.so.
.  0060:  00 00 00 00 00 00 00 00                          ........
.
0x2c60 [0x68]: PERF_RECORD_MMAP 4387/4387: [0x7f6062ba8000(0x4d9000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libnnz12.so

0x2cc8 [0x68]: event: 1
.
. ... raw event: size 104 bytes
.  0000:  01 00 00 00 02 00 68 00 23 11 00 00 23 11 00 00  ......h.#...#...
.  0010:  00 10 2f 63 60 7f 00 00 00 c0 22 00 00 00 00 00  ../c`.....".....
.  0020:  00 00 00 00 00 00 00 00 2f 75 30 31 2f 61 70 70  ......../u01/app
.  0030:  2f 6f 72 61 63 6c 65 2f 70 72 6f 64 75 63 74 2f  /oracle/product/
.  0040:  31 32 2e 32 2e 30 2f 64 62 68 6f 6d 65 5f 31 2f  12.2.0/dbhome_1/
.  0050:  6c 69 62 2f 6c 69 62 69 70 63 31 2e 73 6f 00 00  lib/libipc1.so..
.  0060:  00 00 00 00 00 00 00 00                          ........
.
0x2cc8 [0x68]: PERF_RECORD_MMAP 4387/4387: [0x7f60632f1000(0x22c000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libipc1.so

0x2d30 [0x68]: event: 1
.
. ... raw event: size 104 bytes
.  0000:  01 00 00 00 02 00 68 00 23 11 00 00 23 11 00 00  ......h.#...#...
.  0010:  00 50 72 63 60 7f 00 00 00 20 07 00 00 00 00 00  .Prc`.... ......
.  0020:  00 00 00 00 00 00 00 00 2f 75 30 31 2f 61 70 70  ......../u01/app
.  0030:  2f 6f 72 61 63 6c 65 2f 70 72 6f 64 75 63 74 2f  /oracle/product/
.  0040:  31 32 2e 32 2e 30 2f 64 62 68 6f 6d 65 5f 31 2f  12.2.0/dbhome_1/
.  0050:  6c 69 62 2f 6c 69 62 6d 71 6c 31 2e 73 6f 00 00  lib/libmql1.so..
.  0060:  00 00 00 00 00 00 00 00                          ........
.
0x2d30 [0x68]: PERF_RECORD_MMAP 4387/4387: [0x7f6063725000(0x72000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libmql1.so

0x2d98 [0x78]: event: 1
.
. ... raw event: size 120 bytes
.  0000:  01 00 00 00 02 00 78 00 23 11 00 00 23 11 00 00  ......x.#...#...
.  0010:  00 c0 99 63 60 7f 00 00 00 90 3a 00 00 00 00 00  ...c`.....:.....
.  0020:  00 00 00 00 00 00 00 00 2f 75 30 31 2f 61 70 70  ......../u01/app
.  0030:  2f 6f 72 61 63 6c 65 2f 70 72 6f 64 75 63 74 2f  /oracle/product/
.  0040:  31 32 2e 32 2e 30 2f 64 62 68 6f 6d 65 5f 31 2f  12.2.0/dbhome_1/
.  0050:  6c 69 62 2f 6c 69 62 63 6c 6e 74 73 68 63 6f 72  lib/libclntshcor
.  0060:  65 2e 73 6f 2e 31 32 2e 31 00 00 00 00 00 00 00  e.so.12.1.......
.  0070:  00 00 00 00 00 00 00 00                          ........
.
0x2d98 [0x78]: PERF_RECORD_MMAP 4387/4387: [0x7f606399c000(0x3a9000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libclntshcore.so.12.1

0x2e10 [0x70]: event: 1
.
. ... raw event: size 112 bytes
.  0000:  01 00 00 00 02 00 70 00 23 11 00 00 23 11 00 00  ......p.#...#...
.  0010:  00 a0 f6 63 60 7f 00 00 00 30 65 03 00 00 00 00  ...c`....0e.....
.  0020:  00 00 00 00 00 00 00 00 2f 75 30 31 2f 61 70 70  ......../u01/app
.  0030:  2f 6f 72 61 63 6c 65 2f 70 72 6f 64 75 63 74 2f  /oracle/product/
.  0040:  31 32 2e 32 2e 30 2f 64 62 68 6f 6d 65 5f 31 2f  12.2.0/dbhome_1/
.  0050:  6c 69 62 2f 6c 69 62 63 6c 6e 74 73 68 2e 73 6f  lib/libclntsh.so
.  0060:  2e 31 32 2e 31 00 32 2e 00 00 00 00 00 00 00 00  .12.1.2.........
.
0x2e10 [0x70]: PERF_RECORD_MMAP 4387/4387: [0x7f6063f6a000(0x3653000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libclntsh.so.12.1

0x2e80 [0x70]: event: 1
.
. ... raw event: size 112 bytes
.  0000:  01 00 00 00 02 00 70 00 23 11 00 00 23 11 00 00  ......p.#...#...
.  0010:  00 20 a1 67 60 7f 00 00 00 40 0e 00 00 00 00 00  . .g`....@......
.  0020:  00 00 00 00 00 00 00 00 2f 75 30 31 2f 61 70 70  ......../u01/app
.  0030:  2f 6f 72 61 63 6c 65 2f 70 72 6f 64 75 63 74 2f  /oracle/product/
.  0040:  31 32 2e 32 2e 30 2f 64 62 68 6f 6d 65 5f 31 2f  12.2.0/dbhome_1/
.  0050:  6c 69 62 2f 6c 69 62 73 71 6c 70 6c 75 73 2e 73  lib/libsqlplus.s
.  0060:  6f 00 32 2e 31 00 32 2e 00 00 00 00 00 00 00 00  o.2.1.2.........
.
0x2e80 [0x70]: PERF_RECORD_MMAP 4387/4387: [0x7f6067a12000(0xe4000) @ 0]: /u01/app/oracle/product/12.2.0/dbhome_1/lib/libsqlplus.so

0x2ef0 [0x38]: event: 1
.
. ... raw event: size 56 bytes
.  0000:  01 00 00 00 02 00 38 00 23 11 00 00 23 11 00 00  ......8.#...#...
.  0010:  00 90 8f 09 ff 7f 00 00 00 10 00 00 00 00 00 00  ................
.  0020:  00 00 00 00 00 00 00 00 5b 76 64 73 6f 5d 00 70  ........[vdso].p
.  0030:  00 00 00 00 00 00 00 00                          ........
.

Now let’s get back to the point where we had the process (PID 6720) consuming the most CPU resources.

Now profiling the CPUs at a sample rate of 99 Hertz: -a for all CPUs, -g for stack traces, and a sleep of 60 seconds for the duration of the run.

[root@dixitlab perf]# perf record -F 99 -a -g -- sleep 60 
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.388 MB perf.data (~16948 samples) ]

Now, as discussed above, this will generate a new perf.data under the directory (the previous file is renamed perf.data.old).

[root@dixitlab perf]# ls -ltrh
total 768K
-rw-------. 1 root root 379K Jan  8 01:30 perf.data.old
-rw-------. 1 root root 386K Jan  8 01:31 perf.data

Next, we will interpret the results from the newly created ‘perf.data’ file.
I have used ‘perf report -n --stdio’; you can simply use ‘perf report’ too.


[root@dixitlab perf]#
[root@dixitlab perf]# perf report -n --stdio 
# Events: 2K cpu-clock
#
# Overhead  Samples            Command          Shared Object                                    Symbol
# ........ ..........  ...............  .....................  ........................................
#
    50.05%       1001             sshd  [e1000]                [k] e1000_phy_read_status
                       |
                       --- e1000_phy_read_status
                          |
                          |--99.60%-- dev_hard_start_xmit
                          |          sch_direct_xmit
                          |          dev_queue_xmit
                          |          ip_finish_output
                          |          ip_output
                          |          ip_local_out
                          |          ip_queue_xmit
                          |          tcp_transmit_skb
                          |          tcp_write_xmit
                          |          __tcp_push_pending_frames
                          |          tcp_push
                          |          tcp_sendmsg
                          |          inet_sendmsg
                          |          sock_aio_write
                          |          do_sync_write
                          |          vfs_write
                          |          sys_write
                          |          system_call_fastpath
                          |          __write_nocancel
                           --0.40%-- [...]

     8.35%        167          swapper  [kernel.kallsyms]      [k] native_safe_halt
                    |
                    --- native_safe_halt
                        default_idle
                        cpu_idle
                        rest_init
                        start_kernel
                        x86_64_start_reservations
                        x86_64_start_kernel

     4.45%         89          sqlplus  [kernel.kallsyms]      [k] finish_task_switch
                    |
                    --- finish_task_switch
                        __schedule
                       |
                       |--87.64%-- __cond_resched
                       |          _cond_resched
                       |          mutex_lock
                       |          |
                       |          |--91.03%-- process_output_block
                       |          |          n_tty_write
                       |          |          tty_write
                       |          |          vfs_write
                       |          |          sys_write
                       |          |          system_call_fastpath
                       |          |          __write_nocancel
                       |          |
                       |           --8.97%-- process_output
                       |                     n_tty_write
                       |                     tty_write
                       |                     vfs_write
                       |                     sys_write
                       |                     system_call_fastpath
                       |                     __write_nocancel
                       |
                        --12.36%-- schedule
                                  sysret_careful
                                  |
                                  |--81.82%-- __write_nocancel
                                  |
                                   --18.18%-- __write_nocancel


.................
...........................
..................................

Note:
You can further sort results based on cpu, shared objects etc.
perf report --sort=cpu
perf report --sort=dso

The above results reflect the division. Looking at the tree, a chain of OS-layer network functions (starting from dev_hard_start_xmit) was sampled for 50.05% × 99.60% ≈ 49.85% of the time, under the parent command sshd with shared object [e1000] – the Intel Ethernet driver, whose e1000_phy_read_status function reads the status of the network card’s PHY (physical layer).
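That contribution figure is simply the product of the two sample percentages from the tree; a one-liner reproduces the arithmetic:

```shell
# 50.05% of all samples, of which 99.60% flowed through dev_hard_start_xmit
awk 'BEGIN { printf "%.2f\n", 50.05 * 99.60 / 100 }'   # prints 49.85
```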

The rest might seem like gibberish, but it just needs time and patience to understand. I will try to cover the profiler in more detail with more use-case scenarios in the future; until then, you can check the tool’s wiki page.

Note: In case you are using perf on a VMware system, you might get the error “perf.data file has no samples“.
To fix it, run the ‘perf record’ command with the -e cpu-clock argument to collect the sample data and then use ‘perf report’ to interpret the file.

example:
perf record -F 99 -a -g -e cpu-clock -- sleep 20


Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | 3 Comments »

Script to measure the Source & Contribution of any Wait Event within AWR snapshots.

Posted by FatDBA on January 1, 2018

🤘 Happy New Year Everyone! 🤘

So, here I am with the maiden post of this year.
Today I will discuss a script that can be handy and useful while you are investigating a performance problem with the database.
In fact, this is one of my favorite scripts, one I always consider running once we have identified the leading wait event and the time frame of the spike or any transient variation in system behavior. This script helps you measure the source and the contribution, or impact, of any specific wait event.

The script is a join between the ASH history views (dba_hist_active_sess_history and dba_hist_snapshot) and tells you, per source, how many ASH samples (a proxy for Average Active Sessions, AAS) were waiting on this event during the period, along with each source’s total contribution as a percentage.
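The percent column the script reports is just each source’s share of the total samples in the window. An awk sketch over invented (source, samples) pairs reproduces that computation:

```shell
# (source, samples) pairs like the script's 'events' CTE would return;
# the values here are made up for illustration only
cat > /tmp/demo_events.txt <<'EOF'
5av23g8w7f3ka 145
9a7gbkahasj1a 16
Others 160
EOF

# percent = 100 * samples / total, rounded to one decimal
awk '{ s[$1]=$2; t+=$2 }
     END { for (k in s) printf "%-15s %6.1f\n", k, 100*s[k]/t }' /tmp/demo_events.txt
```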

Let’s work on a use case.
Suppose you have a system with huge user I/O happening, as confirmed by a report or tool (AWR, ASH, a script, OEM etc.).

For example, using AWR we found that the User I/O wait-class event ‘db file scattered read’ was leading the ‘Top Foreground Wait Events’ chart at the specific time when we had the high user I/O load on the system.

                                           Total Wait       Avg   % DB Wait
Event                                Waits Time (sec)      Wait   time Class
------------------------------ ----------- ---------- --------- ------ --------
db file scattered read              10,282       82.8    41.50ms  43.9 User I/O

So, now you have identified the leading wait event and the time frame with maximum intensity or frequency; let’s call the script.

Script Inputs:
Example:

Enter value for event_class:
User I/O
Enter value for event_name: db file scattered read
Enter value for begin_snap: 193
Enter value for end_snap: 200
Enter value for dbid: 2896132084

WITH
events AS (
SELECT /*+ MATERIALIZE NO_MERGE */
       SUBSTR(TRIM(h.sql_id||' '||h.program||' '||
       CASE h.module WHEN h.program THEN NULL ELSE h.module END), 1, 128) source,
       h.dbid,
       COUNT(*) samples
  FROM dba_hist_active_sess_history h,
       dba_hist_snapshot s
 WHERE h.wait_class = TRIM('&Event_Class') AND h.event = TRIM('&Event_Name')
   AND h.snap_id BETWEEN &Begin_Snap AND &End_Snap
   AND h.dbid = &dbid
   AND s.snap_id = h.snap_id
   AND s.dbid = h.dbid
   AND s.instance_number = h.instance_number
 GROUP BY
       h.sql_id,
       h.program,
       h.module,
       h.dbid
 ORDER BY
       3 DESC
),
total AS (
SELECT SUM(samples) samples,
       SUM(CASE WHEN ROWNUM > 15 THEN samples ELSE 0 END) others
  FROM events
)
SELECT e.source,
       e.samples,
       ROUND(100 * e.samples / t.samples, 1) percent,
       (SELECT DBMS_LOB.SUBSTR(s.sql_text, 1000, 1) FROM dba_hist_sqltext s WHERE s.sql_id = SUBSTR(e.source, 1, 13) AND s.dbid = e.dbid AND ROWNUM = 1) sql_text
  FROM events e,
       total t
 WHERE ROWNUM <= 15
   AND ROUND(100 * e.samples / t.samples, 1) > 0.1
 UNION ALL
SELECT 'Others',
       others samples,
       ROUND(100 * others / samples, 1) percent,
       NULL sql_text
  FROM total
 WHERE others > 0
   AND ROUND(100 * others / samples, 1) > 0.1;

Below is the output of the query.

SOURCE                                                                      SAMPLES      PERCENT       SQL_TEXT
------------------------------------------------------------------------  ---------- ---------------  -------------------------------
5av23g8w7f3ka sqlplus@dixitlab.localdomain (TNS V1-V3)                          1          45.23       select * from dixit.bigtab
9a7gbkahasj1a sqlplus@dixitlab.localdomain (TNS V1-V4)                          1           5          select * from dixit.gianttable
OTHERS                                                                          14         49.77

Okay, so the result shows that one SQL statement, with SQL ID 5av23g8w7f3ka, was responsible for more than 45% of these waits.
So, now you have the evidence and can start troubleshooting this specific statement to reduce the USER IOs.

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | 3 Comments »

 