Tales From A Lazy Fat DBA

Den of an Oracle DB Performance freak & a fan of PostgreSQL & Cassandra …. \,,/


PostgreSQL: SELECT on a big table leading to session KILL.

Posted by FatDBA on September 26, 2018

Hi Guys,

I'd like to discuss a strange situation I faced while running a plain SELECT on a table that is around 1 GB in size and contains 20,000,000 rows of test data. After less than a minute the session gets killed and I am kicked back to the Linux terminal.

And yes, the OS is RHEL 7 (64 Bit) running on VMWare.

So, this is what happened …


postgres=# select * from test;
Killed
-bash-4.2$ 


As always, to troubleshoot the issue I started with the instance/database logs to see what was going on, but they didn't say much and left me confused, with no reason given for the session kill that happened every time during this SELECT.



 LOG:  could not send data to client: Broken pipe
 STATEMENT:  SELECT * from test;
 FATAL:  connection to client lost
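
The "broken pipe" is only the server's side of the story: its client disappeared mid-transfer. The memory pressure actually builds up in psql itself, which by default buffers the entire result set in client memory before displaying anything. A hedged sketch of one way around that, using psql's standard FETCH_COUNT variable (the batch size of 10000 is illustrative):

```
-- Make psql fetch rows in batches via a cursor instead of
-- buffering all 20 million rows client-side.
\set FETCH_COUNT 10000
SELECT * FROM test;   -- now retrieved in 10000-row chunks
```

With FETCH_COUNT set, psql's peak memory stays roughly proportional to the batch size rather than to the full result set.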



Table details are below.



postgres=#
postgres=# \dt+ test
                    List of relations
 Schema | Name | Type  |  Owner   |  Size  | Description
--------+------+-------+----------+--------+-------------
 public | test | table | postgres | 996 MB |
(1 row)


postgres=# select count(*) from test;
  count
----------
 20000000
(1 row)

Time: 2045.816 ms



I fired the same query once again to see what was happening at the OS level; pasted below are the top results captured while the SELECT was running. You can use pg_top too, which is more PostgreSQL-specific.



last pid: 15382;  load avg:  4.40,  2.70,  1.65;       up 0+10:27:52                                                                                           
0 processes:
CPU states: 94.0% user,  0.0% nice,  6.0% system,  0.0% idle,  0.0% iowait
Memory: 1836M used, 15M free, 139M cached
DB activity:   0 tps,  0 rollbs/s,   0 buffer r/s,  0 hit%,      0 row r/s,    0 row w/s
DB I/O:    51 reads/s, 25940 KB/s,     0 writes/s,     0 KB/s
DB disk: 0.0 GB total, 0.0 GB free (100% used)
Swap: 1925M used, 123M free, 8916K cached


   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 15367 postgres  20   0  358572   3660   3376 D 29.8  0.2   0:33.31 postgres: postgres postgres [local] SELECT
  1721 mongod    20   0  976044   3192      0 S 19.0  0.2   4:13.26 /usr/bin/mongod -f /etc/mongod.conf
 15382 postgres  20   0 2012488 1.584g    228 S  2.4 87.6   0:02.62 psql      >>>>>>>>>>>>>>>>>>>>>>>>> Culprit



If we look at the top output, it's evident that the psql session with PID 15382 is consuming a lot under the 'RES' column, which is the non-swapped physical memory a task has used. It's using around 1.5 GB of physical RAM, and its CPU usage is a little high as well.
This makes sense, as only about 1.5 GB of RAM is allocated to this VM, so every time I fire this SELECT it maxes out memory usage. Still, I wanted to dig deeper with all my WHY, WHERE, WHAT questions, and the best place to go next is the system messages or logs.

Below are the system logs for the same time.



Sep 26 11:33:04 fatdba kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/20/2014
Sep 26 11:33:04 fatdba kernel: ffff880027f41f60 00000000a0ebe647 ffff88006c9c7ad0 ffffffff81686e13

Sep 26 11:33:07 fatdba kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Sep 26 11:33:08 fatdba kernel: [14918]    26 14918    29067        1      13      312             0 bash
Sep 26 11:33:08 fatdba kernel: [15382]    26 15382   521305   419602     920    27070             0 psql   >>>>>>>>>>>>>>>>>> >>>>>>>>>>>> CULPRIT PROCESS
Sep 26 11:33:08 fatdba kernel: [15384]    26 15384    89643      440      64      405             0 postmaster
Sep 26 11:33:08 fatdba kernel: [15385]    26 15385    19311      259      43        0             0 pg_top
Sep 26 11:33:08 fatdba kernel: [15411]     0 15411    28811       46      11       23             0 ksmtuned
Sep 26 11:33:08 fatdba kernel: [15412]     0 15412    28811       61      11       23             0 ksmtuned
Sep 26 11:33:08 fatdba kernel: [15413]     0 15413    37148      172      29        0             0 pgrep
Sep 26 11:33:08 fatdba kernel: [15416]    26 15416    89489      303      59      215             0 postmaster
Sep 26 11:33:08 fatdba kernel: [15417]    26 15417    89511      239      57      224             0 postmaster

Sep 26 11:33:08 fatdba kernel: Out of memory: Kill process 15382 (psql) score 448 or sacrifice child
Sep 26 11:33:08 fatdba kernel: Killed process 15382 (psql) total-vm:2085220kB, anon-rss:1678260kB, file-rss:148kB, shmem-rss:0kB



Sometimes I get a shock seeing the terms OS/software/apps use while logging internal activities: "Sacrifice Child", "Node Amnesia", "Shoot The Other Node in the Head", "Node Suicide" … 🙂

Okay, coming back to the point: it's now clear that the OS killed this process with ID 15382 (psql) because it was consuming almost all of the system's memory. Take a look at the columns total_vm, rss and swapents: all are very high and point to the real reason this individual session was killed.

We increased the physical memory on this box and ran the same query again with success!
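
Adding RAM fixed it here, but on a dedicated database box you can also make the OOM killer less trigger-happy by switching the kernel to strict overcommit accounting, as the PostgreSQL documentation suggests for Linux. A hedged sketch (the ratio value is illustrative; test before using in production):

```
# /etc/sysctl.conf -- strict overcommit accounting, per the
# PostgreSQL "Linux Memory Overcommit" guidance; apply with sysctl -p
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```

With strict accounting, an over-sized allocation fails with an out-of-memory error inside the requesting process instead of the kernel killing a process outright.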

Hope It Helps
Prashant Dixit


Posted in Advanced | Tagged: | Leave a Comment »

Fragmentation Or Bloating in PostgreSQL – How to identify and fix it using DB Vacuuming

Posted by FatDBA on September 4, 2018

Hey Folks,

Back with another post on PostgreSQL. This time it's about table fragmentation (bloat in PG): how to identify it and fix it using vacuuming.

Okay, so we have this table of 995 MB with close to 20,000,000 rows, and the database (the default postgres DB) is 2855 MB in size.




postgres=#
postgres=# \dt+ large_test
                         List of relations
 Schema |    Name    | Type  |  Owner   |    Size    | Description
--------+------------+-------+----------+------------+-------------
 public | large_test | table | postgres | 995 MB     |



postgres=# \l+ postgres
                                                                List of databases
   Name   |  Owner   | Encoding |  Collate   |   Ctype    | Access privileges |  Size   | Tablespace |                Description
----------+----------+----------+------------+------------+-------------------+---------+------------+--------------------------------------------
 postgres | postgres | UTF8     | en_US.utf8 | en_US.utf8 |                   | 2855 MB | pg_default | default administrative connection database
(1 row)


Now let's run some DMLs to create the scenario.



postgres=# delete from large_test where num3 >=90;
DELETE 7324451

postgres=#
postgres=# INSERT INTO large_test (num1, num2, num3)
postgres-#              SELECT round(random()*10), random(), random()*142
postgres-#                FROM generate_series(1, 2000) s(i);
INSERT 0 2000

postgres=# delete from large_test where num3 >=90;
DELETE 729

postgres=# INSERT INTO large_test (num1, num2, num3)
postgres-#              SELECT round(random()*10), random(), random()*142
postgres-#                FROM generate_series(1, 9000) s(i);
INSERT 0 9000

postgres=#
postgres=# delete from large_test where num1 < 8;
DELETE 9514961

postgres=# delete from large_test where num1 < 10;
DELETE 2536447


Okay, with all those DMLs I'm sure we now have enough fragmentation in the database; let's check.



postgres=#
postgres=# \dt+ large_test
                       List of relations
 Schema |    Name    | Type  |  Owner   |  Size  | Description
--------+------------+-------+----------+--------+-------------
 public | large_test | table | postgres | 996 MB  |
(1 row)


Okay, the size of the table is almost the same as it was before all the DMLs. Let's see if there is any fragmentation in the database; for this I used the customized statement below. You can also use the pg_class catalog to get basic details on fragmentation.



SELECT
  current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/
  ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::FLOAT/otta END)::NUMERIC,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes,
  iname, /*ituples::bigint, ipages::bigint, iotta,*/
  ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::FLOAT/iotta END)::NUMERIC,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes
FROM (
  SELECT
    schemaname, tablename, cc.reltuples, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::FLOAT)) AS otta,
    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::FLOAT)),0) AS iotta -- very rough approximation, assumes all cols
  FROM (
    SELECT
      ma,bs,schemaname,tablename,
      (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::NUMERIC AS datahdr,
      (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        schemaname, tablename, hdr, ma, bs,
        SUM((1-null_frac)*avg_width) AS datawidth,
        MAX(null_frac) AS maxfracsum,
        hdr+(
          SELECT 1+COUNT(*)/8
          FROM pg_stats s2
          WHERE null_frac <> 0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
        ) AS nullhdr
      FROM pg_stats s, (
        SELECT
          (SELECT current_setting('block_size')::NUMERIC) AS bs,
          CASE WHEN SUBSTRING(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  JOIN pg_class cc ON cc.relname = rs.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml
ORDER BY wastedbytes DESC;
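
For a quicker, rougher look you can also read the statistics collector directly: n_dead_tup in pg_stat_user_tables is the collector's running estimate of dead row versions per table. A minimal sketch using only standard catalog columns:

```
-- Quick dead-tuple check; n_dead_tup and n_live_tup are estimates
-- maintained by the statistics subsystem, not exact counts.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

A large n_dead_tup relative to n_live_tup, together with stale last_vacuum/last_autovacuum timestamps, is a good hint that a table is bloating.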


Okay, the results captured below clearly show that there is fragmentation (look at the wastedbytes column).




 current_database | schemaname |    tablename     | tbloat | wastedbytes |               iname               | ibloat | wastedibytes
------------------+------------+------------------+--------+-------------+-----------------------------------+--------+--------------
 postgres         | public     | large_test       |   31.6 |  1011122176 | idxlargetest                      |   19.6 |    426352640
 postgres         | public     | large_test       |   31.6 |  1011122176 | idxlargetest1                     |   19.6 |    426704896


Now that the fragmentation is clear, let's do the shrink or space reclaim using VACUUM. I will first try a plain ('BASIC') VACUUM and ANALYZE the table at the same time to bring the optimizer statistics up to date.

To remove dead tuples, vacuum processing provides two modes: concurrent VACUUM and full VACUUM. Concurrent VACUUM, often simply called VACUUM, removes dead tuples from each page of the table file, and other transactions can read the table while it runs. In contrast, full VACUUM removes dead tuples and defragments live tuples across the whole file, and other transactions cannot access the table while it is running.



postgres=#
postgres=# VACUUM (VERBOSE, ANALYZE) large_test; 
INFO:  vacuuming "public.large_test"
INFO:  index "idxlargetest" now contains 634412 row versions in 54840 pages
DETAIL:  0 index row versions were removed.
19811 index pages have been deleted, 13985 are currently reusable.
CPU 0.49s/0.03u sec elapsed 2.93 sec.
INFO:  index "idxlargetest1" now contains 634412 row versions in 54883 pages
DETAIL:  0 index row versions were removed.
52137 index pages have been deleted, 0 are currently reusable.
CPU 0.23s/0.03u sec elapsed 0.26 sec.
INFO:  "large_test": found 0 removable, 7 nonremovable row versions in 1 out of 127459 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 87 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.72s/0.07u sec elapsed 3.21 sec.
INFO:  analyzing "public.large_test"
INFO:  "large_test": scanned 30000 of 127459 pages, containing 149103 live rows and 0 dead rows; 30000 rows in sample, 633484 estimated total rows
VACUUM
postgres=#


As discussed and expected, we see no change in wasted space and the fragmentation still exists: a plain VACUUM marks dead-tuple space as reusable within the table file but does not return it to the operating system. The result below matches what we collected before the basic VACUUM.



 current_database | schemaname |    tablename     | tbloat | wastedbytes |               iname               | ibloat | wastedibytes
------------------+------------+------------------+--------+-------------+-----------------------------------+--------+--------------
 postgres         | public     | large_test       |   31.6 |  1011122176 | idxlargetest                      |   19.6 |    426352640
 postgres         | public     | large_test       |   31.6 |  1011122176 | idxlargetest1                     |   19.6 |    426704896


Okay, so time to test FULL VACUUM, which reclaims more space and does the real work of freeing it up compared with a plain or basic VACUUM; the only issue is that it locks the table while it runs.



postgres=# VACUUM (FULL, VERBOSE, ANALYZE) large_test;
INFO:  vacuuming "public.large_test"
INFO:  "large_test": found 0 removable, 634412 nonremovable row versions in 127459 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 1.48s/0.50u sec elapsed 12.28 sec.
INFO:  analyzing "public.large_test"
INFO:  "large_test": scanned 4041 of 4041 pages, containing 634412 live rows and 0 dead rows; 30000 rows in sample, 634412 estimated total rows
VACUUM


Now let's see if the fragmentation levels have changed.



postgres=# \l+ postgres
                                                                List of databases
   Name   |  Owner   | Encoding |  Collate   |   Ctype    | Access privileges |  Size   | Tablespace |                Description
----------+----------+----------+------------+------------+-------------------+---------+------------+--------------------------------------------
 postgres | postgres | UTF8     | en_US.utf8 | en_US.utf8 |                   | 1062 MB | pg_default | default administrative connection database
(1 row)

postgres=# \dt+ large_test
                      List of relations
 Schema |    Name    | Type  |  Owner   | Size  | Description
--------+------------+-------+----------+-------+-------------
 public | large_test | table | postgres | 32 MB |
(1 row)

 current_database | schemaname |    tablename     | tbloat | wastedbytes |               iname               | ibloat | wastedibytes
------------------+------------+------------------+--------+-------------+-----------------------------------+--------+--------------
 postgres         | public     | large_test       |    1.0 |       32768 | idxlargetest                      |    0.6 |            0
 postgres         | public     | large_test       |    1.0 |       32768 | idxlargetest1                     |    0.6 |            0


Yup, this time it worked: after the FULL VACUUM there is hardly any wasted or fragmented space left in the table.
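
To keep bloat like this from building up again, the autovacuum thresholds can also be tightened per table so vacuum kicks in after a smaller fraction of the rows change. A hedged sketch using the standard storage parameter (the 1% figure is illustrative, not a recommendation for every workload):

```
-- Trigger autovacuum on large_test once ~1% of its rows are dead,
-- instead of the default 20% (autovacuum_vacuum_scale_factor = 0.2).
ALTER TABLE large_test SET (autovacuum_vacuum_scale_factor = 0.01);
```

For big, heavily-updated tables, a smaller scale factor means more frequent but much cheaper vacuum runs.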


Hope It Helps!
Prashant Dixit


Upgrade PostgreSQL from 9.4 to 9.6 on RHEL 7

Posted by FatDBA on September 3, 2018

Hi Mates,

Every other day while doing this POC (data migration between Oracle and PostgreSQL) I face new challenges (but I like them ;)), this time related to application and database support. The project team asked us to upgrade the database to at least 9.6 (version 10 is in the market as well, but I guess it's not yet ready for production).

It's good to know that you can run various versions of PostgreSQL at the same time, but as per the ask, this test DB is running version 9.4 and will be upgraded to 9.6. So, before I start with the steps, let's quickly collect details about all objects created in the database so far, as this will be used for TVT testing later on.
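
For that TVT comparison, a simple per-schema object inventory taken before and after the upgrade is handy to diff. A sketch using the standard catalogs (relkind letters are documented in pg_class: r = table, i = index, S = sequence, and so on):

```
-- Count objects by schema and type in the current database;
-- run before and after the upgrade and compare the output.
SELECT n.nspname, c.relkind, count(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
GROUP BY 1, 2
ORDER BY 1, 2;
```

Run it in each database (it is per-database, not cluster-wide) and keep the output alongside the \l+ and \dt+ listings below.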

Okay, so we are running PostgreSQL 9.4.19 and have created a few sample/test databases with tables, indexes and tablespaces.

Environment:
Server: RHEL 7 64 Bit
Upgrade: From 9.4.19 to 9.6.10

Pre Checks:


-bash-4.2$ uname -ar
Linux fatdba.localdomain 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 16:26:01 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
-bash-4.2$

postgres=# select version();
                                                    version
----------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4.19 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)


elephant=# select datname from pg_database;
  datname
-----------
 template1
 template0
 postgres
 elephant
 dixitdb
(5 rows)

elephant=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=# \dt+
                         List of relations
 Schema |    Name    | Type  |  Owner   |    Size    | Description
--------+------------+-------+----------+------------+-------------
 public | alee       | table | postgres | 128 kB     |
 public | events     | table | postgres | 8192 bytes |
 public | large_test | table | postgres | 995 MB     |
(3 rows)

postgres=#

Step 1: Download the required package from the PostgreSQL official website (Link: https://www.postgresql.org/download/linux/redhat/); there you will find the latest repository RPM for your OS and PG version.


[root@fatdba ~]#
[root@fatdba ~]# yum install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-redhat96-9.6-3.noarch.rpm
Loaded plugins: langpacks, ulninfo
pgdg-redhat96-9.6-3.noarch.rpm                                                                                                                                        | 4.7 kB  00:00:00
Examining /var/tmp/yum-root-fEvD8A/pgdg-redhat96-9.6-3.noarch.rpm: pgdg-redhat96-9.6-3.noarch
Marking /var/tmp/yum-root-fEvD8A/pgdg-redhat96-9.6-3.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package pgdg-redhat96.noarch 0:9.6-3 will be installed
--> Finished Dependency Resolution
cassandra/signature                                                                                                                                                   |  833 B  00:00:00
cassandra/signature                                                                                                                                                   | 2.9 kB  00:00:00 !!!
mongodb-org-3.4/7Server                                                                                                                                               | 2.5 kB  00:00:00
ol7_UEKR4/x86_64                                                                                                                                                      | 1.2 kB  00:00:00
ol7_addons/x86_64                                                                                                                                                     | 1.2 kB  00:00:00
ol7_latest/x86_64                                                                                                                                                     | 1.4 kB  00:00:00
pgdg94/7Server/x86_64                                                                                                                                                 | 4.1 kB  00:00:00

Dependencies Resolved

=============================================================================================================================================================================================
 Package                                      Arch                                  Version                                 Repository                                                  Size
=============================================================================================================================================================================================
Installing:
 pgdg-redhat96                                noarch                                9.6-3                                   /pgdg-redhat96-9.6-3.noarch                                2.7 k

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package

Total size: 2.7 k
Installed size: 2.7 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : pgdg-redhat96-9.6-3.noarch                                                                                                                                                1/1
  Verifying  : pgdg-redhat96-9.6-3.noarch                                                                                                                                                1/1

Installed:
  pgdg-redhat96.noarch 0:9.6-3

Complete!
[root@fatdba ~]#



[root@fatdba ~]#
[root@fatdba ~]# yum install postgresql96
Loaded plugins: langpacks, ulninfo
pgdg96                                                                                                                                                                | 4.1 kB  00:00:00
(1/2): pgdg96/7Server/x86_64/group_gz                                                                                                                                 |  249 B  00:00:01
(2/2): pgdg96/7Server/x86_64/primary_db                                                                                                                               | 197 kB  00:00:01
Resolving Dependencies
--> Running transaction check
---> Package postgresql96.x86_64 0:9.6.10-1PGDG.rhel7 will be installed
--> Processing Dependency: postgresql96-libs(x86-64) = 9.6.10-1PGDG.rhel7 for package: postgresql96-9.6.10-1PGDG.rhel7.x86_64
--> Running transaction check
---> Package postgresql96-libs.x86_64 0:9.6.10-1PGDG.rhel7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================================================
 Package                                           Arch                                   Version                                               Repository                              Size
=============================================================================================================================================================================================
Installing:
 postgresql96                                      x86_64                                 9.6.10-1PGDG.rhel7                                    pgdg96                                 1.4 M
Installing for dependencies:
 postgresql96-libs                                 x86_64                                 9.6.10-1PGDG.rhel7                                    pgdg96                                 318 k

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 1.7 M
Installed size: 8.7 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): postgresql96-libs-9.6.10-1PGDG.rhel7.x86_64.rpm                                                                                                                | 318 kB  00:00:07
(2/2): postgresql96-9.6.10-1PGDG.rhel7.x86_64.rpm                                                                                                                     | 1.4 MB  00:00:10
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                        169 kB/s | 1.7 MB  00:00:10
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : postgresql96-libs-9.6.10-1PGDG.rhel7.x86_64                                                                                                                               1/2
  Installing : postgresql96-9.6.10-1PGDG.rhel7.x86_64                                                                                                                                    2/2
  Verifying  : postgresql96-libs-9.6.10-1PGDG.rhel7.x86_64                                                                                                                               1/2
  Verifying  : postgresql96-9.6.10-1PGDG.rhel7.x86_64                                                                                                                                    2/2

Installed:
  postgresql96.x86_64 0:9.6.10-1PGDG.rhel7

Dependency Installed:
  postgresql96-libs.x86_64 0:9.6.10-1PGDG.rhel7

Complete!
[root@fatdba ~]#



[root@fatdba ~]# yum install postgresql96-server
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package postgresql96-server.x86_64 0:9.6.10-1PGDG.rhel7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================================================
 Package                                             Arch                                   Version                                             Repository                              Size
=============================================================================================================================================================================================
Installing:
 postgresql96-server                                 x86_64                                 9.6.10-1PGDG.rhel7                                  pgdg96                                 4.5 M

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package

Total download size: 4.5 M
Installed size: 19 M
Is this ok [y/d/N]: y
Downloading packages:
postgresql96-server-9.6.10-1PGDG.rhel7.x86_64.rpm                                                                                                                     | 4.5 MB  00:00:11
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : postgresql96-server-9.6.10-1PGDG.rhel7.x86_64                                                                                                                             1/1
  Verifying  : postgresql96-server-9.6.10-1PGDG.rhel7.x86_64                                                                                                                             1/1

Installed:
  postgresql96-server.x86_64 0:9.6.10-1PGDG.rhel7

Complete!
[root@fatdba ~]#


Step 2: Create the new PostgreSQL data directory. You can do this by calling initdb through the setup script present under the newly installed 9.6 directory.


[root@fatdba ~]#
[root@fatdba ~]# /usr/pgsql-9.6/bin/postgresql96-setup initdb
Initializing database ... OK

[root@fatdba ~]#
[root@fatdba ~]#
[root@fatdba ~]#
[root@fatdba ~]# cd /var/lib/pgsql
[root@fatdba pgsql]# ls -lthr
total 0
drwxrwxrwx 4 postgres postgres 48 Sep  3 04:39 9.4
drwx------ 3 postgres postgres 29 Sep  3 04:44 tbs
drwx------ 3 postgres postgres 29 Sep  3 20:31 tbs1
drwx------ 4 postgres postgres 48 Sep  3 20:40 9.6
[root@fatdba pgsql]# su - postgres
Last login: Mon Sep  3 20:24:34 IST 2018 on pts/3
-bash-4.2$

Step 3: Before the real upgrade steps, it's always best and advised to verify that everything is set and ready for the upgrade; this is one of the most important prerequisites.
At the end of the check run it says 'Clusters are compatible', which is good enough to say that the system is all set for the upgrade.


-bash-4.2$ /usr/pgsql-9.6/bin/pg_upgrade --old-bindir=/usr/pgsql-9.4/bin/ --new-bindir=/usr/pgsql-9.6/bin/ --old-datadir=/var/lib/pgsql/9.4/data/ --new-datadir=/var/lib/pgsql/9.6/data/ --check

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for roles starting with 'pg_'                      ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*
-bash-4.2$


Step 4: Up to this point the older version is still running; it needs to be stopped before we perform the actual upgrade.


-bash-4.2$ ps -ef|grep post
root       1378      1  0 20:04 ?        00:00:00 /usr/libexec/postfix/master -w
postfix    1380   1378  0 20:04 ?        00:00:00 qmgr -l -t unix -u
postgres   3381      1  0 20:10 ?        00:00:00 /usr/pgsql-9.4/bin/postgres -D /var/lib/pgsql/9.4/data
postgres   3382   3381  0 20:10 ?        00:00:00 postgres: logger process
postgres   3384   3381  0 20:10 ?        00:00:01 postgres: checkpointer process
postgres   3385   3381  0 20:10 ?        00:00:00 postgres: writer process
postgres   3386   3381  0 20:10 ?        00:00:01 postgres: wal writer process
postgres   3387   3381  0 20:10 ?        00:00:00 postgres: autovacuum launcher process
postgres   3388   3381  0 20:10 ?        00:00:00 postgres: stats collector process


-bash-4.2$ su - root
Password:
Last login: Mon Sep  3 20:42:59 IST 2018 from 192.168.40.1 on pts/4
[root@fatdba ~]#
[root@fatdba ~]#
[root@fatdba ~]# service postgresql-9.4 stop
Redirecting to /bin/systemctl stop  postgresql-9.4.service
[root@fatdba ~]#
[root@fatdba ~]# service postgresql-9.4 status
Redirecting to /bin/systemctl status  postgresql-9.4.service
● postgresql-9.4.service - PostgreSQL 9.4 database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql-9.4.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2018-09-03 20:44:59 IST; 5s ago
     Docs: https://www.postgresql.org/docs/9.4/static/
  Process: 4924 ExecStop=/usr/pgsql-9.4/bin/pg_ctl stop -D ${PGDATA} -s -m fast (code=exited, status=0/SUCCESS)
 Main PID: 3381 (code=exited, status=0/SUCCESS)

Sep 03 04:40:02 fatdba.localdomain systemd[1]: Starting PostgreSQL 9.4 database server...
Sep 03 04:40:02 fatdba.localdomain pg_ctl[3377]: LOG:  redirecting log output to logging collector process
Sep 03 04:40:02 fatdba.localdomain pg_ctl[3377]: HINT:  Future log output will appear in directory "pg_log".
Sep 03 04:40:03 fatdba.localdomain systemd[1]: Started PostgreSQL 9.4 database server.
Sep 03 20:44:58 fatdba.localdomain systemd[1]: Stopping PostgreSQL 9.4 database server...
Sep 03 20:44:59 fatdba.localdomain systemd[1]: Stopped PostgreSQL 9.4 database server.
[root@fatdba ~]#

[root@fatdba ~]# ps -ef|grep post
root       1378      1  0 20:04 ?        00:00:00 /usr/libexec/postfix/master -w
postfix    1380   1378  0 20:04 ?        00:00:00 qmgr -l -t unix -u
postfix    4113   1378  0 20:24 ?        00:00:00 pickup -l -t unix -u
postgres   4835   4708  0 20:43 pts/4    00:00:00 tail -200f pg_upgrade_server.log
root       4938   4864  0 20:45 pts/3    00:00:00 grep --color=auto post


Step 5: Let’s upgrade the database now. The upgrade took around 5 minutes for a 5 GB database.
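Before the real run, pg_upgrade also supports a dry-run mode worth trying first; a minimal sketch using the same binaries and data directories as this post (with --check nothing is actually modified):

```shell
# Dry run first: --check only reports incompatibilities, it changes nothing.
# Paths match this post's layout; adjust them for your own install.
/usr/pgsql-9.6/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-9.4/bin/ \
  --new-bindir=/usr/pgsql-9.6/bin/ \
  --old-datadir=/var/lib/pgsql/9.4/data/ \
  --new-datadir=/var/lib/pgsql/9.6/data/ \
  --check
```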


-bash-4.2$ /usr/pgsql-9.6/bin/pg_upgrade --old-bindir=/usr/pgsql-9.4/bin/ --new-bindir=/usr/pgsql-9.6/bin/ --old-datadir=/var/lib/pgsql/9.4/data/ --new-datadir=/var/lib/pgsql/9.6/data/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for roles starting with 'pg_'                      ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
-bash-4.2$
-bash-4.2$

It’s always a good idea to keep a tail on the server upgrade logs while pg_upgrade runs.


-bash-4.2$ ls -ltrh
total 12K
drwxrwxrwx 4 postgres postgres   48 Sep  3 04:39 9.4
drwx------ 3 postgres postgres   29 Sep  3 04:44 tbs
drwx------ 3 postgres postgres   29 Sep  3 20:31 tbs1
drwx------ 4 postgres postgres   48 Sep  3 20:40 9.6
-rw------- 1 postgres postgres  179 Sep  3 20:43 pg_upgrade_utility.log
-rw------- 1 postgres postgres 1.1K Sep  3 20:43 pg_upgrade_internal.log
-rw------- 1 postgres postgres 1.6K Sep  3 20:43 pg_upgrade_server.log


-----------------------------------------------------------------
  pg_upgrade run on Mon Sep  3 20:46:48 2018
-----------------------------------------------------------------

command: "/usr/pgsql-9.4/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/9.4/data/" -o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/pgsql'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start....LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
 done
server started


command: "/usr/pgsql-9.4/bin/pg_ctl" -w -D "/var/lib/pgsql/9.4/data/" -o "" -m smart stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down.... done
server stopped


command: "/usr/pgsql-9.6/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/9.6/data/" -o "-p 50432 -b -c synchronous_commit=off -c fsync=off -c full_page_writes=off  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/pgsql'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start.... LOG:  redirecting log output to logging collector process
 HINT:  Future log output will appear in directory "pg_log".
 done
server started


command: "/usr/pgsql-9.6/bin/pg_ctl" -w -D "/var/lib/pgsql/9.6/data/" -o "" -m smart stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down.... done
server stopped


command: "/usr/pgsql-9.6/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/9.6/data/" -o "-p 50432 -b -c synchronous_commit=off -c fsync=off -c full_page_writes=off  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/pgsql'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start.... LOG:  redirecting log output to logging collector process
 HINT:  Future log output will appear in directory "pg_log".
 done
server started


command: "/usr/pgsql-9.6/bin/pg_ctl" -w -D "/var/lib/pgsql/9.6/data/" -o "" -m smart stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down.... done
server stopped


command: "/usr/pgsql-9.6/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/9.6/data/" -o "-p 50432 -b -c synchronous_commit=off -c fsync=off -c full_page_writes=off  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/var/lib/pgsql'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start.... LOG:  redirecting log output to logging collector process
 HINT:  Future log output will appear in directory "pg_log".
 done
server started


command: "/usr/pgsql-9.6/bin/pg_ctl" -w -D "/var/lib/pgsql/9.6/data/" -o "" -m smart stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down.... done
server stopped

Step 6: Once the upgrade is done, we have to start & enable the new PostgreSQL service.
Okay, so below we can see that the postmaster and its associated background processes have started from the new version (9.6) and are accessing the right data directory.

Note: Before you start the new service, always make sure all required settings have been carried over from the old configuration files to the new ones, for example listen_addresses and max_connections in postgresql.conf, or any authentication rules defined in pg_hba.conf.
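A quick (hedged) way to spot anything that still needs carrying over is simply to diff the two sets of config files, using the data directory paths from this post:

```shell
# Compare old and new config files before starting 9.6,
# so hand-tuned settings are not silently lost.
diff /var/lib/pgsql/9.4/data/postgresql.conf /var/lib/pgsql/9.6/data/postgresql.conf
diff /var/lib/pgsql/9.4/data/pg_hba.conf     /var/lib/pgsql/9.6/data/pg_hba.conf
```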


[root@fatdba ~]# service postgresql-9.6 start
Redirecting to /bin/systemctl start  postgresql-9.6.service
[root@fatdba ~]#
[root@fatdba ~]# ps -ef|grep post
root       1378      1  0 20:04 ?        00:00:00 /usr/libexec/postfix/master -w
postfix    1380   1378  0 20:04 ?        00:00:00 qmgr -l -t unix -u
postgres   5298      1  0 20:48 ?        00:00:00 /usr/pgsql-9.6/bin/postmaster -D /var/lib/pgsql/9.6/data/
postgres   5301   5298  0 20:48 ?        00:00:00 postgres: logger process
postgres   5303   5298  0 20:48 ?        00:00:00 postgres: checkpointer process
postgres   5304   5298  0 20:48 ?        00:00:00 postgres: writer process
postgres   5305   5298  0 20:48 ?        00:00:00 postgres: wal writer process
postgres   5306   5298  0 20:48 ?        00:00:00 postgres: autovacuum launcher process
postgres   5307   5298  0 20:48 ?        00:00:00 postgres: stats collector process
root       5309   5228  0 20:48 pts/3    00:00:00 grep --color=auto post
[root@fatdba ~]#

[root@fatdba ~]#
[root@fatdba ~]# systemctl enable postgresql-9.6
Created symlink from /etc/systemd/system/multi-user.target.wants/postgresql-9.6.service to /usr/lib/systemd/system/postgresql-9.6.service.
[root@fatdba ~]#

Postchecks:
pg_upgrade creates two new scripts under the parent directory, analyze_new_cluster.sh and delete_old_cluster.sh. It’s good to run analyze_new_cluster.sh, which vacuums the new cluster and collects optimizer statistics for its objects; this is a must in big production setups.


-bash-4.2$ ls -ltrh
total 8.0K
drwxrwxrwx 4 postgres postgres  48 Sep  3 04:39 9.4
drwx------ 4 postgres postgres  48 Sep  3 20:40 9.6
drwx------ 4 postgres postgres  52 Sep  3 20:46 tbs1
drwx------ 4 postgres postgres  52 Sep  3 20:46 tbs
-rwx------ 1 postgres postgres 135 Sep  3 20:47 delete_old_cluster.sh
-rwx------ 1 postgres postgres 755 Sep  3 20:47 analyze_new_cluster.sh

-bash-4.2$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/usr/pgsql-9.6/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "dixitdb": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "elephant": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "dixitdb": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "elephant": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "dixitdb": Generating default (full) optimizer statistics
vacuumdb: processing database "elephant": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

Done
-bash-4.2$

Post Verification or Sanity Tests.


-bash-4.2$
-bash-4.2$ psql -d elephant -U postgres
psql (9.6.10)
Type "help" for help.

elephant=#
elephant=# select version();
                                                  version
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

elephant=#
elephant=# \d+
                     List of relations
 Schema |  Name  | Type  |  Owner   |  Size   | Description
--------+--------+-------+----------+---------+-------------
 public | serbia | table | postgres | 5120 kB |
(1 row)

elephant=# \dt+
                     List of relations
 Schema |  Name  | Type  |  Owner   |  Size   | Description
--------+--------+-------+----------+---------+-------------
 public | serbia | table | postgres | 5120 kB |
(1 row)

elephant=#
elephant=#
elephant=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=#
postgres=#
postgres=# \dt+
                         List of relations
 Schema |    Name    | Type  |  Owner   |    Size    | Description
--------+------------+-------+----------+------------+-------------
 public | alee       | table | postgres | 128 kB     |
 public | events     | table | postgres | 8192 bytes |
 public | large_test | table | postgres | 995 MB     |
(3 rows)

postgres=#
postgres=#
postgres=# \q

Okay, we are all good: no corruption or data loss was noticed, so we can consider this upgrade SUCCESSFUL.
Now, if required, we can go ahead and delete the old database (Postgres calls it a cluster).


-bash-4.2$
-bash-4.2$ ./delete_old_cluster.sh

Hope It Helps
Prashant Dixit

Posted in Advanced

Postgres CREATE TABLESPACE failed with Permission denied

Posted by FatDBA on September 2, 2018

Hi Everyone,

This week I was busy preparing one Postgres database for migration purposes and faced a few basic problems while doing some rudimentary operations. Yesterday I was trying to create a tablespace and kept getting ‘Permission denied’ for the defined path/location.
I tried to create this tablespace in different locations but failed every time with the same permission error, and interestingly the filesystem permissions were fine (directory owned by postgres with rwx permissions). This left me confused about where exactly the problem was.

This is what I was trying, and the error I was getting:

elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql/tbs';
ERROR:  could not set permissions on directory "/var/lib/pgsql/tbs": Permission denied
elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql';
ERROR:  could not set permissions on directory "/var/lib/pgsql": Permission denied
elephant=# CREATE TABLESPACE aleph location '/var';
ERROR:  could not set permissions on directory "/var": Permission denied

Then all of a sudden I remembered something similar I had encountered in the past while working on Oracle databases, where an SELinux policy prevented SQL*Plus from connecting to the database. So I decided to turn off SELinux enforcement by running setenforce 0 (or you can use setenforce Permissive).
* The above command switches off SELinux enforcement temporarily, until the machine is rebooted. If you would like to make it permanent, edit /etc/sysconfig/selinux:
# vi /etc/sysconfig/selinux

And set / update it as follows:
SELINUX=disabled

[root@fatdba ~]# setenforce 0
[root@fatdba ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          enforcing
Policy version:                 26
Policy from config file:        targeted
[root@fatdba ~]#
[root@fatdba ~]#

Now, after changing the SELinux setting, I tried the same step once again and BOOM, it worked!

elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql/tbs';
CREATE TABLESPACE
elephant=#
elephant=#
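As a side note, setenforce 0 relaxes SELinux for the whole box. If you would rather keep enforcing mode, a narrower fix (a sketch, assuming the semanage tool from policycoreutils is installed) is to label just the tablespace directory with the context PostgreSQL’s policy expects:

```shell
# Label only the tablespace directory instead of disabling SELinux;
# postgresql_db_t is the file context PostgreSQL's policy expects.
semanage fcontext -a -t postgresql_db_t "/var/lib/pgsql/tbs(/.*)?"
restorecon -Rv /var/lib/pgsql/tbs
```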

Hope It Helps
Prashant Dixit

Posted in Basics, troubleshooting

Postgresql Error – could not connect to server, Is the server accepting connections on Unix domain socket “/tmp/.s.PGSQL.5432”?

Posted by FatDBA on August 31, 2018

Hey Mates,

I have got this new test box for some POCs related to data migration between Oracle and PostgreSQL, and the very first time I tried to connect with psql on this server I got an error. This was a newly provisioned server with PostgreSQL 8.4 installed.

The error, pasted below, points at the connection settings.

[root@fatdba ~]# psql

psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
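When you see this message, a quick (hedged) first check is whether any server process exists at all and whether the default socket is there; the /tmp socket directory shown in the error is the old 8.4-era default:

```shell
# Is any postgres server process alive, and does the default socket exist?
SOCK=/tmp/.s.PGSQL.5432
ps -ef | grep '[p]ostgres' || echo "no postgres process found"
if [ -S "$SOCK" ]; then
    echo "socket present: $SOCK"
else
    echo "no socket at $SOCK (server down, or using another socket directory)"
fi
```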

I initialized the server and started the service on the machine, and it started smoothly with no errors.

[root@fatdba ~]# service postgresql initdb
Initializing database:
                                                           [  OK  ]
[root@fatdba ~]# service postgresql start
Starting postgresql service:

I tried to connect to the database once again and this time got a different error, now related to Ident-based authentication. That gave me the idea that the authentication method, not the server itself, was the problem. To verify it, I checked the pg_hba configuration file, changed the METHOD to ‘trust’ for these host connection types, and restarted the postgresql service. And that worked!

[root@fatdba data]# psql -U postgres -W
Password for user postgres:
psql: FATAL:  Ident authentication failed for user "postgres"

[root@fatdba data]# vi pg_hba.conf
[root@fatdba data]#
# TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
# IPv4 local connections:
host    all         all         127.0.0.1/32          trust
host    all         all         192.168.145.129/32    trust


[root@fatdba data]# service postgresql restart
Stopping postgresql service:                               [  OK  ]
Starting postgresql service:                               [  OK  ]
[root@fatdba data]#
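One caveat worth adding: ‘trust’ means anyone who can reach those addresses connects without a password, so it is only suitable for getting unblocked on a test box. A hedged sketch of a safer pg_hba.conf once you are in (followed by a restart or reload):

```
# TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
host    all         all         127.0.0.1/32          md5
host    all         all         192.168.145.129/32    md5
```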

Posted in Basics

12c OEM Error: LongOpManager$ZombieDetection:1017

Posted by FatDBA on August 31, 2018

Hey Pals,

I am sure a few of us who have 12c EM configured on our systems have received one of those annoying alerts/incidents where the EM Agent on a server repeatedly reports something about ‘Zombie’ processes.

I recently encountered one such issue on a production system where the 12c agent kept sending zombie-related alerts like the one below. So, this post is all about handling such issues and how to fix or avoid them.

Host=dixitlab.asi.dixson.corp 
Target type=Agent 
Target name=dixitlab.asi.dixson.corp:3873 
Message=Problem: java.lang.Throwable:oracle.sysman.gcagent.tmmain.execution.LongOpManager$ZombieDetection:1017 
Severity=Critical 
Problem creation time=May 5, 2018 10:22:48 AM ADST 
Last updated time=Aug 29, 2018 1:00:47 AM ADST 
Problem ID=113 
Problem key=java.lang.Throwable:oracle.sysman.gcagent.tmmain.execution.LongOpManager$ZombieDetection:1017 
Incident count=5 
Problem Status=New 
Priority=None 
Escalated=No 
Problem acknowledged By Owner=No 
Rule Name=OEM12c-Problems,Enterprise Problems 
Rule Owner=SYSMAN 

The next quick thing in such cases is always to check the agent (gcagent) logs to understand the exact error or issue with EM or the agent.
And as expected, the logs had something interesting in them: I saw DEBUG messages showing zombie detection being initiated by the agent again and again.
This happens when an EM Agent task, such as collecting metrics, runs longer than expected; the process is then marked as a zombie, which is one of the leading causes of agent crashes or halts.

X-AGENT_PERSISTENCE_WAIT_TIME: 60
X-AGENT_PERSISTENCE_ID: https://dixitlab1.asi.dixson.corp:1830/emd/main/
2018-08-29 17:10:26,050 [31:858161EB] DEBUG - Submitting task ZombieDetector for execution
2018-08-29 17:10:26,050 [216:1AE716D8] DEBUG - Begin task ZombieDetector on Thread: GC.SysExecutor.8
2018-08-29 17:10:26,050 [216:69BEAC9D:GC.SysExecutor.8 (ZombieDetector)] DEBUG - Scheduling next ZombieDetector.Task after delay 60000 including periodShift of 0 milliseconds

There are a few ways to avoid such issues:
1. Set _zombieSuspensions=TRUE
2. Set _canceledThreadWait=900
3. Set _zombieThreadPercentThreshold=0

Another way, which simply suppresses the alerts/incidents, is to set the ‘hidden’ parameter _zombieCreateIncident to FALSE in the agent configuration file, followed by an agent restart.
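The same property can also be set through emctl rather than editing emd.properties by hand; a sketch (the agent still needs a bounce afterwards):

```shell
# Set the hidden property via emctl from the agent's bin directory,
# then bounce the agent so it takes effect.
./emctl setproperty agent -name "_zombieCreateIncident" -value false
./emctl stop agent
./emctl start agent
```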

[oracle@dixitlab config]$
[oracle@dixitlab config]$ pwd
/u01/app/oracle/new_agent12c/agent_inst/sysman/config


[oracle@dixitlab config]$ more emd.properties|grep _zombieCreateIncident
_zombieCreateIncident=false
[oracle@dixitlab config]$




[oracle@dixitlab bin]$
[oracle@dixitlab bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/new_agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/new_agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/new_agent12c/core/12.1.0.5.0
Agent Process ID       : 2978
Parent Process ID      : 12860
Agent URL              : https://dixitlab.asi.dixson.corp:3873/emd/main/
Local Agent URL in NAT : https://dixitlab.asi.dixson.corp:3873/emd/main/
Repository URL         : https://dixitlab1.asi.dixson.corp:4900/empbs/upload
Started at             : 2018-08-29 01:01:08
Started by user        : oracle
Operating System       : Linux version 2.6.32-696.20.1.el6.x86_64 (amd64)
Last Reload            : (none)
Last successful upload                       : 2018-08-29 17:28:20
Last attempted upload                        : 2018-08-29 17:28:20
Total Megabytes of XML files uploaded so far : 1.03
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 52.74%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2018-08-29 17:29:16
Last successful heartbeat to OMS             : 2018-08-29 17:29:16
Next scheduled heartbeat to OMS              : 2018-08-29 17:30:16

---------------------------------------------------------------



[oracle@dixitlab bin]$ ./emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ...
 stopped.
[oracle@dixitlab bin]$
[oracle@dixitlab bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent is Not Running
[oracle@dixitlab bin]$
[oracle@dixitlab bin]$
[oracle@dixitlab bin]$ ./emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting agent .............................. started.
[oracle@dixitlab bin]$
[oracle@dixitlab bin]$
[oracle@dixitlab bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/new_agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/new_agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/new_agent12c/core/12.1.0.5.0
Agent Process ID       : 2664
Parent Process ID      : 2455
Agent URL              : https://dixitlab.asi.dixson.corp:3873/emd/main/
Local Agent URL in NAT : https://dixitlab.asi.dixson.corp:3873/emd/main/
Repository URL         : https://dixitlab1.asi.dixson.corp:4900/empbs/upload
Started at             : 2018-08-29 17:33:05
Started by user        : oracle
Operating System       : Linux version 2.6.32-696.20.1.el6.x86_64 (amd64)
Last Reload            : (none)
Last successful upload                       : 2018-08-29 17:34:34
Last attempted upload                        : 2018-08-29 17:34:34
Total Megabytes of XML files uploaded so far : 0.03
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 52.71%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2018-08-29 17:34:24
Last successful heartbeat to OMS             : 2018-08-29 17:34:24
Next scheduled heartbeat to OMS              : 2018-08-29 17:35:24

---------------------------------------------------------------
Agent is Running and Ready
[oracle@dixitlab bin]$


Hope It Helps
Prashant Dixit

Posted in troubleshooting

Collecting Exadata Cell Performance Statistics using ‘Cellperfdiag.sql’

Posted by FatDBA on July 22, 2018

Hi Guys,

Last week we finished a migration for one of our customers who recently purchased Oracle Exadata machines (X6-2). After that complex data movement completed successfully, we observed some great performance improvements without actually making any changes, just the Exadata features in action.

But it’s not that the Exa machines are ‘infrangible’ or unbreakable. There are many times when you actually have to understand cell/compute-node or system-wide performance, and the output from tools like Sundiag, ExaWatcher, Exachk, TFA, SOS report etc. is not sufficient.

For the purpose of ‘cell performance diag info’, Oracle has provided a script to collect Exadata cell performance statistics which you can use to interpret the issues.

The output of the script is split into several sections:
1. Cell specific parameters
2. Top CELL waits with wait times (Worst Times)
3. Cell Configuration details

Script to Collect Exadata Cell Performance Information (cellperfdiag.sql) (Doc ID 2038493.1)
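For reference, the script from the MOS note above is simply run from SQL*Plus on a database node as a privileged user; a minimal sketch:

```shell
# Download cellperfdiag.sql from MOS note 2038493.1 to the current
# directory, then run it as a DBA user:
sqlplus / as sysdba @cellperfdiag.sql
```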
Below is the output from one of the Cell Node of a lower environment.


CELLPERFDIAG DATA FOR DixitLab_July19_0257

IMPORTANT PARAMETERS RELATING TO CELL PERFORMANCE:

INST_ID NAME                                     VALUE
------- ---------------------------------------- ----------------------------------------
      1 cell_offload_compaction                  ADAPTIVE
      1 cell_offload_decryption                  TRUE
      1 cell_offload_plan_display                PDTO
      1 cell_offload_processing                  TRUE
      2 cell_offload_compaction                  ADAPTIVE
      2 cell_offload_decryption                  TRUE
      2 cell_offload_plan_display                PDTO
      2 cell_offload_processing                  TRUE

TOP 20 CURRENT CELL WAITS

This is to look at current cell waits; it may not return any data.

ASH CELL PERFORMANCE SUMMARY

This query will look at the average cell wait times for each cell in ASH

CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------------------------ --------------- --------------
192.199.66.19;192.199.66.20       1185605081.0        96124.9
192.199.66.21;192.199.66.22       1148823479.0        88371.0
192.199.66.17;192.199.66.18       1048776677.0        82115.3
192.199.66.23;192.199.66.24        927836650.0        76604.7
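The wait-time columns here come from ASH’s TIME_WAITED, which is reported in microseconds; to eyeball the averages in milliseconds you can run them through a one-liner like this (figures taken from the table above):

```shell
# AVG_WAIT_TIME is in microseconds; divide by 1000 for milliseconds.
awk '{ printf "%-28s %8.1f ms\n", $1, $2/1000 }' <<'EOF'
192.199.66.19;192.199.66.20 96124.9
192.199.66.21;192.199.66.22 88371.0
192.199.66.17;192.199.66.18 82115.3
192.199.66.23;192.199.66.24 76604.7
EOF
```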

20 WORST CELL PERFORMANCE MINUTES IN ASH:

APPROACH: These are the minutes where the avg cell perf time
was the highest.  See which cell had the longest waits and
during what minute.

MINUTE       CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------ ------------------------------ --------------- --------------
July18_1308   192.199.66.21;192.199.66.22         28445866.0       536714.5
July18_1158   192.199.66.19;192.199.66.20         25685919.0       524202.4
July18_1218   192.199.66.23;192.199.66.24           496960.0       496960.0
July18_2008   192.199.66.21;192.199.66.22         21481465.0       488215.1
July18_1618   192.199.66.19;192.199.66.20         56796413.0       485439.4
July18_1638   192.199.66.17;192.199.66.18         23823342.0       467124.4
July18_2004   192.199.66.21;192.199.66.22         28935643.0       466703.9
July18_1207   192.199.66.17;192.199.66.18          1779234.0       444808.5
July19_0042   192.199.66.19;192.199.66.20           888334.0       444167.0
July18_1654   192.199.66.23;192.199.66.24          2619836.0       436639.3
July18_2008   192.199.66.19;192.199.66.20         11634865.0       430920.9
July18_1324   192.199.66.19;192.199.66.20         66537869.0       423808.1
July18_1157   192.199.66.21;192.199.66.22         22725628.0       420845.0
July19_0028   192.199.66.21;192.199.66.22           841081.0       420540.5
July18_1323   192.199.66.19;192.199.66.20         76790471.0       419620.1
July18_1157   192.199.66.19;192.199.66.20         28103203.0       401474.3
July18_1159   192.199.66.17;192.199.66.18         14050389.0       401439.7
July18_1158   192.199.66.23;192.199.66.24         10054891.0       386726.6
July18_1639   192.199.66.17;192.199.66.18         79265611.0       386661.5
July18_2009   192.199.66.21;192.199.66.22         20335679.0       383692.1

50 LONGEST CELL WAITS IN ASH ORDERED BY WAIT TIME

APPROACH: These are the top 50 individual cell waits in ASH
in wait time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
19/JULY/18 08:04:19.964 PM 192.199.66.21;192.199.66.22          2 cell smart incremental backup            2990700133          0          0     5560085
19/JULY/18 08:04:18.964 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     3333391
19/JULY/18 04:38:52.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2653079
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2486538
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2461036
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2447901
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2443358
19/JULY/18 11:57:14.811 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3718979034      32768     2433342
19/JULY/18 02:53:14.279 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431871
19/JULY/18 01:24:36.080 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2430882
19/JULY/18 01:36:57.174 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2429672
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2427505
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2427165
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2426099
19/JULY/18 01:24:40.011 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2417661
19/JULY/18 01:24:40.090 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2416163
19/JULY/18 01:24:47.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2396291
19/JULY/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370
19/JULY/18 01:23:36.971 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2370848
19/JULY/18 04:48:26.040 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213  842883291      32768     2359311
19/JULY/18 03:35:36.076 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2344972
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2330945
19/JULY/18 11:57:52.881 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2310010
19/JULY/18 04:38:55.395 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2304803
19/JULY/18 01:37:01.537 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2284322
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2275197
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2250389
19/JULY/18 11:58:11.911 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2256514768      32768     2176213
19/JULY/18 01:23:46.981 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2149317
19/JULY/18 01:24:54.380 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 1783777120      32768     2138193
19/JULY/18 01:24:14.051 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2107128
19/JULY/18 08:07:04.178 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2102692
19/JULY/18 08:04:28.984 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2098081
19/JULY/18 02:43:04.647 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2367494631      32768     2090428
19/JULY/18 01:23:47.921 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2077418
19/JULY/18 01:24:48.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2056411
19/JULY/18 01:24:48.021 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2043911
19/JULY/18 11:57:57.891 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2033684
19/JULY/18 08:03:38.870 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 2251483371      32768     2032547
19/JULY/18 01:37:01.174 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2031799
19/JULY/18 08:03:39.874 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  474454945      32768     2024898
19/JULY/18 11:58:31.931 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  474454945      32768     2013268
19/JULY/18 03:53:00.757 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     2004020
19/JULY/18 04:39:03.405 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     1997819
19/JULY/18 04:18:40.331 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1994866
19/JULY/18 01:23:34.861 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1988301
19/JULY/18 04:38:57.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     1985840
19/JULY/18 01:23:17.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     1983376
19/JULY/18 01:08:56.066 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     1965134
19/JULY/18 04:39:39.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1956491

100 LONGEST CELL WAITS IN ASH ORDERED BY SAMPLE TIME

APPROACH: These are the top 100 individual cell waits in ASH
in sample time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
19/JULY/18 11:57:14.811 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3718979034      32768     2433342
19/JULY/18 11:57:27.821 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  552535141      32768     1627154
19/JULY/18 11:57:49.881 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     1871961
19/JULY/18 11:57:52.881 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     1634342
19/JULY/18 11:57:52.881 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2310010
19/JULY/18 11:57:57.891 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2033684
19/JULY/18 11:58:11.911 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2256514768      32768     2176213
19/JULY/18 11:58:31.931 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  474454945      32768     2013268
19/JULY/18 11:58:56.001 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1433462179      32768     1665222
19/JULY/18 11:59:11.149 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3958490174      32768     1796201
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2426099
19/JULY/18 11:59:13.149 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2443358
19/JULY/18 12:02:25.487 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 1783777120      32768     1667222
19/JULY/18 12:36:22.441 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     1817473
19/JULY/18 01:08:11.966 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     1824501
19/JULY/18 01:08:53.056 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  758773684      32768     1633915
19/JULY/18 01:08:56.066 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     1965134
19/JULY/18 01:22:18.849 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1692084
19/JULY/18 01:22:35.760 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1404181515      32768     1927461
19/JULY/18 01:22:35.760 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1404181515      32768     1892900
19/JULY/18 01:22:35.871 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1817924
19/JULY/18 01:23:08.830 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     1798727
19/JULY/18 01:23:17.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     1983376
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2461036
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2275197
19/JULY/18 01:23:19.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1340519488      32768     1600315
19/JULY/18 01:23:21.951 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     1806676
19/JULY/18 01:23:32.971 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1916487
19/JULY/18 01:23:34.861 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1988301
19/JULY/18 01:23:36.971 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2370848
19/JULY/18 01:23:46.981 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2149317
19/JULY/18 01:23:47.921 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2077418
19/JULY/18 01:24:09.051 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1716730
19/JULY/18 01:24:14.051 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2107128
19/JULY/18 01:24:36.080 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2430882
19/JULY/18 01:24:40.011 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2417661
19/JULY/18 01:24:40.090 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2416163
19/JULY/18 01:24:47.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2396291
19/JULY/18 01:24:48.021 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131  931391314      32768     2043911
19/JULY/18 01:24:48.100 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2056411
19/JULY/18 01:24:54.380 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 1783777120      32768     2138193
19/JULY/18 01:35:40.244 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2381222600      32768     1670138
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2330945
19/JULY/18 01:36:56.527 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2427165
19/JULY/18 01:36:57.174 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     1872516
19/JULY/18 01:36:57.174 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2429672
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2250389
19/JULY/18 01:36:58.527 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2427505
19/JULY/18 01:37:01.174 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2031799
19/JULY/18 01:37:01.174 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     1782826
19/JULY/18 01:37:01.537 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3530077886      32768     2284322
19/JULY/18 01:37:04.587 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     1609551
19/JULY/18 02:43:01.717 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1705580
19/JULY/18 02:43:04.647 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2367494631      32768     2090428
19/JULY/18 02:43:04.757 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1667922
19/JULY/18 02:43:11.657 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  552535141      32768     1700677
19/JULY/18 02:53:14.279 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431871
19/JULY/18 03:09:17.091 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     1718584
19/JULY/18 03:09:22.101 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1404181515      32768     1924659
19/JULY/18 03:09:23.820 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     1609377
19/JULY/18 03:35:36.076 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2344972
19/JULY/18 03:42:53.196 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1705247
19/JULY/18 03:43:09.256 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1340519488      32768     1621901
19/JULY/18 03:53:00.757 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     2004020
19/JULY/18 03:53:03.356 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2701596410      32768     1951623
19/JULY/18 03:53:04.767 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     1911016
19/JULY/18 04:02:42.355 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1122005524      32768     1600296
19/JULY/18 04:18:40.331 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     1994866
19/JULY/18 04:18:41.331 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     1680223
19/JULY/18 04:18:47.341 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     1676801
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2447901
19/JULY/18 04:38:51.656 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2486538
19/JULY/18 04:38:52.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2653079
19/JULY/18 04:38:55.395 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2304803
19/JULY/18 04:38:57.666 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     1985840
19/JULY/18 04:39:03.405 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     1997819
19/JULY/18 04:39:26.743 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 1433462179      32768     1686065
19/JULY/18 04:39:39.455 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     1770436
19/JULY/18 04:39:39.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     1956491
19/JULY/18 04:39:40.763 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     1719451
19/JULY/18 04:39:45.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     1669777
19/JULY/18 04:39:53.783 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     1606474
19/JULY/18 04:47:20.930 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 1467922874      32768     1598316
19/JULY/18 04:47:21.101 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1723089
19/JULY/18 04:48:26.040 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213  842883291      32768     2359311
19/JULY/18 05:17:36.317 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  125724152      32768     1743703
19/JULY/18 05:23:11.293 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  552535141      32768     1631948
19/JULY/18 05:23:19.727 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     1741381
19/JULY/18 06:23:57.158 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1374369774      32768     1769798
19/JULY/18 08:03:38.870 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     1622627
19/JULY/18 08:03:38.870 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 2251483371      32768     2032547
19/JULY/18 08:03:39.874 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  474454945      32768     2024898
19/JULY/18 08:03:39.874 PM 192.199.66.23;192.199.66.24          2 cell smart incremental backup            3802949213          0          0     1775912
19/JULY/18 08:04:18.964 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     3333391
19/JULY/18 08:04:19.964 PM 192.199.66.21;192.199.66.22          2 cell smart incremental backup            2990700133          0          0     5560085
19/JULY/18 08:04:28.984 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2098081
19/JULY/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370
19/JULY/18 08:07:04.178 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2102692
19/JULY/18 08:09:11.705 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  870191132      32768     1753415
19/JULY/18 10:13:12.946 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1777250
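
The listing above can be reproduced with a query along these lines. This is only a sketch, assuming an Exadata system where V$CELL exposes CELL_PATH and CELL_HASHVAL, and relying on the fact that for cell I/O wait events ASH records the cell hash in P1, the disk hash in P2, and the byte count in P3; column names can vary by version.

```sql
-- Sketch: individual longest cell waits from in-memory ASH (both instances).
-- For cell% events: P1 = cell hash, P2 = disk hash, P3 = bytes read.
SELECT ash.sample_time,
       c.cell_path,
       ash.inst_id,
       ash.event,
       ash.p1          AS cellhash#,
       ash.p2          AS diskhash#,
       ash.p3          AS bytes,
       ash.time_waited
FROM   gv$active_session_history ash,
       v$cell c
WHERE  ash.event LIKE 'cell%'
AND    ash.time_waited > 0
AND    ash.p1 = c.cell_hashval
ORDER  BY ash.sample_time;
```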

ASH HISTORY CELL PERFORMANCE SUMMARY

APPROACH: This query looks at the total and average cell wait
times for each cell across ASH history.

CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------------------------ --------------- --------------
192.199.66.19;192.199.66.20       4320370317.0       144803.9
192.199.66.17;192.199.66.18       3720276327.0       126342.3
192.199.66.21;192.199.66.22       3302364789.0       116198.6
192.199.66.23;192.199.66.24       2214253857.0        86457.1
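
The per-cell summary above boils down to a GROUP BY over the AWR-persisted ASH rows. A minimal sketch, again assuming V$CELL maps the P1 cell hash to a cell path and that TIME_WAITED is in microseconds:

```sql
-- Sketch: total and average cell wait time per storage cell,
-- taken from ASH history (DBA_HIST_ACTIVE_SESS_HISTORY).
SELECT c.cell_path,
       SUM(ash.time_waited) AS total_wait_time,
       AVG(ash.time_waited) AS avg_wait_time
FROM   dba_hist_active_sess_history ash,
       v$cell c
WHERE  ash.event LIKE 'cell%'
AND    ash.time_waited > 0
AND    ash.p1 = c.cell_hashval
GROUP  BY c.cell_path
ORDER  BY total_wait_time DESC;
```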

20 WORST CELL PERFORMANCE MINUTES IN ASH HISTORY:

APPROACH: These are the minutes where the avg cell perf time
was the highest. See which cell had the longest waits and
during which minute.

MINUTE       CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------ ------------------------------ --------------- --------------
July17_0458   192.199.66.21;192.199.66.22          2403872.0      2403872.0
July17_0454   192.199.66.23;192.199.66.24          2150347.0      2150347.0
July09_1416   192.199.66.17;192.199.66.18          1887469.0      1887469.0
July08_2209   192.199.66.21;192.199.66.22          1693688.0      1693688.0
July17_0200   192.199.66.23;192.199.66.24          1597102.0      1597102.0
July17_1655   192.199.66.19;192.199.66.20          1567615.0      1567615.0
July11_0400   192.199.66.19;192.199.66.20          1373801.0      1373801.0
July16_1343   192.199.66.17;192.199.66.18          1356319.0      1356319.0
July07_2001   192.199.66.21;192.199.66.22          6774340.0      1354868.0
July10_2006   192.199.66.23;192.199.66.24         10787174.0      1348396.8
July04_1431   192.199.66.19;192.199.66.20          5180228.0      1295057.0
July17_0454   192.199.66.19;192.199.66.20          2579203.0      1289601.5
July09_1901   192.199.66.19;192.199.66.20          1260900.0      1260900.0
July11_0400   192.199.66.23;192.199.66.24          1255291.0      1255291.0
July08_1437   192.199.66.19;192.199.66.20          2439786.0      1219893.0
July17_0457   192.199.66.23;192.199.66.24          1177614.0      1177614.0
July04_0906   192.199.66.23;192.199.66.24          1153969.0      1153969.0
July18_0123   192.199.66.17;192.199.66.18          1153542.0      1153542.0
July17_1957   192.199.66.21;192.199.66.22          1112349.0      1112349.0
July11_0958   192.199.66.23;192.199.66.24          4421561.0      1105390.3
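
The per-minute breakdown above just adds a truncated sample time to the grouping. A sketch of the idea, under the same assumptions as before (V$CELL for the cell path, P1 as the cell hash); the MINUTE label, e.g. July17_0458, is simply TO_CHAR of the sample time, and the 12c-style FETCH FIRST is used here only for brevity:

```sql
-- Sketch: worst minutes by average cell wait time across ASH history.
SELECT TO_CHAR(ash.sample_time, 'MonDD_HH24MI') AS minute,
       c.cell_path,
       SUM(ash.time_waited) AS total_wait_time,
       AVG(ash.time_waited) AS avg_wait_time
FROM   dba_hist_active_sess_history ash,
       v$cell c
WHERE  ash.event LIKE 'cell%'
AND    ash.time_waited > 0
AND    ash.p1 = c.cell_hashval
GROUP  BY TO_CHAR(ash.sample_time, 'MonDD_HH24MI'), c.cell_path
ORDER  BY avg_wait_time DESC
FETCH FIRST 20 ROWS ONLY;
```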

50 LONGEST CELL WAITS IN ASH HISTORY ORDERED BY WAIT TIME

APPROACH: These are the top 50 individual cell waits in ASH
history in wait time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
07/July/18 08:01:45.575 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     4381405
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     3489839
19/JULY/18 08:24:30.057 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     3027146
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2994219
14/July/18 11:07:27.489 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2583166
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2572340
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2524157
09/July/18 10:30:37.201 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2511210
09/July/18 07:04:57.612 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     2508541
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2504764
15/July/18 02:23:59.797 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     2498919
09/July/18 07:15:15.200 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2496073
16/July/18 10:01:28.060 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2481597
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2481323
09/July/18 11:37:22.166 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2478192
16/July/18 12:18:57.851 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2476915
16/July/18 03:00:25.122 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2474114
16/July/18 07:54:01.716 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  758773684      32768     2473662
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2473365
15/July/18 01:42:56.817 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2256514768      32768     2472463
08/July/18 05:38:29.082 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3704102872      32768     2466691
17/July/18 02:28:27.460 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2464447
09/July/18 11:07:27.164 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1122005524      32768     2463370
17/July/18 02:43:03.640 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2460091
16/July/18 03:00:45.182 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2459368
16/July/18 07:37:59.702 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2457007
14/July/18 11:07:25.301 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4016617326      32768     2453596
17/July/18 02:40:02.464 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     2449781
09/July/18 02:20:37.918 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2449762
11/July/18 04:09:24.827 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2449147
14/July/18 09:45:22.608 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2447554
16/July/18 10:52:07.321 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3958490174      32768     2444207
14/July/18 04:49:22.691 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2441217
04/July/18 02:31:23.455 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2441112
14/July/18 10:03:34.449 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2440465
04/July/18 03:07:31.342 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2438825
14/July/18 03:32:54.840 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2437829
17/July/18 02:38:52.304 AM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2263141931      32768     2437640
16/July/18 02:29:45.206 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2437155
16/July/18 08:15:35.001 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2436056
16/July/18 10:38:49.813 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2435774
09/July/18 08:12:07.171 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  185394440      32768     2435748
15/July/18 02:44:02.345 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2435230
16/July/18 07:38:55.929 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431597
16/July/18 10:39:39.873 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2430666
10/July/18 03:01:42.766 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2430197
15/July/18 06:11:20.764 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2429871
16/July/18 10:16:11.295 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1019816052      32768     2429717
08/July/18 05:38:19.072 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2263141931      32768     2429692
16/July/18 10:01:38.100 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2428275

100 LONGEST CELL WAITS IN ASH HISTORY ORDERED BY SAMPLE TIME

APPROACH: These are the top 100 individual cell waits in ASH
history in sample time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
04/July/18 02:30:43.598 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1374369774      32768     2424167
04/July/18 03:07:31.342 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2438825
04/July/18 10:29:07.175 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2417864
04/July/18 10:29:47.246 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1233700156      32768     2339031
04/July/18 02:31:23.455 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2441112
07/July/18 08:01:45.575 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     4381405
08/July/18 01:36:44.865 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2420689
08/July/18 01:36:44.865 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2330502
08/July/18 01:36:53.062 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 1783777120      32768     2420230
08/July/18 04:23:25.390 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2410974
08/July/18 04:34:13.379 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2256514768      32768     2414668
08/July/18 05:17:46.103 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 1783777120      32768     2319802
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2481323
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2504764
08/July/18 05:38:19.072 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2263141931      32768     2429692
08/July/18 05:38:29.082 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3704102872      32768     2466691
08/July/18 05:38:29.082 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3616317741      32768     2325172
09/July/18 10:13:18.007 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2418070
09/July/18 10:13:18.007 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1404181515      32768     2293017
09/July/18 10:13:23.693 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2263141931      32768     2355873
09/July/18 10:13:28.017 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  931391314      32768     2359674
09/July/18 10:30:37.201 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2511210
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2524157
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2383155
09/July/18 11:07:27.164 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 1122005524      32768     2463370
09/July/18 11:37:22.166 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2478192
09/July/18 01:57:19.921 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     2337878
09/July/18 02:20:37.918 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2449762
09/July/18 06:51:04.415 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3266942716      32768     2303697
09/July/18 07:04:57.612 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     2508541
09/July/18 07:15:15.200 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2496073
09/July/18 08:12:07.171 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  185394440      32768     2435748
10/July/18 03:01:42.766 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2430197
10/July/18 01:38:20.572 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1122206058      32768     2396284
10/July/18 08:06:28.912 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 3616317741      32768     2425901
10/July/18 08:06:37.777 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 3704102872      32768     2423546
11/July/18 04:09:24.827 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 3409456992      32768     2449147
14/July/18 09:45:22.608 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1374369774      32768     2447554
14/July/18 10:03:34.449 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2440465
14/July/18 10:03:54.460 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2401321
14/July/18 11:07:25.301 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4016617326      32768     2453596
14/July/18 11:07:27.489 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2583166
14/July/18 01:03:23.248 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3958490174      32768     2314095
14/July/18 03:32:54.840 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2437829
14/July/18 04:49:22.691 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2441217
15/July/18 02:23:59.797 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     2498919
15/July/18 02:28:01.245 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2418245
15/July/18 01:42:26.787 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2263141931      32768     2418862
15/July/18 01:42:56.817 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2256514768      32768     2472463
15/July/18 02:44:02.345 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2435230
15/July/18 05:23:22.759 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1404181515      32768     2424663
15/July/18 06:11:20.764 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2429871
15/July/18 06:12:20.859 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1340519488      32768     2299402
16/July/18 02:29:45.206 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2437155
16/July/18 02:31:14.605 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  744236512      32768     2425509
16/July/18 02:35:59.882 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3958490174      32768     2424458
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     2417036

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     3489839
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2994219
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2572340
16/July/18 03:00:25.122 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2474114
16/July/18 03:00:45.182 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2459368
16/July/18 03:01:07.779 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  870191132      32768     2338236
16/July/18 09:36:02.169 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2395522
16/July/18 10:01:28.060 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2481597
16/July/18 10:01:38.100 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2428275
16/July/18 10:16:11.295 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1019816052      32768     2429717
16/July/18 10:38:49.813 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 2342661109      32768     2435774
16/July/18 10:39:39.873 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2430666
16/July/18 10:39:39.873 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2414021
16/July/18 10:52:07.321 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 3958490174      32768     2444207
16/July/18 11:47:58.536 AM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2381222600      32768     2422526
16/July/18 12:18:57.851 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2476915
16/July/18 05:01:01.612 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440   24718082      32768     2351090
16/July/18 06:55:13.320 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2375834
16/July/18 06:55:13.320 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2414035
16/July/18 06:56:23.443 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3718979034      32768     2322248
16/July/18 07:37:59.702 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2457007
16/July/18 07:37:59.702 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2417497
16/July/18 07:38:55.929 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440   24718082      32768     2431597
16/July/18 07:54:01.716 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  758773684      32768     2473662
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2419493
16/July/18 08:15:35.001 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2367494631      32768     2436056
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131  280877291      32768     2473365
17/July/18 02:03:05.401 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133  744236512      32768     2425350
17/July/18 02:38:22.270 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2344538
17/July/18 02:38:49.166 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     2423420
17/July/18 02:38:49.166 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3256856773      32768     2419184
17/July/18 02:38:52.304 AM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213 2263141931      32768     2437640
17/July/18 02:38:59.176 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1433462179      32768     2415119
17/July/18 02:40:02.464 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     2449781
17/July/18 02:43:03.640 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2460091
17/July/18 04:58:17.697 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     2403872
17/July/18 02:24:05.496 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2381222600      32768     2420104
17/July/18 02:28:27.460 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2464447
17/July/18 04:39:42.871 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     2297049
19/July/18 08:24:30.057 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     3027146
19/July/18 09:55:04.470 AM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1340519488      32768     2363513
19/July/18 04:38:55.395 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1317632796      32768     2304803
19/July/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370

AWR CELL DISK UTILIZATION

APPROACH: This query only works in 12.1 and above. It looks in the
AWR history tables at cell disk utilization during some of the worst
periods, and reports the top 100 rows by disk utilization.
DISK_UTILIZATION_SUM: Sum of the per-minute disk-utilization metrics.
IO_REQUESTS_SUM: Sum of the per-minute IOPS.
IO_MB_SUM: Sum of the per-minute I/O throughput metrics, in megabytes per second.
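
A sketch of the general shape of such an aggregation, assuming the 12.1+ AWR cell view DBA_HIST_CELL_DISK_SUMMARY; note that the metric column names used here are hypothetical placeholders inferred from the report headings, not verified against the data dictionary:

```sql
-- Sketch only: sum per-minute cell disk metrics per AWR snapshot.
-- DBA_HIST_CELL_DISK_SUMMARY is a 12.1+ AWR view, but the three metric
-- columns below (disk_utilization, io_requests, io_mb) are hypothetical
-- placeholders inferred from the report's column headings.
SELECT snap_id,
       cell_name,
       disk_name,
       SUM(disk_utilization) AS disk_utilization_sum,
       SUM(io_requests)      AS io_requests_sum,
       SUM(io_mb)            AS io_mb_sum
FROM   dba_hist_cell_disk_summary
GROUP  BY snap_id, cell_name, disk_name
ORDER  BY disk_utilization_sum DESC
FETCH FIRST 100 ROWS ONLY;
```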

   SNAP_ID BEGIN               END                 CELL_NAME                      DISK_NAME                      DISK_UTILIZATION_SUM IO_REQUESTS_SUM  IO_MB_SUM
---------- ------------------- ------------------- ------------------------------ ------------------------------ -------------------- --------------- ----------
     22669 July13_1300          July13_1330          192.199.66.17;192.199.66.18    /dev/sda                                       1977           11302       1938
     22669 July13_1300          July13_1330          192.199.66.17;192.199.66.18    /dev/sdb                                       1977           11302       1938
     22669 July13_1300          July13_1330          192.199.66.17;192.199.66.18    /dev/sdm                                       1977           11302       1938
     22669 July13_1300          July13_1330          192.199.66.19;192.199.66.20    /dev/sda                                       1926           11343       1979
     22669 July13_1300          July13_1330          192.199.66.19;192.199.66.20    /dev/sdb                                       1926           11343       1979
     22669 July13_1300          July13_1330          192.199.66.19;192.199.66.20    /dev/sdm                                       1926           11343       1979
     22669 July13_1300          July13_1330          192.199.66.21;192.199.66.22    /dev/sda                                       1966           11227       1945
     22669 July13_1300          July13_1330          192.199.66.21;192.199.66.22    /dev/sdb                                       1966           11227       1945
     22669 July13_1300          July13_1330          192.199.66.21;192.199.66.22    /dev/sdm                                       1966           11227       1945
     22669 July13_1300          July13_1330          192.199.66.23;192.199.66.24    /dev/sda                                       1888           11353       1920
     22669 July13_1300          July13_1330          192.199.66.23;192.199.66.24    /dev/sdb                                       1888           11353       1920
     22669 July13_1300          July13_1330          192.199.66.23;192.199.66.24    /dev/sdm                                       1888           11353       1920
     22827 July16_2000          July16_2030          192.199.66.17;192.199.66.18    /dev/sda                                       2068           11683       1428
     22827 July16_2000          July16_2030          192.199.66.17;192.199.66.18    /dev/sdb                                       2068           11683       1428
     22827 July16_2000          July16_2030          192.199.66.17;192.199.66.18    /dev/sdm                                       2068           11683       1428
     22827 July16_2000          July16_2030          192.199.66.19;192.199.66.20    /dev/sda                                       2089           11588       1443
     22827 July16_2000          July16_2030          192.199.66.19;192.199.66.20    /dev/sdb                                       2089           11588       1443
     22827 July16_2000          July16_2030          192.199.66.19;192.199.66.20    /dev/sdm                                       2089           11588       1443
     22827 July16_2000          July16_2030          192.199.66.21;192.199.66.22    /dev/sda                                       1975           11095       1419
     22827 July16_2000          July16_2030          192.199.66.21;192.199.66.22    /dev/sdb                                       1975           11095       1419
     22827 July16_2000          July16_2030          192.199.66.21;192.199.66.22    /dev/sdm                                       1975           11095       1419
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    /dev/sda                                       3808            9676       3421
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    /dev/sdb                                       3808            9676       3421
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    /dev/sdm                                       3808            9676       3421
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_00_monkeynode11_adm                        1929            4811       1713
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_01_monkeynode11_adm                        1879            4623       1703
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_02_monkeynode11_adm                        1793            5186       1674
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_03_monkeynode11_adm                        1808            4887       1633
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_04_monkeynode11_adm                        1923            4612       1719
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_05_monkeynode11_adm                        1898            4812       1750
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_06_monkeynode11_adm                        1887            4748       1677
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_07_monkeynode11_adm                        1905            4917       1720
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_08_monkeynode11_adm                        1792            4997       1687
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_09_monkeynode11_adm                        1623            4926       1501
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_10_monkeynode11_adm                        1921            4466       1739
     22875 July17_2000          July17_2030          192.199.66.17;192.199.66.18    CD_11_monkeynode11_adm                        1949            4469       1723
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    /dev/sda                                       4075           11529       5240
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    /dev/sdb                                       4075           11529       5240
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    /dev/sdm                                       4075           11529       5240
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_00_monkeynode12_adm                        2033            5673       2614
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_01_monkeynode12_adm                        2040            5626       2622
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_02_monkeynode12_adm                        1942            5636       2452
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_03_monkeynode12_adm                        1897            5623       2351
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_04_monkeynode12_adm                        2051            5586       2591
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_05_monkeynode12_adm                        1862            5838       2297
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_06_monkeynode12_adm                        2058            5355       2581
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_07_monkeynode12_adm                        2063            5668       2592
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_08_monkeynode12_adm                        2076            5506       2676
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_09_monkeynode12_adm                        2066            5829       2619
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_10_monkeynode12_adm                        2046            5322       2562
     22875 July17_2000          July17_2030          192.199.66.19;192.199.66.20    CD_11_monkeynode12_adm                        1972            5429       2487
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    /dev/sda                                       3710            9738       3413
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    /dev/sdb                                       3710            9738       3413
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    /dev/sdm                                       3710            9738       3413
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_00_monkeynode13_adm                        1874            4751       1752
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_01_monkeynode13_adm                        1835            4755       1661
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_02_monkeynode13_adm                        1772            5021       1623

   SNAP_ID BEGIN               END                 CELL_NAME                      DISK_NAME                      DISK_UTILIZATION_SUM IO_REQUESTS_SUM  IO_MB_SUM
---------- ------------------- ------------------- ------------------------------ ------------------------------ -------------------- --------------- ----------
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_03_monkeynode13_adm                        1801            4860       1627
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_04_monkeynode13_adm                        1729            4834       1603
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_05_monkeynode13_adm                        2094            6325       2698
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_06_monkeynode13_adm                        1804            4935       1633
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_07_monkeynode13_adm                        1943            4356       1757
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_08_monkeynode13_adm                        1870            4385       1673
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_09_monkeynode13_adm                        1690            4848       1525
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_10_monkeynode13_adm                        1908            5668       1679
     22875 July17_2000          July17_2030          192.199.66.21;192.199.66.22    CD_11_monkeynode13_adm                        1858            4699       1681
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    /dev/sda                                       3677            9714       3381
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    /dev/sdb                                       3677            9714       3381
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    /dev/sdm                                       3677            9714       3381
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_00_monkeynode14_adm                        1853            4752       1685
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_01_monkeynode14_adm                        1822            4750       1693
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_02_monkeynode14_adm                        1874            5117       1725
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_03_monkeynode14_adm                        1813            4593       1665
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_04_monkeynode14_adm                        1810            4736       1637
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_05_monkeynode14_adm                        1869            4575       1717
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_06_monkeynode14_adm                        1789            4769       1669
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_07_monkeynode14_adm                        1840            4634       1734
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_08_monkeynode14_adm                        1838            4480       1666
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_09_monkeynode14_adm                        1753            4656       1641
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_10_monkeynode14_adm                        1729            5019       1574
     22875 July17_2000          July17_2030          192.199.66.23;192.199.66.24    CD_11_monkeynode14_adm                        1849            4579       1710
     22876 July17_2030          July17_2100          192.199.66.17;192.199.66.18    /dev/sda                                       2658           10231       2347
     22876 July17_2030          July17_2100          192.199.66.17;192.199.66.18    /dev/sdb                                       2658           10231       2347
     22876 July17_2030          July17_2100          192.199.66.17;192.199.66.18    /dev/sdm                                       2658           10231       2347
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    /dev/sda                                       3156           12112       4142
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    /dev/sdb                                       3156           12112       4142
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    /dev/sdm                                       3156           12112       4142
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_01_monkeynode12_adm                        1602            5917       2090
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_04_monkeynode12_adm                        1633            5773       2100
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_07_monkeynode12_adm                        1609            5877       2073
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_08_monkeynode12_adm                        1632            5778       2088
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_09_monkeynode12_adm                        1602            6016       2066
     22876 July17_2030          July17_2100          192.199.66.19;192.199.66.20    CD_10_monkeynode12_adm                        1594            5964       2040
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    /dev/sda                                       2449           10793       2281
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    /dev/sdb                                       2449           10793       2281
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    /dev/sdm                                       2449           10793       2281
     22876 July17_2030          July17_2100          192.199.66.21;192.199.66.22    CD_05_monkeynode13_adm                        1569            6433       2025
     22876 July17_2030          July17_2100          192.199.66.23;192.199.66.24    /dev/sda                                       2657           10505       2409
     22876 July17_2030          July17_2100          192.199.66.23;192.199.66.24    /dev/sdb                                       2657           10505       2409
     22876 July17_2030          July17_2100          192.199.66.23;192.199.66.24    /dev/sdm                                       2657           10505       2409

DB_ID_FOR_CURRENT_DB
--------------------
          2490671309

CELL THREAD HISTORY - LAST FEW MINUTES

This query only works in 12.1 and above.

  COUNT(*) SQL_ID        CELL_NAME                      JOB_TYPE                         DATABASE_ID INSTANCE_ID
---------- ------------- ------------------------------ -------------------------------- ----------- -----------
      1598               192.199.66.19;192.199.66.20    UnidentifiedJob                            0           0
      1555               192.199.66.23;192.199.66.24    UnidentifiedJob                            0           0
      1237               192.199.66.17;192.199.66.18    UnidentifiedJob                            0           0
      1148               192.199.66.21;192.199.66.22    UnidentifiedJob                            0           0
       834               192.199.66.21;192.199.66.22    NetworkPoll                                0           0
       824               192.199.66.17;192.199.66.18    NetworkPoll                                0           0
       804               192.199.66.19;192.199.66.20    NetworkPoll                                0           0
       804               192.199.66.23;192.199.66.24    NetworkPoll                                0           0
       417               192.199.66.21;192.199.66.22    OCL System Message Thread                  0           0
       417               192.199.66.21;192.199.66.22    Storage index eviction                     0           0
       412               192.199.66.17;192.199.66.18    OCL System Message Thread                  0           0
       412               192.199.66.17;192.199.66.18    Storage index eviction                     0           0
       402               192.199.66.19;192.199.66.20    OCL System Message Thread                  0           0
       402               192.199.66.19;192.199.66.20    Storage index eviction                     0           0
       402               192.199.66.23;192.199.66.24    OCL System Message Thread                  0           0
       402               192.199.66.23;192.199.66.24    Storage index eviction                     0           0
       332               192.199.66.23;192.199.66.24    NetworkRead                                0           0
       281               192.199.66.21;192.199.66.22    NetworkRead                                0           0
       254               192.199.66.17;192.199.66.18    NetworkRead                                0           0
       193 9ujkk29vay3bz 192.199.66.23;192.199.66.24    PredicateDiskRead                  584354840           1
       168 9ujkk29vay3bz 192.199.66.17;192.199.66.18    PredicateDiskRead                  584354840           1
       165 9ujkk29vay3bz 192.199.66.19;192.199.66.20    PredicateDiskRead                  584354840           1
       152 9ujkk29vay3bz 192.199.66.21;192.199.66.22    PredicateDiskRead                  584354840           1
       135               192.199.66.19;192.199.66.20    NetworkRead                                0           0
       112               192.199.66.17;192.199.66.18    CacheGet                                   0           0
        86               192.199.66.19;192.199.66.20    CacheGet                                   0           0
        83               192.199.66.21;192.199.66.22    CacheGet                                   0           0
        66               192.199.66.23;192.199.66.24    CacheGet                                   0           0
        55 aym6pqm5uzd90 192.199.66.17;192.199.66.18    PredicateDiskRead                  584354840           1
        52 aym6pqm5uzd90 192.199.66.21;192.199.66.22    PredicateDiskRead                  584354840           1
        51 aym6pqm5uzd90 192.199.66.23;192.199.66.24    PredicateDiskRead                  584354840           1
        50 oj2tdhpjgnvus 192.199.66.23;192.199.66.24    CacheGet                           584354840           1
        47 aym6pqm5uzd90 192.199.66.19;192.199.66.20    PredicateDiskRead                  584354840           1
        37 oj2tdhpjgnvus 192.199.66.17;192.199.66.18    CacheGet                           584354840           1
        34 oj2tdhpjgnvus 192.199.66.19;192.199.66.20    CacheGet                           584354840           1
        29 oj2tdhpjgnvus 192.199.66.21;192.199.66.22    CacheGet                           584354840           1
        18               192.199.66.17;192.199.66.18    PredicateFilter                            0           0
        17               192.199.66.23;192.199.66.24    PredicateFilter                            0           0
        16               192.199.66.19;192.199.66.20    PredicateFilter                            0           0
        16 221fz5z4guyxu 192.199.66.17;192.199.66.18    PredicateDiskRead                 1377831170           1
        15               192.199.66.21;192.199.66.22    PredicateFilter                            0           0
        15               192.199.66.23;192.199.66.24    CachePut                                   0           0
        14 c10mc2tzkhbwb 192.199.66.23;192.199.66.24    PredicateDiskRead                 1377831170           1
        13               192.199.66.23;192.199.66.24    CachePut                          1377831170           1
        12               192.199.66.21;192.199.66.22    CachePut                                   0           0
        11               192.199.66.17;192.199.66.18    CachePut                                   0           0
        10               192.199.66.19;192.199.66.20    CachePut                                   0           0
        10               192.199.66.19;192.199.66.20    PredicateCacheGet                          0           0
         9               192.199.66.21;192.199.66.22    CachePut                           584354840           1
         9 221fz5z4guyxu 192.199.66.19;192.199.66.20    PredicateDiskRead                 1377831170           1

CELL CONFIG


CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
192.199.66.17;192.199.66.18


  1.0
  1526659672886
  
  
    monkeynode11_adm
    remoteLoginEpdiled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    1614NM70A0
    2
    ib0
    ib1
    0.016666666666666666
    192.199.66.17/22
    192.199.66.18/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 01"
    monkeynode11-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
    FALSE
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    18.0
    normal
    22 days, 1:03
    normal
  


192.199.66.19;192.199.66.20



  1.0
  1526662533109
  
  
    monkeynode12_adm
    remoteLoginEpdiled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    1614NM705Y
    2
    ib0
    ib1
    0.0
    192.199.66.19/22
    192.199.66.20/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 02"
    monkeynode12-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    18.0
    normal
    22 days, 1:03
    normal
  


192.199.66.21;192.199.66.22


  1.0
  1526661149874

  
  
    monkeynode13_adm
    remoteLoginEpdiled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    9717PD519T
    2
    ib0
    ib1
    0.0
    192.199.66.21/22
    192.199.66.22/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 03"
    monkeynode13-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    18.0
    normal
    21 days, 23:03
    normal
  


192.199.66.23;192.199.66.24


  1.0
  1526660912516
  
  
    monkeynode14_adm

    remoteLoginEpdiled
    normal
    OSS_12.2.1.1.6_LINUX.X64_180125.1
    40/40
    7
    text
    8/8
    normal
    WriteThrough
    1635NM70Y9
    2
    ib0
    ib1
    0.016666666666666666
    192.199.66.23/22
    192.199.66.24/22
    4.1.12-94.7.8.el6uek.x86_64
    off
    Oracle Corporation ORACLE SERVER X6-2L High Capacity
    126
    7
    mail
    critical,warning,clear
    
    2/2
    normal
    success
    12.2.1.1.6.180125.1
    cell-12.2.1.1.6_LINUX.X64_180125.1-1.x86_64
    27351071
    12.1.2.3.6.170713
    "PRODA Exadata Cell 04"
    monkeynode14-adm
    11.20.238.25
    pdiPD_pdiSTRATUS.droid@pdi.com.PD
     host=11.47.4.195,port=162,community=public,type=asr,asrmPort=16161
    online
    17.0
    normal
    21 days, 22:09
    normal
  



IORM CONFIG


CELLNAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------
CONFVAL
----------------------------------------------------------------------------------------------------------------------------------------------------------------
192.199.66.17;192.199.66.18


  1.0
  1526659674006
  
  
    monkeynode11_adm_IORMPLAN
    
    
    basic
    active
  


192.199.66.19;192.199.66.20


  1.0
  1526662534081
  
  
    monkeynode12_adm_IORMPLAN
    
    
    basic
    active
  


192.199.66.21;192.199.66.22


  1.0
  1526661150877
  
  
    monkeynode13_adm_IORMPLAN
    
    
    basic
    active
  


192.199.66.23;192.199.66.24


  1.0
  1526660913655
  
  
    monkeynode14_adm_IORMPLAN
    
    

    basic
    active
  



TIME
-----------------------
July19 02:57:21
19/JULY/18 04:39:45.763 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131   91591739      32768     1669777
19/JULY/18 04:39:53.783 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     1606474
19/JULY/18 04:47:20.930 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 1467922874      32768     1598316
19/JULY/18 04:47:21.101 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 1467922874      32768     1723089
19/JULY/18 04:48:26.040 PM 192.199.66.23;192.199.66.24          2 cell single block physical read          3802949213  842883291      32768     2359311
19/JULY/18 05:17:36.317 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  125724152      32768     1743703
19/JULY/18 05:23:11.293 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  552535141      32768     1631948
19/JULY/18 05:23:19.727 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131   91591739      32768     1741381
19/JULY/18 06:23:57.158 PM 192.199.66.19;192.199.66.20          1 cell single block physical read          1318032131 1374369774      32768     1769798
19/JULY/18 08:03:38.870 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213 2701596410      32768     1622627
19/JULY/18 08:03:38.870 PM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440 2251483371      32768     2032547
19/JULY/18 08:03:39.874 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  474454945      32768     2024898
19/JULY/18 08:03:39.874 PM 192.199.66.23;192.199.66.24          2 cell smart incremental backup            3802949213          0          0     1775912
19/JULY/18 08:04:18.964 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 4112515153      32768     3333391
19/JULY/18 08:04:19.964 PM 192.199.66.21;192.199.66.22          2 cell smart incremental backup            2990700133          0          0     5560085
19/JULY/18 08:04:28.984 PM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 3409456992      32768     2098081
19/JULY/18 08:04:36.984 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     2388370
19/JULY/18 08:07:04.178 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1233700156      32768     2102692
19/JULY/18 08:09:11.705 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133  870191132      32768     1753415
19/JULY/18 10:13:12.946 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3256856773      32768     1777250

ASH HISTORY CELL PERFORMANCE SUMMARY

This query will look at the average cell wait times for each cell in ASH

CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------------------------ --------------- --------------
192.199.66.19;192.199.66.20       4320370317.0       144803.9
192.199.66.17;192.199.66.18       3720276327.0       126342.3
192.199.66.21;192.199.66.22       3302364789.0       116198.6
192.199.66.23;192.199.66.24       2214253857.0        86457.1

20 WORST CELL PERFORMANCE MINUTES IN ASH HISTORY:

APPROACH: These are the minutes where the avg cell perf time
was the highest.  See which cell had the longest waits and
during what time minute.

MINUTE       CELL_PATH                      TOTAL_WAIT_TIME  AVG_WAIT_TIME
------------ ------------------------------ --------------- --------------
July17_0458   192.199.66.21;192.199.66.22          2403872.0      2403872.0
July17_0454   192.199.66.23;192.199.66.24          2150347.0      2150347.0
July09_1416   192.199.66.17;192.199.66.18          1887469.0      1887469.0
July08_2209   192.199.66.21;192.199.66.22          1693688.0      1693688.0
July17_0200   192.199.66.23;192.199.66.24          1597102.0      1597102.0
July17_1655   192.199.66.19;192.199.66.20          1567615.0      1567615.0
July11_0400   192.199.66.19;192.199.66.20          1373801.0      1373801.0
July16_1343   192.199.66.17;192.199.66.18          1356319.0      1356319.0
July07_2001   192.199.66.21;192.199.66.22          6774340.0      1354868.0
July10_2006   192.199.66.23;192.199.66.24         10787174.0      1348396.8
July04_1431   192.199.66.19;192.199.66.20          5180228.0      1295057.0
July17_0454   192.199.66.19;192.199.66.20          2579203.0      1289601.5
July09_1901   192.199.66.19;192.199.66.20          1260900.0      1260900.0
July11_0400   192.199.66.23;192.199.66.24          1255291.0      1255291.0
July08_1437   192.199.66.19;192.199.66.20          2439786.0      1219893.0
July17_0457   192.199.66.23;192.199.66.24          1177614.0      1177614.0
July04_0906   192.199.66.23;192.199.66.24          1153969.0      1153969.0
July18_0123   192.199.66.17;192.199.66.18          1153542.0      1153542.0
July17_1957   192.199.66.21;192.199.66.22          1112349.0      1112349.0
July11_0958   192.199.66.23;192.199.66.24          4421561.0      1105390.3

50 LONGEST CELL WAITS IN ASH HISTORY ORDERED BY WAIT TIME

APPROACH: These are the top 50 individual cell waits in ASH
history in wait time order.

SAMPLE_TIME               CELL_PATH                      INST_ID EVENT                                     CELLHASH#  DISKHASH#      BYTES TIME_WAITED
------------------------- ------------------------------ ------- ---------------------------------------- ---------- ---------- ---------- -----------
07/July/18 08:01:45.575 PM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 2891233531      32768     4381405
16/July/18 02:55:09.946 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 2279513079      32768     3489839
19/JULY/18 08:24:30.057 AM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     3027146
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2994219
14/July/18 11:07:27.489 AM 192.199.66.17;192.199.66.18          1 cell single block physical read          1133011440  185394440      32768     2583166
16/July/18 02:56:30.076 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2572340
09/July/18 10:55:35.129 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3530077886      32768     2524157
09/July/18 10:30:37.201 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1317632796      32768     2511210
09/July/18 07:04:57.612 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 3835758527      32768     2508541
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2504764
15/July/18 02:23:59.797 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3835758527      32768     2498919
09/July/18 07:15:15.200 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 1122005524      32768     2496073
16/July/18 10:01:28.060 AM 192.199.66.21;192.199.66.22          1 cell single block physical read          2990700133 3770061594      32768     2481597
08/July/18 05:17:56.113 PM 192.199.66.23;192.199.66.24          1 cell single block physical read          3802949213  842883291      32768     2481323
09/July/18 11:37:22.166 AM 192.199.66.19;192.199.66.20          2 cell single block physical read          1318032131 1019816052      32768     2478192
16/July/18 12:18:57.851 PM 192.199.66.21;192.199.66.22          2 cell single block physical read          2990700133 2984361190      32768     2476915
16/July/18 03:00:25.122 AM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440 3266942716      32768     2474114
16/July/18 07:54:01.716 PM 192.199.66.17;192.199.66.18          2 cell single block physical read          1133011440  758773684      32768     2473662
16/July/18 08:15:35.001 PM 192.199.66.19;192.199.66.20  

Hope It Helps
Prashant Dixit

Posted in Advanced | 2 Comments »

DBMS_XPLAN and its different options/arguments available

Posted by FatDBA on May 10, 2018

Hi Mates,

I am back after a long time to write about something that is a vital piece of information when you are about to start any troubleshooting task: the EXECUTION PLAN. There are many ways to generate CBO plans (AUTOTRACE, extended/debug traces, utlxpls.sql, V$SQL_PLAN, a few of the specialized SQL-specific AWR reports like awrsqrpt.sql, STS, etc.), but the most common and convenient way in this area is to use the DBMS_XPLAN package and its function DISPLAY_CURSOR.

So, today’s topic is to understand what options we have to generate more interactive, detailed and elaborate plans.

Okay, we will start with the very rudimentary (BASIC) style of plan and slowly work through the rest of the arguments/options available. The BASIC plan includes only the operation, its options, and the object name (table, index, MV, etc.).


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0h79fq6vx4p99',0,'BASIC'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

------------------------------------
| Id  | Operation         | Name   |
------------------------------------
|   0 | SELECT STATEMENT  |        |
|   1 |  TABLE ACCESS FULL| DIXEMP |
------------------------------------
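
DISPLAY_CURSOR needs the SQL_ID (and optionally the child number) of the cursor you want to examine. If you don’t already know it, a quick way to find it is to search V$SQL — the LIKE pattern below is only an illustration, match on whatever text identifies your statement:


SQL> SELECT sql_id, child_number, sql_text
  2    FROM v$sql
  3   WHERE sql_text LIKE '%GATHER_PLAN_STATISTICS%dixemp%';

SQL_ID        CHILD_NUMBER SQL_TEXT
------------- ------------ --------------------------------------------------
0h79fq6vx4p99            0 select /*+ GATHER_PLAN_STATISTICS */ * from dixemp
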

Next is the most common way of generating plans, using the DISPLAY function, which shows an execution plan stored in the plan table. First we explain a SQL statement, then we use DBMS_XPLAN.DISPLAY to display its execution plan.


SQL> explain plan for select * from dixemp;
Explained.

SQL> SELECT * FROM TABLE(dbms_xplan.display);


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3980174453

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |    14 |   532 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DIXEMP |    14 |   532 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Next is the ‘ALLSTATS LAST’ option for the FORMAT parameter. Let’s see how to use it.
ALLSTATS is a shortcut for IOSTATS plus MEMSTATS (e.g. buffer information), and the additional keyword LAST can be specified to see only the statistics for the last execution.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |     14 |00:00:00.01 |      10 |
--------------------------------------------------------------------------------------
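
Note that the Starts, A-Rows and A-Time columns are only populated when Oracle actually collects row-source execution statistics; otherwise DISPLAY_CURSOR warns you and shows estimates only. The two usual ways to enable that collection are the GATHER_PLAN_STATISTICS hint (as in the statement above) or setting STATISTICS_LEVEL to ALL for the session:


SQL> -- hint just the one statement you care about ...
SQL> select /*+ GATHER_PLAN_STATISTICS */ * from dixemp;

SQL> -- ... or collect runtime stats for every statement in the session
SQL> ALTER SESSION SET statistics_level = ALL;
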

But if you take a look at the above plan you’ll see that it doesn’t contain a few of the vital columns or stats, like cost (COST) and bytes processed (BYTES), as these don’t come by default with ALLSTATS; you have to add a couple of modifiers to your FORMAT parameter to get that info, namely +cost and +bytes.

Below is how you can get that missing info from the plan.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0h79fq6vx4p99',format=>'ALLSTATS LAST +cost +bytes'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |       |     3 (100)|     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |   532 |     3   (0)|     14 |00:00:00.01 |      10 |
-----------------------------------------------------------------------------------------------------------

And the ALL option gives you the general plan plus the remaining details, like Query Block Name / Object Alias, Predicate Information and Column Projection details.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0h79fq6vx4p99',0,'ALL'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |       |       |     3 (100)|          |
|   1 |  TABLE ACCESS FULL| DIXEMP |    14 |   532 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DIXEMP@SEL$1

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "DIXEMP"."EMPNO"[NUMBER,22], "DIXEMP"."ENAME"[VARCHAR2,10],
       "DIXEMP"."JOB"[VARCHAR2,9], "DIXEMP"."MGR"[NUMBER,22],
       "DIXEMP"."HIREDATE"[DATE,7], "DIXEMP"."SAL"[NUMBER,22],
       "DIXEMP"."COMM"[NUMBER,22], "DIXEMP"."DEPTNO"[NUMBER,22]

ALLSTATS on its own (without LAST) is still the shortcut for IOSTATS and MEMSTATS (e.g. buffer information), but it shows statistics accumulated over all executions of the cursor, not just the last one — notice Starts is now 7 in the plan below.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0h79fq6vx4p99',0,'ALLSTATS'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      7 |        |     98 |00:00:00.01 |      70 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      7 |     14 |     98 |00:00:00.01 |      70 |
--------------------------------------------------------------------------------------

Now, if you want to view additional details of your plan, for example the set of hints needed to reproduce the statement (the OUTLINE), you can add that to your FORMAT parameter too.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0h79fq6vx4p99',format=>'ALLSTATS LAST +outline'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |     14 |00:00:00.01 |      10 |
--------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "DIXEMP"@"SEL$1")
      END_OUTLINE_DATA
  */
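
As a side note, DBMS_XPLAN has sibling display functions beyond DISPLAY and DISPLAY_CURSOR; for example, DISPLAY_AWR pulls historical plans for a SQL_ID out of the AWR repository (it requires the Diagnostics Pack license, and the runtime IOSTATS/MEMSTATS columns are not kept there, so a basic format is used in this sketch):


SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('0h79fq6vx4p99', format=>'TYPICAL'));
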

Adding or removing any data/stats from the plan is possible using the + or – sign followed by the keyword.
Example: if you want to view cost and bytes information, use +cost, +bytes in your FORMAT parameter; to remove the same info from your run of dbms_xplan, use -cost, -bytes.


SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0h79fq6vx4p99',format=>'ALLSTATS LAST +cost +bytes'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |       |     3 (100)|     14 |00:00:00.01 |      10 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      1 |     14 |   532 |     3   (0)|     14 |00:00:00.01 |      10 |
-----------------------------------------------------------------------------------------------------------
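The '-' form works the same way in the other direction; a minimal sketch (output omitted, same SQL_ID as above) that strips the cost and bytes columns from the default TYPICAL format:

```sql
-- Assumption: the cursor for SQL_ID 0h79fq6vx4p99 is still in the shared pool.
-- '-cost -bytes' removes those two columns instead of adding them.
SELECT *
  FROM TABLE(DBMS_XPLAN.display_cursor(sql_id => '0h79fq6vx4p99',
                                       format => 'TYPICAL -cost -bytes'));
```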

You can also combine several of these modifiers in a single FORMAT string, as below.


SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(sql_id => '0h79fq6vx4p99', cursor_child_no => 0, FORMAT => 'TYPICAL -ROWS -BYTES +COST +PARALLEL +PARTITION +IOSTATS +MEMSTATS +ALIAS +PEEKED_BINDS +OUTLINE +PREDICATE -PROJECTION +REMOTE +NOTE'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------
SQL_ID  0h79fq6vx4p99, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * from dixemp

Plan hash value: 3980174453

--------------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      7 |        |     3 (100)|          |     98 |00:00:00.01 |      70 |
|   1 |  TABLE ACCESS FULL| DIXEMP |      7 |     14 |     3   (0)| 00:00:01 |     98 |00:00:00.01 |      70 |
--------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DIXEMP@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "DIXEMP"@"SEL$1")
      END_OUTLINE_DATA
  */
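If you don't already know the SQL_ID of the statement, one common way to find it (a sketch, assuming the statement text is still cached in the shared pool) is to look it up in V$SQL:

```sql
-- Locate the cursor by its text; sql_id and child_number feed
-- DBMS_XPLAN.display_cursor directly.
SELECT sql_id, child_number, sql_text
  FROM v$sql
 WHERE sql_text LIKE 'select /*+ GATHER_PLAN_STATISTICS */ * from dixemp%';
```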

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

No active extraction maps – Golden Gate STATS command, what’s that?

Posted by FatDBA on April 8, 2018

Hey Mates,

Not sure if you guys have ever encountered a situation where the Golden Gate STATS command returns the message “No active extraction maps” for one of your Golden Gate processes.
For GG newcomers, the STATS command displays statistics for one or more Extract, Pump or Replicat groups; the output covers the DML and DDL operations included in the Oracle GoldenGate configuration.

Now coming back to the error scenario. Let’s see where we got that message and what it means.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 20> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     GGTUNEX1    00:00:03      00:00:08
EXTRACT     RUNNING     GGTUNPU1    00:00:00      00:00:01
REPLICAT    RUNNING     GGTUNRP1    00:00:00      00:00:03

Now, when I tried to check statistics for my processes, I got a message saying there are “No active extraction maps”.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 18> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 22> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

So what does that mean?
The message indicates that nothing has been processed yet by the Extract or Replicat. Once data for the tables in the Extract and Replicat has been processed (captured from or applied to the database), the STATS command produces processing statistics.
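Once a process does have statistics, STATS also accepts modifiers to shape the output; a sketch of a few commonly used ones (the group and table names are from the example below):

```text
GGSCI> STATS EXTRACT GGTUNEX1, TOTALSONLY SCOTT.EMP  -- one summary line per table
GGSCI> STATS EXTRACT GGTUNEX1, LATEST                -- only the latest statistics section
GGSCI> STATS EXTRACT GGTUNEX1, REPORTRATE MIN        -- express counts as a per-minute rate
```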

Let me modify the source table that is part of the replication and see if that brings something to the STATS results.

SQL> update emp set ENAME='KARTIKEY' where EMPNO=8090;
1 row updated.

SQL> commit;
Commit complete.

Let’s try now.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 43> stats GGTUNEX1

Sending STATS request to EXTRACT GGTUNEX1 ...

Start of Statistics at 2018-04-05 04:03:08.

Output to /acfsmounts/acfsggv1/ggdir/dirdat/s1:
Extracting from SCOTT.EMP to SCOTT.EMP:

*** Total statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Latest statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.

Yup, we’ve got stats for the process now after the change.

Hope It Helps
Prashant

Posted in Basics | Tagged: | Leave a Comment »

Convert Standard ASM to Flex ASM.

Posted by FatDBA on March 15, 2018

Hi Mates,

Okay, so you have an ASM instance crash, and at the same time the database instance on that node goes down with it … Expected behavior, and many of us have faced this scenario in production RAC setups.
The answer to this problem is ‘Flex ASM’, which provides something that was previously unattainable: the ability to run multiple ASM instances with an independent cardinality. You can think of it as what SCAN is to the 11gR2 database.

Flex ASM has been available to 12c users for a while now, so the question is: how do you convert a non-Flex ASM setup to Flex ASM?
Below is the method. I performed this as a POC for one of our customers some time back, and here are the steps.

Configuration:
RHEL 6, 64 Bit
2 Node 12cR1 RAC setup.
Hostname: rac1, rac2.
DB Instances: dixitdb1, dixitdb2

Let’s first check the network information of the cluster: the network interfaces and their IPv4 addresses. You can collect this info using the oifcfg tool.

[oracle@rac1 ~]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.10.0  global  cluster_interconnect
[oracle@rac1 ~]$

Next, let’s check the ASM information and the current cluster mode.

[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac1,rac2

[oracle@rac1 ~]$  srvctl config asm
ASM home: 
Password file: +DATA/orapwASM
ASM listener: LISTENER

[oracle@rac1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled
[oracle@rac1 ~]$

Okay, now let’s do the conversion; we will do it in silent mode. You can use the ASMCA GUI as well to do the same.
Here we use the 192.168.10.0 network for ASM traffic and a free port, 1526, for the ASM listener.

[oracle@rac1 ~]$ asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.10.0 -asmListenerPort 1526

To complete ASM conversion, run the following script as privileged user in local node.
/u01/app/oracle/cfgtoollogs/asmca/scripts/converttoFlexASM.sh

Okay, so the last step generated a script that needs to be executed as root to do the real work. It will bounce all RAC components one by one on each node. By the end of this step we will have a new listener created exclusively for the ASM instances, and both of them (+ASM1, +ASM2) will be registered with it.

[oracle@rac1 ~]$ su - root
Password:
[root@rac1 ~]# /u01/app/oracle/cfgtoollogs/asmca/scripts/converttoFlexASM.sh
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.dixitdb.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.cvu' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.dixitdb.db' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.mgmtdb' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac1'
CRS-2677: Stop of 'ora.scan3.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac2'
CRS-2676: Start of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac2'
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac2'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac2'
CRS-2676: Start of 'ora.oc4j' on 'rac2' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
Oracle Grid Infrastructure restarted in node rac1
PRCC-1014 : ASMNET1LSNR_ASM was already running
PRCR-1004 : Resource ora.ASMNET1LSNR_ASM.lsnr is already running
PRCR-1079 : Failed to start resource ora.ASMNET1LSNR_ASM.lsnr
CRS-5702: Resource 'ora.ASMNET1LSNR_ASM.lsnr' is already running on 'rac1'
CRS-5702: Resource 'ora.ASMNET1LSNR_ASM.lsnr' is already running on 'rac2'
ASM listener ASMNET1LSNR_ASM running already
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac2'
CRS-2673: Attempting to stop 'ora.dixitdb.db' on 'rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.dixitdb.db' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.mgmtdb' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac2'
CRS-2676: Start of 'ora.scan3.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac1'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac1'
CRS-2676: Start of 'ora.scan2.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac1'
CRS-2676: Start of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac1' succeeded
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac1'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2676: Start of 'ora.mgmtdb' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2677: Stop of 'ora.FRA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.storage' on 'rac2'
CRS-2677: Stop of 'ora.storage' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac2'
CRS-2676: Start of 'ora.storage' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
Oracle Grid Infrastructure restarted in node rac2
[root@rac1 ~]#

Okay, so the clusterware components have been restarted and we are back at the prompt.
Let’s verify whether the conversion succeeded …

[root@rac1 ~]# srvctl status asm
ASM is running on rac1,rac2

[root@rac1 ~]# asmcmd showclustermode
ASM cluster : Flex mode enabled           >>>> Flex Mode is ON now.

[root@rac1 ~]#  srvctl config asm
ASM home: 
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET1LSNR_ASM

And we have a new LISTENER named ‘ASMNET1LSNR_ASM’ created for ASM.

[root@rac2 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE


[root@rac1 ~]# ps -ef|grep tns
root        10     2  0 15:59 ?        00:00:00 [netns]
oracle   22167     1  0 18:46 ?        00:00:00 /u01/app/12.1.0/grid_1/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
oracle   22291     1  0 18:46 ?        00:00:00 /u01/app/12.1.0/grid_1/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle   22532     1  0 18:46 ?        00:00:00 /u01/app/12.1.0/grid_1/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
oracle   22535     1  0 18:46 ?        00:00:00 /u01/app/12.1.0/grid_1/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracle   22544     1  0 18:46 ?        00:00:00 /u01/app/12.1.0/grid_1/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
root     30044 19089  0 18:52 pts/1    00:00:00 grep tns

[root@rac1 ~]# ps -ef|grep pmon
oracle   21494     1  0 18:46 ?        00:00:00 asm_pmon_+ASM1
oracle   22439     1  0 18:46 ?        00:00:00 ora_pmon_dixitdb1
oracle   22839     1  0 18:46 ?        00:00:00 mdb_pmon_-MGMTDB
root     30089 19089  0 18:52 pts/1    00:00:00 grep pmon
[root@rac1 ~]#


[root@rac1 ~]# lsnrctl status ASMNET1LSNR_ASM

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 13-MAR-2018 18:54:09

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM)))
STATUS of the LISTENER
------------------------
Alias                     ASMNET1LSNR_ASM
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                13-MAR-2018 18:46:25
Uptime                    0 days 0 hr. 7 min. 43 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/rac1/asmnet1lsnr_asm/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=ASMNET1LSNR_ASM)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.1)(PORT=1526)))
Services Summary...
Service "+ASM" has 2 instance(s).
  Instance "+ASM1", status READY, has 2 handler(s) for this service...
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
The command completed successfully
[root@rac1 ~]#


[root@rac1 ~]# srvctl config listener -l ASMNET1LSNR_ASM
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.10.0
Home: 
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

[root@rac1 ~]# srvctl status listener -l ASMNET1LSNR_ASM
Listener ASMNET1LSNR_ASM is enabled
Listener ASMNET1LSNR_ASM is running on node(s): rac1,rac2

Let’s do some testing. I will try to stop one of the ASM instances (+ASM1) on node 1 and see whether the DB instance stays alive and keeps serving requests.

[root@rac1 ~]# srvctl status database -db dixitdb -f -v
Instance dixitdb1 is running on node rac1. Instance status: Open.
Instance dixitdb2 is running on node rac2. Instance status: Open.

[root@rac1 ~]# srvctl modify asm -count 1
PRCA-1123 : The specified ASM cardinality 1 is less than the minimum cardinality of 2.

Well, this is an expected error, because we are running a 2-node RAC and Flex ASM (like the SCAN listeners) needs at least 2 instances up and running, so the cardinality cannot be reduced here. Instead, I will kill the ASM instance manually (by killing its PMON process).
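For reference, on clusters with more nodes the ASM cardinality can be lowered or restored legitimately; a sketch (the node counts here are illustrative):

```shell
# On a larger cluster, run ASM instances on only 3 nodes (the minimum is 2):
srvctl modify asm -count 3

# Restore the default: an ASM instance on every node, as with standard ASM:
srvctl modify asm -count ALL
```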

[root@rac1 ~]# ps -ef|grep pmon
root      4167  4142  0 19:41 pts/1    00:00:00 grep pmon
oracle   21494     1  0 18:46 ?        00:00:00 asm_pmon_+ASM1
oracle   22439     1  0 18:46 ?        00:00:00 ora_pmon_dixitdb1
oracle   22839     1  0 18:46 ?        00:00:00 mdb_pmon_-MGMTDB

[root@rac1 ~]# kill -9 21494
[root@rac1 ~]#  ps -ef|grep pmon
root      4200  4142  0 19:42 pts/1    00:00:00 grep pmon
oracle   22439     1  0 18:46 ?        00:00:00 ora_pmon_dixitdb1
oracle   22839     1  0 18:46 ?        00:00:00 mdb_pmon_-MGMTDB

Next, let’s check the ASM client connections on the surviving instance (+ASM2).

SQL> select GROUP_NUMBER, DB_NAME, STATUS, INSTANCE_NAME from  v$asm_client;

GROUP_NUMBER DB_NAME  STATUS       INSTANCE_NAME
------------ -------- ------------ ----------------------------------------------------------------
           1 +ASM     CONNECTED    +ASM2
           2 +ASM     CONNECTED    +ASM2
           1 dixitdb  CONNECTED    dixitdb1
           2 dixitdb  CONNECTED    dixitdb1
           1 dixitdb  CONNECTED    dixitdb2
           2 dixitdb  CONNECTED    dixitdb2
           1 _mgmtdb  CONNECTED    -MGMTDB

7 rows selected.

And we can see instance 1 (dixitdb1) now connected to the +ASM2 instance, as +ASM1 is dead.
All its requests are being served via the ASM listener, which can be verified in the ASM listener logs.

13-MAR-2018 19:47:10 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=rac2.localdomain)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=ASMNET1LSNR_ASM)(VERSION=202375680)) * status * 0
13-MAR-2018 19:47:14 * (CONNECT_DATA=(SERVICE_NAME=+ASM)(INSTANCE_NAME=+ASM2)(CID=(PROGRAM=oracle)(HOST=rac1.localdomain)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.1)(PORT=39062)) * establish * +ASM * 0
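To bring the killed instance back (a sketch; in practice Clusterware will usually restart a killed ASM instance on its own), it can also be started explicitly:

```shell
# Start the ASM resource on node rac1 and confirm both instances are back:
srvctl start asm -node rac1
srvctl status asm
```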
 

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

 