Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Archive for April, 2022

Compare two execution plans in Oracle 19c using DBMS_XPLAN.COMPARE_PLANS

Posted by FatDBA on April 26, 2022

Hi All,

Today's post is about one of the nice features added to the DBMS_XPLAN package in Oracle 19c: a new function called COMPARE_PLANS, which compares each plan in a list against a reference plan and returns a report, making a DBA's life a little easier.

The following example illustrates the usage of the COMPARE_PLANS function in 19c.

-- Let's create a sample table
SQL> CREATE TABLE bigtab (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);  

Table created.

-- Insert some dummy rows into it
SQL> INSERT INTO bigtab (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/  

84217 rows created.

SQL> 
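
(Optional: the plans later in this post fall back to dynamic sampling because the freshly loaded table has no statistics; if you prefer real statistics, a standard stats-gathering call like the one below would do, though the comparison works either way.)

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIGTAB');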


-- Let's query the table with our conditions 
SQL> select count(*) from bigtab where ID between 280 and 500;

  COUNT(*)
----------
     18564

SQL>
SQL> select count(*) from bigtab where ID not between 280 and 500;

  COUNT(*)
----------
     65653


-- All set, let's get the SQL_IDs for both of the SQLs that we ran
SQL> set linesize 400 pagesize 400
SQL> select sql_id, sql_text from v$sqlarea where sql_text like '%from bigtab where ID%';


SQL_ID        SQL_TEXT
------------- --------------------------------------------------------------------------------
3k1b0y7scc8bd select count(*) from bigtab where ID not between :"SYS_B_0" and :"SYS_B_1"
51tfy4wqb5sg3 select count(*) from bigtab where ID between :"SYS_B_0" and :"SYS_B_1"
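
(If you first want to eyeball either plan on its own, the usual DBMS_XPLAN.DISPLAY_CURSOR call does that, e.g. for the second SQL ID:)

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('51tfy4wqb5sg3', NULL, 'TYPICAL'));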

Now that we have the SQL_IDs for both of the SQLs run in the last session, let's call DBMS_XPLAN.COMPARE_PLANS to compare the two plans and see what the compare report looks like and what information it presents. Several plan sources can be passed as the reference_plan and compare_plan_list arguments, but to keep it simple I will use only cursor_cache_object (to pick up a plan from the cursor cache for a given SQL ID) wrapped in plan_object_list, which is a collection TYPE of plan objects.

Other available plan sources are:
plan_table_object(owner, plan_table_name, statement_id, plan_id) : for plans stored in a plan table (EXPLAIN PLAN output)
cursor_cache_object(sql_id, child_number) : for the cursor cache
awr_object(sql_id, dbid, con_dbid, plan_hash_value) : for AWR, or if you want to compare two different plan hash values
sqlset_object(sqlset_owner, sqlset_name, sql_id, plan_hash_value) : for a SQL Tuning Set
spm_object(sql_handle, plan_name) : for SQL Plan Management baselines
sql_profile_object(profile_name) : for a SQL profile
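
For example, if the statement had also been captured in AWR, you could compare the AWR plan against the current cursor-cache plan. A hedged sketch, assuming placeholder values for the DBID and plan hash value (they are not from this system):

SQL> VARIABLE v_rep2 CLOB
SQL> BEGIN
  -- awr_object(sql_id, dbid, con_dbid, plan_hash_value); the DBID and PHV below are placeholders
  :v_rep2 := DBMS_XPLAN.COMPARE_PLANS(
    reference_plan    => awr_object('3k1b0y7scc8bd', 1234567890, NULL, 2140185107),
    compare_plan_list => plan_object_list(cursor_cache_object('3k1b0y7scc8bd', NULL)),
    type              => 'TEXT',
    level             => 'TYPICAL',
    section           => 'ALL');
END;
/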

Let's keep it simple, though, and use cursor_cache_object with plan_object_list to get a plain execution plan compare report.

SQL>
SQL> VARIABLE v_rep CLOB
SQL> 
SQL> BEGIN
:v_rep := DBMS_XPLAN.COMPARE_PLANS(
reference_plan => cursor_cache_object('3k1b0y7scc8bd', NULL),
compare_plan_list => plan_object_list(cursor_cache_object('51tfy4wqb5sg3', NULL)),
type => 'TEXT',
level => 'TYPICAL',
section => 'ALL');
END;
/  

PL/SQL procedure successfully completed.

SQL>



-- Now let's query the variable to see its contents, the compare report.
SQL> set long 100000
SQL> COLUMN report FORMAT a200
SQL>
SQL> SELECT :v_rep REPORT FROM DUAL;

REPORT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

COMPARE PLANS REPORT
---------------------------------------------------------------------------------------------
  Current user           : SYS
  Total number of plans  : 2
  Number of findings     : 2
---------------------------------------------------------------------------------------------

COMPARISON DETAILS
---------------------------------------------------------------------------------------------
 Plan Number            : 1 (Reference Plan)
 Plan Found             : Yes
 Plan Source            : Cursor Cache
 SQL ID                 : 3k1b0y7scc8bd
 Child Number           : 0
 Plan Database Version  : 19.0.0.0
 Parsing Schema         : "SYS"
 SQL Text               : select count(*) from bigtab where ID not between
                        :"SYS_B_0" and :"SYS_B_1"

Plan
-----------------------------

 Plan Hash Value  : 2140185107

--------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes  | Cost | Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |       |        |   82 |          |
|   1 |   SORT AGGREGATE     |        |     1 |     13 |      |          |
| * 2 |    TABLE ACCESS FULL | BIGTAB | 54028 | 702364 |   82 | 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - filter(("ID"<:SYS_B_0 OR "ID">:SYS_B_1))


Notes
-----
- Dynamic sampling used for this statement ( level = 2 )


---------------------------------------------------------------------------------------------
 Plan Number            : 2
 Plan Found             : Yes
 Plan Source            : Cursor Cache
 SQL ID                 : 51tfy4wqb5sg3
 Child Number           : 0
 Plan Database Version  : 19.0.0.0
 Parsing Schema         : "SYS"
 SQL Text               : select count(*) from bigtab where ID between
                        :"SYS_B_0" and :"SYS_B_1"

Plan
-----------------------------

 Plan Hash Value  : 3865534252

---------------------------------------------------------------------------
| Id  | Operation             | Name   | Rows  | Bytes  | Cost | Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |        |       |        |   79 |          |
|   1 |   SORT AGGREGATE      |        |     1 |     13 |      |          |
| * 2 |    FILTER             |        |       |        |      |          |
| * 3 |     TABLE ACCESS FULL | BIGTAB | 17969 | 233597 |   79 | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - filter(:SYS_B_1>=:SYS_B_0)
* 3 - filter(("ID">=:SYS_B_0 AND "ID"<=:SYS_B_1))


Notes
-----
- Dynamic sampling used for this statement ( level = 2 )


Comparison Results (2):
-----------------------------
 1. Query block SEL$1, Alias "BIGTAB"@"SEL$1": Some lines (id: 2) in the
    current plan are missing in the reference plan.
 2. Query block SEL$1, Alias "BIGTAB"@"SEL$1": Some columns (ID, PARENT_ID,
    DEPTH) do not match between the reference plan (id: 2) and the current plan
    (id: 3).


---------------------------------------------------------------------------------------------


SQL>
SQL>

This is a great feature for when execution plans are messy and behemoth-sized, a single glance is not enough, and you want to compare two such plans. The report ends with a 'Comparison Results' section; in our example it tells us that for query block SEL$1 (alias "BIGTAB"@"SEL$1"), line id 2 of the current plan (the FILTER operation in our case) is missing from the reference plan, along with similar other observations.

Hope It Helped!
Prashant Dixit


Posted in Uncategorized | 8 Comments »

How to prioritize an Oracle Database background process?

Posted by FatDBA on April 18, 2022

Recently, while looking into a system (running 19.3.0.0.0 standalone) where 'log file sync' waits were bugging the database, and after we had tried all other possible solutions, we thought to increase the priority of the LGWR background process to see if that helped.

Increasing the LGWR priority means putting the LGWR process into the round-robin (SCHED_RR) scheduling class. You can raise a process's priority using either OS methods (the nice and renice commands) or database methods; this post is about setting the priority through '_high_priority_processes', an undocumented/hidden parameter that prioritizes database-managed processes.
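
For completeness, a rough sketch of the OS-side alternative (not what we did here; the PID is illustrative and these commands generally need root):

[root@oracleontario ~]# ps -eo pid,class,pri,nice,args | grep ora_lgwr   # check current class/priority
[root@oracleontario ~]# renice -n -5 -p 23990                            # lower the nice value within the TS class
[root@oracleontario ~]# chrt -r -p 1 23990                               # or move the PID straight into SCHED_RR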

I am using Oracle 19.3 for the test, where LGWR does not come with any elevated priority by default; starting from 21.3.0.0.0 the LGWR process is part of the _high_priority_processes group along with the VKTM & LMS* processes.
Note: This being a hidden/undocumented parameter, I advise consulting Oracle Support before changing its value. Try other possible ways to reduce log file sync waits before jumping into this crude method of prioritizing LGWR over other processes.

[oracle@oracleontario ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 10 03:36:06 2022
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> @hidden

Enter value for param: high_prio
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%priority_processes%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_highest_priority_processes                   VKTM                      VKTM                      Highest Priority Process Name Mask
_high_priority_processes                      LMS*|VKTM                 LMS*|VKTM                 High Priority Process Name Mask 
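
(For reference, the @hidden script used above is just a query over the x$ksppi/x$ksppcv/x$ksppsv fixed tables; a minimal sketch of what such a script might contain, assuming a substitution variable named param and a SYSDBA connection:)

select a.ksppinm  "Parameter",
       b.ksppstvl "Session Value",
       c.ksppstvl "Instance Value",
       a.ksppdesc "descr"
  from x$ksppi a, x$ksppcv b, x$ksppsv c
 where a.indx = b.indx
   and a.indx = c.indx
   and a.ksppinm like '%&param%'
 order by a.ksppinm;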

By default in Oracle 19.3.0.0 the parameter is set to prioritize VKTM (Virtual Keeper of TiMe) and LMS (the Global Cache Service process, a RAC process). Let me check VKTM's current priority class: it is in the RR class (the SCHED_RR scheduling class), as defined via the _high_priority_processes parameter.

[oracle@oracleontario ~]$ ps -eo pid,class,pri,nice,time,args |grep vktm
 23871 RR   41   - 00:00:53 ora_vktm_dixitdb

The LGWR process, on the other hand, is in the TS (SCHED_OTHER) class and has no real-time priority attached to it.

[oracle@oracleontario ~]$ ps -eo pid,class,pri,nice,time,args |grep ora_lg*
 23990 TS   19   0 00:00:07 ora_lgwr_dixitdb

Let's change the priority and restart the database to persist the change!

SQL> alter system set "_high_priority_processes"='LMS*|VKTM|LGWR' scope=spfile;

System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 1593831936 bytes
Fixed Size                  8897024 bytes
Variable Size            1107296256 bytes
Database Buffers          469762048 bytes
Redo Buffers                7876608 bytes
Database mounted.
Database opened.

SQL> @hidden
Enter value for param: high_priority_processes
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%high_priority_processes%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_high_priority_processes                      LMS*|VKTM|LGWR            LMS*|VKTM|LGWR            High Priority Process Name Mask

At the same time I can see the same was logged into the Alert log file.

2022-04-10T03:54:31.488732-04:00
LGWR started with pid=8, OS id=26058 at elevated (RT) priority

So, we have elevated the priority of LGWR on the system. (Don't confuse this with renice: a higher nice value actually means a lower priority, i.e. the process demands fewer system resources and is therefore a "nicer" process; here we changed the scheduling class instead.) Now let's check the scheduling class of the process at the OS; it should have changed from TS to RR.

SQL> !ps -eo pid,class,pri,nice,time,args |grep ora_lm*
 26058 RR   41   - 00:00:00 ora_lgwr_dixitdb

Let me check at the OS Level what has changed now.

[oracle@oracleontario 26058]$ pwd
/proc/26058
[oracle@oracleontario 26058]$ more sched
ora_lgwr_dixitd (26058, #threads: 1)
-------------------------------------------------------------------
se.exec_start                                :      26820431.663015
se.vruntime                                  :            -2.963799
se.sum_exec_runtime                          :          1858.211503
se.nr_migrations                             :                    0
nr_switches                                  :                 4038
nr_voluntary_switches                        :                 4023
nr_involuntary_switches                      :                   15
se.load.weight                               :                 1024
policy                                       :                    2      -----> scheduling policy 2 = SCHED_RR (0-99 are the real-time priorities)
prio                                         :                   98
clock-delta                                  :                   59
mm->numa_scan_seq                            :                    0
numa_migrations, 0
numa_faults_memory, 0, 0, 1, 0, -1
numa_faults_memory, 1, 0, 0, 0, -1


-- output from top utility
top - 05:09:14 up  7:32,  3 users,  load average: 0.14, 0.10, 0.11
Tasks:   2 total,   0 running,   2 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
GiB Mem :      4.3 total,      0.0 free,      1.1 used,      3.1 buff/cache
GiB Swap:      3.9 total,      3.9 free,      0.0 used.      2.1 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 26027 oracle    -2   0 2016104  20096  17064 S  1.7  0.4   1:28.22 ora_vktm_dixitdb      ---> Look at the PR (priority) column with value -2 (higher pri)
 26058 oracle    -2   0 2017136  30360  26768 S  0.0  0.7   0:01.86 ora_lgwr_dixitdb      ---> Look at the PR (priority) column with value -2 (higher pri)

So, when nothing else was working for us, this workaround helped, and we were able to reduce LFS waits by more than 80% …

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , , | 4 Comments »

Adaptive Log File Sync is not always good for you …

Posted by FatDBA on April 11, 2022

Last week I was part of a short-term system/database stability assignment where a customer running a critical telco application on a 2-node RAC cluster (RHEL) on 11.2.0.3.1 reported slowness in a few critical data-processing modules, which in turn had slowed down their entire system. Today's post is about the buggy behavior of an adaptive feature that caused a huge mess in the system in the form of 'Log File Sync' waits.

After a few initial calls with the customer, we came to know that the database had a history of high 'log file sync' waits, but they would simply come and go, and it seemed the events had never been handled correctly or properly analyzed! Interestingly, restarting the cluster resolved the issue for a few hours, sometimes for a few days. The LFS event was quite prominent in this database, on average consuming > 38% of the total DB Time with very high average wait times (248 ms).

Below are a few of the database statistics captured from the problem time.

About the event: when a user session commits, all redo records generated by that session's transaction need to be flushed from memory to the redo logfile to ensure the changes made by that transaction become permanent. The time between the user session posting LGWR and LGWR posting the user session back once the write has completed is the 'log file sync' wait time that the session shows. In other words, this event is the time lost while users wait on LGWR after initiating a transaction commit or rollback.
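
A quick first check for this event is its cumulative average wait time since instance startup; a simple query like the following (standard gv$system_event columns) gives that:

SQL> select inst_id, event, total_waits,
            round(time_waited_micro/1000) time_waited_ms,
            round(time_waited_micro/1000/nullif(total_waits,0),2) avg_wait_ms
       from gv$system_event
      where event = 'log file sync'
      order by inst_id;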

Next, to get a better idea about the event, I ran lfsdiag.sql (an Oracle-provided script to collect diagnostic information on it – Doc ID 1064487.1). The script looks at the important parameters involved in log file sync waits, at wait histogram data, and at the worst average LFS times in the active session history and AWR data, and dumps information to help determine why those times were the highest.
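
(The per-minute numbers shown below can also be approximated straight from ASH; a hedged sketch of such a query, keeping in mind that ASH is sampled data, time_waited is in microseconds, and Diagnostics Pack licensing applies:)

SQL> select to_char(sample_time,'MonDD_HH24MI') minute, inst_id, event,
            sum(time_waited)/1000 total_wait_time_ms,
            count(*) waits,
            round(avg(time_waited)/1000,3) avg_time_waited_ms
       from gv$active_session_history
      where event = 'log file sync'
      group by to_char(sample_time,'MonDD_HH24MI'), inst_id, event
      order by 1, 2;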

Below are the ASH LFS waits caught during the worst minutes, and some really bad statistics were captured for the same window in which the customer had the worst application performance.

MINUTE          INST_ID EVENT                            TOTAL_WAIT_TIME      WAITS   AVG_TIME_WAITED
------------ ---------- ------------------------------ ----------------- ---------- -----------------
Apr06_1742            2 log file sync                        2819744.300       1973          1429.166
Apr06_1745            2 log file sync                        1219765.027       1200          1016.471
Apr06_1747            2 log file sync                        2045077.213       1745          1171.964
Apr06_1748            2 log file sync                        1417639.236       1170          1211.657
Apr06_1749            2 log file sync                        2202804.958       2190          1005.847
Apr06_1753            2 log file sync                        1967863.159       1871          1051.771
Apr06_1756            2 log file sync                        1096747.638        336          3264.130
Apr06_1843            2 log file sync                        1710602.016        531          3221.473
Apr06_1846            2 log file sync                         814607.205        247          3298.005

Another interesting section was the histogram data for LFS and other related events. Here we can see the LFS waits bucketed by wait_time_milli, especially the high wait times, and correlate them with other wait events. From the stats below it is evident that node 2 of this RAC cluster was severely impacted by LFS waits, with very high wait counts and wait times (ms), and a lot of 'gcs log flush sync' alongside the LFS events that push the LGWR process to write data to disk.

The stats are very bad, with the highest wait-time bucket at 1048576 ms on node 2 and an average of 99864 ms (1.6 minutes); both 'log file parallel write' and 'gcs log flush sync' were quite high too.

   INST_ID EVENT                                    WAIT_TIME_MILLI WAIT_COUNT
---------- ---------------------------------------- --------------- ----------
         1 log file sync                                          1   76126281
         1 log file sync                                          2   31805429
         1 log file sync                                          4   18893320
         1 log file sync                                          8   29604071
         1 log file sync                                         16   21903517
         1 log file sync                                         32    7252292
         1 log file sync                                         64    3692615
         1 log file sync                                        128    1615552
         1 log file sync                                        256     262632
         1 log file sync                                        512      25341
         1 log file sync                                       1024       5431
         1 log file sync                                       2048        901
         1 log file sync                                       4096        188
         1 log file sync                                       8192        102
         1 log file sync                                      16384         33
         2 log file sync                                          1   56003954
         2 log file sync                                          2   19903760
         2 log file sync                                          4   12749437
         2 log file sync                                          8   17572031
         2 log file sync                                         16   12266342
         2 log file sync                                         32    3209853
         2 log file sync                                         64    1124852
         2 log file sync                                        128    2912977
         2 log file sync                                        256    5516739
         2 log file sync                                        512    3226632
         2 log file sync                                       1024     783055
         2 log file sync                                       2048     119096
         2 log file sync                                       4096      24974
         2 log file sync                                       8192       8841
         2 log file sync                                      16384       3782
         2 log file sync                                      32768       1640
         2 log file sync                                      65536        578
         2 log file sync                                     131072        259
         2 log file sync                                     262144         78
         2 log file sync                                     524288         22
         2 log file sync                                    1048576          4


         1 log file parallel write                                1  198650048
         1 log file parallel write                                2   16777612
         1 log file parallel write                                4    9934905
         1 log file parallel write                                8    4511957
         1 log file parallel write                               16     808734
         1 log file parallel write                               32     107699
         1 log file parallel write                               64      17255
         1 log file parallel write                              128       1837
         1 log file parallel write                              256        139
         1 log file parallel write                              512         24
         1 log file parallel write                             1024          2
         2 log file parallel write                                1  150805280
         2 log file parallel write                                2    9299589
         2 log file parallel write                                4    4816280
         2 log file parallel write                                8    3145522
         2 log file parallel write                               16     793765
         2 log file parallel write                               32     116312
         2 log file parallel write                               64      12469
         2 log file parallel write                              128       1298
         2 log file parallel write                              256        118
         2 log file parallel write                              512         14
         2 log file parallel write                             1024          1


         1 gcs log flush sync                                     1   85952301
         1 gcs log flush sync                                     2    8052174
         1 gcs log flush sync                                     4    4224838
         1 gcs log flush sync                                     8    4017059
         1 gcs log flush sync                                    16    1226469
         1 gcs log flush sync                                    32      88359
         1 gcs log flush sync                                    64       4263
         1 gcs log flush sync                                   128         90
         1 gcs log flush sync                                   256          1
         2 gcs log flush sync                                     1   60282021
         2 gcs log flush sync                                     2    5102517
         2 gcs log flush sync                                     4    1951187
         2 gcs log flush sync                                     8    2470197
         2 gcs log flush sync                                    16    1088285
         2 gcs log flush sync                                    32     214273
         2 gcs log flush sync                                    64      26585
         2 gcs log flush sync                                   128        147
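
(For reference, the same histogram can be pulled directly from gv$event_histogram, which is essentially what lfsdiag.sql reports:)

SQL> select inst_id, event, wait_time_milli, wait_count
       from gv$event_histogram
      where event in ('log file sync','log file parallel write','gcs log flush sync')
      order by inst_id, event, wait_time_milli;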

As a possible solution we tried a few quick ones, e.g. batching redo for the log writer (commit_logging = batch; I know it has its own risks) to reduce LFS, but that didn't work either.
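
For reference, that attempt is a one-liner; batch commits trade commit durability for throughput, which is the risk mentioned above:

SQL> alter system set commit_logging = batch sid='*';
SQL> alter system set commit_logging = immediate sid='*';  -- revert to the default behaviour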

Next, I generated a system state dump to understand the system and the situation better, and the LFS events were caught in the system state dump as well, with wait chains pointing to 'rdbms ipc message'<='log file sync'.

Process traces are always a wealth of diagnostic information, so I checked the LGWR process traces, and that's where I saw some strangeness: frequent entries about switching between post/wait and polling, the adaptive mechanism that controls switching between post/wait (the older way) and polling (the newer method) for log file syncs. This gave me a little hint about the possible reason for so many LFS waits.

Talking about adaptive log file sync, there are 2 methods by which LGWR and foreground processes can communicate in order to acknowledge that a commit has completed:
Post/wait: the traditional method available in previous Oracle releases; LGWR explicitly posts all processes waiting for the commit to complete. The advantage of the post/wait method is that sessions find out almost immediately when the redo has been flushed to disk.
Polling: foreground processes sleep and poll to see if the commit is complete; this was introduced to free up the high CPU usage of the LGWR.

This behavior is controlled by the parameter _use_adaptive_log_file_sync, introduced in 11gR2, which controls whether adaptive switching between post/wait and polling is enabled. In 11.2.0.1 and 11.2.0.2 the default value for the parameter was false; from 11.2.0.3 onward, the default value is true.
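
(The switches are logged in the LGWR trace file, so a quick way to spot them is to grep for the switch messages; the diag path components below are illustrative:)

$ grep -i "log file sync switching" $ORACLE_BASE/diag/rdbms/<dbname>/<instance>/trace/<instance>_lgwr_*.trc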

-- LGWR traces 
WARNING:io_submit failed due to kernel limitations MAXAIO for process=128 pending aio=128
WARNING:asynch I/O kernel limits is set at AIO-MAX-NR=1048576 AIO-NR=169402

*** 2022-04-07 06:03:31.916
Warning: log write broadcast wait time 2612477ms (SCN 0xad1.f8c170fe)

*** 2022-04-07 06:03:31.916
Warning: log write broadcast wait time 2598008ms (SCN 0xad1.f8c21251)
kcrfw_update_adaptive_sync_mode: post->poll long#=33 sync#=202 sync=5963 poll=8730 rw=383 rw+=383 ack=3982 min_sleep=1135

*** 2022-04-07 07:46:20.018
Log file sync switching to polling --------------------->>>>>>> It shows current method is polling 
Current scheduling delay is 1 usec 
Current approximate redo synch write rate is 67 per sec

*** 2022-04-07 07:47:13.877
kcrfw_update_adaptive_sync_mode: poll->post current_sched_delay=0 switch_sched_delay=1 current_sync_count_delta=63 switch_sync_count_delta=202

*** 2022-04-07 07:47:13.877
Log file sync switching to post/wait ------------------>>>>>>>> It shows current method is post/wait
Current approximate redo synch write rate is 21 per sec


-- Below stats show that POLLING is happening on the database for LGWR writes 
SQL> select name,value from v$sysstat where name in ('redo synch poll writes','redo synch polls');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo synch poll writes                                             10500129
redo synch polls                                                   10773618

As there were no other symptoms of issues with I/O or in other areas, the problem could well be the excessive switching between the post/wait and polling wait methods.

After consulting with Oracle Support, we found the issue was caused by bug 25178179, which becomes severe only while log file sync is using "polling mode". To prevent the problem from happening, they suggested turning off _use_adaptive_log_file_sync by setting it to FALSE, either in the spfile followed by a database restart, or dynamically in memory.
This forces log file sync waits to use the traditional "post/wait mode" rather than automatically switching between "post/wait mode" and "polling mode" based on performance statistics. Changing the said parameter to FALSE disabled adaptive LFS in the database, which resolved the issue and restored system performance.

-- Set the parameter _use_adaptive_log_file_sync = false and restart the database:
SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync" = FALSE;

-- If a restart is not feasible, set it in memory and also in the spfile for when a restart does occur:
SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync"=FALSE; -- sets memory and spfile, as the default scope is BOTH
SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync"=FALSE scope=spfile sid='*'; -- spfile only, so the parameter is used at the next restart

In the vast majority of cases adaptive log file sync improves the overall performance of log file synchronization, but there are a few bugs associated with this feature, e.g. 13707904, 13074706 and 25178179.

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , , | 1 Comment »

Getting ‘error while loading shared libraries’ while calling oratop on version 19c …

Posted by FatDBA on April 4, 2022

Recently someone asked me how to run the oratop utility on an Oracle database version above 12.1. He was using Oracle 19.3.0.0.0 and was getting an 'error while loading shared libraries' error when trying to call oratop. It was quite an obvious failure: the binary tries to locate a shared object called libclntshcore.so.12.1 under the $ORACLE_HOME/lib directory, whereas the home was version 19, so the shared object present there was libclntshcore.so.19.1.

[oracle@oracleontario ~]$ ./oratop.RDBMS_12.1_LINUX_X64 -d -f -i 3 / as sysdba
./oratop.RDBMS_12.1_LINUX_X64: error while loading shared libraries: libclntshcore.so.12.1: cannot open shared object file: No such file or directory
[oracle@oracleontario ~]$
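
(A quick way to confirm which libraries such a binary expects is ldd; the session below is illustrative of what the output would look like:)

[oracle@oracleontario ~]$ ldd ./oratop.RDBMS_12.1_LINUX_X64 | grep "not found"
        libclntshcore.so.12.1 => not found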

The solution: starting from Oracle 19c, oratop ships as an Oracle-supplied tool under the $ORACLE_HOME/suptools directory and is no longer version specific. Other tools that you will see along with oratop in the suptools directory are orachk and tfa. Let's call it and see if it runs!

[oracle@oracleontario ~]$ cd $ORACLE_HOME/suptools/
[oracle@oracleontario suptools]$ ls
orachk  oratop  tfa
[oracle@oracleontario suptools]$ cd oratop/
[oracle@oracleontario oratop]$ ls
oratop
[oracle@oracleontario oratop]$
[oracle@oracleontario oratop]$

[oracle@oracleontario oratop]$ ./oratop  -f -i 3 / as sysdba

oratop: Release 15.0.0 Production on Thu Mar 10 07:33:49 2022
Copyright (c) 2011, Oracle.  All rights reserved.

Connecting ..
Processing ...

Oracle 19c - Primary dixitd 07:33:45 up: 5.2h,  1 ins,    0 sn,   0 us, 1.5G sga,    0% fra,   1 er,                        0.5%db
ID CPU  %CPU %DCP LOAD  AAS  ASC  ASI  ASW  IDL  ASP  LAT  MBPS IOPS  R/S  W/S  LIO  GCPS %FR  PGA TEMP UTPS  UCPS  RT/X DCTR DWTR
 1   1   9.1  0.2  0.2  0.0    0    0    0    0    0    0   0.1    3    3    0    3     0  45 153M    0    0     0  110m  118    8

EVENT (C)                                                         TOTAL WAITS   TIME(s)  AVG_MS  PCT                    WAIT_CLASS
db file sequential read                                                  5175        65    12.6   32                      User I/O
DB CPU                                                                               42           21
oracle thread bootstrap                                                    81        40   503.8   20                         Other
db file scattered read                                                    593        29    50.4   15                      User I/O
external table read                                                         1        23 23470.8   12                      User I/O

ID   SID     SPID USERNAME  PROGRAM    SRV  SERVICE  PGA  SQLID/BLOCKER OPN  E/T  STA  STE  WAIT_CLASS  EVENT/*LATCH           W/T
 1    58    14884 SYS       sqlplus@o  DED  SYS$USE 1.4M           1:77 UPD 8.0s  ACT  WAI  Applicatio  enq: TX - row lock co 7.8s


[oracle@oracleontario oratop]$
[oracle@oracleontario oratop]$

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | 2 Comments »

 