Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Posts Tagged ‘Tuning’

Part 2 : Creating custom SQL Profile for a slow SQL in a Data Guard physical standby database

Posted by FatDBA on January 19, 2022

Someone recently asked me whether it's possible to fix a plan for a bad/slow SELECT SQL in a Data Guard environment. He was using ‘coe_xfr_sql_profile.sql‘, the script you can use to create custom SQL Profiles that correct optimizer cost estimates and encourage good execution plans for a SQL.
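For reference, this is roughly how the COE script is invoked (a sketch based on the usage documented in MOS note 215187.1, using the SQL_ID and plan hash value from the example later in this post):

-- Run where the plan is available (cursor cache or AWR), passing the SQL_ID and the PHV you want to pin
SQL> @coe_xfr_sql_profile.sql 0y521mggg73pk 3847062775
-- ... this spools a driver script named coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql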

Well, this post is not about using the script to fix SQL plans in general, but about how to fix a plan for a SQL that is slow on a standby. If you try to execute the script that SQLT generates directly on the standby, you will get ‘ORA-00600: internal error’, because a Data Guard physical standby is always opened in read-only mode and you cannot write anything to the database. So the question is – how do you fix the plan for a standby database?

In this example I’ve got an ADG setup named ‘TESTSTANDBY’ and a SELECT statement with SQL ID 0y521mggg73pk that is slow. It has 4 different plan hash values (PHVs), and a plan flip keeps happening; whenever the wrong plan is picked, the SQL goes slow. Here 1469174702 is the WORST PHV, while 3847062775 and 2797333706 are both GOOD, and we can pin either of the two.

SQL> @1

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
TESTSTANDBY                    READ ONLY WITH APPLY PHYSICAL STANDBY



SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
0y521mggg73pk         32         .03        9.34      143.6554


SQL> @2
Enter value for sql_id: 0y521mggg73pk

SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME AVG_CPU_TIME        AVG_LIO      AVG_PIO
------------- --------------- ------------ ------------ ------------ -------------- ------------
0y521mggg73pk      3847062775           10         .032         .021        1,448.0           .0
0y521mggg73pk      2797333706           10         .032         .021        1,448.0           .0
0y521mggg73pk      1687052623           10         .034         .025        1,448.0           .0
0y521mggg73pk      1469174702            2        9.340        1.021      541,054.0      4,155.5
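The helper scripts @1 and @2 are not shown here, but a rough equivalent of that per-plan summary (just a sketch, assuming the statement is captured in AWR and you are licensed for the Diagnostics Pack) could look like this:

-- Per plan hash value averages from AWR history
select sql_id,
       plan_hash_value,
       sum(executions_delta)                                                execs,
       round(sum(elapsed_time_delta)/1e6/nullif(sum(executions_delta),0),3) avg_etime,
       round(sum(buffer_gets_delta)/nullif(sum(executions_delta),0),1)      avg_lio
  from dba_hist_sqlstat
 where sql_id = '0y521mggg73pk'
 group by sql_id, plan_hash_value
 order by avg_etime;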



-- This is what you will receive if you try to run it on the STANDBY database.
SQL> @coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql

DECLARE
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [2652], [108], [108], [0], [0], [3891200], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_SQLTUNE_INTERNAL", line 18876
ORA-06512: at "SYS.DBMS_SQLTUNE", line 9368
ORA-06512: at "SYS.DBMS_SQLTUNE", line 9328
ORA-06512: at line 37

So, what you have to do here is move the file ‘coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql‘ produced by the COE script (scp, ftp, cp, whatever works) to the PRIMARY database and run it from there, not on the standby.

-- Run the script on PRIMARY database
SQL>
SQL> @coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql
SQL> REM
SQL> REM $Header: 215187.1 coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql 11.4.4.4 2022/01/13 carlos.sierra $
SQL> REM
SQL> REM Copyright (c) 2000-2012, Oracle Corporation. All rights reserved.
SQL> REM
SQL> REM AUTHOR
SQL> REM   carlos.sierra@oracle.com
SQL> REM
SQL> REM SCRIPT
SQL> REM   coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql
SQL> REM
SQL> REM DESCRIPTION
SQL> REM   This script is generated by coe_xfr_sql_profile.sql
SQL> REM   It contains the SQL*Plus commands to create a custom
SQL> REM   SQL Profile for SQL_ID 0y521mggg73pk based on plan hash
SQL> REM   value 3847062775.
SQL> REM   The custom SQL Profile to be created by this script
SQL> REM   will affect plans for SQL commands with signature
SQL> REM   matching the one for SQL Text below.
SQL> REM   Review SQL Text and adjust accordingly.
SQL> REM
SQL> REM PARAMETERS
SQL> REM   None.
SQL> REM
SQL> REM EXAMPLE
SQL> REM   SQL> START coe_xfr_sql_profile_0y521mggg73pk_3847062775.sql;
SQL> REM
SQL> REM NOTES
SQL> REM   1. Should be run as SYSTEM or SYSDBA.
SQL> REM   2. User must have CREATE ANY SQL PROFILE privilege.
SQL> REM   3. SOURCE and TARGET systems can be the same or similar.
SQL> REM   4. To drop this custom SQL Profile after it has been created:
SQL> REM         EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_0y521mggg73pk_3847062775');
SQL> REM   5. Be aware that using DBMS_SQLTUNE requires a license
SQL> REM         for the Oracle Tuning Pack.
SQL> REM   6. If you modified a SQL putting Hints in order to produce a desired
SQL> REM         Plan, you can remove the artifical Hints from SQL Text pieces below.
SQL> REM         By doing so you can create a custom SQL Profile for the original
SQL> REM         SQL but with the Plan captured from the modified SQL (with Hints).
SQL> REM
SQL> WHENEVER SQLERROR EXIT SQL.SQLCODE;
SQL> REM
SQL> VAR signature NUMBER;
SQL> VAR signaturef NUMBER;
SQL> REM
SQL> DECLARE
  2  sql_txt CLOB;
  3  h       SYS.SQLPROF_ATTR;
  4  PROCEDURE wa (p_line IN VARCHAR2) IS
  5  BEGIN
  6  DBMS_LOB.WRITEAPPEND(sql_txt, LENGTH(p_line), p_line);
  7  END wa;
  8  BEGIN
  9  DBMS_LOB.CREATETEMPORARY(sql_txt, TRUE);
 10  DBMS_LOB.OPEN(sql_txt, DBMS_LOB.LOB_READWRITE);
 11  -- SQL Text pieces below do not have to be of same length.
 12  -- So if you edit SQL Text (i.e. removing temporary Hints),
 13  -- there is no need to edit or re-align unmodified pieces.
 14  wa(q'[/* SQL Analyze(63,1) */ DELETE FROM PRTINSTANCEITER PII WHERE EX]');
 15  wa(q'[ISTS ( SELECT 1 FROM PRTPROCESS PP WHERE PP.INSTANCEID = PII.INS]');
 16  wa(q'[TANCEID||':'||PII.LASTITERATION AND CURRENTTIME < :B1 AND (STATU]');
 17  wa(q'[S = 4 OR STATUS = 5)) AND ROWNUM <=10000]');
 18  DBMS_LOB.CLOSE(sql_txt);
 19  h := SYS.SQLPROF_ATTR(
 20  q'[BEGIN_OUTLINE_DATA]',
 21  q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
 22  q'[OPTIMIZER_FEATURES_ENABLE('12.2.0.1')]',
 23  q'[DB_VERSION('12.2.0.1')]',
 24  q'[ALL_ROWS]',
 25  q'[OUTLINE_LEAF(@"SEL$3BA1AD7C")]',
 26  q'[UNNEST(@"SEL$1")]',
 27  q'[OUTLINE(@"DEL$1")]',
 28  q'[OUTLINE(@"SEL$1")]',
 29  q'[INDEX_RS_ASC(@"SEL$3BA1AD7C" "PP"@"SEL$1" ("PRTPROCESS"."CURRENTTIME" "PRTPROCESS"."STATUS"))]',
 30  q'[BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$3BA1AD7C" "PP"@"SEL$1")]',
 31  q'[FULL(@"SEL$3BA1AD7C" "PII"@"DEL$1")]',
 32  q'[LEADING(@"SEL$3BA1AD7C" "PP"@"SEL$1" "PII"@"DEL$1")]',
 33  q'[USE_HASH(@"SEL$3BA1AD7C" "PII"@"DEL$1")]',
 34  q'[END_OUTLINE_DATA]');
 35  :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
 36  :signaturef := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt, TRUE);
 37  DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
 38  sql_text    => sql_txt,
 39  profile     => h,
 40  name        => 'coe_0y521mggg73pk_3847062775',
 41  description => 'coe 0y521mggg73pk 3847062775 '||:signature||' '||:signaturef||'',
 42  category    => 'DEFAULT',
 43  validate    => TRUE,
 44  replace     => TRUE,
 45  force_match => FALSE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
 46  DBMS_LOB.FREETEMPORARY(sql_txt);
 47  END;
 48  /

PL/SQL procedure successfully completed.

SQL> WHENEVER SQLERROR CONTINUE
SQL> SET ECHO OFF;

            SIGNATURE
---------------------
  3905242471056150906


           SIGNATUREF
---------------------
  3905242471056150906


... manual custom SQL Profile has been created

not spooling currently

COE_XFR_SQL_PROFILE_0y521mggg73pk_3847062775 completed
SQL>
SQL>

Perfect, it ran successfully on the PRIMARY database, and the profile will be propagated to the STANDBY database on its own. Let’s query dba_sql_profiles on the standby to check whether it has arrived …

-- Lets check it on Standby
SQL> select NAME, SQL_TEXT, CREATED, STATUS from dba_sql_profiles order by created;

NAME                           SQL_TEXT                                                                         CREATED                        STATUS
------------------------------ -------------------------------------------------------------------------------- ------------------------------ --------
coe_0y521mggg73pk_3847062775   DELETE FROM BIGTABDIXITTAB PII WHERE EXISTS ( SELECT 1                          13-JAN-22 06.28.03.000000 AM   ENABLED

So, the issue is fixed on the standby, and the SQL immediately picked the plan we asked it to follow.
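A quick way to double-check is to look at the cursor itself; once the statement has been re-parsed on the standby, the SQL_PROFILE column of V$SQL should show the new profile name (a simple sanity check, nothing more):

-- Confirm the cursor is now built under the custom SQL Profile
select sql_id, plan_hash_value, sql_profile
  from v$sql
 where sql_id = '0y521mggg73pk';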

Hope It Helped!
Prashant Dixit


Part 1 : Running SQL Tuning Advisor for a slow SQL in a Read Only Standby Database

Posted by FatDBA on January 15, 2022

Here goes my maiden blog post from the series “Troubleshooting SQLs & other Issues on Read Only Standby/Dataguard Environments” …

Recently I helped one of my customers stabilize the performance of a critical DWH/BI decision-support workload running on an ADG (physical standby). At one point I had to generate SQL Tuning Advisor reports for a few slow SQLs, but on a read-only Data Guard database the attempt always ends with "ORA-13792: This operation requires a database link." when you call the SQL Tuning Advisor directly on the standby.

So, this post is about how to run the SQL Tuning Advisor in a Data Guard environment. Let’s assume we want SQL Tuning Advisor recommendations for the costly SQL below, which is doing a FULL TABLE SCAN.

SQL> explain plan for select count(*) from bigtab where weight in ( select distinct( weight) from bigtab);

Explained.

SQL> select * from  table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  0mauvkjsvmcmj, child number 1


Plan hash value: 2140185107

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 |    13 |    74   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |        |     1 |    13 |            |          |
|*  2 |   TABLE ACCESS FULL| BIGTAB | 80223 |  1018K|    74   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("WEIGHT" IS NOT NULL)

This is what I received on the standby database when I tried to run the SQL Tuning Advisor directly on it; it says that a database link is required in order to run the advisory.

-- On standby database:
SQL>
SQL> @?/rdbms/admin/sqltrpt.sql


Specify the Sql id
~~~~~~~~~~~~~~~~~~
Enter value for sqlid: 0y521mggg73pk

Sql Id specified: 0y521mggg73pk

Tune the sql
~~~~~~~~~~~~
DECLARE
*
ERROR at line 1:
ORA-13792: This operation requires a database link.
ORA-06512: at line 36
ORA-06512: at "SYS.DBMS_SQLTUNE_UTIL1", line 1571
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_SQLTUNE_UTIL1", line 1563
ORA-06512: at "SYS.DBMS_SQLTUNE", line 798
ORA-06512: at line 20


ERROR:
ORA-13608: The specified name NULL is invalid.
ORA-06512: at "SYS.DBMS_SQLTUNE_UTIL1", line 1571
ORA-06512: at "SYS.PRVT_ADVISOR", line 7080
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.PRVT_ADVISOR", line 85
ORA-06512: at "SYS.PRVT_ADVISOR", line 5938
ORA-06512: at "SYS.PRVT_ADVISOR", line 7011
ORA-06512: at "SYS.DBMS_SQLTUNE_UTIL1", line 1535
ORA-06512: at "SYS.DBMS_SQLTUNE", line 1324
ORA-06512: at line 1

In remote tuning, the database on which you initiate a tuning task differs from the database in which the tuning process executes or in which results are stored. For example, a standby database can have its own workload of queries, some of which may require tuning. You can issue SQL Tuning Advisor statements on a standby database. A standby-to-primary database link enables DBMS_SQLTUNE to write data to and read data from the primary database. The link is necessary because the standby database, which is read-only, cannot write the SQL tuning data.

Okay, for that you first need to create a DB link between the primary and the standby database; the SQL Tuning Advisor will use it to perform the write operations on the primary database. We will then check from the standby database whether the link is working …

-- On Primary database
SQL> select username,common,account_status from dba_users where username ='SYS$UMF';

USERNAME        COM ACCOUNT_STATUS
--------------- --- --------------------------------
SYS$UMF         YES OPEN

-- On Primary database:
SQL> create database link lnk_to_pri connect to "SYS$UMF" identified by "oracle90" using 'DXTPRI';

Database link created.


-- On Standby database:
SQL> select  db_unique_name from v$database@lnk_to_pri;

DB_UNIQUE_NAME
------------------------------
DXTPRI

1 row selected.

SQL>


To tune a standby workload on a primary database, specify the database_link_to parameter in DBMS_SQLTUNE procedures. By default, the database_link_to parameter is null, which means that tuning is local. The database_link_to parameter must specify a private database link. This link must be owned by SYS and accessed by the default privileged user SYS$UMF. You issue all statements on the standby database. DBMS_SQLTUNE uses the database link both to fetch data from the primary database, and store data in the primary database.
Let’s create the TUNING TASK for the SQL Text.

-- run it on the STANDBY Database.
SQL>
SQL> 
DECLARE
my_task_name VARCHAR2(30);
my_sqltext CLOB;
BEGIN
my_sqltext := 'select count(*) from bigtab where weight in ( select distinct( weight) from bigtab)';
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text => my_sqltext,
task_name => 'TEST_sql_tuning_task1',
database_link_to => 'lnk_to_pri');
END;
/  
PL/SQL procedure successfully completed.

SQL>

-- Lets check if the tuning task is created 
SQL> SELECT task_name, STATUS, EXECUTION_start, EXECUTION_end FROM dba_advisor_log;

TASK_NAME
--------------------------------------------------------------------------------
STATUS      EXECUTION EXECUTION
----------- --------- ---------
TEST_sql_tuning_task1
EXECUTING   13-JAN-22

Next, you need to execute the tuning task using the task_name you assigned in the previous step when creating it.

-- On Standby database
SQL> EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '&&your_sta_taks_name', database_link_to => 'lnk_to_pri');
Enter value for your_sta_taks_name: TEST_sql_tuning_task1

PL/SQL procedure successfully completed.

SQL>
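Optionally, you can re-run the same DBA_ADVISOR_LOG check used above to confirm that the task has moved from EXECUTING to COMPLETED before pulling the report:

SQL> SELECT task_name, status, execution_start, execution_end FROM dba_advisor_log WHERE task_name = 'TEST_sql_tuning_task1';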

Next, let’s do the final step: generate the tuning task report with all the recommendations for the SQL that is slow on the standby database.

-- Run it on the Standby database.
SQL>
SQL> SET LINES 150
SQL> SET pages 50000
SQL> SET long 5000000
SQL> SET longc 5000000
SQL> select dbms_sqltune.report_tuning_task('&&your_sta_taks_name', database_link_to => 'lnk_to_pri') from dual;
old   1: select dbms_sqltune.report_tuning_task('&&your_sta_taks_name', database_link_to => 'lnk_to_pri') from dual
new   1: select dbms_sqltune.report_tuning_task('TEST_sql_tuning_task1', database_link_to => 'lnk_to_pri') from dual

DBMS_SQLTUNE.REPORT_TUNING_TASK('TEST_SQL_TUNING_TASK1',DATABASE_LINK_TO=>'LNK_TO_PRI')
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
----------------------------------------------------


RECOMMENDATIONS
------------------------------------------------------------------------------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : sql_tuning_task_5mxdwvuf9j3vp
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 60
Completion Status  : COMPLETED
Started at         : 01/13/2021 22:05:52
Completed at       : 01/13/2021 22:06:25

-------------------------------------------------------------------------------
Schema Name: SYS
SQL ID     : pdxdwvuf9j3co
SQL Text   : select count(*) from bigtab where weight in ( select distinct( weight) from bigtab)

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 99.94%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name =>
            'sql_tuning_task_pdxdwvuf9j3co', task_owner => 'SYS', replace =>
            TRUE);

  Validation results
  ------------------

So, that’s it, you finally have the tuning recommendations for the slow SQL on the STANDBY database.
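If you decide to accept the recommended SQL profile, the call printed in the report can be run as below. This is just a sketch: since the standby itself is read-only, my assumption is that you would run the acceptance where the tuning data and the profile can actually be written (the primary, reached through the same database link setup), after which the profile shows up on the standby just like the custom SQL Profile in the post above.

-- Accept the profile recommended by the advisor (task name as printed in the report)
BEGIN
  dbms_sqltune.accept_sql_profile(
    task_name  => 'sql_tuning_task_pdxdwvuf9j3co',
    task_owner => 'SYS',
    replace    => TRUE);
END;
/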

Hope It Helped!
Prashant Dixit


What are those strange columns in my execution plan OMem 1Mem O/1/M and Used-Mem ?

Posted by FatDBA on January 12, 2022

A few readers asked me after my last post – what are those strange columns named ‘OMem’, ‘1Mem’ and ‘O/1/M’ or ‘Used-Mem’ in the execution plan? These are what you will see in the plan if you call DBMS_XPLAN with the ‘+memstats’ or ‘+allstats’ format options.

----------------------------           -----------------------------
|  OMem |  1Mem |  O/1/M   |   OR      |  OMem |  1Mem | Used-Mem   |
----------------------------           -----------------------------
--
--
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('0m329sngnhv1p', 0, 'ALLSTATS'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
SQL_ID  0m329sngnhv1p, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM (xxxxxxxxxxxxxx

Plan hash value: xxxxxx

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | O/1/M|
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |xxxxxxxx  |      1 |        |     15 |00:00:00.62 |     353K|       |       |      |
|*  1 |  FILTER                                    |xxxxxxxx  |      1 |        |     15 |00:00:00.62 |     353K|       |       |      |
|*  2 |   VIEW                                     |xxxxxxxx  |      1 |   5530 |     15 |00:00:00.62 |     353K|       |       |      |
|   3 |    SORT ORDER BY                           |xxxxxxxx  |      1 |   5530 |   7873 |00:00:00.62 |     353K|   549K|   457K| 1/0/0|
|   4 |     COUNT                                  |xxxxxxxx  |      1 |        |   7873 |00:00:00.46 |     353K|       |       |      |
|   5 |      NESTED LOOPS                          |xxxxxxxx  |      1 |   5530 |   7873 |00:00:00.46 |     353K|       |       |      |
|   6 |       NESTED LOOPS                         |xxxxxxxx  |      1 |   5530 |    266K|00:00:00.44 |     240K|       |       |      |
|   7 |        NESTED LOOPS                        |xxxxxxxx  |      1 |   5530 |    266K|00:00:00.19 |   46039 |       |       |      |
|*  8 |         TABLE ACCESS FULL                  |xxxxxxxx  |      1 |   4392 |     90 |00:00:00.01 |     878 |       |       |      |
|   9 |         TABLE ACCESS BY INDEX ROWID BATCHED|xxxxxxxx  |     90 |      1 |    266K|00:00:00.17 |   45161 |       |       |      |
|* 10 |          INDEX RANGE SCAN                  |xxxxxxxx  |     90 |      1 |    266K|00:00:00.04 |    3871 |       |       |      |
|* 11 |        INDEX UNIQUE SCAN                   |xxxxxxxx  |    266K|      1 |    266K|00:00:00.20 |     194K|       |       |      |
|* 12 |       TABLE ACCESS BY INDEX ROWID          |xxxxxxxx  |    266K|      1 |   7873 |00:00:00.20 |     113K|       |       |      |
|  13 |        TABLE ACCESS BY INDEX ROWID BATCHED |xxxxxxxx  |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |      |
|* 14 |         INDEX RANGE SCAN                   |xxxxxxxx  |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |      |
----------------------------------------------------------------------------------------------------------------------------------------

First, the column ‘Starts’ is the number of times that operation was actually executed, and ‘Buffers’ is the number of buffer gets (logical I/O) performed. Now comes the column ‘OMem’: it is the estimated amount of memory needed to perform the operation entirely in memory, which is called the optimal execution.
The next column, ‘1Mem’, is the estimated memory needed to perform the operation in a single pass (reading/writing to disk (temp) only once), called a one-pass execution. A multi-pass execution is when the same data is written to and read from disk more than once; think of a sort, where the database has to sort a large amount of data in a small PGA or sort area.

The last column, titled ‘O/1/M’ or sometimes ‘Used-Mem‘, tells you how the operation actually ran. ‘Used-Mem’ is the actual amount of memory used by the operation, with a number in brackets: 0 means an optimal execution (memory only, no temp space), 1 means a one-pass execution, and a number greater than 1 means a multi-pass execution with that many passes. ‘O/1/M’, on the other hand, shows the counts of optimal/one-pass/multi-pass executions of that work area, so the 1/0/0 against the SORT ORDER BY above means one optimal execution and no one-pass or multi-pass runs.
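If you want to see the same information outside of the plan output, the work area statistics are also exposed in V$SQL_WORKAREA; a small sketch, using the SQL_ID from the plan above:

-- Estimated optimal/one-pass sizes and the optimal/one-pass/multi-pass execution counters
select operation_type,
       operation_id,
       round(estimated_optimal_size/1024) est_optimal_kb,
       round(estimated_onepass_size/1024) est_onepass_kb,
       round(last_memory_used/1024)       last_used_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
  from v$sql_workarea
 where sql_id = '0m329sngnhv1p';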

Hope It Helped!
Prashant Dixit


Use gather_plan_statistics hint to understand optimizer’s estimations and much more ..

Posted by FatDBA on January 10, 2022

Lately someone told me that he ran the gather_plan_statistics hint with his SQL but was not getting the detailed execution plan; all the extra stats you expect, i.e. starts, actual time, buffers, actual and estimated number of rows, were missing and he was getting only the regular/simple execution plan.

But I found he was trying it the wrong way! The /*+ gather_plan_statistics */ hint does not save data into PLAN_TABLE; it stores row source execution statistics in the V$SQL_PLAN_STATISTICS_ALL performance view. To display this data you can use dbms_xplan.display_cursor(format=>’ALLSTATS LAST’), but that does not always work, because you must execute it immediately after the SQL query in the same session. A more reliable method is to query V$SQL, V$SQLAREA or any other useful view to obtain the SQL_ID of the query, and then pass it to the DISPLAY_CURSOR function, for example like this …

SQL>
SQL> SELECT /*+ gather_plan_statistics */ * FROM (SELECT A.MANDNA_KKAJ_DI,A.NAME,ROWNUM IDX FROM DIXDROI.INF_KRA_PRIMAR A,DIXDROI.INF_KRA_PRIMAR_B B,DIXDROI.INF_KRA_PRIMAR_RANGE C WHERE A.TYPE='MOBILE'
AND A.STATUS='Available' AND A.ASSIGNED_CSN_ID IS NULL AND B.OWNING_OBJECT_ID=A.MANDNA_KKAJ_DI AND
B.BINDING_NAME='com.paratapata.killer.mfs.oare.mainframe.locking.ADAKingstonToKillerRangeStst' AND C.MANDNA_KKAJ_DI=B.TARGET_OBJECT_ID
AND C.STATUS = NVL('Active', C.STATUS) AND C.CATEGORY = 'Range'
AND 'Internal' = NVL(C.NUMBER_USAGE,'External') AND (A.lock_id IS NULL OR A.LOCK_DATE <= (sysdate - NVL(A.LOCK_PERIOD,(SELECT mark_raar_value 
FROM DIXDROI.PAM_KRIAIS_MAINFATRAARA_AHAH WHERE mark_raar_name = 'DEFAULT_LOCK_PERIOD' )) / 60 / 24))
ORDER BY TO_NUMBER(A.NAME) ) RESULTS WHERE RESULTS.IDX BETWEEN 1 AND 15; 

MANDNA_KKAJ_DI NAME                                                      IDX
-------------- -------------------------------------------------- ----------
      91891302 0444915115                                                 12
      91891309 0444915122                                                 13
      91891310 0444915123                                                 14
      91891314 0444915127                                                 15
      91891723 0444915536                                                  2
      91891724 0444915537                                                  3
      91891726 0444915539                                                  4
      91891730 0444915543                                                  5
      91891739 0444915552                                                  6
      91891748 0444915561                                                  7
      91891766 0444915579                                                  8
      91891768 0444915581                                                  9
      91891807 0444915620                                                 10
      91891854 0444915667                                                 11
      17116808 04466962472                                                 1

15 rows selected.

SQL> select sql_id, plan_hash_value, executions, sql_text from gv$sqlarea where sql_fulltext like '%gather_plan_statistics%';

SQL_ID        PLAN_HASH_VALUE EXECUTIONS SQL_TEXT
------------- --------------- ---------- ------------------------------------------------------------
0m329sngnhv1p      2185860753          1 SELECT /*+ gather_plan_statistics */ * FROM (SELECT A.OBJECT
                                         _INST_ID,A.NAME,ROWNUM IDX FROM DIXDROI.INF_KRA_PRIMAR A,INS
                                         TALL.INF_KRA_PRIMAR_B B,DIXDROI.INF_KRA_PRIMAR_RANGE C WHERE
                                          A.TYPE=:"SYS_B_00" AND A.STATUS=:"SYS_B_01" AND A.ASSIGNED_
                                         CSN_ID IS NULL AND B.OWNING_OBJECT_ID=A.MANDNA_KKAJ_DI AND B
                                         .BINDING_NAME=:"SYS_B_02" AND C.MANDNA_KKAJ_DI=B.TARGET_OBJE
                                         CT_ID AND C.STATUS = NVL(:"SYS_B_03", C.STATUS) AND C.CATEGO
                                         RY = :"SYS_B_04" AND :"SYS_B_05" = NVL(C.NUMBER_USAGE,:"SYS_
                                         B_06") AND (A.lock_id IS NULL OR A.LOCK_DATE <= (sysdate - N
                                         VL(A.LOCK_PERIOD,(SELECT mark_raar_value FROM DIXDROI.EAI_HY
                                         BRIS_CONFIGURATION_DATA WHERE mark_raar_name = :"SYS_B_07" )
                                         ) / :"SYS_B_08" / :"SYS_B_09")) ORDER BY TO_NUMBER(A.NAME) )
                                          RESULTS WHERE RESULTS.IDX BETWEEN :"SYS_B_10" AND :"SYS_B_1
                                         1"

The above query returns SQL_ID=0m329sngnhv1p and CHILD_NUMBER=0 (the child number is just the child cursor number). Use these values to query the collected plan. This will show you statistics such as the actual number of rows processed (A-Rows) rather than just the estimates (E-Rows). A-Rows is the total number of rows produced by all starts of that operation, whereas E-Rows is the estimated number of rows produced by a single start of the operation. If you want to read more about the E-Rows and A-Rows columns, there is a brilliant post by Jonathan Lewis on the subject.

Okay, the plan also includes a column called Starts, which tells you how many times each step was executed. A-Rows, E-Rows and Starts are all incredibly useful if you want to understand a plan.
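Since the row source statistics live in V$SQL_PLAN_STATISTICS_ALL, you can also compare estimates and actuals directly from there; a small sketch for the same cursor:

-- E-Rows (CARDINALITY) vs A-Rows (LAST_OUTPUT_ROWS) per plan line, for the last execution
select id,
       operation,
       options,
       object_name,
       cardinality      est_rows,
       last_starts      starts,
       last_output_rows act_rows
  from v$sql_plan_statistics_all
 where sql_id = '0m329sngnhv1p'
   and child_number = 0
 order by id;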

SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('0m329sngnhv1p', 0, 'ALLSTATS'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0m329sngnhv1p, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM (SELECT
A.MANDNA_KKAJ_DI,A.NAME,ROWNUM IDX FROM DIXDROI.INF_KRA_PRIMAR
A,DIXDROI.INF_KRA_PRIMAR_B B,DIXDROI.INF_KRA_PRIMAR_RANGE C WHERE
A.TYPE=:"SYS_B_00" AND A.STATUS=:"SYS_B_01" AND A.ASSIGNED_CSN_ID IS
NULL AND B.OWNING_OBJECT_ID=A.MANDNA_KKAJ_DI AND
B.BINDING_NAME=:"SYS_B_02" AND C.MANDNA_KKAJ_DI=B.TARGET_OBJECT_ID AND
C.STATUS = NVL(:"SYS_B_03", C.STATUS) AND C.CATEGORY = :"SYS_B_04" AND
:"SYS_B_05" = NVL(C.NUMBER_USAGE,:"SYS_B_06") AND (A.lock_id IS NULL OR
A.LOCK_DATE <= (sysdate - NVL(A.LOCK_PERIOD,(SELECT mark_raar_value
FROM DIXDROI.PAM_KRIAIS_MAINFATRAARA_AHAH WHERE mark_raar_name =
:"SYS_B_07" )) / :"SYS_B_08" / :"SYS_B_09")) ORDER BY TO_NUMBER(A.NAME)
) RESULTS WHERE RESULTS.IDX BETWEEN :"SYS_B_10" AND :"SYS_B_11"


Plan hash value: 2185860753


-----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M|
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                               |      1 |        |     15 |00:00:00.62 |     353K|       |       |      |
|*  1 |  FILTER                                    |                               |      1 |        |     15 |00:00:00.62 |     353K|       |       |      |
|*  2 |   VIEW                                     |                               |      1 |   5530 |     15 |00:00:00.62 |     353K|       |       |      |
|   3 |    SORT ORDER BY                           |                               |      1 |   5530 |   7873 |00:00:00.62 |     353K|   549K|   457K| 1/0/0|
|   4 |     COUNT                                  |                               |      1 |        |   7873 |00:00:00.46 |     353K|       |       |      |
|   5 |      NESTED LOOPS                          |                               |      1 |   5530 |   7873 |00:00:00.46 |     353K|       |       |      |
|   6 |       NESTED LOOPS                         |                               |      1 |   5530 |    266K|00:00:00.44 |     240K|       |       |      |
|   7 |        NESTED LOOPS                        |                               |      1 |   5530 |    266K|00:00:00.19 |   46039 |       |       |      |
|*  8 |         TABLE ACCESS FULL                  | INF_KRA_PRIMAR_RANGE          |      1 |   4392 |     90 |00:00:00.01 |     878 |       |       |      |
|   9 |         TABLE ACCESS BY INDEX ROWID BATCHED| INF_KRA_PRIMAR_B              |     90 |      1 |    266K|00:00:00.17 |   45161 |       |       |      |
|* 10 |          INDEX RANGE SCAN                  | EAI_NUMBER_REL_1              |     90 |      1 |    266K|00:00:00.04 |    3871 |       |       |      |
|* 11 |        INDEX UNIQUE SCAN                   | PK_INF_KRA_PRIMAR             |    266K|      1 |    266K|00:00:00.20 |     194K|       |       |      |
|* 12 |       TABLE ACCESS BY INDEX ROWID          | INF_KRA_PRIMAR                |    266K|      1 |   7873 |00:00:00.20 |     113K|       |       |      |
|  13 |        TABLE ACCESS BY INDEX ROWID BATCHED | PAM_KRIAIS_MAINFATRAARA_AHAH  |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |      |
|* 14 |         INDEX RANGE SCAN                   | IDXGETNUMBERPD1               |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |      |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:SYS_B_11>=:SYS_B_10)
   2 - filter(("RESULTS"."IDX">=:SYS_B_10 AND "RESULTS"."IDX"<=:SYS_B_11))
   8 - filter((NVL("C"."CATEGORY",'Range')=:SYS_B_04 AND NVL("C"."NUMBER_USAGE",:SYS_B_06)=:SYS_B_05 AND "C"."STATUS"=NVL(:SYS_B_03,"C"."STATUS")))
  10 - access("C"."MANDNA_KKAJ_DI"="B"."TARGET_OBJECT_ID" AND "B"."BINDING_NAME"=:SYS_B_02)
  11 - access("B"."OWNING_OBJECT_ID"="A"."MANDNA_KKAJ_DI")
  12 - filter(("A"."TYPE"=:SYS_B_00 AND "A"."STATUS"=:SYS_B_01 AND "A"."ASSIGNED_CSN_ID" IS NULL AND ("A"."LOCK_ID" IS NULL OR
              "A"."LOCK_DATE"<=SYSDATE@!-NVL("A"."LOCK_PERIOD",)/:SYS_B_08/:SYS_B_09)))
  14 - access("mark_raar_NAME"=:SYS_B_07)


49 rows selected.


There are multiple other ways to add or remove details in your execution plan; let’s take a look at a few of those methods.

-- To get additional COST and BYTES column into plan

SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0m329sngnhv1p',format=>'ALLSTATS LAST +cost +bytes'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0m329sngnhv1p, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM (SELECT
A.MANDNA_KKAJ_DI,A.NAME,ROWNUM IDX FROM DIXDROI.INF_KRA_PRIMAR
A,DIXDROI.INF_KRA_PRIMAR_B B,DIXDROI.INF_KRA_PRIMAR_RANGE C WHERE
A.TYPE=:"SYS_B_00" AND A.STATUS=:"SYS_B_01" AND A.ASSIGNED_CSN_ID IS
NULL AND B.OWNING_OBJECT_ID=A.MANDNA_KKAJ_DI AND
B.BINDING_NAME=:"SYS_B_02" AND C.MANDNA_KKAJ_DI=B.TARGET_OBJECT_ID AND
C.STATUS = NVL(:"SYS_B_03", C.STATUS) AND C.CATEGORY = :"SYS_B_04" AND
:"SYS_B_05" = NVL(C.NUMBER_USAGE,:"SYS_B_06") AND (A.lock_id IS NULL OR
A.LOCK_DATE <= (sysdate - NVL(A.LOCK_PERIOD,(SELECT mark_raar_value
FROM DIXDROI.PAM_KRIAIS_MAINFATRAARA_AHAH WHERE mark_raar_name =
:"SYS_B_07" )) / :"SYS_B_08" / :"SYS_B_09")) ORDER BY TO_NUMBER(A.NAME)
) RESULTS WHERE RESULTS.IDX BETWEEN :"SYS_B_10" AND :"SYS_B_11"

Plan hash value: 2185860753

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                          | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                               |      1 |        |       |       |  3753 (100)|     15 |00:00:00.62 |     353K|       |       |          |
|*  1 |  FILTER                                    |                               |      1 |        |       |       |            |     15 |00:00:00.62 |     353K|       |       |          |
|*  2 |   VIEW                                     |                               |      1 |   5530 |   286K|       |  3753   (1)|     15 |00:00:00.62 |     353K|       |       |          |
|   3 |    SORT ORDER BY                           |                               |      1 |   5530 |   918K|  1064K|  3753   (1)|   7873 |00:00:00.62 |     353K|   549K|   457K|  487K (0)|
|   4 |     COUNT                                  |                               |      1 |        |       |       |            |   7873 |00:00:00.46 |     353K|       |       |          |
|   5 |      NESTED LOOPS                          |                               |      1 |   5530 |   918K|       |  3544   (1)|   7873 |00:00:00.46 |     353K|       |       |          |
|   6 |       NESTED LOOPS                         |                               |      1 |   5530 |   918K|       |  3544   (1)|    266K|00:00:00.44 |     240K|       |       |          |
|   7 |        NESTED LOOPS                        |                               |      1 |   5530 |   615K|       |  2437   (1)|    266K|00:00:00.19 |   46039 |       |       |          |
|*  8 |         TABLE ACCESS FULL                  | INF_KRA_PRIMAR_RANGE          |      1 |   4392 | 92232 |       |   240   (1)|     90 |00:00:00.01 |     878 |       |       |          |
|   9 |         TABLE ACCESS BY INDEX ROWID BATCHED| INF_KRA_PRIMAR_B              |     90 |      1 |    93 |       |     1   (0)|    266K|00:00:00.17 |   45161 |       |       |          |
|* 10 |          INDEX RANGE SCAN                  | EAI_NUMBER_REL_1              |     90 |      1 |       |       |     1   (0)|    266K|00:00:00.04 |    3871 |       |       |          |
|* 11 |        INDEX UNIQUE SCAN                   | PK_INF_KRA_PRIMAR             |    266K|      1 |       |       |     1   (0)|    266K|00:00:00.20 |     194K|       |       |          |
|* 12 |       TABLE ACCESS BY INDEX ROWID          | INF_KRA_PRIMAR                |    266K|      1 |    56 |       |     1   (0)|   7873 |00:00:00.20 |     113K|       |       |          |
|  13 |        TABLE ACCESS BY INDEX ROWID BATCHED | PAM_KRIAIS_MAINFATRAARA_AHAH  |      1 |      1 |    28 |       |     1   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|* 14 |         INDEX RANGE SCAN                   | IDXGETNUMBERPD1               |      1 |      1 |       |       |     1   (0)|      1 |00:00:00.01 |       1 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:SYS_B_11>=:SYS_B_10)
   2 - filter(("RESULTS"."IDX">=:SYS_B_10 AND "RESULTS"."IDX"<=:SYS_B_11))
   8 - filter((NVL("C"."CATEGORY",'Range')=:SYS_B_04 AND NVL("C"."NUMBER_USAGE",:SYS_B_06)=:SYS_B_05 AND "C"."STATUS"=NVL(:SYS_B_03,"C"."STATUS")))
  10 - access("C"."MANDNA_KKAJ_DI"="B"."TARGET_OBJECT_ID" AND "B"."BINDING_NAME"=:SYS_B_02)
  11 - access("B"."OWNING_OBJECT_ID"="A"."MANDNA_KKAJ_DI")
  12 - filter(("A"."TYPE"=:SYS_B_00 AND "A"."STATUS"=:SYS_B_01 AND "A"."ASSIGNED_CSN_ID" IS NULL AND ("A"."LOCK_ID" IS NULL OR
              "A"."LOCK_DATE"<=SYSDATE@!-NVL("A"."LOCK_PERIOD",)/:SYS_B_08/:SYS_B_09)))
  14 - access("mark_raar_NAME"=:SYS_B_07)


49 rows selected.







-- To get OUTLINE data into your execution plan

SQL>
SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'0m329sngnhv1p', format=>'ALLSTATS LAST +outline'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0m329sngnhv1p, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM (SELECT
A.MANDNA_KKAJ_DI,A.NAME,ROWNUM IDX FROM DIXDROI.INF_KRA_PRIMAR
A,DIXDROI.INF_KRA_PRIMAR_B B,DIXDROI.INF_KRA_PRIMAR_RANGE C WHERE
A.TYPE=:"SYS_B_00" AND A.STATUS=:"SYS_B_01" AND A.ASSIGNED_CSN_ID IS
NULL AND B.OWNING_OBJECT_ID=A.MANDNA_KKAJ_DI AND
B.BINDING_NAME=:"SYS_B_02" AND C.MANDNA_KKAJ_DI=B.TARGET_OBJECT_ID AND
C.STATUS = NVL(:"SYS_B_03", C.STATUS) AND C.CATEGORY = :"SYS_B_04" AND
:"SYS_B_05" = NVL(C.NUMBER_USAGE,:"SYS_B_06") AND (A.lock_id IS NULL OR
A.LOCK_DATE <= (sysdate - NVL(A.LOCK_PERIOD,(SELECT mark_raar_value
FROM DIXDROI.PAM_KRIAIS_MAINFATRAARA_AHAH WHERE mark_raar_name =
:"SYS_B_07" )) / :"SYS_B_08" / :"SYS_B_09")) ORDER BY TO_NUMBER(A.NAME)
) RESULTS WHERE RESULTS.IDX BETWEEN :"SYS_B_10" AND :"SYS_B_11"

Plan hash value: 2185860753

-----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                               |      1 |        |     15 |00:00:00.62 |     353K|       |       |          |
|*  1 |  FILTER                                    |                               |      1 |        |     15 |00:00:00.62 |     353K|       |       |          |
|*  2 |   VIEW                                     |                               |      1 |   5530 |     15 |00:00:00.62 |     353K|       |       |          |
|   3 |    SORT ORDER BY                           |                               |      1 |   5530 |   7873 |00:00:00.62 |     353K|   549K|   457K|  487K (0)|
|   4 |     COUNT                                  |                               |      1 |        |   7873 |00:00:00.46 |     353K|       |       |          |
|   5 |      NESTED LOOPS                          |                               |      1 |   5530 |   7873 |00:00:00.46 |     353K|       |       |          |
|   6 |       NESTED LOOPS                         |                               |      1 |   5530 |    266K|00:00:00.44 |     240K|       |       |          |
|   7 |        NESTED LOOPS                        |                               |      1 |   5530 |    266K|00:00:00.19 |   46039 |       |       |          |
|*  8 |         TABLE ACCESS FULL                  | INF_KRA_PRIMAR_RANGE          |      1 |   4392 |     90 |00:00:00.01 |     878 |       |       |          |
|   9 |         TABLE ACCESS BY INDEX ROWID BATCHED| INF_KRA_PRIMAR_B              |     90 |      1 |    266K|00:00:00.17 |   45161 |       |       |          |
|* 10 |          INDEX RANGE SCAN                  | EAI_NUMBER_REL_1              |     90 |      1 |    266K|00:00:00.04 |    3871 |       |       |          |
|* 11 |        INDEX UNIQUE SCAN                   | PK_INF_KRA_PRIMAR             |    266K|      1 |    266K|00:00:00.20 |     194K|       |       |          |
|* 12 |       TABLE ACCESS BY INDEX ROWID          | INF_KRA_PRIMAR                |    266K|      1 |   7873 |00:00:00.20 |     113K|       |       |          |
|  13 |        TABLE ACCESS BY INDEX ROWID BATCHED | PAM_KRIAIS_MAINFATRAARA_AHAH  |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|* 14 |         INDEX RANGE SCAN                   | IDXGETNUMBERPD1               |      1 |      1 |      1 |00:00:00.01 |       1 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      OPT_PARAM('optimizer_index_cost_adj' 10)
      FIRST_ROWS(1)
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      NO_ACCESS(@"SEL$1" "RESULTS"@"SEL$1")
      FULL(@"SEL$2" "C"@"SEL$2")
      INDEX_RS_ASC(@"SEL$2" "B"@"SEL$2" ("INF_KRA_PRIMAR_B"."TARGET_OBJECT_ID" "INF_KRA_PRIMAR_B"."BINDING_NAME"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$2" "B"@"SEL$2")
      INDEX(@"SEL$2" "A"@"SEL$2" ("INF_KRA_PRIMAR"."MANDNA_KKAJ_DI"))
      LEADING(@"SEL$2" "C"@"SEL$2" "B"@"SEL$2" "A"@"SEL$2")
      USE_NL(@"SEL$2" "B"@"SEL$2")
      USE_NL(@"SEL$2" "A"@"SEL$2")
      NLJ_BATCHING(@"SEL$2" "A"@"SEL$2")
      PUSH_SUBQ(@"SEL$3")
      INDEX_RS_ASC(@"SEL$3" "PAM_KRIAIS_MAINFATRAARA_AHAH"@"SEL$3" ("PAM_KRIAIS_MAINFATRAARA_AHAH"."mark_raar_NAME"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$3" "PAM_KRIAIS_MAINFATRAARA_AHAH"@"SEL$3")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:SYS_B_11>=:SYS_B_10)
   2 - filter(("RESULTS"."IDX">=:SYS_B_10 AND "RESULTS"."IDX"<=:SYS_B_11))
   8 - filter((NVL("C"."CATEGORY",'Range')=:SYS_B_04 AND NVL("C"."NUMBER_USAGE",:SYS_B_06)=:SYS_B_05 AND "C"."STATUS"=NVL(:SYS_B_03,"C"."STATUS")))
  10 - access("C"."MANDNA_KKAJ_DI"="B"."TARGET_OBJECT_ID" AND "B"."BINDING_NAME"=:SYS_B_02)
  11 - access("B"."OWNING_OBJECT_ID"="A"."MANDNA_KKAJ_DI")
  12 - filter(("A"."TYPE"=:SYS_B_00 AND "A"."STATUS"=:SYS_B_01 AND "A"."ASSIGNED_CSN_ID" IS NULL AND ("A"."LOCK_ID" IS NULL OR
              "A"."LOCK_DATE"<=SYSDATE@!-NVL("A"."LOCK_PERIOD",)/:SYS_B_08/:SYS_B_09)))
  14 - access("mark_raar_NAME"=:SYS_B_07)


77 rows selected.

SQL>







-- Lets try a more sophisticated one to get all sorts of information available

SQL>
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(sql_id => '0m329sngnhv1p', FORMAT => 'ADVANCED +ROWS +BYTES +COST +PARALLEL +PARTITION +IOSTATS +MEMSTATS +ALIAS +PEEKED_BINDS +OUTLINE +PREDICATE +PROJECTION +REMOTE +NOTE'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0m329sngnhv1p, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM (SELECT
A.MANDNA_KKAJ_DI,A.NAME,ROWNUM IDX FROM DIXDROI.INF_KRA_PRIMAR
A,DIXDROI.INF_KRA_PRIMAR_B B,DIXDROI.INF_KRA_PRIMAR_RANGE C WHERE
A.TYPE=:"SYS_B_00" AND A.STATUS=:"SYS_B_01" AND A.ASSIGNED_CSN_ID IS
NULL AND B.OWNING_OBJECT_ID=A.MANDNA_KKAJ_DI AND
B.BINDING_NAME=:"SYS_B_02" AND C.MANDNA_KKAJ_DI=B.TARGET_OBJECT_ID AND
C.STATUS = NVL(:"SYS_B_03", C.STATUS) AND C.CATEGORY = :"SYS_B_04" AND
:"SYS_B_05" = NVL(C.NUMBER_USAGE,:"SYS_B_06") AND (A.lock_id IS NULL OR
A.LOCK_DATE <= (sysdate - NVL(A.LOCK_PERIOD,(SELECT mark_raar_value
FROM DIXDROI.PAM_KRIAIS_MAINFATRAARA_AHAH WHERE mark_raar_name =
:"SYS_B_07" )) / :"SYS_B_08" / :"SYS_B_09")) ORDER BY TO_NUMBER(A.NAME)
) RESULTS WHERE RESULTS.IDX BETWEEN :"SYS_B_10" AND :"SYS_B_11"

Plan hash value: 2185860753

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                          | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                               |      1 |        |       |       |  3753 (100)|          |     15 |00:00:00.62 |     353K|       |       |          |
|*  1 |  FILTER                                    |                               |      1 |        |       |       |            |          |     15 |00:00:00.62 |     353K|       |       |          |
|*  2 |   VIEW                                     |                               |      1 |   5530 |   286K|       |  3753   (1)| 00:00:01 |     15 |00:00:00.62 |     353K|       |       |          |
|   3 |    SORT ORDER BY                           |                               |      1 |   5530 |   918K|  1064K|  3753   (1)| 00:00:01 |   7873 |00:00:00.62 |     353K|   549K|   457K|     1/0/0|
|   4 |     COUNT                                  |                               |      1 |        |       |       |            |          |   7873 |00:00:00.46 |     353K|       |       |          |
|   5 |      NESTED LOOPS                          |                               |      1 |   5530 |   918K|       |  3544   (1)| 00:00:01 |   7873 |00:00:00.46 |     353K|       |       |          |
|   6 |       NESTED LOOPS                         |                               |      1 |   5530 |   918K|       |  3544   (1)| 00:00:01 |    266K|00:00:00.44 |     240K|       |       |          |
|   7 |        NESTED LOOPS                        |                               |      1 |   5530 |   615K|       |  2437   (1)| 00:00:01 |    266K|00:00:00.19 |    46039|       |       |          |
|*  8 |         TABLE ACCESS FULL                  | INF_KRA_PRIMAR_RANGE          |      1 |   4392 | 92232 |       |   240   (1)| 00:00:01 |     90 |00:00:00.01 |      878|       |       |          |
|   9 |         TABLE ACCESS BY INDEX ROWID BATCHED| INF_KRA_PRIMAR_B              |     90 |      1 |    93 |       |     1   (0)| 00:00:01 |    266K|00:00:00.17 |    45161|       |       |          |
|* 10 |          INDEX RANGE SCAN                  | EAI_NUMBER_REL_1              |     90 |      1 |       |       |     1   (0)| 00:00:01 |    266K|00:00:00.04 |     3871|       |       |          |
|* 11 |        INDEX UNIQUE SCAN                   | PK_INF_KRA_PRIMAR             |    266K|      1 |       |       |     1   (0)| 00:00:01 |    266K|00:00:00.20 |     194K|       |       |          |
|* 12 |       TABLE ACCESS BY INDEX ROWID          | INF_KRA_PRIMAR                |    266K|      1 |    56 |       |     1   (0)| 00:00:01 |   7873 |00:00:00.20 |     113K|       |       |          |
|  13 |        TABLE ACCESS BY INDEX ROWID BATCHED | PAM_KRIAIS_MAINFATRAARA_AHAH  |      1 |      1 |    28 |       |     1   (0)| 00:00:01 |      1 |00:00:00.01 |        2|       |       |          |
|* 14 |         INDEX RANGE SCAN                   | IDXGETNUMBERPD1               |      1 |      1 |       |       |     1   (0)| 00:00:01 |      1 |00:00:00.01 |        1|       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   2 - SEL$2 / RESULTS@SEL$1
   3 - SEL$2
   8 - SEL$2 / C@SEL$2
   9 - SEL$2 / B@SEL$2
  10 - SEL$2 / B@SEL$2
  11 - SEL$2 / A@SEL$2
  12 - SEL$2 / A@SEL$2
  13 - SEL$3 / PAM_KRIAIS_MAINFATRAARA_AHAH@SEL$3
  14 - SEL$3 / PAM_KRIAIS_MAINFATRAARA_AHAH@SEL$3

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      OPT_PARAM('optimizer_index_cost_adj' 10)
      FIRST_ROWS(1)
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      NO_ACCESS(@"SEL$1" "RESULTS"@"SEL$1")
      FULL(@"SEL$2" "C"@"SEL$2")
      INDEX_RS_ASC(@"SEL$2" "B"@"SEL$2" ("INF_KRA_PRIMAR_B"."TARGET_OBJECT_ID" "INF_KRA_PRIMAR_B"."BINDING_NAME"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$2" "B"@"SEL$2")
      INDEX(@"SEL$2" "A"@"SEL$2" ("INF_KRA_PRIMAR"."MANDNA_KKAJ_DI"))
      LEADING(@"SEL$2" "C"@"SEL$2" "B"@"SEL$2" "A"@"SEL$2")
      USE_NL(@"SEL$2" "B"@"SEL$2")
      USE_NL(@"SEL$2" "A"@"SEL$2")
      NLJ_BATCHING(@"SEL$2" "A"@"SEL$2")
      PUSH_SUBQ(@"SEL$3")
      INDEX_RS_ASC(@"SEL$3" "PAM_KRIAIS_MAINFATRAARA_AHAH"@"SEL$3" ("PAM_KRIAIS_MAINFATRAARA_AHAH"."mark_raar_NAME"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$3" "PAM_KRIAIS_MAINFATRAARA_AHAH"@"SEL$3")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): 'MOBILE'
   2 - :2 (VARCHAR2(30), CSID=873): 'Available'
   3 - (VARCHAR2(30), CSID=873): 'com.paratapata.killer.mfs.oare.mainframe.locking.ADAKingstonToKillerRangeStst'
   4 - (VARCHAR2(30), CSID=873): 'Active'
   5 - (VARCHAR2(30), CSID=873): 'Range'
   6 - (VARCHAR2(30), CSID=873): 'Internal'
   7 - (VARCHAR2(30), CSID=873): 'External'

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:SYS_B_11>=:SYS_B_10)
   2 - filter(("RESULTS"."IDX">=:SYS_B_10 AND "RESULTS"."IDX"<=:SYS_B_11))
   8 - filter((NVL("C"."CATEGORY",'Range')=:SYS_B_04 AND NVL("C"."NUMBER_USAGE",:SYS_B_06)=:SYS_B_05 AND "C"."STATUS"=NVL(:SYS_B_03,"C"."STATUS")))
  10 - access("C"."MANDNA_KKAJ_DI"="B"."TARGET_OBJECT_ID" AND "B"."BINDING_NAME"=:SYS_B_02)
  11 - access("B"."OWNING_OBJECT_ID"="A"."MANDNA_KKAJ_DI")
  12 - filter(("A"."TYPE"=:SYS_B_00 AND "A"."STATUS"=:SYS_B_01 AND "A"."ASSIGNED_CSN_ID" IS NULL AND ("A"."LOCK_ID" IS NULL OR
              "A"."LOCK_DATE"<=SYSDATE@!-NVL("A"."LOCK_PERIOD",)/:SYS_B_08/:SYS_B_09)))
  14 - access("mark_raar_NAME"=:SYS_B_07)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "RESULTS"."MANDNA_KKAJ_DI"[NUMBER,22], "RESULTS"."NAME"[VARCHAR2,50], "RESULTS"."IDX"[NUMBER,22]
   2 - "RESULTS"."MANDNA_KKAJ_DI"[NUMBER,22], "RESULTS"."NAME"[VARCHAR2,50], "RESULTS"."IDX"[NUMBER,22]
   3 - (#keys=1) TO_NUMBER("NAME")[22], "A"."MANDNA_KKAJ_DI"[NUMBER,22], "A"."NAME"[VARCHAR2,50], ROWNUM[22]
   4 - "A"."MANDNA_KKAJ_DI"[NUMBER,22], "NAME"[VARCHAR2,50], ROWNUM[8]
   5 - "A"."MANDNA_KKAJ_DI"[NUMBER,22], "NAME"[VARCHAR2,50]
   6 - "A".ROWID[ROWID,10], "A"."MANDNA_KKAJ_DI"[NUMBER,22]
   7 - "B"."OWNING_OBJECT_ID"[NUMBER,22]
   8 - "C"."MANDNA_KKAJ_DI"[NUMBER,22]
   9 - "B"."OWNING_OBJECT_ID"[NUMBER,22]
  10 - "B".ROWID[ROWID,10]
  11 - "A".ROWID[ROWID,10], "A"."MANDNA_KKAJ_DI"[NUMBER,22]
  12 - "NAME"[VARCHAR2,50]
  13 - "mark_raar_VALUE"[NUMBER,22]
  14 - "PAM_KRIAIS_MAINFATRAARA_AHAH".ROWID[ROWID,10]


120 rows selected.

SQL>

Hope It Helped!
Prashant Dixit


Materialized Views Refresh slow ? This is what you can try …

Posted by FatDBA on November 22, 2021

Hi All,

I spent the entire last week looking into a system-wide performance issue where the customer reported slowness in their database ever since they moved to new infrastructure after a data migration. Though this wasn’t the production system, it was almost equally important, and the issue was impacting their acceptance tests. I was told that a few Materialized Views were involved and that their refresh times had grown by a big margin since the move to the new system.

As soon as I got access to their system, I did a quick check on the database and found a long list of scheduled MV refresh jobs, almost all of them running with a huge delay. The performance was not at all consistent: sometimes a refresh finished within a few minutes and another run took a few hours. This initially sounded a little bizarre to me, until I did a comparative study between the previous (fast) and the new (slow) databases. What I observed is that the business tables in the new (slow) database held more than 500% extra data, and with that much data all those complex MV refresh queries ultimately slowed down on the new system.
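For the comparison itself, a quick (and admittedly rough) way to size the application schema on both databases is simply to sum up its segments; the owner name below is the one from the MV output later in this post, so adjust it for your own environment:

-- Rough size of the application schema, run on both the old and the new database
select owner, round(sum(bytes)/1024/1024/1024,1) size_gb
  from dba_segments
 where owner = 'PRASH'
 group by owner;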

For example, one of the critical MVs, created with COMPLETE refresh mode, was taking ~ 200 minutes to refresh.

create MATERIALIZED VIEW MV_MYTEST_BIGREFRESH
BUILD DEFERRED
REFRESH ON DEMAND
COMPLETE
AS
select abc,xyz,cast(value as ajaaj) as test ....
............
.................



SELECT * 
FROM ( SELECT OWNER, 
              MVIEW_NAME, 
              CONTAINER_NAME, 
              REFRESH_MODE, 
              LAST_REFRESH_TYPE, 
              STALENESS, 
              round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS 
       FROM ALL_MVIEWS 
       WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
     ) 
ORDER BY REFRESH_TIME_MINS DESC;
 
OWNER   MVIEW_NAME                       CONTAINER_NAME                   REFRESH_MODE LAST_REFRESH_TYPE STALENESS           REFRESH_TIME_MINS
------- -------------------------------- -------------------------------- ------------ ----------------- -------------------------------------
PRASH   MV_MYTEST_BIGREFRESH             MV_MYTEST_BIGREFRESH             DEMAND        COMPLETE      COMPLETE                196.75

Though I did multiple optimizations to stabilize the performance, I would like to talk about a few of the refresh tuning techniques that I applied and that finally worked for me.

Atomic_Refresh with value FALSE.
What caught my eye were all those long running DELETE statements underneath those costly MV refresh jobs. So the very first thing I tried with the complex and long running COMPLETE refreshes was the optional parameter atomic_refresh of the procedure dbms_mview.refresh, which is very useful here. If the parameter is set to FALSE, the materialized view is emptied with a much faster TRUNCATE command instead of a row-by-row DELETE. I immediately got the gain, as the runtime dropped by ~ 60%. At the same time I observed a big drop in redo generation, since the DELETE was replaced with TRUNCATE, and hence far fewer 'checkpoint incomplete' and 'log file sync' waits.

The parameter atomic_refresh=FALSE works only with a "complete" refresh, so the "truncate" path is only possible with "complete". There is one demerit to this method though: no data is visible to the users during the refresh, so choose wisely. If this is feasible in your environment, you can use the following command for a complete refresh:

BEGIN
dbms_mview.refresh('MV_MYTEST_BIGREFRESH_JOB', method => 'C', atomic_refresh => FALSE);
END;
/


-- See below, REFRESH_TIME_MINS dropped by ~ 60% after setting atomic_refresh => FALSE
SELECT * 
FROM ( SELECT OWNER, 
              MVIEW_NAME, 
              CONTAINER_NAME, 
              REFRESH_MODE, 
              LAST_REFRESH_TYPE, 
              STALENESS, 
              round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS 
       FROM ALL_MVIEWS 
       WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
     ) 
ORDER BY REFRESH_TIME_MINS DESC;
 
OWNER   MVIEW_NAME                       CONTAINER_NAME                   REFRESH_MODE LAST_REFRESH_TYPE STALENESS           REFRESH_TIME_MINS
------- -------------------------------- -------------------------------- ------------ ----------------- -------------------------------------
PRASH   MV_MYTEST_BIGREFRESH             MV_MYTEST_BIGREFRESH             DEMAND        COMPLETE      COMPLETE                79.12

Adding PX (Parallelism) to the base tables or to the refresh command itself.
Though with atomic_refresh FALSE I was able to cut down a great percentage of the overall elapsed time, the customer still asked me for more, maybe some 30% more …
I decided to try PX on the base tables or on the refresh statement itself. Among other parameters, the dbms_mview.refresh procedure takes a PARALLELISM parameter. At the outset, it appears that the PARALLELISM parameter should invoke a parallel refresh of the materialized view. However, it does not; it simply changes the requested degree of parallelism if the refresh already executes in parallel. You can invoke the procedure as:

BEGIN
dbms_mview.refresh('MV_MYTEST_BIGREFRESH_JOB', method => 'C', atomic_refresh => FALSE, parallelism => 4);
END;
/



-- See below, REFRESH_TIME_MINS dropped by a further ~30% after adding parallelism to the refresh and the master/base MV tables.
SELECT * 
FROM ( SELECT OWNER, 
              MVIEW_NAME, 
              CONTAINER_NAME, 
              REFRESH_MODE, 
              LAST_REFRESH_TYPE, 
              STALENESS, 
              round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS 
       FROM ALL_MVIEWS 
       WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
     ) 
ORDER BY REFRESH_TIME_MINS DESC;
 
OWNER   MVIEW_NAME                       CONTAINER_NAME                   REFRESH_MODE LAST_REFRESH_TYPE STALENESS           REFRESH_TIME_MINS
------- -------------------------------- -------------------------------- ------------ ----------------- -------------------------------------
PRASH   MV_MYTEST_BIGREFRESH             MV_MYTEST_BIGREFRESH             DEMAND        COMPLETE      COMPLETE                 55.08
  

You can also add PX to the base tables. When the master table of the materialized view has its PARALLEL attribute set to > 1, then the creation as well as the refresh processes will be parallelized, whether or not you specify the PARALLELISM parameter in the refresh call.

ALTER TABLE DIXIT_TEST_TABLE PARALLEL (DEGREE 4);

You can also add a PARALLEL hint to the MV definition itself! When the materialized view is created with a PARALLEL hint, the creation as well as the refresh processes will be parallelized, again regardless of the PARALLELISM parameter in the refresh call.
This is how you can add a parallel hint in the materialized view definition:

CREATE MATERIALIZED VIEW TEST_MV
AS
SELECT /*+ PARALLEL(DIXIT_TEST_TABLE, 4) */ MONTHLY_ID, SALARY_DATE, SUM(ANNUAL_COUNTS)
FROM DIXIT_TEST_TABLE GROUP BY MONTHLY_ID, SALARY_DATE;

Stats Collection:
It is vital that statistics on all of the tables and indexes your MV refresh references are fresh and well collected. Look out for possibilities to add column level stats, i.e. histograms, wherever there is scope, especially for low cardinality columns. You can also leave this to the optimizer to decide by using METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'.
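
A minimal sketch of such a stats collection, assuming PRASH.DIXIT_TEST_TABLE is one of the base tables referenced by the refresh (names and degree are illustrative, adjust them to your environment):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
     ownname    => 'PRASH',                       -- illustrative owner
     tabname    => 'DIXIT_TEST_TABLE',            -- illustrative base table
     method_opt => 'FOR ALL COLUMNS SIZE AUTO',   -- let the optimizer decide where histograms help
     cascade    => TRUE,                          -- gather stats on dependent indexes too
     degree     => 4);
END;
/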

Other techniques:

  • Try the FAST/Incremental refresh mode. With a FAST refresh, only the changes since the last refresh are applied to the materialized view. Personally I have found the FAST refresh slower than the COMPLETE refresh in this case, maybe because I used the atomic_refresh parameter that replaced those costly DELETEs with a fast TRUNCATE … Not sure why they called it FAST 🙂 To be sure that a materialized view can be fast refreshed, we can use the explain_mview procedure and check the capability_name called "REFRESH_FAST". The MV_CAPABILITIES_TABLE is created in the owner's schema by running the following script ..
SQL> @$ORACLE_HOME/rdbms/admin/utlxmv.sql

SQL> exec dbms_mview.explain_mview('MV_MYTEST_BIGREFRESH_JOB');
PL/SQL procedure successfully completed.
 
SQL> select capability_name,possible,related_text,msgtxt from mv_capabilities_table;

CAPABILITY_NAME                POSSIBLE             RELATED_TEXT            MSGTXT
------------------------------ -------------------- ----------------------- ----------------------------------------------------------------------------
PCT                            N
REFRESH_COMPLETE               Y
REFRESH_FAST                   Y <<<<<<<<---------------
REWRITE                        Y
PCT_TABLE                      N                    ID_ID_CANADA_HISTORY_T1 relation is not a partitioned table
PCT_TABLE                      N                    ID_ID_CANADA_HISTORY_TV relation is not a partitioned table
PCT_TABLE                      N                    DWH_SITE                relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT      Y
REFRESH_FAST_AFTER_ONETAB_DML  Y
REFRESH_FAST_AFTER_ANY_DML     Y
REFRESH_FAST_PCT               N                                            PCT is not possible on any of the detail tables in the materialized view
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     Y
REWRITE_GENERAL                Y
REWRITE_PCT                    N                                            general rewrite is not possible or PCT is not possible on any of the detail tables
PCT_TABLE_REWRITE              N                    ID_ID_CANADA_HISTORY_T1 relation is not a partitioned table
PCT_TABLE_REWRITE              N                    ID_ID_CANADA_HISTORY_TV relation is not a partitioned table
PCT_TABLE_REWRITE              N                    DWH_SITE                relation is not a partitioned table
 
18 rows selected.
  • Out-of-place refresh is a new option of DBMS_MVIEW.REFRESH in Oracle 12c. The complete refresh process builds a new table which seamlessly becomes the materialized view, and the old table is dropped. It only works in conjunction with a non-atomic refresh (a small sketch follows after this list).
  • DBMS_ADVISOR contains a procedure called TUNE_MVIEW that is used when working with the query rewrite mechanism, while DBMS_MVIEW.EXPLAIN_MVIEW and DBMS_MVIEW.EXPLAIN_REWRITE can be used to diagnose why a materialized view isn't being used for query rewrite. The DBMS_ADVISOR.TUNE_MVIEW procedure has the capability to add additional aggregate columns and materialized view logs to the view definition so that it becomes fast refreshable.
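
For reference, a minimal out-of-place complete refresh could look like the sketch below (assuming Oracle 12c or later, and reusing the MV name from this post; out_of_place needs a non-atomic refresh):

BEGIN
  dbms_mview.refresh('MV_MYTEST_BIGREFRESH',
                     method         => 'C',
                     atomic_refresh => FALSE,
                     out_of_place   => TRUE);
END;
/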

Well, after I performed the above three methods, I was able to reduce the MV refresh times by more than 92% … 🙂

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , , | Leave a Comment »

10053 Trace Viewer : A life savior when handling colossal optimizer traces

Posted by FatDBA on October 18, 2021

Hi Everyone,

I am sure that my last post about 10053 debug traces has sparked some interest in optimizer cost calculations and estimations 🙂 As you guys know, these traces aren't easy to digest and interpret; they are pretty complicated, a humongous pile of cryptic internal information. One of the readers asked – is there any tool that can help to at least format the trace and its sections ? Yes, there are a few, and one of my favorites is the 10053 viewer, which I have been using for the last few years now (lucky that I found that great blog post by Jonathan Lewis).

Click here if you want to download it!

The tool is pretty easy to use! You have to click on the 'open trace file' button, browse to the 10053 trace on your system, and click on 'show trace file' (the next button).

Now load the trace file.

Now you'll have a drop down view to select from. Once the trace is loaded, you can access sections by using '+' to expand and '-' to minimize the section.

Expand to get more details about any particular section.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

Oracle event 10046 debug traces, they really aren’t that ‘complicated’ as we think – A 10046 trace break apart!

Posted by FatDBA on October 16, 2021

Hi Everyone,

Oracle has a long list of internal debug codes, and tracing is an art and a real craft. The 10046 debug trace is one of the popular methods for collecting extended SQL trace (like SQL_TRACE=TRUE) information for Oracle sessions. We use it especially to determine or distinguish the nature of a SQL tuning problem. By setting this event, you get detailed trace information on Oracle's internal execution: parse, execute and fetch calls, waits, and bind variables, which plays a very important role in analyzing the performance of the system. It provides a great source of information, at different levels of detail, about SQLs.

This post is all about breaking the trace into its parts and understanding some of the critical sections, to help make sense of the SQL stats that it captures. I am not going to cover generation in detail as the steps are pretty straightforward and are available on Metalink.

Though there are multiple use cases for 10046, I recently used it to understand a complicated and costly PL/SQL program which calls more than 1000 different SQLs inside, and I was interested in the costliest among them and why. There are surely other ways to get details of a similar level, like SQL Profiler, SQL traces etc., but none of them provides the level of detail that 10046 gives, especially when the TKPROF report is sorted by elapsed execution time (exeela), elapsed fetch time (fchela) and elapsed parse time (prsela).
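
For quick reference, this is roughly how such a sorted trace is produced (just a sketch; the trace level, identifier and file names are illustrative):

-- enable extended tracing for the session (level 12 = waits + binds)
ALTER SESSION SET tracefile_identifier = 'for_spdixitM';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- run the SQL or PL/SQL program to be traced, then switch tracing off
ALTER SESSION SET EVENTS '10046 trace name context off';

-- format the raw trace with TKPROF, sorted the same way as the report below
$ tkprof dixitdb_ora_28282_10046_for_spdixitM.trc 10046_sorted.txt sort=exeela,fchela,prsela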

The snippet below is from a live 10046 sorted trace from a production system running 10.2.0.5.0 (yes, an old application). You won't notice much difference if you do the same on a newer database version; only a few things change on the latest Oracle releases. Okay, let's first understand a few of the keywords and column names used in the result.

TKPROF: Release 10.2.0.5.0 - Production on Fri Sep 27 03:31:42 2021

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Trace file: dixitdb_ora_28282_10046_for_spdixitM.trc
Sort options: exeela  fchela  prsela  
********************************************************************************

SELECT COUNT(*) 
FROM
 CANONTALAB.DIXIT1_SAMPLE WHERE DIXIT1_SAMPLE_NO=:B1 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      134      0.00       0.00          0          0          0           0
Execute    862      0.03       0.03          0          0          0           0
Fetch      862     46.59      45.52          0    1235246          0         862
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1858     46.63      45.55          0    1235246          0         862

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 296  (CANONTALAB)   (recursive depth: 3)

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  SORT AGGREGATE (cr=2866 pr=0 pw=0 time=114898 us)
      2   INDEX FAST FULL SCAN PK_DIXIT1_SAMPLE (cr=2866 pr=0 pw=0 time=114888 us)(object id 125001)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      2   SORT (AGGREGATE)
      2    INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_DIXIT1_SAMPLE' 
               (INDEX (UNIQUE))

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        60       12.17         43.54
  
********************************************************************************

COUNT – Represents the number of times a SQL statement was parsed, executed, or fetched.
CPU – Total CPU time in seconds for all parse, execute, or fetch calls for the SQL.
ELAPSED – Total elapsed time in seconds for all parse, execute, or fetch calls for the SQL statement.
DISK – Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.
QUERY – Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries. A consistent get is where Oracle returns a block from the buffer cache, but has to check that it is the version of the block current as of the time the query started; if needed, the data is reconstructed from rollback (undo) information to give you a consistent view.
CURRENT – Total number of buffers retrieved in current mode. Buffers are retrieved in current mode for statements such as INSERT, UPDATE, and DELETE. A DB block get (or current get in TKPROF) not only gets the block as it is right now, but it stops anyone else from getting that block (in current mode!) until we change it and release it. If someone else got there first, we wait.
ROWS – Total number of rows processed by the SQL statement. This total does not include rows processed by subqueries of the SQL statement. Note that the statistics are broken down across the 3 call types: Parse, Execute & Fetch.
PARSE – Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects. This is where the physical and logical transformations and optimizations happen.
EXECUTE – Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.
FETCH – Retrieves rows returned by a query. Fetches are only performed for SELECT statements.

Okay, now that all the column names and table entries are explained, let me try to explain what the numbers actually tell us.

It says 1235246 blocks were read in consistent mode during the fetch calls. Since this is a SELECT statement, the blocks show up under the Fetch call; for a DML statement they would show up under the Execute call. The report also lists misses in the library cache for each call type; if there is no miss, it won't be mentioned. The 1 miss for this SQL is very much acceptable, since when a SQL runs for the first time it needs to be hard parsed and its execution plan stored, so the parse and execute calls will each show 1 miss. Notice the parse call happened 134 times but the miss count is only 1, meaning the statement was hard parsed only once and stored in the library cache; for the next 133 parses the already parsed statement from the library cache was reused. So we have 1 miss and 133 hits. Similarly the execution plan was built only once, and 861 times Oracle reused the same execution plan from the library cache.

Next, jump to the row source operations and the codes they use: cr = consistent reads, pr = physical reads, pw = physical writes, time = time taken by the step in microseconds. You might see some other codes as well, i.e. cost = cost incurred by the step, size = size of the data in that step, and card = cardinality.

So, the query was found doing a UNIQUE SCAN on its primary key index PK_DIXIT1_SAMPLE in ALL_ROWS mode, which is easy to understand: an equality predicate was used and the unique/primary key constraint was sufficient by itself to produce an index unique scan. Finally, the report shows the wait event details, which are pretty straightforward: the statement waited on 'db file sequential read' with a max single wait of 12.17 seconds and 43.54 seconds waited in total.

With the values above we need to make a decision on whether to tune the SQL or not. Unless we have a locking problem or badly performing SQLs, we shouldn't worry too much about the CPU time or the elapsed time, because timings come into consideration only when we have badly performing SQLs. The important factor is the number of block visits, both query (that is, subject to read consistency) and current (that is, not subject to read consistency). Segment headers and blocks that are going to be updated are acquired in current mode, but all query and subquery processing requests the data in query mode.

Hope It Helped
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | 4 Comments »

A ‘simple’ dissection of an optimizer decision and cost using Oracle 10053 trace event … Yes, it’s simple!

Posted by FatDBA on October 10, 2021

Hi Guys,

In today's post I will try to explain the cost, stats (system & object level), join order, cardinality & selectivity calculations behind execution plan entries, with the help of the infamous 10053 optimizer trace. Though 10053 is notoriously complicated, detailed and extensive even for simple SQL statements, it always holds a wealth of information about optimizer decisions like plan directives, peeked bind values, predicate move around, join elimination stats, query transformations, fix controls, join orders & other computations. So, I will try to keep it as simple as I can.

About ways of generating the trace, you can produce it via ALTER SESSION, ORADEBUG or even through DBMS_SQLDIAG.DUMP_TRACE (which does not require the diagnostic pack). There are multiple use cases for 10053, but the most common is when you want to compare a best and a worst plan and see the difference in cost calculations.
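
For reference, two of those options could look roughly like the sketch below (the SQL ID and file identifier are placeholders):

-- Option 1: trace your own session while the statement gets hard parsed
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- run (or EXPLAIN PLAN FOR) the SQL here, then switch it off
ALTER SESSION SET EVENTS '10053 trace name context off';

-- Option 2: dump the optimizer trace for a cursor already present in the shared pool
BEGIN
  DBMS_SQLDIAG.DUMP_TRACE(
     p_sql_id       => '&sql_id',     -- placeholder, supply the SQL ID of interest
     p_child_number => 0,
     p_component    => 'Compiler',
     p_file_id      => 'MY_10053');
END;
/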

Alright, so this is the complete SQL statement followed up by its execution plan and predicate details.

SELECT * FROM (SELECT n.object_inst_id, n.name, n.type, rownum idx
FROM testsch.PDI_FBI_HASHAH1 n join testsch.PDI_FBI_HASHAH1_b on  TESTID_name = :"SYS_B_00"  and owning_object_id = n.object_inst_id
and n.name = NVL(null, n.name) AND n.status = NVL(:"SYS_B_01", :"SYS_B_02")
AND n.type = NVL(:"SYS_B_03", n.type) AND NVL(n.test_allocation,:"SYS_B_04") = NVL(NVL(null, n.test_allocation),:"SYS_B_05")
AND  NVL(n.number_usage, :"SYS_B_06")=:"SYS_B_07" AND ( n.owner IS NULL OR instr(lower(n.owner), :"SYS_B_08") > :"SYS_B_09" )
AND n.assigned_csn_id IS NULL AND n.locked_on IS NULL AND n.lock_owner IS NULL
AND (n.lock_id IS NULL OR n.LOCK_DATE <= (sysdate - NVL(n.lock_period,
(SELECT dixi_trait_values FROM testsch.PDI_HASHIS_MERGEDINFO_DATA WHERE HASH_BATTR_SHA = :"SYS_B_10")) / :"SYS_B_11" / :"SYS_B_12"))
AND target_object_id = :"SYS_B_13" ORDER BY TO_NUMBER(n.name)) OUTPUTS WHERE OUTPUTS.idx between :"SYS_B_14" and :"SYS_B_15"


----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                               |     2 |   364 |   107   (0)| 00:00:01 |
|*  1 |  VIEW                          |                               |     2 |   364 |   107   (0)| 00:00:01 |
|   2 |   COUNT                        |                               |       |       |            |          |
|   3 |    NESTED LOOPS                |                               |     2 |   340 |   107   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS BY INDEX ROWID| PDI_FBI_HASHAH1               |  6719 |   505K|   106   (0)| 00:00:01 |
|*  5 |      INDEX FULL SCAN           | PDI_FBI_NNAME_UTYPE_STA_NPFIX |  3065 |       |     7   (0)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL         | PDI_HASHIS_MERGEDINFO_DATA    |     1 |    28 |     3   (0)| 00:00:01 | --> How the CBO decided cost 3 ? 
|*  7 |     INDEX RANGE SCAN           | IDX_TRNNUMBERB_SMOKE          |     1 |    93 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("OUTPUTS"."IDX"<=10 AND "OUTPUTS"."IDX">=1)
   4 - filter("N"."TYPE"='MOBILE' AND ("N"."OWNER" IS NULL OR INSTR(LOWER("N"."OWNER"),'dna oyj')>0)
              AND NVL("N"."NUMBER_USAGE",'External')='Internal' AND "N"."ASSIGNED_CSN_ID" IS NULL AND
              NVL("N"."TEST_ALLOCATION",'EAI')=NVL(NVL(NULL,"N"."TEST_ALLOCATION"),'EAI') AND "N"."LOCKED_ON" IS NULL
              AND "N"."LOCK_OWNER" IS NULL AND ("N"."LOCK_ID" IS NULL OR
              "N"."LOCK_DATE"<=SYSDATE@!-NVL("N"."LOCK_PERIOD", (SELECT "DIXI_TRAIT_VALUES" FROM
              "TESTSCH"."PDI_HASHIS_MERGEDINFO_DATA" "PDI_HASHIS_MERGEDINFO_DATA" WHERE
              "HASH_BATTR_SHA"='MY_CALL_DURATION'))/60/24))
   5 - access("N"."STATUS"='Available')
       filter("N"."STATUS"='Available')
   6 - filter("HASH_BATTR_SHA"='MY_CALL_DURATION')
   7 - access("TESTID_NAME"='ind.fatdba.application.dss.more.production.BESTID.PDINumberBestSubClass'
               AND "OWNING_OBJECT_ID"="N"."OBJECT_INST_ID" AND "TARGET_OBJECT_ID"=21)

Let's try to understand how the optimizer comes up with the cost numbers, how it derives these values, on what basis and what all it considers during the calculations. I will explain one of the access operations, at Line ID 6 of the plan, where it's doing a TABLE ACCESS FULL on table PDI_HASHIS_MERGEDINFO_DATA with a final cost of 3 associated with it.

So, what is it ? What happened under the hood in the optimizer to arrive at this value of 3 for this operation ?

This is the subquery for which we will be checking the cost calculations within 10053 trace files.

(SELECT dixi_trait_values FROM testsch.PDI_HASHIS_MERGEDINFO_DATA WHERE HASH_BATTR_SHA = :"SYS_B_10")) / :"SYS_B_11" / :"SYS_B_12"))

Let me replace the SYS-generated bind names with their original values; for this I will check the "Peeked values of the binds in SQL statement" section. You can get the same from the 'Predicate Information' section of the execution plan too, but let's stick to the 10053 trace for the moment.

*******************************************
Peeked values of the binds in SQL statement
*******************************************
----- Bind Info (kkscoacd) -----
 Bind#0
  oacdty=01 mxl=128(73) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=01 csi=873 siz=128 off=0
  kxsbbbfp=7f3791745ee0  bln=128  avl=73  flg=09
  value="ind.fatdba.application.dss.more.production.BESTID.PDINumberBestSubClass"
...
......
........

 Bind#10
  oacdty=01 mxl=32(19) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=01 csi=873 siz=32 off=0
  kxsbbbfp=7f3791745d48  bln=32  avl=19  flg=09
  value="MY_CALL_DURATION"
 Bind#11
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f3791745d20  bln=22  avl=02  flg=09
  value=60
 Bind#12
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f3791745cf8  bln=22  avl=02  flg=09
  value=24

Okay, let’s place them in the subquery.

(SELECT dixi_trait_values FROM testsch.PDI_HASHIS_MERGEDINFO_DATA WHERE HASH_BATTR_SHA = 'MY_CALL_DURATION') / 60 / 24

These are the contents of the column.

SQL> SELECT dixi_trait_values FROM testsch.PDI_HASHIS_MERGEDINFO_DATA;

DIXI_TRAIT_VALUES
-----------------
             60
            120
            200
             10
             60

Let’s check for access path analysis for this SQL.

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: PDI_HASHIS_MERGEDINFO_DATA  Alias: PDI_HASHIS_MERGEDINFO_DATA
  #Rows: 5  SSZ: 0  LGR: 0  #Blks:  5  AvgRowLen:  31.00  NEB: 0  ChainCnt:  0.00  SPC: 0  RFL: 0  RNF: 0  CBK: 0  CHR: 0  KQDFLG: 1
  #IMCUs: 0  IMCRowCnt: 0  IMCJournalRowCnt: 0  #IMCBlocks: 0  IMCQuotient: 0.000000
try to generate single-table filter predicates from ORs for query block SEL$3 (#0)
finally: "PDI_HASHIS_MERGEDINFO_DATA"."DIXI_TRAIT_VALUES"=:B1

=======================================
SPD: BEGIN context at query block level
=======================================
Query Block SEL$3 (#0)
Applicable DS directives:
Return code in qosdSetupDirCtx4QB: NODIR
=====================================
SPD: END context at query block level
=====================================
Access path analysis for PDI_HASHIS_MERGEDINFO_DATA
***************************************
SINGLE TABLE ACCESS PATH 
  Single Table Cardinality Estimation for PDI_HASHIS_MERGEDINFO_DATA[PDI_HASHIS_MERGEDINFO_DATA] 
  SPD: Return code in qosdDSDirSetup: NOQBCTX, estType = TABLE
  Column (#1): CONF_ATTR_NAME(VARCHAR2)
    AvgLen: 24 NDV: 5 Nulls: 0 Density: 0.200000
  Table: PDI_HASHIS_MERGEDINFO_DATA  Alias: PDI_HASHIS_MERGEDINFO_DATA
    Card: Original: 5.000000  Rounded: 1  Computed: 1.000000  Non Adjusted: 1.000000
  Scan IO  Cost (Disk) =   3.000000
  Scan CPU Cost (Disk) =   36377.200000
  Total Scan IO  Cost  =   3.000000 (scan (Disk))
                         + 0.000000 (io filter eval) (= 0.000000 (per row) * 5.000000 (#rows))
                       =   3.000000
  Total Scan CPU  Cost =   36377.200000 (scan (Disk))
                         + 250.000000 (cpu filter eval) (= 50.000000 (per row) * 5.000000 (#rows))
                       =   36627.200000
  Access Path: TableScan
    Cost:  3.002419  Resp: 3.002419  Degree: 0
      Cost_io: 3.000000  Cost_cpu: 36627
      Resp_io: 3.000000  Resp_cpu: 36627
  Best:: AccessPath: TableScan
         Cost: 3.002419  Degree: 1  Resp: 3.002419  Card: 1.000000  Bytes: 0.000000

.....
.......

OPTIMIZER STATISTICS AND COMPUTATIONS
PJE:  Bypassed; QB has a single table SEL$3 (#0)
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  PDI_HASHIS_MERGEDINFO_DATA[PDI_HASHIS_MERGEDINFO_DATA]#0
***********************
Best so far:  Table#: 0  cost: 3.002419  card: 1.000000  bytes: 28.000000
***********************

If you check above, you will see it considers both CPU & IO to generate the final cost value, which is what we already knew. Cost is the amount of work the optimizer estimates is needed to run your query via a specific plan, and the optimizer generally prefers lower cost plans. The optimizer's cost model accounts for the IO, CPU, and network resources that will be used by the query.
IO_COST: I/O cost of the operation as estimated by the query optimizer’s approach. The value of this column is proportional to the number of data blocks read by the operation. For statements that use the rule-based approach, this column is null.
CPU_COST: CPU cost of the operation as estimated by the query optimizer’s approach. The value of this column is proportional to the number of machine cycles required for the operation. For statements that use the rule-based approach, this column is null.
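
As a side note, the same cost components are exposed for every plan line in V$SQL_PLAN (or PLAN_TABLE), so you can cross-check them with a simple query along these lines (the SQL ID is a placeholder):

SELECT id, operation, object_name, cost, io_cost, cpu_cost, cardinality, bytes
  FROM v$sql_plan
 WHERE sql_id = '&sql_id'
   AND child_number = 0
 ORDER BY id;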

At the same time it checks the dictionary stats to get table level information, i.e. average row length, blocks, total rows, and uses metrics like NDV (number of distinct values), nulls and density to calculate the original cardinality of 5 (the table's row count) and the rounded cardinality of 1 – with 5 rows and a density (1/NDV) of 0.2, the computed cardinality is 5 × 0.2 = 1, and since this subquery returns only 1 row, 1 is the rounded cardinality.

So, the main elements are the 'Scan IO Cost (Disk)' and 'Scan CPU Cost (Disk)', which together produce the final access path cost of 3.002419: the IO part contributes 3 and the CPU part the small remaining 0.002419, and that is displayed as a rounded cost of 3 in the plan.

That's how the optimizer calculated the value of 3 for this specific access operation; in the same way you can work through any of the other access entries in the plan and their associated costs.

If you want to read more on 10053 traces, I recommend the great paper by Wolfgang Breitling, Jonathan Lewis's book Cost-Based Oracle Fundamentals, & a few posts from Maria Colgan.

Hope It Helped
Prashant Dixit

Posted in Uncategorized | Tagged: , | 1 Comment »

Quarantine a SQL ? This can save from a system performance pandemic …

Posted by FatDBA on July 6, 2021

Hi Guys,

Last weekend I was asked to examine a performance problem where the customer was observing sporadic resource spikes on CPU & PIOs (physical IO), and had no idea how to handle the situation. They had already identified the problematic SQL and asked me to take a look. While doing the initial investigation on the system, I found that Resource Manager was enabled and was killing the ill-behaving SQL every time it tried to breach the limits set on CPU & other resources.

I spent some time understanding the query's behavior and performance, and found this to be a scenario of a SQL plan (PHV) flip, where the query optimizer toggles between a good and a bad plan. This being a time sensitive issue, and with the customer needing a quick fix before I could identify the reason behind the PHV flip, we had to think of a technique to stop executions of the SQL with the bad PHVs. This was an Oracle EE 19c system running on Exadata, so I immediately proposed a solution using the new 'SQL Quarantine' feature, which prevents reuse of an execution plan that was terminated by Resource Manager due to resource limits.

Let me show you how to do that. This is just one use case – you can quarantine a SQL for a specific PHV/execution plan, for all its execution plans, or even through the SQL text. Here I will demonstrate how to quarantine a SQL & all of its execution plans.

Note: This feature is currently available only on Enterprise Edition on Engineered Systems, like Exadata and Exadata Cloud Service, but for this demo, I will be enabling the ‘_exadata_feature_on‘ parameter for demo purposes on my sandbox setup.

** Please don’t touch this parameter in your production environments if you’re not running on Oracle EE on the top of Exadata or Exadata Cloud services.

[oracle@canttowinsec ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 3 23:33:41 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           FALSE                     FALSE                     Exadata Feature On

SQL>
SQL>


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>


SQL> startup
ORACLE instance started.

Total System Global Area  922745216 bytes
Fixed Size                  8903040 bytes
Variable Size             843055104 bytes
Database Buffers           67108864 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL>



SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           TRUE                      TRUE                      Exadata Feature On

SQL>




-- For this demo I have created one table with 1000000 rows. 

SQL> select count(*) from bigtab;

  COUNT(*)
----------
   1000000

Next, check the SQL ID, PHV and text of the SQL for which you want to set up the quarantine, and then create the quarantine configuration for it. You can set it up for a SQL ID, a SQL ID + PHV, or for the SQL text.

SQL> select sql_id, plan_hash_value, sql_text from v$sqlarea where sql_text like '%select * from bigtab%';

SQL_ID        PLAN_HASH_VALUE
------------- ---------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
6fwqzurbc8y7k       441133017
select * from bigtab

SQL>


-- You can create a quarantine configuration by SQL ID (optionally for a specific plan hash value)
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id');
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id', PLAN_HASH_VALUE => '&PLAN_HASH_VALUE');

-- Or you can create one by SQL text
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(SQL_TEXT => to_clob('select count(*) from emp'));



-- This command creates a quarantine configuration for SQL ID '6fwqzurbc8y7k' and all its execution plans.
SQL>
SQL> DECLARE
quarantine_sql VARCHAR2(30);
BEGIN
quarantine_sql :=   DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '6fwqzurbc8y7k');
END;
/  

PL/SQL procedure successfully completed.

SQL>



-- Check if quarantine config/profile is created.

SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time,CREATED, ENABLED from dba_sql_quarantine;


NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           ALWAYS               ALWAYS                         04-JUL-21 12.09.25.567422 AM   YES


Next, we will use the DBMS_SQLQ.ALTER_QUARANTINE procedure to put limits on resource usage, i.e. elapsed time, CPU time, IO (MB), number of physical IO (PIO) requests, and number of logical IO (LIO) requests.

SQL>
SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'CPU_TIME',
       PARAMETER_VALUE => '5');

    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'ELAPSED_TIME',
       PARAMETER_VALUE => '10');
END;
/  2    3    4    5    6    7    8    9   10   11   12

PL/SQL procedure successfully completed.

SQL>


-- let's verify limits that we have set against this SQL Quarantine profile.

NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           10                   5                              04-JUL-21 12.12.41.918609 AM   YES

Next, we will set up the Resource Manager.

-- Create a pending area.
SQL> begin
 dbms_resource_manager.create_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

-- Create a consumer group with name 'DIXITPOC_GROUP'.
SQL> begin
 dbms_resource_manager.create_consumer_group(CONSUMER_GROUP=>'DIXITPOC_GROUP',COMMENT=>'To test SQL quarantine feature for one SQL');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.


-- Now map the user 'DIXIT' to the consumer group; this could be any user through which you will execute the query.
SQL> begin
 dbms_resource_manager.set_consumer_group_mapping(attribute => 'ORACLE_USER',value => 'DIXIT',consumer_group =>'DIXITPOC_GROUP' );
 end;
  /  2    3    4

PL/SQL procedure successfully completed.


-- Create a resource plan with name 'POC_FOR_QUARANTINE'.
SQL> begin
dbms_resource_manager.create_plan(plan => 'POC_FOR_QUARANTINE',comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds');
end;
/  2    3    4

PL/SQL procedure successfully completed.


-- Create a plan directive by allocating resource plan 'POC_FOR_QUARANTINE' to consumer group 'DIXITPOC_GROUP'. I am setting execution limit of 5 seconds for the SQL.
SQL> begin
 dbms_resource_manager.create_plan_directive(
 plan => 'POC_FOR_QUARANTINE',
 group_or_subplan => 'DIXITPOC_GROUP',
 comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds',
 switch_group=>'CANCEL_SQL',
 switch_time => 5,
 switch_estimate=>false);
 end;
 /  2    3    4    5    6    7    8    9   10

PL/SQL procedure successfully completed.




-- Allocate full resources to rest of the sessions which are not part of this plan.
SQL> begin
 dbms_resource_manager.create_plan_directive(PLAN=> 'POC_FOR_QUARANTINE', GROUP_OR_SUBPLAN=>'OTHER_GROUPS',COMMENT=>'leave rest as is');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.



-- Validate and submit pending area.
SQL> begin
 dbms_resource_manager.validate_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL> begin
 dbms_resource_manager.submit_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL>


-- Grant switch privilege to the DIXIT user which will switch it to 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager_privs.grant_switch_consumer_group('DIXIT','DIXITPOC_GROUP',false);
end;
/  2    3    4

PL/SQL procedure successfully completed.

-- Create initial consumer group for DIXIT user for 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager.set_initial_consumer_group('DIXIT','DIXITPOC_GROUP');
end;
/  2    3    4

PL/SQL procedure successfully completed.

SQL>

Next, enable the resource manager at the database level by setting RESOURCE_MANAGER_PLAN to the name of the plan, 'POC_FOR_QUARANTINE'.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string
SQL> 


SQL> alter system set RESOURCE_MANAGER_PLAN = 'POC_FOR_QUARANTINE';

System altered.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string      POC_FOR_QUARANTINE
SQL>

Execute the SQL for the first time and you will get 'active time limit exceeded', courtesy of Resource Manager.

SQL>
SQL> select * from emp;

ERROR at line 2:
ORA-00040: active time limit exceeded - call aborted

Now, if you run the SQL a second time, you will see that your quarantine configuration kicks in and aborts the execution of the SQL before it even runs.

SQL>
SQL> select * from emp;

ERROR at line 2:
ORA-56955: quarantined plan used
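
As a follow-up, you can check which cursors are quarantined and how many executions were avoided, and drop the quarantine configuration once the plan flip is fixed. A small sketch, reusing the quarantine name created above:

-- cursors currently under quarantine and the executions it has saved
SELECT sql_id, plan_hash_value, sql_quarantine, avoided_executions
  FROM v$sql
 WHERE sql_quarantine IS NOT NULL;

-- drop the quarantine configuration when it is no longer needed
BEGIN
  DBMS_SQLQ.DROP_QUARANTINE('SQL_QUARANTINE_2akx3bq44wvc5');
END;
/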

Hope It Helped
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , , , | Leave a Comment »

Getting SQL Net message from dblink, have you tried DRIVING_SITE hint ?

Posted by FatDBA on June 27, 2021

Hi Guys,

Last week I was asked to take a look into a performance problem where the customer DBA reported excessive 'SQL*Net message from dblink' waits in the system. As I didn't have direct access to the box, I asked him to share both AWR and ASH performance reports for the time when they observed the problem. And yes, he was right: there were only a few waits on this event, but with very high average single-execution times. Next I checked the ASH data for the period and was able to identify the SQL (a SELECT) that was frequently waiting on this wait event.
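
For reference, a rough way of pulling such a suspect from ASH could be something like this (needs the Diagnostics Pack license; the one hour window is illustrative):

SELECT sql_id, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE event = 'SQL*Net message from dblink'
   AND sample_time > SYSDATE - 1/24
 GROUP BY sql_id
 ORDER BY samples DESC;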

About this event: as you guys know, SQL*Net messages are classified as network class events, and this one happens when some part of the final dataset comes from a remote database accessed via a DB link. Alright, so we know the problem and we have the problematic SQL; let's simulate the problem and the solution.

I will first create the DB link that I will use to access the remote data, which lives in a different database on a different server/host, and then create sample tables with some test data on both Site A and Site B.

Reference used:
BIGTAB: Table on Site 1, local site.
BIGTAB2: Table on site 2, remote location

SQL>
SQL> CREATE DATABASE LINK dixitdlink1
  2     CONNECT TO dixit IDENTIFIED BY oracle90
   USING '(DESCRIPTION=
            (ADDRESS = (PROTOCOL = TCP)(HOST = canttowinsec.quebecdomain)(PORT = 1521))
            (CONNECT_DATA=(SERVICE_NAME=dhavaldb.quebecdomain))
          )';  

Database link created.

SQL>


-- On source database, on local host.
CREATE TABLE BIGTAB (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.

INSERT INTO BIGTAB (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.


SQL> delete from prashant.BIGTAB where rownum < 60000;

59999 rows deleted.




-- This I have created on target database, on a remote host.
CREATE TABLE BIGTAB2 (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.


INSERT INTO BIGTAB2 (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.

Alright, the test data is created on both sites, and in order to mimic the exact case that I'd faced, I intentionally deleted 59999 rows from the table on site 1. So now we have table BIGTAB with 7735 records locally, and on the remote database we have table BIGTAB2 with 67447 rows. Let's execute a few queries and do an inner join on the two tables.

Test 1: With no hint or tuning/tweaking of any kind. This brings all the matching data from the remote site to the local site and then performs the join here; the same behavior can also be forced by putting the hint on the local (smaller) table, i.e. /*+ DRIVING_SITE(BIGTAB) */

SQL> select * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
........
............
...............
.................


Execution Plan
----------------------------------------------------------
Plan hash value: 2705338834

----------------------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN         |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL| BIGTAB  |  6948 |   237K|    68   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | BIGTAB2 | 19930 |   681K|    76   (0)| 00:00:01 | DIXIT~ | R->S |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ADATE"="DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT "ID","WEIGHT","DATEOFREGISTER" FROM "DIXIT"."BIGTAB2" "BIGTAB2"
       (accessing 'DIXITDLINK1.ONTADOMAIN' )


Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

So, this is what was happening on that day, on that system, with that query waiting on 'SQL*Net message from dblink'. Now is the right time to introduce the DRIVING_SITE hint; I have been experimenting with this hint for a long time, maybe since my 9i days.

This hint is useful for distributed queries to force the optimizer to use a particular site as the driving site; in other words, it instructs the optimizer to execute the query at a different site than the one the database would pick. In the previous test case, all rows from the remote host matching the condition were sent to the local site and the join was executed locally.

Now, I am going to use DRIVING_SITE (mentioning the name of the remote table) and force the optimizer to perform the join at the remote site, so the query will be executed there and only the result set is returned to the local site.

So, you should try with the DRIVING_SITE hint on the site where the huge table resides!

Test 2: With the DRIVING_SITE hint on the remote table BIGTAB2, which is big and has 67447 records.

SQL> select /*+ DRIVING_SITE(BIGTAB2) */ * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
.....
.......
........
..........



Execution Plan
----------------------------------------------------------
Plan hash value: 2214131741

--------------------------------------------------------------------------------------------------
| Id  | Operation              | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN             |         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|   2 |   REMOTE               | BIGTAB  |    82 |  2870 |     2   (0)| 00:00:01 |      ! | R->S |
|   3 |   TABLE ACCESS FULL    | BIGTAB2 | 53126 |  1815K|    68   (0)| 00:00:01 | DHAVA~ |      |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A2"."ADATE"="A1"."DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   2 - SELECT "ID","WEIGHT","ADATE" FROM "PRASHANT"."BIGTAB" "A2" (accessing '!' )


Note
-----
   - fully remote statement
   - dynamic statistics used: dynamic sampling (level=2)

Look at the cost, rows processed and bytes (data processed): all of them are reduced to a great extent. Of course you don't see much elapsed time difference here in this example, as the dataset is small and the query is simple, but try it yourself on a bigger case and see the magic!

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »

 