Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Real Application Testing capture and replay in a PDB, a great addition made in 19c

Posted by FatDBA on November 20, 2022

I have used RAT many times in the past, and it is no doubt the best tool to simulate Oracle database workloads. It empowers us, the DBAs, to make proactive changes while knowing exactly what to expect when a change is implemented, and it minimizes the likelihood of any negative repercussions.

Once again I got a chance to use it during a migration project where we moved to a Multitenant 19.15 environment. Starting with Database Release 19c, you can capture and replay the workload from within an individual pluggable database (PDB). This enhancement enables you to capture and replay workloads at the PDB level, which leads to better testing, less downtime, and more effective and efficient change control. Earlier this was not possible, as multitenant databases could only be captured and replayed at the root multitenant container database (CDB) level.

Today's post shows how to do that using a short demo on one of my test builds.

-- For testing purposes, a short shell script to generate the workload we will capture. 
[oracle@oracleontario ~]$cat pdbtest_fatdba.sh
while sleep 1; do
sqlplus <USERNAME1>/<PASSWORD1>@<PDB1_CONNECTION_STRING> &>/dev/null <<EOF
set head off
set pages 0
select count(*) from dixit.bigtab;
exit
EOF
done




-- Create a capture directory on the source database server. This is where all your capture-specific files will reside. 
[oracle@oracleontario ~]$sqlplus <USERNAME1>/<PASSWORD1>@<PDB1_CONNECTION_STRING>

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Nov 19 09:05:12 2022
Version 19.15.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Sat Nov 19 2022 09:03:41 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0

SQL> CREATE or replace DIRECTORY mycapture_dir AS '/ontadomain/home/capture_pdb_fatdbatest';

Directory created.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0


-- Let's start the test: call the shell script we prepared at the beginning. 
[oracle@oracleontario ~]$sh ./pdbtest_fatdba.sh&
[1] 2189


And now capture the workload: use DBMS_WORKLOAD_CAPTURE.start_capture to collect the workload data. Once the capture completes, copy all the files from the source to the target server, where they will be replayed (see the copy sketch after this capture session).

[oracle@oracleontario ~]$sqlplus <USERNAME1>/<PASSWORD1>@<PDB1_CONNECTION_STRING>

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Nov 19 09:05:41 2022
Version 19.15.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Sat Nov 19 2022 09:05:39 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0

SQL> BEGIN
DBMS_WORKLOAD_CAPTURE.start_capture (name => 'PDB_666', dir => 'mycapture_dir', duration => 900);
END;
/

PL/SQL procedure successfully completed.

SQL> col name for a10
SQL> col status for a22
SQL> SELECT id, name, STATUS FROM dba_workload_captures WHERE name='PDB_666';

        ID NAME       STATUS
---------- ---------- --------------------
        31 PDB_666    IN PROGRESS

SQL> BEGIN
DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/  

PL/SQL procedure successfully completed.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0
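
Before moving to the replay phase, the capture files need to reach the target server. Here is a minimal sketch, assuming passwordless SSH between the hosts; the <TARGET_HOST> placeholder is mine, and the paths are the capture and replay directories from this demo:

-- Copy the capture files from the source capture directory to the target replay directory
[oracle@oracleontario ~]$scp -r /ontadomain/home/capture_pdb_fatdbatest/* oracle@<TARGET_HOST>:/ontadomain/home/replay_pdb_fatdbatest/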

Next we have to create the replay directory, process the captured files and initialize the replay.

[oracle@oracleontario ~]$sqlplus <USERNAME1>/<PASSWORD1>@<PDB1_CONNECTION_STRING>

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Mar 1 01:20:18 2022
Version 19.15.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Sat Nov 19 2022 09:06:37 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0

SQL> CREATE or REPLACE DIRECTORY replaydir_fatdbatest1 AS '/ontadomain/home/replay_pdb_fatdbatest';

Directory created.

SQL>




-- Process all the files that we captured during the capture step. 
SQL> BEGIN
DBMS_WORKLOAD_REPLAY.process_capture('replaydir_fatdbatest1');
END;
/  

PL/SQL procedure successfully completed.

SQL>





-- Initialize the replay once the above processing step has successfully completed. 
SQL> BEGIN
DBMS_WORKLOAD_REPLAY.initialize_replay (replay_name => 'REPLAY_FATDBA1', replay_dir => 'replaydir_fatdbatest1');
END;
/  

PL/SQL procedure successfully completed.

SQL>

Next we have to remap connections, prepare the replay process and start the replay clients.

SQL> select conn_id,schedule_cap_id,capture_conn,replay_conn  from dba_workload_connection_map;

   CONN_ID SCHEDULE_CAP_ID CAPTURE_CONN                                       REPLAY_CONN
---------- --------------- -------------------------------------------------- --------------------------------------------------
         1                 (DESCRIPTION=(CONNECT_DATA=(SERVER=DEDICATED)(SERV
                           ICE_NAME=<PDB_SERVICE_NAME>)(CID=(PROGRAM=sqlplus)
                           (HOST=<HOSTNAME>)(USER=oracle)))(A
                           DDRESS=(PROTOCOL=TCP)(HOST=<IP_ADDRESS>)(PORT=<port>
                           )(HOSTNAME=<HOSTNAME>)))


SQL> BEGIN
  DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION (connection_id => 1,
                           replay_connection => '(DESCRIPTION=(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=xxxx)(CID=(PROGRAM=sqlplus)(HOST=<HOSTNAME>)(USER=xxxxx)))(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx)(PORT=xxxx)(HOSTNAME=xxxx)))');
END;
/

PL/SQL procedure successfully completed.




-- Prepare your replay. 
SQL> BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
END;
/

PL/SQL procedure successfully completed.

SQL>



-- Start the replay client. 
[oracle@oracleontario ~]$wrc <USERNAME1>/<PASSWORD1>@<PDB1_CONNECTION_STRING> replaydir=/ontadomain/home/replay_pdb_fatdbatest mode=replay CONNECTION_OVERRIDE=TRUE

Workload Replay Client: Release 19.15.0.0.0 - Production on Sun Mar 1 01:29:11 2022

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.


Wait for the replay to start (01:29:11)

Start the replay process to fire the captured workload against the target, and once it completes, generate the replay report or any comparison reports.

SQL> BEGIN
DBMS_WORKLOAD_REPLAY.start_replay;
END;
/  

PL/SQL procedure successfully completed.

SQL>

-- Verify its status
SQL> COL NAME FOR A10
SQL> SELECT id, name, STATUS FROM dba_workload_replays where name = 'REPLAY_FATDBA1';

        ID NAME            STATUS
---------- --------------- ----------------------------------------
         1 REPLAY_FATDBA1  IN PROGRESS

SQL>
SQL> SELECT id, name, STATUS FROM dba_workload_replays where name = 'REPLAY_FATDBA1';

        ID NAME            STATUS
---------- --------------- ----------------------------------------
         1 REPLAY_FATDBA1  COMPLETED

SQL>




-- Now that it's completed, we can generate the replay report or other reports like the compare report. 
SQL> set long 50000
set pages 50000
set lines 500
SQL> select dbms_workload_replay.report (replay_id => 1, format=> 'TEXT') from dual;

DBMS_WORKLOAD_REPLAY.REPORT(REPLAY_ID=>1,FORMAT=>'TEXT')
--------------------------------------------------------------------------------
DB Replay Report for REPLAY_FATDBA1
--------------------------------------------------------------------------------
-------------------

-----------------------------------------------------------------------------
| DB Name  | DB Id      | Release     | RAC | Replay Name    | Replay Status |
-----------------------------------------------------------------------------
| FATDBA12 | 1218318131 | 19.15.0.0.0 | NO  | REPLAY_FATDBA1 | COMPLETED     |
-----------------------------------------------------------------------------
... ...
..........
.............
......................
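
Since the post mentions compare reports: for a capture-versus-replay comparison you can use DBMS_WORKLOAD_REPLAY.COMPARE_PERIOD_REPORT. A sketch (passing NULL as the second replay ID compares the replay against its own capture):

SQL> var myreport clob
SQL> BEGIN
  DBMS_WORKLOAD_REPLAY.compare_period_report (
            replay_id1 => 1,
            replay_id2 => NULL,
            format     => DBMS_WORKLOAD_REPLAY.TYPE_HTML,
            result     => :myreport);
END;
/
SQL> print myreport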

Hope It Helped!
Prashant Dixit



TRANLOGOPTIONS, a crucial performance parameter in GoldenGate 21c, a BUG and a quick workaround …

Posted by FatDBA on October 23, 2022

One of the crucial performance parameters for the GoldenGate Extract process is TRANLOGOPTIONS, which controls the way Extract interacts with the transaction log. You can use multiple TRANLOGOPTIONS statements in the same parameter file, or you can specify multiple options within the same TRANLOGOPTIONS statement.

There are a lot of performance-related options, i.e. INCLUDEAUX (AUX trails when reading audit trails), DBLOGREADERBUFSIZE etc., that you can use with the TRANLOGOPTIONS parameter, but recently I tried the tuning option PERFORMANCEPROFILE with our medium-intensity workload. It can be set to HIGH or MEDIUM (the default). It helps achieve better performance by grouping the parameters that affect performance: once the performance profile is set, this option automatically configures the applicable parameters to achieve the desired throughput and latency.
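
For context, here is a minimal Extract parameter file sketch showing where the option sits (the extract name, credential alias and trail path below are made up for illustration):

-- Extract parameter file (sketch)
EXTRACT ext1
USERIDALIAS ggadmin
EXTTRAIL ./dirdat/et
-- group the performance-affecting parameters under a high-throughput profile
TRANLOGOPTIONS PERFORMANCEPROFILE HIGH
TABLE fatdba.*;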

We used this parameter in one of our 21c (21.7.0) GG installations with TRANLOGOPTIONS PERFORMANCEPROFILE HIGH, but immediately started seeing spikes in the Extract's latency. This was likely because HIGH increases the Extract's read buffer size to 8 MB, and the rule for purging the Extract read buffer is either when the buffer is full or when there are no ingress records for 0.2 seconds. Therefore, any uninterrupted workload with an Extract consumption rate below 8 MB will cause integrated Extract latency to exceed 1 second.

We checked with Oracle support, and as a quick temporary solution they suggested not using the PERFORMANCEPROFILE parameter with the HIGH flag when the Extract consumption/intake rates are below a specific value, such as ~15 MB/sec, in order to get ~0.5 second Extract response times. Hence we set the buffer size to one-third of the redo generation rate in MB/sec to get ~0.3 second maximum Extract latency. They also logged this as Bug 33772499 for GG 21c, for the July 2022 release.

Hope It Helped!
Prashant Dixit


Differences I have noticed in the Query Block Registry section of an execution plan between Oracle 19c and 21c

Posted by FatDBA on October 10, 2022

Hi Guys,

Today's post is a quick one about a difference I noticed in one of the extended execution plan sections, 'Query Block Registry', between Oracle 19c (19.8) and Oracle 21c (21.3). I am not going to explain query blocks themselves here, as I've already written a few blog posts on those topics in the past; this one is about the difference you will observe between the two database versions for the QBR section in execution plans.

First I am going to use the option/flag 'qbregistry' (for Query Block Registry info) in Oracle Database 19.8, and next will repeat the same steps in Oracle 21.3. Query block registry information can also be collected from 10053 optimizer traces, but I always notice that the one in CBO traces is more repetitive than the concise version you get through execution plans with the 'qbregistry' option.

So, I have already set the playground: for testing purposes I created two sample tables and wrote two outer-join queries, one for each table, then combined their results using UNION ALL (a sketch of the setup is below).
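
For reference, a minimal sketch of the two sample tables (my assumed column layout; only the toy_id and brick_id join columns matter for what follows):

-- Two small sample tables joined on toy_id = brick_id (sketch)
SQL> create table toys   (toy_id   number, toy_name     varchar2(30));
SQL> create table bricks (brick_id number, brick_colour varchar2(30));
SQL> insert into toys values (1, 'Miss Snuggles');
SQL> insert into bricks values (1, 'red');
-- ... a few more rows in each table, then commit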

--
-- In Oracle 19.16 Database
--
SQL*Plus: Release 19.0.0.0.0 - Production on Sun Oct 9 03:17:19 2022
Version 19.8.0.0.0

SQL> explain plan for select /*+ GATHER_PLAN_STATISTICS */ *
from   toys, bricks
where  toy_id = brick_id (+)
union all
select *
from   toys, bricks
where  toy_id (+) = brick_id
and    toy_id is null;  

Explained.


SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'+alias +outline +qbregistry'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 731550672

-------------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |     6 |   354 |     8   (0)| 00:00:01 |
|   1 |  UNION-ALL           |        |       |       |            |          |
|*  2 |   HASH JOIN OUTER    |        |     3 |   177 |     4   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL | TOYS   |     3 |    96 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | BRICKS |     3 |    81 |     2   (0)| 00:00:01 |
|*  5 |   FILTER             |        |       |       |            |          |
|*  6 |    HASH JOIN OUTER   |        |     3 |   177 |     4   (0)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| BRICKS |     3 |    81 |     2   (0)| 00:00:01 |
|   8 |     TABLE ACCESS FULL| TOYS   |     3 |    96 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SET$1
   2 - SEL$1
   3 - SEL$1 / TOYS@SEL$1
   4 - SEL$1 / BRICKS@SEL$1
   5 - SEL$2
   7 - SEL$2 / BRICKS@SEL$2
   8 - SEL$2 / TOYS@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH(@"SEL$1" "BRICKS"@"SEL$1")
      LEADING(@"SEL$1" "TOYS"@"SEL$1" "BRICKS"@"SEL$1")
      FULL(@"SEL$1" "BRICKS"@"SEL$1")
      FULL(@"SEL$1" "TOYS"@"SEL$1")
      USE_HASH(@"SEL$2" "TOYS"@"SEL$2")
      LEADING(@"SEL$2" "BRICKS"@"SEL$2" "TOYS"@"SEL$2")
      FULL(@"SEL$2" "TOYS"@"SEL$2")
      FULL(@"SEL$2" "BRICKS"@"SEL$2")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("TOY_ID"="BRICK_ID"(+))
   5 - filter("TOY_ID" IS NULL)
   6 - access("TOY_ID"(+)="BRICK_ID")

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Query Block Registry:
---------------------
<q o="2" f="y"><n><![CDATA[SET$1]]></n><f><h><t><![CDATA[NULL_HALIAS]]></t><s><![CDATA[SET$1]]></s></h></f></q>
<q o="2" f="y"><n><![CDATA[SEL$1]]></n><f><h><t><![CDATA[BRICKS]]></t><s><![CDATA[SEL$1]]></s></h><h><t><![CDATA[TOYS]]></t><s><![CDATA[SEL$1]]> </s></h></f></q>
<q o="2" f="y"><n><![CDATA[SEL$2]]></n><f><h><t><![CDATA[BRICKS]]></t><s><![CDATA[SEL$2]]></s></h><h><t><![CDATA[TOYS]]></t><s><![CDATA[SEL$2]]> </s></h></f></q>

73 rows selected.

SQL>
SQL> 

The above 'Query Block Registry' XML translates to something like this:
SET$1 NULL_HALIAS|SET$1
SEL$1 BRICKS|SEL$1|TOYS|SEL$1
SEL$2 BRICKS|SEL$2|TOYS|SEL$2

Considering we have two SELECT statements, one for each table, the optimizer has internally created two query blocks, SEL$1 and SEL$2, one for each SELECT. Here it's using a hint alias name 'NULL_HALIAS', which points to both of the SELECT statements used in the original query.

Next, let's execute the same statement in Oracle 21c (21.3.0) and see the difference in the QBR section.

--
-- In Oracle 21.3 Database
--
-- Skipping few sections to have more clarity about discussed topic
SQL*Plus: Release 21.0.0.0.0 - Production on Sat Oct 8 23:57:12 2022
Version 21.3.0.0.0

SQL>  select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'+alias +outline +qbregistry'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 731550672
...
.....
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SET$1
   2 - SEL$1
   3 - SEL$1 / "TOYS"@"SEL$1"
   4 - SEL$1 / "BRICKS"@"SEL$1"
   5 - SEL$2
   7 - SEL$2 / "BRICKS"@"SEL$2"
   8 - SEL$2 / "TOYS"@"SEL$2"

Outline Data
-------------
......
Predicate Information (identified by operation id):
---------------------------------------------------
.....

Query Block Registry:
---------------------

  SEL$1 (PARSER) [FINAL]
  SEL$2 (PARSER) [FINAL]
  SET$1 (PARSER) [FINAL]

SQL>

Here with 21c (21.3), the first thing to notice is that the output no longer comes in the form of XML. The curious part of the entire output is the 'Query Block Registry' section, where [FINAL] marks the transformation chosen by the CBO. This confirms that time was spent on a query block that was selected for the optimal plan.

That’s it, just a small tidbit this time! 🙂

Hope It Helped!
Prashant Dixit


A cool perl script to generate AWR report time series in any given timeframe …

Posted by FatDBA on October 2, 2022

Hi All,

Recently I was doing an analysis on a slothful database where I had to generate multiple AWR reports to cover almost 12 hours of the problem period. I usually go with a few SQL scripts or AWR generator tools for my trend analysis, but this time I was a little lucky and made an unexpected and fortunate discovery on Metalink: Doc ID 2857522.1, which explains an Oracle-provided perl script to generate an AWR report time series for any given timeframe. The script works with RDBMS 12.1 and later.

The script generates all the AWR reports with [begin snapid : end snapid] equal to [n, n+1], with n falling into a given time interval. The script is very handy and interactive: it gives you the option to choose between standard text or HTML format and the report type (RAC or non-RAC reports). You can call it in either command-line or interactive mode.

The script generates a time series of AWR reports for a given database in an AWR repository. It connects to the database via the ORACLE_SID on the DB server or via a TNS alias; to connect via the ORACLE_SID, simply leave the username, password and TNS alias unspecified. The frequency option follows the AWR snapshot interval: to get sixty-minute time-series reports, leave frequency blank if AWR snapshots run every 60 minutes (the default AWR setting), set frequency to 2 if snapshots run every 30 minutes, 4 if they run every 15 minutes, and so on. Leaving frequency blank always generates a series based on the AWR snapshot frequency. All reports generated during execution are saved in the dumpdir directory; if the directory does not exist, it is created automatically.

By default the AWR reports are generated in text format. To generate pluggable database AWR reports, connect to the database by specifying the username, password and TNS alias of the pluggable database.

-- Call perl script awrdmp.pl to run the AWR extraction.
[oracle@fatdba ~]$ perl ./awrdmp.pl 
Enter usrname: - 
Enter password: - 
Enter tnsalias: - 
Enter frequency - 
Enter mode
(text/html) - text
CONNECTED AS SYSDBA  
RDBMS VERSION: 19.0.0.0.0

---- ---------- ------- ---------- ---------- -------------------- ------
NUM       DBID  INSTID     DBNAME   INSTNAME              MACHINE CONTID
---- ---------- ------- ---------- ---------- -------------------- ------
0 2511273110       2      DIXITD       fat2             racnode2      0
1 2511273110       1      DIXITD       fat1             racnode1      0

Enter database num: [0,1] -: 0
 0 2511273110       2      DIXITD       fat2             racnode2
RANGE AVAILABLE IN REPOSITORY FOR DBID 2511273110 INST 2:
------------------------------------------------------
[191 04-SEP-22 07.58.34.180 AM : 420 05-SEP-22 06.28.18.307 AM] 
Enter the minimum date interval (DD/MM/YYYY) -: 04/09/2022
Enter the maximum date interval (DD/MM/YYYY) -: 05/09/2022
GENERATING FILES
[  12 %] writing file : report_2_DIXITD_191_192.text 



--
--
--
-- In case if want to execute it in command line format.
perl awrdmp.pl --batch --freq 1 --instid 1 --dbid 2511273110  --dbn DIXITD --begin 04/09/2022 --end 05/09/2022 --rac --mode html


--
--
--
-- Output under dumpdir directory.
ls -ltr ./dumpdir
[...]
-rw-r--r-- 1 oracle oinstall 145147 Sep 05 14:16 report_1_DIXITD_196_197.text
-rw-r--r-- 1 oracle oinstall 159775 Sep 05 14:16 report_1_DIXITD_197_198.text
-rw-r--r-- 1 oracle oinstall 157100 Sep 05 14:16 report_1_DIXITD_198_199.text
-rw-r--r-- 1 oracle oinstall 148216 Sep 05 14:16 report_1_DIXITD_199_200.text
-rw-r--r-- 1 oracle oinstall 144003 Sep 05 14:16 report_1_DIXITD_200_201.text
-rw-r--r-- 1 oracle oinstall 146216 Sep 05 14:16 report_1_DIXITD_201_202.text

Hope It Helped!
Prashant Dixit


Are you looking for a method to stop automatic SQL quarantine without disabling the entire SQL Quarantine feature ? Welcome to Oracle 21c …

Posted by FatDBA on September 23, 2022

Hi All,

Recently I was working on a 21c database for a POC where at one point I wanted to disable the automatic creation of SQL Quarantine configurations, but without disabling the entire statement quarantine feature. This 21.3.0.0.0 database had the Resource Manager enabled, and one of the SQLs was taking a long time to complete, longer than the allowed directive limits on IO and CPU time. As expected, the SQL was killed with the message "ORA-00040: active time limit exceeded - call aborted" and its SQL plan was quarantined. I wanted to stop or disable the auto-creation of SQL Quarantine configurations for the SQL in question after RM terminated it.

If you want to read more about SQL Quarantine, please click this link to my earlier post with a demo about it.

I remember in 19c there wasn't any way to achieve that; you could only regulate the behavior using two underscore parameters, _quarantine_enabled and _optimizer_quarantine_sql. Oracle 21c has introduced two new parameters to control the behavior of SQL Quarantine, and they solve exactly this issue.

The first one is optimizer_capture_sql_quarantine: if set to FALSE, it disables the automatic creation of SQL Quarantine configurations after RM terminates a SQL query execution. This is FALSE by default.

The second parameter is optimizer_use_sql_quarantine: if set to FALSE, it disables the use of existing SQL Quarantine configurations in the database. This parameter determines whether the optimizer considers SQL Quarantine configurations when choosing an execution plan for a SQL statement. It is TRUE by default, thereby allowing users to manually create and use SQL Quarantine configurations (see the sketch after the demo below).

Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

-- Default Setting
SQL> show parameter OPTIMIZER_CAPTURE_SQL_QUARANTINE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_capture_sql_quarantine     boolean     FALSE
SQL>
SQL>

-- Default Setting
SQL> sho parameter OPTIMIZER_USE_SQL_QUARANTINE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_use_sql_quarantine         boolean     TRUE
SQL>
SQL>
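
Both parameters are dynamic, so if you ever need to flip them explicitly, something along these lines should do; a sketch, not part of the original POC transcript:

-- Keep auto-creation of quarantine configurations off (the default), while still honoring existing ones
SQL> alter system set optimizer_capture_sql_quarantine = FALSE;
SQL> alter system set optimizer_use_sql_quarantine = TRUE;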

Hope It Helped!
Prashant Dixit


A new 21c dynamic view securefile_shrink, and I guess I have identified a new BUG in 21.3

Posted by FatDBA on September 1, 2022

Recently I was doing a POC on Oracle 21c where I had to shrink a securefile LOB column, and I was aware of a new dynamic view added in Oracle 21c called V$SECUREFILE_SHRINK, which is very useful for monitoring securefile shrink operations. It gives you some great details like start time, end time, status of the operation etc. I shrank the required securefile LOB column of a table and was keen to see what was captured by V$SECUREFILE_SHRINK … but it was empty!! I got 'no rows selected'. Why is that??

Let me try to replicate the scenario and explain what I was doing that day.

-- Let me create a test scenario
-- Will create a brand new playground to test all things
[oracle@witnessalberta ~]$ !sq
sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Sun Aug 28 10:50:26 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL>
SQL>
SQL>
SQL> CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY oracle90 CREATE_FILE_DEST='/opt/oracle/oradata';

Pluggable database created.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB1                       MOUNTED
         4 PDB2                           MOUNTED
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;

Pluggable database altered.

SQL>  ALTER SESSION SET CONTAINER = PDB2;

Session altered.

SQL> SHOW CON_NAME;

CON_NAME
------------------------------
PDB2

SQL>
SQL> conn fatdba/oracle90@PDB2
Connected.
SQL>
SQL>
SQL>
SQL> show user
USER is "FATDBA"
SQL>
SQL>
SQL>
SQL>

-- Created a Table with a SECUREFILE LOB 
SQL> CREATE TABLE secure_file_tab (rid  NUMBER(5), bcol BLOB) LOB (bcol) STORE AS SECUREFILE bcol_lob 
(TABLESPACE users DISABLE  STORAGE IN ROW CHUNK 8192 RETENTION MIN 3600 KEEP_DUPLICATES NOCOMPRESS DECRYPT CACHE READS); 

Table created.

SQL>

SQL>
-- Inserted some 100000 rows into the table
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
...
......
........
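
For reference, a compact equivalent of the repeated inserts above (a sketch):

begin
  -- load 100000 rows with a small BLOB value each
  for i in 1 .. 100000 loop
    insert into secure_file_tab values (101, utl_raw.cast_to_raw('hello, this is the first review'));
  end loop;
  commit;
end;
/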

SQL> select count(*) from secure_file_tab;

  COUNT(*)
----------
    100001

SQL>
SQL>
-- Let's check the size of the table.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name='SECURE_FILE_TAB';

SEGMENT_NAME         BYTES/1024
-------------------- ----------
SECURE_FILE_TAB      4096

-- Have deleted all rows from the table
SQL> delete from secure_file_tab where RID=101;

100000 rows deleted.

SQL> COMMIT;

SQL> select count(*) from secure_file_tab;

COUNT(*)
----------
1

-- Gathered table stats
SQL> exec dbms_stats.gather_table_stats(null, 'secure_file_tab');

PL/SQL procedure successfully completed.


SQL> select segment_name, bytes/1024 from dba_segments where segment_name='SECURE_FILE_TAB';

SEGMENT_NAME         BYTES/1024
-------------------- ----------
SECURE_FILE_TAB      113



SQL> alter table secure_file_tab modify lob(bcol) (shrink space);

Table altered.

Now it's time to check whether anything was captured in the v$securefile_shrink or gv$securefile_shrink dynamic views about the securefile shrink operation.

-- Lets check if anything captured by the view
SQL> desc v$securefile_shrink
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LOB_OBJD                                           NUMBER
 SHRINK_STATUS                                      VARCHAR2(40)
 START_TIME                                         TIMESTAMP(3) WITH TIME ZONE
 END_TIME                                           TIMESTAMP(3) WITH TIME ZONE
 BLOCKS_MOVED                                       NUMBER
 BLOCKS_FREED                                       NUMBER
 BLOCKS_ALLOCATED                                   NUMBER
 EXTENTS_ALLOCATED                                  NUMBER
 EXTENTS_FREED                                      NUMBER
 EXTENTS_SEALED                                     NUMBER
 CON_ID                                             NUMBER


SQL> select * from v$securefile_shrink;

no rows selected

SQL> select * from gv$securefile_shrink;

no rows selected

SQL>

Nothing!!! Why ? Why ? Let me try another way (MOVE LOB) and see if that populates anything in v$securefile_shrink or gv$securefile_shrink. I repopulated the same table with 100000 rows, deleted all rows from the table, committed the changes, regathered stats and verified the table size, same as in Test 1.

-- Changed the LOB SHRINK technique this time.
SQL> ALTER TABLE secure_file_tab MOVE LOB(BCOL) STORE AS (TABLESPACE test);

Table altered.


-- Lets check if anything captured by the view
SQL> select * from v$securefile_shrink;

no rows selected

SQL> select * from gv$securefile_shrink;

no rows selected

SQL>

Nothing!!! Why ? Why ? Let me try one more way (SHRINK SPACE CASCADE) and see if that populates anything in v$securefile_shrink or gv$securefile_shrink. Again I repopulated the same table with 100000 rows, deleted all rows from the table, committed the changes, regathered stats and verified the table size, same as in Test 1.

-- Let me try SHRINK SPACE CASCADE this time
SQL> alter table secure_file_tab MODIFY LOB(BCOL) (SHRINK SPACE CASCADE);

Table altered.

-- Lets check if anything captured by the view
SQL> select * from v$securefile_shrink;

no rows selected

SQL> select * from gv$securefile_shrink;

no rows selected

SQL>

Nothing!!! Why ? Why ? This is super strange now. Finally, I decided to check with Oracle support and asked them to try it at their end on 21c, and at their end too the view was not getting populated. I guess I have identified a BUG in 21.3.0.0.0!! 🙂 🙂 Currently the case is with the development team and they are taking a look at this buggy behavior on Oracle 21.5 on Red Hat Linux 7 x86_64 … Let's see how it goes from here; I will post the solution or the cause behind this ill behavior soon.

Hope It Helped!
Prashant Dixit


What is a _FIX_CONTROL & DBMS_OPTIM_BUNDLE in Oracle ?

Posted by FatDBA on July 17, 2022

Lately I was in a discussion with one of my friends who was facing an issue with an Oracle 19c database. The vendor asked him to apply a patch to fix the problem, but he did not want to apply that single patch because their Oracle homes were shared and he didn't want to increase the complexity of their patching cycles. Later on, Oracle suggested they try a workaround that requires a setting using fix controls.

Many times Oracle recommends setting a fix control for a bug fix, but what exactly are they? What is their purpose? And what tools and methods are there to control these bug fixes? This post explains all of that in detail.

So, what are they? Fix controls are bug-fix control parameters introduced in 10.2, and they are typically used to enable/disable certain bug fixes in the Oracle database. You cannot pull back just any patch: the fix you are trying to toggle must expose a _FIX_CONTROL option, and it must be visible in the V$SYSTEM_FIX_CONTROL view.

Let's understand this using a case where mview predicate pushdown was not happening due to a wrong cardinality estimate on a production system running 12.1.0.2. The optimizer was rejecting join predicate pushdown (JPPD) transformations, which prevented the view from being joined with an index-based nested-loop join method and caused issues. This was all happening due to bug 21802552. Let's check whether the bug number is present in the fix control views and what its status is.

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          1 correct cardinality adjusted by DS

-- You can get similar information using DBMS_SQLDIAG.GET_FIX_CONTROL(BUG NUMBER) proc as well.

So, its there in the view’s output and its enabled (value 1), and we can turn it off, lets do it. A proper syntax of using them is given below.

-- To enable:
"_fix_control"='Bugno:ON'    (OR)   "_fix_control"="Bugno:1"

-- To disable:
"_fix_control"='Bugno:OFF'  (OR)   "_fix_control"="Bugno:0"


SQL> ALTER SYSTEM SET "_fix_control" = '21802552:OFF';

System altered.

SQL>

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          0 correct cardinality adjusted by DS



-- same was recorded in alert log file as well

2022-07-16T09:04:02.371313-04:00
ALTER SYSTEM SET _fix_control='21802552:OFF' SCOPE=BOTH;

You can do the same using the dbms_optim_bundle.set_fix_controls procedure, introduced in 12.1.0.2 to implement Oracle's 'Automatic Fix Control Persistence' framework. Let's try the same using said package.

-- This will set given _fix_controls in scope=BOTH on all instances
-- Let's enable it again before we disable it once more
SQL> exec dbms_optim_bundle.set_fix_controls('21802552:1','*', 'BOTH', 'NO');

PL/SQL procedure successfully completed.

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          1 correct cardinality adjusted by DS


-- Lets roll it back
SQL> exec dbms_optim_bundle.set_fix_controls('21802552:0','*', 'BOTH', 'NO');

PL/SQL procedure successfully completed.

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          0 correct cardinality adjusted by DS

--
-- Entry in parameter file made by the dbms_optim_bundle package for fix control
*._fix_control='21802552:0'#added through dbms_optim_bundle package


Hope It Helped!
Prashant Dixit


How to trace DBMS_STATS and see what is under the hood of a slow, sluggish stats collection …

Posted by FatDBA on July 10, 2022

Recently, while working on a slow stats-gathering case, someone asked if there is a way to know what's happening beneath the surface: what flags and calculations does DBMS_STATS use internally while it is still running on the front end? Yes, there is a way! As with most Oracle utilities, DBMS_STATS comes with its own tracing facility, which you can enable using dbms_stats.set_global_prefs. Today's post is all about enabling tracing on the DBMS_STATS package.

A few of the commonly used DBMS_STATS trace flags/levels are given below; you can always combine them by adding the values:

4 = This is to trace table stats
8 = This is to trace index stats
16 = This is to trace columnar stats
512 = auto stats job
1024 = This is to trace parallel executions
4096 = This is to trace partition prunes
16384 = This one traces extended column stats
32768 = To trace approximate NDV (number of distinct values)

Let's quickly collect traces for both indexes and tables, followed by stats collection for a table and its dependent indexes.

SQL>
SQL> exec dbms_stats.set_global_prefs('TRACE',4+8);

PL/SQL procedure successfully completed.

SQL>

SQL> set time on
13:28:55 SQL>
13:28:56 SQL>
13:28:56 SQL>
13:28:56 SQL> exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'SYS' , tabname => 'BIGTAB',cascade => true, estimate_percent => 10,method_opt=>'for all indexed columns size 1', granularity => 'ALL', degree => 4);

PL/SQL procedure successfully completed.

13:28:59 SQL>
13:29:00 SQL>


SQL> select name, value from v$diag_info where name='Diag Trace';

NAME                                     VALUE
---------------------------------------- ------------------------------------------------------------------------------------
Diag Trace                               /u01/app/oracle/diag/rdbms/localdb/localdb/trace

Alright, we have our trace ready; let's dig deep into what's inside the trace we collected for both the table and its index.
It starts with a standard header in the trace file, followed by all the default and explicitly specified flags/options passed to DBMS_STATS, in XML format.

-- Header
DBMS_STATS: Record gather table stats operation on table : BIGTAB
DBMS_STATS:   job name: 
DBMS_STATS:    |--> Operation id: 1341
DBMS_STATS: gather_table_stats: <params><param name="block_sample" val="FALSE"/><param name="cascade" val="TRUE"/>
<param name="concurrent" val="FALSE"/><param name="degree" val="4"/><param name="estimate_percent" val="10"/>
<param name="force" val="FALSE"/><param name="granularity" val="ALL"/><param name="method_opt" val="for all indexed columns size 1"/><param name="no_invalidate" val="NULL"/>
<param name="ownname" val="SYS"/><param name="partname" val=""/><param name="reporting_mode" val="FALSE"/><param name="statid" val=""/><param name="statown" val=""/>
<param name="stattab" val=""/><param name="stattype" val="DATA"/><param name="tabname" val="BIGTAB"/></params>
DBMS_STATS: Start gather table stats -- tabname: BIGTAB

Next come the preferences set for the table in question, 'BIGTAB' in our case. This will give you a better idea of what options were set overall and whether there is any scope to tune and tweak any of them.

DBMS_STATS: Preferences for table SYS.BIGTAB
DBMS_STATS: ================================================================================
DBMS_STATS: SKIP_TIME                                         - 
DBMS_STATS: STATS_RETENTION                                   - 
DBMS_STATS: MON_MODS_ALL_UPD_TIME                             - 
DBMS_STATS: SNAPSHOT_UPD_TIME                                 - 
DBMS_STATS: TRACE                                             - 12
DBMS_STATS: DEBUG                                             - 0
DBMS_STATS: SYS_FLAGS                                         - 1
DBMS_STATS: SPD_RETENTION_WEEKS                               - 53
DBMS_STATS: CASCADE                                           - DBMS_STATS.AUTO_CASCADE
DBMS_STATS: ESTIMATE_PERCENT                                  - DBMS_STATS.AUTO_SAMPLE_SIZE
DBMS_STATS: DEGREE                                            - NULL
DBMS_STATS: METHOD_OPT                                        - FOR ALL COLUMNS SIZE AUTO
DBMS_STATS: NO_INVALIDATE                                     - DBMS_STATS.AUTO_INVALIDATE
DBMS_STATS: GRANULARITY                                       - AUTO
DBMS_STATS: PUBLISH                                           - TRUE
DBMS_STATS: STALE_PERCENT                                     - 10
DBMS_STATS: APPROXIMATE_NDV                                   - TRUE
DBMS_STATS: APPROXIMATE_NDV_ALGORITHM                         - REPEAT OR HYPERLOGLOG
DBMS_STATS: ANDV_ALGO_INTERNAL_OBSERVE                        - FALSE
DBMS_STATS: INCREMENTAL                                       - FALSE
DBMS_STATS: INCREMENTAL_INTERNAL_CONTROL                      - TRUE
DBMS_STATS: AUTOSTATS_TARGET                                  - AUTO
DBMS_STATS: CONCURRENT                                        - OFF
DBMS_STATS: JOB_OVERHEAD_PERC                                 - 1
DBMS_STATS: JOB_OVERHEAD                                      - -1
DBMS_STATS: GLOBAL_TEMP_TABLE_STATS                           - SESSION
DBMS_STATS: ENABLE_TOP_FREQ_HISTOGRAMS                        - 3
DBMS_STATS: ENABLE_HYBRID_HISTOGRAMS                          - 3
DBMS_STATS: TABLE_CACHED_BLOCKS                               - 1
DBMS_STATS: INCREMENTAL_LEVEL                                 - PARTITION
DBMS_STATS: INCREMENTAL_STALENESS                             - ALLOW_MIXED_FORMAT
DBMS_STATS: OPTIONS                                           - GATHER
DBMS_STATS: GATHER_AUTO                                       - AFTER_LOAD
DBMS_STATS: STAT_CATEGORY                                     - OBJECT_STATS, REALTIME_STATS
DBMS_STATS: SCAN_RATE                                         - 0
DBMS_STATS: GATHER_SCAN_RATE                                  - HADOOP_ONLY
DBMS_STATS: PREFERENCE_OVERRIDES_PARAMETER                    - FALSE
DBMS_STATS: AUTO_STAT_EXTENSIONS                              - OFF
DBMS_STATS: WAIT_TIME_TO_UPDATE_STATS                         - 15
DBMS_STATS: ROOT_TRIGGER_PDB                                  - FALSE
DBMS_STATS: COORDINATOR_TRIGGER_SHARD                         - FALSE
DBMS_STATS: MAINTAIN_STATISTICS_STATUS                        - FALSE
DBMS_STATS: AUTO_TASK_STATUS                                  - OFF
DBMS_STATS: AUTO_TASK_MAX_RUN_TIME                            - 3600
DBMS_STATS: AUTO_TASK_INTERVAL                                - 900
DBMS_STATS: STATS_MODEL_INTERNAL_MINRSQ                       - 0.9
DBMS_STATS: STATS_MODEL_INTERNAL_CONTROL                      - 0
DBMS_STATS: STATS_MODEL                                       - ON
DBMS_STATS: AUTO_STATS_ADVISOR_TASK                           - TRUE

The next sections are where it records the stats gathering for the table and its index; their start and end times are captured. For example, "APPROX_NDV_ALGORITHM => Non-Incremental" was chosen, and table-level stats were captured, i.e. row count, blocks, average row length, sample size, number of indexes etc.

Next it jumps to the index, where it would calculate global statistics for a partitioned index by aggregating the partition-level statistics; but as this is a non-partitioned index, it simply moves to the next stage. It then enables the index gathering options, i.e. nlb (number of leaf blocks), ndk (number of distinct keys), clf (clustering factor), nblks (number of blocks), sample percentage, degree (parallelism), nrw (row/key counts) etc., by allocating them the value 1, which means they will be considered.
Finally it starts to analyze the index structure (in this case it is not using the approximate NDV algorithm) and calculates the final values of nrw, nlb, ndk, clf etc.

A few internal options were also used, i.e. ctx.conc_ctx, ctx.batching_coeff etc.

-- Table Stats
DBMS_STATS: Started table SYS.BIGTAB. at 28-JUN-22 01.28.58.499744000 PM -04:00. Execution phase: SERIAL (NON-CONCURRENT) stattype: DATA Reporting
DBMS_STATS: reporting_man_log_task: target: "SYS"."BIGTAB" objn: 108254 auto_stats: FALSE status: IN PROGRESS ctx.batching_coeff: 0
DBMS_STATS: Start construct analyze using SQL .. Execution Phase: SERIAL (NON-CONCURRENT) granularity: ALL global_requested: NULL pfix: 
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: HLL in incremental (no stats exist)
DBMS_STATS: Start gather part -- ctx.conc_ctx.global_requested: NULL gran: ALL execution phase: 1
DBMS_STATS: APPROX_NDV_ALGORITHM chosen: AS in non-incremental
DBMS_STATS: Start gather_stats.. pfix:  ownname: SYS tabname: BIGTAB pname:  spname:  execution phase: 1
DBMS_STATS: Number of rows in the table = 492190, blocks = , average row length = 19, chain count = , scan rate = 0, sample size = 49219, cstats.count = 3, cind = 2

-- Index part 
DBMS_STATS: Started index SYS.IDX_DIXIT at 28-JUN-22 01.28.58.816819000 PM -04:00 granularity: ALL gIdxGran: 
DBMS_STATS: Specified granularity = , New granularity = ALL, Fixed granularity = ALL
DBMS_STATS: granularity ALL
DBMS_STATS: reporting_man_log_task: target: "SYS"."IDX_DIXIT" objn: 108255 auto_stats: FALSE status: IN PROGRESS ctx.batching_coeff: 0
DBMS_STATS:  Gather index subpartition stats...
DBMS_STATS:  Gather index partition stats...
DBMS_STATS:  Gather global index stats...
DBMS_STATS: Start analyze_index_using_sql
DBMS_STATS: tab_stats_stale: BIGTAB not analyzed using full compute
DBMS_STATS:   Number of blocks in the index: 1030
DBMS_STATS: Start execute_analyze_index(owner= SYS, indname= IDX_DIXIT, fobjn= , sample_pct= 10, seed= 0, degree= 4, collect_nrw= 1, collect_nlb= 1, collect_ndk= 1, collect_clf= 1, bmi= 0, iot_ov= 0, iot_sec= 0, ppredtxt= , nblks= 1030)
DBMS_STATS: execute_analyze_index(): Not using approximate_ndv, pct=111.529126213592233009708737864077669903,sample_pct=10
DBMS_STATS: End execute_analyze_index(ssize= 494100, nrw= 494100, nlb= 1027, ndk= 1000, clf= 494100)
DBMS_STATS: target_size: 1027
DBMS_STATS: Finished index SYS.IDX_DIXIT at 28-JUN-22 01.28.59.464468000 PM -04:00

DBMS_STATS: Finished table SYS.BIGTAB. at 28-JUN-22 01.28.59.519495000 PM -04:00

Once you are done with the stats tracing, switch it off immediately!

SQL> exec dbms_stats.set_global_prefs('TRACE',0);

PL/SQL procedure successfully completed.

Hope It Helped!
Prashant Dixit


High stats collection time for partitioned tables after upgrade to 19c

Posted by FatDBA on July 2, 2022

Recently, while working on a database upgrade from 12c to 19c (19.15), one of my friends encountered a strange issue on the newly upgraded 19c database: stats gathering for the full database started taking a huge amount of time. It used to take ~3 hours to complete the full database statistics, but the same stats collection job after the upgrade to 19c started taking close to 10 hours. The stats collection script they were using was quite simple, with minimal parameters.

EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

During the analysis he observed that the top 3-4 in-flight statements during stats collection were related to index statistics and were found doing 'Index Fast Full Scan', all of them on a few of the large partitioned tables in the database. He discussed the case with me, and together we tried a few things, i.e. recollected dictionary and fixed object statistics and did a comparative study of parameters between 12c and 19c, but none of them worked. At last we set debugging levels on DBMS_STATS to see what was happening under the hood (a sketch of the trace settings is below), and that gave us a hint when set with level/flag 8 (trace index stats) combined with level 32768 to trace approximate NDV (number of distinct values) gathering. The traces gave us the idea that it was surely the index stats, and that the NDV, or number of distinct keys, calculation was taking the time.
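
For reference, a sketch of the trace settings we used (the same DBMS_STATS tracing facility I covered in an earlier post):

-- 8 = trace index stats, 32768 = trace approximate NDV gathering
SQL> exec dbms_stats.set_global_prefs('TRACE', 8 + 32768);

-- ... rerun the slow gather and read the trace file under 'Diag Trace' ...

-- switch the tracing off again
SQL> exec dbms_stats.set_global_prefs('TRACE', 0);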

But even after that we were both totally clueless, as these tables and their dependent objects had been in the system for a very long time. So the big question was: what's new in 19c that has slowed down stats collection?

Finally we decided to contact Oracle Support! And they quickly recognized the problem as a known one with 19c. As per them, there was an enhancement in 19c related to index stats gathering, and that has led to longer stats times. It was all due to unpublished Bug 33427856, an enhancement to improve the calculation of index NDK (number of distinct keys). This new feature works with the approx_count_distinct function and fully scans indexes to calculate NDK. This has a significant benefit, because NDK is now accurate. It also means that gathering statistics can take longer (for example, when updating global index statistics if incremental stats are used). So, in general, this is expected behavior, since DBMS_STATS is doing more work in 19c than it did in previous, unenhanced versions.

And the solution was to switch this new 19c index-stats feature (our problem) off by setting the fix control for 27268249, which disables 'Enhance Index NDK Statistics':

alter system set "_fix_control"='27268249:0';

And as soon as we deleted the existing statistics and regathered them (a sketch is below), the gather time dropped drastically and the job completed in under 3 hours.
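
For reference, a sketch of the final sequence, matching the GATHER_DATABASE_STATS call shown above:

-- Drop the stale statistics, then regather with the fix control disabled
SQL> exec dbms_stats.delete_database_stats;
SQL> exec dbms_stats.gather_database_stats(estimate_percent => dbms_stats.auto_sample_size, degree => 8);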

Hope It Helped!
Prashant Dixit


Exceptionally high stats collection time on FIXED OBJECTS during an upgrade …

Posted by FatDBA on June 26, 2022

Someone recently asked about a situation where they were trying to upgrade their database to 19c, and as a part of their upgrade plan they were gathering fixed object statistics, but it was running forever and they were totally clueless about why and where it was taking time. This being a mandatory step, they tried several times, with the same result.

About fixed object stats: it is recommended that you re-gather them if you do a major database or application upgrade, implement a new module, or make changes to the database configuration. For example, if you increase the SGA size, then all of the X$ tables that contain information about the buffer cache and shared pool may change significantly, such as the X$ tables used in v$buffer_pool or v$shared_pool_advice.

About fixed object stats collection time: anything between 1-10 minutes is, I would say, normal and average, but anything beyond 20 minutes, or even hours, is abnormally high and points to a problem.

So, I was asked to take a look on an ad-hoc basis, and during the analysis I found a SQL trying to do a count of all rows on unified_audit_trail, running from the very moment they called DBMS_STATS for FIXED OBJECTS on the database. When asked, they told me that they'd enabled auditing on the database some 6 months back and hadn't purged anything since then; the audit trail had grown behemoth and had ~880 million records. I immediately offered them two approaches to handle the situation: either lock the audit table statistics (using dbms_stats.lock_table_stats, see the sketch below) or take a backup of the table and purge the audit records before calling the stats gathering job again. They agreed with the second approach: they took a backup of the audit table and purged the audit trail. As soon as they purged the audit table, the stats collection on fixed objects finished in ~3 minutes.
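
For reference, the first option would have looked something like this; a sketch for the standard audit trail table queried below (for pure unified auditing, the underlying AUDSYS-owned table would be the one to lock):

-- Lock the audit trail table statistics so stats gathering skips it
SQL> exec dbms_stats.lock_table_stats('SYS', 'AUD$');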

This was the situation and what we did …

SQL> select * from dba_audit_mgmt_last_arch_ts;

AUDIT_TRAIL RAC_INSTANCE LAST_ARCHIVE_TS
-------------------- ------------ ------------------------------
STANDARD AUDIT TRAIL 0 22-MAY-22 06.00.00.000000 AM +00:00


SQL> select count(*) from aud$;

COUNT(*)
----------
885632817

BEGIN
DBMS_AUDIT_MGMT.clean_audit_trail(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
use_last_arch_timestamp => FALSE);
END;
/

SQL> select count(*) from aud$;

COUNT(*)
----------
0


SQL> SET TIMING ON
SQL> BEGIN
DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/

Elapsed: 00:03:10.81

Hope It Helped!
Prashant Dixit


 