Tales From A Lazy Fat DBA

It's all about databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Archive for the ‘Advanced’ Category


A new 21c dynamic view V$SECUREFILE_SHRINK, and possibly a new BUG in 21.3

Posted by FatDBA on September 1, 2022

Recently I was doing a POC on Oracle 21c where I had to shrink a securefile LOB column, and I was aware of a new dynamic view added in Oracle 21c called V$SECUREFILE_SHRINK, which is very useful for monitoring securefile shrink operations. It gives you some great details like start time, end time, status of the operation etc. I shrank the required securefile LOB column of a table and was keen to see what had been captured in V$SECUREFILE_SHRINK … but it was empty!! I got ‘no rows selected’. Why is that??

Let me try to replicate the scenario and explain what I was doing that day.

-- Let me create a test scenario
-- Will create a brand new playground to test all things
[oracle@witnessalberta ~]$ !sq
sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Sun Aug 28 10:50:26 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL>
SQL>
SQL>
SQL> CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY oracle90 CREATE_FILE_DEST='/opt/oracle/oradata';

Pluggable database created.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB1                       MOUNTED
         4 PDB2                           MOUNTED
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;

Pluggable database altered.

SQL>  ALTER SESSION SET CONTAINER = PDB2;

Session altered.

SQL> SHOW CON_NAME;

CON_NAME
------------------------------
PDB2

SQL>
SQL> conn fatdba/oracle90@PDB2
Connected.
SQL>
SQL>
SQL>
SQL> show user
USER is "FATDBA"
SQL>
SQL>
SQL>
SQL>

-- Created a Table with a SECUREFILE LOB 
SQL> CREATE TABLE secure_file_tab (rid  NUMBER(5), bcol BLOB) LOB (bcol) STORE AS SECUREFILE bcol_lob 
(TABLESPACE users DISABLE  STORAGE IN ROW CHUNK 8192 RETENTION MIN 3600 KEEP_DUPLICATES NOCOMPRESS DECRYPT CACHE READS); 

Table created.

SQL>

SQL>
-- Inserted some 100000 random rows into the table
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
insert into secure_file_tab VALUES(101, utl_raw.cast_to_raw('hello, this is the first review'));
...
......
........
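
For reference, the same volume of test rows can be generated with a single PL/SQL loop instead of repeated single-row inserts (a minimal sketch against the table created above):

BEGIN
  -- bulk-generate the test data in one shot
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO secure_file_tab
    VALUES (101, utl_raw.cast_to_raw('hello, this is the first review'));
  END LOOP;
  COMMIT;
END;
/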

SQL> select count(*) from secure_file_tab;

  COUNT(*)
----------
    100001

SQL>
SQL>
-- Let's check the size of the table.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name='SECURE_FILE_TAB';

SEGMENT_NAME         BYTES/1024
-------------------- ----------
SECURE_FILE_TAB      4096

-- Now delete all rows from the table
SQL> delete from secure_file_tab where RID=101;

100000 rows deleted.

SQL> COMMIT;

SQL> select count(*) from secure_file_tab;

  COUNT(*)
----------
         1

-- Gathered table stats
SQL> exec dbms_stats.gather_table_stats(null, 'secure_file_tab');

PL/SQL procedure successfully completed.


SQL> select segment_name, bytes/1024 from dba_segments where segment_name='SECURE_FILE_TAB';

SEGMENT_NAME         BYTES/1024
-------------------- ----------
SECURE_FILE_TAB      113



SQL> alter table secure_file_tab modify lob(bcol) (shrink space);

Table altered.

Now it's time to check whether anything about the securefile shrink operation was captured in the v$securefile_shrink or gv$securefile_shrink dynamic views.

-- Let's check if anything was captured by the view
SQL> desc v$securefile_shrink
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LOB_OBJD                                           NUMBER
 SHRINK_STATUS                                      VARCHAR2(40)
 START_TIME                                         TIMESTAMP(3) WITH TIME ZONE
 END_TIME                                           TIMESTAMP(3) WITH TIME ZONE
 BLOCKS_MOVED                                       NUMBER
 BLOCKS_FREED                                       NUMBER
 BLOCKS_ALLOCATED                                   NUMBER
 EXTENTS_ALLOCATED                                  NUMBER
 EXTENTS_FREED                                      NUMBER
 EXTENTS_SEALED                                     NUMBER
 CON_ID                                             NUMBER


SQL> select * from v$securefile_shrink;

no rows selected

SQL> select * from gv$securefile_shrink;

no rows selected

SQL>

Nothing!!! Why? Why? Let me try another way (MOVE LOB) and see if that populates anything in v$securefile_shrink or gv$securefile_shrink. I repopulated the same table with 100,000 rows, deleted all the rows, committed the changes, regathered stats and verified the table size, the same as in Test 1.

-- Changed the LOB SHRINK technique this time.
SQL> ALTER TABLE secure_file_tab MOVE LOB(BCOL) STORE AS (TABLESPACE test);

Table altered.


-- Let's check if anything was captured by the view
SQL> select * from v$securefile_shrink;

no rows selected

SQL> select * from gv$securefile_shrink;

no rows selected

SQL>

Nothing!!! Why? Why? Let me try one more way (SHRINK SPACE CASCADE) and see if that populates anything in v$securefile_shrink or gv$securefile_shrink. Again, I repopulated the same table with 100,000 rows, deleted all the rows, committed the changes, regathered stats and verified the table size, the same as in Test 1.

-- Let's try one more way
SQL> alter table secure_file_tab MODIFY LOB(BCOL) (SHRINK SPACE CASCADE);

Table altered.

-- Let's check if anything was captured by the view
SQL> select * from v$securefile_shrink;

no rows selected

SQL> select * from gv$securefile_shrink;

no rows selected

SQL>

Nothing!!! This is super strange now. Finally, I decided to check with Oracle support and asked them to try it at their end on 21c, and at their end too the view was not getting populated. I guess I have identified a BUG in 21.3.0.0.0!! 🙂 🙂 The case is currently with the development team, and they are looking into this buggy behavior on Oracle 21.5 on Red Hat Linux 7 x86_64 … Let's see how it goes from here; I will post the solution or the cause behind this ill behavior soon.

Hope It Helped!
Prashant Dixit


What are _FIX_CONTROL & DBMS_OPTIM_BUNDLE in Oracle ?

Posted by FatDBA on July 17, 2022

Lately I was in discussion with a friend who was facing an issue with an Oracle 19c database. The vendor had asked him to apply a patch to fix the problem, but he did not want to apply that single patch because their Oracle homes were shared and he didn't want to increase the complexity of their patching cycles. Oracle later suggested they try a workaround that requires a setting using fix controls.

Oracle often recommends setting a fix control as part of a bug fix, but what exactly are fix controls? What is their purpose? And what tools and methods are there to control these bug fixes? This post explains all of that in detail.

So, what are they? Fix controls are bug-fix control parameters, introduced in 10.2, that are typically used to enable or disable certain bug fixes in the Oracle database. You cannot toggle arbitrary patches this way; the fix you are trying to control must expose a _FIX_CONTROL option and be visible in the V$SYSTEM_FIX_CONTROL view.

Let's understand this using a case where mview predicate pushing was not happening due to a wrong cardinality estimate on a production system running 12.1.0.2. The optimizer was rejecting join predicate pushdown (JPPD) transformations, which prevented the view from being joined with an index-based nested-loop join method and caused issues. This was all happening due to bug 21802552. Let's check whether the bug number is present in the fix control views and what its status is.

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          1 correct cardinality adjusted by DS

-- You can get similar information using DBMS_SQLDIAG.GET_FIX_CONTROL(BUG NUMBER) proc as well.

So, it's there in the view's output and it's enabled (value 1), and we can turn it off. Let's do it; the proper syntax for using fix controls is given below.

-- To enable:
"_fix_control"='Bugno:ON'    (OR)   "_fix_control"="Bugno:1"

-- To disable:
"_fix_control"='Bugno:OFF'  (OR)   "_fix_control"="Bugno:0"


SQL> ALTER SYSTEM SET "_fix_control" = '21802552:OFF';

System altered.

SQL>

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          0 correct cardinality adjusted by DS



-- same was recorded in alert log file as well

2022-07-16T09:04:02.371313-04:00
ALTER SYSTEM SET _fix_control='21802552:OFF' SCOPE=BOTH;

You can do the same using the dbms_optim_bundle.set_fix_controls procedure, introduced in 12.1.0.2 to implement Oracle's 'Automatic Fix Control Persistence' framework. Let's try the same thing using the said package.

-- This will set given _fix_controls in scope=BOTH on all instances
-- Let's enable it again before we disable it once more
SQL> exec dbms_optim_bundle.set_fix_controls('21802552:1','*', 'BOTH', 'NO');

PL/SQL procedure successfully completed.

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          1 correct cardinality adjusted by DS


-- Let's roll it back
SQL> exec dbms_optim_bundle.set_fix_controls('21802552:0','*', 'BOTH', 'NO');

PL/SQL procedure successfully completed.

SQL> select bugno, value, description from v$system_fix_control where bugno=21802552;

     BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
  21802552          0 correct cardinality adjusted by DS

--
-- Entry in parameter file made by the dbms_optim_bundle package for fix control
*._fix_control='21802552:0'#added through dbms_optim_bundle package


Hope It Helped!
Prashant Dixit


High stats collection time for partitioned tables after upgrade to 19c

Posted by FatDBA on July 2, 2022

Recently, while working on a database upgrade from 12c to 19c (19.15), one of my friends encountered a strange issue on the newly upgraded 19c database: stats gathering on the full database started taking a huge amount of time. It used to take ~3 hours to gather full database statistics, but after the upgrade to 19c the same stats collection job started taking close to 10 hours. The stats collection script they were using was quite simple, with minimal parameters.

EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

During the analysis he observed that the top 3-4 in-flight queries during stats collection were related to index statistics and were doing 'Index Fast Full Scan', all of them on a few of the large partitioned tables in the database. He discussed the case with me, and together we tried a few things, i.e. recollected dictionary and fixed object statistics and did a comparative study of parameters between 12c and 19c, but none of them worked. At last we set debugging levels on DBMS_STATS to see what was happening under the hood, and that gave us a hint when set with level/flag 8 (trace index stats) and with level 32768 to trace Approximate NDV (number of distinct values) gathering. The traces made it clear that the time was going into index stats, i.e. NDV / number of distinct keys.
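
For reference, that DBMS_STATS tracing is switched on through the undocumented 'TRACE' preference, roughly as below (a sketch; the preference and its levels are undocumented, so use them for debugging only):

-- 8 traces index stats gathering, 32768 traces Approximate NDV gathering
SQL> exec dbms_stats.set_global_prefs('TRACE', to_char(8 + 32768));

-- reproduce the slow gather, inspect the trace, then switch tracing off
SQL> exec dbms_stats.set_global_prefs('TRACE', '0');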

But even after that we were both totally clueless, as these tables and their dependent objects had been in the system for a very long time. So the big question was: what's new in 19c that slowed down stats collection?

Finally, we decided to contact Oracle Support, and they quickly responded that it is a known problem with 19c. As per them, there was an enhancement in 19c related to index stats gathering, and that had led to the longer stats times. It was all due to unpublished bug 33427856, an enhancement to improve the calculation of index NDK (Number of Distinct Keys). The new behavior uses the approx_count_distinct function and fully scans indexes to calculate NDK. This has a significant benefit, because NDK is now accurate, but it also means that gathering statistics can take longer (for example, updating global index statistics when incremental stats is used). So, in general, this is expected behavior, since DBMS_STATS is doing more work in 19c than it did in previous, unenhanced versions.

And the solution was to turn this new 19c index-stats feature (a problem here) off by setting the fix control for 'Enhance Index NDK Statistics', fix 27268249, to 0:

alter system set "_fix_control"='27268249:0';

And as soon as we deleted the existing statistics and regathered them, the time dropped drastically and the job completed in under 3 hours.
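
For one of the affected partitioned tables the steps looked roughly like this (owner and table name below are illustrative):

-- APPUSER.BIG_PART_TAB is a hypothetical partitioned table
SQL> exec dbms_stats.delete_table_stats('APPUSER', 'BIG_PART_TAB');
SQL> exec dbms_stats.gather_table_stats('APPUSER', 'BIG_PART_TAB', degree => 8);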

Hope It Helped!
Prashant Dixit


Exceptionally high stats collection time on FIXED OBJECTS during an upgrade …

Posted by FatDBA on June 26, 2022

Someone recently asked about a situation where they were trying to upgrade their database to 19c, and as part of their upgrade plan they were gathering fixed object statistics, but it was running forever, and they were totally clueless as to why and where it was taking time. This being a mandatory step, they tried several times, with the same result.

About fixed object stats: it is recommended that you re-gather them whenever you do a major database or application upgrade, implement a new module, or make changes to the database configuration. For example, if you increase the SGA size, all of the X$ tables that contain information about the buffer cache and shared pool may change significantly, such as the X$ tables used in v$buffer_pool or v$shared_pool_advice.

As for fixed object stats collection time, anything between 1 and 10 minutes is, I would say, normal and average; anything beyond 20 minutes, let alone hours, is abnormally high and points to a problem.

So, I was asked to take a look on an ad-hoc basis, and during the analysis I found a SQL doing a full count on unified_audit_trail, which had been running since the moment they called DBMS_STATS for FIXED OBJECTS on the database. When asked, they told me they had enabled auditing on the database some 6 months back and hadn't purged anything since; the audit trail had grown behemoth, with ~880 million records. I immediately offered them two approaches to handle the situation: either lock the audit table statistics (using dbms_stats.lock_table_stats), or take a backup of the table and purge the audit records before calling the stats gathering job again. They agreed with the second approach: they took a backup of the audit table and purged the audit trail. As soon as they purged the audit table, the stats collection on fixed objects finished in ~3 minutes.
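
Had they gone with the first approach instead, locking the trail statistics is a one-liner (shown for the traditional SYS.AUD$ trail; adjust the owner/table for unified auditing):

SQL> exec dbms_stats.lock_table_stats('SYS', 'AUD$');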

This was the situation and what we did …

SQL> select * from dba_audit_mgmt_last_arch_ts;

AUDIT_TRAIL          RAC_INSTANCE LAST_ARCHIVE_TS
-------------------- ------------ -----------------------------------
STANDARD AUDIT TRAIL            0 22-MAY-22 06.00.00.000000 AM +00:00


SQL> select count(*) from aud$;

  COUNT(*)
----------
 885632817

BEGIN
  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => FALSE);
END;
/

SQL> select count(*) from aud$;

  COUNT(*)
----------
         0


SQL> SET TIMING ON
SQL> BEGIN
DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/

Elapsed: 00:03:10.81

Hope It Helped!
Prashant Dixit


What is that strange looking wait event ‘TCP Socket (KGAS)’ in AWR report ?

Posted by FatDBA on June 13, 2022

Hi Guys,

Recently someone shared an AWR report with me from a production 19c system, and he was quite worried about a strange-looking wait event called ‘TCP Socket (KGAS)’. He was concerned because the event was showing a very high average wait time of 7863.68 ms (7.86 seconds) and was consuming around 98.0% of the total DB Time.

Luckily, I'd encountered something similar in the past for one of my customers, where the application team was unable to send mail via a DBMS scheduler job and it was stuck for a long time on the wait event “TCP Socket (KGAS)”; the problem was not with the scheduler, but with an underlying network or third-party application issue.

So, today’s post is all about the wait event, what it is, how to resolve it etc.

KGAS is a component in the server that handles TCP/IP sockets, typically used in dedicated connections, i.e. by some built-in PL/SQL packages such as UTL_HTTP and UTL_TCP.
A session on this event is waiting for an external host to provide requested data over a network socket. The time that this wait event tracks does not by itself indicate a problem, and even a long wait time is not a reason to contact Oracle Support. It naturally takes time for data to flow between hosts over a network, and for the remote side of an application to process any request made to it. An application that communicates with a remote host must wait until the data it will read has arrived.

From an application/network point of view, delays in establishing a network connection may produce unwanted delays for users. We should make sure the application makes network calls efficiently and that the network is working well, so that these delays are minimized.

From the database point of view, these waits can safely be ignored; the wait event does not represent a database issue. It merely reports the total elapsed time for a network connection to be established or for data to arrive over the network. The database waits for the connection to be established and reports the time taken. It's always good to engage the network or third-party application vendors to investigate the underlying socket.

But in the case of system-wide waits: if the time spent waiting on this event is significant, it is best to determine which sessions are showing the wait and drill into what those sessions are doing, as the wait is usually related to whatever application code is executing, e.g. whichever part of the application uses UTL_HTTP or similar and is experiencing the waits. This statement can be used to see which sessions may be worth tracing:

SELECT sid, total_waits, time_waited
  FROM v$session_event
 WHERE event = 'TCP Socket (KGAS)'
   AND total_waits > 0
 ORDER BY 3, 2;

In order to reduce these waits or to help find the origin of the socket operations try:

  • Check the current SQL/module/action of V$SESSION for sessions that are waiting on the event at the time that they are waiting to try and identify any common area of application code waiting on the event.
  • Get an ERRORSTACK level 3 dump of some sessions waiting on the event. This should help show the exact PLSQL and C call stacks invoking the socket operation if the dump is taken when the session is waiting. Customers may need assistance from Oracle Support in order to get and interpret such a dump but it can help pinpoint the relevant application code.
  • Trace sessions incurring the waits, including wait tracing, to try and place the waits in the context of the code executing around them, e.g. use event 10046 level 8 or DBMS_MONITOR.SESSION_TRACE_ENABLE (see the sketch after the example below).
  • Use DBA_DEPENDENCIES to find any application packages which may ultimately be using UTL_HTTP or UTL_TCP underneath for some operation.

Example:
Execute the following SQL from a session on a dedicated connection and then check the resulting trace file for “TCP Socket (KGAS)” waits:

alter session set events '10046 trace name context forever, level 8';
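
Alternatively, the same wait tracing can be enabled for another session with DBMS_MONITOR (the SID and serial# below are illustrative):

SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);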

Hope It Helped!
Prashant Dixit


An interesting case of ‘enq: CR – block range reuse ckpt’, CKPT blocking user sessions …

Posted by FatDBA on May 2, 2022

Hi All,

Last week we faced an interesting issue on one of the production systems, which was recently migrated from Oracle 12.2 to 19.15. The setup was running on a VMware machine with limited resources. It all started when the application team reported slowness in their daily scheduled jobs and other ad-hoc operations; checked at the database layer, it was all the ‘enq: CR – block range reuse ckpt‘ wait event. The same can be seen in the ORATOP output below, where BLOCKER ID 3817 is the CKPT (checkpoint) process.

The strange part was that the blocker was the CKPT process, and it was all for one particular SQL ID (an INSERT operation); see the next oratop screenshot.

As for this wait event (it belongs to the ‘Other’ wait class), it typically comes just before you delete or truncate a table, when an object-level (segment) checkpoint is needed, because Oracle must keep whatever blocks may be in the buffer cache consistent with what is on disk. Per the definition, this wait event happens due to contention on blocks caused by multiple processes trying to update the same blocks, which usually points to application logic creating a concurrency bottleneck. Interestingly, though, this was happening on a simple INSERT operation, not a DELETE or TRUNCATE.

The same can be seen in the AWR and ASH reports too! There were CBC (cache buffers chains) latching and latch free events along with ‘enq: CR – block range reuse ckpt‘, but the initial focus was to understand the event and its reasons. As for ‘direct path read temp‘, it was caused by a couple of SELECT statements, which we resolved by attaching a better plan to the SQLs.

Wait event source (from ASH)

The SQL text was quite simple, an INSERT INTO statement.

 INSERT INTO xx_xxx_xx(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) VALUES (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)

I first tried the CKPT process traces, just to see if anything interesting or useful was caught in the process logs. The trace was short and only had some strange and obscure-looking content, but it at least gave us an idea that the RBR (reuse block range) checkpoint had failed due to an enqueue, and that its entry had failed due to an abandoned parent. Still, that didn't help us much; we were unsure about the reason.

--> Info in CKPT trace file ---> XXXXX_ckpt_110528.trc.
RBR_CKPT: adding rbr ckpt failed for 65601 due to enqueue
RBR_CKPT: rbr mark the enqueue for 65601 as failed
RBR_CKPT: adding ckpt entry failed due to abandoned parent 0x1b57b4a88
RBR_CKPT: rbr mark the enqueue for 65601 as failed

There were a few things logged in the alert log: multiple deadlocks (ORA-00060), too many parse errors for one SELECT statement, and some ‘checkpoint not complete’ errors (log switching was high, >35).

-- deadlocks in alert log.
Errors in file /opt/u01/app/oracle/diag/rdbms/xxxxxx/xxxx/trace/xxxx_ora_73010.trc:
2022-04-19T13:41:22.489551+05:30 ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄ 
ORA-00060: Deadlock detected. See Note 60.1 at My Oracle Support for Troubleshooting ORA-60 Errors. More info in file /opt/u01/app/oracle/diag/rdbms/pwfmfsm/PWFMFSM/trace/PWFMFSM_ora_73010.trc.

-- From systemstatedumps
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:

Deadlock graph:
                                        ------------Blocker(s)-----------  ------------Waiter(s)------------
Resource Name                           process session holds waits serial process session holds waits serial
TX-03AC0014-0000B33A-00000000-00000000     1562     691 X           1632       946    5441 X          56143
TX-01460020-0001A5C2-00000000-00000000      946    5441 X          56143      1562     691 X           1632



-- too many parse errors
2022-04-19T13:57:26.261176+05:30
WARNING: too many parse errors, count=2965 SQL hash=0x8ce1e2ff ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
PARSE ERROR: ospid=68036, error=942 for statement:
2022-04-19T13:57:26.272805+05:30
SELECT * FROM ( SELECT xxxxxxxxxxxxxxxxxxxxxx FROM xxxxxxxxxxxxxxxxxxxxxx AND xxxxxxxxxxxxxxxxxxxxxx ORDER BY xxxxxxxxxxxxxxxxxxxxxx ASC ) WHERE ROWNUM <= 750
Additional information: hd=0x546ba1be8 phd=0x61edab798 flg=0x20 cisid=113 sid=113 ciuid=113 uid=113 sqlid=ccmkzhy6f3srz
...Current username=xxxxxxxxxxxxxxxxxxxxxx
...Application: xxxxxxxxxxxxxxxxxxxxxx.exe Action:


-- Checkpoint incomplete
2022-04-19T15:03:16.964470+05:30 ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
Thread 1 cannot allocate new log, sequence 186456
Checkpoint not complete
Current log# 11 seq# 186455 mem# 0: +ONLINE_REDO/xxxxxxxxxxxxxxxxxxxxxx/ONLINELOG/group_11.256.1087565999
2022-04-19T15:03:17.785113+05:30

But the alert log was not sufficient to give us any concrete evidence or reasons for CKPT blocking user sessions. So, next we decided to generate HANGANALYZE and SYSTEMSTATE dumps to understand what was happening under the hood, through the wait chains.
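
A minimal sketch of how such dumps are typically taken with oradebug (connected as SYSDBA; the levels shown are the commonly used ones):

SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
SQL> oradebug dump systemstate 258

We noticed a few interesting things in the resulting wait chains: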

  • Wait chain 1, where a session was waiting on ‘buffer busy waits‘ for the “bitmapped file space header” block class, which is about space management blocks (nothing to do with bitmap indexes), and was related to one SELECT statement.
  • Wait chain 2, where a session was found waiting on the ‘enq: CR – block range reuse ckpt‘ event and was blocked by the CKPT process (3817), which was in turn waiting on ‘control file sequential read‘.
  • Wait chain 4, where SID 1670 was found waiting on ‘buffer busy waits‘ while doing ‘bitmapped file space header’ operations.
Chain 1:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (kpkpkpkpk.kpkpkpkpk)
os id: 46794
process id: 2014, oracle@monkeymachine.prod.fdt.swedish.se
session id: 15
session serial #: 19322
module name: 0 (xxxx.exe)
}
is waiting for 'buffer busy waits' with wait info:
{
p1: 'file#'=0xca
p2: 'block#'=0x2
p3: 'class#'=0xd ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄ bitmapped file space header
time in wait: 2 min 20 sec
timeout after: never
wait id: 10224
blocking: 0 sessions
current sql_id: 1835193535
current sql: SELECT * FROM ( SELECT xxxxx FROM task JOIN request ON xxxxx = xxxxx JOIN xxxxx ON xxxxx = xxxxx JOIN c_task_assignment_view ON xxxxx = xxxxx
.
.
.
and is blocked by
=> Oracle session identified by:
{
instance: 1 (kpkpkpkpk.kpkpkpkpk)
os id: 23090
process id: 3231, oracle@monkeymachine.prod.fdt.swedish.se
session id: 261
session serial #: 39729
module name: 0 (xxxx.exe)
}
which is waiting for 'buffer busy waits' with wait info:
{
p1: 'file#'=0xca
p2: 'block#'=0x2
p3: 'class#'=0xd ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄  bitmapped file space header
time in wait: 18.999227 sec ◄◄◄
timeout after: never
wait id: 47356
blocking: 25 sessions ◄◄◄
current sql_id: 0
current sql: <none>
short stack: ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+223<-__sighandler()<-read()+14<-snttread()+16<-nttfprd()+354<-nsbasic_brc()+399<-nioqrc()+438<-opikndf2()+999<-opitsk()+910<-opiino()+936<-opiodr()+1202<-opidrv()+1094<-sou2o()+165<-opimai_real()+422<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
wait history:
* time between current wait and wait #1: 0.000084 sec
1. event: 'SQL*Net message from client'
time waited: 0.008136 sec
wait id: 47355 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
* time between wait #1 and #2: 0.000043 sec
2. event: 'SQL*Net message to client'
time waited: 0.000002 sec
wait id: 47354 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
* time between wait #2 and #3: 0.000093 sec
3. event: 'SQL*Net message from client'
time waited: 2.281674 sec
wait id: 47353 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
}
and may or may not be blocked by another session.
.
.
.
Chain 2:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (kpkpkpkpk.kpkpkpkpk)
os id: 10122
process id: 2850, oracle@monkeymachine.prod.fdt.swedish.se
session id: 97
session serial #: 47697
module name: 0 (xxxx.exe)
}
is waiting for 'enq: CR - block range reuse ckpt' with wait info: ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
{
p1: 'name|mode'=0x43520006
p2: '2'=0x10b22
p3: '0'=0x1
time in wait: 2.018004 sec
timeout after: never
wait id: 81594
blocking: 0 sessions
current sql_id: 1335044282
current sql: <none>
.
.
.
and is blocked by
=> Oracle session identified by:
{
instance: 1 (kpkpkpkpk.kpkpkpkpk)
os id: 110528
process id: 24, oracle@monkeymachine.prod.fdt.swedish.se
session id: 3817 ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄  CKPT background process
session serial #: 46215
}
which is waiting for 'control file sequential read' with wait info: ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
{
p1: 'file#'=0x0
p2: 'block#'=0x11e
p3: 'blocks'=0x1
px1: 'disk number'=0x4
px2: 'au'=0x34
px3: 'offset'=0x98000
time in wait: 0.273981 sec
timeout after: never
wait id: 17482450
blocking: 45 sessions ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
current sql_id: 0
current sql: <none>
short stack: ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+223<-__sighandler()<-semtimedop()+10<-skgpwwait()+192<-ksliwat()+2199<-kslwaitctx()+205<-ksarcv()+320<-ksbabs()+602<-ksbrdp()+1167<-opirip()+541<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
wait history:
* time between current wait and wait #1: 0.000012 sec
1. event: 'control file sequential read'
time waited: 0.005831 sec
wait id: 17482449 p1: 'file#'=0x0
p2: 'block#'=0x11
p3: 'blocks'=0x1
* time between wait #1 and #2: 0.000012 sec
2. event: 'control file sequential read'
time waited: 0.011667 sec
wait id: 17482448 p1: 'file#'=0x0
p2: 'block#'=0xf
p3: 'blocks'=0x1
* time between wait #2 and #3: 0.000017 sec
3. event: 'control file sequential read'
time waited: 0.009160 sec
wait id: 17482447 p1: 'file#'=0x0
p2: 'block#'=0x1
p3: 'blocks'=0x1
}
.
.
.
Chain 4:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (kpkpkpkpk.kpkpkpkpk)
os id: 46479
process id: 1036, oracle@monkeymachine.prod.fdt.swedish.se
session id: 1670
session serial #: 6238
module name: 0 (xxxx.exe)
}
is waiting for 'buffer busy waits' with wait info:
{
p1: 'file#'=0xca
p2: 'block#'=0x2
p3: 'class#'=0xd ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄  bitmapped file space header
time in wait: 18.954206 sec ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
timeout after: never
wait id: 20681
blocking: 0 sessions
current sql_id: 343919375
current sql: SELECT * FROM ( SELECT xxxxx FROM task JOIN request ON xxxxx = xxxxx JOIN xxxxx ON xxxxx = xxxxx JOIN c_task_assignment_view ON xxxxx = xxxxx
.
.
.
and is blocked by
=> Oracle session identified by:
{
instance: 1 (kpkpkpkpk.kpkpkpkpk)
os id: 44958
process id: 523, oracle@monkeymachine.prod.fdt.swedish.se
session id: 4681
session serial #: 41996
module name: 0 (xxxx.exemonkeymachine.prod.fdt.swedish.se (TNS)
}
which is waiting for 'buffer busy waits' with wait info: ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
{
p1: 'file#'=0xca
p2: 'block#'=0x2
p3: 'class#'=0xd ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄ bitmapped file space header
time in wait: 18.959429 sec ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
timeout after: never
wait id: 153995
blocking: 101 sessions ◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄◄
current sql_id: 343919375
current sql:  INSERT INTO xx_xxx_xx(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) VALUES (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
short stack: ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+223<-__sighandler()<-semtimedop()+10<-skgpwwait()+192<-ksliwat()+2199<-kslwaitctx()+205<-ksqcmi()+21656<-ksqcnv()+809<-ksqcov()+95<-kcbrbr_int()+2476<-kcbrbr()+47<-ktslagsp()+2672<-ktslagsp_main()+945<-kdliAllocCache()+37452<-kdliAllocBlocks()+1342<-kdliAllocChunks()+471<-kdliWriteInit()+1249<-kdliWriteV()+967<-kdli_fwritev()+904<-kdlxNXWrite()+577<-kdlx_write()+754<-kdlxdup_write1()+726<-kdlwWriteCallbackOld_pga()+1982<-kdlw_write()+1321<-kdld_write()+410<-kdl
wait history:
* time between current wait and wait #1: 0.756195 sec
1. event: 'direct path write temp'
time waited: 0.406543 sec
wait id: 153994 p1: 'file number'=0xc9
p2: 'first dba'=0xb28fc
p3: 'block cnt'=0x4
* time between wait #1 and #2: 0.000001 sec
2. event: 'ASM IO for non-blocking poll'
time waited: 0.000000 sec
wait id: 153993 p1: 'count'=0x4
p2: 'where'=0x2
p3: 'timeout'=0x0
* time between wait #2 and #3: 0.000002 sec
3. event: 'ASM IO for non-blocking poll'
time waited: 0.000001 sec
wait id: 153992 p1: 'count'=0x4
p2: 'where'=0x2
p3: 'timeout'=0x0
}
and may or may not be blocked by another session.
.
.

We wanted to try a couple of hidden parameters that control fast object-level truncation and checkpointing (disabling them had helped us a lot in similar scenarios in the past), but we had to involve Oracle support first. After carefully analyzing the situation, they too agreed and wanted us to try them, as they had started suspecting an aftermath of a known 19c bug.

[oracle@oracleontario ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 24 00:24:47 2022
Version 19.15.0.0.0
Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0

SQL>
SQL> @hidden
Enter value for param: db_fast_obj_truncate
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%db_fast_obj_truncate%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_db_fast_obj_truncate                         TRUE                      TRUE                      enable fast object truncate
_db_fast_obj_ckpt                             TRUE                      TRUE                      enable fast object checkpoint


SQL> ALTER SYSTEM SET "_db_fast_obj_truncate" = false sid = '*';

System altered.

SQL>
SQL> ALTER SYSTEM SET "_db_fast_obj_ckpt" = false sid = '*';

System altered.

SQL>
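
For reference, the @hidden script used above is the classic undocumented-parameter lookup against the X$ fixed tables; a sketch that matches the column headings shown (run as SYSDBA, and being X$-based it is version-dependent):

-- hidden.sql : session and instance values of hidden parameters
select a.ksppinm  "Parameter",
       b.ksppstvl "Session Value",
       c.ksppstvl "Instance Value",
       a.ksppdesc "descr"
  from x$ksppi a, x$ksppcv b, x$ksppsv c
 where a.indx = b.indx
   and a.indx = c.indx
   and a.ksppinm like '%&param%'
 order by a.ksppinm;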

And soon after setting them, we saw a drastic drop in the waits and the system seemed much better. But it all happened during off-peak hours, so there wasn't much workload around to show anything odd.

And as we suspected, the issue repeated itself: the next day, during peak business hours, we started seeing the same set of events again. This time ‘latch: cache buffers chains‘ was quite high and prominent, which had not been the case earlier.

Initially we tried to fix some of the statements that were expensive on logical I/Os (memory reads), but that hardly helped. The issue persisted even after setting a higher value for _db_block_hash_latches and decreasing _cursor_db_buffers_pinned. AWR continued to show ‘latch: cache buffers chains’ in the Top 10 Foreground Timed Events, and ‘latch free‘ in first place among the background timed events.
Oracle support confirmed the behavior was due to published bug 33025005, where excessive CBC latch waits were seen after upgrading from 12c to 19c, and suggested applying patch 33025005 and then setting the hidden parameter “_post_multiple_shared_waiters” to FALSE (in MEMORY only, to test), which disables posting multiple shared waiters in the database.

-- After applying patch 33025005

SQL> ALTER SYSTEM SET "_post_multiple_shared_waiters" = FALSE SCOPE = MEMORY;

System altered.

But even after applying the patch and setting the recommended undocumented parameter, the issue persisted and we were totally clueless.

As a last resort, we tried flushing the database buffer cache, and bingo: that crude method of purging the cache helped drastically, reducing the load from ‘CBC latching‘ and ‘enq: cr block range reuse ckpt‘, and the system ran fine soon after the flush of the DB buffer cache.
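
For the record, the flush itself is a single, rather intrusive command, so use it with care on a busy production system:

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;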

So, to sum up: we changed multiple checkpointing and shared-waiter related parameters in the database and applied a bug fix patch (33025005), but to no avail; in the end, the flush of the buffer cache is what worked for us! Oracle support agreed that this was happening due to a new/unpublished bug (33125873 or 31844316) which is not yet fixed in 19.15 and will be included in 23.1; the bugs are in status 11, which means they are still being worked on by Development, so there is no fix for them yet.

Hope It Helped!
Prashant Dixit


How to prioritize an Oracle Database background process ?

Posted by FatDBA on April 18, 2022

Recently I was looking into a system (running 19.3.0.0.0 standalone) where ‘log file sync’ was bugging the database, and after we had tried all other possible solutions, we thought of increasing the priority of the LGWR background process to see if that helps.

Increasing the LGWR priority puts the LGWR process in the round-robin (SCHED_RR) scheduling class. You can increase a process's priority using OS methods (the nice and renice commands) or database methods, but this post is about setting the priority using ‘_high_priority_processes’, an undocumented/hidden parameter that prioritizes your database-managed processes.

I am using Oracle 19.3 for this test; in 19c, LGWR does not come with any elevated priority in the DB by default, while starting from 21.3.0.0.0 the LGWR process is part of the _high_priority_processes group along with the VKTM & LMS* processes.
Note: This being a hidden/undocumented parameter, I advise consulting Oracle support before changing its value. Try other possible ways to reduce log file sync before jumping to this crude method of prioritizing LGWR over others.

[oracle@oracleontario ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 10 03:36:06 2022
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> @hidden

Enter value for param: high_prio
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%priority_processes%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_highest_priority_processes                   VKTM                      VKTM                      Highest Priority Process Name Mask
_high_priority_processes                      LMS*|VKTM                 LMS*|VKTM                 High Priority Process Name Mask 

By default in Oracle 19.3.0.0 the parameter is set to prioritize VKTM (Virtual Keeper of Time) and LMS (Lock Manager Server, a RAC process). Let me check VKTM's current priority class; it is in the RR (SCHED_RR) scheduling class, as defined via the _high_priority_processes parameter.

[oracle@oracleontario ~]$ ps -eo pid,class,pri,nice,time,args |grep vktm
 23871 RR   41   - 00:00:53 ora_vktm_dixitdb

As for the LGWR process, it is in the TS (SCHED_OTHER) class and has no real-time priority attached to it.

[oracle@oracleontario ~]$ ps -eo pid,class,pri,nice,time,args |grep ora_lg*
 23990 TS   19   0 00:00:07 ora_lgwr_dixitdb

Let's change the priority and reboot the database to persist the change!

SQL> alter system set "_high_priority_processes"='LMS*|VKTM|LGWR' scope=spfile;

System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 1593831936 bytes
Fixed Size                  8897024 bytes
Variable Size            1107296256 bytes
Database Buffers          469762048 bytes
Redo Buffers                7876608 bytes
Database mounted.
Database opened.

SQL> @hidden
Enter value for param: high_priority_processes
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%high_priority_processes%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_high_priority_processes                      LMS*|VKTM|LGWR            LMS*|VKTM|LGWR            High Priority Process Name Mask

At the same time, the change was logged in the alert log file.

2022-04-10T03:54:31.488732-04:00
LGWR started with pid=8, OS id=26058 at elevated (RT) priority

So, we have raised the priority of LGWR on the system. (As an aside on nice values: a higher nice value actually makes a process lower priority, i.e. it demands fewer system resources and is therefore a “nicer” process; here we changed the scheduling class instead.) Now let's check the scheduling class of the process at the OS; it should have changed from TS to RR.

SQL> !ps -eo pid,class,pri,nice,time,args |grep ora_lm*
 26058 RR   41   - 00:00:00 ora_lgwr_dixitdb

Let me check at the OS Level what has changed now.

[oracle@oracleontario 26058]$ pwd
/proc/26058
[oracle@oracleontario 26058]$ more sched
ora_lgwr_dixitd (26058, #threads: 1)
-------------------------------------------------------------------
se.exec_start                                :      26820431.663015
se.vruntime                                  :            -2.963799
se.sum_exec_runtime                          :          1858.211503
se.nr_migrations                             :                    0
nr_switches                                  :                 4038
nr_voluntary_switches                        :                 4023
nr_involuntary_switches                      :                   15
se.load.weight                               :                 1024
policy                                       :                    2      -----> policy 2 = SCHED_RR; 0-99 are real-time priorities
prio                                         :                   98
clock-delta                                  :                   59
mm->numa_scan_seq                            :                    0
numa_migrations, 0
numa_faults_memory, 0, 0, 1, 0, -1
numa_faults_memory, 1, 0, 0, 0, -1


-- output from top utility
top - 05:09:14 up  7:32,  3 users,  load average: 0.14, 0.10, 0.11
Tasks:   2 total,   0 running,   2 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
GiB Mem :      4.3 total,      0.0 free,      1.1 used,      3.1 buff/cache
GiB Swap:      3.9 total,      3.9 free,      0.0 used.      2.1 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 26027 oracle    -2   0 2016104  20096  17064 S  1.7  0.4   1:28.22 ora_vktm_dixitdb      ---> Look at the PR (priority) column with value -2 (higher pri)
 26058 oracle    -2   0 2017136  30360  26768 S  0.0  0.7   0:01.86 ora_lgwr_dixitdb      ---> Look at the PR (priority) column with value -2 (higher pri)

So, when nothing was working for us, this workaround helped and we were able to reduce LFS waits by more than 80% …

Hope It Helped!
Prashant Dixit


Adaptive Log File Sync is not always good for you …

Posted by FatDBA on April 11, 2022

Last week I was part of a short-term system/database stability assignment, where a customer running a critical Telco application on a 2-node RAC cluster (RHEL) on 11.2.0.3.1 reported slowness in a few of the critical data processing modules, which in turn had slowed down their entire system. Today's post is about the buggy behavior of an adaptive feature that caused a huge mess in the system in the form of ‘Log File Sync‘ waits.

After a few initial calls with the customer, we came to know that the database had a history of high ‘log file sync‘ waits, but they would simply come and go, and it seemed the events had never been handled correctly or properly analyzed! Interestingly, restarting the cluster resolved the issue for a few hours, sometimes for a few days. The LFS event was quite prominent in this database, on average consuming > 38% of the total DB Time with very high average wait times (248 ms).

Below are a few of the database statistics captured from the problem time.

About the event: when a user session commits, all redo records generated by that session's transaction need to be flushed from memory to the redo logfile to ensure the changes made by that transaction become permanent. The time between the user session posting LGWR and LGWR posting the user session back after the write has completed is the ‘log file sync’ wait time the user session shows. In other words, this event is the time lost while user sessions wait on the LGWR process after initiating a transaction commit or rollback.
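
A quick way to see how bad the event is system-wide is a plain query against the standard wait-event view:

SQL> select event, total_waits,
            round(time_waited_micro / 1000 / total_waits, 2) avg_wait_ms
       from v$system_event
      where event = 'log file sync';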

Next, to get a better idea about the event, I ran lfsdiag.sql (an Oracle-provided script to collect diagnostic information on it, Doc ID 1064487.1). The script looks at the important parameters involved in log file sync waits, at wait histogram data, and at the worst average LFS times in the active session history and AWR data, and dumps information to help determine why those times were the highest.

Below are the ASH LFS background process waits caught during the worst minute; some really bad stats were captured for the same window in which the customer had the worst application performance.

MINUTE          INST_ID EVENT                            TOTAL_WAIT_TIME      WAITS   AVG_TIME_WAITED
------------ ---------- ------------------------------ ----------------- ---------- -----------------
Apr06_1742            2 log file sync                        2819744.300       1973          1429.166
Apr06_1745            2 log file sync                        1219765.027       1200          1016.471
Apr06_1747            2 log file sync                        2045077.213       1745          1171.964
Apr06_1748            2 log file sync                        1417639.236       1170          1211.657
Apr06_1749            2 log file sync                        2202804.958       2190          1005.847
Apr06_1753            2 log file sync                        1967863.159       1871          1051.771
Apr06_1756            2 log file sync                        1096747.638        336          3264.130
Apr06_1843            2 log file sync                        1710602.016        531          3221.473
Apr06_1846            2 log file sync                         814607.205        247          3298.005

Another interesting section was the histogram data for LFS and related events. Here we can see the LFS waits by “wait_time_milli”, especially the high wait times, and correlate them with other wait events. From the stats below it is evident that Node 2 of this RAC cluster was severely impacted by the LFS waits, with very high wait counts and wait times (ms), and a lot of ‘gcs log flush sync’ along with the LFS events, which pushes the LGWR process to write data to disk.

The stats are very bad, with the highest wait-time bucket at 1048576 ms on Node 2 and an average of 99864 ms (1.6 min); both ‘log file parallel write’ and ‘gcs log flush sync’ were quite high too.

   INST_ID EVENT                                    WAIT_TIME_MILLI WAIT_COUNT
---------- ---------------------------------------- --------------- ----------
         1 log file sync                                          1   76126281
         1 log file sync                                          2   31805429
         1 log file sync                                          4   18893320
         1 log file sync                                          8   29604071
         1 log file sync                                         16   21903517
         1 log file sync                                         32    7252292
         1 log file sync                                         64    3692615
         1 log file sync                                        128    1615552
         1 log file sync                                        256     262632
         1 log file sync                                        512      25341
         1 log file sync                                       1024       5431
         1 log file sync                                       2048        901
         1 log file sync                                       4096        188
         1 log file sync                                       8192        102
         1 log file sync                                      16384         33
         2 log file sync                                          1   56003954
         2 log file sync                                          2   19903760
         2 log file sync                                          4   12749437
         2 log file sync                                          8   17572031
         2 log file sync                                         16   12266342
         2 log file sync                                         32    3209853
         2 log file sync                                         64    1124852
         2 log file sync                                        128    2912977
         2 log file sync                                        256    5516739
         2 log file sync                                        512    3226632
         2 log file sync                                       1024     783055
         2 log file sync                                       2048     119096
         2 log file sync                                       4096      24974
         2 log file sync                                       8192       8841
         2 log file sync                                      16384       3782
         2 log file sync                                      32768       1640
         2 log file sync                                      65536        578
         2 log file sync                                     131072        259
         2 log file sync                                     262144         78
         2 log file sync                                     524288         22
         2 log file sync                                    1048576          4


         1 log file parallel write                                1  198650048
         1 log file parallel write                                2   16777612
         1 log file parallel write                                4    9934905
         1 log file parallel write                                8    4511957
         1 log file parallel write                               16     808734
         1 log file parallel write                               32     107699
         1 log file parallel write                               64      17255
         1 log file parallel write                              128       1837
         1 log file parallel write                              256        139
         1 log file parallel write                              512         24
         1 log file parallel write                             1024          2
         2 log file parallel write                                1  150805280
         2 log file parallel write                                2    9299589
         2 log file parallel write                                4    4816280
         2 log file parallel write                                8    3145522
         2 log file parallel write                               16     793765
         2 log file parallel write                               32     116312
         2 log file parallel write                               64      12469
         2 log file parallel write                              128       1298
         2 log file parallel write                              256        118
         2 log file parallel write                              512         14
         2 log file parallel write                             1024          1


         1 gcs log flush sync                                     1   85952301
         1 gcs log flush sync                                     2    8052174
         1 gcs log flush sync                                     4    4224838
         1 gcs log flush sync                                     8    4017059
         1 gcs log flush sync                                    16    1226469
         1 gcs log flush sync                                    32      88359
         1 gcs log flush sync                                    64       4263
         1 gcs log flush sync                                   128         90
         1 gcs log flush sync                                   256          1
         2 gcs log flush sync                                     1   60282021
         2 gcs log flush sync                                     2    5102517
         2 gcs log flush sync                                     4    1951187
         2 gcs log flush sync                                     8    2470197
         2 gcs log flush sync                                    16    1088285
         2 gcs log flush sync                                    32     214273
         2 gcs log flush sync                                    64      26585
         2 gcs log flush sync                                   128        147

As a possible quick fix, we tried batching redo (commit_logging = batch) for the log writer (I know it has its own risks) to reduce LFS, but that didn't work either.

Next, I generated a system state dump to understand the system and the situation better, and the LFS events were caught there as well, with wait chains pointing to ‘rdbms ipc message’ <= ’log file sync’.

Process traces are always a wealth of diagnostic information, so I checked the LGWR process traces, and that's where I saw something strange: frequent entries about switching between the post/wait and polling methods, the adaptive mechanism that controls switching between post/wait (the older way) and polling (the newer method) for log file syncs. This gave me a hint about the possible reason for so many LFS waits.

Talking about adaptive log file sync, there are two methods by which LGWR and foreground processes can communicate in order to acknowledge that a commit has completed:
  • Post/wait: the traditional method available in earlier Oracle releases; LGWR explicitly posts all processes waiting for the commit to complete. The advantage of post/wait is that sessions find out almost immediately when the redo has been flushed to disk.
  • Polling: foreground processes sleep and poll to see if the commit is complete; this was introduced to free up the high CPU usage of the LGWR.

This behavior is controlled by the parameter “_use_adaptive_log_file_sync”, introduced in 11gR2, which controls whether adaptive switching between post/wait and polling is enabled. In 11.2.0.1 and 11.2.0.2 the default value of the parameter is false; from 11.2.0.3 onwards, the default is true.

-- LGWR traces 
WARNING:io_submit failed due to kernel limitations MAXAIO for process=128 pending aio=128
WARNING:asynch I/O kernel limits is set at AIO-MAX-NR=1048576 AIO-NR=169402

*** 2022-04-07 06:03:31.916
Warning: log write broadcast wait time 2612477ms (SCN 0xad1.f8c170fe)

*** 2022-04-07 06:03:31.916
Warning: log write broadcast wait time 2598008ms (SCN 0xad1.f8c21251)
kcrfw_update_adaptive_sync_mode: post->poll long#=33 sync#=202 sync=5963 poll=8730 rw=383 rw+=383 ack=3982 min_sleep=1135

*** 2022-04-07 07:46:20.018
Log file sync switching to polling --------------------->>>>>>> It shows current method is polling 
Current scheduling delay is 1 usec 
Current approximate redo synch write rate is 67 per sec

*** 2022-04-07 07:47:13.877
kcrfw_update_adaptive_sync_mode: poll->post current_sched_delay=0 switch_sched_delay=1 current_sync_count_delta=63 switch_sync_count_delta=202

*** 2022-04-07 07:47:13.877
Log file sync switching to post/wait ------------------>>>>>>>> It shows current method is post/wait
Current approximate redo synch write rate is 21 per sec


-- Below stats show that POLLING is happening on the database for LGWR writes
SQL> select name,value from v$sysstat where name in ('redo synch poll writes','redo synch polls');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo synch poll writes                                             10500129
redo synch polls                                                   10773618

As there were no other symptoms of I/O problems, or of issues in other areas, the problem appeared to be excessive switching between the post/wait and polling wait methods.

After consulting with Oracle support, we found the issue was happening due to a BUG (25178179), which gets severe only while log file sync is using “polling mode“. To prevent the problem from recurring, they suggested turning off _use_adaptive_log_file_sync by setting it to FALSE, either in the spfile followed by a database restart, or dynamically in memory.
This forces log file sync waits to use the traditional “post/wait mode” rather than automatically switching between “post/wait mode” and “polling mode” based on performance statistics. Changing the parameter to FALSE disabled adaptive LFS in the database; that resolved the issue and system performance was restored.

-- Set the parameter _use_adaptive_log_file_sync = false and restart the database:
SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync" = FALSE;

-- In cases where a restart is not feasible, you can set it in memory and also in the spfile for when a restart does occur:
SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync" = FALSE; -- sets it in memory and in the spfile, as the default scope is BOTH
SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync" = FALSE SCOPE=SPFILE SID='*'; -- sets it in the spfile only, so the parameter takes effect at the next restart

In the vast majority of cases adaptive log file sync improves the overall performance of log file synchronization, but there are a few bugs associated with this feature, e.g. 13707904, 13074706 and 25178179.

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | 1 Comment »

Demystifying OUTLINE DATA in an Execution Plan …

Posted by FatDBA on March 7, 2022

The ‘Outline Data’ section displays the list of hints that would be needed to replicate the execution plan, even if the statistics change. But in the case of a complex execution plan it comes with a lot of strange-looking and obscure terms. As there isn’t any published explanation of stored outline hints, today’s post deciphers a few of the terms that you see in the outline data of an execution plan.

If you want to read more about outlines, what are they, their purpose, please read my previous post on the same subject at –> https://fatdba.com/2017/11/30/how-to-fix-sql-plan-issues-using-outline-data/

Okay, coming back to the post, let me quickly generate the outline data for one of the test SQLs and try to explain each of the hints, query blocks, aliases and other representations used.

SQL>
SQL> explain plan for 
SELECT d.department_name,e.employee_name
FROM departments d
LEFT OUTER JOIN employees e ON d.department_id = e.department_id
WHERE d.department_id >= 30
ORDER BY d.department_name, e.employee_name;  

Explained.

SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'+alias +outline'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------
Plan hash value: 3871261979
-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |             |     4 |   168 |     7  (15)| 00:00:01 |
|   1 |  SORT ORDER BY      |             |     4 |   168 |     7  (15)| 00:00:01 |
|*  2 |   HASH JOIN OUTER   |             |     4 |   168 |     6   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| DEPARTMENTS |     2 |    44 |     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| EMPLOYEES   |     6 |   120 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$2BFA4EE4
   3 - SEL$2BFA4EE4 / D@SEL$1
   4 - SEL$2BFA4EE4 / E@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH(@"SEL$2BFA4EE4" "E"@"SEL$1")
      LEADING(@"SEL$2BFA4EE4" "D"@"SEL$1" "E"@"SEL$1")
      FULL(@"SEL$2BFA4EE4" "E"@"SEL$1")
      FULL(@"SEL$2BFA4EE4" "D"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      ANSI_REARCH(@"SEL$1")
      OUTLINE(@"SEL$8812AA4E")
      ANSI_REARCH(@"SEL$2")
      OUTLINE(@"SEL$948754D7")
      MERGE(@"SEL$8812AA4E" >"SEL$948754D7")
      OUTLINE_LEAF(@"SEL$2BFA4EE4")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID"(+))
   3 - filter("D"."DEPARTMENT_ID">=30)
   4 - filter("E"."DEPARTMENT_ID"(+)>=30)

As per the above outline data captured for the execution plan, the very first entry is USE_HASH(@”SEL$2BFA4EE4″ “E”@”SEL$1”). This represents the HASH join method used in the query; other join-method outline options you might see are USE_NL and USE_MERGE. Here in our example we’ve used a LEFT OUTER JOIN, which returns all valid rows from the table on the left side of the JOIN keyword, that’s table DEPARTMENTS (alias ‘D’), along with the matching values from the table on the right side, EMPLOYEES (alias ‘E’).

i.e. LEFT OUTER JOIN employees e ON d.department_id = e.department_id

LEADING(@”SEL$2BFA4EE4″ “D”@”SEL$1” “E”@”SEL$1”) The LEADING hint specifies the exact join order that the optimizer followed in the plan. It displays the join order as an ordered list of table aliases and query block names, and the aliases appear in the order in which those tables are accessed in the query.

The first part of the hint, @”SEL$2BFA4EE4″, represents the main SELECT block, and if you look closely at the full hint definition you will see “D”@”SEL$1”, where “SEL$1” is the query block name for table alias ‘D’, the DEPARTMENTS table. It is followed by “E”@”SEL$1”, which represents query block “SEL$1” for the second joined table, alias ‘E’, the EMPLOYEES table.

FULL(@”SEL$2BFA4EE4″ “E”@”SEL$1”) and FULL(@”SEL$2BFA4EE4″ “D”@”SEL$1”) are what you see next; these two hints force FULL TABLE SCANS on table alias “E” (the EMPLOYEES table) and on table alias “D” (the DEPARTMENTS table). You can see the same in the execution plan.

OUTLINE(@”SEL$1″) & OUTLINE(@”SEL$2″) These two OUTLINE hints correspond to the initial and the intermediate query blocks.

ANSI_REARCH(@”SEL$1″) This hint instructs the optimizer to re-architect ANSI left, right, and full outer joins. In our case it re-architected the left outer join.

MERGE(@”SEL$8812AA4E” >”SEL$948754D7″) is the MERGE query-block hint; SEL$8812AA4E and SEL$948754D7 are the transformed query blocks involved in the view merge.

OUTLINE_LEAF(@”SEL$2BFA4EE4″) This hint marks an outline leaf for the specified query block. In our example it indicates that query block SEL$2BFA4EE4 is a “final” query block that has actually been subject to independent optimization; an outline leaf cannot be transformed further. You may see multiple OUTLINE_LEAF hints, one per final query block.

IGNORE_OPTIM_EMBEDDED_HINTS is a special hint that instructs the CBO to ignore the other optimizer hints embedded in the statement, so that only the outline hints drive the plan.

ALL_ROWS This hint instructs the optimizer to optimize the statement block for best throughput, i.e. lowest total resource consumption.

DB_VERSION(‘19.1.0’) & OPTIMIZER_FEATURES_ENABLE(‘19.1.0’) These two hints tell the CBO to process the SQL with the feature set of the stated database version, 19.1.0 in our case.

A few others that you might see in OUTLINE DATA sections, and that are easy to decipher: INDEX_RS_ASC (index range scan in ascending order) appears when an INDEX RANGE SCAN is used as the access path in the SQL execution plan. In the case of nested loop joins you may see NLJ_BATCHING, which appears when Oracle batches multiple physical I/O requests and processes them as a vector I/O (array) instead of one at a time; batching improves performance because it uses asynchronous reads. Other nested-loop related ones are USE_NL, NLJ_PREFETCH, etc.

INDEX_FFS appears in the case of index fast full scans, and the list is long …
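As a practical aside, the whole outline block can be pasted back into the statement as an inline hint set to reproduce the plan even after statistics change; a minimal sketch using the example from above:

SQL> SELECT /*+
            BEGIN_OUTLINE_DATA
            USE_HASH(@"SEL$2BFA4EE4" "E"@"SEL$1")
            LEADING(@"SEL$2BFA4EE4" "D"@"SEL$1" "E"@"SEL$1")
            FULL(@"SEL$2BFA4EE4" "E"@"SEL$1")
            FULL(@"SEL$2BFA4EE4" "D"@"SEL$1")
            OUTLINE(@"SEL$1")
            OUTLINE(@"SEL$2")
            ANSI_REARCH(@"SEL$1")
            OUTLINE(@"SEL$8812AA4E")
            ANSI_REARCH(@"SEL$2")
            OUTLINE(@"SEL$948754D7")
            MERGE(@"SEL$8812AA4E" >"SEL$948754D7")
            OUTLINE_LEAF(@"SEL$2BFA4EE4")
            ALL_ROWS
            DB_VERSION('19.1.0')
            OPTIMIZER_FEATURES_ENABLE('19.1.0')
            IGNORE_OPTIM_EMBEDDED_HINTS
            END_OUTLINE_DATA
         */
         d.department_name, e.employee_name
    FROM departments d
    LEFT OUTER JOIN employees e ON d.department_id = e.department_id
   WHERE d.department_id >= 30
   ORDER BY d.department_name, e.employee_name;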

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Leave a Comment »

Script to display latency wait times using DBA_HIST_EVENT_HISTOGRAM using Heat Maps

Posted by FatDBA on March 2, 2022

Today’s post is about a brilliant tool/script that I frequently use to get outputs/graphs for customer meetings and presentations, and for quick analysis of any performance problem …

This is one of those awesome Oracle-provided tools/scripts: it generates a heat map of latency times for a given wait event. The script takes the output from DBA_HIST_EVENT_HISTOGRAM and produces a heat map; a JPG version of the heat map is also produced, representing the wait-time latency for the given wait event over a specific time frame identified by a range of snapshots.

Source : Script to Display Latency Wait Time From DBA_HIST_EVENT_HISTOGRAM Using Heat Map (Doc ID 1931492.1)

Note: This script queries AWR views, specifically DBA_HIST_EVENT_HISTOGRAM, which requires a license as it is part of the Diagnostics Pack on Oracle EE

The script takes the following input parameters:

  • Snapshot id range (first snapshot id and last snapshot id)
  • Database id
  • Instance number
  • Exact name of wait event
perl ./lhm.pl --begin [first snapid] --end [last snapid] --dbid [database id] --instance [instance num] --wait_event "wait event"

perl ./lhm.pl  --dbid 1266075800 --begin 19907 --end 19991 --event "log file parallel write" --instance 4
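If you don’t have the inputs handy, they’re easy to look up first (a quick sketch; adjust the snapshot time window as needed):

SQL> SELECT dbid, name FROM v$database;
SQL> SELECT instance_number FROM v$instance;

SQL> SELECT MIN(snap_id) begin_snap, MAX(snap_id) end_snap
       FROM dba_hist_snapshot
      WHERE begin_interval_time > SYSDATE - 1;   -- snapshots from the last 24 hours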


-- Example Output
database id:     1266075800
wait event:      log file parallel write
Matrix:          [26x85]
snap range:      [19907:19991]
instance id:     4
jpg file   :     latency_log_file_parallel_write.jpg

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | 1 Comment »

 