Tales From A Lazy Fat DBA

Love all databases! – It's all about performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Posts Tagged ‘oracle’

Another 10053 trace viewer : Best of the Best …

Posted by FatDBA on October 21, 2021

Finally, I’ve got a working copy of my favorite 10053 Oracle optimizer trace viewer from one of my connections. This one was written by Sergei Romanenko. I love it because it allows direct jumps to the most important parts of the trace, uses syntax highlighting to improve readability, and optionally formats the final query after transformations. It’s pretty easy to use, and you can directly search for keywords within these big thumping traces; you can also wrap the text.

Click here to download!

This is what the interface looks like.

Hope It Helped!
Prashant Dixit


10053 Trace Viewer : A lifesaver when handling colossal optimizer traces

Posted by FatDBA on October 18, 2021

Hi Everyone,

I am sure my last post about 10053 debug traces has sparked some interest in optimizer cost calculations and estimations 🙂 As you guys know, these traces aren’t easy to digest and interpret; they are pretty complicated, a humongous pile of cryptic internal information. One of the readers asked: is there any tool that can help at least format the trace and its sections? Yes, there are a few, and one of my favorites is the 10053 viewer, which I have been using for the last few years now (lucky that I found that great blog post by Jonathan Lewis).

Click here if you want to download it!

The tool is pretty easy to use! You have to click on the ‘open trace file’ button, browse to the 10053 trace on your system, and click on ‘show trace file’ (the next button).

Now load the trace file.

Now you’ll have a drop-down view to select from. Once the trace is loaded, you can access sections by using ‘+’ to expand and ‘-’ to minimize a section.

Expand to get more details about any particular section.

Hope It Helped!
Prashant Dixit


Oracle event 10046 debug traces, they really aren’t as ‘complicated’ as we think – A 10046 trace break-apart!

Posted by FatDBA on October 16, 2021

Hi Everyone,

Oracle has a long list of internal debug codes, and this kind of tracing is an art and a real craft. The 10046 debug event is one of the most popular methods for collecting extended SQL trace information (like SQL_TRACE=TRUE) for Oracle sessions. We use it especially to determine or distinguish the nature of a SQL tuning problem. By setting this event, you get detailed trace information about Oracle’s internal execution: parse analysis, calls, waits, and bind variables, which plays a very important role in analyzing the performance of the system. It provides a great source of detail about SQLs, at several different levels.

This post is all about breaking the trace into parts and understanding some of its critical sections, to help make sense of the SQL statistics it captures. I am not going to cover the full steps to generate the trace, as they are pretty straightforward and available on metalink, but a minimal sketch is included below for quick reference.
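A minimal sketch of enabling it for your own session, using the standard event syntax (level 12 captures both wait events and bind variable values):

-- Enable extended SQL trace for the current session (level 12 = waits + binds)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the SQL or PL/SQL you want traced ...

-- Disable the trace when done
ALTER SESSION SET EVENTS '10046 trace name context off';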

Though there are multiple use cases for 10046, I recently used it to understand a complicated and costly PL/SQL program that calls more than 1000 different SQLs internally, and I was interested in finding the costliest among them and why. There are surely other ways to get a similar level of detail, like SQL Profiler, SQL traces etc., but none of them provides the depth that 10046 gives. I sorted the resulting trace with TKPROF by elapsed time executing (exeela), elapsed time fetching (fchela) and elapsed time parsing (prsela).
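That sorted report was produced with a TKPROF run along these lines (the output file name is illustrative):

tkprof dixitdb_ora_28282_10046_for_spdixitM.trc sorted_10046_report.txt sort=exeela,fchela,prsela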

The snippet below is from a live, TKPROF-sorted 10046 trace from a production system running 10.2.0.5.0 (yes, an old application). You won’t notice much difference if you generate it on a newer database version; only a few things change on the latest Oracle DB releases. Okay, let’s first understand a few of the keywords and column names used in the result.

TKPROF: Release 10.2.0.5.0 - Production on Fri Sep 27 03:31:42 2021

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Trace file: dixitdb_ora_28282_10046_for_spdixitM.trc
Sort options: exeela  fchela  prsela  
********************************************************************************

SELECT COUNT(*) 
FROM
 CANONTALAB.DIXIT1_SAMPLE WHERE DIXIT1_SAMPLE_NO=:B1 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      134      0.00       0.00          0          0          0           0
Execute    862      0.03       0.03          0          0          0           0
Fetch      862     46.59      45.52          0    1235246          0         862
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1858     46.63      45.55          0    1235246          0         862

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 296  (CANONTALAB)   (recursive depth: 3)

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  SORT AGGREGATE (cr=2866 pr=0 pw=0 time=114898 us)
      2   INDEX FAST FULL SCAN PK_DIXIT1_SAMPLE (cr=2866 pr=0 pw=0 time=114888 us)(object id 125001)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      2   SORT (AGGREGATE)
      2    INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_DIXIT1_SAMPLE' 
               (INDEX (UNIQUE))

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        60       12.17         43.54
  
********************************************************************************

COUNT – Represents the number of times a SQL statement was parsed, executed, or fetched.
CPU – Total CPU time in seconds for all parse, execute, or fetch calls for the SQL statement.
ELAPSED – Total elapsed time in seconds for all parse, execute, or fetch calls for the SQL statement.
DISK – Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.
QUERY – Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries. A consistent get is where Oracle returns a block from the buffer cache while checking whether the block is current as of the time the query started; if needed, the data is reconstructed from rollback information to give you a consistent view.
CURRENT – Total number of buffers retrieved in current mode. Buffers are retrieved in current mode for statements such as INSERT, UPDATE, and DELETE. A db block get (or current get in TKPROF) not only gets the block as it is right now, it also stops anyone else from getting that block (in current mode!) until we change it and release it. If someone else got there first, we wait.
ROWS – Total number of rows processed by the SQL statement. This total does not include rows processed by subqueries of the SQL statement. The statistics are reported against three call types: Parse, Execute & Fetch.
PARSE – Translates the SQL statement into an execution plan, including checks for proper security authorization and for the existence of tables, columns, and other referenced objects. This is where the physical and logical transformations and optimizations happen.
EXECUTE – Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.
FETCH – Retrieves rows returned by a query. Fetches are only performed for SELECT statements.

Okay, now that all the column names and table entries are explained, let me explain what those numbers represent.

It says 1235246 blocks were retrieved in consistent mode during the fetch operation. Since this is a SELECT statement, the blocks show up against the Fetch call; for a DML statement they would show up against the Execute call. Misses in the library cache are reported per call; if there is no miss, the line won’t appear at all. The 1 miss for this SQL is very much acceptable, since the first time a SQL runs it has to be parsed and executed and its execution plan stored, so the parse and execute calls each show 1 miss. Notice that the parse call happened 134 times but the miss count is only 1, meaning the statement was hard parsed only once and stored in the library cache; for the next 133 parses the cached statement was reused. So we have 1 miss and 133 hits. Similarly, the execution plan was built only once, and the other 861 times Oracle reused the same plan from the library cache.

Next, jump to the row source operations and the codes used there: cr = consistent reads, pr = physical reads, pw = physical writes, time = time taken by the step in microseconds. You might see some other codes too, e.g. cost = cost incurred by the step, size = the size of data handled in that step, and card = cardinality.

So, the query was found doing a UNIQUE SCAN on its primary key index PK_DIXIT1_SAMPLE in ALL_ROWS mode, which is easy to understand, as an equality predicate was used and the unique/primary key constraint was sufficient by itself to produce an index unique scan. Finally, it shows the wait event details, which are pretty straightforward: the session waited on ‘db file sequential read‘ with a max wait time of 12.17 seconds.

With the values above, we need to make a decision on whether to tune the SQL or not. Unless we have a locking problem or badly performing SQLs, we shouldn’t worry about the CPU time or the elapsed time, because timings come into consideration only when we have badly performing SQLs. The important factor is the number of block visits, both query (that is, subject to read consistency) and current (that is, not subject to read consistency). Segment headers and blocks that are going to be updated are acquired in current mode, but all query and subquery processing requests data in query mode.

Hope It Helped
Prashant Dixit


Migrated to RAC and getting ‘row cache locks’ or ‘enq: SQ – contention’ ?

Posted by FatDBA on September 24, 2021

Hi Everyone,

Recently I was working on a performance issue where a customer reported frequent slowness and hang issues on their newly migrated 12.2 2-node RAC cluster. I got involved after the issue was already gone, so I had to dig the history out of AWR or the DBA_HIST_XX views. I started glancing over AWR reports for the probe period (~2 hours). Node 1 in particular was swamped with excessive ‘row cache lock’ wait events, and that too with a very high average wait time of 7973.47 ms (~8 seconds per wait). Similar waits were happening on instance 2, but far fewer compared to node 1 (take a look at the AWR snip below).

You may also see ‘enq: SQ – contention’ in place of ‘row cache lock’, as this wait got renamed.

Below is the snippet from AWR that shows it spent ~99% of DB Time on sequence loading.

While checking ‘enqueue stats’, I saw ‘SQ-Sequence Cache’ type enqueues with a very high overall wait time of 545 seconds (~9 minutes).

The next target was to find the source SQL, i.e. the statements waiting on these row cache lock waits. As expected, it was a SQL that hits the sequence to generate the NEXTVAL and feeds that value to another statement that inserts records into a frequently accessed application log table. You can think of the statement as something like below …

-- Generating next available value from the sequence
SELECT TEST_SEQ.NEXTVAL FROM DUAL; 

The source being a sequence, I generated its DDL to see all of its options and properties. As expected, the sequence had the NOCACHE option, because it was recently migrated from a standalone 12.1 database to the new 12.2 2-node RAC cluster. The main reason for specifying NOCACHE earlier was to avoid gaps in the sequence numbers, as values are not lost when the instance terminates abnormally.

CREATE SEQUENCE  "DIXIT"."TEST_SEQ"  MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1673163 NOCACHE  NOORDER  NOCYCLE  NOKEEP  NOSCALE  GLOBAL ;

And we completely missed modifying the sequences as per the best practice of using the CACHE + NOORDER combination on RAC. With this combo, each instance caches a distinct set of numbers in the shared pool, and the sequence will not be globally ordered.
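A quick way to spot such sequences after a migration is to look for a zero cache size in the dictionary. A minimal sketch, assuming DBA privileges:

-- Application sequences still running with NOCACHE (cache_size = 0)
SELECT sequence_owner, sequence_name, cache_size, order_flag
FROM   dba_sequences
WHERE  cache_size = 0
AND    sequence_owner NOT IN ('SYS','SYSTEM');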

When caching is used, the dictionary cache (the row cache) is updated only once with the new high watermark. E.g., when a cache of 20 is used and a nextval is requested for the first time, the sequence’s row cache value in the dictionary cache is increased by 20. The LAST_NUMBER in DBA_SEQUENCES gets increased by the cache value, i.e. 20. The extracted 20 values, stored in the shared pool, are then handed out to the sessions requesting nextval.

When no caching is used, the dictionary cache has to be updated on every nextval request. That means the row cache has to be locked and updated for each request, so multiple sessions requesting a nextval will block on ‘row cache lock’ waits. In RAC, each instance allocates numbers by accessing the database, and cache fusion may delay sending the current seq$ block to a requesting instance if the block is busy owing to many sequence number allocations on the instance owning the current block image.

But there is a caveat with the CACHE option: gaps in the sequence numbering occur when the sequence cache is lost, e.g. on any shared pool flush or instance shutdown, just like in single-instance databases. Any flush of any shared pool is enough to invalidate the cached values on RAC systems too. That said, I don’t see any problem with a gap in a sequence unless you are running something like a banking application.

Let me explain it through an example ..

-- Will create a sequence, default is to cache 20 sequence values in memory.

SQL> create sequence mytest_seq start with 1 increment by 1;

Sequence created.

SQL> select mytest_seq.nextval from dual;

  NEXTVAL
----------

         1

SQL> select mytest_seq.nextval from dual;

  NEXTVAL
----------

         2


-- The database is terminated and after startup, the next value of the sequence is selected.


SQL> select mytest_seq.nextval from dual;

  NEXTVAL
----------

        21

-- The first 20 values were in the cache, but only the first two were actually used. 
-- When the instance got terminated, sequence values 3 through 20 were lost as they were in cache. 

So we decided to use caching. Considering the average modification rate and the sequence generation requests against the main table, we planned to go with 500 sequence numbers to be cached, which Oracle pre-allocates and keeps in memory for faster access.

ALTER SEQUENCE TEST_SEQ cache 500; 

And yup, the issue was fixed as soon as we made sufficient sequence numbers available in the cache, and there were no more ‘row cache lock’ waits afterwards.

Hope It Helped!
Prashant Dixit


Are you suffering from excessive ‘cursor: mutex X’ & ‘cursor: mutex S’ waits after the upgrade ?

Posted by FatDBA on September 15, 2021

Hi Everyone,

Recently, I was contacted by one of my friends who was battling performance issues since they moved from 12c to 19c. He was mostly strained by a particular problem on the new 19c database, where he was getting excessive concurrency-class waits on “cursor: mutex X” (> 92% of DB Time) and some “cursor: mutex S” events. This was leading to frequent database hangs.

As per the above snippet from the AWR report for the period, ‘cursor: mutex X’ was consuming more than 170 ms per wait on average and was responsible for more than 91% of the total DB Time.

Initially I thought it was a case of a classic hard parsing issue, as ‘cursor: mutex S’ waits usually occur when Oracle is serializing the parsing of multiple SQL statements. I mean, there must be SQLs that are going through excessive hard parsing and have too many child cursors in the library cache. So, I immediately checked the ‘SQL Ordered by Version Count’ section and saw one individual statement with 7,201 versions or children within a period of 2 hours.

The same was confirmed through an ASH report too (see the snippet pasted below). This particular SELECT statement was waiting on both of these two concurrency-class events specific to the library cache.

I drilled down further to identify the cause by querying V$SQL_SHARED_CURSOR (for reasons) to learn why a particular child cursor was not shared with existing child cursors, and I was getting BIND_EQUIV_FAILURE as the reason. The database had ACS (Adaptive Cursor Sharing) and CFB (Cardinality Feedback) enabled, and it looked like a ‘cursor leak’ issue.
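For reference, the check I ran looked roughly like this (the sql_id below is hypothetical):

-- Why are the child cursors of this statement not being shared?
SELECT sql_id, child_number, bind_equiv_failure, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  sql_id = '7h35uxf5uhmm1';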

I also noted huge sleeps on CP-type mutexes in functions kkscsAddChildNode & kkscsPruneChild; below is the snippet from AWR, take a look at the first two in red.

And when I was about to prepare the strategy (i.e. specific plan purges etc.) to handle the situation, I thought of generating a hang analyze dump to identify any known/familiar hang chains within the stack traces. I saw most of the chains running the same cursor from different processes waiting on ‘cursor: mutex X’ with the below error stack … I mean, there were multiple unique sessions waiting for a parent cursor mutex in exclusive mode on the same cursor under the following stack.

<-kgxExclusive<-kkscsAddChildNode<-kxscod<-kkscsCompareBinds<-kkscscid_bnd_eval<-kkscsCheckCriteria<-kkscsCheckCursor<-kkscsSearchChildList<-kksfbc<-

So, we had an error stack showing wait chains running the same cursor from different processes waiting on ‘cursor: mutex X’, BIND_EQUIV_FAILURE=Y in V$SQL_SHARED_CURSOR, and CFB & ACS enabled; it appeared this was happening due to some bug.

Oracle support confirmed my doubt. They affirmed that this was all due to two unpublished bugs, 28889389 and 28794230. For the first one we needed to apply patch 28889389, which has optimized code for the cursor mutex while searching the parent cursor for a match; for the second one, 28794230, they recommended a few alternatives, given below …

_optimizer_use_feedback=false
_optimizer_adaptive_cursor_sharing=false
_optimizer_extended_cursor_sharing_rel=none

But even after setting the above three undocumented parameters, which disable cardinality feedback and adaptive & extended cursor sharing, we only saw a ~30% reduction in total waits. Later on, they recommended we apply the optimizer-related bug fix control

_fix_control='23596611:OFF'

and that completely resolved the issue.
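For reference, a fix control like this is typically applied with ALTER SYSTEM; a sketch, assuming an spfile is in use:

-- Turn off fix 23596611 instance-wide (verify the scope with your change process)
ALTER SYSTEM SET "_fix_control"='23596611:OFF' SCOPE=BOTH;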

Hope It Helped!
Prashant Dixit


Installing and configuring Oracle 21c using the RPM method.

Posted by FatDBA on September 6, 2021

Hi Folks,

I know there are already a few posts out there explaining how to install the Oracle 21c database using RPMs, but this one covers both installing the software and creating a test PDB database after the RPM installation using the ‘configure’ command.

Alright, so let me first install the oracle-database-preinstall-21c package, which will do all the pre-work for you.

[root@localhost ~]#
[root@localhost ~]# yum install oracle-database-preinstall-21c.x86_64
BDB2053 Freeing read locks for locker 0x829: 3296/140273180206912
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-21c.x86_64 0:1.0-1.el7 will be installed
--> Processing Dependency: ksh for package: oracle-database-preinstall-21c-1.0-1.el7.x86_64
--> Running transaction check
---> Package ksh.x86_64 0:20120801-142.0.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                                            Arch                       Version                                  Repository                      Size
=============================================================================================================================================================
Installing:
 oracle-database-preinstall-21c                     x86_64                     1.0-1.el7                                ol7_latest                      26 k
Installing for dependencies:
 ksh                                                x86_64                     20120801-142.0.1.el7                     ol7_latest                     882 k

Transaction Summary
=============================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 908 k
Installed size: 3.2 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/oracle-database-preinstall-21c-1.0-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for oracle-database-preinstall-21c-1.0-1.el7.x86_64.rpm is not installed
(1/2): oracle-database-preinstall-21c-1.0-1.el7.x86_64.rpm                                                                            |  26 kB  00:00:01
(2/2): ksh-20120801-142.0.1.el7.x86_64.rpm                                                                                            | 882 kB  00:00:02
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                        350 kB/s | 908 kB  00:00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) <build@oss.oracle.com>"
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 Package    : 7:oraclelinux-release-7.7-1.0.5.el7.x86_64 (@anaconda/7.7)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ksh-20120801-142.0.1.el7.x86_64                                                                                                           1/2
  Installing : oracle-database-preinstall-21c-1.0-1.el7.x86_64                                                                                           2/2
  Verifying  : oracle-database-preinstall-21c-1.0-1.el7.x86_64                                                                                           1/2
  Verifying  : ksh-20120801-142.0.1.el7.x86_64                                                                                                           2/2

Installed:
  oracle-database-preinstall-21c.x86_64 0:1.0-1.el7

Dependency Installed:
  ksh.x86_64 0:20120801-142.0.1.el7

Complete!
[root@localhost ~]#

Now that all the pre-work is done, it’s time to install the software using the RPM package, which I’ve downloaded from Oracle’s website.

[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# yum -y localinstall  oracle-database-ee-21c-1.0-1.ol7.x86_64.rpm
Loaded plugins: langpacks, ulninfo
Examining oracle-database-ee-21c-1.0-1.ol7.x86_64.rpm: oracle-database-ee-21c-1.0-1.x86_64
Marking oracle-database-ee-21c-1.0-1.ol7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-ee-21c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                                  Arch                     Version                  Repository                                                  Size
=============================================================================================================================================================
Installing:
 oracle-database-ee-21c                   x86_64                   1.0-1                    /oracle-database-ee-21c-1.0-1.ol7.x86_64                   7.1 G

Transaction Summary
=============================================================================================================================================================
Install  1 Package

Total size: 7.1 G
Installed size: 7.1 G
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-ee-21c-1.0-1.x86_64                                                                                                       1/1
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure a sample Oracle Database you can execute the following service configuration script as root: /etc/init.d/oracledb_ORCLCDB-21c configure
  Verifying  : oracle-database-ee-21c-1.0-1.x86_64                                                                                                       1/1

Installed:
  oracle-database-ee-21c.x86_64 0:1.0-1

Complete!
[root@localhost ~]#

Installation of the software is finished! Next we will create a test database named ORCLCDB and a pluggable database named ORCLPDB1.

[root@localhost ~]# /etc/init.d/oracledb_ORCLCDB-21c configure
Configuring Oracle Database ORCLCDB.
Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
43% complete
46% complete
Completing Database Creation
51% complete
54% complete
Creating Pluggable Databases
58% complete
77% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/ORCLCDB.
Database Information:
Global Database Name:ORCLCDB
System Identifier(SID):ORCLCDB
Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details.

Database configuration completed successfully. The passwords were auto generated, you must change them by connecting to the database using 'sqlplus / as sysdba' as the oracle user.
[root@localhost ~]#

Okay, it’s all set. Let’s connect to the CDB and the pluggable database that we’ve created above.
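Before running sqlplus, the oracle user’s environment needs to point at the new home. A minimal sketch, assuming the default RPM install locations:

# Default paths used by the 21c RPM install (adjust if yours differ)
export ORACLE_HOME=/opt/oracle/product/21c/dbhome_1
export ORACLE_SID=ORCLCDB
export PATH=$ORACLE_HOME/bin:$PATH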

[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Thu Sep 2 12:03:22 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL>
SQL> alter session set container = ORCLPDB1;

Session altered.


SQL> show con_name

CON_NAME
------------------------------
ORCLPDB1
SQL>

Hope It Helped!
Prashant Dixit


My top 5 Oracle 21c features …

Posted by FatDBA on September 3, 2021

Hi Guys,

Recently I started a ‘Top 5’ series where I share my top 5 features of a particular tool or product. Last time I did it for SQL Developer Command Line (SQLcl) & the TOP utility; this time it’s about my top 5 features in the Oracle 21c database.

So, without any particular order, below are my top 5 Oracle 21c features …

1. Immutable Tables:

Native blockchain and immutable tables provide in-database, insert-only tables. This type of tamper resistance helps protect against hacks and illegal changes and is a great feature added to the Oracle 21c database. Even an account with the DBA role cannot modify these tables. Immutable tables are intended for environments where an audit trail must be protected from tampering: records can be inserted, but once a record is inserted it cannot be altered or deleted except within the date constraints imposed as part of the NO DROP and NO DELETE clauses.

Let me do a demo to explain!

[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Thu Sep 2 12:03:22 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL>
SQL>
--- You will get an error if you try to create it in the root container.
SQL> create immutable table testimmu (
  id            number,
  testname         varchar2(20),
  class           number,
  created_date  date,
  constraint testimmu_pk primary key (id)
)
no drop until 1 days idle
no delete until 20 days after insert; 
create immutable table testimmu (
*
ERROR at line 1:
ORA-05729: blockchain or immutable table cannot be created in root container


SQL> alter session set container = ORCLPDB1;

Session altered.


SQL> show con_name

CON_NAME
------------------------------
ORCLPDB1
SQL>


SQL> show user
USER is "SYS"
SQL>
SQL> create immutable table testimmu (
  id            number,
  testname         varchar2(20),
  class           number,
  created_date  date,
  constraint testimmu_pk primary key (id))
no drop until 1 days idle
no delete until 20 days after insert;  

Table created.

SQL>
SQL>
SQL>
SQL> SELECT row_retention "Row Retention Period", row_retention_locked "Row Retention Lock", table_inactivity_retention "Table Retention Period" FROM dba_immutable_tables WHERE table_name = 'TESTIMMU';

Row Retention Period Row Table Retention Period
-------------------- --- ----------------------
                  20 NO                       1

SQL>
-- lets try to alter the NO DELETE clause.
SQL> alter table testimmu no delete until 60 days after insert;

Table altered.

SQL>
SQL> SELECT row_retention "Row Retention Period", row_retention_locked "Row Retention Lock", table_inactivity_retention "Table Retention Period" FROM dba_immutable_tables WHERE table_name = 'TESTIMMU';

Row Retention Period Row Table Retention Period
-------------------- --- ----------------------
                  60 NO                       1

-- What happens when anyone tries to lower down that ?
SQL>  alter table testimmu no delete until 59 days after insert;
 alter table testimmu no delete until 59 days after insert
*
ERROR at line 1:
ORA-05732: retention value cannot be lowered


-- Lets insert some data.
SQL> insert into testimmu (id, testname, class, created_date) values (10,'Elisa',50,sysdate-1);

1 row created.

SQL>
SQL> select * from testimmu;

        ID TESTNAME                  CLASS CREATED_D
---------- -------------------- ---------- ---------
        10 Elisa                        50 01-SEP-21

-- Now try to UPDATE the table record.
SQL> update testimmu set CLASS=40 where TESTNAME='Elisa';
update testimmu set CLASS=40 where TESTNAME='Elisa'
       *
ERROR at line 1:
ORA-05715: operation not allowed on the blockchain or immutable table



SQL> alter table testimmu no drop;

Table altered.

SQL>
SQL> SELECT row_retention "Row Retention Period", row_retention_locked "Row Retention Lock", table_inactivity_retention "Table Retention Period" FROM dba_immutable_tables WHERE table_name = 'TESTIMMU';

Row Retention Period Row Table Retention Period
-------------------- --- ----------------------
                  60 NO                  365000

SQL>

-- Now will try to drop it and will see what will happen.
SQL> drop table testimmu;
drop table testimmu
           *
ERROR at line 1:
ORA-05723: drop blockchain or immutable table TESTIMMU not allowed


SQL> alter table testimmu no drop until 10 days idle;
alter table testimmu no drop until 10 days idle
*
ERROR at line 1:
ORA-05732: retention value cannot be lowered

2. Compare EXECUTION PLANS:

Starting with 21c, you can now compare execution plans. This is a great in-built feature that helps you identify the differences between any two plans. Maybe a demo is the best way to explain how …

SQL> explain plan
  2  set statement_id = 'd1'
  3  for select /*+ full(bigtab) */ * from bigtab where id=840;

Explained.

SQL>
SQL> explain plan
  2  set statement_id = 'd2'
  3  for select /*+ index(bigtab) */ * from bigtab where id=840;

Explained.

SQL>


SQL> VARIABLE d varchar2(5000)
SQL> exec :d := dbms_xplan.compare_explain('d1','d2')

PL/SQL procedure successfully completed.

SQL>


SQL> print d

D
----------------------------------------------------------------------------------------------------------------

COMPARE PLANS REPORT
---------------------------------------------------------------------------------------------
  Current user           : SYS
  Total number of plans  : 2
  Number of findings     : 1
---------------------------------------------------------------------------------------------

COMPARISON DETAILS
---------------------------------------------------------------------------------------------
 Plan Number            : 1 (Reference Plan)
 Plan Found             : Yes
 Plan Source            : Plan Table
 Plan Table Owner       : SYS
 Plan Table Name        : PLAN_TABLE
 Statement ID           : d1
 Plan ID                : 1
 Plan Database Version  : 21.0.0.0
 Parsing Schema         : "SYS"
 SQL Text               : No SQL Text

Plan
-----------------------------

 Plan Hash Value  : 441133017

-----------------------------------------------------------------------
| Id  | Operation           | Name   | Rows | Bytes | Cost | Time     |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |   74 |  2590 |   71 | 00:00:01 |
| * 1 |   TABLE ACCESS FULL | BIGTAB |   74 |  2590 |   71 | 00:00:01 |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("ID"=840)


Notes
-----
- Dynamic sampling used for this statement ( level = 2 )


---------------------------------------------------------------------------------------------
 Plan Number            : 2
 Plan Found             : Yes
 Plan Source            : Plan Table
 Plan Table Owner       : SYS
 Plan Table Name        : PLAN_TABLE
 Statement ID           : d2
 Plan ID                : 2
 Plan Database Version  : 21.0.0.0
 Parsing Schema         : "SYS"
 SQL Text               : No SQL Text

Plan
-----------------------------

 Plan Hash Value  : 3941851520

--------------------------------------------------------------------------------------------
| Id  | Operation                             | Name      | Rows | Bytes | Cost | Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |           |   74 |  2590 |   87 | 00:00:01 |
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | BIGTAB    |   74 |  2590 |   87 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | IDX_TESTA |   74 |       |    1 | 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("ID"=840)


Notes
-----
- Dynamic sampling used for this statement ( level = 2 )


Comparison Results (1):
-----------------------------
 1. Query block SEL$1, Alias "BIGTAB"@"SEL$1": Access path is different -
    reference plan: FULL (line: 1), current plan: INDEX_RS_ASC (lines: 1, 2).


---------------------------------------------------------------------------------------------

3. CHECKSUM with export dumps

This is again a great feature, meant to detect any data tampering or modification in export dumps. You generate the checksum at export time, and if someone modifies any data in the dump (via any editing tool), it will be flagged at import time.

-- To create a checksum at the time of exporting the data in to dump.
[oracle@localhost admin]$ expdp system/*****@*** DIRECTORY=testdirectory1 DUMPFILE=bigtabtestbkp.dmp tables=bigtab CHECKSUM=YES

-- To verify checksum during import
[oracle@localhost admin]$ impdp system/*****@*** DUMPFILE=bigtabtestbkp.dmp verify_checksum=yes

4. Zero Downtime Timezone Upgrade:

Previously, one problem was that a DST patch required “startup upgrade”; it was not a RAC-rolling patch, and standby databases had to be in MOUNT mode. With this feature, these patches are now RAC-rolling and standby databases can remain OPEN. This is really cool! Who knows, they may even become part of RUs & RURs 🙂

5. MULTIVALUE INDEX on JSON data.

I’ve recently started using JSON data for one of our applications, and I know how difficult searches using the JSON_EXISTS or JSON_QUERY operators in a query used to be.
Now in 21c, the new CREATE MULTIVALUE INDEX syntax allows you to create a functional index on arrays of strings or numbers within a JSON-type column. Each unique value within the array becomes a searchable index entry. This avoids the need for full JSON scans to find values within arrays in JSON columns when searching with the JSON_EXISTS or JSON_QUERY operators.

CREATE MULTIVALUE INDEX idx_jsndatest ON mytable tempjsdata (tempjsdata.jcol.item_grade.numberOnly());

There are a few other good features, like automatic materialized views, SQL macros, expressions in initialization parameters (e.g. alter system set pga_aggregate_target=’sga_target/2‘ is now possible) etc., available in 21c.

Hope It Helped
Prashant Dixit


With Oracle 21c, now you have Clusterware REST API …

Posted by FatDBA on September 1, 2021

Hi Guys,

While reading the official 21c database guides, I came across something really cool: starting with 21c, you have a Clusterware REST API 🙂 … REST APIs were available in previous versions too, but not for the Clusterware command line utilities; with the release of 21c, this is finally here …

The REST application programming interfaces (APIs) for Oracle Clusterware let you remotely execute commands on your database cluster, whether in the Oracle Cloud, at remote physical locations, or deployed locally. When running commands remotely through the REST interface, you get back information about the execution, including output, error codes, and execution durations. The REST interface provides secure support for Oracle Clusterware command line utilities like CRSCTL, CLUVFY and SRVCTL.

REST APIs for Oracle Clusterware expect the Cross Cluster Domain Protocol (CDP) daemon to be running on all of the SCAN VIPs of the cluster. To support requests from outside the cluster, you can run the srvctl modify cdp command to provide a list of IPs or networks in CIDR format.

Below are a few of the cluster commands you can run using the REST APIs …

To enable connections from outside the cluster, run the following commands.

$ srvctl start cdp
$ srvctl modify cdp -allow "ip/networkid1,ip/networkid2,.."

You can view the configuration information with the following command

$ srvctl config cdp

Get the list of all homes

curl -k -X GET https://scan-name:port/grid/cmd/v1/cmd/ --user admin:DixitTestPassword 

Create a job (crsctl) and monitor the status

curl -k -X POST \
    https://scan-name:port/grid/cmd/v1/cmd/exec \
    '-H "accept: text/plain,text/javascript,application/json"' \
    '-H "content-type: application/vnd.oracle.resource+json;type=singular"' \
     --user admin:DixitTestPassword \
    '-d  {"command" : ["crsctl", "stat", "res", "-t"], "runAsUser":"osUser", "userPassword":"osPasswd"}'

curl -k -X GET https://scan-name:port/grid/cmd/v1/cmd/jobs/myJobId --user admin:DixitTestPassword 

Monitor the status of all jobs

curl -k -X GET \
    https://scan-name:port/grid/cmd/v1/cmd/jobs/ --user admin:DixitTestPassword 

Delete a job

curl -k -X DELETE \ 
         https://scan-name:port/grid/cmd/v1/cmd/jobs/myJobId --user admin:DixitTestPassword 

Hope It Helped!
Prashant Dixit


Oracle zero downtime migration 21.2

Posted by FatDBA on August 31, 2021

I am happy and excited, and I guess this time not much writing or explanation is needed, as this single image is enough 🙂
Pic courtesy: Oracle Corp

Hope It Helped!
Prashant Dixit


When and why the optimizer switched from CBO to RBO mode … Why can’t I see that in the 10053 trace?

Posted by FatDBA on August 28, 2021

Hi Guys,

Recently I was working on a performance issue where one critical SQL statement started consuming more time. After a quick check I saw that a switch had happened from CBO to RBO mode, but I wasn’t sure when and why the optimizer mode was switched. The expected answer to my quest was to generate the debug 10053 trace file, to get some insight into the cost-based optimizer’s internal calculations, check cardinality and selectivity, and draw a good parallel with the way the cost of a table, index or sort etc. is calculated.

Usually the best way to work out what’s going on in this situation is to look at the optimizer debug trace, the 10053 trace file. I always prefer to generate optimizer traces in situations where the mighty optimizer has messed things up. Being a performance consultant, it has saved me many times in the past; always a good bet for me.
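For reference, a minimal way to capture a 10053 trace for a single statement looks like the sketch below (the statement is just a placeholder; note that the trace is only written when the statement is hard parsed):

-- Enable the optimizer debug trace, run a freshly parsed statement, then turn it off
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
SELECT /* force_fresh_parse */ COUNT(*) FROM dual;
ALTER SESSION SET EVENTS '10053 trace name context off';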

But this time it looked a little different: I couldn’t see the details about why the optimizer switched modes in the ‘Query‘ section of the trace. I was totally perplexed; I mean, this was not the first time I was looking for that information in a trace file. Why wasn’t it there, what happened .. 😦

This was an Oracle 19.3.0.0.0 database running on RHEL. I tried metalink and found one document specific to this issue; luckily, it was all happening due to a known bug, 31130156. The problem was later solved after we applied the bug-fix patch, and we could then interpret the reason for the mode switch (I will write another post about the core problem) …

Note: It can be very difficult to interpret the 10053 optimizer trace if you don’t have any prior experience with it. I recommend readers check out a great document written by Wolfgang Breitling, titled ‘A Look Under The Hood Of CBO’.

Hope It Helped!
Prashant Dixit


 