Today, one of the customer's application architects asked me during a team meeting whether it is possible to have the SQL ID in advance, before the SQL is executed. Luckily, I knew that's possible (through Connor McDonald's blog post on the same subject) and I was immediately able to answer the customer with a confident 'Yes, that's possible!' 🙂
When I asked why he wanted to do that, he said they are planning to integrate a new module into their CRM application, first in the development environment, where they want to catch all expensive SQLs and their IDs in advance, and later compare them against the UAT environment to anticipate SQL runtime latencies. That, I guess, is a valid reason to have the SQL IDs in advance.
If you're using Oracle 18c or above, you can use SET FEEDBACK ON SQL_ID, which displays the SQL_ID of the currently executed SQL or PL/SQL statement after the results. This is very useful, and a huge time saver, when all you need is the SQL_ID. When feedback for SQL_ID is ON, the value of the SQL_ID is also assigned to the predefined variable _SQL_ID.
And if you're running below 18c, you can use the DBMS_SQL_TRANSLATOR package. It provides an interface for creating, configuring, and using SQL translation profiles, and you can use its SQL_ID function to compute the SQL ID of a statement in advance, without executing it.
Let's do the demo for both! Though I am running this on 12c, I can still use both options, because I am connecting to the database via SQLcl (the SQL Developer command-line interface) version 21.4, which has both options available.
[oracle@ontadomain.fatdba bin]$ ./sql ontadbschema/xxxxxxxxx@testdb_ha
SQLcl: Release 21.4 Production on Tue Jan 04 02:58:04 2022
Copyright (c) 1982, 2022, Oracle. All rights reserved.
Last Successful login time: Tue Jan 04 2022 02:58:06 +02:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> select * from dixdroid.testtable;
STATS_CORRNAME STATS_CORRVALUE STATS_CORRDESC
_____________________________ __________________ _________________
DEFAULT_LOCK_DURATI 60 SECONDS
FIX_ALLOWED_LOCK_DURATI 120 SECONDS
FIX_RETURNABLE_NUMBERS 200
FIX_RETURNABLE_RANGES 10
DEFAULT_RESERVATION_DURATI 60 MINUTES
SQL>
-- Now trying the first approach which will work on < 18c versions.
SQL>
SQL> select dbms_sql_translator.sql_id('select * from dixdroid.testtable where STATS_CORRVALUE > 150') from dual;
1 row selected.
SQL_ID: 16jcpmjs087ct
-- Trying the second approach using SET FEEDBACK which works great if running >= 18c
SQL> set feedback only sql_id
SQL> select * from dixdroid.testtable where STATS_CORRVALUE > 150;
1 row selected.
SQL_ID: 16jcpmjs087ct
SQL>
SQL>
For the last few weeks I have been busy with some CI/CD integrations using Liquibase. This was the first time I used Liquibase and I immediately fell in love with this brilliant tool, which you can use for tracking, managing, automating, and applying database schema changes. It is gaining popularity as a DevOps tool to automate your database deployments.
Today's post is about how to integrate Liquibase with Oracle databases. I did all of the demos on Oracle database version 19.3.0.0.0 on RHEL8, using Liquibase community version 4.6.2. You can download the latest version from https://www.liquibase.org/download and they also have certified courses available on their university website https://learn.liquibase.com/
Okay, let's quickly build the playground for the demos. I will first un-tar the file that I downloaded from their website.
Next, let's add Liquibase to the PATH variable in .bash_profile, export PATH=$PATH:/root/liquibase (my Liquibase un-tar directory), so the executable can be called from anywhere. With that you're all set to use Liquibase; yes, you only need to unzip/un-tar the software and you are ready to go. Installation and configuration are both very easy and straightforward; a minimal sketch of these two steps follows.
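Something like the below, assuming the downloaded archive is named liquibase-4.6.2.tar.gz (the exact file name is an assumption) and /root/liquibase is the target directory:
# un-tar the downloaded archive into /root/liquibase (archive name assumed)
mkdir -p /root/liquibase
tar -xzf liquibase-4.6.2.tar.gz -C /root/liquibase
# make the liquibase executable callable from anywhere
echo 'export PATH=$PATH:/root/liquibase' >> ~/.bash_profile
source ~/.bash_profile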
Here in this demo I will pass all options and flags directly on the Liquibase CLI for better understanding, but I recommend creating a properties file and putting all your configuration entries there, like the one shared below.
Here changeLogFile is the changelog file to use; driver is the database driver class name, oracle.jdbc.OracleDriver since I am doing this demo on an Oracle database; classpath points to the JAR file containing the migration files and the JDBC driver; url is the database JDBC URL (hostname:port/service); username and password are the database credentials; outputFile sends output to a file; loglevel controls the amount of messages generated when running Liquibase commands, with possible values SEVERE, WARNING, INFO, FINE, and OFF; and liquibase.hub.mode disables Hub mode for Liquibase. There are a whole lot of other parameters, documented at https://docs.liquibase.com/
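A sample liquibase.properties reflecting those flags; the values come from this demo environment, so treat it as a template (a liquibase.properties file in the working directory is picked up automatically, which is why later commands get away with only a few flags):
changeLogFile: changelogfile.sql
driver: oracle.jdbc.OracleDriver
classpath: /root/liquibase/lib/ojdbc8-18.3.0.0.jar
url: jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb
username: liquibase
password: liquibase
outputFile: output_local.sql
logLevel: INFO
liquibase.hub.mode: off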
Next I am going to create a test schema, where I will create objects later on and track the changes using Liquibase.
[oracle@localhost ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Dec 30 02:42:18 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> create user dixdroid identified by dixdroid;
User created.
-- Granting SYSDBA to the test user for demo purpose only
SQL> grant connect, sysdba to dixdroid;
Grant succeeded.
SQL> conn dixdroid
Enter password:
Connected.
SQL>
-- Next, I will add some test data to this schema.
-- Will create a Table, insert few rows, index, function and a sequence.
SQL> @testdata.sql
Table created.
1 row created.
1 row created.
Index created.
Table created.
Sequence created.
1 row created.
Function created.
Commit complete.
SQL>
SQL>
SQL> col object_name for a30
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
Now I will check whether the Liquibase connection is successful; for that you can use the status command.
[root@localhost liquibase]# liquibase --username=liquibase --password=liquibase --changeLogFile=changelogfile.sql status
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 04:35:09 (version 4.6.2 #886 built at 2021-11-30 16:20+0000)
Liquibase Version: 4.6.2
Liquibase Community 4.6.2 by Liquibase
Output saved to /root/liquibase/output_local.sql
Liquibase command 'status' was executed successfully.
[root@localhost liquibase]# more /root/liquibase/output_local.sql
LIQUIBASE@jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb is up to date
[root@localhost liquibase]#
Next we need to generate the 'changelog'. Liquibase uses a changelog to sequentially list all changes made to your database. Think of it as an account book or a daybook: a file that contains a record of all your database changes (changesets). Liquibase uses this changelog to inspect your database and execute any changes that have not yet been applied.
[root@localhost liquibase]# liquibase --driver=oracle.jdbc.OracleDriver --changeLogFile=changelogfile.sql --classpath=/root/liquibase/lib/ojdbc8-18.3.0.0.jar --url="jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb" --username=liquibase --password=liquibase --defaultSchemaName=dixit generateChangeLog
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 22:44:36 (version 4.6.2 #886 built at 2021-11-30 16:20+0000)
Liquibase Version: 4.6.2
Liquibase Community 4.6.2 by Liquibase
BEST PRACTICE: The changelog generated by diffChangeLog/generateChangeLog should be inspected for correctness and completeness before being deployed.
When generating formatted SQL changelogs, it is important to decide if batched statements
should be split or not. For storedlogic objects, the default behavior is 'splitStatements:false'.
All other objects default to 'splitStatements:true'. See https://docs.liquibase.org for additional information.
Generated changelog written to /root/liquibase/changelogfile.sql
Output saved to /root/liquibase/output_local.sql
Liquibase command 'generateChangelog' was executed successfully.
[root@localhost liquibase]#
Next we will run the 'updateSQL' command, a helper command that lets you inspect the SQL that Liquibase will run during the update command. Use updateSQL when you want to review the raw SQL before running update, so you can correct any issues before they hit the database; the invocation is shown below.
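For reference, the run looked along these lines (same connection flags as the generateChangeLog run above; the exact command was not preserved in this post, so treat it as a sketch):
[root@localhost liquibase]# liquibase --driver=oracle.jdbc.OracleDriver --changeLogFile=changelogfile.sql --classpath=/root/liquibase/lib/ojdbc8-18.3.0.0.jar --url="jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb" --username=dixdroid --password=dixdroid updateSQL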
Let's examine the output_local.sql file generated by that updateSQL run.
[root@localhost liquibase]#
[root@localhost liquibase]# more /root/liquibase/output_local.sql
-- *********************************************************************
-- Update Database Script
-- *********************************************************************
-- Change Log: changelog.sql
-- Ran at: 12/29/21 10:46 PM
-- Against: DIXDROID@jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb
-- Liquibase version: 4.6.2
-- *********************************************************************
-- Create Database Lock Table
CREATE TABLE DIXDROID.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID));
-- Initialize Database Lock Table
DELETE FROM DIXDROID.DATABASECHANGELOGLOCK;
INSERT INTO DIXDROID.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0);
-- Lock Database
UPDATE DIXDROID.DATABASECHANGELOGLOCK SET LOCKED = 1, LOCKEDBY = 'localhost.ontadomain (192.168.154.142)', LOCKGRANTED = TO_TIMESTAMP('2021-12-29 22:46:27.695', 'YYYY-MM-DD HH24:MI:SS.FF') WHERE ID = 1 AND LOCKED = 0;
-- Create Database Change Log Table
CREATE TABLE DIXDROID.DATABASECHANGELOG (ID VARCHAR2(255) NOT NULL, AUTHOR VARCHAR2(255) NOT NULL, FILENAME VARCHAR2(255) NOT NULL, DATEEXECUTED TIMESTAMP NOT NULL, ORDEREXECUTED INTEGER NOT NULL, EXECTYPE VARCHAR2(10) NOT NULL, MD5SUM VARCHAR2(35), DESCRIPTION VARCHAR2(255), COMMENTS VARCHAR2(255), TAG VARCHAR2(255), LIQUIBASE VARCHAR2(20), CONTEXTS VARCHAR2(255), LABELS VARCHAR2(255), DEPLOYMENT_ID VARCHAR2(10));
-- Changeset changelog.sql::1640835878960-1::root
CREATE TABLE DIXIT1 (ID NUMBER NOT NULL, DESCRIPTION VARCHAR(50), CONSTRAINT DIXIT1_PK PRIMARY KEY (ID));
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('1640835878960-1', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 1, '8:17e41ee520cc38d8600cb88325a89679', 'sql', '', 'EXECUTED', NULL, NULL, '4.6.2', '0835988346');
-- Changeset changelog.h2.sql::1640835878960-2::root
CREATE SEQUENCE DIXIT1_SEQ START WITH 21 MAXVALUE 9999999999999999999999999999;
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('1640835878960-2', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 2, '8:8b670db06f2e0ad02cf1deeec1f9b79b', 'sql', '', 'EXECUTED', NULL, NULL, '4.6.2', '0835988346');
-- Changeset changelog.sql::1640835878960-3::root
CREATE TABLE TEST (ID NUMBER(10, 0), NAME VARCHAR(30));
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('1640835878960-3', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 3, '8:e56593db5135656a87586790c3d5b671', 'sql', '', 'EXECUTED', NULL, NULL, '4.6.2', '0835988346');
-- Changeset changelog.sql::1640835878960-4::root
CREATE INDEX IDX_TEST1 ON TEST(ID, NAME);
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('1640835878960-4', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 4, '8:178519da11977278e2192549595aed7b', 'sql', '', 'EXECUTED', NULL, NULL, '4.6.2', '0835988346');
-- Release Database Lock
UPDATE DIXDROID.DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKEDBY = NULL, LOCKGRANTED = NULL WHERE ID = 1;
[root@localhost liquibase]#
Now, after careful examination of that .sql file, it is time to call the 'update' command, which deploys any changes that are in the changelog file and have not yet been deployed to your database. During this first run it also creates the DATABASECHANGELOG table, used to track which changesets have been run, and the DATABASECHANGELOGLOCK table, which ensures only one instance of Liquibase runs at a time.
When you run the update command, Liquibase sequentially reads the changesets in the changelog file, then compares the unique identifiers of id, author, and path to filename against the values stored in the DATABASECHANGELOG table. A minimal invocation is shown below.
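Assuming the remaining connection settings come from the properties file, the run looks along these lines (a sketch; the exact command was not preserved in this post):
[root@localhost liquibase]# liquibase --username=dixdroid --password=dixdroid --changeLogFile=changelogfile.sql update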
You can see both of the new Liquibase-specific tables, DATABASECHANGELOGLOCK and DATABASECHANGELOG, created under the schema.
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
DATABASECHANGELOGLOCK TABLE
PK_DATABASECHANGELOGLOCK INDEX
DATABASECHANGELOG TABLE
9 rows selected.
SQL> desc DATABASECHANGELOG
Name Null? Type
----------------------------------------- -------- ----------------------------
ID NOT NULL VARCHAR2(255)
AUTHOR NOT NULL VARCHAR2(255)
FILENAME NOT NULL VARCHAR2(255)
DATEEXECUTED NOT NULL TIMESTAMP(6)
ORDEREXECUTED NOT NULL NUMBER(38)
EXECTYPE NOT NULL VARCHAR2(10)
MD5SUM VARCHAR2(35)
DESCRIPTION VARCHAR2(255)
COMMENTS VARCHAR2(255)
TAG VARCHAR2(255)
LIQUIBASE VARCHAR2(20)
CONTEXTS VARCHAR2(255)
LABELS VARCHAR2(255)
DEPLOYMENT_ID VARCHAR2(10)
Now that the baseline is created, we can build the next version of the database. I have created four new SQL files to create a new sequence, a view, a function, and a table with a few records and an index.
[root@localhost liquibase]# ls -ltrh *.sql*
-rwxrwxrwx. 1 root root 88 Dec 29 23:03 seq2.sql
-rwxrwxrwx. 1 root root 65 Dec 29 23:05 view1.sql
-rwxrwxrwx. 1 root root 172 Dec 29 23:07 func2.sql
-rwxrwxrwx. 1 root root 159 Dec 30 00:34 tab2.sql
I will now create the master.xml file, which acts as a master index: an ordered list of all changelogs. I have added a master.xml with the following contents. includeAll path="/root/liquibase" is the XML tag that lets you specify a directory containing multiple changelog files, and include file="./changelog1.xml" references the changelog XML file I created, which is called by master.xml.
If you check the 'changelog1.xml' file, it defines the order you want Liquibase to follow: in my example it will first create the sequence by reading seq2.sql, then run the table script tab2.sql, and finally create the function from func2.sql. A few attributes used in the XML are worth explaining. relativeToChangelogFile="true" is there because we are using relative paths. id assigns a unique identifier to the changeset. endDelimiter can be set on a sql or sqlFile change type to override the default value of ';'; it can be set to '' or to a character other than ';' to indicate the end of a SQL statement. stripComments, when set to true, removes any comments from the SQL before executing; it defaults to true if not set.
[root@localhost liquibase]# more master.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.9.xsd">
<includeAll path="/root/liquibase"/>
<include file="./changelog1.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>
[root@localhost liquibase]#
-- Changelog XML file that is called by the above master.xml file.
[root@localhost liquibase]# more changelog1.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.9.xsd">
<changeSet author="dixdroid" id="seq2">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./seq2.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="false"/>
</changeSet>
<changeSet author="dixdroid" id="table2">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./tab2.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="false"/>
</changeSet>
<changeSet author="dixdroid" id="getcount" runOnChange="true">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./func2.sql"
relativeToChangelogFile="true"
splitStatements="false"
stripComments="false"/>
</changeSet>
</databaseChangeLog>
Now let's run the Liquibase 'update' command using master.xml as the new changeLogFile, which will apply all the changes listed in master.xml and the related changelog1.xml, as shown below.
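The invocation is the same as the one shown later for the view step, just pointed at the new master changelog:
[root@localhost liquibase]# liquibase --username=dixdroid --password=dixdroid --changeLogFile="master.xml" update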
Let's check whether all the new objects were created in the database; we expect the table TESLA, its index IDX_TESLA, and the RETURNTABLECOUNT function.
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
DATABASECHANGELOGLOCK TABLE
PK_DATABASECHANGELOGLOCK INDEX
DATABASECHANGELOG TABLE
TESLA TABLE
IDX_TESLA INDEX
RETURNTABLECOUNT FUNCTION
12 rows selected.
-- Let's check the DATABASECHANGELOG table to view all details about this schema-level change.
SQL> select id, AUTHOR,FILENAME,DATEEXECUTED,ORDEREXECUTED,DESCRIPTION,LIQUIBASE,DEPLOYMENT_ID from DATABASECHANGELOG;
ID AUTHOR FILENAME DATEEXECUTED ORDEREXECUTED DESCRIPTIO LIQUIBASE DEPLOYMENT
---------- ---------- --------------- ------------------------------ ------------- ---------- -------------------- ----------
seq2 dixdroid changelog1.xml 30-DEC-21 12.33.02.051131 AM 1 sqlFile 4.6.2 0842381718
table2 dixdroid changelog1.xml 30-DEC-21 12.36.11.790990 AM 2 sqlFile 4.6.2 0842571528
getcount dixdroid changelog1.xml 30-DEC-21 12.36.11.937545 AM 3 sqlFile 4.6.2 0842571528
Perfect! Let me add one more object to the schema and see what happens next and how the details get added to the changelog table. This time I will create a new VIEW using a SQL file named view1.sql, wired in through master.xml and a new changelog file, as sketched below.
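The original contents of the new changelog were not preserved in this post, but per the FILENAME column in the query further down it was changelog2.xml, following the same pattern as changelog1.xml (with master.xml gaining a matching include line); a sketch:
[root@localhost liquibase]# more changelog2.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.9.xsd">
<changeSet author="dixdroid" id="view1">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./view1.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="false"/>
</changeSet>
</databaseChangeLog>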
Time to run the update command once again, using the modified master.xml, to create the view.
[root@localhost liquibase]#
[root@localhost liquibase]# liquibase --username=dixdroid --password=dixdroid --changeLogFile="master.xml" update
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 00:57:25 (version 4.6.2 #886 built at 2021-11-30 16:20+0000)
Liquibase Version: 4.6.2
Liquibase Community 4.6.2 by Liquibase
Output saved to /root/liquibase/output_local.sql
Liquibase command 'update' was executed successfully.
[root@localhost liquibase]#
[root@localhost liquibase]#
-- Let's check if the VIEW named PEEK is created or not!
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
-------------------------------------------------------------------------------------------------------------------------------- -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
DATABASECHANGELOGLOCK TABLE
PK_DATABASECHANGELOGLOCK INDEX
DATABASECHANGELOG TABLE
TESLA TABLE
IDX_TESLA INDEX
RETURNTABLECOUNT FUNCTION
PEEK VIEW
13 rows selected.
SQL> select id, AUTHOR,FILENAME,DATEEXECUTED,ORDEREXECUTED,DESCRIPTION,LIQUIBASE,DEPLOYMENT_ID from DATABASECHANGELOG;
ID AUTHOR FILENAME DATEEXECUTED ORDEREXECUTED DESCRIPTIO LIQUIBASE DEPLOYMENT
---------- ---------- --------------- ------------------------------ ------------- ---------- -------------------- ----------
seq2 dixdroid changelog1.xml 30-DEC-21 12.33.02.051131 AM 1 sqlFile 4.6.2 0842381718
table2 dixdroid changelog1.xml 30-DEC-21 12.36.11.790990 AM 2 sqlFile 4.6.2 0842571528
getcount dixdroid changelog1.xml 30-DEC-21 12.36.11.937545 AM 3 sqlFile 4.6.2 0842571528
view1 dixdroid changelog2.xml 30-DEC-21 12.57.31.827752 AM 4 sqlFile 4.6.2 0843851310
Great, it's there! You can use Liquibase as a core DevOps tool to track schema changes, for deployments and automation, integrate it with your CI/CD pipelines, or use it as a migration tool. There are multiple use cases; for more details check their official website.
Last week I was looking into a database problem where the customer reported database hangs when trying to use their Java-based application, which needs to connect to the database frequently. This was a brand new platform where they were doing UAT testing, using a WebLogic they had recently configured with connection pooling enabled.
The application team reported frequent 'ORA-00018: maximum number of sessions exceeded' errors in the alert log, and that was a big clue. The operations team tried many times to identify all INACTIVE sessions (the session limit was 800 in the DB) and kill them manually, but new JDBC Thin Client sessions got spawned every time ... I was fairly sure this was a JDBC connection leak issue, and those are always difficult to identify!
This is what it looked like: the session peak at the time of the issue.
Now, how do you detect this problem? Though there were multiple other performance problems on this platform, none of them seemed related to the SQLs, as the execution plans were fine and reflected a well designed SQL workload. So I suspected the application, started with the WebLogic server logs, and there I saw multiple instances of the warning 'BEA-000627: Reached maximum capacity of pool "cgDataSource", making "0" new resource instances instead of "1"'. That once again solidified my initial assumption that it was all happening because the application code was not closing connections properly: connection leakage.
A leaked connection is a connection that was not properly returned to the connection pool of the data source. To automatically recover leaked connections, you can specify a value for Inactive Connection Timeout on the JDBC data source. When you set a value, WebLogic Server forcibly returns a connection to the pool when there is no activity on a reserved connection for the number of seconds you specify. When set to 0 (the default), the feature is turned off. So we needed to set this one parameter on the data source to force-close the connections that were not closed by the application.
Click on Services --> Data Sources --> Click on the Data Source you want to configure --> Click on Connection Pool --> Click on Advanced --> “Inactive Connection Timeout”
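If you prefer editing the data source module XML over the console, the same setting lives under jdbc-connection-pool-params; a hedged sketch (the 300-second value here is just an example, not the value we finally chose):
<jdbc-data-source>
  <jdbc-connection-pool-params>
    <!-- seconds of inactivity after which a reserved connection is forcibly reclaimed -->
    <inactive-connection-timeout-seconds>300</inactive-connection-timeout-seconds>
  </jdbc-connection-pool-params>
</jdbc-data-source>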
And it was set to its default value of 0, meaning no inactive connections would ever be reclaimed; they would stay in the database and eat into the session limit. After careful observation and a discussion with the team, we set it to a reasonable value and bounced the WebLogic Admin and Managed Servers (MS) to make the change persistent.
But was this the solution to the problem? No, it was not! It is only a remedy, to handle the situation immediately while the root cause is investigated. I later went through the log files to isolate the culprit class. I also noticed that after we had set 'Inactive Connection Timeout', multiple BEA-001153 warnings were captured in the logs. This was because the code does not close its connections and was relying on the inactive connection timeout to trigger closure of those objects, which caused the warning messages in the admin server logs.
<Dec 3, 2021 10:12:34 AM GMT>
<Warning> <JDBC> <BEA-001153> <Forcibly releasing inactive connection "weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_xxCConnection@xxx" back into the connection pool "xxxxxxxx-xxx", currently reserved by: java.lang.Exception
at weblogic.jdbc.common.internal.ConnectionEnv.setup(ConnectionEnv.java:325)
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:363)
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:329)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:417)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:324)
at weblogic.jdbc.common.internal.MultiPool.searchLoadBalance(MultiPool.java:312)
at weblogic.jdbc.common.internal.MultiPool.findPool(MultiPool.java:180)
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:89)
at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(RmiDataSource.java:350)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:369)
at oracle.jbo.server.DBTransactionImpl.establishNewConnection(DBTransactionImpl.java:990)
.....
..........
Later I collected JDBC Diagnostic Dumps via admin console to detect and further troubleshoot this JDBC Connection Leak.
Dumping Resource Pool:cgDataSource
Resource Pool:cgDataSource:dumpPool Current Capacity = 4
Resource Pool:cgDataSource:dumpPool dumping available resources, #entries = 0
Resource Pool:cgDataSource:dumpPool dumping reserved resources, #entries = 4
Resource Pool:cgDataSource:dumpPool reserved[0] = autoCommit=true, enabled=true, isXA=true, isJTS=false, vendorID=11, connUsed=true, doInit=false, 'null', destroyed=false, poolname=cgDataSource, appname=null, moduleName=null, connectTime=4941, dirtyIsolationLevel=false, initialIsolationLevel=2, infected=false, lastSuccessfulConnectionUse=1344715611974, secondsToTrustAnIdlePoolConnection=10, currentUser=java.lang.Exception
at weblogic.jdbc.common.internal.ConnectionEnv.setup(ConnectionEnv.java:308)
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:314)
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:292)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:425)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:316)
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:93)
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:61)
at weblogic.jdbc.jta.DataSource.getXAConnectionFromPool(DataSource.java:1473)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1302)
at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:425)
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:382)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:338)
at troubleshooting.servlets.JdbcConnections.service(JdbcConnections.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
......
.........
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:338)
at troubleshooting.servlets.JdbcConnections.service(JdbcConnections.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
.........
...............
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:338)
at troubleshooting.servlets.JdbcConnections.service(JdbcConnections.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
.....
......
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:338)
at troubleshooting.servlets.JdbcConnections.service(JdbcConnections.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
......
.........
If you look carefully at the dumps above, the method that initiates the connection to the database is troubleshooting.servlets.JdbcConnections.service(), specifically at line 97 of the class JdbcConnections.java, as the stack shows.
We went back to the source code of the application (dummy code):
} else if (i == -1) {
try {
InitialContext localInitialContext2 = new InitialContext();
DataSource localDataSource = (DataSource)localInitialContext2.lookup("cgDataSource"); <========== HERE IS WHERE THE LEAK STARTS
localObject = localDataSource.getConnection();
localObject = null; <========== HERE IS WHERE THE LEAK OCCURS, WHEN OBJECT IS SET TO NULL BUT CONNECTION NOT CLOSED.
System.out.println("xxxxxxxxx.");
localInitialContext2.close();
} catch (Exception localException2) {
localPrintWriter.println("An exception has been thrown while trying to increase the number of JDBC connection to " + i + ", the error is<br><br>");
The application team immediately recognized the cause: the connection needed to be closed explicitly. It was later fixed by the application team to something like the code below.
try {
InitialContext localInitialContext2 = new InitialContext();
DataSource localDataSource = (DataSource)localInitialContext2.lookup("cgDataSource");
localObject = localDataSource.getConnection();
System.out.println("xxxxxxxxx.");
localInitialContext2.close();
} catch (Exception localException2) {
localPrintWriter.println("An exception has been thrown while trying to increase the number of JDBC connection to " + i + ", the error is<br><br>");
} finally {
// explicitly close the connection so it is returned to the pool
try {
if (localObject != null) localObject.close();
} catch (Exception ignore) { }
localObject = null;
}
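On a modern JDK, the cleaner idiom is try-with-resources, which returns the connection to the pool even when an exception is thrown mid-block; a minimal sketch under the same assumptions as the dummy code above (cgDataSource JNDI name, running inside the container):
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledWork {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        try {
            DataSource ds = (DataSource) ctx.lookup("cgDataSource");
            // the connection is closed (returned to the pool) automatically,
            // even if an exception is thrown inside the block
            try (Connection conn = ds.getConnection()) {
                System.out.println("got a pooled connection: " + conn);
            }
        } finally {
            ctx.close();
        }
    }
}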
So that's how it got resolved, by changing the problematic code. Connection leakage is a very tricky scenario that requires careful observation, primarily of the application code.
Today, while analyzing a very complex and big SQL query (402 lines), I tried to use the Real-Time SQL Monitoring report to get some quick stats about the execution, but it failed and didn't show me any output.
Luckily I found a great article written by my friend Mohamed Houri, and a MOS note for the problem (Doc ID 1613163.1), which says that there is a default limitation of 300 plan lines; the threshold is in place to avoid spending too much time processing these large statements at the expense of other activities. So anything that has more than 300 plan lines won't be monitored!
The fix is a hidden parameter that can be used to raise the limit; you can change it dynamically or set it in your SPFILE.
-- To change the default limit to any user defined value.
alter system set "_sqlmon_max_planlines"=450 scope=both;
-- or set it in your SPFILE
_sqlmon_max_planlines=450
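To verify the current value first, you can query the underscore parameter as SYS (and, as always, change hidden parameters only under Oracle Support guidance):
-- check the current value of the hidden parameter (run as SYS)
select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm = '_sqlmon_max_planlines';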
Recently, while doing a database migration/upgrade project, we encountered a strange case where the orachk utility caught a new security vulnerability (CVE-2021-44228) on the newly upgraded platform. It related to the log4j logging library used by the customer's EBS (E-Business Suite version 12.2) middleware. The vulnerability is in log4j's JNDI features, which do not protect against attacker-controlled LDAP and other JNDI-related endpoints, and it comes with a 10-out-of-10 severity score.
We checked with Oracle customer support and they asked us to apply a workaround (link below). I later found that it is not only Oracle products; many other applications and cloud services are impacted. This weakness poses a significant risk and needs to be patched right away!
I would like to discuss an interesting problem that I recently faced while supporting a customer migration, where one critical SQL statement turned out to be very slow on the new infrastructure. They moved from a 12.1 standalone database to a 12.2 two-node RAC cluster on brand new hardware, but with similar hardware and resource configurations as earlier.
As per the team, one of the core application APIs was behaving very slowly while processing requests, and was also failing due to time limits applied within the application; hence the entire system became slow after they moved the workload to this new system.
The query combined the results of multiple subqueries with the UNION ALL operator, which returns all rows since it does not eliminate duplicate selected rows. The query was frequently waiting on the User I/O wait class, especially on the 'direct path read' and 'db file sequential read' events, and took ~2.7 minutes to complete, though it was expected to complete in less than a second. The query text was something like this ...
(select * from (select * from dix_table_A where perfor_column_12=:1 and testcas_idnu=:2 ) where rownum <= :"SYS_B_0")
union all (select * from (select * from dix_table_A where perfor_column_12=:3 and testcas_idnu=:4 ) where rownum <= :"SYS_B_1")
union all (select * from (select * from dix_table_A where perfor_column_12=:5 and testcas_idnu=:6 ) where rownum <= :"SYS_B_2")
union all (select * from (select * from dix_table_A where perfor_column_12=:7 and testcas_idnu=:8 ) where rownum <= :"SYS_B_3");
SQL_ID PLAN_HASH_VALUE EXECS AVG_ETIME AVG_CPU_TIME AVG_LIO AVG_PIO
------------- --------------- ------------ ------------ ------------ -------------- ------------
8sn7dhnash901 891893112 45 162.276 101.390 19,618,917.8 11,911,560.6
--- Execution Plan of the ill/slow SQL
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 24 (100)| |
| 1 | UNION-ALL | | | | | |
| 2 | COUNT STOPKEY | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID BATCHED| DIX_TABLE_A | 187 | 17952 | 6 (0)| 00:00:03 |
| 4 | INDEX SKIP SCAN | I2DIX_TABLE_A | 168K| | 2 (0)| 00:00:03 |
| 5 | COUNT STOPKEY | | | | | |
| 6 | TABLE ACCESS BY INDEX ROWID BATCHED| DIX_TABLE_A | 187 | 17952 | 6 (0)| 00:00:03 |
| 7 | INDEX SKIP SCAN | I2DIX_TABLE_A | 168K| | 2 (0)| 00:00:03 |
| 8 | COUNT STOPKEY | | | | | |
| 9 | TABLE ACCESS BY INDEX ROWID BATCHED| DIX_TABLE_A | 187 | 17952 | 6 (0)| 00:00:03 |
| 10 | INDEX SKIP SCAN | I2DIX_TABLE_A | 168K| | 2 (0)| 00:00:03 |
| 11 | COUNT STOPKEY | | | | | |
| 12 | TABLE ACCESS BY INDEX ROWID BATCHED| DIX_TABLE_A | 187 | 17952 | 6 (0)| 00:00:03 |
| 13 | INDEX SKIP SCAN | I2DIX_TABLE_A | 168K| | 2 (0)| 00:00:03 |
-------------------------------------------------------------------------------------------------------
As the SQL text shows, the WHERE clause filters on the PERFOR_COLUMN_12 and TESTCASE_IDNU columns, and per the execution plan above, the I2DIX_TABLE_A index is picked, with INDEX SKIP SCAN as the access method for that index. An index skip scan means the leading column(s) of the index are skipped: Oracle has to probe the index for every distinct value of the leading column(s) and check whether the second (or third, etc.) column matches what you searched for. This is usually faster than a full table scan (it depends on your query), but slower than an index range scan.
I2DIX_TABLE_A is a multi-column index which does contain both of the referenced columns, PERFOR_COLUMN_12 and TESTCASE_IDNU, but as trailing columns, not as the leading columns.
-- I2DIX_TABLE_A Index DDL (BAD Index)
CREATE UNIQUE INDEX "DIXIT"."I2DIX_TABLE_A" ON "DIXIT"."DIX_TABLE_A" ("PRODUCT_CLASSIC_CAT", "BUILD_CLASSIC_ID", "TESTCASE_IDNU", "PERFOR_COLUMN_12");
While doing further analysis, I saw that there is already a dedicated index, I1DIX_TABLE_A, covering exactly this combination of columns, PERFOR_COLUMN_12 and TESTCASE_IDNU, and in the right order too, but it was ignored by the optimizer while preparing the estimations and costing.
-- I1DIX_TABLE_A Index DDL (GOOD Index)
CREATE INDEX "DIXIT"."I1DIX_TABLE_A" ON "DIXIT"."DIX_TABLE_A" ("PERFOR_COLUMN_12", "TESTCASE_IDNU");
Now, this is strange: why didn't the CBO consider that index, even when the required column set is present in it and in the right order, and instead opted for a more expensive one? Here are the other stats about the two indexes.
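(The comparison was originally captured as a screenshot; it can be reproduced with a dictionary query along these lines:)
-- compare the statistics of the two candidate indexes
select index_name, uniqueness, blevel, leaf_blocks, num_rows,
       clustering_factor, status, last_analyzed
from dba_indexes
where table_name = 'DIX_TABLE_A'
and index_name in ('I1DIX_TABLE_A', 'I2DIX_TABLE_A');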
All the statistics look good: the row counts are almost the same, as are the ITL initial and max values, the status, and the stats collection date. The differences are in the branch level (BLEVEL) and in LEAF_BLOCKS. Talking about the BLEVEL: it is the number of branch levels in the B-tree index, which relates to how many times Oracle has to narrow its search on the index while looking for a particular record. LEAF_BLOCKS, on the other hand, is the number of leaf blocks in the index.
As we know, the BLEVEL and the clustering factor are important elements of an index scan cost; the index that has lower values for these is more likely to be chosen, and I2DIX_TABLE_A had lower values than I1DIX_TABLE_A. As for the clustering factor, it is hard to decrease because that requires rebuilding the table.
So let's focus on decreasing the BLEVEL; I will try rebuilding I1DIX_TABLE_A, as that might help bring its BLEVEL down.
SQL> alter index dixit.I1DIX_TABLE_A rebuild;
Index altered.
-- Index stats after rebuilding
INDEX_NAME UNIQUENES BLEVEL DEGREE TABLE_NAME NUM_ROWS LEAF_BLOCKS INI_TRANS MAX_TRANS STATUS LAST_ANAL
-------------------- --------- ---------- -------- --------------- ----------- ------------ ---------- --------- ------ ------------
I1DIX_TABLE_A NONUNIQUE 3 1 DIX_TABLE_A 33457571 446703 2 255 VALID 01-DEC-21
I2DIX_TABLE_A UNIQUE 3 1 DIX_TABLE_A 35084294 494579 2 255 VALID 01-DEC-21
And yes, both the BLEVEL and the LEAF_BLOCKS of I1DIX_TABLE_A came down, and I immediately started hearing good news from the application team about the query's runtime. The expensive index I2DIX_TABLE_A was finally replaced by the good I1DIX_TABLE_A, and that's how the INDEX SKIP SCAN was replaced with a fast INDEX RANGE SCAN.
You might be thinking that the other difference is the UNIQUENESS, since a unique scan costs less than a range scan. But in this case a unique scan could not be used for either index; a unique index is not always used for a 'unique scan', it depends on the predicates. The bad index was chosen not because it is a unique index, but because its scan cost was computed to be lower.
Sometimes, when you have to firefight a performance problem, every second counts; you don't want to fire up any fancy tools to see what's going on in the database at the moment, you want something handy and quick, and nothing is as friendly as querying the dynamic views. Especially the darling views V$SQLAREA and V$SESSION, where the first gives you all sorts of SQL-level information and the latter provides session information.
Recently, while working on a short assignment where I had to settle and stabilize the performance of a network inventory application, I quickly penned a script, which could be called a 'poor man's database monitoring script' 🙂, to give me a quick picture of what comes, stays, and waits in the database at any given point in time.
The script is pretty basic but very handy and quick, and gives you all sorts of details. I've played with LAST_CALL_ET from V$SESSION to derive the overall time (RUNNING_SINCE) the SQL has been active in the database; the rest you can add or alter as per your needs. So this is what it is ...
--------------------------------------------------------------------------------------------------------------------
-- File name: prashantpoormanscript.sql
-- Version: V1.1 (12-08-2021) Simple View
-- Purpose: This script can be used on any Oracle DB to know what all running and for how long and waiting
-- Also provides details on SQL and SESSION level.
-- Author: Prashant Dixit The Fatdba www.fatdba.com
--------------------------------------------------------------------------------------------------------------------
set linesize 400 pagesize 400
select
x.inst_id
,x.sid
,x.serial#
,x.username
,x.sql_id
,plan_hash_value
,sqlarea.DISK_READS
,sqlarea.BUFFER_GETS
,sqlarea.ROWS_PROCESSED
,x.event
,x.osuser
,x.status
,x.BLOCKING_SESSION_STATUS
,x.BLOCKING_INSTANCE
,x.BLOCKING_SESSION
,x.process
,x.machine
,x.program
,x.module
,x.action
,TO_CHAR(x.LOGON_TIME, 'MM-DD-YYYY HH24:MI:SS') logontime
,x.LAST_CALL_ET
,x.SECONDS_IN_WAIT
,x.state
,sql_text,
ltrim(to_char(floor(x.LAST_CALL_ET/3600), '09')) || ':'
|| ltrim(to_char(floor(mod(x.LAST_CALL_ET, 3600)/60), '09')) || ':'
|| ltrim(to_char(mod(x.LAST_CALL_ET, 60), '09')) RUNNING_SINCE
from gv$sqlarea sqlarea
,gv$session x
where x.sql_hash_value = sqlarea.hash_value
and x.sql_address = sqlarea.address
and sql_text not like '%select x.inst_id,x.sid ,x.serial# ,x.username ,x.sql_id ,plan_hash_value ,sqlarea.DISK_READS%'
and x.status='ACTIVE'
and x.USERNAME is not null
order by RUNNING_SINCE desc;
This is how the output looks. I have highlighted the RUNNING_SINCE column in red, just to show that the SQL had been in the database, running, for 3 minutes and 8 seconds, still active and waiting on db file sequential reads, and was called through SQL Developer. Along with that it gets you the SQL_ID and PHV for all active SQLs, plus other session- and SQL-level details that will help you form the right approach for any of those slow, sluggish SQLs.
There are hundreds of use cases for this one, especially when the application or product team doesn't know which SQLs get triggered in the database whenever one of their APIs is touched. That's when such handy SQL scripts help you check what actually runs on the database. You can also write a simple shell script and call it via OS utilities like watch to see it live, showing you the database workload. A simple shell script could look like this ...
--------------------------------------------------------------------------------------------------------------------
-- File name: prashantpoormanscript.sh
-- Version: V1.1 (12-08-2021) Shell script View
-- Purpose: This script can be used on any Oracle DB to know what all running and for how long and waiting
-- Also provides details on SQL and SESSION level.
-- Author: Prashant Dixit The Fatdba www.fatdba.com
--------------------------------------------------------------------------------------------------------------------
#!/bin/ksh
sqlplus /nolog << EOF
CONNECT username/Password@connection_string
set linesize 400
set pagesize 400
col ACTION for a22
col USERNAME for a9
col SQL_ID for a16
col EVENT for a20
col OSUSER for a10
col PROCESS for a8
col MACHINE for a15
col OSUSER for a8
col PROGRAM for a15
col module for a20
select x.inst_id,x.sid
,x.serial#
,x.username
,x.sql_id
,plan_hash_value as PHV
,sqlarea.DISK_READS
,sqlarea.BUFFER_GETS
,sqlarea.ROWS_PROCESSED
,x.event
,x.osuser
,x.status
,x.BLOCKING_SESSION_STATUS
,x.BLOCKING_INSTANCE
,x.BLOCKING_SESSION
,x.process
,x.machine
,x.program
,x.module
,x.action
,TO_CHAR(x.LOGON_TIME, 'MM-DD-YYYY HH24:MI:SS') logontime
,x.LAST_CALL_ET
,x.SECONDS_IN_WAIT
,x.state
,sql_text,
ltrim(to_char(floor(x.LAST_CALL_ET/3600), '09')) || ':'
|| ltrim(to_char(floor(mod(x.LAST_CALL_ET, 3600)/60), '09')) || ':'
|| ltrim(to_char(mod(x.LAST_CALL_ET, 60), '09')) RUNNING_SINCE
from gv\$sqlarea sqlarea
,gv\$session x
where x.sql_hash_value = sqlarea.hash_value
and x.sql_address = sqlarea.address
and sql_text not like '%select x.inst_id,x.sid ,x.serial# ,x.username ,x.sql_id ,plan_hash_value%'
and x.status='ACTIVE'
and x.USERNAME is not null
order by RUNNING_SINCE desc;
SPOOL OFF
EXIT;
EOF
Once you put all that in a shell script, call it via the watch utility and you're set. For example, to call the shell script every 2 seconds and refresh the output:
[Fatdba@Ontacan-test7-dbmonkey mytools]$ watch -n 2 sh prashantpoormanscript.sh
But if you want a fancier representation of the output, here you go ... I did some formatting to make it easier to read, though I still love the core one, unformatted and simple.
--------------------------------------------------------------------------------------------------------------------
-- File name: prashantpoormanscript1.sql
-- Version: V1.1 (12-08-2021) Fancy Version
-- Purpose: This script can be used on any Oracle DB to know what all running and for how long and waiting
-- Also provides details on SQL and SESSION level.
-- Author: Prashant Dixit The Fatdba www.fatdba.com
--------------------------------------------------------------------------------------------------------------------
set linesize 400
set pagesize 400
col ACTION for a22
col USERNAME for a9
col SQL_ID for a16
col EVENT for a20
col OSUSER for a10
col PROCESS for a8
col MACHINE for a15
col OSUSER for a8
col PROGRAM for a15
col module for a20
col BLOCKING_INSTANCE for a20
select
'InstID .............................................: '||x.inst_id,
'SID ................................................: '||x.sid,
'Serial .............................................: '||x.serial#,
'Username ...........................................: '||x.username,
'SQLID ..............................................: '||x.sql_id,
'PHV ................................................: '||plan_hash_value,
'DISK_READS .........................................: '||sqlarea.DISK_READS,
'BUFFER_GETS ........................................: '||sqlarea.BUFFER_GETS,
'ROWS_PROCESSED ..... ...............................: '||sqlarea.ROWS_PROCESSED,
'Event .............................................: '||x.event,
'OSUser .............................................: '||x.osuser,
'Status .............................................: '||x.status,
'BLOCKING_SESSION_STATUS ............................: '||x.BLOCKING_SESSION_STATUS,
'BLOCKING_INSTANCE ..................................: '||x.BLOCKING_INSTANCE,
'BLOCKING_SESSION ...................................: '||x.BLOCKING_SESSION,
'PROCESS ............................................: '||x.process,
'MACHINE ............................................: '||x.machine,
'PROGRAM ............................................: '||x.program,
'MODULE .............................................: '||x.module,
'ACTION .............................................: '||x.action,
'LOGONTIME ..........................................: '||TO_CHAR(x.LOGON_TIME, 'MM-DD-YYYY HH24:MI:SS') logontime,
'LAST_CALL_ET .......................................: '||x.LAST_CALL_ET,
'SECONDS_IN_WAIT ....................................: '||x.SECONDS_IN_WAIT,
'STATE ..............................................: '||x.state,
'RUNNING_SINCE ......................................: '||ltrim(to_char(floor(x.LAST_CALL_ET/3600), '09')) || ':' || ltrim(to_char(floor(mod(x.LAST_CALL_ET, 3600)/60), '09')) || ':' || ltrim(to_char(mod(x.LAST_CALL_ET, 60), '09')) RUNNING_SINCE,
'SQLTEXT ............................................: '||sql_text
from gv$sqlarea sqlarea
,gv$session x
where x.sql_hash_value = sqlarea.hash_value
and x.sql_address = sqlarea.address
and sql_text not like '%select x.inst_id,x.sid ,x.serial# ,x.username ,x.sql_id ,plan_hash_value%'
and x.status='ACTIVE'
and sql_text not like '%select :"SYS_B_00"||x.inst_id, :"SYS_B_01"||x.sid, :"SYS_B_02"||x.serial#,%'
and x.USERNAME is not null
order by RUNNING_SINCE desc;
And the output will look something like this, neat and clean ..
And finally, the much-loved 'SQL Tuning Advisor' is now available in the Oracle Cloud DB Service. This useful built-in tool provides suggestions and recommendations about specific SQL statements, and it can now be used with OCI.
Recently I was contacted to take a look at an application problem where a few ORA errors were caught in the logs: “ORA-24777: use of non-migratable database link not allowed”, raised for one of their important DBLinks. They were using WebLogic as the application server and 19c as the Oracle DB version! Talking about the issue, this usually happens when a session on a shared (XA) connection uses a DBLink that opens a dedicated connection. There are a few ways to fix it, e.g. configuring dispatchers, or changing the SERVER parameter in tnsnames.ora to SHARED, but I would like to discuss two other methods that I tried to fix the problem.
First Solution:
Open your Weblogic Server Admin Console and click on Services -> Data Sources -> {Choose the datasource you are using in your DB adapter} -> Connection Pool
Change Driver ClassName from oracle.jdbc.xa.client.OracleXADataSource to oracle.jdbc.OracleDriver
Second Solution: Sometimes you don’t want a system-wide change in WebLogic via the driver class name; instead you can recreate your DBLink as a SHARED DBLink. The purpose of shared database links is to cut down the number of connections to the remote database by letting multiple user sessions share the same connection between the source and the remote.
-- This was the previous DDL of the DBLink
CREATE DATABASE LINK test_dblink
CONNECT TO testuser IDENTIFIED BY mypassword
USING '(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)(HOST=monkeyserver.ontadomain.com)(PORT=1524))
(CONNECT_DATA=(SERVICE_NAME=TESTDB1))
)';
-- This is what I recreated it as, to make it a SHARED DBLink, and it worked for me.
-- Note: a SHARED link requires the AUTHENTICATED BY clause; the credentials below mirror CONNECT TO for illustration.
CREATE SHARED DATABASE LINK test_dblink
CONNECT TO testuser IDENTIFIED BY mypassword
AUTHENTICATED BY testuser IDENTIFIED BY mypassword
USING '(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)(HOST=monkeyserver.ontadomain.com)(PORT=1524))
(CONNECT_DATA=(SERVICE_NAME=TESTDB1))
)';
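Once the link is recreated, a harmless smoke test over it from the application’s session path should no longer raise ORA-24777:
-- Quick sanity check over the recreated SHARED link
SELECT sysdate FROM dual@test_dblink;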
The entire last week I was looking into a system-wide performance issue: the customer reported slowness in their database since they’d moved to a new infrastructure after a data migration. Though this wasn’t the production system, it was almost equally important, and the issue was impacting their acceptance tests. I was told that a few materialized views were in place and that their refresh times had extended by a great margin since the move to the new system.
As soon as I got access to their system, I did a quick check on the database and found a huge list of scheduled MV refresh jobs, almost all of them running with a huge delay. The performance was not at all consistent: sometimes they got refreshed within a few minutes, while another run would go on for a few hours. This initially sounded a little bizarre to me, until I did a comparative study between the previous (fast) and the new (slow) databases. What I observed is that there is more than 500% extra data in the business tables of the new (slow) database, and with such huge volumes, all those complex MV refresh queries ultimately slowed down on the new system.
For example, one of the critical MVs was taking ~200 mins to refresh and is created with COMPLETE REFRESH mode.
create MATERIALIZED VIEW MV_MYTEST_BIGREFRESH
BUILD DEFERRED
REFRESH ON DEMAND
COMPLETE
AS
select abc,xyz,cast(value as ajaaj) as test ....
............
.................
SELECT *
FROM ( SELECT OWNER,
MVIEW_NAME,
CONTAINER_NAME,
REFRESH_MODE,
LAST_REFRESH_TYPE,
STALENESS,
round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS
FROM ALL_MVIEWS
WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
)
ORDER BY REFRESH_TIME_MINS DESC;
OWNER    MVIEW_NAME             CONTAINER_NAME         REFRESH_MODE  LAST_REFRESH_TYPE  STALENESS  REFRESH_TIME_MINS
-------  ---------------------  ---------------------  ------------  -----------------  ---------  -----------------
PRASH    MV_MYTEST_BIGREFRESH   MV_MYTEST_BIGREFRESH   DEMAND        COMPLETE           COMPLETE              196.75
Though I did multiple optimizations to stabilize performance, I would like to talk about a few of the refresh tuning techniques that I applied and that finally worked for me.
Atomic_Refresh with value FALSE. What caught my eye were all those long-running DELETE statements underneath those costly MV refresh jobs. This was the very first thing I tried on the complex, long-running COMPLETE refreshes: the optional parameter atomic_refresh of the procedure dbms_mview.refresh is very useful here. If the parameter is set to FALSE, the materialized view is emptied with a much faster TRUNCATE command instead of row-by-row DELETEs. I immediately saw the gain, as the runtime dropped by ~60%. At the same time I observed a big easing in redo generation, since the DELETE is replaced with TRUNCATE, and hence a sharp drop in 'checkpoint incomplete' and 'log file sync' waits.
The parameter atomic_refresh => FALSE works only with a COMPLETE refresh, so the TRUNCATE path is only possible with method 'C'. There is one drawback to this method though: no data is visible to the users during the refresh, so choose wisely. If this is feasible in your environment, you can use the following command for a complete refresh:
BEGIN
  dbms_mview.refresh('MV_MYTEST_BIGREFRESH_JOB', method => 'C', atomic_refresh => FALSE);
END;
/
-- See below: REFRESH_TIME_MINS dropped by ~60% after setting atomic_refresh => FALSE
SELECT *
FROM ( SELECT OWNER,
MVIEW_NAME,
CONTAINER_NAME,
REFRESH_MODE,
LAST_REFRESH_TYPE,
STALENESS,
round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS
FROM ALL_MVIEWS
WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
)
ORDER BY REFRESH_TIME_MINS DESC;
OWNER    MVIEW_NAME             CONTAINER_NAME         REFRESH_MODE  LAST_REFRESH_TYPE  STALENESS  REFRESH_TIME_MINS
-------  ---------------------  ---------------------  ------------  -----------------  ---------  -----------------
PRASH    MV_MYTEST_BIGREFRESH   MV_MYTEST_BIGREFRESH   DEMAND        COMPLETE           COMPLETE               79.12
Adding PX (Parallelism) to the base tables or to the refresh command itself. Though with atomic_refresh => FALSE I was able to cut a great percentage of the overall elapsed time, the customer asked me for maybe some 30% more. I decided to try PX on the base tables and on the refresh statement itself; among other parameters, the refresh procedure accepts a PARALLELISM parameter. At the outset, it appears that the PARALLELISM parameter should invoke a parallel refresh of the materialized view. However, it does not; it simply changes the requested degree of parallelism if the refresh already executes in parallel. You can invoke the procedure as:
BEGIN
  dbms_mview.refresh('MV_MYTEST_BIGREFRESH_JOB', method => 'C', atomic_refresh => FALSE, parallelism => 4);
END;
/
-- See below: REFRESH_TIME_MINS dropped by a further ~30% after adding PARALLELISM against the master/base MV tables
SELECT *
FROM ( SELECT OWNER,
MVIEW_NAME,
CONTAINER_NAME,
REFRESH_MODE,
LAST_REFRESH_TYPE,
STALENESS,
round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS
FROM ALL_MVIEWS
WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
)
ORDER BY REFRESH_TIME_MINS DESC;
OWNER    MVIEW_NAME             CONTAINER_NAME         REFRESH_MODE  LAST_REFRESH_TYPE  STALENESS  REFRESH_TIME_MINS
-------  ---------------------  ---------------------  ------------  -----------------  ---------  -----------------
PRASH    MV_MYTEST_BIGREFRESH   MV_MYTEST_BIGREFRESH   DEMAND        COMPLETE           COMPLETE               55.08
You can also add PX to the base tables. When the master table of the materialized view has its PARALLEL attribute set to > 1, the creation as well as the refresh processes will be parallelized; whether or not you specify the PARALLELISM parameter in the REFRESH clause doesn’t matter.
ALTER TABLE DIXIT_TEST_TABLE PARALLEL (DEGREE 4);
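To confirm the attribute took effect, the dictionary records the degree (table name as in the example above):
-- DEGREE should now report 4 for the master table
SELECT table_name, degree FROM user_tables WHERE table_name = 'DIXIT_TEST_TABLE';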
You can also add a PX hint to the MV definition! When the materialized view is created with a PARALLEL hint, the creation as well as the refresh processes will be parallelized, again regardless of the PARALLELISM parameter in the REFRESH clause. This is how you can add a parallel hint in the materialized view definition:
CREATE MATERIALIZED VIEW TEST_MV
AS
SELECT /*+ PARALLEL(DIXIT_TEST_TABLE, 4) */ MONTHLY_ID, SALARY_DATE, SUM(ANNUAL_COUNTS) AS TOTAL_ANNUAL_COUNTS
FROM DIXIT_TEST_TABLE GROUP BY MONTHLY_ID, SALARY_DATE;
Stats Collection: It is vital that all of the tables and indexes your MV refresh references have fresh, well-collected statistics. Look out for opportunities to add column-level stats, i.e. histograms, wherever there is scope, especially on low-cardinality columns. You can also leave that decision to the optimizer by using METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'.
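A minimal sketch of such a stats collection, reusing the schema and table names from the earlier examples as illustrations; cascade => TRUE gathers the dependent index stats in the same pass:
BEGIN
  -- 'FOR ALL COLUMNS SIZE AUTO' lets the optimizer decide where histograms help
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'PRASH',            -- illustrative schema
    tabname    => 'DIXIT_TEST_TABLE', -- illustrative table
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',
    cascade    => TRUE,               -- include dependent indexes
    degree     => 4);
END;
/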
Other techniques:
Try the FAST/Incremental refresh mode. With FAST refresh, only the changes since the last refresh are applied to the materialized view. Personally I have found the FAST refresh slower than the COMPLETE refresh mode, maybe because I used the atomic_refresh parameter that replaced those costly DELETEs with a fast TRUNCATE ... not sure why they called it FAST 🙂 To be sure that a materialized view can be fast refreshed, we can use the explain_mview procedure and check the capability named “REFRESH_FAST”. The MV_CAPABILITIES_TABLE is created in the owner’s schema by running the following script ..
SQL> @$ORACLE_HOME/rdbms/admin/utlxmv.sql
SQL> exec dbms_mview.explain_mview('MV_MYTEST_BIGREFRESH_JOB');
PL/SQL procedure successfully completed.
SQL> select capability_name,possible,related_text,msgtxt from mv_capabilities_table;
CAPABILITY_NAME POSSIBLE RELATED_TEXT MSGTXT
------------------------------ -------------------- ----------------------- ----------------------------------------------------------------------------
PCT N
REFRESH_COMPLETE Y
REFRESH_FAST Y <<<<<<<<---------------
REWRITE Y
PCT_TABLE N ID_ID_CANADA_HISTORY_T1 relation is not a partitioned table
PCT_TABLE N ID_ID_CANADA_HISTORY_TV relation is not a partitioned table
PCT_TABLE N DWH_SITE relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT Y
REFRESH_FAST_AFTER_ONETAB_DML Y
REFRESH_FAST_AFTER_ANY_DML Y
REFRESH_FAST_PCT N PCT is not possible on any of the detail tables in the materialized view
REWRITE_FULL_TEXT_MATCH Y
REWRITE_PARTIAL_TEXT_MATCH Y
REWRITE_GENERAL Y
REWRITE_PCT N general rewrite is not possible or PCT is not possible on any of the detail tables
PCT_TABLE_REWRITE N ID_ID_CANADA_HISTORY_T1 relation is not a partitioned table
PCT_TABLE_REWRITE N ID_ID_CANADA_HISTORY_TV relation is not a partitioned table
PCT_TABLE_REWRITE N DWH_SITE relation is not a partitioned table
18 rows selected.
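Since REFRESH_FAST shows as possible above, all a fast refresh then needs is a materialized view log on each master table. A minimal sketch with an illustrative master table; note that aggregate MVs typically need richer log options (e.g. WITH ROWID, SEQUENCE ... INCLUDING NEW VALUES):
-- Create a materialized view log on the master table (name is illustrative)
CREATE MATERIALIZED VIEW LOG ON DIXIT_TEST_TABLE WITH ROWID;
-- Then request an incremental refresh with method 'F'
BEGIN
  dbms_mview.refresh('MV_MYTEST_BIGREFRESH_JOB', method => 'F');
END;
/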
Out-of-place refresh is a new option of DBMS_MVIEW.REFRESH in Oracle 12c. The complete refresh process builds a new table which seamlessly becomes the materialized view, and the old table is dropped. It only works in conjunction with a non-atomic refresh.
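A minimal sketch of requesting it; out_of_place is a documented parameter of dbms_mview.refresh from 12c onwards and, as said, needs atomic_refresh => FALSE:
BEGIN
  -- Build the result out of place, then swap it in; the old container table is dropped
  dbms_mview.refresh('MV_MYTEST_BIGREFRESH_JOB', method => 'C',
                     atomic_refresh => FALSE, out_of_place => TRUE);
END;
/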
DBMS_ADVISOR contains a procedure called TUNE_MVIEW that is used when working with the query rewrite mechanism, alongside DBMS_MVIEW.EXPLAIN_MVIEW and DBMS_MVIEW.EXPLAIN_REWRITE, which can be used to diagnose why a materialized view isn’t fast refreshable or isn’t being used for query rewrite. The DBMS_ADVISOR.TUNE_MVIEW procedure can add additional aggregate columns and materialized view logs to the view definition so that it becomes fast refreshable.
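A minimal sketch of calling it; the task name and the CREATE statement passed in are illustrative, and the advisor’s rewritten, fast-refreshable DDL can be read back from USER_TUNE_MVIEW:
DECLARE
  l_task VARCHAR2(30) := 'tune_mv_demo';  -- illustrative task name
BEGIN
  DBMS_ADVISOR.TUNE_MVIEW(l_task,
    'CREATE MATERIALIZED VIEW TEST_MV REFRESH FAST AS
     SELECT MONTHLY_ID, SUM(ANNUAL_COUNTS) AS TOTAL_COUNTS
     FROM DIXIT_TEST_TABLE GROUP BY MONTHLY_ID');
END;
/
-- Review the advisor's recommended statements
SELECT statement FROM user_tune_mview WHERE task_name = 'tune_mv_demo' ORDER BY action_id;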
Well, after I’d applied the above methods, I was able to reduce the MV refresh times by more than 92% ... 🙂