Recently one of my friends who has started learning OCI told me he was not able to locate an Oracle 21c Autonomous Database in the list, and could only see 19c. He was using 'Always Free Autonomous Databases' to test a few things on Oracle Database version 21c.
I was sure it was happening due to the region he had selected: you can provision free Autonomous Databases only in your home region, and not all regions support the same database versions. The supported version may be 19c-only or 21c-only, depending on the region you have selected.
When I checked, I found he was trying from ap-tokyo-1 (Tokyo region – key NRT), where only version 19c is available for 'Always Free Autonomous Databases'. So, whether you see 19c or 21c always depends on the region where you are trying to create your Always Free Autonomous Database.
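If you want to verify which Autonomous Database versions a region offers before provisioning, the OCI CLI can list them. This is a quick sketch of my own, not from the original incident: it assumes the CLI is configured, that you substitute your own compartment OCID, and that the autonomous-db-version subcommand is available in your CLI version (double-check against the CLI reference).

oci db autonomous-db-version list --compartment-id <your-compartment-ocid> --region ap-tokyo-1 --query 'data[].version'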
Today's post is a quick one about an old package (I guess it has been there since the 8i days) that is a lesser known and underutilized feature at the same time. Recently I was working on an OEM task where I had to edit the metric collection regular expressions to match the requirements. The next question asked was – 'Let's wait till that specific incident happens in the database to see if it triggers the alert or not …' Really? Do we have to wait that long?
No, in order to test that change, you don't have to adopt any crude method like editing the alert log file manually and writing the error message there, or using UTL_FILE. There is a built-in package called dbms_system that you can use to handle such cases and write your custom messages to the alert log file. There are a few other routines available in the package, but this post is about a special subprogram called 'ksdwrt'.
dbms_system.ksdwrt(dest IN BINARY_INTEGER, tst IN VARCHAR2);
Here is the syntax: execute sys.dbms_system.ksdwrt(<dest>, to_char(sysdate) || ' — <your message>'); where the <dest> argument can be: 1 to write to the trace file, 2 to write to the alert log file, or 3 to write to both the trace and alert log files.
Example: exec dbms_system.ksdwrt(3,'ORA-04031: This is a test error message, please ignore');
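To confirm the message actually landed, you can tail the alert log, or query it from inside the database. A quick check of my own (not part of the original test), using the ADR view v$diag_alert_ext:

SQL> select originating_timestamp, message_text
       from v$diag_alert_ext
      where message_text like '%This is a test error message%';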
The alert log has always been a very important log file, containing key information about error messages and exceptions that occur during database operations. It is crucial for analysis and for troubleshooting any critical event. Over the last few years, with all the new database releases, it has slowly become messy and loud, with a whole lot of new content recorded for both regular and critical database events. Finally, with the inception of Oracle 21c we have the 'Attention Log', which segregates the critical and vital events that would otherwise get mixed up with the regular entries of the alert log file.
Each database has its own attention log, a regular JSON-format file that is very easy to parse. A few of its important dimensions or properties are:
URGENCY : Class with possible values such as INFO, IMMEDIATE etc.
CAUSE : A quick detail about the possible cause or reason.
NOTIFICATION : A regular message in case of any event, e.g. "PMON (ospid: 1901): terminating the instance due to ORA error 12752".
ACTION : What you can possibly do.
TIME : A timestamp of the event.
Let's see how it looks!
[oracle@witnessalberta ~]$ !sql
sqlplus / as sysdba
SQL*Plus: Release 21.0.0.0.0 - Production on Fri Apr 8 22:38:25 2022
Version 21.3.0.0.0
Copyright (c) 1982, 2021, Oracle. All rights reserved.
Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0
SQL> set linesize 400 pagesize 400
SQL> col NAME for a30
SQL> col value for a70
SQL>
SQL> select name, value from v$diag_info where value like '%attention%';
NAME VALUE
------------------------------ ----------------------------------------------------------------------
Attention Log /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/attention_ORCLCDB.log
[oracle@witnessalberta trace]$ pwd
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace
[oracle@witnessalberta trace]$
[oracle@witnessalberta trace]$
[oracle@witnessalberta trace]$ ls -ltrh *.log*
-rw-r-----. 1 oracle oinstall 6.0K Apr 8 22:32 attention_ORCLCDB.log
-rw-r-----. 1 oracle oinstall 244K Apr 8 22:34 alert_ORCLCDB.log
[oracle@witnessalberta trace]$
{
"NOTIFICATION" : "Starting ORACLE instance (normal) (OS id: 3309)",
"URGENCY" : "INFO",
"INFO" : "Additional Information Not Available",
"CAUSE" : "A command to startup the instance was executed",
"ACTION" : "Check alert log for progress and completion of command",
"CLASS" : "CDB Instance / CDB ADMINISTRATOR / AL-1000",
"TIME" : "2022-04-08T22:32:47.914-04:00"
}
....
.....
.........
{
"NOTIFICATION" : "Shutting down ORACLE instance (immediate) (OS id: 9724)",
"URGENCY" : "INFO",
"INFO" : "Shutdown is initiated by sqlplus@localhost.ontadomain (TNS V1-V3). ",
"CAUSE" : "A command to shutdown the instance was executed",
"ACTION" : "Check alert log for progress and completion of command",
"CLASS" : "CDB Instance / CDB ADMINISTRATOR / AL-1001",
"TIME" : "2021-09-16T23:11:56.812-04:00"
}
.....
......
........
{
"ERROR" : "PMON (ospid: 1901): terminating the instance due to ORA error 12752",
"URGENCY" : "IMMEDIATE",
"INFO" : "Additional Information Not Available",
"CAUSE" : "The instance termination routine was called",
"ACTION" : "Check alert log for more information relating to instance termination rectify the error and restart the instance",
"CLASS" : "CDB Instance / CDB ADMINISTRATOR / AL-1003",
"TIME" : "2021-09-16T23:34:26.117-02:00"
}
...
.....
......
{
"ERROR" : "PMON (ospid: 3408): terminating the instance due to ORA error 474",
"URGENCY" : "IMMEDIATE",
"INFO" : "Additional Information Not Available",
"CAUSE" : "The instance termination routine was called",
"ACTION" : "Check alert log for more information relating to instance termination rectify the error and restart the instance",
"CLASS" : "CDB Instance / CDB ADMINISTRATOR / AL-1003",
"TIME" : "2022-04-08T23:38:11.258-04:00"
}
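Because the attention log is just a stream of JSON documents, it is easy to filter from the shell. A small sketch of my own (assuming jq is installed and the file contains only JSON objects) that pulls out just the IMMEDIATE entries:

[oracle@witnessalberta trace]$ jq 'select(.URGENCY == "IMMEDIATE")' attention_ORCLCDB.log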
Recently, during a performance taskforce on a newly migrated system, the customer DBA asked me to use one of their legacy tools, written by one of their expert DBAs to collect performance metrics. I'd seen previous reports collected through the same tool for other systems, and it was good. But calling the script/tool threw a runtime exception: 'PLAN_TABLE physical table present in user schema SYS'. The error means the user executing it (SYS) owns a physical table named PLAN_TABLE, which is not the Oracle-seeded GTT (Global Temporary Table) plan table owned by SYS (the PLAN_TABLE$ table with a PUBLIC synonym PLAN_TABLE).
This was a little odd to the customer DBAs as they had never experienced this error with the tool, and now that it was there, the questions were: Shall we drop the PLAN_TABLE? Is it risky to do that? If we drop it, will it impact execution plan generation or not? Is there any other risk associated with dropping PLAN_TABLE?
Next, when I queried DBA_OBJECTS, I saw the table was there in the SYS schema. This system was migrated from 12.2 to 19c, but the table should not be there, as it only existed by default in older versions of Oracle. The object creation date matched the time the database was upgraded. It appeared that someone had explicitly run utlxplan.sql after the upgrade/migration (maybe an old 8i/9i DBA) and that had created the table. Now the question is – is it safe to drop this table?
SQL> select owner, object_name, object_type, created from dba_objects where object_name like '%PLAN_TABLE%'
and owner not in ('SQLTXPLAIN','SQLTXADMIN') ORDER BY 1;
OWNER OBJECT_NAME OBJECT_TYPE CREATED
---------- -------------------- ----------------------- ---------
PUBLIC PLAN_TABLE SYNONYM 17-APR-19
PUBLIC SQL_PLAN_TABLE_TYPE SYNONYM 17-APR-19
PUBLIC PLAN_TABLE_OBJECT SYNONYM 17-APR-19
SYS SQL_PLAN_TABLE_TYPE TYPE 17-APR-19
SYS PLAN_TABLE TABLE 13-MAR-22 ----->>>>> OLD PLAN_TABLE created during the UPGRADE
SYS SQL_PLAN_TABLE_TYPE TYPE 17-APR-19
SYS PLAN_TABLE_OBJECT TYPE 17-APR-19
SYS PLAN_TABLE$ TABLE 17-APR-19
SYS PLAN_TABLE_OBJECT TYPE BODY 17-APR-19
9 rows selected.
-- Look at the difference between the two, PLAN_TABLE$ is a GLOBAL TEMP TABLE and old PLAN_TABLE is not.
SQL> SELECT TABLE_NAME, owner, temporary from dba_tables where table_name like '%PLAN_TABLE%'
AND owner not in ('SQLTXPLAIN','SQLTXADMIN') ORDER BY 1;
TABLE_NAME OWNER T
------------------------------ -------------------- -
PLAN_TABLE SYS N
PLAN_TABLE$ SYS Y ---> Y represents GTT
Let's first see what's inside the PLAN_TABLE and what its purpose is. We will generate a few SQL execution plans and observe the changes that happen in PLAN_TABLE.
-- Table columns and details
SQL> desc plan_table
Name Null? Type
----------------------------------------- -------- ----------------------------
STATEMENT_ID VARCHAR2(30)
PLAN_ID NUMBER
TIMESTAMP DATE
REMARKS VARCHAR2(4000)
OPERATION VARCHAR2(30)
OPTIONS VARCHAR2(255)
OBJECT_NODE VARCHAR2(128)
OBJECT_OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
OBJECT_ALIAS VARCHAR2(261)
OBJECT_INSTANCE NUMBER(38)
OBJECT_TYPE VARCHAR2(30)
OPTIMIZER VARCHAR2(255)
SEARCH_COLUMNS NUMBER
ID NUMBER(38)
PARENT_ID NUMBER(38)
DEPTH NUMBER(38)
POSITION NUMBER(38)
COST NUMBER(38)
CARDINALITY NUMBER(38)
BYTES NUMBER(38)
OTHER_TAG VARCHAR2(255)
PARTITION_START VARCHAR2(255)
PARTITION_STOP VARCHAR2(255)
PARTITION_ID NUMBER(38)
OTHER LONG
DISTRIBUTION VARCHAR2(30)
CPU_COST NUMBER(38)
IO_COST NUMBER(38)
TEMP_SPACE NUMBER(38)
ACCESS_PREDICATES VARCHAR2(4000)
FILTER_PREDICATES VARCHAR2(4000)
PROJECTION VARCHAR2(4000)
TIME NUMBER(38)
QBLOCK_NAME VARCHAR2(128)
OTHER_XML CLOB
-- Let me check other stats or details about the PLAN_TABLE
SQL> select index_name, table_name from dba_indexes where table_name='PLAN_TABLE'
And owner not in ('SQLTXPLAIN','SQLTXADMIN') ORDER BY 1;
INDEX_NAME TABLE_NAME
-------------------------------------------------- ------------------------------
SYS_IL0000078251C00036$$ PLAN_TABLE
SQL> select table_name, owner, TABLESPACE_NAME from dba_tables where table_name like '%PLAN_TABLE%'
and owner not in ('SQLTXPLAIN','SQLTXADMIN') ORDER BY 1;
TABLE_NAME OWNER TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PLAN_TABLE SYS SYSTEM
PLAN_TABLE$ SYS
SQL>
-- The OLD PLAN_TABLE is empty at the moment
SQL> select count(*) from plan_table;
COUNT(*)
----------
0
-- Let's explain a test SQL to see what happens to the OLD PLAN_TABLE
SQL> explain plan for select count(*) from bigtab;
Explained.
-- And immediately 3 rows, related to the plan line IDs, are added to it
SQL> select count(*) from plan_table;
COUNT(*)
----------
3
-- Three entries, one for each of the 3 plan line IDs below.
SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2140185107
---------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
---------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 69 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| BIGTAB | 72358 | 69 (0)| 00:00:01 |
---------------------------------------------------------------------
9 rows selected.
-- But the new PLAN_TABLE$ is still empty
SQL> select count(*) from PLAN_TABLE$ ;
COUNT(*)
----------
0
So, the question is – is it safe to drop this PLAN_TABLE?
SQL> drop table PLAN_TABLE;
Table dropped.
SQL>
-- And the table is gone
SQL> select owner, object_name, object_type, created from dba_objects where object_name like '%PLAN_TABLE%'
and owner not in ('SQLTXPLAIN','SQLTXADMIN') ORDER BY 1;
OWNER OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ -------------------- ----------------------- ---------
PUBLIC PLAN_TABLE SYNONYM 17-APR-19
PUBLIC SQL_PLAN_TABLE_TYPE SYNONYM 17-APR-19
PUBLIC PLAN_TABLE_OBJECT SYNONYM 17-APR-19
SYS PLAN_TABLE_OBJECT TYPE BODY 17-APR-19
SYS SQL_PLAN_TABLE_TYPE TYPE 17-APR-19
SYS PLAN_TABLE_OBJECT TYPE 17-APR-19
SYS PLAN_TABLE$ TABLE 17-APR-19
SYS SQL_PLAN_TABLE_TYPE TYPE 17-APR-19
8 rows selected.
Now that the table is gone, let's check if we are still able to generate the execution plan.
SQL>
SQL> explain plan for select count(*) from bigtab;
Explained.
SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2140185107
---------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
---------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 69 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| BIGTAB | 72358 | 69 (0)| 00:00:01 |
---------------------------------------------------------------------
9 rows selected.
SQL> select count(*) from plan_table$;
COUNT(*)
----------
3
And yes, no issues at all. The plan rows now go into PLAN_TABLE$, which has a PUBLIC synonym called PLAN_TABLE. So, it's totally safe to drop the PLAN_TABLE from your schema if it still exists; Oracle now provides a public synonym for the same purpose. WARNING: Don't drop PLAN_TABLE$ or the PLAN_TABLE public synonym; these need to exist for the new PLAN_TABLE to work properly.
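If you want to check a database for similar leftover physical plan tables before dropping anything, a query along these lines (my addition, not from the original case) works; only rows with TEMPORARY = 'N' are old physical copies, while the Oracle-seeded PLAN_TABLE$ shows up as a GTT:

SQL> select owner, table_name, tablespace_name
       from dba_tables
      where table_name = 'PLAN_TABLE'
        and temporary = 'N';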
Lately a question was asked – for how long will Oracle Database version 12.2.0.1 be supported? There are so many documents and blogs available, but at the same time they bring a lot of confusion. So I thought I would write a quick post about ES and regular support for Oracle DB 12cR2.
Bug fixing and full error correction already ended for Oracle 12.2.0.1 on Nov 20, 2020, and on top of that, 12.2.0.1 is not eligible for Extended Support (ES). At the moment 12.2.0.1 is running under Limited Error Correction, from Dec 1, 2020 through March 31, 2022. Limited Error Correction means Sev 1 fixes and Security Updates only.
At the moment 19c is the only version that gives you long-term support, with the full span of bug-fixing support until 30-Apr-2024 and the option of Extended Support until 30-Apr-2027. Premier Support (PS) ends April 30, 2024; Extended Support (ES) fees will be required beginning May 01, 2024 through April 30, 2027. Error correction/patching is available through April 30, 2027 with paid ES. Without paid ES, patching is only available until April 30, 2024.
The last few weeks I was busy doing some CI/CD integrations using Liquibase. This was the first time I had used Liquibase, and I immediately fell in love with this brilliant tool, which you can use for tracking, managing, and automating database schema changes. It is gaining popularity as a DevOps tool to automate your database deployments.
Today's post is about how to integrate Liquibase with Oracle databases. I did all of the demos on Oracle database version 19.3.0.0.0 on RHEL8, using Liquibase community version 4.6.2. You can download the latest version from https://www.liquibase.org/download and they also have certified courses available on their university website https://learn.liquibase.com/
Okay, let's quickly build the playground for the demos. I will first un-tar the file that I downloaded from their website.
Let's add the Liquibase directory to the PATH variable in .bash_profile (export PATH=$PATH:/root/liquibase, my Liquibase un-tar directory) so the executable can be called from anywhere. With that you're all set to use Liquibase; yes, you only need to unzip/un-tar the software and you are ready to go. Both installation and configuration are very easy and straightforward.
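For reference, the two shell steps involved are just (paths per my un-tar location):

[root@localhost ~]# echo 'export PATH=$PATH:/root/liquibase' >> ~/.bash_profile
[root@localhost ~]# source ~/.bash_profile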
In this demo I will pass all options or flags directly on the Liquibase CLI for better understanding, but I recommend creating a property file and putting all your configuration entries there, like the one shared below.
Here:
changeLogFile is the changelog file to use.
driver is the database driver class name; it's 'oracle.jdbc.OracleDriver' as I am doing this demo on an Oracle database.
classpath points to the JAR file containing the migration files and the JDBC driver.
url is the database JDBC URL (hostname:port/service).
username/password are the database credentials.
outputFile is used to send output to a file.
loglevel controls the amount of messages generated when running Liquibase commands; possible options are SEVERE/WARNING/INFO/FINE/OFF.
liquibase.hub.mode disables the Hub mode for Liquibase.
There are a whole lot of other parameters, all documented at https://docs.liquibase.com/
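A liquibase.properties file assembled from those entries would look something like this (a sketch of my own; the values are taken from the demo commands used later in this post):

changeLogFile: changelogfile.sql
driver: oracle.jdbc.OracleDriver
classpath: /root/liquibase/lib/ojdbc8-18.3.0.0.jar
url: jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb
username: liquibase
password: liquibase
outputFile: /root/liquibase/output_local.sql
logLevel: INFO
liquibase.hub.mode: off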
Next I am going to create a test schema where I will create objects later on, and will track changes using Liquibase.
[oracle@localhost ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Dec 30 02:42:18 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> create user dixdroid identified by dixdroid;
User created.
-- Granting SYSDBA to the test user for demo purposes only
SQL> grant connect, sysdba to dixdroid;
Grant succeeded.
SQL> conn dixdroid
Enter password:
Connected.
SQL>
-- Next, I will add some test data to this schema.
-- I will create a table, insert a few rows, and add an index, a function, and a sequence.
SQL> @testdata.sql
Table created.
1 row created.
1 row created.
Index created.
Table created.
Sequence created.
1 row created.
Function created.
Commit complete.
SQL>
SQL>
SQL> col object_name for a30
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
Now I will check if the Liquibase connection is successful; for that, you use the status command.
[root@localhost liquibase]# liquibase --username=liquibase --password=liquibase --changeLogFile=changelogfile.sql status
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 04:35:09 (version 4.6.2 #886 built at 2021-11-30 16:20+0000)
Liquibase Version: 4.6.2
Liquibase Community 4.6.2 by Liquibase
Output saved to /root/liquibase/output_local.sql
Liquibase command 'status' was executed successfully.
[root@localhost liquibase]# more /root/liquibase/output_local.sql
LIQUIBASE@jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb is up to date
[root@localhost liquibase]#
Next we need to generate the 'changelog'. Liquibase uses a changelog to sequentially list all changes made to your database. Think of it as an account book or a daybook: a file that contains a record of all your database changes (changesets). Liquibase uses this changelog to inspect your database and execute any changes that are not yet applied to it.
[root@localhost liquibase]# liquibase --driver=oracle.jdbc.OracleDriver --changeLogFile=changelogfile.sql --classpath=/root/liquibase/lib/ojdbc8-18.3.0.0.jar --url="jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb" --username=liquibase --password=liquibase --defaultSchemaName=dixit generateChangeLog
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 22:44:36 (version 4.6.2 #886 built at 2021-11-30 16:20+0000)
Liquibase Version: 4.6.2
Liquibase Community 4.6.2 by Liquibase
BEST PRACTICE: The changelog generated by diffChangeLog/generateChangeLog should be inspected for correctness and completeness before being deployed.
When generating formatted SQL changelogs, it is important to decide if batched statements should be split or not. For stored logic objects, the default behavior is 'splitStatements:false'. All other objects default to 'splitStatements:true'. See https://docs.liquibase.org for additional information.
Generated changelog written to /root/liquibase/changelogfile.sql
Output saved to /root/liquibase/output_local.sql
Liquibase command 'generateChangelog' was executed successfully.
[root@localhost liquibase]#
Next we will run the 'updateSQL' command, a helper command that lets you inspect the raw SQL Liquibase will run when you use the update command, so you can correct any issues before actually applying the changes.
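The invocation mirrors the generateChangeLog call above; a sketch using the same flags (the output file further below shows it ran as the DIXDROID user):

[root@localhost liquibase]# liquibase --driver=oracle.jdbc.OracleDriver --changeLogFile=changelogfile.sql --classpath=/root/liquibase/lib/ojdbc8-18.3.0.0.jar --url="jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb" --username=dixdroid --password=dixdroid updateSQL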
Let's examine the output_local.sql file generated by that updateSQL run.
[root@localhost liquibase]#
[root@localhost liquibase]# more /root/liquibase/output_local.sql
-- *********************************************************************
-- Update Database Script
-- *********************************************************************
-- Change Log: changelog.sql
-- Ran at: 12/29/21 10:46 PM
-- Against: DIXDROID@jdbc:oracle:thin:@localhost.ontadomain:1521/dixitdb
-- Liquibase version: 4.6.2
-- *********************************************************************
-- Create Database Lock Table
CREATE TABLE DIXDROID.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DAT
ABASECHANGELOGLOCK PRIMARY KEY (ID));
-- Initialize Database Lock Table
DELETE FROM DIXDROID.DATABASECHANGELOGLOCK;
INSERT INTO DIXDROID.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0);
.....
-- Lock Database
UPDATE DIXDROID.DATABASECHANGELOGLOCK SET LOCKED = 1, LOCKEDBY = 'localhost.ontadomain (192.168.154.142)', LOCKGRANTED = TO_TIMESTAMP('2021-12-29 22:46:27.69
5', 'YYYY-MM-DD HH24:MI:SS.FF') WHERE ID = 1 AND LOCKED = 0;
-- Create Database Change Log Table
CREATE TABLE DIXDROID.DATABASECHANGELOG (ID VARCHAR2(255) NOT NULL, AUTHOR VARCHAR2(255) NOT NULL, FILENAME VARCHAR2(255) NOT NULL, DATEEXECUTED TIMESTAMP NO
T NULL, ORDEREXECUTED INTEGER NOT NULL, EXECTYPE VARCHAR2(10) NOT NULL, MD5SUM VARCHAR2(35), DESCRIPTION VARCHAR2(255), COMMENTS VARCHAR2(255), TAG VARCHAR2(
255), LIQUIBASE VARCHAR2(20), CONTEXTS VARCHAR2(255), LABELS VARCHAR2(255), DEPLOYMENT_ID VARCHAR2(10));
.....
-- Changeset changelog.sql::1640835878960-1::root
CREATE TABLE DIXIT1 (ID NUMBER NOT NULL, DESCRIPTION VARCHAR(50), CONSTRAINT DIXIT1_PK PRIMARY KEY (ID));
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBA
SE, DEPLOYMENT_ID) VALUES ('1640835878960-1', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 1, '8:17e41ee520cc38d8600cb88325a89679', 'sql', '', 'EXECUTED', NULL,
NULL, '4.6.2', '0835988346');
-- Changeset changelog.h2.sql::1640835878960-2::root
CREATE SEQUENCE DIXIT1_SEQ START WITH 21 MAXVALUE 9999999999999999999999999999;
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBA
SE, DEPLOYMENT_ID) VALUES ('1640835878960-2', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 2, '8:8b670db06f2e0ad02cf1deeec1f9b79b', 'sql', '', 'EXECUTED', NULL,
NULL, '4.6.2', '0835988346');
-- Changeset changelog.sql::1640835878960-3::root
CREATE TABLE TEST (ID NUMBER(10, 0), NAME VARCHAR(30));
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBA
SE, DEPLOYMENT_ID) VALUES ('1640835878960-3', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 3, '8:e56593db5135656a87586790c3d5b671', 'sql', '', 'EXECUTED', NULL,
NULL, '4.6.2', '0835988346');
-- Changeset changelog.sql::1640835878960-4::root
CREATE INDEX IDX_TEST1 ON TEST(ID, NAME);
INSERT INTO DIXDROID.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBA
SE, DEPLOYMENT_ID) VALUES ('1640835878960-4', 'root', 'changelog.h2.sql', SYSTIMESTAMP, 4, '8:178519da11977278e2192549595aed7b', 'sql', '', 'EXECUTED', NULL,
NULL, '4.6.2', '0835988346');
-- Release Database Lock
UPDATE DIXDROID.DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKEDBY = NULL, LOCKGRANTED = NULL WHERE ID = 1;
[root@localhost liquibase]#
Now, after careful examination of the last .sql file, it's time to call the 'update' command, which deploys any changes that are in the changelog file and have not yet been deployed to your database. During this step (the first time only) Liquibase also creates the DATABASECHANGELOG table, used to track which changesets have been run, and the DATABASECHANGELOGLOCK table, which ensures only one instance of Liquibase is running at a time.
When you run the update command, Liquibase reads the changesets in the changelog file sequentially, then compares the unique identifiers (id, author, and filename) against the values stored in the DATABASECHANGELOG table.
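The call itself has the same shape as before, just with update as the command; a sketch mirroring the invocations used elsewhere in this post:

[root@localhost liquibase]# liquibase --username=dixdroid --password=dixdroid --changeLogFile=changelogfile.sql update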
You can see both of the new (Liquibase-specific) tables, DATABASECHANGELOGLOCK and DATABASECHANGELOG, created under the schema.
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
DATABASECHANGELOGLOCK TABLE
PK_DATABASECHANGELOGLOCK INDEX
DATABASECHANGELOG TABLE
9 rows selected.
SQL> desc DATABASECHANGELOG
Name Null? Type
----------------------------------------- -------- ----------------------------
ID NOT NULL VARCHAR2(255)
AUTHOR NOT NULL VARCHAR2(255)
FILENAME NOT NULL VARCHAR2(255)
DATEEXECUTED NOT NULL TIMESTAMP(6)
ORDEREXECUTED NOT NULL NUMBER(38)
EXECTYPE NOT NULL VARCHAR2(10)
MD5SUM VARCHAR2(35)
DESCRIPTION VARCHAR2(255)
COMMENTS VARCHAR2(255)
TAG VARCHAR2(255)
LIQUIBASE VARCHAR2(20)
CONTEXTS VARCHAR2(255)
LABELS VARCHAR2(255)
DEPLOYMENT_ID VARCHAR2(10)
Now that the baseline is created, we can create the next version of the database. I have created four new SQL files to create a new sequence, view, function, and a table with a few records and an index.
[root@localhost liquibase]# ls -ltrh *.sql*
-rwxrwxrwx. 1 root root 88 Dec 29 23:03 seq2.sql
-rwxrwxrwx. 1 root root 65 Dec 29 23:05 view1.sql
-rwxrwxrwx. 1 root root 172 Dec 29 23:07 func2.sql
-rwxrwxrwx. 1 root root 159 Dec 30 00:34 tab2.sql
I will now create the master.xml file, which acts as a master index: an ordered list of all changelogs. I have added a master.xml file with the following contents. includeAll path="/root/liquibase" is the XML tag that lets you specify a directory containing multiple changelog files. include file="./changelog1.xml" references the changelog XML file I created, which is called by master.xml.
If you check the 'changelog1.xml' file, it captures the order you want Liquibase to follow: in my example it will first create the sequence by reading seq2.sql, followed by the table script tab2.sql, and finally create the function using func2.sql. A few attributes are used in the XML and are explained below.
relativeToChangelogFile="true" : used because we are working with relative paths.
id : assigns a unique identifier to the changeset.
endDelimiter : an attribute that can be set on a sql or sqlFile change type to override the default delimiter of ';'. It can be set to '' or to a character other than ';' to indicate the end of the SQL statement.
stripComments : set to true to remove any comments in the SQL before executing, otherwise false. Defaults to true if not set.
[root@localhost liquibase]# more master.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.9.xsd">
<includeAll path="/root/liquibase"/>
<include file="./changelog1.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>
[root@localhost liquibase]#
-- Changelog XML file that is called by the above master.xml file.
[root@localhost liquibase]# more changelog1.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.9.xsd">
<changeSet author="dixdroid" id="seq2">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./seq2.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="false"/>
</changeSet>
<changeSet author="dixdroid" id="table2">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./tab2.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="false"/>
</changeSet>
<changeSet author="dixdroid" id="getcount" runOnChange="true">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./func2.sql"
relativeToChangelogFile="true"
splitStatements="false"
stripComments="false"/>
</changeSet>
</databaseChangeLog>
Now let's run the Liquibase 'update' command, using master.xml as the new changeLogFile, which will perform all the changes listed in the master XML and the related changelog1.xml.
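That invocation looks like this (identical to the one shown further below for the view changeset):

[root@localhost liquibase]# liquibase --username=dixdroid --password=dixdroid --changeLogFile="master.xml" update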
Let's check if all the new objects were created in the database; we should now have the table TESLA, its index IDX_TESLA, and the RETURNTABLECOUNT function.
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
------------------------------ -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
DATABASECHANGELOGLOCK TABLE
PK_DATABASECHANGELOGLOCK INDEX
DATABASECHANGELOG TABLE
TESLA TABLE
IDX_TESLA INDEX
RETURNTABLECOUNT FUNCTION
12 rows selected.
-- Let's check the DATABASECHANGELOG table to view all details about this schema-level change.
SQL> select id, AUTHOR,FILENAME,DATEEXECUTED,ORDEREXECUTED,DESCRIPTION,LIQUIBASE,DEPLOYMENT_ID from DATABASECHANGELOG;
ID AUTHOR FILENAME DATEEXECUTED ORDEREXECUTED DESCRIPTIO LIQUIBASE DEPLOYMENT
---------- ---------- --------------- ------------------------------ ------------- ---------- -------------------- ----------
seq2 dixdroid changelog1.xml 30-DEC-21 12.33.02.051131 AM 1 sqlFile 4.6.2 0842381718
table2 dixdroid changelog1.xml 30-DEC-21 12.36.11.790990 AM 2 sqlFile 4.6.2 0842571528
getcount dixdroid changelog1.xml 30-DEC-21 12.36.11.937545 AM 3 sqlFile 4.6.2 0842571528
Perfect! Let me add one more object to the schema and see what happens next and how the details get added to the changelog table. This time I will create a new VIEW using a SQL file named view1.sql, tracked through master.xml and a new changelog file.
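The original listing of that changelog file isn't reproduced here, but based on changelog1.xml above and the DATABASECHANGELOG output below (id view1, filename changelog2.xml), it would look something like this, with master.xml gaining a matching include entry (a reconstruction, not the author's exact file):

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.9.xsd">
<changeSet author="dixdroid" id="view1">
<sqlFile dbms="oracle"
endDelimiter=";"
path="./view1.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="false"/>
</changeSet>
</databaseChangeLog>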
Time to run the update command once again, using the modified master.xml file, to create the view.
[root@localhost liquibase]#
[root@localhost liquibase]# liquibase --username=dixdroid --password=dixdroid --changeLogFile="master.xml" update
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 00:57:25 (version 4.6.2 #886 built at 2021-11-30 16:20+0000)
Liquibase Version: 4.6.2
Liquibase Community 4.6.2 by Liquibase
Output saved to /root/liquibase/output_local.sql
Liquibase command 'update' was executed successfully.
[root@localhost liquibase]#
[root@localhost liquibase]#
-- Let's check if the VIEW named PEEK is created or not!
SQL> select object_name, object_type from user_objects;
OBJECT_NAME OBJECT_TYPE
-------------------------------------------------------------------------------------------------------------------------------- -----------------------
TEST TABLE
IDX_TEST1 INDEX
DIXIT1 TABLE
DIXIT1_PK INDEX
DIXIT1_SEQ SEQUENCE
GET_DIXIT1_COUNT FUNCTION
DATABASECHANGELOGLOCK TABLE
PK_DATABASECHANGELOGLOCK INDEX
DATABASECHANGELOG TABLE
TESLA TABLE
IDX_TESLA INDEX
RETURNTABLECOUNT FUNCTION
PEEK VIEW
13 rows selected.
SQL> select id, AUTHOR,FILENAME,DATEEXECUTED,ORDEREXECUTED,DESCRIPTION,LIQUIBASE,DEPLOYMENT_ID from DATABASECHANGELOG;
ID AUTHOR FILENAME DATEEXECUTED ORDEREXECUTED DESCRIPTIO LIQUIBASE DEPLOYMENT
---------- ---------- --------------- ------------------------------ ------------- ---------- -------------------- ----------
seq2 dixdroid changelog1.xml 30-DEC-21 12.33.02.051131 AM 1 sqlFile 4.6.2 0842381718
table2 dixdroid changelog1.xml 30-DEC-21 12.36.11.790990 AM 2 sqlFile 4.6.2 0842571528
getcount dixdroid changelog1.xml 30-DEC-21 12.36.11.937545 AM 3 sqlFile 4.6.2 0842571528
view1 dixdroid changelog2.xml 30-DEC-21 12.57.31.827752 AM 4 sqlFile 4.6.2 0843851310
Great, it's there! As for Liquibase, you can use it as a core DevOps tool to track schema changes, for deployments and automation, integrate it with your CI/CD pipelines, or use it as a migration tool. There are multiple use cases; for more details check their official website.
Recently, while doing a database migration/upgrade project, we encountered a strange case where the orachk utility caught a new security vulnerability (CVE-2021-44228) on the newly upgraded platform, related to the log4j logging library used by the customer's EBS (E-Business Suite version 12.2) middleware. The vulnerability concerns log4j's JNDI features, which do not protect against attacker-controlled LDAP and other JNDI-related endpoints, and it comes with a 10 out of 10 severity score.
We checked with Oracle customer support and they asked us to apply a workaround (link below). I later found that it is not only Oracle products that are affected; the weakness impacts many other applications and cloud services, poses a significant risk, and needs to be patched right away!
And finally, the much-loved 'SQL Tuning Advisor' is now available in the Oracle Cloud DB Service. This useful built-in tool provides suggestions and recommendations about specific SQL statements, and is now available to use with OCI.
Chaos Engineering is a disciplined approach to identifying potential failures before they become outages, with engineering practices focused on (and built around) Kubernetes environments, applications, microservices, and infrastructure (including databases, storage, and networking).
Gremlin provides a 'failure-as-a-service' testing platform, a toolset built to make systems more reliable. It turns failure into resilience by offering engineers a fully hosted solution to safely experiment on complex systems, in order to identify weaknesses before they impact customers and cause revenue loss. It can be used to test any infrastructure component, to find single points of failure and make the system more highly available and failsafe.
So, you can recommend this to any of your customers before they go live: test the infrastructure by generating intentional chaos against the respective zones, services, software components, storage (disk space etc.), databases (clusters, replicas, standbys), applications (Kubernetes etc.), and networks.
About the exam – this certification tests your knowledge of Chaos Engineering concepts like the Gremlin platform, GameDay, MoD (Master of Disaster), and other similar experiments and techniques.
The exam is free of cost, and if you have prior knowledge you can go and take it directly, but it's good to attend their free prep session. There is no time limit, and you have to answer 20 questions in total. The passing percentage required is 80%, and you have 2 attempts in total.
Today's post is about a tool that I have been using for the last few years and which I really like when it comes to working on Oracle databases. It is called SQLcl, or SQL Developer Command Line; you can think of it as an advanced version of SQL*Plus, as it comes with a lot of cool and handy features/commands that are not available in the default SQL*Plus command-line interface. So, this post is about the 'TOP 5 features of SQLcl which I like the most' ….
For those of you who don't know what SQLcl is – it is a tool developed by Oracle's SQL Developer team, led by Jeff Smith (thanks Jeff!), and it makes DBAs' and developers' jobs easier with a range of commands and features that make it very powerful and give it the upper hand over SQL*Plus.
So let's get started with the top 5 features of SQLcl that I liked the most.
Note: I performed all of the tests below on SQLcl version 21.1.1.0, build 21.1.1.113.1704.
[oracle@canttowin bin]$ ./sql
SQLcl: Release 21.1 Production on Sat Jun 12 23:29:31 2021
Copyright (c) 1982, 2021, Oracle. All rights reserved.
Username? (''?) system
Password? (**********?) ********
Last Successful login time: Sat Jun 12 2021 23:29:39 -04:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> version
Oracle SQLDeveloper Command-Line (SQLcl) version: 21.1.1.0 build: 21.1.1.113.1704
Feature 1: 'REPEAT' command. This is a pretty useful command if you want to execute a specific query a number of times and watch the results. It's just like what 'watch' is on Linux: it simply executes the SQL command and prints the output the requested number of times at a given interval.
For example, below I have executed a SQL statement 5 times with a gap of 5 seconds.
SQL> select count(*) from v$session where status='ACTIVE';
COUNT(*)
___________
149
SQL> repeat 5 5
Running 1 of 5 @ 11:38:25.243 with a delay of 5s
COUNT(*)
___________
89
Running 2 of 5 @ 11:38:30.251 with a delay of 5s
COUNT(*)
___________
109
Running 3 of 5 @ 11:38:35.254 with a delay of 5s
COUNT(*)
___________
199
Running 4 of 5 @ 11:38:40.258 with a delay of 5s
COUNT(*)
___________
230
Running 5 of 5 @ 11:38:45.263 with a delay of 5s
COUNT(*)
___________
409
SQL>
Feature 2: 'Quick DDL generation'. With SQLcl you no longer have to run DBMS_METADATA.get_ddl to get the definition of your table, index, or anything else. You can simply use the DDL command, with syntax DDL <object_name>, and you will have the complete object DDL.
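For example, for the BIGTAB table used below (output trimmed and storage clauses omitted; the column list matches the INFO output further down):

SQL> ddl system.bigtab

  CREATE TABLE "SYSTEM"."BIGTAB"
   ( "ID" NUMBER,
     "WEIGHT" NUMBER,
     "ADATE" DATE
   ) ....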
Feature 3: 'Collect object information'. No more querying dynamic views and running DESC commands to get your table stats (rows, analyzed date, in-memory status, comments, and sample size) and table descriptions. You can get all of this with a single command: INFORMATION, or INFO for short.
If you want more details about the histograms on your table, there is the INFO+ command, which presents additional column-level statistics.
SQL> information system.bigtab
TABLE: BIGTAB
LAST ANALYZED:2021-06-12 23:45:33.0
ROWS :67310
SAMPLE SIZE :67310
INMEMORY :DISABLED
COMMENTS :This is a table for testing purposes
Columns
NAME DATA TYPE NULL DEFAULT COMMENTS
ID NUMBER Yes
WEIGHT NUMBER Yes
ADATE DATE Yes
SQL>
SQL> info+ system.bigtab
TABLE: BIGTAB
LAST ANALYZED:2021-06-20 13:41:19.0
ROWS :67310
SAMPLE SIZE :67310
INMEMORY :DISABLED
COMMENTS :This is a table for testing purposes
Columns
NAME DATA TYPE NULL DEFAULT LOW_VALUE HIGH_VALUE NUM_DISTINCT HISTOGRAM
ID NUMBER Yes 22 198 1000 FREQUENCY
WEIGHT NUMBER Yes -2147337077 2147453933 67310 NONE
ADATE DATE Yes 2018.09.16.23.34.01 2021.06.12.23.24.14 66224 NONE
Feature 4: 'CTAS made easy and quick'. CTAS is a very useful command, quite frequently used because of its simplicity; with SQLcl you don't have to type the complete CREATE TABLE … AS SELECT syntax to create a new table from an existing one.
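For instance, to clone BIGTAB you would just type the following (a sketch; SQLcl expands it into the full CREATE TABLE … AS SELECT statement for you to edit or run):

SQL> ctas system.bigtab bigtab_copy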
Feature 5: 'Extended and more descriptive AUTOTRACE'. This is by far one of the best features of SQLcl in my opinion, especially for someone who has to tune and tweak databases every now and then. It provides more advanced details or statistics (overall 37 different stats) for any SQL statement you run with AUTOTRACE on. I am a big fan of this feature!
SQL>
SQL> set autotrace on
Autotrace Enabled
Shows the execution plan as well as statistics of the statement.
SQL>
SQL> select * from system.bigtab where ID =588;
...
.....
67 rows selected.
Explain Plan
-----------------------------------------------------------
PLAN_TABLE_OUTPUT
_______________________________________________________________________________
Plan hash value: 441133017
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 76 | 2660 | 63 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| BIGTAB | 76 | 2660 | 63 (0)| 00:00:01 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID"=588)
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
Statistics
-----------------------------------------------------------
1 CPU used by this session
1 CPU used when call started
3 DB time
47 Requests to/from client
46 SQL*Net roundtrips to/from client
4 buffer is not pinned count
1329 bytes received via SQL*Net from client
91534 bytes sent via SQL*Net to client
5 calls to get snapshot scn: kcmgss
11 calls to kcmgcs
302 consistent gets
302 consistent gets from cache
302 consistent gets pin
302 consistent gets pin (fastpath)
2 enqueue releases
2 enqueue requests
3 execute count
2473984 logical read bytes from cache
293 no work - consistent read gets
49 non-idle wait count
3 opened cursors cumulative
1 opened cursors current
2 parse count (hard)
3 parse count (total)
1 parse time cpu
2 parse time elapsed
12 process last non-idle time
5 recursive calls
1 recursive cpu usage
302 session logical reads
1 sorts (memory)
2010 sorts (rows)
293 table scan blocks gotten
86905 table scan disk non-IMC rows gotten
86905 table scan rows gotten
2 table scans (short tables)
47 user calls