Tales From A Lazy Fat DBA

Fan of Oracle DB & Performance, PostgreSQL, Cassandra & much more … \,,/


  • Disclaimer!

FatDBA or Oracle ‘Ant’ is an independent blog/site. The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned in this website may be the trademarks of their respective owners.

Posts Tagged ‘oracle’

Quarantine a SQL ? This can save from a system performance pandemic …

Posted by FatDBA on July 6, 2021

Hi Guys,

Last weekend I was asked to examine a performance problem where the customer was observing sporadic resource spikes on CPU and physical I/O (PIO), and had no idea how to handle the situation. They had already identified the problematic SQL and asked me to take a look. During the initial investigation on the system, I found that Resource Manager was enabled and was killing the offending SQL every time it tried to breach the limits set on CPU and other resources.

I spent some time understanding the query's behavior and performance, and found it was a case of SQL plan (PHV) flip, where the optimizer toggles between a good and a bad plan. Since this was a time-sensitive issue and the customer needed a quick fix before I could identify the reason behind the PHV flip, we had to find a way to stop the SQL from executing with the bad plans. This was an Oracle EE 19c system running on Exadata, so I immediately proposed a solution using the new ‘SQL Quarantine‘ feature, which prevents reuse of an execution plan that was terminated by Resource Manager due to resource limits.

Let me show you how to do that. You can quarantine a single plan (SQL ID + PHV), all execution plans of a SQL ID, or quarantine by SQL text; in this demo I will quarantine a SQL and all of its execution plans.

Note: This feature is currently available only with Enterprise Edition on engineered systems, such as Exadata and Exadata Cloud Service, but for this demo I will enable the ‘_exadata_feature_on‘ parameter on my sandbox setup.

** Please don’t touch this parameter in your production environments if you’re not running Oracle EE on top of Exadata or Exadata Cloud Service.
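
The exact statement used to flip the flag isn't shown in this post; on a sandbox it would presumably be something like the one below (illustration only, the underscore parameter must be double-quoted, and the change only takes effect after the restart shown next):

SQL> alter system set "_exadata_feature_on" = true scope=spfile;

System altered.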

[oracle@canttowinsec ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 3 23:33:41 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           FALSE                     FALSE                     Exadata Feature On

SQL>
SQL>


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>


SQL> startup
ORACLE instance started.

Total System Global Area  922745216 bytes
Fixed Size                  8903040 bytes
Variable Size             843055104 bytes
Database Buffers           67108864 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL>



SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           TRUE                      TRUE                      Exadata Feature On

SQL>




-- For this demo I have created one table with 1000000 rows. 

SQL> select count(*) from bigtab;

  COUNT(*)
----------
   1000000

Next, check the SQL ID, PHV and text of the SQL for which you want to set up the QUARANTINE feature, and create the quarantine configuration for it. You can set it for SQL ID, SQL ID + PHV, or SQL text.

SQL> select sql_id, plan_hash_value, sql_text from v$sqlarea where sql_text like '%select * from bigtab%';

SQL_ID        PLAN_HASH_VALUE
------------- ---------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
6fwqzurbc8y7k       441133017
select * from bigtab

SQL>


-- Create a quarantine configuration by SQL_ID, optionally pinned to one plan hash value.
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id');
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id', PLAN_HASH_VALUE => '&plan_hash_value');

-- Or create a quarantine configuration by SQL text.
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(SQL_TEXT => to_clob('select count(*) from emp'));



-- This command creates a quarantine configuration for SQL ID '6fwqzurbc8y7k' and all its execution plans.
SQL>
SQL> DECLARE
quarantine_sql VARCHAR2(30);
BEGIN
quarantine_sql :=   DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '6fwqzurbc8y7k');
END;
/  

PL/SQL procedure successfully completed.

SQL>



-- Check if quarantine config/profile is created.

SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time,CREATED, ENABLED from dba_sql_quarantine;


NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           ALWAYS               ALWAYS                         04-JUL-21 12.09.25.567422 AM   YES


Next, we will use the DBMS_SQLQ.ALTER_QUARANTINE procedure to put limits on resource usage, i.e. elapsed time, CPU time, I/O in MB, number of physical I/O (PIO) requests, or number of logical I/O (LIO) requests.

SQL>
SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'CPU_TIME',
       PARAMETER_VALUE => '5');

    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'ELAPSED_TIME',
       PARAMETER_VALUE => '10');
END;
/  2    3    4    5    6    7    8    9   10   11   12

PL/SQL procedure successfully completed.

SQL>


-- Let's verify the limits we have set against this SQL quarantine profile (same query against DBA_SQL_QUARANTINE as above).
SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time, CREATED, ENABLED from dba_sql_quarantine;

NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           10                   5                              04-JUL-21 12.12.41.918609 AM   YES

Next we will setup the RESOURCE MANAGER.

-- Create a pending area.
SQL> begin
 dbms_resource_manager.create_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

-- Create a consumer group with name 'DIXITPOC_GROUP'.
SQL> begin
 dbms_resource_manager.create_consumer_group(CONSUMER_GROUP=>'DIXITPOC_GROUP',COMMENT=>'To test SQL quarantine feature for one SQL');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.


-- Now map the user 'DIXIT' to the consumer group; this could be any user through which you will execute the query.
SQL> begin
 dbms_resource_manager.set_consumer_group_mapping(attribute => 'ORACLE_USER',value => 'DIXIT',consumer_group =>'DIXITPOC_GROUP' );
 end;
  /  2    3    4

PL/SQL procedure successfully completed.


-- Create a resource plan with name 'POC_FOR_QUARANTINE'.
SQL> begin
dbms_resource_manager.create_plan(plan => 'POC_FOR_QUARANTINE',comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds');
end;
/  2    3    4

PL/SQL procedure successfully completed.


-- Create a plan directive assigning consumer group 'DIXITPOC_GROUP' to resource plan 'POC_FOR_QUARANTINE'. I am setting an execution limit of 5 seconds for the SQL, after which the call is cancelled (CANCEL_SQL).
SQL> begin
 dbms_resource_manager.create_plan_directive(
 plan => 'POC_FOR_QUARANTINE',
 group_or_subplan => 'DIXITPOC_GROUP',
 comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds',
 switch_group=>'CANCEL_SQL',
 switch_time => 5,
 switch_estimate=>false);
 end;
 /  2    3    4    5    6    7    8    9   10

PL/SQL procedure successfully completed.




-- Allocate full resources to rest of the sessions which are not part of this plan.
SQL> begin
 dbms_resource_manager.create_plan_directive(PLAN=> 'POC_FOR_QUARANTINE', GROUP_OR_SUBPLAN=>'OTHER_GROUPS',COMMENT=>'leave rest as is');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.



-- Validate and submit pending area.
SQL> begin
 dbms_resource_manager.validate_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL> begin
 dbms_resource_manager.submit_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL>


-- Grant switch privilege to the DIXIT user which will switch it to 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager_privs.grant_switch_consumer_group('DIXIT','DIXITPOC_GROUP',false);
end;
/  2    3    4

PL/SQL procedure successfully completed.

-- Set the initial consumer group of the DIXIT user to 'DIXITPOC_GROUP'.
SQL> begin
dbms_resource_manager.set_initial_consumer_group('DIXIT','DIXITPOC_GROUP');
end;
/  2    3    4

PL/SQL procedure successfully completed.

SQL>

Next, set the Resource Manager plan at the database level by setting the RESOURCE_MANAGER_PLAN parameter to the plan name ‘POC_FOR_QUARANTINE’.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string
SQL> 


SQL> alter system set RESOURCE_MANAGER_PLAN = 'POC_FOR_QUARANTINE';

System altered.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string      POC_FOR_QUARANTINE
SQL>

Execute the SQL for the first time; you will get ‘active time limit exceeded’ because Resource Manager cancels the call.

SQL>
SQL> select * from bigtab;

ERROR at line 2:
ORA-00040: active time limit exceeded - call aborted

Now, if you run the SQL a second time, you will see the quarantine profile being used, and the execution of the SQL is aborted straight away.

SQL>
SQL> select * from bigtab;

ERROR at line 2:
ORA-56955: quarantined plan used
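
To confirm that the quarantine really kicked in, and to clean it up once the plan-flip root cause is fixed, you can use the quarantine columns of V$SQL and DBMS_SQLQ.DROP_QUARANTINE_BY_NAME. A minimal sketch (column and procedure names as documented for 19c, not captured from this demo run):

-- Which quarantine configuration is attached to the cursor, and how many executions it has avoided.
SQL> select sql_id, plan_hash_value, sql_quarantine, avoided_executions
       from v$sql
      where sql_id = '6fwqzurbc8y7k';

-- Drop the quarantine configuration when it is no longer needed.
SQL> exec DBMS_SQLQ.DROP_QUARANTINE_BY_NAME(QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5');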

Hope It Helped
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , , , | Leave a Comment »

Migration assessment report, possible with ora2pg ?

Posted by FatDBA on July 3, 2021

Hi Guys,

Last week someone asked me whether it's possible to generate a migration assessment report (of any sort) when using ora2pg for migration. The team was coming from an EDB PostgreSQL migration where they had used EDB's migration toolkit, which can generate something similar, but for their current project ora2pg had been finalized as the migration tool for an Oracle to vanilla PostgreSQL migration.

And the answer to their question was YES, it's possible to generate a migration assessment report with ora2pg too. It's very useful, as it performs an assessment of potential migration problems or difficulties, allocates a cost per object type, and also provides other details such as invalid object counts, the total number of each object type, comments, and further descriptions of the objects.

It inspects all database objects, functions and stored procedures to detect whether there are objects or PL/SQL code that cannot be automatically converted by the tool. At the end of the report it calculates the overall cost (the sum of all costs) and assigns an approximate amount of time to perform the migration.

The default time per ‘cost evaluation unit’ is 5 minutes, and the output can be generated in text (default), HTML or CSV format.
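
If the default 5-minute unit doesn't match your team's velocity, it can be overridden; a hedged example below uses the --cost_unit_value switch from the ora2pg documentation (not taken from this particular run):

# Same assessment, but costed at 10 minutes per unit and dumped as plain text.
$ ora2pg -t show_report --estimate_cost --cost_unit_value 10 -c config/ora2pg.conf > /tmp/orapgtestmigr/ora2pgreport.txt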

Let's generate a sample report. But first, a quick SHOW_VERSION run, which reports the Oracle database version ora2pg is connected to (an 18c database here); then we will generate the assessment report for a particular schema.

$
$ ora2pg -t SHOW_VERSION -c config/ora2pg.conf
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0


-- The '--estimate_cost' option activates the migration cost evaluation together with SHOW_REPORT.

$ ora2pg -t show_report  --estimate_cost -c config/ora2pg.conf --dump_as_html > /tmp/orapgtestmigr/ora2pgreport.html

-------------------------------------------------------------------------------
Ora2Pg v20.0 - Database Migration Report
-------------------------------------------------------------------------------
Version  Oracle Database 18c Enterprise Edition Release 18.0.0.0.0
Schema   DIXITSCH
Size     88.19 MB


Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

Single command to prepare a DB for a Data Guard configuration in Oracle 21c – PREPARE DATABASE FOR DATA GUARD WITH

Posted by FatDBA on June 28, 2021

Hi Guys,

Another cool feature I identified while doing a POC on an Oracle 21c database!
Starting with Oracle 21c, you can prepare the primary database for a Data Guard broker configuration with the single command PREPARE DATABASE FOR DATA GUARD WITH <options>. This command does all the work for you: it prepares the database, enables force logging, sets the required parameters, creates standby redo log files, enables Flashback, and enables archivelog mode.

The command below prepares the database named dixitdb for use as a primary database in a DG broker configuration.

[oracle@canttowin ~]$ dgmgrl
DGMGRL for Linux: Release 21.0.0.0.0 - Production on Sat Jun 26 11:10:11 2021
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL>

DGMGRL> PREPARE DATABASE FOR DATA GUARD WITH DB_UNIQUE_NAME IS dixitdb;
DB_RECOVERY_FILE_DEST IS "$ORACLE_BASE_HOME/dbs/"
DB_RECOVERY_FILE_DEST_SIZE is "400G"
DG_BROKER_CONFIG_FILE1 IS "$ORACLE_HOME/dbs/testfile.dat"
DG_BROKER_CONFIG_FILE2 IS "$ORACLE_HOME/dbs/testfile2.dat";
Preparing database "dixitdb" for Data Guard.
Creating server parameter file (SPFILE) from initialization parameter memory values.
Database must be restarted after creating the server parameter (SPFILE).
Shutting down database "dixitdb".
Database closed.
Database dismounted.
ORACLE instance shut down. Starting database "dixitdb" to mounted mode.
ORACLE instance started.
Database mounted. Server parameter file (SPFILE) is "ORACLE_BASE_HOME/dbs/spdixitdb.ora".
Initialization parameter DB_UNIQUE_NAME set to 'dixitdb'.
Initialization parameter DB_FILES set to 1024.
Initialization parameter LOG_BUFFER set to 401355151.
Primary database must be restarted after setting static initialization parameters. Primary database must be restarted to enable archivelog mode.
Shutting down database "dixitdb".
Database dismounted.
ORACLE instance shut down.
Starting database "dixitdb" to mounted mode.
ORACLE instance started.
Database mounted.
Initialization parameter DB_FLASHBACK_RETENTION_TARGET set to 120.
Initialization parameter DB_BLOCK_CHECKSUM set to 'TYPICAL'.
Initialization parameter DB_LOST_WRITE_PROTECT set to 'TYPICAL'.
Initialization parameter PARALLEL_THREADS_PER_CPU set to 1.
Removing RMAN archivelog deletion policy 1.
Removing RMAN archivelog deletion policy 2.
RMAN configuration archivelog deletion policy set to SHIPPED TO ALL STANDBY.
Initialization parameter DB_RECOVERY_FILE_DEST_SIZE set to '200G'.
Initialization parameter DB_RECOVERY_FILE_DEST set to 'ORACLE_BASE_HOME/dbs/'. Initialization parameter DG_BROKER_START set to FALSE.
Initialization parameter DG_BROKER_CONFIG_FILE1 set to 'ORACLE_HOME/dbs/testfile.dat'.
Initialization parameter DG_BROKER_CONFIG_FILE2 set to 'ORACLE_HOME/dbs/testfile2.dat'.
LOG_ARCHIVE_DEST_n initialization parameter already set for local archival.
Initialization parameter LOG_ARCHIVE_DEST_2 set to 'location=use_db_recovery_file_dest valid_for=(all_logfiles, all_roles)'.
Initialization parameter LOG_ARCHIVE_DEST_STATE_2 set to 'Enable'.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'MANUAL'.
Standby log group 4 will be dropped because it was not configured correctly.
Standby log group 3 will be dropped because it was not configured correctly.
Adding standby log group size 26214400 and assigning it to thread 1.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'AUTO'.
Initialization parameter DG_BROKER_START set to TRUE.
Database set to FORCE LOGGING. Database set to ARCHIVELOG.
Database set to FLASHBACK ON.
Database opened.
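
Once the command finishes, a quick sanity check from SQL*Plus confirms what the broker reports it has done. A small verification sketch (standard V$DATABASE columns; the output below is what you would expect to see, not captured from this run):

SQL> select force_logging, log_mode, flashback_on from v$database;

FORCE_LOGGING    LOG_MODE     FLASHBACK_ON
---------------- ------------ ------------------
YES              ARCHIVELOG   YES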


Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , | Leave a Comment »

Getting SQL Net message from dblink, have you tried DRIVING_SITE hint ?

Posted by FatDBA on June 27, 2021

Hi Guys,

Last week I was asked to look into a performance problem where the customer DBA reported excessive ‘SQL*Net message from dblink’ waits on the system. As I didn't have direct access to the box, I asked him to share both AWR and ASH reports for the time when they observed the problem. And yes, he was right: there were only a few waits on this event, but with very high average (per-execution) elapsed times. Next I examined the ASH data for the period and identified the SQL (a SELECT) that was frequently waiting on this wait event.

About this event: SQL*Net messages are classified as Network class events and occur when part of the final dataset comes from a remote database accessed via a DB link. Alright, so we know the problem and we have the problematic SQL; let's simulate the problem and the solution.

I will first create the DB link that I will use to access the remote data, which lives in a different database on a different host, and then create sample tables with some test data on both Site A and Site B.

Reference used:
BIGTAB: Table on Site 1, local site.
BIGTAB2: Table on site 2, remote location

SQL>
SQL> CREATE DATABASE LINK dixitdlink1
  2     CONNECT TO dixit IDENTIFIED BY oracle90
   USING '(DESCRIPTION=
            (ADDRESS = (PROTOCOL = TCP)(HOST = canttowinsec.quebecdomain)(PORT = 1521))
            (CONNECT_DATA=(SERVICE_NAME=dhavaldb.quebecdomain))
          )';  

Database link created.

SQL>


-- On source database, on local host.
CREATE TABLE BIGTAB (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.

INSERT INTO BIGTAB (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.


SQL> delete from prashant.BIGTAB where rownum < 60000;

59999 rows deleted.




-- This I have created on target database, on a remote host.
CREATE TABLE BIGTAB2 (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.


INSERT INTO BIGTAB2 (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.

Alright, the test data is created on both sites. In order to mimic the exact case I'd faced, I intentionally deleted 59,999 rows from the table on site 1. So now we have table BIGTAB with 7735 records locally, and on the remote database we have table BIGTAB2 with 67447 rows. Let's execute a few queries and do an inner join between the two tables.

Test 1: With no hint or tweaking of any kind. The optimizer brings all the remote data over to the local site and performs the join locally (the same behavior results if you use the hint with the name of the local, smaller table, i.e. /*+ DRIVING_SITE(BIGTAB) */).

SQL> select * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
........
............
...............
.................


Execution Plan
----------------------------------------------------------
Plan hash value: 2705338834

----------------------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN         |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL| BIGTAB  |  6948 |   237K|    68   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | BIGTAB2 | 19930 |   681K|    76   (0)| 00:00:01 | DIXIT~ | R->S |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ADATE"="DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT "ID","WEIGHT","DATEOFREGISTER" FROM "DIXIT"."BIGTAB2" "BIGTAB2"
       (accessing 'DIXITDLINK1.ONTADOMAIN' )


Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

So, this is what was happening on that day, on that system, with that query waiting on ‘SQL*Net message from dblink‘. This was the right time to introduce the DRIVING_SITE hint, something I have been experimenting with for a long time now, maybe since my 9i days.

This hint is useful for distributed queries: it forces the optimizer to use a particular site as the driving site, i.e. it instructs the optimizer to execute the query at a site other than the one the database would have selected. In the previous test case, all rows from the remote host that match the condition are sent to the local site and the join is executed locally.

Now, I am going to use DRIVING_SITE (naming the remote table) and force the optimizer to perform the join at the remote site; the query will be executed there and only the result set returned to the local site.

So, you should try with the DRIVING_SITE hint on the site where the huge table resides!

Test 2: With the DRIVING_SITE hint on the remote table BIGTAB2, which is large and has 67447 records.

SQL> select /*+ DRIVING_SITE(BIGTAB2) */ * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
.....
.......
........
..........



Execution Plan
----------------------------------------------------------
Plan hash value: 2214131741

--------------------------------------------------------------------------------------------------
| Id  | Operation              | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN             |         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|   2 |   REMOTE               | BIGTAB  |    82 |  2870 |     2   (0)| 00:00:01 |      ! | R->S |
|   3 |   TABLE ACCESS FULL    | BIGTAB2 | 53126 |  1815K|    68   (0)| 00:00:01 | DHAVA~ |      |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A2"."ADATE"="A1"."DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   2 - SELECT "ID","WEIGHT","ADATE" FROM "PRASHANT"."BIGTAB" "A2" (accessing '!' )


Note
-----
   - fully remote statement
   - dynamic statistics used: dynamic sampling (level=2)

Look at the cost, rows processed, and bytes (data processed): all of them are reduced to a great extent. Of course, you don't see much elapsed-time difference in this example, since the dataset is small and the query simple; try it yourself on a bigger workload and see the magic!

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »

Running on Oracle 19c ? – Why to enable full supplemental logging when SUBSET DATABASE REPLICATION is there

Posted by FatDBA on June 26, 2021

Hi Folks,

Today's post is about one of the cool features added in Oracle 19c, which is very useful and which I recently discovered while working on a Golden Gate setup.

This is about supplemental logging, which makes the database capture extra redo for redo-based applications such as a logical standby or Golden Gate; it may require additional columns to be logged in the redo log files, which puts noticeable stress and overhead on the database. It looks especially wasteful if you are doing only a partial Golden Gate replication, i.e. for only a few tables.

Starting with Oracle 19c, this is something you can now control. With this version you can enable fine-grained supplemental logging (SUBSET DATABASE REPLICATION), which greatly reduces the logging overhead: it lets partial-replication users avoid supplemental logging for tables that are not being replicated, even when supplemental logging is enabled at the database level. So, if you are running Oracle 19c, have the COMPATIBLE parameter at 19.0.0.0.0 or greater, and have set ENABLE_GOLDENGATE_REPLICATION to TRUE, you can use this option.
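
Before trying it, it's worth confirming the two prerequisites on your own database; a quick check could look like this (output illustrative):

SQL> select name, value from v$parameter
      where name in ('compatible', 'enable_goldengate_replication');

NAME                            VALUE
------------------------------- ----------
compatible                      19.0.0
enable_goldengate_replication   TRUE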

Note: I know of a few products where ‘Supplemental Log Data Subset Database Replication’ doesn't work or isn't supported yet, e.g. HVR 6.

SQL> select BANNER_FULL from v$version;

BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          NO


SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          YES

You can disable it at any time using the commands below.

SQL>
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL>  ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          NO

Hope It Helps!
Prashant Dixit

Posted in Advanced | Tagged: , , | 2 Comments »

Data replication between Oracle & PostgreSQL using Golden Gate.

Posted by FatDBA on June 24, 2021

Hi Mates,

Recently I was doing a migration where Golden Gate was used for data movement between the source, an Oracle 19c database, and the target, PostgreSQL 12. This was the first time I had done this for PostgreSQL; I had earlier tried it with MySQL, Teradata and Cassandra, and those were a little tricky, especially the one with Cassandra. Let's see how this one with PostgreSQL goes …

Okay, so this post is all about setting a test replication setup between Oracle and PostgreSQL database. Without further ado, let’s get started!

Details:
Source: Oracle 19c (19.3.0) database
Target: Vanilla/Open-Source PostgreSQL 12.6.7
Golden Gate Version: Golden Gate 19.1 for Oracle (Source), Golden Gate 19.1 for PostgreSQL Database (Target Host).
Source Hostname: canttowinsec.quebecdomain (IP 192.168.154.129)
Target Hostname: canttowin.ontadomain (192.168.154.128)

ON SOURCE:

Let’s first create the user and one test table on the source database which we will use to do this demo.

sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 23 15:21:01 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.3.0.0.0

SQL> create user opeth identified by opeth;

User created.

SQL> grant connect to opeth;

Grant succeeded.

SQL> conn opeth/opeth
Connected.

SQL>  create table testtable (col1 number, col2 varchar2(20));

Table created.

SQL> alter table testtable add primary key (col1);

Table altered.

Next, I will install Golden Gate using a response file (silent install method).

[oracle@canttowin Disk1]$ ./runInstaller -silent -nowait -responseFile /home/oracle/ggdirpsql/Disk1/response/oggcore.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 6549 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3945 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2021-06-18_04-33-50AM. Please wait ...[oracle@canttowin Disk1]$ You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2021-06-18_04-33-50AM.log
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2021-06-18_04-33-50AM.log' for more details.
Successfully Setup Software.
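
The response file itself isn't shown here; the handful of entries that typically need editing in oggcore.rsp are roughly the following (key names per the GoldenGate OUI response-file template, treat them as an assumption, values purely illustrative):

# oggcore.rsp (excerpt, illustrative values)
INSTALL_OPTION=ORA19c
SOFTWARE_LOCATION=/home/oracle/ggdirpsql
START_MANAGER=false
MANAGER_PORT=7810
DATABASE_LOCATION=/u01/app/oracle/product/19.0.0/dbhome_1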

Next, I will set the environment variables (to avoid errors while calling the GGSCI CLI) and then configure and start the manager process.

[oracle@canttowinsec ggdirpsql]$ pwd
/home/oracle/ggdirpsql
[oracle@canttowinsec ggdirpsql]$ export PATH=$PATH:/home/oracle/ggdirpsql
[oracle@canttowinsec ggdirpsql]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/home/oracle/ggdirpsql
[oracle@canttowinsec ggdirpsql]$


[oracle@canttowin ggdir]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 19.1.0.0.4 OGGCORE_19.1.0.0.0_PLATFORMS_191017.1054_FBO
Linux, x64, 64bit (optimized), Oracle 19c on Oct 17 2019 21:16:29
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


GGSCI (canttowinsec.quebecdomain) 7> view param mgr

PORT 7810


GGSCI (canttowinsec.quebecdomain) 1> start mgr
Manager started


GGSCI (canttowinsec.quebecdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

Now that the manager process is running, let's log in to the source database through Golden Gate and list the table we created at the beginning, just to make sure everything is okay at the source.

GGSCI (canttowinsec.quebecdomain) 6> dblogin userid opeth, password opeth
Successfully logged into database.


GGSCI (canttowinsec.quebecdomain as opeth@dhavaldb) 10> list tables opeth.*
opeth.testtable

Found 1 tables matching list criteria.


All good, let's create the EXTRACT process on the source system.

GGSCI (canttowinsec.quebecdomain) 2> edit param EORPSQL

GGSCI (canttowinsec.quebecdomain) 1> view params EORPSQL

EXTRACT EORPSQL
USERID opeth, password opeth
RMTHOST 192.168.154.128, MGRPORT 7810
RMTTRAIL ./dirdat/ep
TABLE opeth.testtable;


GGSCI (canttowinsec.quebecdomain) 4> add extract EORPSQL, tranlog, begin now
EXTRACT added.


GGSCI (canttowinsec.quebecdomain) 5> add exttrail ./dirdat/ep, extract EORPSQL, megabytes 5
EXTTRAIL added.

GGSCI (canttowinsec.quebecdomain) 6> start EORPSQL

Sending START request to MANAGER ...
EXTRACT EORPSQL starting



GGSCI (canttowinsec.quebecdomain) 11> info EORPSQL

EXTRACT    EORPSQL   Last Started 2021-06-23 15:10   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
Process ID           10714
Log Read Checkpoint  Oracle Redo Logs
                     2021-06-23 15:11:11  Seqno 15, RBA 31941120
                     SCN 0.2711866 (2711866)

ON TARGET:

Now we are done with all prerequisites, installation and configurations at the source end, let’s move to the target system now where we have our PostgreSQL database running. I will create the same table what we have created on Oracle database (source).

[postgres@canttowin ggdirpsql]$ psql -p 5432
psql (12.6.7)
Type "help" for help.

postgres=# \c opeth opeth
You are now connected to database "opeth" as user "opeth".

opeth=# CREATE TABLE "public"."testtable"
opeth-# (
opeth(#   "col1" integer NOT NULL,
opeth(#   "col2" varchar(20),
opeth(#   CONSTRAINT "PK_Col111" PRIMARY KEY ("col1")
opeth(# )
opeth-# ;
CREATE TABLE
opeth=#
opeth=#
opeth=# \dt+
                     List of relations
 Schema |  Name  | Type  |  Owner   |  Size   | Description
--------+--------+-------+----------+---------+-------------
 public | testtable | table | postgres | 0 bytes |
(1 row)

Next comes the most important part: creating the odbc.ini file. Golden Gate uses an ODBC connection to connect to the Postgres database. The ODBC driver is shipped with the installation, but on Unix you have to create the ODBC configuration file, commonly called odbc.ini, yourself. You need to create this file in the GG_HOME directory on the target system.

[postgres@canttowin ggdirpsql]$ more odbc.ini
[ODBC Data Sources]
PostgreSQL on pgsql
[ODBC]
IANAAppCodePage=4
InstallDir=/home/postgres/ggdirpsql
[pg12db]
Driver=/home/postgres/ggdirpsql/lib/GGpsql25.so
Description=Postgres driver
Database=opeth
HostName=canttowin.ontadomain
PortNumber=5432
LogonID=opeth
Password=opeth



Reference:
Driver=<your GoldenGate home directory>/lib/GGpsql25.so
InstallDir=<where you have installed the GG software on the target server>
IANAAppCodePage=<the value 4 represents the ISO-8859-1 character set>
Database=<Postgres database name>
HostName=<hostname of the Postgres database>
PortNumber=<port number of the Postgres database>
LogonID=<username for the Postgres database>
Password=<password for the Postgres database>

Now, let's set the environment variables to point to the odbc.ini file, the lib directory and the installation directory, and create all the required GG-specific directories on the target database server.

[postgres@canttowin ggdirpsql]$ export ODBCINI=/home/postgres/ggdirpsql/odbc.ini
[postgres@canttowin ggdirpsql]$ export PATH=$PATH:/home/postgres/ggdirpsql
[postgres@canttowin ggdirpsql]$  export LD_LIBRARY_PATH=/home/postgres/ggdirpsql/lib


[postgres@canttowin ggdirpsql]$ ./ggsci

Oracle GoldenGate Command Interpreter for PostgreSQL
Version 19.1.0.0.200714 OGGCORE_19.1.0.0.0OGGBP_PLATFORMS_200628.2141
Linux, x64, 64bit (optimized), PostgreSQL on Jun 29 2020 03:59:15
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.



GGSCI (canttowin.ontadomain) 1> create subdirs

Creating subdirectories under current directory /home/postgres/ggdirpsql

Parameter file                 /home/postgres/ggdirpsql/dirprm: created.
Report file                    /home/postgres/ggdirpsql/dirrpt: created.
Checkpoint file                /home/postgres/ggdirpsql/dirchk: created.
Process status files           /home/postgres/ggdirpsql/dirpcs: created.
SQL script files               /home/postgres/ggdirpsql/dirsql: created.
Database definitions files     /home/postgres/ggdirpsql/dirdef: created.
Extract data files             /home/postgres/ggdirpsql/dirdat: created.
Temporary files                /home/postgres/ggdirpsql/dirtmp: created.
Credential store files         /home/postgres/ggdirpsql/dircrd: created.
Masterkey wallet files         /home/postgres/ggdirpsql/dirwlt: created.
Dump files                     /home/postgres/ggdirpsql/dirdmp: created.




Now it's time to create all the GG-related processes on the target; we will start with the manager process, followed by the replicat.
GGSCI (canttowin.ontadomain) 2> edit param mgr

GGSCI (canttowin.ontadomain) 3> start mgr
Manager started.
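
The target manager parameter file isn't displayed above; assuming the same layout as the source, a minimal one only needs the port that the source extract points to (7810 in the RMTHOST clause earlier), for example:

GGSCI (canttowin.ontadomain) 4> view param mgr

PORT 7810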



GGSCI (canttowin.ontadomain) 2> dblogin sourcedb pg12db userid opeth
Password:

2021-06-23 15:00:58  INFO    OGG-03036  Database character set identified as UTF-8. Locale: en_US.UTF-8.

2021-06-23 15:00:58  INFO    OGG-03037  Session character set identified as UTF-8.
Successfully logged into database.


GGSCI (canttowin.ontadomain as opeth@pg12db) 5> list tables public.*
public.testtable

Found 1 tables matching list criteria.




GGSCI (canttowin.ontadomain as opeth@pg12db) 15> view params RORPSQL

REPLICAT RORPSQL
SOURCEDEFS ./dirdef/testtable.def
SETENV ( PGCLIENTENCODING = "UTF8" )
SETENV (ODBCINI="/home/postgres/ggdirpsql/odbc.ini" )
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
TARGETDB pg12db, USERID opeth, PASSWORD opeth
DISCARDFILE ./dirrpt/diskg.dsc, purge
MAP opeth.testtable, TARGET public.testtable, COLMAP (COL1=col1,COL2=col2);



GGSCI (canttowin.ontadomain as opeth@pg12db) 7> add replicat RORPSQL, NODBCHECKPOINT, exttrail ./dirdat/ep
REPLICAT added.


GGSCI (canttowin.ontadomain as opeth@pg12db) 8> start RORPSQL

Sending START request to MANAGER ...
REPLICAT RORPSQL starting


GGSCI (canttowin.ontadomain as opeth@pg12db) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     RORPSQL     00:00:59      00:00:03



GGSCI (canttowin.ontadomain as opeth@pg12db) 13>

GGSCI (canttowin.ontadomain as opeth@pg12db) 13> info RORPSQL

REPLICAT   RORPSQL   Last Started 2021-06-23 15:20   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           97138
Log Read Checkpoint  File ./dirdat/ep000000000
                     2021-06-23 15:21:41.005117  RBA 29169

All set, we now have all the processes running on both the source and the target system. If you are running Golden Gate 12.3 or above, there is no need to generate table definitions, as the metadata is carried in the trail file itself; in this demo I am using GG 19.1.

Only if you are running a GG version older than 12.3 do you need to generate the table definitions on the source system (using the defgen utility) and copy the definitions file to the DIRDEF directory on the target so that the replicat can use it; a rough sketch follows.
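
For completeness, on one of those older releases the defgen step would look roughly like this (parameter file keywords as per the classic defgen utility; file names simply reuse the ones from this demo):

-- dirprm/defgen.prm on the source
DEFSFILE ./dirdef/testtable.def
USERID opeth, PASSWORD opeth
TABLE opeth.testtable;

-- run defgen from the GG home, then copy the definitions file to the target's dirdef directory
$ ./defgen paramfile dirprm/defgen.prm
$ scp dirdef/testtable.def postgres@canttowin.ontadomain:/home/postgres/ggdirpsql/dirdef/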

Now, let’s do some testing to see if records are flowing from source to target database.

ON SOURCE:

[oracle@canttowinsec dirdef]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 23 15:21:01 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.3.0.0.0

SQL> conn opeth/opeth
Connected.
SQL> insert into testtable values (1,'hello world!');

1 row created.

SQL> select * from testtable;

      COL1 COL2
---------- --------------------
         1 hello world!

SQL>
SQL> commit;

Commit complete.

Let's see if the same record has reached the target side.

ON TARGET:

[postgres@canttowin ggdirpsql]$ psql -p 5432
psql (12.6.7)
Type "help" for help.

postgres=# \c opeth opeth
You are now connected to database "opeth" as user "opeth".

psql (12.6.7)
Type "help" for help.

opeth=# select * from testtable;
 col1 |     col2
------+--------------
    1 | hello world!
(1 row)

opeth=# 

Cool! It's pretty easy to set up this solution to migrate your data from Oracle to a PostgreSQL database.

A few things to note here:
–> You don't need to install Golden Gate for PostgreSQL on the source system; the source, where your Oracle database runs, requires Golden Gate for Oracle.
–> Only the target is where you install Golden Gate for PostgreSQL.
–> odbc.ini is the file you have to create on the target end (where your PostgreSQL server is).
–> Always set PATH and LD_LIBRARY_PATH before calling ggsci, otherwise you might get an error when starting the CLI.
–> Be very careful while creating the odbc.ini file, otherwise you will end up with some obscure errors.

Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , | Leave a Comment »

‘WARNING OGG-00552, ODBC error: SQLSTATE IM002 native database error 0. [DataDirect][ODBC lib]’ while doing data migration from Oracle to PostgreSQL using Golden Gate …

Posted by FatDBA on June 24, 2021

Hey Guys,

Just a question: have you ever received the error message below while doing a data migration from Oracle to a PostgreSQL database using Golden Gate?

GGSCI (canttowin.ontadomain) 2> dblogin sourcedb ora19topg12 userid opeth
Password:

2021-06-23 14:52:32  WARNING OGG-00552 Database operation failed: Couldn't connect to odbc. 
ODBC error: SQLSTATE IM002 native database error 0. 
[DataDirect][ODBC lib] Data source name not found and no default driver specified.
ERROR: Database operation failed: Couldn't connect to ora19topg12. 
ODBC error: SQLSTATE IM002 native database error 0. 
[DataDirect][ODBC lib] Data source name not found and no default driver specified Failed to open data source ora19topg12 for user opeth.

If yes, then here are a few things you can always go and check!

First, check whether your pg_hba.conf has an entry like the one below, so that clients can connect. The sample below means that all clients can connect to the Postgres database; commonly this file will carry more restrictive settings depending on business rules, so it is always worth looking at it when clients cannot connect to the Postgres database through the ODBC driver.

# IPv4 local connections:
host    all             all             0.0.0.0/0             md5

Second, check the listen_addresses parameter in the postgresql.conf file on your PostgreSQL host. You can set it to ‘*’ (just like below), which means that Postgres listens on all available addresses.

# - Connection Settings -

listen_addresses = '*'          # what IP address(es) to listen on;
                                # comma-separated list of addresses;
                                # defaults to 'localhost'; use '*' for all
                                # (change requires restart)

Third, after all these changes, don't forget to restart your PostgreSQL database (a change to listen_addresses requires a restart).
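
A restart can be done with pg_ctl, for example (the data directory path is just a placeholder):

$ pg_ctl -D /path/to/your/data_directory restart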

Hopefully after the above actions you will be able to resolve the problem; if not, please let me know through the comments and I will try to help!

Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

Golden Gate HEARTBEAT TABLE and its improvements in GG 19.1

Posted by FatDBA on June 23, 2021

Hi Mates,

This week I was working on a Golden Gate upgrade from 12.2 to 19.1, and noticed a few new features and improvements related to the HEARTBEAT table. If you are yet to explore it, you should do so now! It's a fantastic feature which provides interesting delay/lag information that the LAG command alone fails to provide. Starting from GG 12.2, Oracle has a built-in heartbeat table that presents some really useful information and keeps history as well.

Let me show you why it is good to have it in your GG setup. Let's quickly add the table: connect with your GG user and add the table.

GGSCI (canttowin.ontadomain as gger@dixitdb) 31> add heartbeattable
2021-06-22 14:42:57  INFO    OGG-14001  Successfully created heartbeat seed table ""gger"."GG_HEARTBEAT_SEED"".
2021-06-22 14:42:57  INFO    OGG-14089  Successfully tracking extract restart position with heartbeat table ""gger"."GG_HEARTBEAT_SEED"".
2021-06-22 14:43:04  INFO    OGG-14032  Successfully added supplemental logging for heartbeat seed table ""gger"."GG_HEARTBEAT_SEED"".
2021-06-22 14:43:04  INFO    OGG-14000  Successfully created heartbeat table ""gger"."GG_HEARTBEAT"".
2021-06-22 14:43:04  INFO    OGG-14089  Successfully tracking extract restart position with heartbeat table ""gger"."GG_HEARTBEAT"".
2021-06-22 14:43:04  INFO    OGG-14033  Successfully added supplemental logging for heartbeat table ""gger"."GG_HEARTBEAT"".
2021-06-22 14:43:04  INFO    OGG-14016  Successfully created heartbeat history table ""gger"."GG_HEARTBEAT_HISTORY"".
2021-06-22 14:43:04  INFO    OGG-14089  Successfully tracking extract restart position with heartbeat table ""gger"."GG_HEARTBEAT_HISTORY"".
2021-06-22 14:43:04  INFO    OGG-14086  Successfully disabled partitioning for heartbeat history table ""gger"."GG_HEARTBEAT_HISTORY"".
2021-06-22 14:43:04  INFO    OGG-14023  Successfully created heartbeat lag view ""gger"."GG_LAG"".
2021-06-22 14:43:04  INFO    OGG-14024  Successfully created heartbeat lag history view ""gger"."GG_LAG_HISTORY"".
2021-06-22 14:43:05  INFO    OGG-14003  Successfully populated heartbeat seed table with "DIXITDB".
2021-06-22 14:43:05  INFO    OGG-14004  Successfully created procedure ""gger"."GG_UPDATE_HB_TAB"" to update the heartbeat tables.
2021-06-22 14:43:05  INFO    OGG-14017  Successfully created procedure ""gger"."GG_PURGE_HB_TAB"" to purge the heartbeat history table.
2021-06-22 14:43:06  INFO    OGG-14005  Successfully created scheduler job ""gger"."GG_UPDATE_HEARTBEATS"" to update the heartbeat tables.
2021-06-22 14:43:06  INFO    OGG-14018  Successfully created scheduler job ""gger"."GG_PURGE_HEARTBEATS"" to purge the heartbeat history table.

Let’s check some details about it.

GGSCI (canttowin.ontadomain as gger@dixitdb) 45> info heartbeattable

HEARTBEAT table gger.gg_heartbeat exists.
HEARTBEAT table gger.gg_heartbeat_seed exists.
HEARTBEAT table gger.gg_heartbeat_history exists.
HEARTBEAT table gger.gg_heartbeat supplemental logging ENABLED.
HEARTBEAT table gger.gg_heartbeat_seed supplemental logging ENABLED.
HEARTBEAT table gger.gg_heartbeat_history partitioning DISABLED.
Frequency interval: 60 seconds.
Purge frequency interval: 1 days.
Retention time: 30 days.
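
Those intervals are not fixed; the GGSCI ALTER HEARTBEATTABLE command lets you change them, for example (the values below are arbitrary):

GGSCI (canttowin.ontadomain as gger@dixitdb) 46> alter heartbeattable, frequency 30, retention_time 7, purge_frequency 1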

Now that the table is successfully created, let's check the lag/delay statistics, both using the LAG command and through the new views created for this purpose.

GGSCI (canttowin.ontadomain as gger@dixitdb) 32> lag

Lag Information From Heartbeat Table

LAG                 AGE                 FROM                TO                  PATH
6.97s               24.24s              DHAVALDB            DIXITDB             DXQUEX ==> DXQUDP ==> DXONRP
4.82s               59.12s              DIXITDB             DHAVALDB            DXONEX ==> DXONDP ==> DXQUERP



SQL> col local_database format a10
col current_local_ts format a30
col remote_database format a10
col INCOMING_HEARTBEAT_AGE for a30
col incoming_path format a30
col incoming_lag format 999,999.999999

select LOCAL_DATABASE, CURRENT_LOCAL_TS, REMOTE_DATABASE,
INCOMING_HEARTBEAT_AGE, INCOMING_PATH, INCOMING_LAG
from GGER.GG_LAG;

LOCAL_DATA CURRENT_LOCAL_TS               REMOTE_DAT INCOMING_HEARTBEAT_AGE INCOMING_PATH                     INCOMING_LAG
---------- ------------------------------ ---------- ---------------------- ------------------------------ ---------------
DHAVALDB   22-JUN-21 06.45.42.581334 PM   DIXITDB                ##########  DXONEX ==> DXONDP ==> DXQUERP        4.754468



set pagesize 100
col local_database format a10
col heartbeat_received_ts format a30
col remote_database format a10
col incoming_path format a32
col incoming_lag format 999,999.999999

select local_database, heartbeat_received_ts, remote_database, incoming_path, incoming_lag from gger.gg_lag_history;

LOCAL_DATA HEARTBEAT_RECEIVED_TS          REMOTE_DAT INCOMING_PATH                       INCOMING_LAG
---------- ------------------------------ ---------- -------------------------------- ---------------
DHAVALDB   22-JUN-21 06.43.19.624247 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP         12.401984
DHAVALDB   22-JUN-21 06.44.11.776929 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          5.167206
DHAVALDB   22-JUN-21 06.45.10.898463 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          4.754468
DHAVALDB   22-JUN-21 06.46.10.996575 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          4.823870
DHAVALDB   22-JUN-21 06.47.13.057097 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          6.839796
DHAVALDB   22-JUN-21 06.48.13.163290 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          7.016913

Now coming back to the learnings from this upgrade, I picked up two new things about the GG heartbeat table that are specific to GG 19.1:

  1. Starting with Oracle GoldenGate 19.1, the command UPGRADE HEARTBEATTABLE is available to alter a heartbeat table that was generated by the ADD HEARTBEATTABLE command of a previous GoldenGate release. So, if you have just completed your GG upgrade, run UPGRADE HEARTBEATTABLE to add the extra columns to the heartbeat tables and lag views; these extra columns are used to track the Extract restart position.

GGSCI (canttowin.ontadomain as gger@dixitdb) 44> UPGRADE HEARTBEATTABLE

  2. The NOADDTRANDATA option of ADD HEARTBEATTABLE has been deprecated in 19.1.

Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

My favorite 5 SQLcl Features ….

Posted by FatDBA on June 13, 2021

Hi Guys,

Today's post is about a tool I have been using for the last few years and which I really like when working with Oracle databases: SQLcl, or SQL Developer Command Line. You can think of it as an advanced version of SQL*Plus, as it comes with a lot of cool and handy features/commands that are not available in the default SQL*Plus command-line interface. So, this post is about the ‘top 5 features of SQLcl that I like the most’ ….

For those who don't know what SQLcl is: it's a tool developed by Oracle's SQL Developer team, led by Jeff Smith (thanks Jeff!), and it makes DBAs' and developers' jobs easier with a range of commands and features that make it very powerful and give it the upper hand over SQL*Plus.

So let's get started with the top 5 features of SQLcl that I liked the most.

Note: All of the tests below were performed on SQLcl version 21.1.1.0, build 21.1.1.113.1704.

[oracle@canttowin bin]$ ./sql

SQLcl: Release 21.1 Production on Sat Jun 12 23:29:31 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Username? (''?) system
Password? (**********?) ********
Last Successful login time: Sat Jun 12 2021 23:29:39 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> version
Oracle SQLDeveloper Command-Line (SQLcl) version: 21.1.1.0 build: 21.1.1.113.1704

Feature 1: The ‘REPEAT’ command.
This is a pretty useful command if you want to execute a specific query a number of times and watch the results, just like ‘watch’ on Linux. It simply re-runs the last SQL statement the requested number of times at the given interval.

E.g., below I executed a SQL 5 times with a gap of 5 seconds.

SQL> select count(*) from v$session where status='ACTIVE';

   COUNT(*)
___________
         149

SQL> repeat 5 5
Running 1 of 5  @ 11:38:25.243 with a delay of 5s

   COUNT(*)
___________
89
Running 2 of 5  @ 11:38:30.251 with a delay of 5s

   COUNT(*)
___________
109
Running 3 of 5  @ 11:38:35.254 with a delay of 5s

   COUNT(*)
___________
199
Running 4 of 5  @ 11:38:40.258 with a delay of 5s

   COUNT(*)
___________
230
Running 5 of 5  @ 11:38:45.263 with a delay of 5s

   COUNT(*)
___________
409

SQL>

Feature 2: Quick DDL generation.
With SQLcl you no longer have to run DBMS_METADATA.GET_DDL to get the definition of your table, index or any other object. You can simply use the DDL command, with the syntax DDL object_name [object_type], and you get the complete object DDL.

SQL>
SQL>
SQL> ddl system.bigtab table

  CREATE TABLE "SYSTEM"."BIGTAB"
   (    "ID" NUMBER,
        "WEIGHT" NUMBER,
        "ADATE" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ;
SQL>

Feature 3: ‘Collect Object Information’
No more need to query dictionary views and run DESC to get your table stats (rows, last-analyzed date, in-memory status, comments and sample size) and table description; you can get all of it with the single command INFORMATION (or INFO).

If you want more details, such as histograms, low/high values and the number of distinct values per column, the INFO+ command presents those too.

SQL> information system.bigtab
TABLE: BIGTAB
         LAST ANALYZED:2021-06-12 23:45:33.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         INMEMORY     :DISABLED
         COMMENTS     :This is a table for testing purposes

Columns
NAME         DATA TYPE   NULL  DEFAULT    COMMENTS
 ID          NUMBER      Yes
 WEIGHT      NUMBER      Yes
 ADATE       DATE        Yes

SQL>


SQL> info+ system.bigtab
TABLE: BIGTAB
         LAST ANALYZED:2021-06-20 13:41:19.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         INMEMORY     :DISABLED
         COMMENTS     :This is a table for testing purposes

Columns
NAME         DATA TYPE   NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM
 ID          NUMBER      Yes                  22                     198                   1000          FREQUENCY
 WEIGHT      NUMBER      Yes                  -2147337077           2147453933            67310          NONE
 ADATE       DATE        Yes                  2018.09.16.23.34.01   2021.06.12.23.24.14   66224          NONE

Feature 4: ‘CTAS easy and quick’
CTAS is a very useful and frequently used command because of its simplicity; with SQLcl you no longer have to type out the complete syntax to create a new table from an existing one, just use the CTAS command.

SQL> ctas bigtab bigtable666

  CREATE TABLE "SYSTEM"."BIGTABLE666"
   (    "ID",
        "WEIGHT",
        "ADATE",
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM"
 as
select * from BIGTAB
SQL>

Feature 5: ‘Extended and more descriptive AUTOTRACE’.
This is by far one of the best features of SQLcl in my opinion, especially for anyone who has to tune and tweak databases every now and then. It provides far more detailed statistics (37 different stats overall) for any SQL statement once you switch AUTOTRACE on. I am a big fan of this feature!

SQL>
SQL> set autotrace on
Autotrace Enabled
Shows the execution plan as well as statistics of the statement.
SQL>

SQL> select * from system.bigtab where ID =588;
...
.....

67 rows selected.

Explain Plan
-----------------------------------------------------------
                                                              PLAN_TABLE_OUTPUT
_______________________________________________________________________________
Plan hash value: 441133017

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |    76 |  2660 |    63   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BIGTAB |    76 |  2660 |    63   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=588)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
-----------------------------------------------------------
               1  CPU used by this session
               1  CPU used when call started
               3  DB time
              47  Requests to/from client
              46  SQL*Net roundtrips to/from client
               4  buffer is not pinned count
            1329  bytes received via SQL*Net from client
           91534  bytes sent via SQL*Net to client
               5  calls to get snapshot scn: kcmgss
              11  calls to kcmgcs
             302  consistent gets
             302  consistent gets from cache
             302  consistent gets pin
             302  consistent gets pin (fastpath)
               2  enqueue releases
               2  enqueue requests
               3  execute count
         2473984  logical read bytes from cache
             293  no work - consistent read gets
              49  non-idle wait count
               3  opened cursors cumulative
               1  opened cursors current
               2  parse count (hard)
               3  parse count (total)
               1  parse time cpu
               2  parse time elapsed
              12  process last non-idle time
               5  recursive calls
               1  recursive cpu usage
             302  session logical reads
               1  sorts (memory)
            2010  sorts (rows)
             293  table scan blocks gotten
           86905  table scan disk non-IMC rows gotten
           86905  table scan rows gotten
               2  table scans (short tables)
              47  user calls

Hope It Helped!
Prashant Dixit

Posted in Advanced, Basics | Tagged: , | 1 Comment »

Display SQLID for a statement, no need to use Dynamic Views …

Posted by FatDBA on March 22, 2021

Hi Guys,

I have been busy lately, so I couldn't write much, but I have a few things that I will soon be sharing as blog posts, so stay tuned! Alright, this one is about a feature that was introduced in Oracle 18c, but which I first tried on 19c while doing a recent migration (a few stories from that episode are coming too!): displaying the SQL_ID of the SQL/PLSQL statement you just executed.

It's pretty simple and a really cool feature: you no longer need to query V$SQL, V$SQLAREA or even V$SESSION to get the SQL_ID of the last SQL. You only need to run ‘SET FEEDBACK ON SQL_ID‘ and that's it; the statement is executed and the SQL_ID is displayed at the end.

Let me show how to get that, I am doing this test on Oracle 19.3.0.0.0.

SQL>
SQL> SET FEEDBACK ON SQL_ID
SQL>
SQL> select name, open_mode, DBID, database_role from v$database;

NAME      OPEN_MODE                  DBID DATABASE_ROLE
--------- -------------------- ---------- ----------------
DIXITDB   READ WRITE            805967680 PRIMARY

1 row selected.

SQL_ID: buv1x6g781ug2
SQL>

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: | Leave a Comment »

 