Tales From A Lazy Fat DBA

It's all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Archive for the ‘Advanced’ Category

Advanced/Troubleshooting/Error-Bug Fixing

Migration assessment report, possible with ora2pg ?

Posted by FatDBA on July 3, 2021

Hi Guys,

Last week someone asked me if it's possible to generate a migration assessment report (of any sort) when using ora2pg for a migration. The entire team was coming from an EDB PostgreSQL migration where they had used EDB's migration toolkit, which can generate something similar, but for their current project ora2pg was finalized as the migration tool for an Oracle to vanilla PostgreSQL migration.

And the answer to their question was YES, it's possible to generate a migration assessment report with ora2pg too, which is very useful as it assesses the potential migration problems and difficulties. It allocates a cost per object type and also provides other details like invalid object counts, the total number of objects per type, comments and more description about the objects, etc.

It inspects all database objects, functions and stored procedures to detect whether there are objects or PL/SQL code that cannot be automatically converted by the tool. At the end of the report it calculates the overall cost (the sum of all costs) and assigns an approximate number of days, or amount of time, to perform the migration.

The default time per "cost evaluation unit" is 5 minutes, and the output can be generated in text (default), html or csv format.
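
If the default 5-minute unit doesn't match your team's velocity, both the unit value and the output format can be changed from the command line. A minimal sketch, assuming the standard ora2pg switches --cost_unit_value and --dump_as_csv (worth double-checking against the options list of your ora2pg build):

-- Text report (default format) with a 10 minute cost evaluation unit instead of 5
$ ora2pg -t show_report --estimate_cost --cost_unit_value 10 -c config/ora2pg.conf > /tmp/orapgtestmigr/ora2pgreport.txt

-- Same assessment dumped as CSV, handy if you want to track the numbers per schema in a spreadsheet
$ ora2pg -t show_report --estimate_cost --dump_as_csv -c config/ora2pg.conf > /tmp/orapgtestmigr/ora2pgreport.csv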

Let's generate a sample report. First, let me show the database version as reported by ora2pg's SHOW_VERSION action, and next I will generate the report for a particular schema of an Oracle database running on 18c.

$
$ ora2pg -t SHOW_VERSION -c config/ora2pg.conf
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0


-- Here the --estimate_cost switch activates the migration cost evaluation along with SHOW_REPORT

$ ora2pg -t show_report  --estimate_cost -c config/ora2pg.conf --dump_as_html > /tmp/orapgtestmigr/ora2pgreport.html

-------------------------------------------------------------------------------
Ora2Pg v20.0 - Database Migration Report
-------------------------------------------------------------------------------
Version  Oracle Database 18c Enterprise Edition Release 18.0.0.0.0
Schema   DIXITSCH
Size     88.19 MB

Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , , | 1 Comment »

Single command to prepare a DB for a Data Guard configuration in Oracle 21c – PREPARE DATABASE FOR DATA GUARD WITH

Posted by FatDBA on June 28, 2021

Hi Guys,

Another cool feature I identified while doing a POC on an Oracle 21c database!
Starting with Oracle 21c, you can now prepare a primary database for a Data Guard broker configuration using a single command, PREPARE DATABASE FOR DATA GUARD WITH <options>. This command does all the work for you, i.e. it prepares the database, enables force logging, sets the required parameters, creates standby redo log files, enables Flashback and enables archivelog mode.

The command below prepares the database named dixitdb for use as a primary database in a DG broker configuration.

[oracle@canttowin ~]$ dgmgrl
DGMGRL for Linux: Release 21.0.0.0.0 - Production on Sat Jun 26 11:10:11 2021
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL>

DGMGRL> PREPARE DATABASE FOR DATA GUARD WITH DB_UNIQUE_NAME IS dixitdb;
DB_RECOVERY_FILE_DEST IS "$ORACLE_BASE_HOME/dbs/"
DB_RECOVERY_FILE_DEST_SIZE is "400G"
DG_BROKER_CONFIG_FILE1 IS "$ORACLE_HOME/dbs/testfile.dat"
DG_BROKER_CONFIG_FILE2 IS "$ORACLE_HOME/dbs/testfile2.dat";
Preparing database "dixitdb" for Data Guard.
Creating server parameter file (SPFILE) from initialization parameter memory values.
Database must be restarted after creating the server parameter (SPFILE).
Shutting down database "dixitdb".
Database closed.
Database dismounted.
ORACLE instance shut down. Starting database "dixitdb" to mounted mode.
ORACLE instance started.
Database mounted. Server parameter file (SPFILE) is "ORACLE_BASE_HOME/dbs/spdixitdb.ora".
Initialization parameter DB_UNIQUE_NAME set to 'dixitdb'.
Initialization parameter DB_FILES set to 1024.
Initialization parameter LOG_BUFFER set to 401355151.
Primary database must be restarted after setting static initialization parameters. Primary database must be restarted to enable archivelog mode.
Shutting down database "dixitdb".
Database dismounted.
ORACLE instance shut down.
Starting database "dixitdb" to mounted mode.
ORACLE instance started.
Database mounted.
Initialization parameter DB_FLASHBACK_RETENTION_TARGET set to 120.
Initialization parameter DB_BLOCK_CHECKSUM set to 'TYPICAL'.
Initialization parameter DB_LOST_WRITE_PROTECT set to 'TYPICAL'.
Initialization parameter PARALLEL_THREADS_PER_CPU set to 1.
Removing RMAN archivelog deletion policy 1.
Removing RMAN archivelog deletion policy 2.
RMAN configuration archivelog deletion policy set to SHIPPED TO ALL STANDBY.
Initialization parameter DB_RECOVERY_FILE_DEST_SIZE set to '200G'.
Initialization parameter DB_RECOVERY_FILE_DEST set to 'ORACLE_BASE_HOME/dbs/'. Initialization parameter DG_BROKER_START set to FALSE.
Initialization parameter DG_BROKER_CONFIG_FILE1 set to 'ORACLE_HOME/dbs/testfile.dat'.
Initialization parameter DG_BROKER_CONFIG_FILE2 set to 'ORACLE_HOME/dbs/testfile2.dat'.
LOG_ARCHIVE_DEST_n initialization parameter already set for local archival.
Initialization parameter LOG_ARCHIVE_DEST_2 set to 'location=use_db_recovery_file_dest valid_for=(all_logfiles, all_roles)'.
Initialization parameter LOG_ARCHIVE_DEST_STATE_2 set to 'Enable'.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'MANUAL'.
Standby log group 4 will be dropped because it was not configured correctly.
Standby log group 3 will be dropped because it was not configured correctly.
Adding standby log group size 26214400 and assigning it to thread 1.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'AUTO'.
Initialization parameter DG_BROKER_START set to TRUE.
Database set to FORCE LOGGING. Database set to ARCHIVELOG.
Database set to FLASHBACK ON.
Database opened.
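
Once the command completes, it is worth cross-checking from SQL*Plus that the database really ended up in the state the broker messages describe (force logging, archivelog and flashback all on). A quick sanity check, using standard v$database columns:

SQL> select log_mode, force_logging, flashback_on from v$database;

-- Expect ARCHIVELOG / YES / YES, matching the "Database set to ..." messages above.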


Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , | Leave a Comment »

Getting SQL Net message from dblink, have you tried DRIVING_SITE hint ?

Posted by FatDBA on June 27, 2021

Hi Guys,

Last week I was asked to take a look into a performance problem where the customer DBA reported excessive 'SQL*Net message from dblink' waits in the system. As I didn't have direct access to the box, I asked him to share both AWR and ASH reports for the time when they observed the problem. And yes, he was right: there were a few waits on this event, but with very high average/single-execution elapsed times. Next I went through the ASH data for the period and was able to identify the SQL (a SELECT) that was frequently waiting on this wait event.
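
For reference, below is roughly the kind of ASH query I used to pin down the offending SQL; a simple sketch against v$active_session_history (use dba_hist_active_sess_history instead if the window you are after has already aged out of memory):

SQL> select sql_id, count(*) samples
  2    from v$active_session_history
  3   where event = 'SQL*Net message from dblink'
  4     and sample_time > sysdate - 1/24
  5   group by sql_id
  6   order by samples desc;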

About this event: as you guys know, 'SQL*Net message from dblink' is classed as a Network wait event and happens when some part of the final dataset comes from a remote database accessed via a DB link. Alright, so we know the problem and we have the problematic SQL; let's simulate the problem and the solution.

I will first create the DB link that I will use to access the remote data, which lives in a different database on a different server/host, and next I will create sample tables with some test data on both Site A and Site B.

Reference used:
BIGTAB: Table on Site 1, local site.
BIGTAB2: Table on site 2, remote location

SQL>
SQL> CREATE DATABASE LINK dixitdlink1
  2     CONNECT TO dixit IDENTIFIED BY oracle90
   USING '(DESCRIPTION=
            (ADDRESS = (PROTOCOL = TCP)(HOST = canttowinsec.quebecdomain)(PORT = 1521))
            (CONNECT_DATA=(SERVICE_NAME=dhavaldb.quebecdomain))
          )';  

Database link created.

SQL>


-- On source database, on local host.
CREATE TABLE BIGTAB (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.

INSERT INTO BIGTAB (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.


SQL> delete from prashant.BIGTAB where rownum < 60000;

59999 rows deleted.




-- This I have created on target database, on a remote host.
CREATE TABLE BIGTAB2 (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.


INSERT INTO BIGTAB2 (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.

Alright, the test data is created on both sites, and in order to mimic the exact case I faced, I have intentionally deleted 59999 rows from the table on site 1. So now we have table BIGTAB with 7735 records locally, and on the remote database a table named BIGTAB2 with 67447 rows. Let's execute a few queries and do an inner join on the two tables.

Test 1: With no hint or tuning/tweaking of any kind. The optimizer brings all the matching data from the remote site over to the local site and then performs the join locally; the same behaviour results if we use the hint and name the local (smaller) table, i.e. /*+ DRIVING_SITE(BIGTAB) */.

SQL> select * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
........
............
...............
.................


Execution Plan
----------------------------------------------------------
Plan hash value: 2705338834

----------------------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN         |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL| BIGTAB  |  6948 |   237K|    68   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | BIGTAB2 | 19930 |   681K|    76   (0)| 00:00:01 | DIXIT~ | R->S |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ADATE"="DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT "ID","WEIGHT","DATEOFREGISTER" FROM "DIXIT"."BIGTAB2" "BIGTAB2"
       (accessing 'DIXITDLINK1.ONTADOMAIN' )


Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

So, this is what was happening that day, on that system, with that query waiting on 'SQL*Net message from dblink'. Now this is the right time to introduce the DRIVING_SITE hint, something I have been experimenting with for a long time, maybe since my 9i days.

This hint is useful for distributed queries as it forces the optimizer to use a particular site as the driving site; in other words, it instructs the optimizer to execute the query at a site other than the one the database would have chosen. In the previous test, all rows from the remote host that matched the condition were sent to the local site, and the join was executed locally.

Now I am going to use DRIVING_SITE (naming the remote table) to force the optimizer to perform the join at the remote site, so the query is executed there and only the result set is returned to the local site.

In short, you should point the DRIVING_SITE hint at the table on the site where the huge table resides!

Test 2: With the DRIVING_SITE hint on remote table BIGTAB2, which is big and has 67447 records.

SQL> select /*+ DRIVING_SITE(BIGTAB2) */ * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
.....
.......
........
..........



Execution Plan
----------------------------------------------------------
Plan hash value: 2214131741

--------------------------------------------------------------------------------------------------
| Id  | Operation              | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN             |         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|   2 |   REMOTE               | BIGTAB  |    82 |  2870 |     2   (0)| 00:00:01 |      ! | R->S |
|   3 |   TABLE ACCESS FULL    | BIGTAB2 | 53126 |  1815K|    68   (0)| 00:00:01 | DHAVA~ |      |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A2"."ADATE"="A1"."DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   2 - SELECT "ID","WEIGHT","ADATE" FROM "PRASHANT"."BIGTAB" "A2" (accessing '!' )


Note
-----
   - fully remote statement
   - dynamic statistics used: dynamic sampling (level=2)

Look at the cost, rows processed and bytes (data processed): all of them are reduced considerably. Of course you don't see much of an elapsed-time difference in this example, as the dataset is small and the query is simple, but try it yourself on a bigger, busier system and see the magic!

Hope It Helped!
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »

Running on Oracle 19c ? – Why enable full supplemental logging when SUBSET DATABASE REPLICATION is there

Posted by FatDBA on June 26, 2021

Hi Folks,

Today's post is about one of the cool features added in Oracle 19c; it's very useful and I recently discovered it while working on a Golden Gate setup.

This is about supplemental logging, which makes the database capture extra logging for redo-based applications such as a logical standby or Golden Gate. It may require that additional columns be logged in the redo log files, which puts extra stress and overhead on the database. That looks like a poor deal, especially if you are doing only a partial Golden Gate replication, I mean for only a few tables.

Starting from Oracle 19c, this is something you can control. With this version you can enable fine-grained supplemental logging (SUBSET DATABASE REPLICATION), which cuts a lot of the logging overhead: instead of capturing supplemental data for the entire database and all tables, the extra logging is limited to the subset of tables involved in the replication. So, if you are running Oracle 19c, have the COMPATIBLE parameter set to 19.0.0.0.0 or greater and have set ENABLE_GOLDENGATE_REPLICATION to TRUE, you can use this option.
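
A quick way to confirm both prerequisites before trying it, nothing more than standard parameter lookups:

SQL> show parameter compatible
SQL> show parameter enable_goldengate_replication

-- ENABLE_GOLDENGATE_REPLICATION is a dynamic parameter, so if it is still FALSE you can switch it on right away
SQL> alter system set enable_goldengate_replication = true scope=both;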

Note: I know of a few products where 'Supplemental Log Data Subset Database Replication' doesn't work or isn't supported, e.g. HVR 6.

SQL> select BANNER_FULL from v$version;

BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          NO


SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          YES

You can disable it anytime using below mentioned steps/commands.

SQL>
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL>  ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          NO

Hope It Helps!
Prashant Dixit

Posted in Advanced | Tagged: , , | 2 Comments »

‘WARNING OGG-00552, ODBC error: SQLSTATE IM002 native database error 0. [DataDirect][ODBC lib]’ while doing data migration from Oracle to PostgreSQL using Golden Gate …

Posted by FatDBA on June 24, 2021

Hey Guys,

Just a question – have you ever received the below error message while doing a data migration from Oracle to PostgreSQL using Golden Gate ?

GGSCI (canttowin.ontadomain) 2> dblogin sourcedb ora19topg12 userid opeth
Password:

2021-06-23 14:52:32  WARNING OGG-00552 Database operation failed: Couldn't connect to odbc. 
ODBC error: SQLSTATE IM002 native database error 0. 
[DataDirect][ODBC lib] Data source name not found and no default driver specified.
ERROR: Database operation failed: Couldn't connect to ora19topg12. 
ODBC error: SQLSTATE IM002 native database error 0. 
[DataDirect][ODBC lib] Data source name not found and no default driver specified Failed to open data source ora19topg12 for user opeth.

If yes, then here are a few things you can always go and check!

First, check if your pg_hba.conf has an entry like the one below, so that clients can connect. The sample below simply means that all clients can connect to the Postgres database. Commonly you would see a more restrictive setting depending on business rules, so it is always worth having a look at this file when clients cannot connect to the Postgres database through the ODBC driver.

# IPv4 local connections:
host    all             all             0.0.0.0/0             md5

Second, check the listen_addresses parameter in the postgresql.conf file on your PostgreSQL host. You can set it to '*' (just like below), which means that Postgres listens on all available addresses.

# - Connection Settings -

listen_addresses = '*'          # what IP address(es) to listen on;
                                # comma-separated list of addresses;
                                # defaults to 'localhost'; use '*' for all
                                # (change requires restart)

Third, after all these changes, don't forget to restart your PostgreSQL database (as the comment in postgresql.conf notes, changing listen_addresses requires a restart).
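
For example, on a typical single-instance Linux install either of the below would do it (the data directory path and service name are just assumptions here, adjust them for your environment):

# as the postgres OS user, using pg_ctl directly
$ pg_ctl restart -D /var/lib/pgsql/12/data

# or through systemd on most distributions
$ sudo systemctl restart postgresql-12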

Hopefully after all of the above actions you will be able to resolve the problem; if not, please let me know through the comments and I can try to help!

Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

Golden Gate HEARTBEAT TABLE and its improvements in GG 19.1

Posted by FatDBA on June 23, 2021

Hi Mates,

This week I was working on a Golden Gate upgrade from 12.2 to 19.1 and noticed a few new features & improvements related to the HEARTBEAT TABLE. If you are yet to explore it, you should do it now! It's a fantastic feature which provides some interesting delay/wait related information that the LAG command fails to provide. Starting from GG 12.2, Oracle has a built-in heartbeat table which presents some really cool information and keeps historical data too.

Let me show you why it is good to have it in your GG setup. Let's quickly add the table: connect with your GG user and add it.

GGSCI (canttowin.ontadomain as gger@dixitdb) 31> add heartbeattable
2021-06-22 14:42:57  INFO    OGG-14001  Successfully created heartbeat seed table ""gger"."GG_HEARTBEAT_SEED"".
2021-06-22 14:42:57  INFO    OGG-14089  Successfully tracking extract restart position with heartbeat table ""gger"."GG_HEARTBEAT_SEED"".
2021-06-22 14:43:04  INFO    OGG-14032  Successfully added supplemental logging for heartbeat seed table ""gger"."GG_HEARTBEAT_SEED"".
2021-06-22 14:43:04  INFO    OGG-14000  Successfully created heartbeat table ""gger"."GG_HEARTBEAT"".
2021-06-22 14:43:04  INFO    OGG-14089  Successfully tracking extract restart position with heartbeat table ""gger"."GG_HEARTBEAT"".
2021-06-22 14:43:04  INFO    OGG-14033  Successfully added supplemental logging for heartbeat table ""gger"."GG_HEARTBEAT"".
2021-06-22 14:43:04  INFO    OGG-14016  Successfully created heartbeat history table ""gger"."GG_HEARTBEAT_HISTORY"".
2021-06-22 14:43:04  INFO    OGG-14089  Successfully tracking extract restart position with heartbeat table ""gger"."GG_HEARTBEAT_HISTORY"".
2021-06-22 14:43:04  INFO    OGG-14086  Successfully disabled partitioning for heartbeat history table ""gger"."GG_HEARTBEAT_HISTORY"".
2021-06-22 14:43:04  INFO    OGG-14023  Successfully created heartbeat lag view ""gger"."GG_LAG"".
2021-06-22 14:43:04  INFO    OGG-14024  Successfully created heartbeat lag history view ""gger"."GG_LAG_HISTORY"".
2021-06-22 14:43:05  INFO    OGG-14003  Successfully populated heartbeat seed table with "DIXITDB".
2021-06-22 14:43:05  INFO    OGG-14004  Successfully created procedure ""gger"."GG_UPDATE_HB_TAB"" to update the heartbeat tables.
2021-06-22 14:43:05  INFO    OGG-14017  Successfully created procedure ""gger"."GG_PURGE_HB_TAB"" to purge the heartbeat history table.
2021-06-22 14:43:06  INFO    OGG-14005  Successfully created scheduler job ""gger"."GG_UPDATE_HEARTBEATS"" to update the heartbeat tables.
2021-06-22 14:43:06  INFO    OGG-14018  Successfully created scheduler job ""gger"."GG_PURGE_HEARTBEATS"" to purge the heartbeat history table.

Let’s check some details about it.

GGSCI (canttowin.ontadomain as gger@dixitdb) 45> info heartbeattable

HEARTBEAT table gger.gg_heartbeat exists.
HEARTBEAT table gger.gg_heartbeat_seed exists.
HEARTBEAT table gger.gg_heartbeat_history exists.
HEARTBEAT table gger.gg_heartbeat supplemental logging ENABLED.
HEARTBEAT table gger.gg_heartbeat_seed supplemental logging ENABLED.
HEARTBEAT table gger.gg_heartbeat_history partitioning DISABLED.
Frequency interval: 60 seconds.
Purge frequency interval: 1 days.
Retention time: 30 days.
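
These defaults (update frequency, purge frequency and retention) can be changed later without dropping the tables; if I remember the GGSCI syntax right it is the ALTER HEARTBEATTABLE command, something like the line below, but please verify the exact syntax in the reference guide for your release before running it:

GGSCI (canttowin.ontadomain as gger@dixitdb) 46> ALTER HEARTBEATTABLE, FREQUENCY 120, RETENTION_TIME 15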

Now that the table is successfully created, let's check the lag/delay stats, both using the LAG command and through the new views created for the purpose.

GGSCI (canttowin.ontadomain as gger@dixitdb) 32> lag

Lag Information From Heartbeat Table

LAG                 AGE                 FROM                TO                  PATH
6.97s               24.24s              DHAVALDB            DIXITDB             DXQUEX ==> DXQUDP ==> DXONRP
4.82s               59.12s              DIXITDB             DHAVALDB            DXONEX ==> DXONDP ==> DXQUERP



SQL> col local_database format a10
col current_local_ts format a30
col remote_database format a10
col INCOMING_HEARTBEAT_AGE for a30
col incoming_path format a30
col incoming_lag format 999,999.999999

select LOCAL_DATABASE, CURRENT_LOCAL_TS, REMOTE_DATABASE,
INCOMING_HEARTBEAT_AGE, INCOMING_PATH, INCOMING_LAG
from GGER.GG_LAG;

LOCAL_DATA CURRENT_LOCAL_TS               REMOTE_DAT INCOMING_HEARTBEAT_AGE INCOMING_PATH                     INCOMING_LAG
---------- ------------------------------ ---------- ---------------------- ------------------------------ ---------------
DHAVALDB   22-JUN-21 06.45.42.581334 PM   DIXITDB                ##########  DXONEX ==> DXONDP ==> DXQUERP        4.754468



set pagesize 100
col local_database format a10
col heartbeat_received_ts format a30
col remote_database format a10
col incoming_path format a32
col incoming_lag format 999,999.999999

select local_database, heartbeat_received_ts, remote_database, incoming_path, incoming_lag from gger.gg_lag_history;

LOCAL_DATA HEARTBEAT_RECEIVED_TS          REMOTE_DAT INCOMING_PATH                       INCOMING_LAG
---------- ------------------------------ ---------- -------------------------------- ---------------
DHAVALDB   22-JUN-21 06.43.19.624247 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP         12.401984
DHAVALDB   22-JUN-21 06.44.11.776929 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          5.167206
DHAVALDB   22-JUN-21 06.45.10.898463 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          4.754468
DHAVALDB   22-JUN-21 06.46.10.996575 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          4.823870
DHAVALDB   22-JUN-21 06.47.13.057097 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          6.839796
DHAVALDB   22-JUN-21 06.48.13.163290 PM   DIXITDB     DXONEX ==> DXONDP ==> DXQUERP          7.016913

Now coming back to the learnings from this upgrade, I have learned two new things about the GG heartbeat table which are specific to GG version 19.1

  1. Starting with Oracle Golden Gate 19.1, the command UPGRADE HEARTBEATTABLE is available to alter a heartbeat table that was generated by the ADD HEARTBEATTABLE command of a previous Oracle Golden Gate release. So, if you have just completed your GG upgrade, run UPGRADE HEARTBEATTABLE to add the extra columns for the heartbeat tables and lag views; these extra columns are used to track the Extract restart position.

     GGSCI (canttowin.ontadomain as gger@dixitdb) 44> UPGRADE HEARTBEATTABLE

  2. The NOADDTRANDATA option of ADD HEARTBEATTABLE has been deprecated in 19.1.

Hope It Helped!
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

Something cool was introduced in Oracle Golden Gate 12.2 …

Posted by FatDBA on June 16, 2021

Hi Guys,

Last week someone from my team contacted me about a problem he was facing with Golden Gate: the replicat process was abending at the target end with the error 'Error mapping TEST.TABLE1 to TEST.TABLE1'. He was using Golden Gate version 12.1.2.0. During the initial investigation we found that the table structure at the source and the target was the same except for the column order/positions, just like below.

Source
-----------------------
    TEST.TABLE1
ID Varchar2(10)
Name Varchar2(10)
Codes Numeric(10)

Target:
------------------------
    TEST.TABLE1
ID Varchar2(10)
Codes Numeric(10)
Name Varchar2(10)

As GG (in that version) works with the column order and not explicitly with the column names, this led to the error at the target while doing the INSERTs into the table. We knew this is now handled automatically by GG, but weren't sure in which version it got fixed, so we immediately started looking through the newer releases and their documentation, and luckily found the release notes for Golden Gate 12.2, which clearly explain that there is no longer any need to generate DEFGEN files, or to use the ASSUMETARGETDEFS or SOURCEDEFS parameters (in some cases).

Starting with Golden Gate version 12.2 (released in 2016), the trail files contain the metadata (TDRs, or table definition records), so GG knows table details like column order/positions, data types and column lengths before applying the changes to the target table. So now, even if you add a new table, drop a column, add a new column, rename an existing column or change a column's datatype length, you don't need to worry; Golden Gate will take care of it on its own.
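
To give an idea of what that buys you, a replicat parameter file on the target can then be as minimal as the sketch below, with no SOURCEDEFS or ASSUMETARGETDEFS line at all (the process name and credential alias are made up for illustration):

REPLICAT rtest1
USERIDALIAS gg_target
MAP TEST.TABLE1, TARGET TEST.TABLE1;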

So, about the solution, we had two options to choose from – upgrade GG from 12.1 to 12.2, or else use the FORMAT RELEASE 12.2 parameter, with which there is no need to use any of the above mentioned parameters. We chose to upgrade the GG version to 12.2 and that fixed the problem!

Hope It Helped
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

My favorite 5 SQLcl Features ….

Posted by FatDBA on June 13, 2021

Hi Guys,

Today's post is about a tool that I have been using for the last few years and which I really like when it comes to working on Oracle databases. It is called SQLcl, or SQL Developer Command Line; you can think of it as an advanced version of SQL Plus, as it comes with a lot of cool and handy features/commands that are not available in the default SQL Plus command line interface. So, this post is about the 'TOP 5 features of SQLcl which I like the most' ….

For those of you who don't know what SQLcl is: it is a tool developed by Oracle's SQL Developer team, led by Jeff Smith (thanks Jeff for that!), and it makes DBAs' & developers' jobs easier with a range of commands and features that make it very powerful and give it the upper hand over SQL Plus.

So let's get started with the top 5 features of SQLcl that I like the most.

Note: All of the below tests were performed on SQLcl version 21.1.1.0 build: 21.1.1.113.1704

[oracle@canttowin bin]$ ./sql

SQLcl: Release 21.1 Production on Sat Jun 12 23:29:31 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Username? (''?) system
Password? (**********?) ********
Last Successful login time: Sat Jun 12 2021 23:29:39 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> version
Oracle SQLDeveloper Command-Line (SQLcl) version: 21.1.1.0 build: 21.1.1.113.1704

Feature 1: ‘REPEAT’ command.
This is a pretty useful command if you want to execute a specific query a number of times and watch the results. It's just like what 'watch' does on Linux.
It simply re-executes the SQL command and prints the output the requested number of times at a given interval.

For example, below I have executed a SQL statement 5 times with a gap of 5 seconds.

SQL> select count(*) from v$session where status='ACTIVE';

   COUNT(*)
___________
         149

SQL> repeat 5 5
Running 1 of 5  @ 11:38:25.243 with a delay of 5s

   COUNT(*)
___________
89
Running 2 of 5  @ 11:38:30.251 with a delay of 5s

   COUNT(*)
___________
109
Running 3 of 5  @ 11:38:35.254 with a delay of 5s

   COUNT(*)
___________
199
Running 4 of 5  @ 11:38:40.258 with a delay of 5s

   COUNT(*)
___________
230
Running 5 of 5  @ 11:38:45.263 with a delay of 5s

   COUNT(*)
___________
409

SQL>

Feature 2: ‘Quick DDL Generation’
Now with SQLcl you don't have to run DBMS_METADATA.GET_DDL to get the definition of your table, index or any other object. You can simply use the DDL command, with the syntax DDL <object_name> [<object_type>], and you will have the complete object DDL.

SQL>
SQL>
SQL> ddl system.bigtab table

  CREATE TABLE "SYSTEM"."BIGTAB"
   (    "ID" NUMBER,
        "WEIGHT" NUMBER,
        "ADATE" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ;
SQL>

Feature 3: ‘Collect Object Information’
No need any more to query dynamic views and run DESC commands to get your table stats (rows, analyzed date, in-memory status, comments and sample size) and table description. You can get all of this with a single command: INFORMATION, or INFO for short.

If you want more details, such as the histograms on your table's columns, the INFO+ command presents those too.

SQL> information system.bigtab
TABLE: BIGTAB
         LAST ANALYZED:2021-06-12 23:45:33.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         INMEMORY     :DISABLED
         COMMENTS     :This is a table for testing purposes

Columns
NAME         DATA TYPE   NULL  DEFAULT    COMMENTS
 ID          NUMBER      Yes
 WEIGHT      NUMBER      Yes
 ADATE       DATE        Yes

SQL>


SQL> info+ system.bigtab
TABLE: BIGTAB
         LAST ANALYZED:2021-06-20 13:41:19.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         INMEMORY     :DISABLED
         COMMENTS     :This is a table for testing purposes

Columns
NAME         DATA TYPE   NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM
 ID          NUMBER      Yes                  22                     198                   1000          FREQUENCY
 WEIGHT      NUMBER      Yes                  -2147337077           2147453933            67310          NONE
 ADATE       DATE        Yes                  2018.09.16.23.34.01   2021.06.12.23.24.14   66224          NONE

Feature 4: ‘CTAS easy and quick’
CTAS is a very useful command and is quite frequently used because of its simplicity; now with SQLcl you don't have to type the complete CREATE TABLE ... AS SELECT syntax to create a new table from an existing one, just CTAS <existing_table> <new_table>.

SQL> ctas bigtab bigtable666

  CREATE TABLE "SYSTEM"."BIGTABLE666"
   (    "ID",
        "WEIGHT",
        "ADATE",
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM"
 as
select * from BIGTAB
SQL>

Feature 5: ‘Extended and more descriptive AUTOTRACING’.
This is by far one of the best features of SQLcl in my opinion, especially for someone who has to tune and tweak databases every now and then. It provides far more detailed statistics (overall 37 different stats) for any SQL statement once you set AUTOTRACE on. I am a big fan of this feature!

SQL>
SQL> set autotrace on
Autotrace Enabled
Shows the execution plan as well as statistics of the statement.
SQL>

SQL> select * from system.bigtab where ID =588;
...
.....

67 rows selected.

Explain Plan
-----------------------------------------------------------
                                                              PLAN_TABLE_OUTPUT
_______________________________________________________________________________
Plan hash value: 441133017

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |    76 |  2660 |    63   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BIGTAB |    76 |  2660 |    63   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=588)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
-----------------------------------------------------------
               1  CPU used by this session
               1  CPU used when call started
               3  DB time
              47  Requests to/from client
              46  SQL*Net roundtrips to/from client
               4  buffer is not pinned count
            1329  bytes received via SQL*Net from client
           91534  bytes sent via SQL*Net to client
               5  calls to get snapshot scn: kcmgss
              11  calls to kcmgcs
             302  consistent gets
             302  consistent gets from cache
             302  consistent gets pin
             302  consistent gets pin (fastpath)
               2  enqueue releases
               2  enqueue requests
               3  execute count
         2473984  logical read bytes from cache
             293  no work - consistent read gets
              49  non-idle wait count
               3  opened cursors cumulative
               1  opened cursors current
               2  parse count (hard)
               3  parse count (total)
               1  parse time cpu
               2  parse time elapsed
              12  process last non-idle time
               5  recursive calls
               1  recursive cpu usage
             302  session logical reads
               1  sorts (memory)
            2010  sorts (rows)
             293  table scan blocks gotten
           86905  table scan disk non-IMC rows gotten
           86905  table scan rows gotten
               2  table scans (short tables)
              47  user calls

Hope It Helped!
Prashant Dixit

Posted in Advanced, Basics | Tagged: , | 3 Comments »

Dockers – OCI runtime create failed: container_linux.go:349: starting container process caused – process_linux.go:449: container init caused \ – write /proc/self/attr/keycreate: permission denied\’

Posted by FatDBA on September 29, 2020

Hi Guys,

Today I will discuss a problem I encountered while starting PostgreSQL in a Docker container. This was the very first time we tried to run any container on this machine. The error says something like 'OCI runtime create failed: container_linux.go:349', followed by "process_linux.go:449: container init caused: permission denied". I was totally dumbstruck, as the error doesn't give any clue where and what is failing.

The exact error is given below; this is a case simulated on my personal sandbox, but with the exact same error and issue.


[root@fatdba-doccass ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
postgres            latest              817f2d3d51ec        4 days ago          314MB
[root@fatdba-doccass ~]# docker run --name postgres -e POSTGRES_PASSWORD=postgres -d postgres
c90d92ea603044d72ffed2449e550bfd39d328beacb6a55e17c4515861f86140
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused 
\"write /proc/self/attr/keycreate: permission denied\"": unknown.
 


I remembered we had fixed something similar (not exactly the same) on another Docker setup, where we disabled SELinux and that worked. So we planned to give it a try; this being a test setup, we didn't hesitate to try that option. SELinux was set to 'enforcing', and we had to set it to 'disabled' and reboot the machine.
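
For anyone wanting to repeat it, the change itself is small; a rough sketch of what we did on this test box (getenforce and /etc/selinux/config are standard on RHEL/CentOS):

[root@fatdba-doccass ~]# getenforce
Enforcing
[root@fatdba-doccass ~]# vi /etc/selinux/config        # change SELINUX=enforcing to SELINUX=disabled
[root@fatdba-doccass ~]# reboot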


[root@fatdba-doccass ~]# more /etc/selinux/config |grep "SELINUX="
SELINUX=disabled
[root@fatdba-doccass ~]# reboot
 


Now, when the system is back, we are all set to run the PostgreSQL image.


[root@fatdba-doccass ~]# docker run --name postgres -e POSTGRES_PASSWORD=postgres -d postgres
6aebd2ea4304202980daeff761857f5aa53deaf51cf7d13b1d00974219b6f80c
[root@fatdba-doccass ~]#
[root@fatdba-doccass ~]#
 


Awesome, it worked, let’s check the status of the container.


[root@fatdba-doccass ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
6aebd2ea4304        postgres            "docker-entrypoint.s…"   8 seconds ago       Up 4 seconds        5432/tcp            postgres
[root@fatdba-doccass ~]#
[root@fatdba-doccass ~]#
 


Next, we tried to connect to the container and open a psql session, and that worked too!


[root@fatdba-doccass ~]# docker exec -it fatdba_psql bash
Error: No such container: fatdba_psql
[root@fatdba-doccass ~]# docker exec -it postgres bash
root@6aebd2ea4304:/#
root@6aebd2ea4304:/# psql -U postgres postgres
psql (13.0 (Debian 13.0-1.pgdg100+1))
Type "help" for help.

postgres=#
postgres=# select version();
                                                     version
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
(1 row)

postgres=#
postgres=#
postgres-# \dt+
                             List of relations
 Schema | Name  | Type  |  Owner   | Persistence |    Size    | Description
--------+-------+-------+----------+-------------+------------+-------------
 public | dixit | table | postgres | permanent   | 8192 bytes |

 

Hope That Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

oracle.net.ns.NetException: Got minus one from a read call

Posted by FatDBA on September 3, 2020

Hi Guys,

I would like to discuss a problem I was facing today on one of our Oracle 12c Release 1 standalone databases, where the application team explained that during an application restart they were getting the Oracle Net exception 'Got minus one from a read call'.


Caused by: java.lang.RuntimeException: java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:/jdbc/ProdMonkeyPD
    Caused by: java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:/jdbc/ProdMonkeyPD
    Caused by: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:/jdbc/ProdMonkeyPD
    Caused by: javax.resource.ResourceException: IJ031084: Unable to create connection
    Caused by: java.sql.SQLRecoverableException: IO Error: Got minus one from a read call
    Caused by: oracle.net.ns.NetException: Got minus one from a read call"}}
 

They suspected the issue was the high number of open files at the OS level. The file count was very high when listed using the lsof command on this RHEL7 system, and the count went down once the services were stopped. Well, I was able to explain the difference that exists between RHEL6 & RHEL7 when counting with lsof: RHEL7/EL7 includes TIDs (threads) in the output by default, unlike RHEL6/OL6, hence the open file count appears higher on RHEL7 than on RHEL6. So, it had nothing to do with the error they reported.

Next we checked the database alert log and it was all clean, no errors at all. I then checked the value of the OS_AUTHENT_PREFIX parameter, as it specifies a prefix that Oracle uses when authenticating users attempting to connect to the database: Oracle simply prefixes this value to each user's operating system account name and later compares the result against the database usernames. It was set to its default value, OPS$, which is kept for backward compatibility with previous versions.

So, I had two possible solutions for the problem:

– Set OS_AUTHENT_PREFIX to "" (a null string), thereby eliminating the addition of any prefix to operating system account names.
– Set tcp.validnode_checking = no in the SQLNET.ora file.
The second parameter enables or disables valid node checking for incoming connections; if it is set to yes, incoming connections are allowed only if they originate from a node that conforms to the list specified by the TCP.INVITED_NODES or TCP.EXCLUDED_NODES parameters.

So, I went with the first option and bounced the database so the change would take effect (the parameter is static), then asked the application team to give it a try again, and as expected it worked. The Oracle Net exception 'Got minus one from a read call' was resolved after applying the first fix itself.
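
For reference, the change itself is a one-liner; since OS_AUTHENT_PREFIX is a static parameter it has to go into the spfile and needs a bounce (a small sketch, assuming the instance runs on an spfile):

SQL> alter system set os_authent_prefix='' scope=spfile;
SQL> shutdown immediate
SQL> startup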

The second option may well have resolved it too, but one fix at a time.

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , | Leave a Comment »