Tales From A Lazy Fat DBA

Love all databases! – It's all about performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Posts Tagged ‘oracle’

OGG-01201 Error reported by MGR Access denied

Posted by FatDBA on August 22, 2021

Hi Guys,

Last week I encountered a problem with an old GoldenGate setup running on version 12.2, where the extract was failing with errors OGG-01201/OGG-01668 during an initial load.

ERROR   OGG-01201  Oracle GoldenGate Capture for Oracle, exld1.prm:  Error reported by MGR : Access denied
ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, exld1.prm:  PROCESS ABENDING

This ‘access denied’ error appeared even though the login information was correct for both the source and target systems, and at first I wasn’t sure what was causing it.

What I learned after reading the documentation is that in GoldenGate 12.2 the default behavior is that the MANAGER and its related EXTRACT/REPLICAT processes cannot be started or stopped remotely, because the only default access rule is a deny rule. Since I was running the initial load from the source server and trying to start the Replicat on the target server remotely, I hit this error. It is a security feature intended to prevent unauthorized access to the Oracle GoldenGate Manager process and the processes under its control.

The solution is to add an access rule such as “ACCESSRULE, PROG *, IPADDR *, ALLOW” to the Manager parameter file on the target system, something like below. The ACCESSRULE parameter controls remote access to the system.

-- GoldenGate Manager Parameter File (mgr.prm) on Target system
--
userid xxxxxxx, password xxxxxxx
PORT 7810
ACCESSRULE, PROG REPLICAT, IPADDR 10.11.01.15, ALLOW
PURGEOLDEXTRACTS ./dirdat/rp*, USECHECKPOINTS, MINKEEPHOURS 4

You can also set a priority with PRI (0-99). The PROG option can be GGSCI, GUI, MGR/MANAGER, REPLICAT, COLLECTOR|SERVER, or * for all programs (the default). IPADDR specifies which IP address may access the specified program, LOGIN_ID is used together with the RMTHOST configuration, and ALLOW | DENY decides whether the access is allowed or denied.

Hope It Helped!
Prashant Dixit


New JSON features – The old lovely database is new again with 21c

Posted by FatDBA on August 21, 2021

Oracle Database supports relational, graph, spatial, text, OLAP, XML, and JSON data – yes, all at once in one database.
Oracle Database 21c provides a native JSON data type stored in a binary format. This data type can be used in table columns, takes less space, and is faster to process. Its purpose-built Oracle binary JSON format (OSON) speeds up both OLAP and OLTP workloads over JSON documents.

I recently did some tests and found that the JSON data type is now fully integrated into all components of the 21c database, with a few additions that improve its performance. This post is all about the JSON data type in the Oracle 21c ecosystem: new features, improvements, and so on.

So, before I move ahead, I’d like to first build the foundation for the readers; let’s create a table with JSON data and run some examples.

-- Create a table with a JSON column.
CREATE TABLE testOrder
 (did NUMBER PRIMARY KEY, jdoc JSON);


-- Let's insert some data into the table
INSERT INTO testOrder
VALUES (1, '{"testOrder": {
  "podate": "2015-06-03",
  "shippingAddress": {"street": "3467 35th Ave",
                      "city": "Clara", "state": "CA", "zip": 94612},
  "comments": "Discounted sales Foundation Day",
  "sparse_id": "PFHA35",
  "items": [
    {"name": "TV", "price": 341.55, "quantity": 2,
     "parts": [
       {"partName": "remoteCon", "partQuantity": 1},
       {"partName": "antenna", "partQuantity": 2}]},
    {"name": "PC", "price": 441.78, "quantity": 10,
     "parts": [
       {"partName": "mousepad", "partQuantity": 2},
       {"partName": "keyboard", "partQuantity": 1}]}
  ]}}');



-- Do some SELECT ops
SELECT did,
       po.jdoc.testOrder.podate.date(),
       po.jdoc.testOrder.shippingAddress,
       po.jdoc.testOrder.items[*].count(),
       po.jdoc.testOrder.items[1]
FROM   testOrder po
WHERE  po.jdoc.testOrder.podate.date() =
         TO_DATE('2015-06-03','YYYY-MM-DD')
AND    po.jdoc.testOrder.shippingAddress.zip.number()
         BETWEEN 94610 AND 94620;


-- JSON constructor syntax over relational data (lineItems_rel here is a sample
-- relational table with itemName, price and quantity columns)
SELECT JSON {
         'name'  : li.itemName,
         'sales' : li.price * li.quantity
       }
FROM   lineItems_rel li;




-- That's how to UPDATE
UPDATE testOrder po
SET jdoc = JSON_TRANSFORM(jdoc,
             REPLACE '$.testOrder.shippingAddress.city' = 'Oakland',
             REPLACE '$.testOrder.shippingAddress.zip'  = 94607,
             SET     '$.testOrder.contactPhone' =
                       JSON('["(415)-867-8560","(500)312-8198"]'),
             REMOVE  '$.testOrder.sparse_id',
             APPEND  '$.testOrder.items' =
                       JSON('{"items" : [{"name" : "iphone",
                              "price" : 635.54, "quantity" : 2}]}'))
WHERE po.jdoc.testOrder.podate.date() = TO_DATE('2015-06-03','YYYY-MM-DD');

So, that’s how you can create, query, and update your JSON data in any table, pretty cool right 🙂

Okay, coming back to the purpose of the post – What is new in Oracle 21c in terms of JSON support ?

  • The JSON data type was added in Oracle 20c to provide native support, and it becomes generally available in version 21c.
  • Before 21c, users could only use a single-value functional index to accelerate JSON_VALUE() predicate evaluation; such an index could index at most one value per row, meaning a field could have at most one occurrence. In 21c, you can create a multivalue functional index on a JSON column to index elements within a JSON array. This speeds up evaluation of JSON_EXISTS() predicates that test array elements with equality conditions in the SQL/JSON path language (see the sketch after this list).
  • Oracle 21c includes several other enhancements to the JSON functionality in the database, such as the JSON_SCALAR function, which creates an instance of the JSON type from a SQL scalar value.
  • The JSON_TRANSFORM function was introduced in Oracle Database 21c to make modifying JSON data simpler and less complex.
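
To make the multivalue index and JSON_SCALAR points concrete, here is a minimal sketch against the testOrder table created above. Treat it as illustrative only: the index name and the price predicate are mine, not taken from the original tests.

-- Multivalue functional index over elements of the "items" array (21c syntax)
CREATE MULTIVALUE INDEX testorder_items_price_idx
  ON testOrder t (t.jdoc.testOrder.items[*].price.number());

-- A JSON_EXISTS() predicate over the array that such an index is meant to speed up
SELECT count(*)
FROM   testOrder t
WHERE  JSON_EXISTS(t.jdoc, '$.testOrder.items[*]?(@.price > 400)');

-- JSON_SCALAR builds an instance of the JSON type from a SQL scalar
SELECT JSON_SCALAR(341.55) FROM dual;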

Hope It Helped!
Prashant Dixit


What’s new in Golden Gate version 21c ?

Posted by FatDBA on August 20, 2021

Hi Guys,

Oracle has recently released GoldenGate 21c (21.1), shortly after the 21c (21.3) database release for on-premises. Today’s post is all about the new features and changes introduced with this GoldenGate version.

  • Oracle GoldenGate is available with Microservices Architecture : This release of Oracle GoldenGate is available with Microservices Architecture only.
  • Automatic Extract of tables with supplemental logging is supported : Oracle GoldenGate provides a new auto_capture mode to capture changes for all the tables that are enabled for logical replication. You can list the tables enabled for auto-capture using the LIST TABLES AUTO_CAPTURE command option. Use the TRANLOGOPTIONS INTEGRATEDPARAMS auto_capture option to set up automatic capture.
  • Oracle native JSON datatype is supported : Oracle GoldenGate capture and apply processes now support the new native JSON datatype, which is supported by Oracle Database 21c and higher.
  • Enhanced Automatic Conflict Detection and Resolution for Oracle Database 21c
  • Autonomous Database Extract is supported : Oracle GoldenGate can now capture from the Autonomous Databases in OCI.
  • Large DDL (greater than 4 MB) replication is supported : DDL statements larger than 4 MB are now supported for replication.
  • DB_UNIQUE_NAME with heartbeat table : DB_UNIQUE_NAME is available with the heartbeat table to allow users to uniquely identify the source of the heartbeat.
  • Oracle GoldenGate binaries are no longer installed on a shared drive : Oracle always recommended installing the Oracle GoldenGate binaries (OGG_HOME) on a local file system as a best practice. From this release onward, it is a requirement. The binaries must be installed on local drives.
  • Partition Filtering
  • A new Extract needs to be created when the DB timezone is changed : You need to create a new Extract if the database timezone is changed, especially in the case of Oracle Cloud deployments.
  • DB_UNIQUE_NAME with trail file header : DB_UNIQUE_NAME is added in the trail file header along with DB_NAME, which helps in troubleshooting replication in active-active environments, where mostly all replicas have the same DB_NAME but identify each replica site uniquely using the DB_UNIQUE_NAME.
  • Per PDB Capture
  • Parallel Replicat Core Infrastructure Support for Heterogeneous Databases : Parallel Replicat is supported with SQL Server, DB2 z/OS, and MySQL.

Release announcement link : https://blogs.oracle.com/dataintegration/oracle-goldengate-21c-release-announcement

Hope It Helped
Prashant Dixit


Quarantine a SQL ? This can save you from a system performance pandemic …

Posted by FatDBA on July 6, 2021

Hi Guys,

Last weekend I was asked to examine a performance problem where the customer was observing sporadic resource spikes in CPU and physical I/O (PIOs), and had no idea how to handle the situation. They had already identified the problematic SQL and asked me to take a look. During the initial investigation on the system, I found that Resource Manager was enabled and was killing the offending SQL every time it tried to breach the limits set on CPU and other resources.

I spent some time understanding the query’s behavior and performance, and found it was a case of a SQL plan (PHV) flip, where the optimizer toggles between a good plan and a bad one. As this was time sensitive and the customer needed a quick fix before I could identify the reason behind the PHV flip, we needed a way to stop executions of the SQL with the bad plans. This was an Oracle 19c Enterprise Edition system running on Exadata, so I immediately proposed a solution using the new ‘SQL Quarantine‘ feature, which prevents reuse of an execution plan that Resource Manager has already terminated for exceeding resource limits.

Let me show you how to do that. SQL Quarantine can be set up for a specific execution plan (SQL ID plus PHV), for all execution plans of a SQL ID, or by SQL text; here I will demonstrate how to quarantine a SQL and all of its execution plans.

Note: This feature is currently available only in Enterprise Edition on Engineered Systems, such as Exadata and Exadata Cloud Service, but for this demo I will enable the ‘_exadata_feature_on‘ parameter on my sandbox setup.

** Please don’t touch this parameter in your production environments if you’re not running Oracle EE on top of Exadata or Exadata Cloud Service.
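
For reference, the @hidden script I call below isn’t reproduced in this post; a minimal sketch of such a hidden-parameter lookup (run as SYSDBA, querying the x$ksppi/x$ksppcv/x$ksppsv fixed tables) could look like this:

-- minimal sketch of a hidden parameter lookup, similar to the @hidden script used below
select a.ksppinm   parameter,
       b.ksppstvl  session_value,
       c.ksppstvl  instance_value,
       a.ksppdesc  descr
from   x$ksppi a, x$ksppcv b, x$ksppsv c
where  a.indx = b.indx
and    a.indx = c.indx
and    a.ksppinm like '%&param%';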

[oracle@canttowinsec ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 3 23:33:41 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           FALSE                     FALSE                     Exadata Feature On

SQL>
SQL>


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>


SQL> startup
ORACLE instance started.

Total System Global Area  922745216 bytes
Fixed Size                  8903040 bytes
Variable Size             843055104 bytes
Database Buffers           67108864 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL>



SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           TRUE                      TRUE                      Exadata Feature On

SQL>




-- For this demo I have created one table with 1000000 rows. 

SQL> select count(*) from bigtab;

  COUNT(*)
----------
   1000000

Next, check the SQL_ID, plan hash value (PHV), and text of the SQL you want to quarantine, and then create the quarantine configuration for it. You can create it for a SQL ID, for a SQL ID plus PHV, or for the SQL text.

SQL> select sql_id, plan_hash_value, sql_text from v$sqlarea where sql_text like '%select * from bigtab%';

SQL_ID        PLAN_HASH_VALUE
------------- ---------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
6fwqzurbc8y7k       441133017
select * from bigtab

SQL>


-- You can create a quarantine configuration by SQL_ID (optionally for a specific plan hash value)
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id');
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id', PLAN_HASH_VALUE => '&PLAN_HASH_VALUE');

-- ... or by SQL text
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(SQL_TEXT => to_clob('select count(*) from emp'));



-- This command creates a quarantine configuration for SQL ID '6fwqzurbc8y7k' and all its execution plans.
SQL>
SQL> DECLARE
quarantine_sql VARCHAR2(30);
BEGIN
quarantine_sql :=   DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '6fwqzurbc8y7k');
END;
/  

PL/SQL procedure successfully completed.

SQL>



-- Check if quarantine config/profile is created.

SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time,CREATED, ENABLED from dba_sql_quarantine;


NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           ALWAYS               ALWAYS                         04-JUL-21 12.09.25.567422 AM   YES


Next we will use the DBMS_SQLQ.ALTER_QUARANTINE procedure to put limits on resource usage, i.e. elapsed time, CPU time, I/O in megabytes, number of physical I/O (PIO) requests, and number of logical I/O (LIO) requests.

SQL>
SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'CPU_TIME',
       PARAMETER_VALUE => '5');

    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'ELAPSED_TIME',
       PARAMETER_VALUE => '10');
END;
/  2    3    4    5    6    7    8    9   10   11   12

PL/SQL procedure successfully completed.

SQL>


-- let's verify the limits that we have set against this SQL Quarantine profile.

SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time, CREATED, ENABLED from dba_sql_quarantine;

NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           10                   5                              04-JUL-21 12.12.41.918609 AM   YES

Next we will set up Resource Manager.

-- Create a pending area.
SQL> begin
 dbms_resource_manager.create_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

-- Create a consumer group with name 'DIXITPOC_GROUP'.
SQL> begin
 dbms_resource_manager.create_consumer_group(CONSUMER_GROUP=>'DIXITPOC_GROUP',COMMENT=>'To test SQL quarantine feature for one SQL');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.


-- Now map the user 'DIXIT' to the consumer group; this could be any user that will execute the query.
SQL> begin
 dbms_resource_manager.set_consumer_group_mapping(attribute => 'ORACLE_USER',value => 'DIXIT',consumer_group =>'DIXITPOC_GROUP' );
 end;
  /  2    3    4

PL/SQL procedure successfully completed.


-- Create a resource plan with name 'POC_FOR_QUARANTINE'.
SQL> begin
dbms_resource_manager.create_plan(plan => 'POC_FOR_QUARANTINE',comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds');
end;
/  2    3    4

PL/SQL procedure successfully completed.


-- Create a plan directive by allocating resource plan 'POC_FOR_QUARANTINE' to consumer group 'DIXITPOC_GROUP'. I am setting execution limit of 5 seconds for the SQL.
SQL> begin
 dbms_resource_manager.create_plan_directive(
 plan => 'POC_FOR_QUARANTINE',
 group_or_subplan => 'DIXITPOC_GROUP',
 comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds',
 switch_group=>'CANCEL_SQL',
 switch_time => 5,
 switch_estimate=>false);
 end;
 /  2    3    4    5    6    7    8    9   10

PL/SQL procedure successfully completed.




-- Allocate full resources to rest of the sessions which are not part of this plan.
SQL> begin
 dbms_resource_manager.create_plan_directive(PLAN=> 'POC_FOR_QUARANTINE', GROUP_OR_SUBPLAN=>'OTHER_GROUPS',COMMENT=>'leave rest as is');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.



-- Validate and submit pending area.
SQL> begin
 dbms_resource_manager.validate_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL> begin
 dbms_resource_manager.submit_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL>


-- Grant switch privilege to the DIXIT user which will switch it to 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager_privs.grant_switch_consumer_group('DIXIT','DIXITPOC_GROUP',false);
end;
/  2    3    4

PL/SQL procedure successfully completed.

-- Create initial consumer group for DIXIT user for 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager.set_initial_consumer_group('DIXIT','DIXITPOC_GROUP');
end;
/  2    3    4

PL/SQL procedure successfully completed.

SQL>

Next, set Resource Manager at the database level by setting the RESOURCE_MANAGER_PLAN parameter to the name of the plan, ‘POC_FOR_QUARANTINE’.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string
SQL> 


SQL> alter system set RESOURCE_MANAGER_PLAN = 'POC_FOR_QUARANTINE';
System altered.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string      POC_FOR_QUARANTINE
SQL>

Execute the SQL for the first time; you will get ‘active time limit exceeded’ because Resource Manager cancels the call.

SQL>
SQL> select * from bigtab;

ERROR at line 2:
ORA-00040: active time limit exceeded - call aborted

Now, if you run the SQL a second time, you will see that the quarantine configuration is used and the execution of the SQL is aborted.

SQL>
SQL> select * from bigtab;

ERROR at line 2:
ORA-56955: quarantined plan used
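
By the way, once you are done testing, the quarantine configuration can be removed with the DBMS_SQLQ drop routine; a minimal sketch, using the configuration name created earlier in this demo, would be:

-- drop the quarantine configuration once you no longer need it
BEGIN
  DBMS_SQLQ.DROP_QUARANTINE('SQL_QUARANTINE_2akx3bq44wvc5');
END;
/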

Hope It Helped
Prashant Dixit


Migration assessment report, possible with ora2pg ?

Posted by FatDBA on July 3, 2021

Hi Guys,

Last week someone asked me whether it’s possible to generate a migration assessment report (of any sort) when using ora2pg for migration. The team was coming from an EDB PostgreSQL migration where they had used EDB’s migration toolkit, which can generate something similar, but for their current project ora2pg had been finalized as the migration tool for the Oracle to vanilla PostgreSQL migration.

And the answer to their question was YES, it’s possible to generate a migration assessment report with ora2pg too. The report is very useful, as it assesses potential migration problems or difficulties, allocates a cost per object type, and provides other details such as invalid object counts, the total number of objects per type, comments, and more description about the objects.

It inspects all database objects, functions, and stored procedures to detect whether there are objects or PL/SQL code that cannot be automatically converted by the tool. At the end of the report it calculates the overall cost (the sum of all costs) and translates it into an approximate number of days needed to perform the migration.

The default time per “cost evaluation unit” is 5 minutes, and the output can be generated in text (default), HTML, or CSV format.

Let’s generate a sample report. First let me confirm the Oracle database version that ora2pg connects to (SHOW_VERSION), and then generate the report for a particular schema of this 18c database.

$
$ ora2pg -t SHOW_VERSION -c config/ora2pg.conf
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0


-- The 'estimate_cost' option activates the migration cost evaluation together with SHOW_REPORT

$ ora2pg -t show_report  --estimate_cost -c config/ora2pg.conf --dump_as_html > /tmp/orapgtestmigr/ora2pgreport.html

-------------------------------------------------------------------------------
Ora2Pg v20.0 - Database Migration Report
-------------------------------------------------------------------------------
Version  Oracle Database 18c Enterprise Edition Release 18.0.0.0.0
Schema   DIXITSCH
Size     88.19 MB


Hope It Helped!
Prashant Dixit


Single command to prepare a DB for a Data Guard configuration in Oracle 21c – PREPARE DATABASE FOR DATA GUARD WITH

Posted by FatDBA on June 28, 2021

Hi Guys,

Another cool feature I came across while doing a POC on an Oracle 21c database!
Starting with Oracle 21c, you can prepare the primary database for a Data Guard broker configuration using one single command, PREPARE DATABASE FOR DATA GUARD WITH <options>. This command does all the preparation work for you: it enables force logging, sets the required parameters, creates standby redo log files, enables Flashback Database, and enables archivelog mode.

The command below prepares the database named dixitdb for use as a primary database in a DG broker configuration.

[oracle@canttowin ~]$ dgmgrl
DGMGRL for Linux: Release 21.0.0.0.0 - Production on Sat Jun 26 11:10:11 2021
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL>

DGMGRL> PREPARE DATABASE FOR DATA GUARD WITH DB_UNIQUE_NAME IS dixitdb;
DB_RECOVERY_FILE_DEST IS "$ORACLE_BASE_HOME/dbs/"
DB_RECOVERY_FILE_DEST_SIZE is "400G"
DG_BROKER_CONFIG_FILE1 IS "$ORACLE_HOME/dbs/testfile.dat"
DG_BROKER_CONFIG_FILE2 IS "$ORACLE_HOME/dbs/testfile2.dat";
Preparing database "dixitdb" for Data Guard.
Creating server parameter file (SPFILE) from initialization parameter memory values.
Database must be restarted after creating the server parameter (SPFILE).
Shutting down database "dixitdb".
Database closed.
Database dismounted.
ORACLE instance shut down. Starting database "dixitdb" to mounted mode.
ORACLE instance started.
Database mounted. Server parameter file (SPFILE) is "ORACLE_BASE_HOME/dbs/spdixitdb.ora".
Initialization parameter DB_UNIQUE_NAME set to 'dixitdb'.
Initialization parameter DB_FILES set to 1024.
Initialization parameter LOG_BUFFER set to 401355151.
Primary database must be restarted after setting static initialization parameters. Primary database must be restarted to enable archivelog mode.
Shutting down database "dixitdb".
Database dismounted.
ORACLE instance shut down.
Starting database "dixitdb" to mounted mode.
ORACLE instance started.
Database mounted.
Initialization parameter DB_FLASHBACK_RETENTION_TARGET set to 120.
Initialization parameter DB_BLOCK_CHECKSUM set to 'TYPICAL'.
Initialization parameter DB_LOST_WRITE_PROTECT set to 'TYPICAL'.
Initialization parameter PARALLEL_THREADS_PER_CPU set to 1.
Removing RMAN archivelog deletion policy 1.
Removing RMAN archivelog deletion policy 2.
RMAN configuration archivelog deletion policy set to SHIPPED TO ALL STANDBY.
Initialization parameter DB_RECOVERY_FILE_DEST_SIZE set to '200G'.
Initialization parameter DB_RECOVERY_FILE_DEST set to 'ORACLE_BASE_HOME/dbs/'. Initialization parameter DG_BROKER_START set to FALSE.
Initialization parameter DG_BROKER_CONFIG_FILE1 set to 'ORACLE_HOME/dbs/testfile.dat'.
Initialization parameter DG_BROKER_CONFIG_FILE2 set to 'ORACLE_HOME/dbs/testfile2.dat'.
LOG_ARCHIVE_DEST_n initialization parameter already set for local archival.
Initialization parameter LOG_ARCHIVE_DEST_2 set to 'location=use_db_recovery_file_dest valid_for=(all_logfiles, all_roles)'.
Initialization parameter LOG_ARCHIVE_DEST_STATE_2 set to 'Enable'.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'MANUAL'.
Standby log group 4 will be dropped because it was not configured correctly.
Standby log group 3 will be dropped because it was not configured correctly.
Adding standby log group size 26214400 and assigning it to thread 1.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'AUTO'.
Initialization parameter DG_BROKER_START set to TRUE.
Database set to FORCE LOGGING. Database set to ARCHIVELOG.
Database set to FLASHBACK ON.
Database opened.
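
Once the command completes, a quick sanity check from SQL*Plus (a minimal sketch, not part of the original DGMGRL output) confirms what PREPARE DATABASE changed:

-- verify force logging, flashback and archivelog mode on the prepared primary
select force_logging, flashback_on, log_mode from v$database;

-- confirm the standby redo logs that were added
select group#, thread#, bytes from v$standby_log;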


Hope It Helped!
Prashant Dixit


Getting SQL Net message from dblink, have you tried DRIVING_SITE hint ?

Posted by FatDBA on June 27, 2021

Hi Guys,

Last week I was asked to look into a performance problem where the customer DBA reported excessive ‘SQL*Net message from dblink’ waits on the system. As I didn’t have direct access to the box, I asked him to share both AWR and ASH reports for the period when they observed the problem. He was right: there were few waits on this event, but with very high average (per-execution) times. Next, I examined the ASH data for the period and identified the SQL (a SELECT) that was frequently waiting on this wait event.
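
By the way, that ASH check can be as simple as the sketch below (this assumes you are licensed for the Diagnostics Pack; the query is a generic example, not the exact one I ran that day):

-- which SQL spends the most ASH samples on the dblink wait event
select sql_id, count(*) samples
from   v$active_session_history
where  event = 'SQL*Net message from dblink'
group  by sql_id
order  by samples desc;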

About this event: as you know, SQL*Net messages are classified as network-class wait events, and this one occurs when part of the final result set comes from a remote database accessed via a database link. Alright, so we know the problem and we have the problematic SQL; let’s simulate the problem and the solution.

I will first create the DBLink that I will use to access the remote data, which lives in a different database on a different host, and then create sample tables with some test data on both Site A and Site B.

Reference used:
BIGTAB: Table on Site 1, local site.
BIGTAB2: Table on site 2, remote location

SQL>
SQL> CREATE DATABASE LINK dixitdlink1
  2     CONNECT TO dixit IDENTIFIED BY oracle90
   USING '(DESCRIPTION=
            (ADDRESS = (PROTOCOL = TCP)(HOST = canttowinsec.quebecdomain)(PORT = 1521))
            (CONNECT_DATA=(SERVICE_NAME=dhavaldb.quebecdomain))
          )';  

Database link created.

SQL>


-- On source database, on local host.
CREATE TABLE BIGTAB (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.

INSERT INTO BIGTAB (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.


SQL> delete from prashant.BIGTAB where rownum < 60000;

59999 rows deleted.




-- This I have created on target database, on a remote host.
CREATE TABLE BIGTAB2 (
   id         NUMBER,
   weight     NUMBER,
   adate      DATE
);

Table created.


INSERT INTO BIGTAB2 (id, weight, adate)
SELECT MOD(ROWNUM,1000),
       DBMS_RANDOM.RANDOM,
       SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
 FROM all_objects
/

67447 rows created.

Alright, the test data is created on both sites. To mimic the exact case I faced, I intentionally deleted 59,999 rows from the table on site 1. So now we have table BIGTAB with 7735 records on the local site, and on the remote database we have table BIGTAB2 with 67,447 rows. Let’s execute a few queries and do an inner join on both tables.
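
If you want to confirm the row counts from the local site before running the tests, a quick check over the database link created above looks like this:

-- counts on the local table and on the remote table via the dblink
select count(*) from prashant.BIGTAB;
select count(*) from dixit.BIGTAB2@dixitdlink1;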

Test 1: With no hint or tweaking of any kind. The optimizer pulls all matching data from the remote site over to the local site and then performs the join here; the same behavior can also be forced by putting the hint on the local (smaller) table, i.e. /*+ DRIVING_SITE(BIGTAB) */.

SQL> select * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
........
............
...............
.................


Execution Plan
----------------------------------------------------------
Plan hash value: 2705338834

----------------------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN         |         | 19931 |  1362K|   144   (0)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL| BIGTAB  |  6948 |   237K|    68   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | BIGTAB2 | 19930 |   681K|    76   (0)| 00:00:01 | DIXIT~ | R->S |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ADATE"="DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT "ID","WEIGHT","DATEOFREGISTER" FROM "DIXIT"."BIGTAB2" "BIGTAB2"
       (accessing 'DIXITDLINK1.ONTADOMAIN' )


Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

So, this is what was happening that day on that system, with that query waiting on ‘SQL*Net message from dblink‘. Now is the right time to introduce the DRIVING_SITE hint, something I have been experimenting with for a long time, maybe since my 9i days.

This hint is useful for distributed queries: it forces the optimizer to use a particular site as the driving site, i.e. to execute the query at a different site than the one the database would otherwise choose. In the previous test case, all matching rows from the remote host were sent to the local site and the join was executed locally.

Now I am going to use DRIVING_SITE (naming the remote table) to force the optimizer to perform the join at the remote site; the query is executed there and only the result set is returned to the local site.

So, you should try with the DRIVING_SITE hint on the site where the huge table resides!

Test 2: With the DRIVING_SITE hint on the remote table BIGTAB2, which is big and has 67,447 records.

SQL> select /*+ DRIVING_SITE(BIGTAB2) */ * from prashant.BIGTAB, dixit.BIGTAB2@dixitdlink1 where ADATE=DATEOFREGISTER;

        ID     WEIGHT ADATE             ID     WEIGHT DATEOFREG
---------- ---------- --------- ---------- ---------- ---------
       502 -1.363E+09 09-MAR-20        989 1353403341 09-MAR-20
       577 -185863752 22-FEB-21        247 -626839962 22-FEB-21
.....
.......
........
..........



Execution Plan
----------------------------------------------------------
Plan hash value: 2214131741

--------------------------------------------------------------------------------------------------
| Id  | Operation              | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|*  1 |  HASH JOIN             |         |    82 |  5740 |    70   (0)| 00:00:01 |        |      |
|   2 |   REMOTE               | BIGTAB  |    82 |  2870 |     2   (0)| 00:00:01 |      ! | R->S |
|   3 |   TABLE ACCESS FULL    | BIGTAB2 | 53126 |  1815K|    68   (0)| 00:00:01 | DHAVA~ |      |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A2"."ADATE"="A1"."DATEOFREGISTER")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   2 - SELECT "ID","WEIGHT","ADATE" FROM "PRASHANT"."BIGTAB" "A2" (accessing '!' )


Note
-----
   - fully remote statement
   - dynamic statistics used: dynamic sampling (level=2)

Look at the cost, rows processed, and bytes (data processed): all of them are reduced considerably. Of course, you don’t see much elapsed-time difference in this example because the dataset is small and the query is simple; try it yourself on a bigger case and see the magic!

Hope It Helped!
Prashant Dixit


Running on Oracle 19c ? – Why enable full supplemental logging when SUBSET DATABASE REPLICATION is there

Posted by FatDBA on June 26, 2021

Hi Folks,

Today’s post is about one of the cool features added in Oracle 19c, which is very useful and which I discovered recently while working on a GoldenGate setup.

This is about supplemental logging, which makes the database capture extra information in the redo for redo-based consumers such as a logical standby or GoldenGate; it may require additional columns to be logged in the redo log files, which puts noticeable stress and overhead on the database. That overhead looks especially unjustified if you are doing only a partial GoldenGate replication, i.e. for only a few tables.

Starting with Oracle 19c, this is something you can control. With this version, you can enable fine-grained supplemental logging (SUBSET DATABASE REPLICATION), which removes a lot of the logging overhead: it allows you to disable supplemental logging on the entire database and on all tables. You can use this option if you are running Oracle 19c with the COMPATIBLE parameter set to 19.0.0.0.0 or greater and the ENABLE_GOLDENGATE_REPLICATION parameter set to TRUE.

Note: I know of a few products where ‘Supplemental Log Data Subset Database Replication’ doesn’t work or isn’t supported by the product, e.g. HVR 6.

SQL> select BANNER_FULL from v$version;

BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          NO


SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          YES

You can disable it at any time using the steps/commands below.

SQL>
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL>  ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT  FORCE_LOGGING, supplemental_log_data_min, supplemental_log_data_sr from v$database;

FORCE_LOGGING                           SUPPLEMENTAL_LOG_DATA_MIN                                    SUPPLEMENTAL_LOG_DATA_SR
--------------------------------------- ------------------------------------------------------------ ----------------------------------------
YES                                     YES                                                          NO

Hope It Helps!
Prashant Dixit


Data replication between Oracle & PostgreSQL using Golden Gate.

Posted by FatDBA on June 24, 2021

Hi Mates,

Recently I was doing a migration where GoldenGate was used for data movement between a source Oracle 19c database and a PostgreSQL 12 target. This was the first time I was doing this for PostgreSQL; I had earlier tried it with MySQL, Teradata, and Cassandra, and those were a little tricky, especially the one with Cassandra. Let’s see how this one with PostgreSQL goes …

Okay, so this post is all about setting a test replication setup between Oracle and PostgreSQL database. Without further ado, let’s get started!

Details:
Source: Oracle 19c (19.3.0) database
Target: Vanilla/Open-Source PostgreSQL 12.6.7
Golden Gate Version: Golden Gate 19.1 for Oracle (Source), Golden Gate 19.1 for PostgreSQL Database (Target Host).
Source Hostname: canttowinsec.quebecdomain (IP 192.168.154.129)
Target Hostname: canttowin.ontadomain (192.168.154.128)

ON SOURCE:

Let’s first create the user and a test table on the source database, which we will use for this demo.

sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 23 15:21:01 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.3.0.0.0

SQL> create user opeth identified by opeth;

User created.

SQL> grant connect to opeth;

Grant succeeded.

SQL> conn opeth/opeth
Connected.

SQL>  create table testtable (col1 number, col2 varchar2(20));

Table created.

SQL> alter table testtable add primary key (col1);

Table altered.

Next I will install Golden Gate using a response file (silent method to install).

[oracle@canttowin Disk1]$ ./runInstaller -silent -nowait -responseFile /home/oracle/ggdirpsql/Disk1/response/oggcore.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 6549 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3945 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2021-06-18_04-33-50AM. Please wait ...[oracle@canttowin Disk1]$ You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2021-06-18_04-33-50AM.log
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2021-06-18_04-33-50AM.log' for more details.
Successfully Setup Software.

Next, I will set the environment variables (to avoid errors while calling the GG CLI) and create the manager process.

[oracle@canttowinsec ggdirpsql]$ pwd
/home/oracle/ggdirpsql
[oracle@canttowinsec ggdirpsql]$ export PATH=$PATH:/home/oracle/ggdirpsql
[oracle@canttowinsec ggdirpsql]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/home/oracle/ggdirpsql
[oracle@canttowinsec ggdirpsql]$


[oracle@canttowin ggdir]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 19.1.0.0.4 OGGCORE_19.1.0.0.0_PLATFORMS_191017.1054_FBO
Linux, x64, 64bit (optimized), Oracle 19c on Oct 17 2019 21:16:29
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


GGSCI (canttowinsec.quebecdomain) 7> view param mgr

PORT 7810


GGSCI (canttowinsec.quebecdomain) 1> start mgr
Manager started


GGSCI (canttowinsec.quebecdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

Now that the manager process is running, let’s log in to the source database from GGSCI and list the table we created at the beginning, just to make sure everything is okay at the source.

GGSCI (canttowinsec.quebecdomain) 6> dblogin userid opeth, password opeth
Successfully logged into database.


GGSCI (canttowinsec.quebecdomain as opeth@dhavaldb) 10> list tables opeth.*
opeth.testtable

Found 1 tables matching list criteria.


All good, let's create the EXTRACT process on the source system.

GGSCI (canttowinsec.quebecdomain) 2> edit param EORPSQL

GGSCI (canttowinsec.quebecdomain) 1> view params EORPSQL

EXTRACT EORPSQL
USERID opeth, password opeth
RMTHOST 192.168.154.128, MGRPORT 7810
RMTTRAIL ./dirdat/ep
TABLE opeth.testtable;


GGSCI (canttowinsec.quebecdomain) 4> add extract EORPSQL, tranlog, begin now
EXTRACT added.


GGSCI (canttowinsec.quebecdomain) 5> add exttrail ./dirdat/ep, extract EORPSQL, megabytes 5
EXTTRAIL added.

GGSCI (canttowinsec.quebecdomain) 6> start EORPSQL

Sending START request to MANAGER ...
EXTRACT EORPSQL starting



GGSCI (canttowinsec.quebecdomain) 11> info EORPSQL

EXTRACT    EORPSQL   Last Started 2021-06-23 15:10   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
Process ID           10714
Log Read Checkpoint  Oracle Redo Logs
                     2021-06-23 15:11:11  Seqno 15, RBA 31941120
                     SCN 0.2711866 (2711866)

ON TARGET:

Now that we are done with all prerequisites, installation, and configuration at the source end, let’s move to the target system where the PostgreSQL database is running. I will create the same table that we created in the Oracle (source) database.

[postgres@canttowin ggdirpsql]$ psql -p 5432
psql (12.6.7)
Type "help" for help.

postgres=# \c opeth opeth
You are now connected to database "opeth" as user "opeth".

opeth=# CREATE TABLE "public"."testtable"
opeth-# (
opeth(#   "col1" integer NOT NULL,
opeth(#   "col2" varchar(20),
opeth(#   CONSTRAINT "PK_Col111" PRIMARY KEY ("col1")
opeth(# )
opeth-# ;
CREATE TABLE
opeth=#
opeth=#
opeth=# \dt+
                     List of relations
 Schema |  Name  | Type  |  Owner   |  Size   | Description
--------+--------+-------+----------+---------+-------------
 public | testtable | table | postgres | 0 bytes |
(1 row)

Next comes the most important part: creating the odbc.ini file. GoldenGate uses an ODBC connection to connect to the Postgres database. The ODBC driver is shipped with the installation, but on Unix you have to create the ODBC configuration file, commonly called odbc.ini, yourself. You need to create this file in the GG_HOME directory on the target system.

[postgres@canttowin ggdirpsql]$ more odbc.ini
[ODBC Data Sources]
PostgreSQL on pgsql
[ODBC]
IANAAppCodePage=4
InstallDir=/home/postgres/ggdirpsql
[pg12db]
Driver=/home/postgres/ggdirpsql/lib/GGpsql25.so
Description=Postgres driver
Database=opeth
HostName=canttowin.ontadomain
PortNumber=5432
LogonID=opeth
Password=opeth



Reference:
Driver=<your GoldenGate home directory>/lib/GGpsql25.so
InstallDir=<where you have installed your GG software on the target server>
IANAAppCodePage=<the value 4 represents the ISO-8859-1 character set>
Database=<Postgres database name>
HostName=<hostname of the Postgres database>
PortNumber=<port number of the Postgres database>
LogonID=<username of the Postgres database>
Password=<password of the Postgres database>

Now, let’s set the environment variables to point to the odbc.ini file, the lib directory, and the installation directory, and create all required GG-specific directories on the target database server.

[postgres@canttowin ggdirpsql]$ export ODBCINI=/home/postgres/ggdirpsql/odbc.ini
[postgres@canttowin ggdirpsql]$ export PATH=$PATH:/home/postgres/ggdirpsql
[postgres@canttowin ggdirpsql]$  export LD_LIBRARY_PATH=/home/postgres/ggdirpsql/lib


[postgres@canttowin ggdirpsql]$ ./ggsci

Oracle GoldenGate Command Interpreter for PostgreSQL
Version 19.1.0.0.200714 OGGCORE_19.1.0.0.0OGGBP_PLATFORMS_200628.2141
Linux, x64, 64bit (optimized), PostgreSQL on Jun 29 2020 03:59:15
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.



GGSCI (canttowin.ontadomain) 1> create subdirs

Creating subdirectories under current directory /home/postgres/ggdirpsql

Parameter file                 /home/postgres/ggdirpsql/dirprm: created.
Report file                    /home/postgres/ggdirpsql/dirrpt: created.
Checkpoint file                /home/postgres/ggdirpsql/dirchk: created.
Process status files           /home/postgres/ggdirpsql/dirpcs: created.
SQL script files               /home/postgres/ggdirpsql/dirsql: created.
Database definitions files     /home/postgres/ggdirpsql/dirdef: created.
Extract data files             /home/postgres/ggdirpsql/dirdat: created.
Temporary files                /home/postgres/ggdirpsql/dirtmp: created.
Credential store files         /home/postgres/ggdirpsql/dircrd: created.
Masterkey wallet files         /home/postgres/ggdirpsql/dirwlt: created.
Dump files                     /home/postgres/ggdirpsql/dirdmp: created.




Now it's time to create all GG related processes on the target; we will start with the manager process, followed by the Replicat.
GGSCI (canttowin.ontadomain) 2> edit param mgr

GGSCI (canttowin.ontadomain) 3> start mgr
Manager started.



GGSCI (canttowin.ontadomain) 2> dblogin sourcedb pg12db userid opeth
Password:

2021-06-23 15:00:58  INFO    OGG-03036  Database character set identified as UTF-8. Locale: en_US.UTF-8.

2021-06-23 15:00:58  INFO    OGG-03037  Session character set identified as UTF-8.
Successfully logged into database.


GGSCI (canttowin.ontadomain as opeth@pg12db) 5> list tables public.*
public.testtable

Found 1 tables matching list criteria.




GGSCI (canttowin.ontadomain as opeth@pg12db) 15> view params RORPSQL

REPLICAT RORPSQL
SOURCEDEFS ./dirdef/testtable.def
SETENV ( PGCLIENTENCODING = "UTF8" )
SETENV (ODBCINI="/home/postgres/ggdirpsql/odbc.ini" )
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
TARGETDB pg12db, USERID opeth, PASSWORD opeth
DISCARDFILE ./dirrpt/diskg.dsc, purge
MAP opeth.testtable, TARGET public.testtable, COLMAP (COL1=col1,COL2=col2);



GGSCI (canttowin.ontadomain as opeth@pg12db) 7> add replicat RORPSQL, NODBCHECKPOINT, exttrail ./dirdat/ep
REPLICAT added.


GGSCI (canttowin.ontadomain as opeth@pg12db) 8> start RORPSQL

Sending START request to MANAGER ...
REPLICAT RORPSQL starting


GGSCI (canttowin.ontadomain as opeth@pg12db) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     RORPSQL     00:00:59      00:00:03



GGSCI (canttowin.ontadomain as opeth@pg12db) 13>

GGSCI (canttowin.ontadomain as opeth@pg12db) 13> info RORPSQL

REPLICAT   RORPSQL   Last Started 2021-06-23 15:20   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           97138
Log Read Checkpoint  File ./dirdat/ep000000000
                     2021-06-23 15:21:41.005117  RBA 29169

All set, we have all processes running on both the source and the target system. Now, if you are running GoldenGate version 12.3 or above, there is no need to generate table definitions, as the metadata is present in the trail file itself; in this demo I am using GG 19.1.

So, only if you are running on a GG version < 12.3 do you need to generate the definition of the table (using defgen) on the source system and move it to the DIRDEF directory on the target so that the Replicat can use it.

Now, let’s do some testing to see if records are flowing from source to target database.

ON SOURCE:

[oracle@canttowinsec dirdef]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 23 15:21:01 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.3.0.0.0

SQL> conn opeth/opeth
Connected.
SQL> insert into testtable values (1,'hello world!');

1 row created.

SQL> select * from testtable;

      COL1 COL2
---------- --------------------
         1 hello world!

SQL>
SQL> commit;

Commit complete.

Let’s see if the same record has reached the target side.

ON TARGET:

[postgres@canttowin ggdirpsql]$ psql -p 5432
psql (12.6.7)
Type "help" for help.

postgres=# \c opeth opeth
You are now connected to database "opeth" as user "opeth".

psql (12.6.7)
Type "help" for help.

opeth=# select * from testtable;
 col1 |     col2
------+--------------
    1 | hello world!
(1 row)

opeth=# 

Cool! It’s pretty easy to set up this solution to migrate your data from Oracle to a PostgreSQL database.

A few things to note here
–> You don’t need to install GoldenGate for PostgreSQL on the source system; the source system, where your Oracle database runs, requires GoldenGate for Oracle.
–> Only the target is where you have to install GoldenGate for PostgreSQL.
–> odbc.ini is the file you have to create on the target end (where your PostgreSQL server is).
–> Always set PATH and LD_LIBRARY_PATH before calling ggsci, else you might get an error message while starting the CLI.
–> Be very careful while creating the odbc.ini file, otherwise you will end up with some obscure errors.

Hope It Helped!
Prashant Dixit


‘WARNING OGG-00552, ODBC error: SQLSTATE IM002 native database error 0. [DataDirect][ODBC lib]’ while doing data migration from Oracle to PostgreSQL using Golden Gate …

Posted by FatDBA on June 24, 2021

Hey Guys,

Just a question – have you ever received the below error message while doing a data migration from Oracle to a PostgreSQL database using GoldenGate?

GGSCI (canttowin.ontadomain) 2> dblogin sourcedb ora19topg12 userid opeth
Password:

2021-06-23 14:52:32  WARNING OGG-00552 Database operation failed: Couldn't connect to odbc. 
ODBC error: SQLSTATE IM002 native database error 0. 
[DataDirect][ODBC lib] Data source name not found and no default driver specified.
ERROR: Database operation failed: Couldn't connect to ora19topg12. 
ODBC error: SQLSTATE IM002 native database error 0. 
[DataDirect][ODBC lib] Data source name not found and no default driver specified Failed to open data source ora19topg12 for user opeth.

If yes, then here are a few things that you can always go and check!

First, check whether your pg_hba.conf has an entry like the one below, so that clients can connect. The sample below simply means that all clients can connect to the Postgres database; in practice this file usually carries more restrictive settings depending on business rules, so it is always worth a look when clients cannot connect to the Postgres database through the ODBC driver.

# IPv4 local connections:
host    all             all             0.0.0.0/0             md5

Second, check the listen_addresses parameter in the postgresql.conf file on your PostgreSQL host. You can set it to ‘*’ (just like below), which means that Postgres listens on all available addresses.

# - Connection Settings -

listen_addresses = '*'          # what IP address(es) to listen on;
                                # comma-separated list of addresses;
                                # defaults to 'localhost'; use '*' for all
                                # (change requires restart)

Third, after these changes, don’t forget to restart your PostgreSQL instance (listen_addresses only takes effect after a restart).
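
A quick way to double-check from psql (a minimal sketch; note that pg_hba.conf changes only need a reload, while listen_addresses needs a full restart):

-- show the effective value and whether a restart is still pending
show listen_addresses;
select name, setting, pending_restart from pg_settings where name = 'listen_addresses';

-- pg_hba.conf changes can be picked up with a reload instead of a full restart
select pg_reload_conf();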

Hopefully after all the above actions you will be able to resolve the problem; if not, please let me know in the comments and I will try to help!

Hope It Helped!
Prashant Dixit


 