Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Real Time Data Replication from Oracle to DataStax Cassandra using Oracle GoldenGate 23ai DAA

Posted by FatDBA on March 16, 2026

Modern enterprise architectures rarely live inside a single database engine anymore. It is very common to see operational data stored in relational systems like Oracle while analytics or distributed applications rely on NoSQL platforms such as Cassandra.

In several real-world projects I have worked on, Cassandra clusters were used to power high-volume applications while Oracle remained the authoritative system of record. In these cases, Oracle GoldenGate 23ai DAA becomes a powerful bridge that streams transactional changes from Oracle redo logs directly into Cassandra in near real time.

In this article we will walk through a complete end-to-end implementation of Oracle → DataStax Cassandra replication using Oracle GoldenGate.

The goal is to demonstrate:

• Initial data load
• Continuous change capture
• Replication of DML operations
• Validation on the Cassandra side

Source System : Oracle Database Server
Hostname: ora-prod01.lab.internal
IP: 10.10.10.11

Operating System: Oracle Linux 8
Database Version: Oracle 19c (19.21 RU)
GoldenGate Version: 23ai Classic Deployment

Target System: DataStax Enterprise Cassandra Node
Hostname: cass-node01.lab.internal
IP: 10.10.10.21

Operating System: Rocky Linux 8
Cassandra Distribution: DataStax Enterprise 6.8.50

GoldenGate Big Data / Java Adapter is installed on the Cassandra side.

Step 1 – Prepare Oracle Source Database

Oracle GoldenGate captures data changes directly from the Oracle redo logs, allowing near real-time replication without querying tables. To support this, the database must have ARCHIVELOG mode, force logging, and supplemental logging enabled so that all row-level changes are fully recorded and available for capture.

sqlplus / as sysdba

archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled

-- enable archivelog mode
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

-- enable force logging
alter database force logging;

-- enable supplemental logging
alter database add supplemental log data;

-- enable the GoldenGate replication parameter in the source database
alter system set enable_goldengate_replication=true scope=both;

-- streams pool size 
alter system set streams_pool_size=2G scope=both;

Step 2 – Create GoldenGate User in Oracle

A dedicated database user is required for GoldenGate to capture changes from the Oracle redo logs and access the necessary metadata. This user is granted the required privileges to read transaction logs, access dictionary information, and manage replication objects.

CREATE TABLESPACE ogg_ts 
DATAFILE '/u02/oradata/ogg_ts01.dbf' 
SIZE 500M AUTOEXTEND ON;

CREATE USER ggadmin IDENTIFIED BY "Welcome#123"
DEFAULT TABLESPACE ogg_ts
TEMPORARY TABLESPACE temp;

GRANT CONNECT, RESOURCE TO ggadmin;
GRANT SELECT ANY TABLE TO ggadmin;
GRANT SELECT ANY TRANSACTION TO ggadmin;
GRANT SELECT ANY DICTIONARY TO ggadmin;
GRANT FLASHBACK ANY TABLE TO ggadmin;
GRANT ALTER SYSTEM TO ggadmin;
GRANT UNLIMITED TABLESPACE TO ggadmin;


-- Grant OGG admin privileges
exec dbms_goldengate_auth.grant_admin_privilege(
   grantee => 'GGADMIN',
   privilege_type => 'CAPTURE',
   grant_select_privileges => TRUE,
   do_grants => TRUE);

Step 3 – Create Source Table in Oracle

To validate the replication pipeline, create a sample table and insert a few test records in the Oracle source database. These records will later be used to verify that GoldenGate successfully captures and replicates the data to the DataStax Cassandra target.

CREATE TABLE SALES_TRANSACTIONS
(
   TXN_ID NUMBER PRIMARY KEY,
   CUSTOMER_NAME VARCHAR2(100),
   PRODUCT_CODE VARCHAR2(50),
   TXN_AMOUNT NUMBER,
   TXN_TIMESTAMP DATE
);

INSERT INTO SALES_TRANSACTIONS VALUES
(101,'John Carter','PRD-A',1200,SYSDATE);

INSERT INTO SALES_TRANSACTIONS VALUES
(102,'Alice Brown','PRD-B',850,SYSDATE);

INSERT INTO SALES_TRANSACTIONS VALUES
(103,'Michael Lee','PRD-C',430,SYSDATE);

COMMIT;


select * from SALES_TRANSACTIONS;

TXN_ID CUSTOMER_NAME PRODUCT_CODE TXN_AMOUNT TXN_TIMESTAMP
-----------------------------------------------------------
101    John Carter   PRD-A        1200       05-MAR-26
102    Alice Brown   PRD-B        850        05-MAR-26
103    Michael Lee   PRD-C        430        05-MAR-26

Step 4 – Install DataStax Cassandra

On the target node install DataStax Enterprise. Example repository configuration:

sudo vi /etc/yum.repos.d/datastax.repo

[datastax]
name=DataStax Repo
baseurl=https://rpm.datastax.com/enterprise
enabled=1
gpgcheck=0

-- Install DSE 
sudo yum install dse-full-6.8.50 -y

-- Start DSE 
sudo systemctl start dse

-- Verify cluster/node status
nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal

--  Address       Load       Tokens Owns  Host ID                               Rack
UN  10.10.10.21   120 KB     256    100%  8c34a0c1-b8c1-4b35-a6a7-9c2d8bfae111   rack1

Step 5 – Create Cassandra Keyspace

In DataStax Cassandra, a keyspace is the top-level namespace that defines how data is replicated across the cluster. Before creating tables for replication, we first create a keyspace and configure its replication strategy and replication factor.

cqlsh 10.10.10.21 <port> -u xxxx -p xxx

CREATE KEYSPACE retail_ks
WITH replication = {
'class': 'NetworkTopologyStrategy',
'DC1': 1
};

-- verify 
DESCRIBE KEYSPACES;


-- Create the target table in Cassandra
USE retail_ks;

CREATE TABLE sales_transactions
(
   txn_id int,
   customer_name text,
   product_code text,
   txn_amount decimal,
   txn_timestamp timestamp,
   PRIMARY KEY(txn_id)
);

-- verify table structure
DESCRIBE TABLE retail_ks.sales_transactions;
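The column types above were chosen by hand to mirror the Oracle definitions from Step 3. As a rough illustration only (not the GoldenGate handler's actual conversion logic, which is richer), the mapping used in this example can be captured as:

```python
# Illustrative Oracle-to-CQL type mapping used when designing
# retail_ks.sales_transactions by hand. TXN_ID was narrowed to int
# manually; the handler's own rules differ.

ORACLE_TO_CQL = {
    "NUMBER": "decimal",    # TXN_AMOUNT
    "VARCHAR2": "text",     # CUSTOMER_NAME, PRODUCT_CODE
    "DATE": "timestamp",    # TXN_TIMESTAMP
}

def cql_column(name: str, oracle_type: str) -> str:
    """Return a CQL column definition for an Oracle column."""
    base = oracle_type.split("(")[0].upper()  # strip length: VARCHAR2(100) -> VARCHAR2
    return f"{name.lower()} {ORACLE_TO_CQL[base]}"

print(cql_column("CUSTOMER_NAME", "VARCHAR2(100)"))  # customer_name text
```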

Step 6 – Install GoldenGate on Oracle

Oracle GoldenGate software must be installed on the source server to capture database changes. In production environments, the silent installation method is commonly used because it allows automated and repeatable deployments without manual interaction.

-- Unpack GoldenGate binaries.
unzip 231000_fbo_ggs_Linux_x64_Oracle.zip

-- Run silent installation. Response file (ogg.rsp):
INSTALL_OPTION=ora23ai
SOFTWARE_LOCATION=/u01/app/ogg
START_MANAGER=true
MANAGER_PORT=7809
DATABASE_LOCATION=/u01/app/oracle/product/19c


-- Install
./runInstaller -silent -responseFile ogg.rsp

-- Launch GGSCI
cd /u01/app/ogg
./ggsci

GGSCI> INFO ALL
Program     Status
MANAGER     RUNNING

Step 7 – Enable Table Level Replication

Log in to the database and enable supplemental logging for the table.

GGSCI> DBLOGIN USERID ggadmin PASSWORD Welcome#123

GGSCI> ADD TRANDATA SALES_TRANSACTIONS
OGG-15132 Logging supplemental redo data enabled

Step 8 – Configure Extract

Create parameter file.

GGSCI> EDIT PARAMS EXT_SALES

EXTRACT EXT_SALES
USERID ggadmin PASSWORD Welcome#123
EXTTRAIL ./dirdat/st
TRANLOGOPTIONS INTEGRATEDPARAMS (PARALLELISM 4)
TABLE SALES_TRANSACTIONS;


-- Create extract
ADD EXTRACT EXT_SALES, INTEGRATED TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/st, EXTRACT EXT_SALES

-- start & verify its status
START EXTRACT EXT_SALES
INFO EXTRACT EXT_SALES

EXTRACT    EXT_SALES     Last Started 2026-03-15 21:48   Status RUNNING
Checkpoint Lag           00:00:00 (updated 00:00:04 ago)
Process ID               32418
Log Read Checkpoint      Oracle Redo Logs
                        SCN  845732918 (0x0000.3267a3b6)
                        2026-03-15 21:48:12

Step 9 – Configure Data Pump

The GoldenGate Data Pump reads captured changes from the local trail files and forwards them to the target GoldenGate deployment. This process improves reliability and allows data to be securely transported to the target environment.

EDIT PARAMS PUMP_SALES

EXTRACT PUMP_SALES
RMTHOST 10.10.10.21, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE SALES_TRANSACTIONS;

-- add pump
ADD EXTRACT PUMP_SALES, EXTTRAILSOURCE ./dirdat/st
ADD RMTTRAIL ./dirdat/rt, EXTRACT PUMP_SALES

-- Start Pump
START EXTRACT PUMP_SALES
INFO EXTRACT PUMP_SALES

EXTRACT    PUMP_SALES    Last Started 2026-03-15 21:48   Status RUNNING
Checkpoint Lag           00:00:00 (updated 00:00:02 ago)
Process ID               32455
Log Read Checkpoint      File ./dirdat/st000000000
                        RBA 18425

Step 10 – Configure Cassandra Handler

On the GoldenGate Big Data / DAA node, configure the Cassandra handler properties so that the Replicat process knows how to connect to the DataStax Cassandra cluster and apply incoming changes. The configuration defines connection details, consistency level, datacenter information, and the Java classpath required for the Cassandra driver; with it in place, Replicat can establish a connection to the cluster and apply the transactional changes coming from the Oracle source database.

[oggadmin@cass-hub01 ~]$ vi /u01/app/ogg_23ai/dirprm/cassandra.props
gg.handlerlist=cassandra

gg.handler.cassandra.type=cassandra
gg.handler.cassandra.mode=op
gg.handler.cassandra.contactPoints=10.20.30.41
gg.handler.cassandra.port=9042
gg.handler.cassandra.keyspace=retail_ops_ks
gg.handler.cassandra.consistencyLevel=LOCAL_QUORUM
gg.handler.cassandra.cassandraMode=async
gg.handler.cassandra.compressedUpdates=true
gg.handler.cassandra.datacenter=DC_MAIN
gg.log=log4j
gg.log.level=INFO
gg.report.time=30sec
gg.classpath=/opt/datastax/cassandra/resources/cassandra/lib/*:/u01/app/ogg_23ai/DependencyDownloader/dependencies/cassandra_4.1/*
javawriter.stats.full=true
javawriter.stats.display=TRUE

-- If authentication is enabled in DataStax Cassandra, the property file can also include credentials, depending on the adapter version and security model in use.
-- example
gg.handler.cassandra.username=oggsvc
gg.handler.cassandra.password=Str0ngPassw0rd
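With gg.handler.cassandra.mode=op, each captured DML operation is applied as its own statement against the mapped table. As a rough mental model only (the real handler binds prepared statements through the DataStax Java driver; this is not its code path), an insert maps along these lines:

```python
# Rough mental model of op-mode apply: one captured operation becomes one
# CQL statement against the mapped keyspace/table. Keyspace, table, and
# column names follow the example in this article.

def insert_to_cql(keyspace: str, table: str, row: dict) -> str:
    cols = ", ".join(row)
    vals = ", ".join(f"'{v}'" if isinstance(v, str) else str(v)
                     for v in row.values())
    return f"INSERT INTO {keyspace}.{table} ({cols}) VALUES ({vals});"

cql = insert_to_cql("retail_ops_ks", "sales_orders_rt",
                    {"order_id": 1004, "customer_name": "PACIFIC TRADING"})
print(cql)
# INSERT INTO retail_ops_ks.sales_orders_rt (order_id, customer_name) VALUES (1004, 'PACIFIC TRADING');
```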

Step 11 – Configure Replicat

At this stage, the source-side Extract and Data Pump are already running and writing trail records to the remote trail on the target GoldenGate deployment. The next step is to configure the Replicat process on the Cassandra side so that trail records can be applied into the target DataStax Cassandra keyspace and table.

In a GoldenGate Big Data or DAA style deployment, the first thing you should always do is connect to the deployment administration interface and verify that the deployment is up before creating the Replicat.

-- Log in to the deployment
[oggadmin@cass-hub01 ~]$ cd /u01/app/ogg_23ai/bin
[oggadmin@cass-hub01 bin]$ ./adminclient

Oracle GoldenGate Admin Client for Oracle
Version 23.10.0.25.10 OGGCORE_23.10.0.0.0_PLATFORMS_251018.0830

Copyright (C) 1995, 2025, Oracle and/or its affiliates. All rights reserved.

Linux, x64, 64bit (optimized), Generic on Oct 18 2025 14:00:54
Operating system character set identified as UTF-8.


-- connect to the deployment
OGG (not connected) 1> CONNECT http://10.10.40.25:9001 deployment CassandraDAA as oggadmin password "xxxxxx";
Successfully connected to CassandraDAA.

Before creating the Replicat, verify that the remote trail files are arriving from the Data Pump. This is a simple but important validation step. If the trail is not being written, the Replicat will start but will have nothing to process.

[oggadmin@cass-hub01 ~]$ cd /u01/app/ogg_23ai/var/lib/data
[oggadmin@cass-hub01 data]$ ls -ltr rt*

-rw-r----- 1 oggadmin oinstall  1048576 Mar 15 20:11 rt000000000
-rw-r----- 1 oggadmin oinstall   524288 Mar 15 20:16 rt000000001

This confirms that the remote trail prefix rt is present and the target deployment is receiving records from the source-side Pump process.

Next, let's create the Replicat.

OGG (http://10.10.40.25:9001 CassandraDAA as oggadmin@CassandraDAA) 3> EDIT PARAMS REP_CASS

REPLICAT REP_CASS
TARGETDB LIBFILE libggjava.so SET property=dirprm/cassandra.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 1000
DISCARDFILE ./var/lib/data/REP_CASS.dsc, APPEND, MEGABYTES 500
DISCARDROLLOVER AT 02:00
MAP APP_OWNER.SALES_ORDERS_SRC,
TARGET retail_ops_ks.sales_orders_rt;

-- add the repl process
OGG (http://10.10.40.25:9001 CassandraDAA as oggadmin@CassandraDAA) 4> ADD REPLICAT REP_CASS, EXTTRAIL ./var/lib/data/rt
REPLICAT added.

-- start the process
OGG (http://10.10.40.25:9001 CassandraDAA as oggadmin@CassandraDAA) 5> START REPLICAT REP_CASS
Sending START request to Manager ...
REPLICAT REP_CASS starting.

REPLICAT   REP_CASS      Last Started 2026-03-15 21:49   Status RUNNING
Checkpoint Lag           00:00:00 (updated 00:00:01 ago)
Process ID               32504
Log Read Checkpoint      File ./dirdat/rt000000000
                        RBA 18874


Parameter explanations …

REPLICAT REP_CASS defines the Replicat group name.

TARGETDB LIBFILE libggjava.so SET property=dirprm/cassandra.props tells GoldenGate to use the Java-based Big Data adapter library and load all Cassandra handler settings from the referenced properties file.

REPORTCOUNT EVERY 1 MINUTES, RATE prints transaction statistics regularly into the report file, which is very useful during initial testing and performance validation.

GROUPTRANSOPS 1000 batches operations to improve throughput. This value can be tuned depending on transaction profile, row size, and Cassandra write latency.

DISCARDFILE captures records that fail processing, while DISCARDROLLOVER keeps the discard file manageable.

The MAP statement defines the source Oracle table and the target DataStax Cassandra table.

In this example:
Oracle source table: APP_OWNER.SALES_ORDERS_SRC
Cassandra target table: retail_ops_ks.sales_orders_rt

Step 12 – Testing End-to-End Replication (Oracle to DataStax Cassandra)

In this section we will validate that the replication pipeline is working correctly by performing a few test transactions on the Oracle source database and confirming that the changes are applied on the DataStax Cassandra target table.

I am not going to discuss the initial load process here since it is relatively straightforward and typically performed using standard GoldenGate initial load methods or bulk data movement tools. For the purpose of this test, we assume that the base dataset has already been loaded into Cassandra and that GoldenGate is responsible only for capturing and replicating incremental changes.
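Still, a quick sanity check that the base dataset matches on both sides is cheap insurance before trusting incremental replication. A minimal sketch, assuming the primary keys have already been fetched from each side (in practice via cx_Oracle and cassandra-driver queries; the lists below reuse this article's sample data as stand-ins):

```python
# Minimal post-load validation: compare primary keys pulled from source
# and target. Any difference means the base dataset is out of sync.

def diff_keys(source_keys, target_keys):
    src, tgt = set(source_keys), set(target_keys)
    return {"missing_on_target": sorted(src - tgt),
            "extra_on_target": sorted(tgt - src)}

source = [1001, 1002, 1003]  # SELECT order_id FROM SALES_ORDERS_SRC
target = [1001, 1002]        # SELECT order_id FROM retail_ops_ks.sales_orders_rt
print(diff_keys(source, target))
# {'missing_on_target': [1003], 'extra_on_target': []}
```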

-- Source DB

SQL*Plus: Release 19.0.0.0.0 - Production
Version 19.21.0.0.0

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0

SELECT * FROM SALES_ORDERS_SRC;
ORDER_ID   CUSTOMER_NAME   ORDER_STATUS   ORDER_AMOUNT
--------   -------------   ------------   ------------
1001       NORTH RETAIL    CREATED        450
1002       ALPHA STORES    SHIPPED        720
1003       OMEGA MART      CREATED        610

On the target Cassandra cluster:

cqlsh 10.20.30.41 9042 -u cassandra -p cassandra
Connected to RetailCluster at 10.20.30.41:9042
[cqlsh 6.1.0 | DSE 6.8.x | CQL spec 3.4.6]

USE retail_ops_ks;

SELECT * FROM sales_orders_rt;
order_id | customer_name | order_status | order_amount
---------+---------------+--------------+--------------
1001     | NORTH RETAIL  | CREATED      | 450
1002     | ALPHA STORES  | SHIPPED      | 720
1003     | OMEGA MART    | CREATED      | 610

-- This confirms that the initial dataset already exists on both systems.

Let's make some changes on the source and see how they reach the target.

INSERT INTO SALES_ORDERS_SRC
VALUES (1004,'PACIFIC TRADING','CREATED',980);
COMMIT;

1 row created.

Commit complete.



-- Verify OGG
INFO EXTRACT EXT_SALES
EXTRACT EXT_SALES Last Started 2026-03-16 20:12 Status RUNNING
Checkpoint Lag 00:00:00
Log Read Checkpoint
SCN 845736122

-- The SCN moved. This confirms that the Extract process captured the transaction from the Oracle redo logs.

Now let's check it on the target.

SELECT * FROM sales_orders_rt
WHERE order_id = 1004;

order_id | customer_name    | order_status | order_amount
---------+------------------+--------------+--------------
1004     | PACIFIC TRADING  | CREATED      | 980

The inserted record has successfully replicated.

By performing these test operations, we validated that:

• GoldenGate Extract captured Oracle redo changes
• Data Pump delivered trail files to the target deployment
• Replicat applied changes into DataStax Cassandra

The Oracle -> GoldenGate -> Cassandra replication pipeline is now functioning correctly.

Hope It Helped!
Prashant Dixit


Parquet, Hadoop, and a quietly dying process: lessons from a migration test using GoldenGate 23ai DAA

Posted by FatDBA on February 8, 2026

I was doing some hands-on testing with Oracle GoldenGate 23ai DAA, trying to move data from an old but reliable Oracle 11g database into Microsoft Fabric. The idea was simple enough: capture changes from Oracle 11g, push them through GoldenGate 23ai, and land them in Fabric OneLake so they could be used by a Lakehouse or a Mirrored Database. On paper, it sounded clean. In real life… well, it took a bit of digging.

The source side was boring in a good way. Oracle 11g behaved exactly as expected. Extracts were running, trails were getting generated, no drama there. The real work was on the target side. I configured a Replicat using the File Writer with Parquet output, since Parquet is the natural fit for Microsoft Fabric. Fabric loves Parquet. Lakehouse loves Parquet. Mirrored databases too. So far, so good.

I started the Replicat and GoldenGate politely told me it had started. That tiny moment of relief you get when a command doesn’t fail right away. But then I checked the status… and it was STOPPED. No lag, no progress, nothing. That’s usually when you know something went wrong very early, before any real work even started.

So I opened the report file. And there it was. A Java error staring right back at me:

OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 18> START REPLICAT FATD11D
2025-12-12T21:25:18Z  INFO    OGG-00975  Replicat group FATD11D starting.
2025-12-12T21:25:18Z  INFO    OGG-15445  Replicat group FATD11D started.


OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 20> info replicat FATD11D

Replicat   FATD11D    Initialized  2025-12-12 16:24   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:55 ago)
Log Read Checkpoint  File dirdat/i1000000000
                     First Record  RBA 0
Encryption Profile   LocalWallet


OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 21> view report FATD11D

***********************************************************************
     Oracle GoldenGate for Distributed Applications and Analytics
                   Version 23.10.0.25.10 (Build 001)

                      Oracle GoldenGate Delivery
 Version 23.10.1.25.10 OGGCORE_23.10.0.0.0OGGRU_LINUX.X64_251018.0830
    Linux, x64, 64bit (optimized), Generic on Oct 18 2025 14:00:54

Copyright (C) 1995, 2025, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2025-12-12 16:25:18
***********************************************************************

2025-12-12 16:25:19  INFO    OGG-15052  Using Java class path: /testgg/app/ogg/ogg23ai/ogg23aidaa_MA//ggjava/ggjava.jar:/testgg/app/ogg/ogg23ai/ogg23aidaa_DEPLOYMENT/etc/conf/ogg:/u01/app/ogg/ogg
23ai/ogg23aidaa_MA/.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/parquet/hadoop/metadata/CompressionCodecName
        at oracle.goldengate.eventhandler.parquet.ParquetEventHandlerProperties.<init>(ParquetEventHandlerProperties.java:43)
        at oracle.goldengate.eventhandler.parquet.ParquetEventHandler.<init>(ParquetEventHandler.java:53)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
        at java.base/java.lang.Class.newInstance(Class.java:587)
        at oracle.goldengate.datasource.eventhandler.EventHandlerFramework.instantiateEventHandler(EventHandlerFramework.java:219)
        at oracle.goldengate.datasource.eventhandler.EventHandlerFramework.initEventHandler(EventHandlerFramework.java:163)
        at oracle.goldengate.datasource.eventhandler.EventHandlerFramework.init(EventHandlerFramework.java:58)
        at oracle.goldengate.handler.filewriter.FileWriterHandlerEO.init(FileWriterHandlerEO.java:627)
        at oracle.goldengate.datasource.AbstractDataSource.addDataSourceListener(AbstractDataSource.java:602)
        at oracle.goldengate.datasource.factory.DataSourceFactory.getDataSource(DataSourceFactory.java:164)
        at oracle.goldengate.datasource.UserExitDataSourceLauncher.<init>(UserExitDataSourceLauncher.java:45)
        at oracle.goldengate.datasource.UserExitMain.main(UserExitMain.java:109)
Caused by: java.lang.ClassNotFoundException: org.apache.parquet.hadoop.metadata.CompressionCodecName
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
        ... 15 more
2025-12-12 16:25:22  WARNING OGG-00869  java.lang.ClassNotFoundException: org.apache.parquet.hadoop.metadata.CompressionCodecName.

Source Context :
  SourceFile              : [/ade/aime_phxdbifa87/oggcore/OpenSys/src/gglib/ggdal/Adapter/Java/JavaAdapter.cpp]
  SourceMethod            : [HandleJavaException]
  SourceLine              : [350]
  ThreadBacktrace         : [19] elements
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libgglog.so(CMessageContext::AddThreadContext())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libgglog.so(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libgglog.so(_MSG_String(CSourceContext*, int, char const*, CMessageFactory::MessageDisposition))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libggjava.so()]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libggjava.so(ggs::gglib::ggdal::CJavaAdapter::Open())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::ggdal::CDALAdapter::Open())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(GenericImpl::Open())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(GenericImpl::GetWriter())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(GenericImpl::GetGenericDBType())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::er::ReplicatContext::ReplicatContext(ggs::gglib::ggapp::ReplicationContextParams const&, bool, ggs::gglib::
ggmetadata::MetadataContext*, ggs::er::ReplicatContext::LogBSNManager*))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::er::ReplicatContext::createReplicatContext(ggs::gglib::ggapp::ReplicationContextParams const&, ggs::gglib::
ggdatasource::DataSourceParams const&, ggs::gglib::ggmetadata::MetadataContext*))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat()]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::MultiThreading::MainThread::ExecMain())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::MultiThreading::Thread::RunThread(ggs::gglib::MultiThreading::Thread::ThreadArgs*))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::MultiThreading::MainThread::Run(int, char**))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(main)]
                          : [/lib64/libc.so.6()]
                          : [/lib64/libc.so.6(__libc_start_main)]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(_start)]

2025-12-12 16:25:22  ERROR   OGG-15051  Java or JNI exception:
java.lang.NoClassDefFoundError: org/apache/parquet/hadoop/metadata/CompressionCodecName.

2025-12-12 16:25:22  ERROR   OGG-01668  PROCESS ABENDING.

At that point it clicked. GoldenGate itself was fine. Oracle 11g was fine. Fabric wasn’t even in the picture yet. The problem was simpler. The Parquet libraries were missing.

All of the prerequisites ship in the DependencyDownloader directory, which contains download scripts for everything: Parquet, Hadoop, OneLake, Kafka, and more. Before touching anything, I checked Java; Java 17 was already installed. I ran the Parquet dependency script, Maven kicked in, downloaded a bunch of JARs, and finished successfully. I restarted the Replicat, feeling pretty confident. And… it failed again. A different error this time, though, which honestly felt like progress.
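As a side note, when a NoClassDefFoundError like this appears, a quick way to locate which jar should supply the class is to turn the reported class name into its .class entry and search the jar listings for it. A small hypothetical helper (the shell loop in the comments assumes the dependencies directory the downloader scripts create):

```python
# Hypothetical helper for chasing NoClassDefFoundError in the adapter
# classpath: convert the reported class name (dotted or slashed form)
# into the .class entry to look for inside each jar.

def class_entry(missing: str) -> str:
    return missing.replace(".", "/") + ".class"

entry = class_entry("org.apache.parquet.hadoop.metadata.CompressionCodecName")
print(entry)
# org/apache/parquet/hadoop/metadata/CompressionCodecName.class
#
# then search the downloaded jars, for example:
#   for j in dependencies/parquet_1.13.1/*.jar; do
#     unzip -l "$j" | grep -q CompressionCodecName.class && echo "$j"
#   done
```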

[oggadmin@D-ADON-01-CC-VM bin]$
[oggadmin@D-ADON-01-CC-VM bin]$ find /u01/app/ogg/ogg23ai -name "*.properties" | egrep -i "sample|example|handler|parquet|filewriter" | head -n 20
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/oci.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/kafka.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/hbase.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/parquet.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/kafkaconnect.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/azureservicebus.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/mongo.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/filewriter.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/bigquery.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/nosql.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/hdfs.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/synapse.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/redshift.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/pubsub.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/s3.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/redis.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/elasticsearch.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/jdbc.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/adw.properties
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/AdapterExamples/templates/jms.properties
[oggadmin@D-ADON-01-CC-VM bin]$

[oggadmin@D-ADON-01-CC-VM bin]$
[oggadmin@D-ADON-01-CC-VM bin]$ ls -ltrh /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/ggjava
total 60K
-rwxrwxr-x. 1 oggadmin ogg  34K Jun  5  2024 NOTICES.txt
-rwxrwxr-x. 1 oggadmin ogg   95 Oct 21 10:50 ggjava-version.txt
-rwxrwxr-x. 1 oggadmin ogg 9.5K Oct 21 10:50 ggjava.jar
drwxr-xr-x. 5 oggadmin ogg 4.0K Jan 29 16:51 resources
drwxr-xr-x. 6 oggadmin ogg 4.0K Jan 29 16:51 maven-3.9.6



[oggadmin@D-ADON-01-CC-VM bin]$ find /u01/app/ogg/ogg23ai -iname "onelake.sh" -o -iname "*parquet*.sh" -o -iname "*dependency*.sh"
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/onelake.sh
/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/parquet.sh
[oggadmin@D-ADON-01-CC-VM bin]$ /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/onelake.sh


[oggadmin@D-ADON-01-CC-VM bin]$ cd /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$ ls
aws.sh                    cassandra_dse.sh          gcs.sh                    hbase_hortonworks.sh         kafka.sh             orc.sh             snowflake.sh
azure_blob_storage.sh     cassandra.sh              googlepubsub.sh           hbase.sh                     kinesis.sh           parquet.sh         snowflakestreaming.sh
bigquery.sh               config_proxy.sh           hadoop_azure_cloudera.sh  internal_scripts             mongodb_capture.sh   project            synapse.sh
bigquerystreaming.sh      databricks.sh             hadoop_cloudera.sh        kafka_cloudera.sh            mongodb.sh           redis.sh           velocity.sh
cassandra_capture_3x.sh   docs                      hadoop_hortonworks.sh     kafka_confluent_protobuf.sh  onelake.sh           redshift.sh        xmls
cassandra_capture_4x.sh   download_dependencies.sh  hadoop.sh                 kafka_confluent.sh           oracle_nosql_sdk.sh  s3.sh
cassandra_capture_dse.sh  elasticsearch_java.sh     hbase_cloudera.sh         kafka_hortonworks.sh         oracle_oci.sh        snowflake-fips.sh
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$


[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$ java -version
openjdk version "17.0.18" 2026-01-20 LTS
OpenJDK Runtime Environment (Red_Hat-17.0.18.0.8-1.0.1) (build 17.0.18+8-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-17.0.18.0.8-1.0.1) (build 17.0.18+8-LTS, mixed mode, sharing)
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$


[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$ ./onelake.sh
openjdk version "17.0.18" 2026-01-20 LTS
Java is installed.
Apache Maven 3.9.6 (bc0240f3c744dd6b6ec2920b3cd08dcc295161ae)
Maven is accessible.
Root Configuration Script
INFO: This is the Maven binary [../../ggjava/maven-3.9.6/bin/mvn].
INFO: This is the location of the settings.xml file [./docs/settings_np.xml].
INFO: This is the location of the toolchains.xml file [./docs/toolchains.xml].
INFO: The dependencies will be written to the following directory[../dependencies/onelake].
INFO: The Maven coordinates are the following:
INFO: Dependency 1
INFO: Group ID [com.azure].
INFO: Artifact ID [azure-storage-file-datalake].
INFO: Version [12.20.0]
INFO: Dependency 2
INFO: Group ID [com.azure].
INFO: Artifact ID [azure-identity].
INFO: Version [1.13.1]
[INFO] Scanning for projects...
[INFO]
[INFO] ---------------< oracle.goldengate:dependencyDownloader >---------------
[INFO] Building dependencyDownloader 1.0
[INFO]   from pom_central_v2.xml
[INFO] --------------------------------[ pom ]---------------------------------
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/3.2.0/maven-clean-plugin-3.2.0.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/3.2.0/maven-clean-plugin-3.2.0.pom (5.3 kB at 24 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-plugins/35/maven-plugins-35.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-plugins/35/maven-plugins-35.pom (9.9 kB at 431 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-parent/35/maven-parent-35.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-parent/35/maven-parent-35.pom (45 kB at 1.7 MB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/apache/25/apache-25.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/apache/25/apache-25.pom (21 kB at 1.0 MB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/3.2.0/maven-clean-plugin-3.2.0.jar
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/3.2.0/maven-clean-plugin-3.2.0.jar (36 kB at 1.4 MB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-dependency-plugin/2.9/maven-dependency-plugin-2.9.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-dependency-plugin/2.9/maven-dependency-plugin-2.9.pom (13 kB at 602 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/plugins
.........
...............
...................
[INFO] Copying netty-tcnative-boringssl-static-2.0.65.Final-windows-x86_64.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/onelake/netty-tcnative-boringssl-static-2.0.65.Final-windows-x86_64.jar
[INFO] Copying reactive-streams-1.0.4.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/onelake/reactive-streams-1.0.4.jar
[INFO] Copying oauth2-oidc-sdk-11.9.1.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/onelake/oauth2-oidc-sdk-11.9.1.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  8.334 s
[INFO] Finished at: 2025-12-12T16:45:52-05:00
[INFO] ------------------------------------------------------------------------
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$
[oggadmin@D-ADON-01-CC-VM templates]$ cd /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader
[oggadmin@D-ADON-01-CC-VM templates]$   ./parquet.sh 1.13.1
openjdk version "17.0.18" 2026-01-20 LTS
Java is installed.
Apache Maven 3.9.6 (bc0240f3c744dd6b6ec2920b3cd08dcc295161ae)
Maven is accessible.
Root Configuration Script
INFO: This is the Maven binary [../../ggjava/maven-3.9.6/bin/mvn].
INFO: This is the location of the settings.xml file [./docs/settings_np.xml].
INFO: This is the location of the toolchains.xml file [./docs/toolchains.xml].
INFO: The dependencies will be written to the following directory[../dependencies/parquet_1.13.1].
.....
...........
.................
.....
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-hadoop/1.13.1/parquet-hadoop-1.13.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-hadoop/1.13.1/parquet-hadoop-1.13.1.pom (15 kB at 69 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet/1.13.1/parquet-1.13.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet/1.13.1/parquet-1.13.1.pom (25 kB at 790 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-column/1.13.1/parquet-column-1.13.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-column/1.13.1/parquet-column-1.13.1.pom (6.0 kB at 238 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-common/1.13.1/parquet-common-1.13.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-common/1.13.1/parquet-common-1.13.1.pom (3.4 kB at 143 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/parquet/parquet-format-structures/1.13.1/parquet-format-structures-1.13.1.pom
......
..............
...............
[INFO] Copying jackson-annotations-2.12.7.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/parquet_1.13.1/jackson-annotations-2.12.7.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  2.119 s
[INFO] Finished at: 2025-12-12T16:52:03-05:00
[INFO] ------------------------------------------------------------------------

Once again the Replicat on the target side failed to start, and this time with a different error.

OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 8>  info REPLICAT FATD11D

Replicat   FATD11D    Initialized  2025-12-12 16:24   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:34:28 ago)
Log Read Checkpoint  File dirdat/i1000000000
                     First Record  RBA 0
Encryption Profile   LocalWallet


OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 9> view report FATD11D

***********************************************************************
     Oracle GoldenGate for Distributed Applications and Analytics
                   Version 23.10.0.25.10 (Build 001)

                      Oracle GoldenGate Delivery
 Version 23.10.1.25.10 OGGCORE_23.10.0.0.0OGGRU_LINUX.X64_251018.0830
    Linux, x64, 64bit (optimized), Generic on Oct 18 2025 14:00:54

Copyright (C) 1995, 2025, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2025-12-12 16:58:47
***********************************************************************

2025-12-12 16:58:47  INFO    OGG-15052  Using Java class path: /testgg/app/ogg/ogg23ai/ogg23aidaa_MA//ggjava/ggjava.jar:/testgg/app/ogg/ogg23ai/ogg23aidaa_DEPLOYMENT/etc/conf/ogg:/u01/app/ogg/ogg
23ai/ogg23aidaa_MA/:/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/onelake/*:/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/parquet_1.13.
1/*.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
        at oracle.goldengate.eventhandler.parquet.GGParquetWriter.init(GGParquetWriter.java:72)
        at oracle.goldengate.eventhandler.parquet.ParquetEventHandler.init(ParquetEventHandler.java:219)
        at oracle.goldengate.datasource.eventhandler.EventHandlerFramework.initEventHandler(EventHandlerFramework.java:168)
        at oracle.goldengate.datasource.eventhandler.EventHandlerFramework.init(EventHandlerFramework.java:58)
        at oracle.goldengate.handler.filewriter.FileWriterHandlerEO.init(FileWriterHandlerEO.java:627)
        at oracle.goldengate.datasource.AbstractDataSource.addDataSourceListener(AbstractDataSource.java:602)
        at oracle.goldengate.datasource.factory.DataSourceFactory.getDataSource(DataSourceFactory.java:164)
        at oracle.goldengate.datasource.UserExitDataSourceLauncher.<init>(UserExitDataSourceLauncher.java:45)
        at oracle.goldengate.datasource.UserExitMain.main(UserExitMain.java:109)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
        ... 9 more

2025-12-12 16:58:48  WARNING OGG-00869  java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration.

Source Context :
  SourceFile              : [/ade/aime_phxdbifa87/oggcore/OpenSys/src/gglib/ggdal/Adapter/Java/JavaAdapter.cpp]
  SourceMethod            : [HandleJavaException]
  SourceLine              : [350]
  ThreadBacktrace         : [19] elements
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libgglog.so(CMessageContext::AddThreadContext())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libgglog.so(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libgglog.so(_MSG_String(CSourceContext*, int, char const*, CMessageFactory::MessageDisposition))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libggjava.so()]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/../lib/libggjava.so(ggs::gglib::ggdal::CJavaAdapter::Open())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::ggdal::CDALAdapter::Open())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(GenericImpl::Open())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(GenericImpl::GetWriter())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(GenericImpl::GetGenericDBType())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::er::ReplicatContext::ReplicatContext(ggs::gglib::ggapp::ReplicationContextParams const&, bool, ggs::gglib::
ggmetadata::MetadataContext*, ggs::er::ReplicatContext::LogBSNManager*))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::er::ReplicatContext::createReplicatContext(ggs::gglib::ggapp::ReplicationContextParams const&, ggs::gglib::
ggdatasource::DataSourceParams const&, ggs::gglib::ggmetadata::MetadataContext*))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat()]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::MultiThreading::MainThread::ExecMain())]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::MultiThreading::Thread::RunThread(ggs::gglib::MultiThreading::Thread::ThreadArgs*))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(ggs::gglib::MultiThreading::MainThread::Run(int, char**))]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(main)]
                          : [/lib64/libc.so.6()]
                          : [/lib64/libc.so.6(__libc_start_main)]
                          : [/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/bin/replicat(_start)]

2025-12-12 16:58:48  ERROR   OGG-15051  Java or JNI exception:
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration.

2025-12-12 16:58:48  ERROR   OGG-01668  PROCESS ABENDING.

That one made me pause for a second. The target wasn’t HDFS. I wasn’t running Hadoop. This was Microsoft Fabric. But here’s the catch. Parquet depends on Hadoop, even when you’re not using Hadoop directly. Some core Parquet classes expect Hadoop configuration classes to exist. No Hadoop libs, no Parquet writer.
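Before re-running the Replicat, you can verify this outside GoldenGate. The sketch below is a standalone probe (my own helper, not anything OGG ships) that asks the JVM for the exact class the Parquet writer abended on; run it with the same classpath as the Replicat’s Java adapter and it tells you immediately whether the Hadoop client jars are visible.

```java
// Standalone probe: can the JVM load the class the Parquet event handler
// failed on (org.apache.hadoop.conf.Configuration)?
public class HadoopProbe {

    // Returns true if the named class is loadable from the current classpath.
    static boolean classPresent(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String cls = "org.apache.hadoop.conf.Configuration";
        System.out.println(classPresent(cls)
                ? cls + " is on the classpath"
                : cls + " is MISSING, stage the Hadoop client jars first");
    }
}
```

If the probe reports the class missing, the Replicat will abend with the same NoClassDefFoundError shown above.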

So back to the DependencyDownloader I went, this time running the Hadoop script. More downloads, more JARs, more waiting.

[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$ cd /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader

[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$ ./hadoop.sh 3.4.2
openjdk version "17.0.18" 2026-01-20 LTS
Java is installed.
Apache Maven 3.9.6 (bc0240f3c744dd6b6ec2920b3cd08dcc295161ae)
Maven is accessible.
Root Configuration Script
INFO: This is the Maven binary [../../ggjava/maven-3.9.6/bin/mvn].
INFO: This is the location of the settings.xml file [./docs/settings_np.xml].
INFO: This is the location of the toolchains.xml file [./docs/toolchains.xml].
INFO: The dependencies will be written to the following directory[../dependencies/hadoop_3.4.2].
[INFO] ---------------< oracle.goldengate:dependencyDownloader >---------------
[INFO] Building dependencyDownloader 1.0
[INFO]   from pom_central_v2.xml
[INFO] --------------------------------[ pom ]---------------------------------
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/hadoop/hadoop-client/3.4.2/hadoop-client-3.4.2.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/hadoop/hadoop-client/3.4.2/hadoop-client-3.4.2.pom (11 kB at 58 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/hadoop/hadoop-project-dist/3.4.2/hadoop-project-dist-3.4.2.pom
Downloaded from central: https://repo.maven.apach
..........
................
.....................
[INFO] Copying netty-codec-stomp-4.1.118.Final.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/hadoop_3.4.2/netty-codec-stomp-4.1.118.Final.jar
[INFO] Copying dnsjava-3.6.1.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/hadoop_3.4.2/dnsjava-3.6.1.jar
[INFO] Copying netty-transport-native-unix-common-4.1.118.Final.jar to /testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/hadoop_3.4.2/netty-transport-native-unix-common-4.1.118.Final.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  7.627 s
[INFO] Finished at: 2025-12-12T18:02:30-05:00
[INFO] ------------------------------------------------------------------------
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$
[oggadmin@D-ADON-01-CC-VM DependencyDownloader]$

Once that finished, I restarted the Replicat again, with no big expectations. This time it stayed up.

OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 2> START REPLICAT FATD11D
2025-12-12T23:07:54Z  INFO    OGG-00975  Replicat group FATD11D starting.
2025-12-12T23:07:54Z  INFO    OGG-15445  Replicat group FATD11D started.

OGG (http://192.168.10.10:9001 OGG23AIDAA as BigData@) 3> info FATD11D
No Extract groups exist.

Replicat   FATD11D    Last Started 2025-12-12 18:07   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           47420
Log Read Checkpoint  File dirdat/i10000000001
                     First Record  RBA 167873
Encryption Profile   LocalWallet

The big takeaway from this whole exercise is pretty simple. When you’re replicating from an Oracle database to Microsoft Fabric using GoldenGate 23ai DAA, the tricky part is not Oracle, and not Fabric. It’s the middle layer. Parquet is the bridge, and Parquet brings Hadoop with it, whether you like it or not. If those dependencies aren’t staged correctly, the OGG processes will start, smile at you, and then quietly fall over 😀
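A cheap way to catch this before starting the Replicat is a small preflight that confirms each staged dependency directory actually contains jars. This is a hypothetical helper of my own (the directory names match my lab layout, not anything GoldenGate provides):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Preflight: warn loudly if a DependencyDownloader output directory is empty,
// so missing libraries are caught before the Replicat is started.
public class DependencyPreflight {

    // Counts .jar files directly under dir; 0 if the directory is missing or unreadable.
    static long countJars(Path dir) {
        if (!Files.isDirectory(dir)) return 0;
        try (Stream<Path> files = Files.list(dir)) {
            return files.filter(p -> p.toString().endsWith(".jar")).count();
        } catch (IOException e) {
            return 0;
        }
    }

    public static void main(String[] args) {
        // Base path and directory names are from my environment; adjust for yours.
        String base = "/testgg/app/ogg/ogg23ai/ogg23aidaa_MA/opt/DependencyDownloader/dependencies/";
        for (String d : new String[]{"onelake", "parquet_1.13.1", "hadoop_3.4.2"}) {
            long jars = countJars(Paths.get(base + d));
            System.out.println(d + ": " + (jars == 0 ? "NO JARS STAGED" : jars + " jars"));
        }
    }
}
```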

Once everything was in place, though, the setup worked exactly the way it should. A clean path from a legacy Oracle 11g database into a modern Microsoft Fabric Lakehouse. No magic. Just the right pieces, in the right order… and a bit of patience.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »

Lessons from Integrating Oracle 11g with GoldenGate – ORA-06512 ORA-06502

Posted by FatDBA on February 7, 2026

This one took time. More than I’d like to admit.

I was trying to run GoldenGate 21c Integrated Extract from a remote extraction server, sourcing redo from an Oracle 11g (11.2.0.4.201020, the October 2020 PSU) database. Everything seemed fine at first: I was able to create the extract and log in to the database, but I hit a weird error when I tried to register the extract with the 11g source database from the remote extraction server.

GGSCI (fatdbatestlab1) 7> dblogin useridalias ogg11g
Successfully logged into database.

GGSCI (fatdbatestlab1) GGREPAZUR@boom1) 8> REGISTER EXTRACT ext11g DATABASE

xxxx:xxx:xxxx   ERROR   OGG-08221  Cannot register or unregister Extract group EXT11G because of the following SQL error: OCI Error ORA 
(status = 6502-ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.DBMS_XSTREAM_GG_ADM", line 145
ORA-06512: at "SYS.DBMS_XSTREAM_GG_ADM", line 186
ORA-06512: at line 1).

So Integrated Extract was the plan from day one, along with the remote-extraction option, chosen for security and performance reasons. Even with that clarity, things didn’t go smoothly. Integrated Extract should work with 11g; the docs say so (provided you are on 11.2.0.4), and the compatibility matrices agree.

Now, just to be clear upfront: I already knew that if I tried classic (non-integrated) extract remotely, I would hit “OGG-02022 Logmining server does not exist on this Oracle database.” That part wasn’t a surprise. Classic extract + remote server + 11g… yeah, that’s expected.

Yet I kept running into weird behavior that just didn’t add up. That’s when the doubt started creeping in … “Am I missing something?” “Is Integrated Extract actually usable with 11g in real life… not just on paper?”

Before touching the source server, I paused and went deeper into Oracle notes and bugs. That’s when I landed on the real issue. This wasn’t a GoldenGate 21c problem. And it wasn’t a remote extraction limitation either. It was an Oracle 11g RDBMS bug.

The actual root cause was database Bug 28367006. Once I applied Patch 21683400 and ran datapatch, things finally started behaving like a sane system again.

GGSCI (fatdbatestlab1) GGREPAZUR@boom1) 8> REGISTER EXTRACT ext11g DATABASE
xxxx:xxx:xxx INFO    OGG-02003  Extract group EXT11G successfully registered with database at SCN 189381938103811.
 

It’s worth calling out that while Oracle 11g is technically supported for Integrated Extract, it is still a very old database release, and expectations need to be set accordingly. Running the latest available PSU is not optional in this kind of setup, and being aware of known bugs, defects, and architectural limitations is part of the job when working with legacy versions. In practice, if you stumble into a new or undocumented issue on 11g, Oracle Support is unlikely to engage development for a fresh bug fix, which means the only real options are workarounds, existing patches, or architectural adjustments. That reality alone makes proactive patching and careful design choices even more critical when pairing modern GoldenGate versions with older database platforms.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , | Leave a Comment »

Demo on how to use Oracle XStream Replication Setup Using Java Client

Posted by FatDBA on June 15, 2025

Oracle XStream is a feature of Oracle Database that allows for real-time data replication between databases. Unlike Oracle GoldenGate, which is more GUI-driven and feature-rich, XStream is more code-centric. This makes it highly customizable, especially suitable when you want a Java or C-based client to control how data flows.

XStream vs GoldenGate

  • XStream is older and less frequently updated compared to Oracle GoldenGate.
  • XStream is limited to Oracle-to-Oracle replication and does not support heterogeneous environments.
  • GoldenGate supports a wide range of sources and targets, is easier to manage with its GUI-based tools, and has better support for DDL replication, integrated capture, and coordinated replication.

When and Why to Use XStream

Use XStream when:

  • You need fine-grained control using custom applications written in Java or C.
  • Your environment is Oracle-to-Oracle only.
  • You want a lightweight, programmatic replication tool that can be embedded in your application logic.
  • Licensing or infrastructure limitations rule out GoldenGate, or its licensing fees are a concern.

Do not use XStream if:

  • You require GUI-driven replication monitoring and setup.
  • You need heterogeneous replication (e.g., Oracle to PostgreSQL, MySQL, etc.).
  • Your use case demands continuous support and new feature releases.

Let’s do a quick demo on how to use it. I have set up a lab environment for the test.

  • Source Database: dixitdb on 192.168.68.79:1521
  • Target Database: targetdb on 192.168.68.85:1521

Java Application (xio.java) will:

  • Connect to XStream Outbound on the source DB
  • Connect to XStream Inbound on the target DB
  • Transfer changes using Logical Change Records (LCRs)

Source Database Setup (dixitdb)

Step 1: Enable Replication Features

ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE=BOTH;

Step 2: Ensure Archive Logging is Enabled

archive log list;

Must show: Database log mode              Archive Mode

Step 3: Create and Grant User for XStream

CREATE USER xstream_admin IDENTIFIED BY Welcome123;
GRANT CONNECT, RESOURCE, DBA TO xstream_admin;

Step 4: Enable Supplemental Logging

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Step 5: Create Demo Table

CREATE TABLE xstream_admin.demo_table (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  value NUMBER
);

Step 6: Grant XStream Admin Privilege

BEGIN
  DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'XSTREAM_ADMIN',
    privilege_type => 'CAPTURE',
    grant_select_privileges => TRUE
  );
END;
/

Step 7: Create Outbound Server

BEGIN
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name => 'XOUT_SRV',
    connect_user => 'XSTREAM_ADMIN'
  );
END;
/

Check status:

SELECT capture_name, status FROM dba_capture;


Target Database Setup (targetdb)

Step 1: Create the Same User

CREATE USER xstream_admin IDENTIFIED BY Welcome123;
GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO xstream_admin;

Step 2: Setup Queue

BEGIN
  DBMS_XSTREAM_ADM.SET_UP_QUEUE(
    queue_table => 'xstream_admin.XIN_SRV',
    queue_name  => 'xstream_admin.XIN_SRV'
  );
END;
/

Step 3: Create Inbound Server

BEGIN
  DBMS_XSTREAM_ADM.CREATE_INBOUND(
    server_name =>'XIN_SRV',
    queue_name  =>'xstream_admin.XIN_SRV',
    apply_user  =>'xstream_admin',
    comment     =>'xstream_admin in'
  );
END;
/

Step 4: Create Target Table

CREATE TABLE xstream_admin.demo_table (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  value NUMBER
);

Step 5: Add Table Rules

VAR dml_rule VARCHAR2(30);
VAR ddl_rule VARCHAR2(30);
BEGIN
  DBMS_XSTREAM_ADM.ADD_TABLE_RULES(
    table_name => 'xstream_admin.demo_table',
    streams_type => 'APPLY',
    streams_name => 'XIN_SRV',
    queue_name => 'xstream_admin.XIN_SRV',
    source_database => 'dixitdb',
    dml_rule_name => :dml_rule,
    ddl_rule_name => :ddl_rule
  );
END;
/

Step 6: Start Apply Process

BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'XIN_SRV');
END;
/

Check status:
SELECT apply_name, status FROM dba_apply;

Java Application: xio.java

The xio.java program acts as a custom replication engine. It:

  • Connects to XStream Inbound and Outbound
  • Receives LCRs (Logical Change Records) from the source
  • Sends them to the target

Pre-Requisites:

  • ojdbc8-19.15.0.0.jar (Oracle JDBC driver)
  • xstreams.jar (from $ORACLE_HOME/rdbms/jlib/)

Below is the complete xio.java program:

import oracle.streams.*;
import oracle.jdbc.internal.OracleConnection;
import oracle.jdbc.*;
import oracle.sql.*;
import java.sql.*;
import java.util.*;

public class xio
{
public static String xsinusername = null;
public static String xsinpasswd = null;
public static String xsinName = null;
public static String xsoutusername = null;
public static String xsoutpasswd = null;
public static String xsoutName = null;
public static String in_url = null;
public static String out_url = null;
public static Connection in_conn = null;
public static Connection out_conn = null;
public static XStreamIn xsIn = null;
public static XStreamOut xsOut = null;
public static byte[] lastPosition = null;
public static byte[] processedLowPosition = null;

public static void main(String args[])
{
// get connection url to inbound and outbound server
in_url = parseXSInArguments(args);
out_url = parseXSOutArguments(args);

// create connection to inbound and outbound server
in_conn = createConnection(in_url, xsinusername, xsinpasswd);
out_conn = createConnection(out_url, xsoutusername, xsoutpasswd);

// attach to inbound and outbound server
xsIn = attachInbound(in_conn);
xsOut = attachOutbound(out_conn);

// main loop to get lcrs
get_lcrs(xsIn, xsOut);

// detach from inbound and outbound server
detachInbound(xsIn);
detachOutbound(xsOut);
}

// parse the arguments to get the connection url to inbound db
public static String parseXSInArguments(String args[])
{
String trace, pref;
String orasid, host, port;

if (args.length != 12)
{
printUsage();
System.exit(0);
}

orasid = args[0];
host = args[1];
port = args[2];
xsinusername = args[3];
xsinpasswd = args[4];
xsinName = args[5];

System.out.println("xsin_host = "+host);
System.out.println("xsin_port = "+port);
System.out.println("xsin_ora_sid = "+orasid);

String in_url = "jdbc:oracle:oci:@"+host+":"+port+":"+orasid;
System.out.println("xsin connection url: "+ in_url);

return in_url;
}

// parse the arguments to get the connection url to outbound db
public static String parseXSOutArguments(String args[])
{
String trace, pref;
String orasid, host, port;

if (args.length != 12)
{
printUsage();
System.exit(0);
}

orasid = args[6];
host = args[7];
port = args[8];
xsoutusername = args[9];
xsoutpasswd = args[10];
xsoutName = args[11];


System.out.println("xsout_host = "+host);
System.out.println("xsout_port = "+port);
System.out.println("xsout_ora_sid = "+orasid);

String out_url = "jdbc:oracle:oci:@"+host+":"+port+":"+orasid;
System.out.println("xsout connection url: "+ out_url);

return out_url;
}

// print out sample program usage message
public static void printUsage()
{
System.out.println("");
System.out.println("Usage: java xio "+"<xsin_oraclesid> " + "<xsin_host> "
+ "<xsin_port> ");
System.out.println(" "+"<xsin_username> " + "<xsin_passwd> "
+ "<xsin_servername> ");
System.out.println(" "+"<xsout_oraclesid> " + "<xsout_host> "
+ "<xsout_port> ");
System.out.println(" "+"<xsout_username> " + "<xsout_passwd> "
+ "<xsout_servername> ");
}

// create a connection to an Oracle Database
public static Connection createConnection(String url,
String username,
String passwd)
{
try
{
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(url, username, passwd);
}
catch(Exception e)
{
System.out.println("fail to establish DB connection to: " +url);
e.printStackTrace();
return null;
}
}

// attach to the XStream Inbound Server
public static XStreamIn attachInbound(Connection in_conn)
{
XStreamIn xsIn = null;
try
{
xsIn = XStreamIn.attach((OracleConnection)in_conn, xsinName,
"XSDEMOINCLIENT" , XStreamIn.DEFAULT_MODE);

// use last position to decide where we should start sending LCRs
lastPosition = xsIn.getLastPosition();
System.out.println("Attached to inbound server:"+xsinName);
System.out.print("Inbound Server Last Position is: ");
if (null == lastPosition)
{
System.out.println("null");
}
else
{
printHex(lastPosition);
}
return xsIn;
}
catch(Exception e)
{
System.out.println("cannot attach to inbound server: "+xsinName);
System.out.println(e.getMessage());
e.printStackTrace();
return null;
}
}

// attach to the XStream Outbound Server
public static XStreamOut attachOutbound(Connection out_conn)
{
XStreamOut xsOut = null;

try
{
// when attaching to an outbound server, the client needs to tell the
// outbound server the last position.
xsOut = XStreamOut.attach((OracleConnection)out_conn, xsoutName,
lastPosition, XStreamOut.DEFAULT_MODE);
System.out.println("Attached to outbound server:"+xsoutName);
System.out.print("Last Position is: ");
if (lastPosition != null)
{
printHex(lastPosition);
}
else
{
System.out.println("NULL");
}
return xsOut;
}
catch(Exception e)
{
System.out.println("cannot attach to outbound server: "+xsoutName);
System.out.println(e.getMessage());
e.printStackTrace();
return null;
}
}

// detach from the XStream Inbound Server
public static void detachInbound(XStreamIn xsIn)
{
byte[] processedLowPosition = null;
try
{
processedLowPosition = xsIn.detach(XStreamIn.DEFAULT_MODE);
System.out.print("Inbound server processed low Position is: ");
if (processedLowPosition != null)
{
printHex(processedLowPosition);
}
else
{
System.out.println("NULL");
}
}
catch(Exception e)
{
System.out.println("cannot detach from the inbound server: "+xsinName);
System.out.println(e.getMessage());
e.printStackTrace();
}
}

// detach from the XStream Outbound Server
public static void detachOutbound(XStreamOut xsOut)
{
try
{
xsOut.detach(XStreamOut.DEFAULT_MODE);
}
catch(Exception e)
{
System.out.println("cannot detach from the outbound server: "+xsoutName);
System.out.println(e.getMessage());
e.printStackTrace();
}
}

public static void get_lcrs(XStreamIn xsIn, XStreamOut xsOut)
{
if (null == xsIn)
{
System.out.println("xstreamIn is null");
System.exit(0);
}

if (null == xsOut)
{
System.out.println("xstreamOut is null");
System.exit(0);
}

try
{
while(true)
{
// receive an LCR from outbound server
LCR alcr = xsOut.receiveLCR(XStreamOut.DEFAULT_MODE);

if (xsOut.getBatchStatus() == XStreamOut.EXECUTING) // batch is active
{
assert alcr != null;
// send the LCR to the inbound server
xsIn.sendLCR(alcr, XStreamIn.DEFAULT_MODE);

// also get chunk data for this LCR if any
if (alcr instanceof RowLCR)
{
// receive chunk from outbound then send to inbound
if (((RowLCR)alcr).hasChunkData())
{
ChunkColumnValue chunk = null;
do
{
chunk = xsOut.receiveChunk(XStreamOut.DEFAULT_MODE);
xsIn.sendChunk(chunk, XStreamIn.DEFAULT_MODE);
} while (!chunk.isEndOfRow());
}
}
processedLowPosition = alcr.getPosition();
}
else // batch is end
{
assert alcr == null;
// flush the network
xsIn.flush(XStreamIn.DEFAULT_MODE);
// get the processed_low_position from inbound server
processedLowPosition =
xsIn.getProcessedLowWatermark();
// update the processed_low_position at outbound server
if (null != processedLowPosition)
xsOut.setProcessedLowWatermark(processedLowPosition,
XStreamOut.DEFAULT_MODE);
}
}
}
catch(Exception e)
{
System.out.println("exception when processing LCRs");
System.out.println(e.getMessage());
e.printStackTrace();
}
}

public static void printHex(byte[] b)
{
for (int i = 0; i < b.length; ++i)
{
System.out.print(
Integer.toHexString((b[i]&0xFF) | 0x100).substring(1,3));
}
System.out.println("");
}
}
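The printHex helper above renders the opaque XStream position bytes as lowercase hex. The same formatting can be done with String.format, shown here as a tiny standalone sketch (HexDemo is my own illustration, not part of the XStream API):

```java
// Equivalent of xio's printHex: format an XStream position byte array
// as a lowercase hex string.
public class HexDemo {

    static String toHex(byte[] b) {
        StringBuilder sb = new StringBuilder(b.length * 2);
        for (byte x : b) sb.append(String.format("%02x", x)); // two hex digits per byte
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] position = {0x00, 0x1a, (byte) 0xff, 0x42}; // made-up position value
        System.out.println(toHex(position)); // prints 001aff42
    }
}
```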


  • Downloaded a compatible version of the ojdbc driver, ojdbc8-19.15.0.0.jar, from Oracle’s website
  • Copied xstreams.jar from $ORACLE_HOME/rdbms/jlib/
  • Compiled the Java code to create the classes:

javac -cp "ojdbc8-19.15.0.0.jar:xstreams.jar:." xio.java

-- Once compiled, run the Java replication app.
-- This connects to the outbound server, reads LCRs (Logical Change Records), sends them to the inbound server, and confirms position/flush status.

[oracle@oracleontario lib]$ java -cp "ojdbc8-19.15.0.0.jar:xstreams.jar:." xio targetdb
192.168.68.85 1521 XSTREAM_ADMIN Welcome123 XIN_SRV dixitdb 192.168.68.79 1521
XSTREAM_ADMIN Welcome123 XOUT_SRV
xsin_host = 192.168.68.85
xsin_port = 1521
xsin_ora_sid = targetdb
xsin connection url: jdbc:oracle:oci:@192.168.68.85:1521:targetdb
xsout_host = 192.168.68.79
xsout_port = 1521
xsout_ora_sid = dixitdb
xsout connection url: jdbc:oracle:oci:@192.168.68.79:1521:dixitdb
Attached to inbound server:XIN_SRV
Inbound Server Last Position is: null
Attached to outbound server:XOUT_SRV
Last Position is: NULL

Now that the code is running and both source and target are ready, let’s do an insert on the source and see if it shows up on the target.

-- on source database
SQL> select * from xstream_admin.demo_table;

        ID NAME          VALUE
---------- ---------- ----------
        10 Maine           800

SQL> insert into xstream_admin.demo_table (ID, NAME, VALUE) values (101, 'Calgary', 100);

1 row created.

SQL> commit;

Commit complete.

SQL> /

        ID NAME          VALUE
---------- ---------- ----------
        10 Maine           800
       101 Calgary         100


-- on target database
SQL> select * from xstream_admin.demo_table;

        ID NAME          VALUE
---------- ---------- ----------
        10 Maine           800
       101 Calgary         100

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , , , | Leave a Comment »