Tales From A Lazy Fat DBA

It’s all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Posts Tagged ‘Bugs’

When GoldenGate decides to throw OGG-02912 just before New Year’s Eve.

Posted by FatDBA on December 31, 2025

Happy New Year! 🎉
Because nothing says “end of the year” like firing up a test lab, breaking a GoldenGate extract, and realizing that Oracle 11g still has unfinished business with you. I spent the last hours of the year chasing an error that politely reminded me: old databases never really retire — they just wait 😀

Nothing fancy. Just a simple setup. Or at least… that’s what I thought at the beginning.

The goal was straightforward: capture data from an Oracle 11gR2 (11.2.0.4) database using Oracle GoldenGate Integrated Extract, running from a centralized GoldenGate extract hub with remote integrated capture on a newer GoldenGate build (21c).

I’ve done this dozens of times with 12c and above. 11g though… well, 11g always has a way of reminding you that it’s old, but not that old 🙂

The Setup (Quick Context)

Source database: Oracle 11g Enterprise Edition 11.2.0.4 (OEL 7.x, 64-bit)
Capture mode: Integrated Extract
GoldenGate binaries: 21.x
Capture host: centralized GoldenGate extract hub using remote integrated capture (Linux OEL 8.x, 64-bit)
Simple test table, simple inserts.

Everything registered fine. Extract attached to LogMiner. No privilege errors. No Streams issues.
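
For reference, the extract had been registered and created the usual way; a rough sketch of the GGSCI steps (group, trail and credential alias names match the rest of this post):

GGSCI> DBLOGIN USERIDALIAS ogg_11g
GGSCI> REGISTER EXTRACT ext11g DATABASE
GGSCI> ADD EXTRACT ext11g, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/e1, EXTRACT ext11g
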
So far, so good. And then…

The Symptom:
Out of nowhere, the extract stopped. Running info all showed this, and opening the report file made it very clear this wasn’t a generic failure. Right at the bottom:

GGSCI (postgrequebec.quebdomain as ggreplication@DB11G) 31> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXT11G      00:00:00      00:11:14




GGSCI (postgrequebec.quebdomain as ggreplication@DB11G) 32> view report EXT11G
2025-12-30 15:41:53  INFO    OGG-06604  Connected to database DB11G, CPU info: CPU Count 1, CPU Core Count 1, CPU Socket Count 1.

2025-12-30 15:41:53  INFO    OGG-06618  Database DB11G Platform: Linux x86 64-bit.

2025-12-30 15:41:57  INFO    OGG-02248  Logmining server DDL filtering enabled.

2025-12-30 15:41:59  INFO    OGG-02068  Integrated capture successfully attached to logmining server OGG$CAP_EXT11G using OGGCapture API.

2025-12-30 15:41:59  INFO    OGG-02089  Source redo compatibility version is: 11.2.0.4.0.

2025-12-30 15:41:59  INFO    OGG-15446  Extract configured as  resource group.

2025-12-30 15:41:59  INFO    OGG-02086  Integrated Dictionary will be used.

2025-12-30 15:41:59  INFO    OGG-02710  Database metadata information is obtained from source database.

2025-12-30 15:41:59  WARNING OGG-02901  Replication of UDT and ANYDATA from redo logs is not supported with the Oracle compatible parameter setting. Using fetch instead.

2025-12-30 15:41:59  INFO    OGG-02776  Native data capture is enabled for Oracle NUMBER data type.

2025-12-30 15:41:59  INFO    OGG-01971  The previous message, 'INFO OGG-02776', repeated 1 times.

Source Context :
  SourceModule            : [ggdb.ora.ddl]
  SourceID                : [../gglib/ggdbora/ddlora.c]
  SourceMethod            : [metadata_from_logminer]
  SourceLine              : [1270]
  ThreadBacktrace         : [15] elements
                          : [/home/gg_adminremote/ogghome_21c/libgglog.so(CMessageContext::AddThreadContext())]
                          : [/home/gg_adminremote/ogghome_21c/libgglog.so(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...))]
                          : [/home/gg_adminremote/ogghome_21c/libgglog.so(_MSG_(CSourceContext*, int, CMessageFactory::MessageDisposition))]
                          : [/home/gg_adminremote/ogghome_21c/extract()]
                          : [/home/gg_adminremote/ogghome_21c/extract(RedoAPI::createInstance(ggs::gglib::ggdatasource::DataSource*, ggs::gglib::ggapp::ReplicationContext*))]
                          : [/home/gg_adminremote/ogghome_21c/extract(ggs::er::OraTranLogDataSource::setup())]
                          : [/home/gg_adminremote/ogghome_21c/extract(ggs::gglib::ggapp::ReplicationContext::establishStartPoints(char, ggs::gglib::ggdatasource::DataSourceParams const&))]
                          : [/home/gg_adminremote/ogghome_21c/extract(ggs::gglib::ggapp::ReplicationContext::initializeDataSources(ggs::gglib::ggdatasource::DataSourceParams&))]
                          : [/home/gg_adminremote/ogghome_21c/extract()]
                          : [/home/gg_adminremote/ogghome_21c/extract(ggs::gglib::MultiThreading::MainThread::ExecMain())]
                          : [/home/gg_adminremote/ogghome_21c/extract(ggs::gglib::MultiThreading::Thread::RunThread(ggs::gglib::MultiThreading::Thread::ThreadArgs*))]
                          : [/home/gg_adminremote/ogghome_21c/extract(ggs::gglib::MultiThreading::MainThread::Run(int, char**))]
                          : [/home/gg_adminremote/ogghome_21c/extract(main)]
                          : [/lib64/libc.so.6(__libc_start_main)]
                          : [/home/gg_adminremote/ogghome_21c/extract()]

2025-12-30 15:41:59  ERROR   OGG-02912  Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 or later.

2025-12-30 15:41:59  ERROR   OGG-01668  PROCESS ABENDING.

Understanding What Actually Went Wrong
This is one of those GoldenGate errors that looks scary but is actually very precise once you read it slowly. GoldenGate was telling me: “Hey, I’m trying to write trail records using a 12.2+ trail format, but your 11g database can’t mine redo in that format unless you patch it.”

Specifically: Integrated Extract defaults to newer trail formats, and an Oracle 11g mining database cannot support the 12.2+ trail format unless you apply Patch 17030189 (LogMiner GG dictionary support: missing attributes) to the 11g database home. And in most environments… patching 11g is not happening.
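
If you’re not sure whether that patch is already on the mining database home, a quick read-only check with opatch on the database server is enough (no output means it isn’t there):

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 17030189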

Here’s the subtle trap: you install GoldenGate 19c/21c and configure Integrated Extract. You don’t explicitly set a trail format, so GoldenGate assumes “modern source, modern trail”. But 11g is not modern; even 11.2.0.4, the best version of 11g, still has limits. So GoldenGate happily starts… and then politely crashes.

The Options on the Table. At this point, there were only three real choices:

Option 1: Patch the 11g database, i.e. apply Patch 17030189 to the database home.

Pros: Allows newer trail formats

Cons: Risky, operationally heavy, often blocked by policy, definitely not “lab friendly”

Option 2: Force an Older Trail Format. Tell GoldenGate to behave like it’s 2012 again.

Pros: No database patching, fully supported, safe and predictable

Cons: You give up newer trail features (more on that later). For me, option 2 was the obvious choice, and it is for many others too: clients often don’t want to change anything on an 11g database that is old but stable so far, and patching would mean additional planning, change requests and other operational risks.

Option 3: Use the built-in OGG script prvtlmpg.plb as a workaround.

Pros: Simple, straightforward, fast.

Cons: In production environments, this workaround introduces additional operational and audit risk, requires database-side intervention, and often triggers formal change and approval processes. It is particularly inconvenient in remote or centralized GoldenGate architectures, where GoldenGate is intentionally decoupled from the source database host. Since it alters mining-side database behavior, it is less clean and less maintainable than applying the official Oracle patch or avoiding the issue altogether by enforcing a compatible trail format.
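
For completeness, option 3 boils down to running the prvtlmpg.plb script that ships in the GoldenGate installation directory against the source/mining database as SYSDBA, roughly like this (which is exactly why it’s awkward in a remote-capture hub setup, where the OGG home and the database sit on different hosts):

-- prvtlmpg.plb lives in the OGG home, here /home/gg_adminremote/ogghome_21c
SQL> connect / as sysdba
SQL> @prvtlmpg.plb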

The Fix That I Used.
The fix itself was simple, but order matters. You stop the impacted extract, delete the existing trail (the trail header stores the format), update the extract parameter file with a new parameter, FORMAT RELEASE, then recreate the trail file and start your extract.
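
Roughly, the ordered GGSCI sequence looks like this (group and trail names from this lab; the actual output follows below):

GGSCI> STOP EXTRACT ext11g
GGSCI> DELETE EXTTRAIL ./dirdat/e1
GGSCI> EDIT PARAMS ext11g          -- add FORMAT RELEASE 11.2 to the EXTTRAIL line
GGSCI> ADD EXTTRAIL ./dirdat/e1, EXTRACT ext11g
GGSCI> START EXTRACT ext11g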

GGSCI (postgrequebec.quebdomain as ggreplication@DB11G) 36> DELETE EXTTRAIL ./dirdat/e1
Deleting extract trail ./dirdat/e1 for Extract group EXT11G.



GGSCI (postgrequebec.quebdomain as ggreplication@DB11G) 38>  ADD EXTTRAIL ./dirdat/e1, EXTRACT ext11g
EXTTRAIL added.



GGSCI (postgrequebec.quebdomain as ggreplication@DB11G) 41> view params EXT11G

EXTRACT ext11g
USERIDALIAS ogg_11g
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 512)

EXTTRAIL ./dirdat/e1, FORMAT RELEASE 11.2
DISCARDFILE ./dirrpt/ext11g.dsc, APPEND, MEGABYTES 50
REPORTCOUNT EVERY 30 MINUTES, RATE

TABLE ELEVENGTOFABRIC.TESTREPLTAB;






GGSCI (postgrequebec.quebdomain as ggreplication@DB11G) 52> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT11G      00:00:03      00:00:06

The Moment of Truth: Lag was moving. SCNs were advancing. Trail RBAs were increasing. No more abends. No more patch complaints. That’s when you know you’re done.
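
If you want a second opinion that the new trail really is in the older format, logdump (shipped with GoldenGate) can dump the trail file header, which includes the compatibility/format information; roughly (the exact trail file sequence number will differ):

$ ./logdump
Logdump> OPEN ./dirdat/e1000000000
Logdump> FILEHEADER DETAIL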

Why FORMAT RELEASE 11.2 Is Safe (and When It’s Not)? Let’s be clear: this isn’t a hack. This is documented, supported behavior.

What you lose: newer GoldenGate metadata, some advanced DDL capture details, newer datatype handling.

What you keep: full DML capture (INSERT / UPDATE / DELETE), stability, compatibility… and your sanity.

For 11g source systems, especially ones you don’t want to touch, this is the correct trade-off.

Final Thoughts
This issue is a perfect example of why GoldenGate work is never just about syntax. Everything was “correct”: privileges, integrated capture, registration… but one missing line quietly broke the entire pipeline. If you’re running 11g with modern GoldenGate, remember this: old database, old trail format, or be ready to patch.

And honestly… forcing FORMAT RELEASE 11.2 was the smarter move in this case: we completely avoided any modifications on the source system and kept the remote extraction going.

Hope It Helped!
Prashant Dixit


Online Patching & Bug Fixing: Bug 16342845 : EXCESSIVE CPU IN DBW PROCESSES FOR FAST OBJECT CHECKPOINTS

Posted by FatDBA on March 2, 2015

During further investigation we found that all the database writers had started consuming high resources and waiting on the ‘db file async I/O submit’ event, while the rest of the database sessions were waiting on ‘enq: KO – fast object checkpoint’.

We have one of our most critical production systems running on Oracle 11.2.0.3 on top of RHEL.
Recently we started experiencing a very bizarre issue where application timeouts happen intermittently, and during those specific periods I/O operations at the database level become extremely slow.

While investigating, we found that during the issue the DB started throwing huge and frequent waits on the ‘enq: KO – fast object checkpoint’ and ‘db file async I/O submit’ events, and every query in the database slowed to a crawl.
At the same time, all 8 available DBWR processes were waiting on the ‘db file async I/O submit’ event and appeared to take a long time to write blocks to disk; this led to delayed checkpointing, which in turn slowed the database down further.
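
To see where the time was going during one of these episodes, the standard views are usually enough; a rough example (v$session for the live picture, v$active_session_history for the last half hour):

-- current non-idle waits, grouped by event
SELECT event, COUNT(*) AS sessions_waiting
FROM   v$session
WHERE  wait_class <> 'Idle'
GROUP  BY event
ORDER  BY sessions_waiting DESC;

-- recent samples for the two suspect events
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
AND    event IN ('enq: KO - fast object checkpoint', 'db file async I/O submit')
GROUP  BY event;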

[Screenshot: database wait statistics captured during the slowness window]

From AWR
—————
Foreground wait event:
enq: KO – fast object checkpoint — 70% of DB time

Background wait event:
db file async I/O submit — 90% of background time

Wait Event Histogram
———————-
db file async I/O submit — 100% of waits took > 1s
enq: CR – block range reuse ckpt — 100% of waits took > 1s
enq: KO – fast object checkpoint — 100% of waits took > 1s
enq: RO – fast object reuse — 100% of waits took > 1s
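
The same histogram data can be pulled straight from the AWR repository via DBA_HIST_EVENT_HISTOGRAM (the counts there are cumulative per snapshot, so compare consecutive snap_ids to get a per-interval picture); a rough example, with :begin_snap/:end_snap covering the slow period:

SELECT snap_id, event_name, wait_time_milli, wait_count
FROM   dba_hist_event_histogram
WHERE  snap_id BETWEEN :begin_snap AND :end_snap
AND    event_name IN ('db file async I/O submit',
                      'enq: KO - fast object checkpoint')
ORDER  BY event_name, snap_id, wait_time_milli;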

After weeks of probing and hours of analyzing system state dumps, hanganalyze output and writer process traces, we reached the conclusion that we had been hit by Bug 16342845 : EXCESSIVE CPU IN DBW PROCESSES FOR FAST OBJECT CHECKPOINTS.

Below are the steps performed to apply the recommended patch.
In short, PRIMARY: disable log shipping; STANDBY: stop MRP; then patch and re-enable both.

1. Disable log shipping on the Primary Database.
2. Stop the MRP process on the Physical Standby Database.
3. Apply the patch first on the Physical Standby (online).
4. Monitor performance on the Physical Standby DB.
5. Apply the patch on the Logical Standby Database.
6. Monitor the Logical Standby performance.
7. Apply the patch on the Primary Database.
8. Monitor system performance.
9. Re-enable log shipping on the Primary DB.
10. Restart the MRP process on the Standby database.

Step 1.
Disable log shipping on Primary DB.

SQL> show parameter LOG_ARCHIVE_DEST_

NAME                  TYPE     VALUE
--------------------- -------- ----------------------------------------------------------
log_archive_dest_1    string   location="/vol5/oracle/dixit_db_arc/dixitdb/arch",
                               valid_for=(ALL_LOGFILES,ALL_ROLES)
log_archive_dest_2    string   service="dixitdb_al", LGWR ASYNC NOAFFIRM delay=0 optional
                               compression=disable max_failure=0 max_connections=1
                               reopen=300 db_unique_name="dixitdb_al" net_timeout=30,
                               valid_for=(all_logfiles,primary_role)
log_archive_dest_3    string   service="dixitdb_r", NOAFFIRM ASYNC
                               VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=onetm_r

SQL> alter system set log_archive_dest_state_2=defer scope=both;

Step 2:
Stop MRP process on Physical Standby Database.

Before
PROCESS STATUS DELAY_MINS
——— ———— ———-
MRP0 WAIT_FOR_LOG 0

SQL> alter database recover managed standby database cancel;

After cancelling managed recovery, the MRP process stops automatically.

PROCESS STATUS DELAY_MINS
——— ———— ———-
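
The before/after output above is what you would typically get from the v$managed_standby view on the standby; roughly:

SQL> SELECT process, status, delay_mins
     FROM   v$managed_standby
     WHERE  process LIKE 'MRP%';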

Step 3:
Opatch online apply – common to all environments.

Recommendations
1. Take binaries backup (ORACLE_HOME).
2. Take Full DB Backup.
3. Check Opatch Prerequisites in order to identify and resolve any patch conflicts.

-bash-3.2$ export PATH=$PATH:$ORACLE_HOME/OPatch

-bash-3.2$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /opt/app/oracle/product/11gR2
Central Inventory : /opt/app/oracle/oraInventory
from : /opt/app/oracle/product/11gR2/oraInst.loc
OPatch version : 11.2.0.3.0
OUI version : 11.2.0.3.0
Log file location : /opt/app/oracle/product/11gR2/cfgtoollogs/opatch/opatch2015-03-01_05-34-42AM_1.log

Invoking prereq “checkconflictagainstohwithdetail”

Prereq “checkConflictAgainstOHWithDetail” passed.

-bash-3.2$ opatch apply online -connectString dixitdb_al:sys:xxxxxxx
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home : /opt/app/oracle/product/11gR2
Central Inventory : /opt/app/oracle/oraInventory
from : /opt/app/oracle/product/11gR2/oraInst.loc
OPatch version : 11.2.0.3.0
OUI version : 11.2.0.3.0
Log file location : /opt/app/oracle/product/11gR2/cfgtoollogs/opatch/20505138_Mar_01_2015_05_35_29/apply2015-03-01_05-35-29AM_1.log

The patch should be applied/rolled back in ‘-all_nodes’ mode only.
Converting the RAC mode to ‘-all_nodes’ mode.
Applying interim patch ‘20505138’ to OH ‘/opt/app/oracle/product/11gR2’
Verifying environment and performing prerequisite checks…
All checks passed.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
Backing up files…

Patching component oracle.rdbms, 11.2.0.3.0…
Installing and enabling the online patch ‘bug20505138.pch’, on database ‘dixitdb_al’.

Verifying the update…
Patch 20505138 successfully applied
Log file location: /opt/app/oracle/product/11gR2/cfgtoollogs/opatch/20505138_Mar_01_2015_05_35_29/apply2015-03-01_05-35-29AM_1.log

OPatch succeeded.


From the alert log during the patch apply:

Sun Mar 01 06:17:13 2015
Patch file bug20505138.pch is out of sync with oracle binary; performing fixup
Patch file bug20505138.pch has been synced with oracle binary
Patch bug20505138.pch Installed – Update #1
Patch bug20505138.pch Enabled – Update #2
Sun Mar 01 06:17:15 2015
Online patch bug20505138.pch has been installed
Online patch bug20505138.pch has been enabled

-bash-3.2$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home : /opt/app/oracle/product/11gR2
Central Inventory : /opt/app/oracle/oraInventory
from : /opt/app/oracle/product/11gR2/oraInst.loc
OPatch version : 11.2.0.3.0
OUI version : 11.2.0.3.0
Log file location : /opt/app/oracle/product/11gR2/cfgtoollogs/opatch/opatch2015-03-01_05-35-53AM_1.log

Lsinventory Output file location : /opt/app/oracle/product/11gR2/cfgtoollogs/opatch/lsinv/lsinventory2015-03-01_05-35-53AM.txt

——————————————————————————–
Installed Top-level Products (1):

Oracle Database 11g 11.2.0.3.0
There are 1 products installed in this Oracle Home.

Interim patches (5) :

Patch (online) 20505138: applied on Sun Mar 01 05:35:42 CET 2015
Unique Patch ID: 18568953
Created on 9 Feb 2015, 19:34:29 hrs PST8PDT
Bugs fixed:
16367081, 16463153, 16342845


‘Hope That Helps’
Thanks
Prashant Dixit


ORA-09925 – Fixed (Error during Instance Creation)

Posted by FatDBA on June 27, 2012

ERROR:
ORA-09925: Unable to create audit trail file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 9925
ORA-01075: you are currently logged on

Resolution: Log in as root, make sure the adump directory pointed to by the AUDIT_FILE_DEST parameter exists, and grant it 775 (or, if really needed, 777) permissions, then retry.
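
A minimal check-and-fix sequence (the adump path below is just an example; use whatever SHOW PARAMETER returns on your instance):

SQL> show parameter audit_file_dest

# as root: create the directory if missing and open up permissions
mkdir -p /opt/app/oracle/admin/DB11G/adump
chown oracle:oinstall /opt/app/oracle/admin/DB11G/adump
chmod 775 /opt/app/oracle/admin/DB11G/adump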
