Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Posts Tagged ‘golden gate’

Golden Gate 19.1.0 – VIEW REPORT & VIEW PARAMS commands failing

Posted by FatDBA on March 24, 2020

Hi Folks,

I'd like to share one of the latest problems I've faced with GoldenGate release 19.1.0, where two of the most frequently used commands, VIEW REPORT and VIEW PARAMS, simply fail in the admin client console. This is a Windows server environment where I was doing some testing with this new release.
The commands produce no additional logs or information; the viewer simply tries to open and then crashes within a few seconds.

With no idea about the problem, and this being a new release of Oracle GG (19.1) for which no MOS note or public documentation was available yet, I reached out to Oracle Support. They gave a very simple but logical solution: set the default text viewer program for viewing parameter and report files once again, by passing the following command. It worked.


SET PAGER notepad 


They later declared this issue a bug, Bug 30427030, but the bug details are yet to be published on the My Oracle Support (Metalink) website.
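For reference, the full workaround sequence in the client looks like this; a minimal sketch only, where EXT1 is a hypothetical Extract group name used for illustration:

SET PAGER notepad
VIEW PARAMS EXT1
VIEW REPORT EXT1

After resetting the pager, both commands opened the files in Notepad normally instead of crashing.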


Stay inside, learn something new during this self-isolation & help stop the spread of Covid-19.
Stay Healthy and Safe Everyone!

Prashant Dixit
 



No active extraction maps – Golden Gate STATS command, what's that?

Posted by FatDBA on April 8, 2018

Hey Mates,

Not sure if you have ever encountered a situation where the GoldenGate STATS command returns the message "No active extraction maps" for one of your Golden Gate processes.
For GG newcomers, the STATS command displays statistics for one or more Extract, data pump or Replicat groups; the output covers the DML and DDL operations that are included in the Oracle GoldenGate configuration.
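The command also accepts options to narrow or shape the output; a quick sketch using the group and table names from this demo (standard STATS options, but check the reference guide for your GG version):

STATS EXTRACT GGTUNEX1, TOTAL, TABLE SCOTT.EMP
STATS REPLICAT GGTUNRP1, LATEST, REPORTRATE HR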

Now, coming back to the error scenario: let's see where we got that message and what it means.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 20> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     GGTUNEX1    00:00:03      00:00:08
EXTRACT     RUNNING     GGTUNPU1    00:00:00      00:00:01
REPLICAT    RUNNING     GGTUNRP1    00:00:00      00:00:03

Now, when I tried to check statistics for my processes, I got a message saying "No active extraction maps".

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 18> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

So what does that mean?
The message indicates that nothing has been processed yet by the Extract or Replicat. Once data for the tables mapped in the Extract and Replicat has been processed (captured from, or applied to, the database), the STATS command produces processing statistics.

Let me make a change to a source table that is part of the replication and see whether it shows up in the STATS results.

SQL> update emp set ENAME='KARTIKEY' where EMPNO=8090;
1 row updated.

SQL> commit;
Commit complete.

Let’s try now.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 43> stats GGTUNEX1

Sending STATS request to EXTRACT GGTUNEX1 ...

Start of Statistics at 2018-04-05 04:03:08.

Output to /acfsmounts/acfsggv1/ggdir/dirdat/s1:
Extracting from SCOTT.EMP to SCOTT.EMP:

*** Total statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Latest statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.

Yup, we’ve got stats for the process now after the change.

Hope It Helps
Prashant


Golden Gate Logdump Utility: How to find RBA using TIMESTAMP.

Posted by FatDBA on July 28, 2017

Hey Mates,
I'd like to discuss the well-known GoldenGate troubleshooting tool Logdump, specifically one command that is quite handy when you have millions of records in a trail file and constantly pressing 'n' to step through them would be a huge pain.
If you have a timestamp, you can jump straight to the matching record using the SFTS (SCANFORTIMESTAMP) command of Logdump.

Logdump 885 > usertoken on
Logdump 886 > ggstoken on
Logdump 887 > ghdr on
Logdump 888 > detail on

Logdump 889 > open ./dirdat/pe000067

Logdump 889 > sfts 2017/07/28 11:15:30
Scan for timestamp >= 2017/07/28 11:15:30.000.000 CEST

Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 705 (x02c1) IO Time : 2017/07/28 11:15:30.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 101 AuditPos : 223086608
Continued : N (x00) RecCount : 1 (x01)

2014/04/07 10:06:16.000.000 Insert Len 705 RBA 63547
Name: EAST.ORDERS
After Image: Partition 4 G s
0000 000a 0000 0000 0000 0000 0001 0001 000a 0000 | ....................
0000 0000 0000 0001 0002 0010 0000 000c 4c6f 7265 | ................Lore
6e20 5065 6e74 6f6e 0003 0004 ffff 0000 0004 0014 | n Penton............
0000 0010 3338 3230 2042 7572 6775 6e64 7920 5374 | ....3820 Burgundy St
0005 0004 ffff 0000 0006 000f 0000 000b 4e65 7720 | ................New
4f72 6c65 616e 7300 0700 0900 0000 0537 3031 3137 | Orleans........70117
0008 000d 0000 0009 4c6f 7569 7369 616e 6100 0900 | ........Louisiana...

GGS tokens:
TokenID x52 'R' ORAROWID Info x00 Length 20
4141 4157 534c 4141 4541 4141 414a 3141 4141 0001 | AAAWSLAAEAAAAJ1AAA..
TokenID x4c 'L' LOGCSN Info x00 Length 7
3831 3633 3430 34 | 8163404
TokenID x36 '6' TRANID Info x00 Length 9
312e 3239 2e39 3835 30 | 1.29.9850

Filtering suppressed 12 records
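As a side note, once the record of interest is located you have its trail sequence number (the file opened above is pe000067, i.e. sequence 67) and its RBA (63547 in this header). If you ever need to reposition a process from that exact point, the usual ALTER syntax can take both values; a hedged sketch, where RTEST1 is a hypothetical Replicat reading this trail:

ALTER REPLICAT RTEST1, EXTSEQNO 67, EXTRBA 63547
START REPLICAT RTEST1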

Hope It Helps
Prashant Dixit


Golden Gate High Availability using XAG Bundles!

Posted by FatDBA on November 9, 2016

Oracle Grid Infrastructure Bundled Agents (XAG) are Oracle Grid Infrastructure components that provide the HA framework to application resources and resource types managed through the bundled agent management interface, AGCTL.
This post will help to understand, implement the XAG agents to have the HA for Golden Gate.

To begin the installation, the XAG bundle zip file needs to be downloaded and expanded into a temporary directory.
The $XAG_HOME directory and its sub-directories must be owned by the Oracle Grid Infrastructure install owner, and the setup script, xagsetup.sh, must be run as that owner. The xagsetup.sh script offers the option to install locally only (the default), on all nodes in the cluster, or on a subset of nodes where the application is targeted to run. When deploying the bundled agents to multiple nodes, xagsetup.sh is executed on only one node of the cluster, and the script will attempt to create the $XAG_HOME directory on all remote nodes.

[oracle@monkey02 xag]$ ./xagsetup.sh --install --directory /u01/app/grid/product/xag --all_nodes
Installing Oracle Grid Infrastructure Agents on: monkey01
Installing Oracle Grid Infrastructure Agents on: monkey02
Installation of Oracle Grid Infrastructure Agents Successful!

The next step is to create the VIP resource that the GoldenGate resource will use to relocate itself to the remaining nodes when needed.
[root@monkey02 bin]# ./appvipcfg create -network=1 -ip=192.168.81.101 -vipname=testxag-ggs-vip -user=oracle -group=oinstall

Assign the required set of permissions to the grid install owner on this newly created VIP resource.
[root@monkey02 bin]# ./crsctl setperm resource testxag-ggs-vip -u user:oracle:r-x

Alright, so the VIP resource is created and is running fine.

--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
testxag-ggs-vip
      1        ONLINE  ONLINE       monkey02                 STABLE
--------------------------------------------------------------------------------

Next we will create the GoldenGate resource and configure it as the GoldenGate user.
Call the AGCTL utility from the XAG_HOME location and pass the required parameters/arguments, including the GG processes you want to monitor through it.

[oracle@monkey02 bin]$ ./agctl add goldengate xagtest_ggs \
--gg_home /olala/app/ggg \
--instance_type dual \
--nodes monkey01,monkey02 \
--vip_name testxag-ggs-vip \
--filesystems ora.monkeydds.monkeydds_vol.acfs \
--databases ora.tunedb.db \
--oracle_home /u01/app/oracle/product/12.1.0.2/dbhome_1 \
--monitor_extracts ETEST1,ETEST2,ETEST3,PTEST1,EPUMP1,EPUMP2,EPUMP3 \
--monitor_replicats PTEST1,PTEST2,PTEST3

Options & arguments explained:
--gg_home : GoldenGate installation directory
--instance_type : source|target|dual (dual is for bi-directional replication)
--oracle_home : Oracle Home path
--databases : associated database resources
--monitor_extracts : ext1,ext2,ext3,...
--monitor_replicats : rep1,rep2,rep3,...
--vip_name : VIP resource name
--filesystems : acfs1,acfs2,...

Okay, now that we are done with the configuration of the GG resource, let's verify it.

[oracle@monkey02 bin]$ ./agctl config goldengate xagtest_ggs
GoldenGate location is: /olala/app/ggg
GoldenGate instance type is: dual
Configured to run on Nodes: monkey01 monkey02
ORACLE_HOME location is: /u01/app/oracle/product/12.1.0.2/dbhome_1
Databases needed: ora.tunedb.db
File System resources needed: ora.monkeydds.monkeydds_vol.acfs
VIP name: testxag-ggs-vip
EXTRACT groups to monitor: ETEST1,ETEST2,ETEST3,PUMP1,PUMP2,PUMP3
REPLICAT groups to monitor: RTEST1,RTEST2,RTEST3
Critical EXTRACT groups:
Critical REPLICAT groups:
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no

Alright, so this is what we want. Next we have to start the GoldenGate resource that we created in the last step.

[oracle@monkey02 bin]$ ./agctl start goldengate xagtest_ggs

In the beginning the status will be STARTING:

xag.xagtest_ggs.goldengate

1 ONLINE OFFLINE monkey01 STARTING
——————————————————————————–

Now, once the resource is started, the cluster resource status will be STABLE:

xag.xagtest_ggs.goldengate
1 ONLINE ONLINE monkey01 STABLE
——————————————————————————–

Now we are good to go and can perform manual tests, such as relocating the resource to the other node.
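For example, a manual relocation and status check can be driven through AGCTL itself; a minimal sketch against the resource created above (verify the exact syntax with agctl's help for your XAG version):

[oracle@monkey02 bin]$ ./agctl relocate goldengate xagtest_ggs --node monkey01
[oracle@monkey02 bin]$ ./agctl status goldengate xagtest_ggs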

Hope It Helps!
Prashant Dixit


Upgrade Classic Extract & Replicat to Integrated Options.

Posted by FatDBA on October 28, 2016

Before Oracle 12c, the GoldenGate Replicat process could only work in serial mode, and to achieve parallelism we generally had to split our tables among multiple Replicat processes. That approach came with many restrictions and limitations.

Similarly, Integrated Capture (available since 11gR2) is one of the coolest features: the GoldenGate Extract process interacts directly with the database log mining server, which captures changes in the form of Logical Change Records (LCRs) and so makes use of internal procedures that already exist in the database. With Integrated Capture there is no need to fetch LOBs from tables, there is full support for XML and SecureFile LOBs, and it is transparent in RAC setups.

Since most of us are still working with the classic Extract and Replicat approach, below are the steps to upgrade a Classic Extract to Integrated Capture and a Classic Replicat to Integrated Replicat.

How to upgrade an existing GG Extract to the Integrated option:
Note:
– Stop both the EXTRACT and the related PUMP processes during the upgrade, after step 1.
– The following parameters are required for the integrated options and should be added to the parameter files of the respective processes.
Extract parameter:
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 1048, parallelism 4)
Replicat parameter:
DBOPTIONS INTEGRATEDPARAMS (parallelism 4)
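For context, this is roughly where the Extract parameter sits; a minimal sketch of an Extract parameter file only, with a hypothetical trail name and table spec (the group name and credentials match the example below):

EXTRACT EXSANDY1
USERID ggsaccount, PASSWORD test1
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 1048, parallelism 4)
EXTTRAIL ./dirdat/sa
TABLE scott.*;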

1. On the source server, grant administrative privileges for capture operations to the GoldenGate user using the following:

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'gg_owner', privilege_type => 'capture', grant_select_privileges => TRUE, do_grants => TRUE);

2. Register the Extract Group
On the source server, the primary extract group must be registered with the database using the REGISTER EXTRACT command.

GGSCI () 1> DBLOGIN USERID ggsaccount PASSWORD test1
Successfully logged into database.

GGSCI () 2> REGISTER EXTRACT EXSANDY1 DATABASE

Extract EXSANDY1 successfully registered with database at SCN 217817171.

3. Check whether the process is ready for the upgrade.

GGSCI ( as ggsaccount@SANDBOX1) 15> INFO EXSANDY1 UPGRADE
ERROR: Extract EXSANDY1 is not ready to be upgraded because recovery SCN 128118181 has not reached SCN 128119718.

This error tells us that the recovery SCN is currently at 128118181 and needs to reach 128119718.
To do this, ALTER the Extract, start it, and stop it again right away.

GGSCI () 5> ALTER EXTRACT EXSANDY1 TRANLOG BEGIN NOW
EXTRACT altered.

GGSCI () 7> START EXTRACT EXSANDY1
Sending START request to MANAGER …
EXTRACT EXSANDY1 starting

GGSCI () 8> INFO EXTRACT EXSANDY1

EXTRACT EXSANDY1 Last Started 2016-10-10 11:48:01 Status RUNNING
Checkpoint Lag 00:00:18 (updated 00:00:01 ago)
Log Read Checkpoint Oracle Redo Logs
2016-10-10 11:48:01 Seqno 102, RBA 121789
SCN 0.0 (0)


4. Check whether the process is ready for the upgrade now, after the change.

GGSCI ( as ggsaccount@SANDBOX1) 15> INFO EXSANDY1 UPGRADE
Extract EXSANDY1 is ready to be upgraded to integrated capture.

5. Now it's time to upgrade the EXTRACT process.

GGSCI () 11> ALTER EXTRACT EXSANDY1, UPGRADE INTEGRATED TRANLOG
Extract EXSANDY1 successfully upgraded to integrated capture.

It’s successfully upgraded to INTEGRATED option.

6. Let's confirm.

GGSCI () 8> INFO EXTRACT EXSANDY1

EXTRACT EXSANDY1 Initialized 2016-10-10 12:48:01 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:01:16 ago)
Log Read Checkpoint Oracle Integrated Redo Logs ————> Now it's showing that it is reading from the integrated redo logs.
2016-10-10 12:48:01
SCN 0.3112112 (3112112)

7. Start the PUMP process as well.
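A quick sketch of that last step, assuming the associated data pump group is called PXSANDY1 (a hypothetical name, since the pump group is not named in this example):

GGSCI () 9> START EXTRACT PXSANDY1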

How to upgrade an existing GG REPLICAT to the Integrated option:

1. Stop the REPLICAT process

GGSCI () 2> stop replicat RXSANDY1
Sending STOP request to REPLICAT RXSANDY1…
Request processed.

2. Login to the database

GGSCI () 1> DBLOGIN USERID ggsaccount PASSWORD test1
Successfully logged into database.

3. Alter the Replicat to upgrade it to the integrated option.

GGSCI () 4> alter replicat RXSANDY1, integrated
REPLICAT (Integrated) altered.

4. Start the replicat.
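That step is just a single command, using the same group we stopped in step 1:

GGSCI () 3> START REPLICAT RXSANDY1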

5. Check the status of the Replicat to confirm it has been converted.

GGSCI ( as ggsaccount@SANDBOX2) 45> info RXSANDY1

REPLICAT RXSANDY1 Initialized 2016-10-10 13:00:01 Status RUNNING
INTEGRATED
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Process ID 1897
Log Read Checkpoint File /dbs/next/gg/dirdat/RXSANDY1/DI000012
First Record RBA 0

Thanks
Prashant Dixit


“ORA-30511: invalid DDL operation in system triggers” during privilege grant for Golden Gate user.

Posted by FatDBA on June 28, 2016

While doing some tests with GoldenGate on my lab machine, I encountered a very strange situation where granting a specific system privilege to the Oracle GoldenGate user led to an error message. Below are the steps performed to fix the issue.

Here I was trying to grant the SELECT ANY DICTIONARY privilege to the GoldenGate user so that it could query data dictionary objects in the SYS schema.

SQL> grant select any dictionary to ggs_owner;
grant select any dictionary to ggs_owner
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-30511: invalid DDL operation in system triggers
ORA-06512: at line 999
ORA-30511: invalid DDL operation in system triggers

During the investigation I found that multiple tables had been dropped from the schema and their recycle bin entries (the BIN$ pointers) still existed.

SQL> conn ggs_owner
Enter password:
Connected.

SQL> select * from tab;

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
BIN$LtiM9Mw/3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwq3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwr3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwz3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxB3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxD3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxF3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxH3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxK3SvgUAB/AQARlw==$0 TABLE

CHKPTAB TABLE
CHKPTAB_LOX TABLE

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
GGS_DDL_COLUMNS TABLE
GGS_DDL_HIST TABLE
GGS_DDL_HIST_ALT TABLE
GGS_DDL_LOG_GROUPS TABLE
GGS_DDL_OBJECTS TABLE
GGS_DDL_PARTITIONS TABLE
GGS_DDL_PRIMARY_KEYS TABLE
GGS_DDL_RULES TABLE
GGS_DDL_RULES_LOG TABLE
GGS_MARKER TABLE
GGS_SETUP TABLE

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
GGS_STICK TABLE
GGS_TEMP_COLS TABLE
GGS_TEMP_UK TABLE

25 rows selected.
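As a side note, instead of eyeballing the BIN$ names in TAB, the dropped objects can be listed straight from the recycle bin view; a simple check (standard dictionary view, your output will differ):

SQL> SELECT object_name, original_name, type, droptime FROM user_recyclebin ORDER BY droptime;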

I manually purged the recycle bin for the GoldenGate schema and verified whether those recycle-bin entries were still showing up.

SQL> purge recyclebin;
Recyclebin purged.

SQL> select * from tab;

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
CHKPTAB TABLE
CHKPTAB_LOX TABLE
GGS_DDL_COLUMNS TABLE
GGS_DDL_HIST TABLE
GGS_DDL_HIST_ALT TABLE
GGS_DDL_LOG_GROUPS TABLE
GGS_DDL_OBJECTS TABLE
GGS_DDL_PARTITIONS TABLE
GGS_DDL_PRIMARY_KEYS TABLE
GGS_DDL_RULES TABLE
GGS_DDL_RULES_LOG TABLE

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
GGS_MARKER TABLE
GGS_SETUP TABLE
GGS_STICK TABLE
GGS_TEMP_COLS TABLE
GGS_TEMP_UK TABLE

16 rows selected.

Alright now we don’t have those entries coming anymore for the schema. Will try to grant the same privilege to the GG user back again and will see what happens now.

SQL> conn / as sysdba
Connected.

SQL> grant select any dictionary to ggs_owner;
Grant succeeded.

Cool, now that we have purged those bin entries for the schema, the grant to the GG user no longer raises an error.

Hope That Helps
Prashant D


Oracle Golden Gate: Tuning GG processes.

Posted by FatDBA on January 8, 2014

While doing load tests on one of our databases for GoldenGate, I found the Replicat process doing a full table scan (FTS) while updating a big batch of rows in a table.

Problem: the Replicat process was going for a full table scan while updating 804,000 rows.

Source Database = 11.2.0.3

Below is the UPDATE statement that was issued to update the 804,000 rows.
It completed in 15 seconds on the SOURCE database.

SQL_ID: ghsutzgq0m8j9

PLAN_TABLE_OUTPUT
——————————————————————————————-
SQL_ID ghsutzgq0m8j9
——————–
UPDATE dummy11.OS_HISTORYSTEP_BLD SET ENTRY_ID=5555, STEP_ID=100000,
ACTION_ID=20000, OWNER='JIGNESHKANKRECHA',
START_DATE=TO_TIMESTAMP('5/28/2012 9:19:11.910000
PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'),
FINISH_DATE=TO_TIMESTAMP('5/28/2012 9:19:11.910000
PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'),
DUE_DATE=TO_TIMESTAMP('5/28/2012 9:19:11.910000 PM','fmMMfm/fmDDfm/YYYY
fmHH12fm:MI:SS.FF AM'), STATUS='QC Validation failed', CALLER='KALIYA'
where ID BETWEEN 1220000 AND 5999999

Plan hash value: 3578452229

—————————————————————————————–
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
—————————————————————————————–
|   0 | UPDATE STATEMENT   |                    |       |       |   376 (100)|          |
|   1 |  UPDATE            | OS_HISTORYSTEP_BLD |       |       |            |          |
|   2 |   TABLE ACCESS FULL| OS_HISTORYSTEP_BLD |    47 |  4277 |   376   (1)| 00:00:05 |
—————————————————————————————–

——————————————————————
Target Database = 10.2.0.4
Below is the statement the Replicat executed. Because BATCHSQL was enabled, the Replicat batched a few rows at a time and sent them for update, which is why we see COUNT STOPKEY in the execution plan.

PLAN_TABLE_OUTPUT
—————————————————————————————
SQL_ID b4t7hv6p53165
——————–
UPDATE "dummy11"."OS_HISTORYSTEP_BLD" SET "ENTRY_ID" = :a1,"STEP_ID" =
:a2,"ACTION_ID" = :a3,"OWNER" = :a4,"START_DATE" = :a5,"FINISH_DATE" =
:a6,"DUE_DATE" = :a7,"STATUS" = :a8,"CALLER" = :a9 WHERE "ID" = :b0 AND ROWNUM = 1

Plan hash value: 185372276

——————————————————————————————
| Id  | Operation           | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
——————————————————————————————
|   0 | UPDATE STATEMENT    |                    |       |       |   304 (100)|          |
|   1 |  UPDATE             | OS_HISTORYSTEP_BLD |       |       |            |          |
|   2 |   COUNT STOPKEY     |                    |       |       |            |          |
|   3 |    TABLE ACCESS FULL| OS_HISTORYSTEP_BLD |     1 |    91 |   304   (1)| 00:00:04 |
——————————————————————————————

-- Excerpt from the report file:
BATCHSQL BATCHESPERQUEUE 100, BATCHTRANSOPS 10000, OPSPERBATCH 10000, OPSPERQUEUE 100000

MAP resolved (entry dummy11.OS_HISTORYSTEP_BLD):
MAP "dummy11"."OS_HISTORYSTEP_BLD", TARGET dummy11.OS_HISTORYSTEP_BLD, KEYCOLS(ID);

xxxxxxx 12:00:30  WARNING OGG-00869  No unique key is defined for table 'OS_HISTORYSTEP_BLD'. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.
Using following columns in default map by name:
ID, ENTRY_ID, STEP_ID, ACTION_ID, OWNER, START_DATE, FINISH_DATE,
DUE_DATE, STATUS, CALLER
Using the following key columns for target table dummy11.OS_HISTORYSTEP_BLD: ID.

-- Some statistics from the Replicat report:
xxxxxxx 12:18:22  INFO    OGG-01408  Restoring current schema for DDL operation to [GGADMIN].
104578 records processed as of xxxxxxxxxxxxxxxx 12:21:05 (rate 84,delta 236)
457309 records processed as of xxxxxxxxxxxxxxxx 13:35:07 (rate 80,delta 79)
913119 records processed as of xxxxxxxxxxxxxxxx 13:39:47 (rate 153,delta 1628)
943118 records processed as of xxxxxxxxxxxxxxxx 13:41:25 (rate 155,delta 305)
953118 records processed as of xxxxxxxxxxxxxxxx 13:42:43 (rate 155,delta 127)
963118 records processed as of xxxxxxxxxxxxxxxx 13:44:25 (rate 154,delta 98)
973118 records processed as of xxxxxxxxxxxxxxxx 13:46:29 (rate 153,delta 80)
983118 records processed as of xxxxxxxxxxxxxxxx 13:48:57 (rate 151,delta 67)
993118 records processed as of xxxxxxxxxxxxxxxx 13:51:47 (rate 148,delta 58)
1003118 records processed as of xxxxxxxxxxxxxxxx 13:54:59 (rate 146,delta 51)
1013118 records processed as of xxxxxxxxxxxxxxxx 13:58:35 (rate 143,delta 46)

** xxxxxxx - masked intentionally

SOLUTION:
1) Kill the database session of the Replicat that was running the UPDATE statement; this will abend the Replicat.
2) Add an index on the ID column, which is listed in KEYCOLS of the MAP statement (see the sketch after this list).
3) Start the Replicat.
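For step 2, a sketch of the index creation on the target, matching the index name that appears in the new plan below (OS_HIST_BLDIDX); storage options omitted, and the restart is done back in GGSCI:

SQL> CREATE INDEX dummy11.os_hist_bldidx ON dummy11.os_historystep_bld (id);

GGSCI (catlmsxt205) 2> START REPLICAT TAPSDR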

And we are back to normal.

GGSCI (catlmsxt205) 4> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     TAPSDR      00:00:00      03:32:40

GGSCI (catlmsxt205) 4> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     TAPSDR      00:00:00      03:32:42

GGSCI (catlmsxt205) 6> INFO ALL
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     TAPSDR      00:00:00      03:33:06

GGSCI (catlmsxt205) 7> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     TAPSDR      00:00:00      00:00:03

Execution plan of the UPDATE after adding the index:

PLAN_TABLE_OUTPUT
———————————————————————————————————————————————————————————————-
SQL_ID  b4t7hv6p53165, child number 0
————————————-
UPDATE "dummy11"."OS_HISTORYSTEP_BLD" SET "ENTRY_ID" = :a1,"STEP_ID" =
:a2,"ACTION_ID" = :a3,"OWNER" = :a4,"START_DATE" = :a5,"FINISH_DATE" =
:a6,"DUE_DATE" = :a7,"STATUS" = :a8,"CALLER" = :a9 WHERE "ID" = :b0 AND ROWNUM = 1

Plan hash value: 2252287618

—————————————————————————————–
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
—————————————————————————————–
|   0 | UPDATE STATEMENT   |                    |       |       |     5 (100)|          |
|   1 |  UPDATE            | OS_HISTORYSTEP_BLD |       |       |            |          |
|*  2 |   COUNT STOPKEY    |                    |       |       |            |          |
|*  3 |    INDEX RANGE SCAN| OS_HIST_BLDIDX     |     1 |    91 |     3   (0)| 00:00:01 |
—————————————————————————————–

Predicate Information (identified by operation id):
—————————————————

2 – filter(ROWNUM=1)
3 – access(“ID”=TO_NUMBER(:B0))


 