Tales From A Lazy Fat DBA

$ prashantdixit/dbs90@ace as sysdba

Archive for the ‘troubleshooting’ Category

Auto Stats Gathering in Oracle 12c & Something Interesting :)

Posted by FatDBA on December 14, 2017

Hi Fellas,
Starting with Oracle 12c there is a new feature that automatically collects optimizer statistics when you perform bulk loads using either of these two methods:
– CREATE TABLE AS SELECT (CTAS)
– INSERT INTO … SELECT (into an empty table, using a direct-path insert)

SQL> explain plan for create table dixittab as select * from scottisdead;
Explained.
 
SQL> select * from table(DBMS_XPLAN.DISPLAY);
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 14312189
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |               |   500K|  8812K|   612   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | DIXITTAB      |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |               |   500K|  8812K|   371   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SCOTTISDEAD   |   500K|  8812K|   371   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------
10 rows selected. 

In the execution plan above you’ll see the new operation named “OPTIMIZER STATISTICS GATHERING” at Id 2.
Let’s verify that the stats were actually collected.

SQL> select table_name, last_analyzed from user_tables where table_name = 'DIXITTAB';
 
TABLE_NAME       LAST_ANALYZED
---------------- -------------
DIXITTAB         12-DEC-17

Yup, stats were collected!
In the same way, stats are auto collected during the other bulk-load method (INSERT INTO .. SELECT).

There may be times when you want to disable this feature, in situations like:
– Long/huge insert operations that spend too much time on stats gathering.
– Extremely large datasets where you don’t want to collect stats during the load.

To achieve that, Oracle provides a hint which instructs it not to gather table statistics.

SQL> create table dixittab as select /*+ NO_GATHER_OPTIMIZER_STATISTICS */ * from scottisdead;
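
To confirm the hint took effect, the earlier dictionary check can simply be rerun; with the hint in place, LAST_ANALYZED should now come back empty:

SQL> select table_name, last_analyzed from user_tables where table_name = 'DIXITTAB';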

Now, something interesting I would like to discuss …..
Is there any other condition under which the stats won’t be collected automatically, apart from blocking them with the NO_GATHER_OPTIMIZER_STATISTICS hint?

Let’s try some conventional bulk loading using the INSERT INTO .. SELECT method.
For this test I am intentionally commenting out one column on each side: the DATE_VAL column of the newly created table TABLE1, and the DATE_VALUE column of the source table SAMPLE.


SQL> create table table1 (ident number, date_val date, text_val varchar2(4000));
Table created.


SQL> explain plan for insert /*+ append */ into table1
(IDENT
--, DATE_VAL
, TEXT_VAL)
SELECT ID
--, DATE_VALUE
, TEXT_VALUE
FROM SAMPLE; 

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------

Plan hash value: 1523099961
-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |        |   100K|  2539K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT    | TABLE1 |       |       |            |          |
|   2 |   TABLE ACCESS FULL| SAMPLE |   100K|  2539K|   154   (1)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

😦 😦 Why was the auto stats gathering behavior not repeated this time??

This happened because Oracle needs all of the table’s columns to be included in the statement for the OPTIMIZER STATISTICS GATHERING operation to kick in. Let me show you what I mean:

SQL> explain plan for insert /*+ append */ into table1
(IDENT
, DATE_VAL
, TEXT_VAL)
SELECT ID
, DATE_VALUE
, TEXT_VALUE
FROM SAMPLE;  

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------

Plan hash value: 1523099961
-------------------------------------------------------------------------------------------
| Id  | Operation                        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | TABLE1 |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SAMPLE |   100K|  3320K|   154   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------

10 rows selected.

Yes, the stats were collected this time, once we included all the columns of the table.
I haven’t seen any documentation on this restriction of this new 12c feature. Hope Oracle adds it to the documentation soon 🙂 …..

Hope It Helps!
Prashant Dixit



Why is my ASM Command Line (ASMCMD) so slow, and how do I make ASMCMD run faster?

Posted by FatDBA on November 1, 2017

ASMCMD is a command-line utility that you can use to easily view and manipulate files and directories within Automatic Storage Management (ASM) disk groups. It can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more.

But at times I have noticed errors or slowness in command execution with ASMCMD, and I believe you have faced the same in the past. The problem with ASMCMD errors is that they are not very detailed and are quite obscure, which makes troubleshooting more complicated and directionless.

Below are a few of the methods I follow to handle performance issues with the asmcmd command line.

1. Use ORADEBUG
What happens when you connect with ASMCMD?
It actually connects to the ASM instance with the SYSASM privilege, and at the same moment a local background server process is spawned via the BEQ (bequeath) protocol.
Once you identify that process using ps -ef, you can attach ORADEBUG to it with the errorstack flag.
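
A minimal sketch of that ORADEBUG session (the OS PID 12345 below is just a placeholder for the process you found with ps -ef):

$ sqlplus / as sysasm

SQL> oradebug setospid 12345      -- attach to the ASMCMD server process
SQL> oradebug unlimit             -- lift the trace file size limit
SQL> oradebug dump errorstack 3   -- dump the error stack while the command is slow or hung
SQL> oradebug tracefile_name      -- print the location of the generated trace file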

2. Truss or STRACE of ASMCMD and its processes.

example:

$ strace -aeft -o /dixit/labtest/asmcmdtrbsst.log asmcmd
ASMCMD>

3. Set DBI_TRACE for ASMCMD Perl tracing
asmcmd is a wrapper for the asmcmdcore script, a shell script that starts a Perl program. If you are a Perl programmer, you can easily extend this script to add additional commands and security checks. We can use the DBI_TRACE environment variable to collect more diagnostic information from the ASM command line.

$ export DBI_TRACE=1
$ asmcmd
ASMCMD>
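
Standard Perl DBI tracing also lets you raise the verbosity and redirect the output to a file, for example (the path here is just illustrative):

$ export DBI_TRACE=3=/tmp/asmcmd_dbi.trc
$ asmcmd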

Hope That Helps
Prashant Dixit


CKPT process blocking table gather stats session intermittently … Why ?

Posted by FatDBA on November 1, 2017

Hi Folks,
Today I would like to share an experience we had while working on a customer’s production system: a weird situation where a gather-stats session was getting intermittently blocked by the CKPT database background process, sometimes staying blocked for more than 30 minutes.

We were hitting the “enq: RO – fast object reuse” wait contention when gathering schema/table statistics in parallel using the DBMS_STATS package with DEGREE > 1.

During the analysis I generated a system state dump and saw a clear blocking situation on the enqueue RO-00010059-00000001.

Snippet from the system state dump:

Resource                   Session  State
Enq RO-00010059-00000001   14       waiting for 'rdbms ipc message'
Enq RO-00010059-00000001   89       waiting for 14

The workaround for the problem is either of these two solutions:
– Flush the buffer cache.
Note that flushing the buffer cache causes dirty blocks to be written to disk and will have some performance impact.
– Set the hidden parameter “_db_fast_obj_truncate” to FALSE.
This reverts to the 9i way of invalidating buffers in the buffer cache.
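
A quick sketch of both workarounds; note that "_db_fast_obj_truncate" is a hidden parameter, so test it first or apply it only under Oracle Support's guidance:

SQL> alter system flush buffer_cache;

SQL> alter system set "_db_fast_obj_truncate"=FALSE scope=both;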

Hope That Helps
Prashant Dixit


CLSRSC-351 & CRS-4000 Errors during execution of root.sh for GRID installation.

Posted by FatDBA on November 9, 2016

While doing a GRID installation on a machine that had seen a few failed Grid installations earlier, I got a few error messages while running the root.sh script during my installation attempt.

This is what I got while executing the root.sh script:

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4000: Command Pin failed, or completed with errors.
2016/11/07 21:30:06 CLSRSC-161: Pin node using the command '/u01/app/oracle/product/12.1.0/grid_1/bin/crsctl pin css -n fatdba' failed

I tried executing it a second time, praying for some magic this time 😉
But this time there were more errors, which at least left some clues and suggested actions.

[root@Fatdba /]#
[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
2016/11/07 21:32:21 CLSRSC-351: Improper Oracle Clusterware configuration found on this host

2016/11/07 21:32:21 CLSRSC-353: Run '/u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig' to deconfigure existing failed configuration and then re-run 'root.sh'

The command '/u01/app/oracle/product/12.1.0/grid_1/perl/bin/perl -I/u01/app/oracle/product/12.1.0/grid_1/perl/lib -I/u01/app/oracle/product/12.1.0/grid_1/crs/install /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl ' execution failed

Okay, so it is clear that this happened due to some previous mess left on the system before I got the task to install the software. It says that an improper Clusterware configuration was identified on the host, and it asks us to deconfigure it using the roothas.pl script.

So I tried, but it said the Oracle Restart stack is not active on the node (and it shouldn't be, as all the files were removed manually),
so it failed!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
2016/11/07 21:32:54 CLSRSC-39: Oracle Restart stack is not active on this node
2016/11/07 21:32:54 CLSRSC-312: Failed to verify HA resources
Died at /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsdeconfig.pm line 1358.

Let’s try the last resort, the FORCE option, to remove the previous bad installs.
And it worked!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
2016/11/07 21:39:06 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Let’s run the root.sh script again to complete this new GRID installation.
It worked this time!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node fatdba successfully pinned.
2016/11/07 21:39:27 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

fatdba 2016/11/07 21:40:01 /u01/app/oracle/product/12.1.0/grid_1/cdata/fatdba/backup_20161107_214001.olr 0
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'fatdba'
CRS-2673: Attempting to stop 'ora.evmd' on 'fatdba'
CRS-2677: Stop of 'ora.evmd' on 'fatdba' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'fatdba' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/11/07 21:41:50 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

Hope That Helps!
Prashant Dixit


Oracle Tracing Capabilities – Part 1

Posted by FatDBA on August 31, 2016

Hi Mates,
There are many times when we have exhausted all of our known troubleshooting tips, tricks, and techniques, tried everything under the sun, and finally raised a request with Oracle Support; they then ask us to perform some unknown, uncanny, alien steps to troubleshoot the problem at hand and to share the trace files (which, most of the time, we don’t understand), which they analyze and draw conclusions from.

Today I would like to start a series of posts in which I will share steps to trace some of Oracle’s built-in tools and utilities. These steps will help you understand what tracing is available for them and how to enable it.

Trace Data Pump:
Sometimes importing/exporting a dump takes a long time to complete, hangs, or the session just ‘freezes’ for no apparent reason. Oracle provides an option to trace import/export sessions too, via the TRACE parameter; with it you can trace shadow sessions, master and worker processes, and other control processes.
You enable tracing by passing a seven-digit hexadecimal argument to the TRACE parameter. Below is the complete list of tracing levels.

10300 SHDW: To trace the Shadow process (API) (expdp/impdp)
20300 KUPV: To trace Fixed table
40300 'div': To trace Process services
80300 KUPM: To trace Master Control Process (MCP) (DM)
100300 KUPF: To trace File Manager
200300 KUPC: To trace Queue services
400300 KUPW: To trace Worker process(es) (DW)
800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all': To trace all components (full tracing)

How to use it:

impdp \’/ as sysdba\’ SCHEMAS=DIXIT PARALLEL=8 JOB_NAME=testing_tracedpump TRACE=1FF0300
********
KUPP:10:58:22.050: Input trace/debug flags: 01FF0300 = 11818181
KUPP:10:58:22.050: Current trace/debug flags: 01FF0300 = 11818181
SHDW:10:58:22.050: Current user = SYS
SHDW:10:58:22.050: Current schema = SYS
SHDW:10:58:22.050: Current language = AMERICAN_AMERICA.AL32UTF8
SHDW:10:58:22.052: Current session address = 000000007TYBGGG0
SHDW:10:58:22.052: *** OPEN call ***
SHDW:10:58:22.052: operation = IMPORT
SHDW:10:58:22.052: job_mode = schema
SHDW:10:58:22.052: version =
SHDW:10:58:22.052: compression = 2
KUPV:10:58:22.058: Master Table create statement: CREATE TABLE "SYS"."testing_tracedpump" (process_order NUMBER, duplicate NUMBER, dump_fileid NUMBER, dump_position NUMBER, dump_length NUMBER, dump_orig_length NUMBER
,,,,,,,,,,,,,,,,,,,,,,,,,

This will create trace files for the master control process (DM), the shadow process, and the worker processes (DW) under the trace directory.
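
To locate the generated traces, you can query the diag trace directory and look for the Data Pump process trace files, which are typically named like <SID>_dm00_<pid>.trc (master) and <SID>_dw00_<pid>.trc (worker); the ls pattern below is indicative:

SQL> select value from v$diag_info where name = 'Diag Trace';

$ ls -ltr <diag_trace_dir> | grep -E 'dm00|dw0'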
Next, I will write about more undocumented and hidden features of some of the tools we use on a daily basis.

Hope It Helps
Prashant Dixit


Number Of Oracle Database 12c Log Writers ? Yes Finally, but boon or bane ?

Posted by FatDBA on August 4, 2016

I was really excited when I saw that, by default, my 12c test machine had two redo workers in addition to the “parent” log writer.


Snippet from one of the test databases, with a parent and two redo workers:

$ ps -eaf|grep tunedb | grep ora_lg
oracle 54964 1 0 14:37 ? 00:00:00 ora_lgwr_tunedb
oracle 54968 1 0 14:37 ? 00:00:00 ora_lg00_tunedb
oracle 54972 1 0 14:37 ? 00:00:00 ora_lg01_tunedb

Yes, with 12c the wait is over. 🙂
Multiple LGWRs is great news, because serialization is the death of scalable processing.

But, but, but: I am not sure how stable and good this feature is yet. I recently hit bug 19181582 (DEADLOCK BETWEEN LG0N ON ‘LGWR WORKER GROUP ORDERING’) in a 12c production environment because of this new feature.
It causes the database to hang, and at the moment the patch is not ready.

The solution is to set the instance parameter _use_single_log_writer=TRUE; with this parameter I was able to reduce the number of LGWRs to only one.
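
A sketch of that change (a hidden parameter again, so test before touching production; it needs an instance restart to take effect):

SQL> alter system set "_use_single_log_writer"=TRUE scope=spfile;
SQL> shutdown immediate
SQL> startup

$ ps -ef | grep tunedb | grep ora_lg    -- only ora_lgwr_tunedb should remain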

Right now I am trying to understand how to control the number of log writer workers with the _max_outstanding_log_writes and _max_log_write_parallelism instance parameters, and whether there is any automatic behavior that increases or decreases the number of redo workers.

Thanks
Prashant Dixit


Real Time Log Mining Steps — Yeah One more time ;)

Posted by FatDBA on July 19, 2016

Okay, this is nothing new, as many of us are well aware of and have experienced situations where we used log mining utilities to study archived log files. Below I will cover the easiest steps to analyze archives using Oracle's built-in LogMiner utility.

Okay, so what is it? Once again 🙂

LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. Typical uses include:

1) Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth.

2) Determining what actions you would have to take to perform fine-grained recovery at the transaction level. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.

3) Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.

4) Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them.

STEPS.
=========

Step 1: Specify the list of redo log files to be analyzed.
-----------------------------------------------------------
Specify the redo log files which you want to analyze.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/opt/data/oracle4/xxxxxx/FRA/xxxxx/archivelog/2016_01_07/o1_mf_1_2514_9xpfvpf7_.arc', OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.
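
If more archives need to be mined in the same session, additional files can be appended with the ADDFILE option (the file name below is illustrative):

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/opt/data/oracle4/xxxxxx/FRA/xxxxx/archivelog/2016_01_07/o1_mf_1_2515_xxxxxxxx_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);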


Step 2: Start LogMiner.

-----------------------------

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
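
Optionally, the mining window can be narrowed to a time range, so that V$LOGMNR_CONTENTS returns only the rows of interest (the timestamps below are illustrative):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => TO_DATE('07-01-2016 09:00:00','DD-MM-YYYY HH24:MI:SS'), -
ENDTIME => TO_DATE('07-01-2016 10:00:00','DD-MM-YYYY HH24:MI:SS'), -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);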


Step 3: Query the V$LOGMNR_CONTENTS view.

----------------------------------------------
Once mining has started, you can query this view to extract the details captured during the mining session.
It will give you details like username, XID, session details, actions, UNDO and REDO SQL, timestamp, operation, session info, etc.

Example:
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO, TIMESTAMP, OS_USERNAME, MACHINE_NAME, SESSION# FROM V$LOGMNR_CONTENTS WHERE username='XXXXX' AND UPPER(SQL_REDO) LIKE '%xxxxxxxx%';

Step 4: End the LogMiner session.
----------------------------------------------
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Hope It Helps
Prashant D


“ORA-30511: invalid DDL operation in system triggers” during privilege grant for Golden Gate user.

Posted by FatDBA on June 28, 2016

While doing some tests for Golden Gate on my lab machine, I encountered a very strange situation where granting a specific system privilege to the Oracle Golden Gate user led to an error message. Below are the steps performed to fix the issue.

Here I was trying to grant the ‘SELECT ANY DICTIONARY’ privilege to the Golden Gate user so that it could query data dictionary objects in the SYS schema.

SQL> grant select any dictionary to ggs_owner;
grant select any dictionary to ggs_owner
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-30511: invalid DDL operation in system triggers
ORA-06512: at line 999
ORA-30511: invalid DDL operation in system triggers

During the investigation I found that multiple tables had been dropped from the schema, and their recycle-bin entries (the BIN$ location pointers) still existed.

SQL> conn ggs_owner
Enter password:
Connected.

SQL> select * from tab;

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
BIN$LtiM9Mw/3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwq3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwr3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwz3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxB3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxD3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxF3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxH3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxK3SvgUAB/AQARlw==$0 TABLE

CHKPTAB TABLE
CHKPTAB_LOX TABLE

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
GGS_DDL_COLUMNS TABLE
GGS_DDL_HIST TABLE
GGS_DDL_HIST_ALT TABLE
GGS_DDL_LOG_GROUPS TABLE
GGS_DDL_OBJECTS TABLE
GGS_DDL_PARTITIONS TABLE
GGS_DDL_PRIMARY_KEYS TABLE
GGS_DDL_RULES TABLE
GGS_DDL_RULES_LOG TABLE
GGS_MARKER TABLE
GGS_SETUP TABLE

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
GGS_STICK TABLE
GGS_TEMP_COLS TABLE
GGS_TEMP_UK TABLE

25 rows selected.
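
The same leftovers can also be checked directly in the recycle bin view before purging (a quick sketch; the output is omitted here):

SQL> select object_name, original_name, type, droptime from recyclebin;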

I manually purged the recycle bin for the Golden Gate schema and verified whether those recycle-bin entries still showed up.

SQL> purge recyclebin;
Recyclebin purged.

SQL> select * from tab;

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
CHKPTAB TABLE
CHKPTAB_LOX TABLE
GGS_DDL_COLUMNS TABLE
GGS_DDL_HIST TABLE
GGS_DDL_HIST_ALT TABLE
GGS_DDL_LOG_GROUPS TABLE
GGS_DDL_OBJECTS TABLE
GGS_DDL_PARTITIONS TABLE
GGS_DDL_PRIMARY_KEYS TABLE
GGS_DDL_RULES TABLE
GGS_DDL_RULES_LOG TABLE

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
GGS_MARKER TABLE
GGS_SETUP TABLE
GGS_STICK TABLE
GGS_TEMP_COLS TABLE
GGS_TEMP_UK TABLE

16 rows selected.

Alright, those entries no longer show up for the schema. Let’s try granting the same privilege to the GG user again and see what happens now.

SQL> conn / as sysdba
Connected.

SQL> grant select any dictionary to ggs_owner;
Grant succeeded.

Cool. Now that we have purged those BIN$ entries for the schema, the privilege grant to the GG user completes without any error.

Hope That Helps
Prashant D


 