Tales From A Lazy Fat Oracle DBA

Disclaimer!

FatDBA or Oracle 'Ant' is an independent web blog/site. The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned in this website may be the trademarks of their respective owners.

Archive for the ‘Advanced’ Category

Advanced / Troubleshooting / Error & Bug Fixing

Golden Gate High Availability using XAG Bundles!

Posted by FatDBA on November 9, 2016

Oracle Grid Infrastructure Bundled Agents (XAG) are Oracle Grid Infrastructure components that provide the HA framework to application resources and resource types managed through the bundled agent management interface, AGCTL.
This post will help you understand and implement the XAG agents to provide HA for GoldenGate.

To begin installation, the XAG zip file needs to be downloaded and expanded in a temporary directory.
The $XAG_HOME directory and its sub-directories must be owned by the Oracle Grid Infrastructure install owner, and the setup script, xagsetup.sh, must be run as that owner. The xagsetup.sh script offers the option to install locally (the default), on all nodes in the cluster, or on a subset of nodes where the application is targeted to run. When deploying the bundled agents to multiple nodes, xagsetup.sh is executed on only one node of the cluster, and it will attempt to create the $XAG_HOME directory on all remote nodes.

[oracle@monkey02 xag]$ ./xagsetup.sh --install --directory /u01/app/grid/product/xag --all_nodes
Installing Oracle Grid Infrastructure Agents on: monkey01
Installing Oracle Grid Infrastructure Agents on: monkey02
Installation of Oracle Grid Infrastructure Agents Successful!

The next step is to create the VIP resource that the GoldenGate resource will use to relocate itself to the remaining nodes when needed.
[root@monkey02 bin]# ./appvipcfg create -network=1 -ip=192.168.81.101 -vipname=testxag-ggs-vip -user=oracle -group=oinstall

Assign the required set of permissions to the grid install owner on this newly created VIP resource.
[root@monkey02 bin]# ./crsctl setperm resource testxag-ggs-vip -u user:oracle:r-x

Alright, so the VIP resource is created and running fine.
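A quick way to confirm is a resource status query from the Grid home (a typical invocation; adjust the resource name to yours):

[oracle@monkey02 bin]$ ./crsctl stat res testxag-ggs-vip -t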

--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
testxag-ggs-vip
1 ONLINE ONLINE monkey02 STABLE

Next we will create the GoldenGate resource and configure it as the GoldenGate user.
Call the AGCTL utility from the $XAG_HOME/bin location and pass all required parameters/arguments for the GG processes you want to monitor.

[oracle@monkey02 bin]$ ./agctl add goldengate xagtest_ggs \
--gg_home /olala/app/ggg \
--instance_type dual \
--nodes monkey01,monkey02 \
--vip_name testxag-ggs-vip \
--filesystems ora.monkeydds.monkeydds_vol.acfs \
--databases ora.tunedb.db \
--oracle_home /u01/app/oracle/product/12.1.0.2/dbhome_1 \
--monitor_extracts ETEST1,ETEST2,ETEST3,PUMP1,PUMP2,PUMP3 \
--monitor_replicats RTEST1,RTEST2,RTEST3

Options & arguments explained:
--gg_home <GoldenGate installation directory>
--instance_type <source|target|dual> (dual is for bi-directional replication)
--oracle_home <path>
--databases <associated database resources>
--monitor_extracts <ext1,ext2,ext3,...>
--monitor_replicats <rep1,rep2,rep3,...>
--vip_name <VIP resource name>
--filesystems <acfs1,acfs2,...>

Okay, now that we are done with the configuration of the GG resource, let's verify it.

[oracle@monkey02 bin]$ ./agctl config goldengate xagtest_ggs
GoldenGate location is: /olala/app/ggg
GoldenGate instance type is: dual
Configured to run on Nodes: monkey01 monkey02
ORACLE_HOME location is: /u01/app/oracle/product/12.1.0.2/dbhome_1
Databases needed: ora.tunedb.db
File System resources needed: ora.monkeydds.monkeydds_vol.acfs
VIP name: testxag-ggs-vip
EXTRACT groups to monitor: ETEST1,ETEST2,ETEST3,PUMP1,PUMP2,PUMP3
REPLICAT groups to monitor: RTEST1,RTEST2,RTEST3
Critical EXTRACT groups:
Critical REPLICAT groups:
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no

Alright, so this is what we want. Next we have to start the GoldenGate resource we created in the last step.

[oracle@monkey02 bin]$ ./agctl start goldengate xagtest_ggs

In the beginning, the status will be STARTING:

xag.xagtest_ggs.goldengate

1 ONLINE OFFLINE monkey01 STARTING
——————————————————————————–

Now, once the resource is started, the state of the cluster resource will be ONLINE and STABLE.

xag.xagtest_ggs.goldengate
1 ONLINE ONLINE monkey01 STABLE
——————————————————————————–

Now we are good to go and can perform manual tests like relocating the resource to other nodes, as shown below.
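For example, a manual relocation with AGCTL would look something like this (a sketch; verify the relocate syntax against your XAG version):

[oracle@monkey02 bin]$ ./agctl relocate goldengate xagtest_ggs --node monkey01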

Hope It Helps!
Prashant Dixit


CLSRSC-351 & CRS-4000 Errors during execution of root.sh for GRID installation.

Posted by FatDBA on November 9, 2016

While doing a GRID installation on a machine where a few previous Grid installations had failed, I got several error messages while running the root.sh script during my installation attempt.

This is what I got while executing the root.sh script:

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘oracle’, privgrp ‘oinstall’..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4000: Command Pin failed, or completed with errors.
2016/11/07 21:30:06 CLSRSC-161: Pin node using the command ‘/u01/app/oracle/product/12.1.0/grid_1/bin/crsctl pin css -n fatdba’ failed

I tried executing it a second time, praying for some magic to happen this time 😉
This time there were more errors, but they left some clues and suggested actions.

[root@Fatdba /]#
[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
2016/11/07 21:32:21 CLSRSC-351: Improper Oracle Clusterware configuration found on this host

2016/11/07 21:32:21 CLSRSC-353: Run ‘/u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig’ to deconfigure existing failed configuration and then re-run ‘root.sh’

The command ‘/u01/app/oracle/product/12.1.0/grid_1/perl/bin/perl -I/u01/app/oracle/product/12.1.0/grid_1/perl/lib -I/u01/app/oracle/product/12.1.0/grid_1/crs/install /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl ‘ execution failed

Okay, so it is clear that this happened due to some previous mess on the system, made before I was given the task of installing the software. It says that an improper clusterware configuration was identified on the host, and that we should deconfigure it using the roothas.pl script.

So I tried, but it said the Oracle Restart stack is not active on the node (and it shouldn't be, as all the files were removed manually),
so it failed!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
2016/11/07 21:32:54 CLSRSC-39: Oracle Restart stack is not active on this node
2016/11/07 21:32:54 CLSRSC-312: Failed to verify HA resources
Died at /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsdeconfig.pm line 1358.

Let's try the last resort, the -force option, to remove the previous bad install.
And it worked!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
2016/11/07 21:39:06 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Let's try running the root.sh script again to complete this new GRID installation.
It worked now!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘oracle’, privgrp ‘oinstall’..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4664: Node fatdba successfully pinned.
2016/11/07 21:39:27 CLSRSC-330: Adding Clusterware entries to file ‘oracle-ohasd.conf’

fatdba 2016/11/07 21:40:01 /u01/app/oracle/product/12.1.0/grid_1/cdata/fatdba/backup_20161107_214001.olr 0
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘fatdba’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘fatdba’
CRS-2677: Stop of ‘ora.evmd’ on ‘fatdba’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘fatdba’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/11/07 21:41:50 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

Hope That Helps!
Prashant Dixit


Oracle GRID install failed - INS-40404

Posted by FatDBA on November 9, 2016

Many times, while doing some crude deletion or cleanup of GRID directories, you may have faced an error in the GUI or at the CLI just when you were almost sure you had deleted everything that was needed.
Bam! Right at that moment an error appears saying "The installer has detected a configured instance of Oracle grid infrastructure software on the server" with error code INS-40404.

You try recalling everything you did to clean and remove every trace of it ... no luck 😦

The solution to the problem is:
Go to the oracle folder under the /etc directory.
Remove the 'ocr.loc' file, which still points to the old local OCR locations, as sketched below.
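A minimal sketch of that cleanup (as root; renaming rather than deleting keeps a fallback copy):

[root@Fatdba /]# mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc.bak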

Hope It Helps!
Prashant Dixit


Upgrade Classic Extract & Replicat to Integrated Options.

Posted by FatDBA on October 28, 2016

Before Oracle 12c, the GoldenGate Replicat process could only work in serial mode, and to achieve parallelism we generally had to split our tables among multiple Replicat processes. That approach came with many restrictions and limitations.

Similarly, Integrated Capture (available since 11gR2) is one of the coolest things: the GoldenGate Extract process interacts directly with the database's log mining server, which captures changes in the form of Logical Change Records (LCRs), so it can make use of the already existing internal procedures in the database. With Integrated Capture there is no need to fetch LOBs from tables, there is full support for XML and SecureFiles LOBs, and it is transparent in RAC setups.

Since most of us still work with the classic approach to Extracts and Replicats, below are the steps to upgrade a classic Extract to Integrated Capture and a classic Replicat to Integrated Replicat.

How to upgrade an existing GG Extract to the Integrated option:
Note:
- Stop both the EXTRACT and the related PUMP processes during the upgrade, after step 1.
- The following parameters are required for the Integrated options and should be added to the parameter files of the processes:
Extract parameter:
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 1048, parallelism 4)
Replicat parameter:
DBOPTIONS INTEGRATEDPARAMS (parallelism 4)

1. On the source server, grant administrative privileges for capture operations to the GoldenGate user using the following:

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'gg_owner', privilege_type => 'capture', grant_select_privileges => true, do_grants => TRUE);

2. Register the Extract Group
On the source server, the primary extract group must be registered with the database using the REGISTER EXTRACT command.

GGSCI () 1> DBLOGIN USERID ggsaccount PASSWORD test1
Successfully logged into database.

GGSCI () 2> REGISTER EXTRACT EXSANDY1 DATABASE

Extract EXSANDY1 successfully registered with database at SCN 217817171.

3. Check whether the process is ready for the upgrade.

GGSCI ( as ggsaccount@SANDBOX1) 15> INFO EXSANDY1 UPGRADE
ERROR: Extract EXSANDY1 is not ready to be upgraded because recovery SCN 128118181 has not reached SCN 128119718.

This error tells us that the recovery SCN is currently at 128118181 and needs to reach SCN 128119718.
To do this, ALTER the Extract, start it, and then stop it again.

GGSCI () 5> ALTER EXTRACT EXSANDY1 TRANLOG BEGIN NOW
EXTRACT altered.

GGSCI () 7> START EXTRACT EXSANDY1
Sending START request to MANAGER …
EXTRACT EXSANDY1 starting

GGSCI () 8> INFO EXTRACT EXSANDY1

EXTRACT EXSANDY1 Last Started 2016-10-10 11:48:01 Status RUNNING
Checkpoint Lag 00:00:18 (updated 00:00:01 ago)
Log Read Checkpoint Oracle Redo Logs
2016-10-10 11:48:01 Seqno 102, RBA 121789
SCN 0.0 (0)


4. Check if the process is ready for the UPGRADE now after the change.

GGSCI ( as ggsaccount@SANDBOX1) 15> INFO EXSANDY1 UPGRADE
Extract EXSANDY1 is ready to be upgraded to integrated capture.

5. Now it's time to upgrade the EXTRACT process.

GGSCI () 11> ALTER EXTRACT EXSANDY1, UPGRADE INTEGRATED TRANLOG
Extract EXSANDY1 successfully upgraded to integrated capture.

It's successfully upgraded to the INTEGRATED option.

6. Let's confirm:

GGSCI () 8> INFO EXTRACT EXSANDY1

EXTRACT EXSANDY1 Initialized 2016-10-10 12:48:01 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:01:16 ago)
Log Read Checkpoint Oracle Integrated Redo Logs --------> now it's showing that it's reading from the integrated redo logs.
2016-10-10 12:48:01
SCN 0.3112112 (3112112)

7. Start the PUMP process as well.
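A sketch of that step, assuming the pump process is named PSANDY1 (the post doesn't name it):

GGSCI () 9> START EXTRACT PSANDY1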

How to upgrade an existing GG REPLICAT to the Integrated option:

1. Stop the REPLICAT process

GGSCI () 2> stop replicat RXSANDY1
Sending STOP request to REPLICAT RXSANDY1…
Request processed.

2. Log in to the database

GGSCI () 1> DBLOGIN USERID ggsaccount PASSWORD test1
Successfully logged into database.

3. Alter the Replicat to upgrade it to the INTEGRATED option.

GGSCI () 4> alter replicat RXSANDY1, integrated
REPLICAT (Integrated) altered.

4. Start the replicat.
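For example:

GGSCI () 5> START REPLICAT RXSANDY1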

5. Check the status of the Replicat to see if it's converted.

GGSCI ( as ggsaccount@SANDBOX2) 45> info RXSANDY1

REPLICAT RXSANDY1 Initialized 2016-10-10 13:00:01 Status RUNNING
INTEGRATED
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Process ID 1897
Log Read Checkpoint File /dbs/next/gg/dirdat/RXSANDY1/DI000012
First Record RBA 0

Thanks
Prashant Dixit


Cross Platform Migrations: ‘As Easy As Pie’ in Oracle 12c

Posted by FatDBA on August 22, 2016

The legendary transportable tablespace feature was introduced in Oracle 8i to make it convenient to transport large amounts of data between databases. Starting with Oracle 10g, this useful feature was enhanced with cross-platform support, which allowed tablespaces to be transported between databases deployed on different hardware platforms, or between platforms with different endian formats.

So until 11g, the migration activity involved RMAN, EXPDP, and IMPDP to transport tablespace(s) across platforms, along with the RMAN CONVERT statement. Below are the steps required to perform the migration work up to 11g.

Step 1: Check Platform Support and File Conversion Requirement
Step 2: Identify Tablespaces to be Transported and Verify Self-containment
Step 3: Check for Problematic Data Types
Step 4: Check for Missing Schemas and Duplicate Tablespace and Object Names
Step 5: Make Tablespaces Read-only in Source Database
Step 6: Extract Metadata from Source Database (We could use either data pump or original export to do this)
Step 7: Copy Files to Target Server and Convert if Necessary (Conversion involves RMAN)
Step 8: Import Metadata into Target Database (This step is sometimes called “plugging in” the tablespaces. Again we can use data pump or original import).
Step 9: Copy Additional Objects to Target Database as Desired

Oracle Database 12c introduces a very easy and novel way to do the same: the ability to transport tablespaces across platforms using just RMAN and RMAN (compressed) backupsets!

Let me show you how we can transport a tablespace from Oracle Linux to Oracle Solaris, an example of a cross-platform migration between different endian formats: Solaris is a big-endian OS whereas Linux is little-endian.

SQL> select banner from v$version;

BANNER
——————————————————————————–
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
PL/SQL Release 12.1.0.2.0 – Production
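The endian formats below presumably come from V$TRANSPORTABLE_PLATFORM, queried with something like:

SQL> SELECT platform_id, platform_name, endian_format FROM v$transportable_platform;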

PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
———– ————————————————– ————–
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
13 Linux x86 64-bit Little

SQL> SELECT tablespace_name, segment_type, COUNT(*),
2 SUM (bytes) / 1024 / 1024 mb
3 FROM dba_segments
4 WHERE owner = 'DIXIT'
5 GROUP BY tablespace_name, segment_type
6 ORDER BY 1, 2 DESC;

TABLESPACE_NAME SEGMENT_TYPE COUNT(*) MB
————— ———— ———- ——-
IND1 INDEX 88 1353.4
TAB1 TABLE 41 4079.6
TAB1 LOBSEGMENT 3 0.4
TAB1 LOBINDEX 3 0.2
TAB1 INDEX 53 106.4

Make Tablespaces Read-only in Source Database
With today’s filers and sophisticated storage systems, it is often possible to take a filer “snapshot” or split a mirror in order to get a copy of the data files very quickly. Extracting metadata is also quick. So, on a system with a good storage system, tablespaces may only need to be read-only for a few minutes.
NOTE: In 12c we can use a procedure that keeps the downtime to a minimum with the ‘Incremental Cross-Platform Transportable Tablespaces’. It also uses RMAN transportable backupsets but is a slightly more complicated procedure.

We put the tablespaces into read-only mode with the following statements:

SQL> ALTER TABLESPACE tab1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE ind1 READ ONLY;
Tablespace altered.

In order to create a TTS backup we have two RMAN options available: either BACKUP FOR TRANSPORT or BACKUP TO PLATFORM.

What's the difference between the two?
Answer: The difference between these two statements is where the datafile conversion takes place. The BACKUP FOR TRANSPORT statement should be used if the datafile conversion is to be performed on the target system, and the BACKUP TO PLATFORM statement should be used if the datafile conversion is to be performed on the source system.

For test purposes we will perform the conversion of the datafiles on the source, using the BACKUP TO PLATFORM statement with the name of the target platform as its argument.
We also have to provide some additional information, such as where RMAN should place the backupsets. RMAN will create one backupset with the datafile(s) and a second backupset with the metadata dump that EXPDP creates on RMAN's behalf. For this test I will create a compressed transportable backupset.

$ rman target /
Recovery Manager: Release 12.1.0.2.0 – Production on Sun Aug 21 10:49:57 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> backup to platform 'Solaris[tm] OE (64-bit)' as compressed backupset
2> tablespace xtransport format '/tmp/dbfilebackups.bck'
3> datapump format '/tmp/infometaexpdp.bck';
Starting backup at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces…
EXPDP> Starting “SYS”.”TRANSPORT_EXP_TUNEDB_Y7OJ”:
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table “SYS”.”TRANSPORT_EXP_V121_pyAn” successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TRANSPORT_EXP_TUNEDB_Y7OJ is:
EXPDP> /u01/app/oracle/product/12.1.0.2.0/db_1/dbs/backup_transporttbs_tunedb_181881.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace XTRANSPORT:
EXPDP> /u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
EXPDP> Job “SYS”.”TRANSPORT_EXP_TUNEDB_Y7OJ” successfully completed at Sun Aug 21 10:53:55 2016 elapsed 0 00:04:03
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/dbfilebackups.bck tag=TAG201698188T888 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/infometaexpdp.bck tag=TAG201678777U998 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:06
Finished backup at 21-AUG-16

Recovery Manager complete.

So, in short, RMAN has performed the following activities:
- Identified the tablespaces to be transported and verified self-containment.
- Extracted metadata from the source database using EXPDP.
- Created a compressed backupset containing the tablespace's datafile.
- Created a backupset containing the metadata dump.

Now it's time to restore the transportable backupset!

$ rman target /
Recovery Manager: Release 12.1.0.2.0 – Production on Sun Aug 21 12:39:13 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> restore from platform 'Solaris[tm] OE (64-bit)'
2> foreign tablespace IND1, TAB1 to new
3> from backupset '/tmp/dbfilebackups.bck'
4> dump file from backupset '/tmp/infometaexpdp.bck';
Starting restore at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=67 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace IND1, TAB1
channel ORA_DISK_1: reading from backup piece /tmp/dbfilebackups.bck
channel ORA_DISK_1: restoring foreign file 9 to /u01/db/tunedb/data/tunedb/datafile/o1_mf_testtransport_ab77hho11_.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/dbfilebackups.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0.2.0/db/dbs/o1_mf_ttftest_dixit_.dmp
channel ORA_DISK_1: reading from backup piece /tmp/infometaexpdp.bck
channel ORA_DISK_1: foreign piece handle=/tmp/infometaexpdp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04

Performing import of metadata…
IMPDP> Master table “SYS”.”TRANSDIXIT_IMPORT_tunedb_g7aahu” successfully loaded/unloaded
IMPDP> Starting “SYS”.”TRANSDIXIT_IMPORT_tunedb_g7aahu”:
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job “SYS”.”TRANSDIXIT_IMPORT_tunedb_g7aahu” successfully completed at Sun Aug 21 12:42:10 2016 elapsed 0 00:02:03
Import completed

Finished restore at 21-AUG-16

Recovery Manager complete.

Deducing from the RMAN restore logs, it is clear that RMAN completed the following steps:
- Restored the foreign tablespace's datafile from the datafile backupset.
- Restored the tablespace metadata from the metadata backupset.
- Imported the tablespace metadata using IMPDP.
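One step the restore log does not show: plugged-in tablespaces arrive read-only, so once the import completes you would typically switch them back on the target, e.g.:

SQL> ALTER TABLESPACE tab1 READ WRITE;
SQL> ALTER TABLESPACE ind1 READ WRITE;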

Hope That Helps
Prashant Dixit


Admin stuff on MongoDB Document Stores soon!

Posted by FatDBA on August 19, 2016

Yes, soon I will start sharing some material on one of the leading document-oriented databases.
I got certified a few years back, in the days when it wasn't that popular. Now it's one of the most popular document store databases, used by businesses transforming with Big Data.


Days of 'catbundle PSU apply' are over, let's welcome 'DATAPATCH' in 12c!!

Posted by FatDBA on August 19, 2016

Hi Mates,
With the introduction of Oracle 12c, the SQL commands belonging to a patch are no longer installed by catbundle.sql but by the "datapatch" tool, located in the OPatch directory. In addition, datapatch first checks whether the requirements for the installation are met.

Let me put it more simply: datapatch is the new tool that enables automation of post-patch SQL actions for RDBMS patches. So in 12c you no longer use catbundle psu apply; this is all done using datapatch.

With Enterprise Manager and OPatchAuto we get further automation of database patching, as they call datapatch automatically after applying the binary patch.

Enterprise Manager: Starting with version 12.1, Enterprise Manager calls datapatch to complete post-patch actions upon any 12c or later database restart.

OPatchAuto: OPatchAuto calls datapatch to complete post-patch actions upon installation of the binary patch and restart of the database.

OPatch: Datapatch integration with OPatch is not possible, because OPatch is executed while the database is down and datapatch requires the database to be open to complete its activity.
When patches are installed or rolled back using OPatch, datapatch needs to be invoked explicitly if the patch readme instructs you to do so.

RAC specific: In a RAC environment, after the binary patch has been applied on all nodes, run datapatch to complete the post-install SQL deployment for the PSU from one node only. Datapatch need not be run on all the nodes.

Below are the steps that you normally perform while applying a patch:

1. Apply the patch to the Oracle Home binaries using OPatch.
2. Restart your database once the patch is successfully applied to the binaries.
3. Apply the patch to the database using the DATAPATCH utility from the ORACLE_HOME/OPatch directory.
4. Query the database to check the patch apply status. To check the PSU applied to your database, use the following SQL statement:

select * from DBA_REGISTRY_SQLPATCH;
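For reference, a typical datapatch invocation after the binary patch is in place and the database is open looks like this:

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose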

Hope It Helps
Prashant Dixit


Number of Oracle Database 12c Log Writers? Yes, finally! But boon or bane?

Posted by FatDBA on August 4, 2016

I was really excited when I saw that, by default, my 12c test machine had two redo workers in addition to the "parent" log writer.


Here's a snippet from one of the test databases showing a parent and two redo workers:

$ ps -eaf|grep tunedb | grep ora_lg
oracle 54964 1 0 14:37 ? 00:00:00 ora_lgwr_tunedb
oracle 54968 1 0 14:37 ? 00:00:00 ora_lg00_tunedb
oracle 54972 1 0 14:37 ? 00:00:00 ora_lg01_tunedb

Yes, with 12c the wait is over. 🙂
Multiple LGWRs are great news, because serialization is the death of scalability.

But, but, but: I'm not sure yet how stable and good this feature is. I recently hit bug 19181582 (DEADLOCK BETWEEN LG0N ON 'LGWR WORKER GROUP ORDERING') in a 12c production environment because of this new feature.
It causes the database to hang, and at this moment the patch is not ready.

The solution is to set the instance parameter _use_single_log_writer=TRUE; with this parameter I was able to REDUCE the number of LGWRs to only one, as sketched below.
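A minimal sketch of that workaround (underscore parameters must be double-quoted, and the change needs a restart; touch hidden parameters only under Oracle Support guidance):

SQL> ALTER SYSTEM SET "_use_single_log_writer"=TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP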

Right now I am still working out how to control the number of log writer slaves with the _max_outstanding_log_writes and _max_log_write_parallelism instance-level parameters, and whether there is any automatic behavior that increases or decreases the number of redo workers.

Thanks
Prashant Dixit


Real Time Log Mining Steps — Yeah One more time ;)

Posted by FatDBA on July 19, 2016

Okay, this is nothing new, as many of us are well aware of and have experience with situations where we used log mining utilities to study archived log files. Below I will discuss the easiest steps to analyze archives using Oracle's built-in LogMiner utility.

Okay, so what is it, once again 🙂

LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. Typical uses include:

1) Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth.

2) Determining what actions you would have to take to perform fine-grained recovery at the transaction level. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.

3) Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.

4) Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them.

STEPS.
=========

Step 1: Specify the list of redo log files to be analyzed.
-----------------------------------------------------------------
Specify the redo log files which you want to analyze.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/opt/data/oracle4/xxxxxx/FRA/xxxxx/archivelog/2016_01_07/o1_mf_1_2514_9xpfvpf7_.arc', OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.
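To mine more than one archive log in the same session, subsequent files are added with the ADDFILE option; a sketch with a hypothetical second archive:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/opt/data/oracle4/xxxxxx/FRA/xxxxx/archivelog/2016_01_07/o1_mf_1_2515_xxxxxxxx_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);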


Step 2: Start LogMiner.

-----------------------------

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.


Step 3: Query the V$LOGMNR_CONTENTS view.

----------------------------------------------
Now, when the mining is complete, you can query the view to extract the details captured during the mining session.
It will provide details like username, XID, session details, actions, UNDO and REDO queries, timestamp, operation, session info, etc.

Example:
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO, TIMESTAMP, OS_USERNAME, MACHINE_NAME, SESSION# FROM V$LOGMNR_CONTENTS WHERE username='XXXXX' AND UPPER(SQL_REDO) LIKE '%xxxxxxxx%';

Step 4: End the LogMiner session.
----------------------------------------------
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Hope It Helps
Prashant D


“ORA-30511: invalid DDL operation in system triggers” during privilege grant for Golden Gate user.

Posted by FatDBA on June 28, 2016

While doing some tests on my lab machine for GoldenGate, I encountered a very strange situation where granting a specific system privilege to the GoldenGate user led to an error message. Below are the steps performed to fix the issue.

Here I was trying to grant the 'SELECT ANY DICTIONARY' privilege to the GoldenGate user so that it could query data dictionary objects in the SYS schema.

SQL> grant select any dictionary to ggs_owner;
grant select any dictionary to ggs_owner
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-30511: invalid DDL operation in system triggers
ORA-06512: at line 999
ORA-30511: invalid DDL operation in system triggers

During the investigation I found that multiple tables had been dropped from the schema, and their recycle bin pointers (the BIN$ entries) still existed.

SQL> conn ggs_owner
Enter password:
Connected.

SQL> select * from tab;

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
BIN$LtiM9Mw/3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwq3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwr3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9Mwz3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxB3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxD3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxF3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxH3SvgUAB/AQARlw==$0 TABLE
BIN$LtiM9MxK3SvgUAB/AQARlw==$0 TABLE

CHKPTAB TABLE
CHKPTAB_LOX TABLE

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
GGS_DDL_COLUMNS TABLE
GGS_DDL_HIST TABLE
GGS_DDL_HIST_ALT TABLE
GGS_DDL_LOG_GROUPS TABLE
GGS_DDL_OBJECTS TABLE
GGS_DDL_PARTITIONS TABLE
GGS_DDL_PRIMARY_KEYS TABLE
GGS_DDL_RULES TABLE
GGS_DDL_RULES_LOG TABLE
GGS_MARKER TABLE
GGS_SETUP TABLE

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
GGS_STICK TABLE
GGS_TEMP_COLS TABLE
GGS_TEMP_UK TABLE

25 rows selected.
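The same entries can also be confirmed from the recycle bin view itself, e.g.:

SQL> SELECT object_name, original_name, type FROM recyclebin;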

I manually purged the recycle bin for the GoldenGate schema and verified whether those recycle bin entries were still showing up.

SQL> purge recyclebin;
Recyclebin purged.

SQL> select * from tab;

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
CHKPTAB TABLE
CHKPTAB_LOX TABLE
GGS_DDL_COLUMNS TABLE
GGS_DDL_HIST TABLE
GGS_DDL_HIST_ALT TABLE
GGS_DDL_LOG_GROUPS TABLE
GGS_DDL_OBJECTS TABLE
GGS_DDL_PARTITIONS TABLE
GGS_DDL_PRIMARY_KEYS TABLE
GGS_DDL_RULES TABLE
GGS_DDL_RULES_LOG TABLE

TNAME TABTYPE CLUSTERID
—————————— ——- ———-
GGS_MARKER TABLE
GGS_SETUP TABLE
GGS_STICK TABLE
GGS_TEMP_COLS TABLE
GGS_TEMP_UK TABLE

16 rows selected.

Alright, now we don't have those entries showing up for the schema anymore. Let's try to grant the same privilege to the GG user again and see what happens.

SQL> conn / as sysdba
Connected.

SQL> grant select any dictionary to ggs_owner;
Grant succeeded.

Cool, now that we have purged those BIN$ entries for the schema, we no longer get any error while granting the GG user privileges.

Hope That Helps
Prashant D


 