Tales From A Lazy Fat Oracle DBA


Golden Gate High Availability using XAG Bundles!

Posted by FatDBA on November 9, 2016

Oracle Grid Infrastructure Bundled Agents (XAG) are Oracle Grid Infrastructure components that provide the HA framework to application resources and resource types managed through the bundled agent management interface, AGCTL.
This post will help you understand and implement the XAG agents to provide HA for GoldenGate.

To begin the installation, download the XAG bundle zip file and expand it in a temporary directory.
The $XAG_HOME directory and its sub-directories must be owned by the Oracle Grid Infrastructure install owner, and the setup script, xagsetup.sh, must be run as that owner. The xagsetup.sh script offers the option to install locally only (the default), on all nodes in the cluster, or on a subset of cluster nodes where the application is targeted to run. When deploying the bundled agents to multiple nodes in the cluster, xagsetup.sh is executed on only one node, and the script will attempt to create the $XAG_HOME directory on all remote nodes.

[oracle@monkey02 xag]$ ./xagsetup.sh --install --directory /u01/app/grid/product/xag --all_nodes
Installing Oracle Grid Infrastructure Agents on: monkey01
Installing Oracle Grid Infrastructure Agents on: monkey02
Installation of Oracle Grid Infrastructure Agents Successful!
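
To double-check the deployment, you can ask AGCTL for the bundle version; a quick sanity-check sketch using the AGCTL query interface:

[oracle@monkey02 xag]$ ./bin/agctl query releaseversion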

The next step is to create the VIP resource that the GoldenGate resource will use to relocate itself to the other remaining nodes when needed.
[root@monkey02 bin]# ./appvipcfg create -network=1 -ip=192.168.81.101 -vipname=testxag-ggs-vip -user=oracle -group=oinstall

Assign the required set of permissions to the grid install owner on this newly created VIP resource.
[root@monkey02 bin]# ./crsctl setperm resource testxag-ggs-vip -u user:oracle:r-x

Alright, so the VIP resource is created and is running fine.
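A quick way to confirm is crsctl (the exact invocation here is my sketch; the output that follows is the relevant portion of what it returns):

[oracle@monkey02 bin]$ ./crsctl stat res testxag-ggs-vip -t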

——————————————————————————-
Cluster Resources
——————————————————————————–
testxag-ggs-vip
1 ONLINE ONLINE monkey02 STABLE

Next we will create the GoldenGate resource and configure it as the GoldenGate user.
Call the AGCTL utility from the XAG_HOME location and pass the required parameters/arguments for the GG processes you want to monitor through it.

[oracle@monkey02 bin]$ ./agctl add goldengate xagtest_ggs \
--gg_home /olala/app/ggg \
--instance_type dual \
--nodes monkey01,monkey02 \
--vip_name testxag-ggs-vip \
--filesystems ora.monkeydds.monkeydds_vol.acfs \
--databases ora.tunedb.db \
--oracle_home /u01/app/oracle/product/12.1.0.2/dbhome_1 \
--monitor_extracts ETEST1,ETEST2,ETEST3,PUMP1,PUMP2,PUMP3 \
--monitor_replicats RTEST1,RTEST2,RTEST3

Options & arguments explained:
--gg_home : GoldenGate installation directory
--instance_type : source|target|dual (dual is for bi-directional replication)
--oracle_home : ORACLE_HOME path
--databases : associated database resources
--monitor_extracts : ext1,ext2,ext3,...
--monitor_replicats : rep1,rep2,rep3,...
--vip_name : VIP resource name
--filesystems : acfs1,acfs2,...

Okay, now that we are done with the configuration of the GG resource, let's verify it.

[oracle@monkey02 bin]$ ./agctl config goldengate xagtest_ggs
GoldenGate location is: /olala/app/ggg
GoldenGate instance type is: dual
Configured to run on Nodes: monkey01 monkey02
ORACLE_HOME location is: /u01/app/oracle/product/12.1.0.2/dbhome_1
Databases needed: ora.tunedb.db
File System resources needed: ora.monkeydds.monkeydds_vol.acfs
VIP name: testxag-ggs-vip
EXTRACT groups to monitor: ETEST1,ETEST2,ETEST3,PUMP1,PUMP2,PUMP3
REPLICAT groups to monitor: RTEST1,RTEST2,RTEST3
Critical EXTRACT groups:
Critical REPLICAT groups:
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no

Alright, so this is what we want. Next we have to start the GoldenGate resource that we created in the last step.

[oracle@monkey02 bin]$ ./agctl start goldengate xagtest_ggs

In the beginning the status will be in STARTING mode:

xag.xagtest_ggs.goldengate

1 ONLINE OFFLINE monkey01 STARTING
——————————————————————————–

Now, once the resource is started, the cluster resource status will be STABLE:

xag.xagtest_ggs.goldengate
1 ONLINE ONLINE monkey01 STABLE
——————————————————————————–

Now we are good to go and can perform manual tests like relocating resources to other nodes.
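
For example, a manual relocation test could look like the sketch below (using the AGCTL relocate and status commands, with the node and instance names from this setup):

[oracle@monkey02 bin]$ ./agctl relocate goldengate xagtest_ggs --node monkey01
[oracle@monkey02 bin]$ ./agctl status goldengate xagtest_ggs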

Hope It Helps!
Prashant Dixit


CLSRSC-351 & CRS-4000 Errors during execution of root.sh for GRID installation.

Posted by FatDBA on November 9, 2016

While doing a GRID installation on a machine where a few previous Grid installations had failed, I got a few error messages while running the root.sh script during my installation attempt.

This is what I got while executing the root.sh script:

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘oracle’, privgrp ‘oinstall’..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4000: Command Pin failed, or completed with errors.
2016/11/07 21:30:06 CLSRSC-161: Pin node using the command ‘/u01/app/oracle/product/12.1.0/grid_1/bin/crsctl pin css -n fatdba’ failed

I tried executing it a second time, praying for some magic to happen 😉
This time there were more errors, but they left some clues and suggested actions.

[root@Fatdba /]#
[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
2016/11/07 21:32:21 CLSRSC-351: Improper Oracle Clusterware configuration found on this host

2016/11/07 21:32:21 CLSRSC-353: Run ‘/u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig’ to deconfigure existing failed configuration and then re-run ‘root.sh’

The command ‘/u01/app/oracle/product/12.1.0/grid_1/perl/bin/perl -I/u01/app/oracle/product/12.1.0/grid_1/perl/lib -I/u01/app/oracle/product/12.1.0/grid_1/crs/install /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl ‘ execution failed

Okay, so it is clear that this happened due to some previous mess on the system before I was given the task to install the software. It says that an improper clusterware configuration was identified on the host, and it tells us to deconfigure it using the roothas.pl script.

So I tried, but it said the Oracle Restart stack is not active on the node (and it shouldn't be, as all the files had been removed manually), so it failed!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
2016/11/07 21:32:54 CLSRSC-39: Oracle Restart stack is not active on this node
2016/11/07 21:32:54 CLSRSC-312: Failed to verify HA resources
Died at /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsdeconfig.pm line 1358.
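
You can confirm the stack state yourself before deconfiguring; a minimal check (a sketch using the standard crsctl interface) would be:

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/bin/crsctl check has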

Let’s try the last resort, the FORCE option to remove previous bad installs.
And it worked!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
2016/11/07 21:39:06 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Let's run the root.sh script again to complete the new GRID installation.
This time it worked!

[root@Fatdba /]# /u01/app/oracle/product/12.1.0/grid_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘oracle’, privgrp ‘oinstall’..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4664: Node fatdba successfully pinned.
2016/11/07 21:39:27 CLSRSC-330: Adding Clusterware entries to file ‘oracle-ohasd.conf’

fatdba 2016/11/07 21:40:01 /u01/app/oracle/product/12.1.0/grid_1/cdata/fatdba/backup_20161107_214001.olr 0
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘fatdba’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘fatdba’
CRS-2677: Stop of ‘ora.evmd’ on ‘fatdba’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘fatdba’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/11/07 21:41:50 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

Hope That Helps!
Prashant Dixit


Oracle GRID install failed -INS-40404

Posted by FatDBA on November 9, 2016

Many times, after some crude deletion or cleanup of GRID directories, you face an error in the GUI or at the CLI right when you were almost sure you had deleted everything needed.
Bam! At that moment an error appears saying "The installer has detected a configured instance of Oracle grid infrastructure software on the server" with error code INS-40404.

You try recalling everything you did to clean and remove every trace of it .... no luck 😦

The solution to the problem:
Go to the oracle folder under the /etc directory.
Remove the 'ocr.loc' file, which still points to the old local OCR locations.
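
On a default Linux layout this boils down to the sketch below (I suggest renaming rather than deleting, so there is a way back; /etc/oracle/ocr.loc is the standard location):

[root@Fatdba ~]# cat /etc/oracle/ocr.loc
[root@Fatdba ~]# mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc.bak

After that, re-run the installer and the INS-40404 check should pass.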

Hope It Helps!
Prashant Dixit


Upgrade Classic Extract & Replicat to Integrated Options.

Posted by FatDBA on October 28, 2016

Before Oracle 12c, the GoldenGate Replicat process could only work in serial mode, and to achieve parallelism we generally had to split/divide our tables among multiple Replicat processes. And yes, there were many restrictions and limitations to that approach.

Similarly, Integrated Capture (around since 11gR2) is also one of the coolest things: the Oracle GoldenGate Extract process interacts directly with the log mining server, which captures the changes in the form of Logical Change Records (LCRs), and so is able to make use of already existing internal procedures in the database. With Integrated Capture there is no need to fetch LOBs from tables, there is full support for XML and SecureFiles LOBs, and it is transparent in RAC setups.

Since most of us are still working with the classic approach to Extracts and Replicats, below are the steps to upgrade a Classic Extract to Integrated Capture and a Classic Replicat to Integrated Replicat.

How to upgrade an existing GG extract to Integrated Option:
Note:
– Stop both the EXTRACT and related PUMP processes during the upgrade process, after step 1.
– The following parameters are required for the integrated options and should be added to the parameter files for the processes.
Extract parameter:
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 1048, parallelism 4)
Replicat parameter:
DBOPTIONS INTEGRATEDPARAMS(parallelism 4)

1. On the source server, grant administrative privileges for capture operations to the GoldenGate user using the following:

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'gg_owner', privilege_type => 'capture', grant_select_privileges => true, do_grants => TRUE);

2. Register the Extract Group
On the source server, the primary extract group must be registered with the database using the REGISTER EXTRACT command.

GGSCI () 1> DBLOGIN USERID ggsaccount PASSWORD test1
Successfully logged into database.

GGSCI () 2> REGISTER EXTRACT EXSANDY1 DATABASE

Extract EXSANDY1 successfully registered with database at SCN 217817171.

3. Check whether the process is ready for the upgrade.

GGSCI ( as ggsaccount@SANDBOX1) 15> INFO EXSANDY1 UPGRADE
ERROR: Extract EXSANDY1 is not ready to be upgraded because recovery SCN 128118181 has not reached SCN 128119718.

This error tells us that the recovery SCN is currently at 128118181 and needs to reach SCN 128119718.
To achieve this, ALTER the extract, then start it and immediately stop it.

GGSCI () 5> ALTER EXTRACT EXSANDY1 TRANLOG BEGIN NOW
EXTRACT altered.

GGSCI () 7> START EXTRACT EXSANDY1
Sending START request to MANAGER …
EXTRACT EXSANDY1 starting

GGSCI () 8> INFO EXTRACT EXSANDY1

EXTRACT EXSANDY1 Last Started 2016-10-10 11:48:01 Status RUNNING
Checkpoint Lag 00:00:18 (updated 00:00:01 ago)
Log Read Checkpoint Oracle Redo Logs
2016-10-10 11:48:01 Seqno 102, RBA 121789
SCN 0.0 (0)


4. Check whether the process is ready for the UPGRADE after the change.

GGSCI ( as ggsaccount@SANDBOX1) 15> INFO EXSANDY1 UPGRADE
Extract EXSANDY1 is ready to be upgraded to integrated capture.

5. Now it's time to upgrade the EXTRACT process.

GGSCI () 11> ALTER EXTRACT EXSANDY1, UPGRADE INTEGRATED TRANLOG
Extract EXSANDY1 successfully upgraded to integrated capture.

It's successfully upgraded to the INTEGRATED option.

6. Let's confirm.

GGSCI () 8> INFO EXTRACT EXSANDY1

EXTRACT EXSANDY1 Initialized 2016-10-10 12:48:01 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:01:16 ago)
Log Read Checkpoint Oracle Integrated Redo Logs ----> now it's showing that it's reading from the integrated redo logs.
2016-10-10 12:48:01
SCN 0.3112112 (3112112)

7. Start the PUMP process as well.
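
A sketch of that step (the pump group name PSANDY1 here is hypothetical; use your own data pump Extract group):

GGSCI () 12> START EXTRACT PSANDY1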

How to upgrade an existing GG REPLICAT to Integrated Option:

1. Stop the REPLICAT process

GGSCI () 2> stop replicat RXSANDY1
Sending STOP request to REPLICAT RXSANDY1…
Request processed.

2. Login to the database

GGSCI () 1> DBLOGIN USERID ggsaccount PASSWORD test1
Successfully logged into database.

3. Alter the Replicat to upgrade it to the INTEGRATED option.

GGSCI () 4> alter replicat RXSANDY1, integrated
REPLICAT (Integrated) altered.

4. Start the replicat.
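
A sketch, using the same group name as above:

GGSCI () 5> START REPLICAT RXSANDY1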

5. Check the status of the Replicat to see if it's converted.

GGSCI ( as ggsaccount@SANDBOX2) 45> info RXSANDY1

REPLICAT RXSANDY1 Initialized 2016-10-10 13:00:01 Status RUNNING
INTEGRATED
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Process ID 1897
Log Read Checkpoint File /dbs/next/gg/dirdat/RXSANDY1/DI000012
First Record RBA 0

Thanks
Prashant Dixit


Oracle Tracing Capabilities – Part 1

Posted by FatDBA on August 31, 2016

Hi Mates,
There are so many times when we have exhausted all our known troubleshooting tips, tricks, and techniques, and after trying everything under the sun we finally raise a request with Oracle Support. They then ask us to perform some unknown, uncanny, alien steps to troubleshoot the problem at hand, and ask us to share the trace files (which most of the time we don't understand), which they analyze and draw conclusions from.

Today I would like to start a series of posts where I will share steps to trace some of Oracle's built-in tools and software. These steps will help you understand what tracing to enable for them, and how.

Trace Data Pump:
Sometimes while importing/exporting a dump it takes a long time to complete, hangs, or the session just 'freezes' for no apparent reason. Oracle provides an option to trace import/export sessions too, via the TRACE parameter; using this option you can examine the shadow session, the master and worker processes, and other control processes.
You enable tracing by passing a seven-digit hexadecimal argument to the TRACE option. Below is the complete list of tracing levels.

10300 SHDW: To trace the Shadow process (API) (expdp/impdp)
20300 KUPV: To trace Fixed table
40300 'div': To trace Process services
80300 KUPM: To trace Master Control Process (MCP) (DM)
100300 KUPF: To trace File Manager
200300 KUPC: To trace Queue services
400300 KUPW: To trace Worker process(es) (DW)
800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all': To trace all components (full tracing)
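
As I understand it, these levels can be combined: keep the fixed 0300 suffix and sum the leading hex digits. For example, to trace just the Master Control Process (80300) and the Worker processes (400300), a sketch reusing the DIXIT schema from the full-trace example below would be:

impdp \'/ as sysdba\' SCHEMAS=DIXIT JOB_NAME=testing_mcpworkers TRACE=480300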

How to use it:

impdp \’/ as sysdba\’ SCHEMAS=DIXIT PARALLEL=8 JOB_NAME=testing_tracedpump TRACE=1FF0300
********
KUPP:10:58:22.050: Input trace/debug flags: 01FF0300 = 11818181
KUPP:10:58:22.050: Current trace/debug flags: 01FF0300 = 11818181
SHDW:10:58:22.050: Current user = SYS
SHDW:10:58:22.050: Current schema = SYS
SHDW:10:58:22.050: Current language = AMERICAN_AMERICA.AL32UTF8
SHDW:10:58:22.052: Current session address = 000000007TYBGGG0
SHDW:10:58:22.052: *** OPEN call ***
SHDW:10:58:22.052: operation = IMPORT
SHDW:10:58:22.052: job_mode = schema
SHDW:10:58:22.052: version =
SHDW:10:58:22.052: compression = 2
KUPV:10:58:22.058: Master Table create statement: CREATE TABLE “SYS”.”testing_tracedpump” (process_order NUMBER, duplicate NUMBER, dump_fileid NUMBER, dump_position NUMBER, dump_length NUMBER, dump_orig_length NUMBER
,,,,,,,,,,,,,,,,,,,,,,,,,

This will create trace files for the master control process (DM), the shadow process, and the slave/worker processes (DW) under the trace directory.
Next I will write about more undocumented and hidden features available in some of the tools we use on a daily basis.

Hope It Helps
Prashant Dixit


Active Data Guard (ADG) is included in the GoldenGate license on EE edition.

Posted by FatDBA on August 22, 2016

The license for Oracle GoldenGate includes a full use license for Oracle Active Data Guard, and a full use license for XStream in the Oracle Database.

Active Data Guard is a superset of Data Guard capabilities included with Oracle Enterprise Edition and can be purchased as the Active Data Guard Option for Oracle Database Enterprise Edition. It is included with every Oracle GoldenGate license, offering customers the ability to acquire the complete set of advanced Oracle replication capabilities with a single purchase.


Cross Platform Migrations: ‘As Easy As Pie’ in Oracle 12c

Posted by FatDBA on August 22, 2016

The legendary Transportable Tablespace feature was introduced in Oracle 8i to make it convenient to transport a large amount of data between databases. Starting with Oracle 10g, this useful feature was enhanced with cross-platform support, which allowed one or more tablespaces to be transported between databases deployed on different hardware platforms, or between platforms with different endian formats.

So up to 11g, the migration activity involved RMAN, EXPDP, and IMPDP to transport tablespace(s) across platforms, along with the RMAN CONVERT statement. Below are the steps required to perform the migration work up to 11g.

Step 1: Check Platform Support and File Conversion Requirement
Step 2: Identify Tablespaces to be Transported and Verify Self-containment
Step 3: Check for Problematic Data Types
Step 4: Check for Missing Schemas and Duplicate Tablespace and Object Names
Step 5: Make Tablespaces Read-only in Source Database
Step 6: Extract Metadata from Source Database (We could use either data pump or original export to do this)
Step 7: Copy Files to Target Server and Convert if Necessary (Conversion involves RMAN)
Step 8: Import Metadata into Target Database (This step is sometimes called “plugging in” the tablespaces. Again we can use data pump or original import).
Step 9: Copy Additional Objects to Target Database as Desired

Oracle Database 12c includes a very easy and novel way to do the same: the ability to transport tablespaces across platforms using just RMAN and RMAN (compressed) backupsets!

Let me show you how we can transport a tablespace from Oracle Linux to Oracle Solaris, an example of a cross-platform migration between different endian formats: Solaris is a big-endian OS, whereas Linux is little endian.

SQL> select banner from v$version;

BANNER
——————————————————————————–
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
PL/SQL Release 12.1.0.2.0 – Production

PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
———– ————————————————– ————–
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
13 Linux x86 64-bit Little

SQL> SELECT tablespace_name, segment_type, COUNT(*),
2 SUM (bytes) / 1024 / 1024 mb
3 FROM dba_segments
4 WHERE owner = 'DIXIT'
5 GROUP BY tablespace_name, segment_type
6 ORDER BY 1, 2 DESC;

TABLESPACE_NAME SEGMENT_TYPE COUNT(*) MB
————— ———— ———- ——-
IND1 INDEX 88 1353.4
TAB1 TABLE 41 4079.6
TAB1 LOBSEGMENT 3 0.4
TAB1 LOBINDEX 3 0.2
TAB1 INDEX 53 106.4

Make Tablespaces Read-only in Source Database
With today’s filers and sophisticated storage systems, it is often possible to take a filer “snapshot” or split a mirror in order to get a copy of the data files very quickly. Extracting metadata is also quick. So, on a system with a good storage system, tablespaces may only need to be read-only for a few minutes.
NOTE: In 12c we can use a procedure that keeps the downtime to a minimum with the ‘Incremental Cross-Platform Transportable Tablespaces’. It also uses RMAN transportable backupsets but is a slightly more complicated procedure.

We put the tablespaces into read-only mode with the following statements:

SQL> ALTER TABLESPACE tab1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE ind1 READ ONLY;
Tablespace altered.
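
Before taking the backup you can also verify self-containment yourself; RMAN runs this check for you (as the log below shows), but a manual sketch with the standard DBMS_TTS package looks like this:

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TAB1,IND1', TRUE);
SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

No rows returned means the tablespace set is self-contained.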

In order to create a TTS backup we have two options available in RMAN: BACKUP FOR TRANSPORT or BACKUP TO PLATFORM.

What's the difference between the two?
Answer: The difference between these two RMAN statements is where the datafile conversion takes place. Use BACKUP FOR TRANSPORT if the datafile conversion is to be performed on the target system, and BACKUP TO PLATFORM if the conversion is to be performed on the source system.

For test purposes we will perform the conversion of the datafiles on the source, using the BACKUP TO PLATFORM statement with the name of the target platform as its argument.
We also have to provide some additional information, such as where RMAN should place the backupsets: RMAN will create one backupset with the datafile(s) and a second backupset with the metadata dump that EXPDP creates on RMAN's behalf. For this test I will create a compressed transportable backupset.

$ rman target /
Recovery Manager: Release 12.1.0.2.0 – Production on Sun Aug 21 10:49:57 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> backup to platform 'Solaris[tm] OE (64-bit)' as compressed backupset
2> tablespace xtransport format '/tmp/dbfilebackups.bck'
3> datapump format '/tmp/infometaexpdp.bck';
Starting backup at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces…
EXPDP> Starting “SYS”.”TRANSPORT_EXP_TUNEDB_Y7OJ”:
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table “SYS”.”TRANSPORT_EXP_V121_pyAn” successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TRANSPORT_EXP_TUNEDB_Y7OJ is:
EXPDP> /u01/app/oracle/product/12.1.0.2.0/db_1/dbs/backup_transporttbs_tunedb_181881.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace XTRANSPORT:
EXPDP> /u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
EXPDP> Job “SYS”.”TRANSPORT_EXP_TUNEDB_Y7OJ” successfully completed at Sun Aug 21 10:53:55 2016 elapsed 0 00:04:03
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/dbfilebackups.bck tag=TAG201698188T888 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/infometaexpdp.bck tag=TAG201678777U998 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:06
Finished backup at 21-AUG-16

Recovery Manager complete.

So, in short, RMAN performed the activities below:
– Verified and identified the tablespaces to be transported and verified self-containment.
– Extracted metadata from the source database using EXPDP.
– Created a compressed backupset containing the tablespace's datafile.
– Created a backupset containing the metadata dump.

Now it's time to restore the transportable backupset!

$ rman target /
Recovery Manager: Release 12.1.0.2.0 – Production on Sun Aug 21 12:39:13 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> restore from platform 'Solaris[tm] OE (64-bit)'
2> foreign tablespace IND1, TAB1 to new
3> from backupset '/tmp/dbfilebackups.bck'
4> dump file from backupset '/tmp/infometaexpdp.bck';
Starting restore at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=67 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace IND1, TAB1
channel ORA_DISK_1: reading from backup piece /tmp/dbfilebackups.bck
channel ORA_DISK_1: restoring foreign file 9 to /u01/db/tunedb/data/tunedb/datafile/o1_mf_testtransport_ab77hho11_.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/dbfilebackups.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0.2.0/db/dbs/o1_mf_ttftest_dixit_.dmp
channel ORA_DISK_1: reading from backup piece /tmp/infometaexpdp.bck
channel ORA_DISK_1: foreign piece handle=/tmp/infometaexpdp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04

Performing import of metadata…
IMPDP> Master table “SYS”.”TRANSDIXIT_IMPORT_tunedb_g7aahu” successfully loaded/unloaded
IMPDP> Starting “SYS”.”TRANSDIXIT_IMPORT_tunedb_g7aahu”:
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job “SYS”.”TRANSDIXIT_IMPORT_tunedb_g7aahu” successfully completed at Sun Aug 21 12:42:10 2016 elapsed 0 00:02:03
Import completed

Finished restore at 21-AUG-16

Recovery Manager complete.

Deducing from the RMAN restore logs, it's clear that RMAN completed the following steps:
– Restored the foreign tablespace's datafiles from the datafile backupset.
– Restored the tablespace metadata dump from the metadata backupset.
– Imported the tablespace metadata using IMPDP.
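
One last step worth remembering: plugged-in tablespaces typically arrive read-only, so if you want them writable on the target, a sketch (tablespace names from this example) would be:

SQL> ALTER TABLESPACE tab1 READ WRITE;
SQL> ALTER TABLESPACE ind1 READ WRITE;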

Hope That Helps
Prashant Dixit


Admin stuff on MongoDB Document Stores soon!

Posted by FatDBA on August 19, 2016

Yes, soon I will start sharing some stuff on one of the leading document-oriented databases.
Yup, I got certified a few years back, in the days when it wasn't that popular. Now it's one of the most popular document store databases, used by businesses transforming themselves with Big Data.


Days of 'catbundle PSU apply' are over, let's welcome 'DATAPATCH' in 12c!!

Posted by FatDBA on August 19, 2016

Hi Mates,
With the introduction of Oracle 12c, the SQL commands belonging to a patch are no longer installed by catbundle.sql but by the "datapatch" tool, located in the OPatch directory. On top of that, datapatch checks beforehand whether the requirements for the installation are met.

Let me put it more simply: Datapatch is the new tool that automates post-patch SQL actions for RDBMS patches. So in 12c you don't use catbundle psu apply any more; this is all done using datapatch.

With Enterprise Manager and OPatchAuto we get further automation of database patching, as they call datapatch automatically after applying the binary patch.

Enterprise Manager: Starting version 12.1 Enterprise Manager now calls datapatch to complete post patch actions upon any 12c or later database restart.

OPatchAuto : OPatchAuto calls datapatch to complete post patch actions upon installation of the binary patch and restart of the database.

OPatch: Datapatch integration with OPatch is not possible, as OPatch is executed while the database is down, whereas datapatch requires the database to be open to complete its activity.
When patches are installed or rolled back using OPatch, datapatch needs to be invoked explicitly if the patch readme instructs you to do so.

RAC specific: For a RAC environment, after the binary patch has been applied on all nodes run Datapatch to complete the post-install SQL deployment for the PSU only from one node. Datapatch need not be run on all the nodes.

Below are the steps that you normally perform while applying a patch (the original post illustrated each step with screenshots):

1. Apply the patch to the ORACLE_HOME binaries using OPatch.
2. Restart your database once the patch is successfully applied to the binaries.
3. Apply the patch to the database using the DATAPATCH utility from the ORACLE_HOME/OPatch directory.
4. Query the database to check the patch apply status; to check the PSU applied to your database, use the following SQL statement:

select * from DBA_REGISTRY_SQLPATCH;
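
A minimal sketch of step 3 (the standard datapatch invocation; adjust ORACLE_HOME for your environment):

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose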

Hope It Helps
Prashant Dixit


Multiple Oracle Database 12c Log Writers? Yes, finally! But boon or bane?

Posted by FatDBA on August 4, 2016

I was really excited when I saw that, by default, my 12c test machine had two redo workers in addition to the "parent" log writer.


Snippet from one of the test databases with a parent and two redo workers:

$ ps -eaf|grep tunedb | grep ora_lg
oracle 54964 1 0 14:37 ? 00:00:00 ora_lgwr_tunedb
oracle 54968 1 0 14:37 ? 00:00:00 ora_lg00_tunedb
oracle 54972 1 0 14:37 ? 00:00:00 ora_lg01_tunedb

Yes, with 12c the wait is over. 🙂
Multiple LGWRs are great news, because serialization is the death of scalable processing.

But, but, but: I'm not sure how stable and good this feature is yet. I recently hit bug 19181582 (DEADLOCK BETWEEN LG0N ON 'LGWR WORKER GROUP ORDERING') in a 12c production environment because of this new feature.
It causes the database to hang, and at the moment the patch is not ready.

The workaround is to set the instance parameter _use_single_log_writer=TRUE; with this parameter I was able to reduce the number of LGWRs to only one.
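
For reference, a sketch of how that would be set (an underscore parameter, so touch it only under Oracle Support's guidance; it needs an instance restart to take effect):

SQL> ALTER SYSTEM SET "_use_single_log_writer"=TRUE SCOPE=SPFILE;
-- bounce the instance for the change to take effect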

Right now I am working on understanding how to control the number of log writer slaves with the _max_outstanding_log_writes and _max_log_write_parallelism instance-level parameters, and whether there is any AUTO behavior that increases or decreases the number of redo workers.

Thanks
Prashant Dixit


 