Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Posts Tagged ‘12c’

All about Oracle 12c Database In-Memory!

Posted by FatDBA on December 27, 2017

Hi Mates,

Today I will discuss the Database In-Memory option of Oracle 12c. I know I am a little late to write about this feature, but I recently implemented it for one of my customers in the Latin America region, which gave me the exposure to understand it better.

So, let's start with what is most probably my last post of the year!

Recently I got a chance to test and benchmark the performance gains in one of our pre-production environments. The feature applies to tablespaces, tables, materialized views and (sub)partitions (except objects owned by SYS and stored in the SYSTEM or SYSAUX tablespaces). To understand the feature, I would first like to shed some light on the two ways Oracle stores tables on disk and in memory: the conventional 'row format' and the new In-Memory 'column' format.

Row arrangement: the same old traditional method of storing data in row format. This is best for OLTP systems, because a query can quickly fetch all of the columns of a record.
Column-based arrangement: the data is kept in a separate column store. This works well for OLAP systems, where a large set of rows is scanned but only a few columns are needed.

Hence, based on the two arrangements above, it is clear that the row-based method is best for DML, while the column-based arrangement is good when selecting a large portion of data from a few columns; each method has its own pros and cons. Starting with Oracle 12.1.0.2 we have the new 'Database In-Memory' feature, which takes the best of both approaches: it keeps data in memory in both row and column format, and the optimizer automatically routes each query to the appropriate format depending on the workload (OLTP or analytical processing).

The In-Memory feature uses the IM column store, a new component of the SGA called the In-Memory Area. Its size is controlled by the INMEMORY_SIZE parameter.

The IM column store is populated by worker processes (w001, w002 and so on); each worker process fills In-Memory Compression Units (IMCUs).

So, enough of the background and internals; let's jump to the real work and check how this practically works.
First, check the minimum required compatibility of the database:

COMPATIBLE = 12.1.0.0.0
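
You can verify the current setting from SQL*Plus with a quick check (standard command, nothing environment-specific assumed):

SQL> show parameter compatible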

Let's first enable the In-Memory column store. There are several related parameters, listed below.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_adg_enabled                 boolean     TRUE
inmemory_clause_default              string
inmemory_expressions_usage           string      ENABLE
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
inmemory_virtual_columns             string      MANUAL
optimizer_inmemory_aware             boolean     TRUE

Before I show how to enable it, I would first like to show you the database startup details with In-Memory disabled (note there is no In-Memory Area line yet).

SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             771753984 bytes
Database Buffers          402653184 bytes
Redo Buffers                8155136 bytes
Database mounted.
Database opened.
SQL>

Okay, now let's enable it!
A question might be asked about the criteria for adding objects to the pool. In my opinion the best candidates are very hot data, large segments (not smaller than 1 MB) and objects hit by analytical queries; you can identify them from sources like Oracle segment statistics, column scan counts and AWR reports. The sketch below gives one way to shortlist such segments.
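
As a rough illustration, a query like this can help shortlist large application segments as candidates (the 100 MB threshold and the owner filter are arbitrary assumptions, not a recommendation):

-- Sketch: list large non-dictionary segments, biggest first,
-- as a starting point for In-Memory candidate selection.
SELECT owner,
       segment_name,
       segment_type,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  owner NOT IN ('SYS', 'SYSTEM')
AND    bytes > 100 * 1024 * 1024
ORDER  BY bytes DESC;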

SQL> alter system set inmemory_size=250m scope=spfile;

System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             822085632 bytes
Database Buffers           83886080 bytes
Redo Buffers                8155136 bytes
In-Memory Area            268435456 bytes
Database mounted.
Database opened.

Now I will try to load something into memory.

SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate 
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                DISABLED

00:14:42 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE                         >>>>>>> 1MB pool is used to store the column formatted data.
64KB POOL                     50331648          0 DONE                         >>>>>>> 64 KB pool is used to store the metadata about its residents.

Now I will move the table in question into the In-Memory column store, either with the DBMS_INMEMORY package or with a simple ALTER TABLE.

exec DBMS_INMEMORY.POPULATE(schema_name => 'DIXIT', table_name => 'TEST1');
or by using:

00:15:01 SQL> alter table test1 inmemory;

Table altered.


00:17:59 SQL> SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                ENABLED  NONE     AUTO            FOR QUERY LOW     NO DUPLICATE




00:20:17 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE
64KB POOL                     50331648          0 DONE


00:18:25 SQL> select owner, segment_name, populate_status from v$im_segments;

no rows selected

The results above show that no segment has been added to the pool yet; with the default priority of NONE, population is only triggered when the object is first accessed. So let's query the table and check the results again.

00:20:26 SQL> select count(*) from test1;

  COUNT(*)
----------
   1000000



00:20:48 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 POPULATING
64KB POOL                     50331648          0 POPULATING


Okay, now it is doing something, as the status has changed from DONE to POPULATING. Let's repeat the same query again.

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016   49283072 DONE
64KB POOL                     50331648     458752 DONE

Okay, it is in the pool now! Let's check the In-Memory area and see what we have inside it.

00:23:56 SQL> select owner, segment_name, populate_status from v$im_segments;

OWNER      SEGMENT_NAME         POPULATE_STAT
---------- -------------------- -------------
DIXIT      TEST1                COMPLETED

To confirm the table is fully populated, we can look at V$IM_SEGMENTS_DETAIL, which lets us compare the number of blocks in memory with the number of blocks in the TEST1 table.

SELECT m.inst_id, 
       m.blocksinmem, 
       m.datablocks 
FROM   v$im_segments_detail m, 
       user_objects o 
WHERE  m.dataobj = o.object_id 
AND    o.object_name = 'TEST1';

   INST_ID BLOCKSINMEM DATABLOCKS
---------- ----------- ----------
         1        6922       6922

Let's see whether any change is captured in the execution plan of the query.

SQL> explain plan for select count(*) from test1;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3896847026

-----------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    71   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE             |       |     1 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| TEST1 |  1000K|    71   (2)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

We now have a new operation in the plan, 'TABLE ACCESS INMEMORY FULL', which indicates the object was accessed directly from the IM column store.

You can compare the response times with/without the In-Memory Store by simply disabling In-Memory functionality in your session:

ALTER SESSION SET inmemory_query = ENABLE;
ALTER SESSION SET inmemory_query = DISABLE;
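
A simple way to compare is to time the same query under both settings; a minimal sketch (SET TIMING is standard SQL*Plus, the table is the one used above):

SQL> SET TIMING ON
SQL> ALTER SESSION SET inmemory_query = DISABLE;
SQL> SELECT COUNT(*) FROM test1;      -- buffer cache / disk scan
SQL> ALTER SESSION SET inmemory_query = ENABLE;
SQL> SELECT COUNT(*) FROM test1;      -- In-Memory scan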

You can remove objects from the pool at any time (ALTER TABLE ... NO INMEMORY). A few other options you can use with In-Memory are shown below.

SQL> ALTER TABLESPACE example INMEMORY;

Enabling the In-Memory attribute on the TEST1 table but excluding the “dest_id” column

SQL> ALTER TABLE TEST1 INMEMORY NO INMEMORY(dest_id);

Enabling the In-Memory option for table TEST1 and setting the priority to CRITICAL (other values are HIGH, MEDIUM, LOW and NONE). With this priority set on an object, it is populated immediately after the database is opened.

SQL> ALTER TABLE TEST1 INMEMORY PRIORITY CRITICAL;

There are a few other options, such as compression (objects are compressed during population), In-Memory joins and In-Memory scans; they will be covered in a separate post, or you can read the official In-Memory documentation to understand them. A small compression example follows.
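
For illustration, the compression level can be set per object with the MEMCOMPRESS clause; a sketch using the table from this post (the level chosen here is arbitrary):

SQL> ALTER TABLE TEST1 INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;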

But I would like to discuss a little about how the In-Memory option works in a RAC environment.

When a user queries the In-Memory store on RAC, a serial query can only access the IMCUs populated on its own node; IMCUs (IM Compression Units) do not travel across the interconnect via Cache Fusion, so the remainder of the data is fetched from disk.
Parallel execution helps because it starts multiple processes and ensures that at least one parallel server process is allocated on each RAC instance. For that we have to set parallel_degree_policy (Auto DOP) to AUTO, which makes the query coordinator IMCU-aware so that it automatically starts parallel server processes on the correct nodes; see the sketch below.
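
A minimal sketch of the relevant setting (the scope shown is just an example; size your parallel server pool before enabling Auto DOP in production):

SQL> ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE=BOTH;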

Now, after all this discussion, it is time to share my final words on the subject.
Caching and pooling have existed in the Oracle RDBMS (and in other databases) for a very long time, so why In-Memory now? We already have areas such as the KEEP/RECYCLE buffer pools and the result cache to hold cached data or results.
My answer is that the Oracle In-Memory column store loads objects into memory in a compressed columnar format, which makes scans perform much better than on-disk reads and leads to real performance boosts. Use it when there is a SQL performance issue, you have tried all the usual tuning methods and none of them fixed the problem, and the In-Memory performance benefit outweighs the additional Oracle license cost.

Hope It Helps
Prashant Dixit



Auto Stats Gathering in Oracle 12c & Something Interesting :)

Posted by FatDBA on December 14, 2017

Hi Fellas,
Starting from Oracle 12c there is a new feature that automatically collects optimizer statistics when you perform a bulk load using either of these two methods:
– CREATE TABLE AS SELECT (CTAS)
– INSERT INTO … SELECT (into an empty table, using a direct-path insert).

SQL> explain plan for create table dixittab as select * from scottisdead;
Explained.
 
SQL> select * from table(DBMS_XPLAN.DISPLAY);
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 14312189
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |               |   500K|  8812K|   612   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | DIXITTAB      |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |               |   500K|  8812K|   371   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SCOTTISDEAD   |   500K|  8812K|   371   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------
10 rows selected. 

In the execution plan above you can see the new operation named "OPTIMIZER STATISTICS GATHERING" at Id 2.
Let's verify whether the stats were collected.

SQL> select table_name, last_analyzed from user_tables where table_name = 'DIXITTAB';
 
TABLE_NAME       LAST_ANALYZED
---------------- -------------
DIXITTAB         12-DEC-17

Yup, the stats were collected!
In the same way, stats are auto-collected during the other bulk-load method (INSERT INTO … SELECT). If you want a stronger confirmation that these stats came from the load itself, see the sketch below.
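
For online statistics gathering, my understanding is that the NOTES column of USER_TAB_STATISTICS is set to STATS_ON_LOAD, which distinguishes load-time stats from a later DBMS_STATS run; verify on your release, but a sketch of the check would be:

SQL> SELECT table_name, last_analyzed, notes
     FROM   user_tab_statistics
     WHERE  table_name = 'DIXITTAB';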

There may be times when you want to disable this feature, for example:
– long/huge insert operations that spend a lot of time on the STATS GATHERING step;
– an extremely large data set where you don't want to collect stats at load time.

To achieve that, we have a hint that instructs Oracle not to gather table statistics:

SQL> create table dixittab as select /*+ NO_GATHER_OPTIMIZER_STATISTICS */ * from scottisdead;

Now, something interesting I would like to discuss…
Is there any other condition under which the stats won't be collected automatically, apart from blocking them with the NO_GATHER_OPTIMIZER_STATISTICS hint?

Let's try some conventional bulk loading using the INSERT INTO … SELECT method.
For the test I am intentionally commenting out one column on each side: the DATE_VAL column of the newly created table TABLE1 and the DATE_VALUE column of the source table SAMPLE.


SQL> create table table1 (ident number, date_val date, text_val varchar2(4000));
Table created.


SQL> explain plan for insert /*+ append */ into table1
(IDENT
--, DATE_VAL
, TEXT_VAL)
SELECT ID
--, DATE_VALUE
, TEXT_VALUE
FROM SAMPLE; 

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------

Plan hash value: 1523099961
-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |        |   100K|  2539K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT    | TABLE1 |       |       |            |          |
|   2 |   TABLE ACCESS FULL| SAMPLE |   100K|  2539K|   154   (1)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

😦 😦 Why was the auto stats gathering behaviour not repeated this time??

This happened because Oracle needs all of the table's columns to be included in the insert for the OPTIMIZER STATISTICS GATHERING operation to kick in. Let me show you what I mean:

SQL> explain plan for insert /*+ append */ into table1
(IDENT
, DATE_VAL
, TEXT_VAL)
SELECT ID
, DATE_VALUE
, TEXT_VALUE
FROM SAMPLE;  

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------

Plan hash value: 1523099961
-------------------------------------------------------------------------------------------
| Id  | Operation                        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | TABLE1 |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SAMPLE |   100K|  3320K|   154   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------

10 rows selected.

Yes, the stats were collected this time, once we included all of the table's columns.
I haven't seen any documentation of this restriction on this new Oracle 12c feature; I hope Oracle adds it to the documentation soon 🙂

Hope It Helps!
Prashant Dixit


DB Upgrade Failed! :( – Have to start it from scratch, really??

Posted by FatDBA on December 4, 2017

Hi Mates,

Let's recall the list of prerequisites we have to complete before doing a database upgrade!
A few I quickly recall are a full RMAN backup, creating a guaranteed restore point, and so on (a restore-point example is sketched below).
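
For reference, creating a guaranteed restore point before the upgrade is a one-liner (the name is just an example):

SQL> CREATE RESTORE POINT before_12c_upgrade GUARANTEE FLASHBACK DATABASE;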

But do these options allow us to resume a failed upgrade? — NO.

Let's go back to the time before Oracle 12c (e.g. 10g, 11g).

Resuming a failed upgrade was not possible: you had to open the database in UPGRADE mode and re-run catupgrd.sql, but that only starts the upgrade from the beginning and does not resume from the point where it failed last time, so roughly the same amount of time is consumed all over again for the upgrade.

SQL> startup upgrade
SQL> spool /home/dixit/upgradedir/catupgrdscriptout.log
SQL> @?/rdbms/admin/catupgrd.sql

Now let's discuss 12c. Starting with 12c Release 1 we have the new Parallel Upgrade Utility (catctl.pl). You still have to open your database in UPGRADE mode, but the upgrade is now controlled by this new Perl script.
Talking about restarting a failed upgrade, here we get a small relief: you can restart the upgrade from the phase where it crashed or stopped by using the -p option.

Example:

Serial Phase #:56 [UPGR] Files:1 Time: 3s
******************* Migration ******************
Restart Phase #:57 Files: 1 Time: 0s
Serial Phase #:58 Files: 1 Time: 102s
Restart Phase #:59 Files: 1 Time: 1s
Serial Phase #:60 Files: 1 Time: 221s
Restart Phase #:61 Files: 1 Time: 0s
Serial Phase #:62 Files: 1 Time: 25s
Restart Phase #:63 Files: 1 Time: 0s
Serial Phase #:64 Files: 1 Time: 2s
Serial Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/db_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/db_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/db_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > catupgrd_datapatch_upgrade.log 2> catupgrd_datapatch_upgrade.err FAILED

As we can see, the upgrade failed at phase number 65, so we can restart it by specifying that phase number with -p; the upgrade will resume from that phase and will not repeat the steps that already completed successfully.

$ORACLE_HOME/perl/bin/perl catctl.pl -p 65 -l /home/dixit/upgradedir catupgrd.sql

******************* Migration ******************
Serial Phase #:65 Files: 1 Time: 28s
Serial Phase #:66 Files: 1 Time: 3921s
Serial Phase #:68 Files: 1 Time: 0s
Serial Phase #:69 Files: 1 Time: 57s
Serial Phase #:70 Files: 1 Time: 1027s
Serial Phase #:71 Files: 1 Time: 3s
Serial Phase #:72 Files: 1 Time: 0s
Serial Phase #:73 Files: 1 Time: 18s
Grand Total Time: 6459s
LOG FILES: (catupgrd*.log)
Upgrade Summary Report Located in:
/u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/JANI/upgrade/upg_summary.log
Grand Total Upgrade Time: [0d:0h:40m:39s]

Now let's discuss the main objective of writing this new blog entry.
Starting from Oracle 12c Release 2 we have even more control over upgrade activities: with this release you can resume the upgrade from exactly where it stopped last time. Oracle introduced a new -R flag for catctl.pl to achieve this, so you don't need to worry about the upgrade phases 🙂

$ORACLE_HOME/perl/bin/perl catctl.pl -R -l /home/dixit/upgradedir catupgrd.sql

Hope It Helps
Prashant Dixit


12c all new Parallel Upgrade utility.

Posted by FatDBA on October 24, 2017

Oracle Database 12c introduces the all-new Parallel Upgrade Utility, catctl.pl. This utility replaces the catupgrd.sql script that was used in earlier releases.
Although you can still use the catupgrd.sql script, it is deprecated starting with Oracle Database 12c and will be removed in future releases.
Oracle recommends that database upgrades be performed with the new Parallel Upgrade Utility, catctl.pl.

If you choose to run the catupgrd.sql script instead of running catctl.pl, doing so now requires an additional input argument as follows:

SQL> @catupgrd.sql PARALLEL=NO

If you run catupgrd.sql without the parameter, then Oracle displays the following message:

NOTE:

The catupgrd.sql script is being deprecated in the 12.1 release of Oracle Database. Customers are encouraged to use catctl.pl as the replacement for catupgrd.sql when upgrading the database dictionary.

cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catupgrd.sql

Refer to the Oracle Database Upgrade Guide for more information.

Hope It Helps
Prashant Dixit


Cross Platform Migrations: ‘As Easy As Pie’ in Oracle 12c

Posted by FatDBA on August 22, 2016

The legendary Transportable Tablespace (TTS) feature was introduced in Oracle 8i to make it convenient to transport a large amount of data between databases. From Oracle 10g onwards this useful feature was enhanced with cross-platform support, which allows one or more tablespaces to be transported between databases deployed on different hardware platforms, even when the platforms have different endian formats.

So, up to 11g, the migration involved RMAN, expdp and impdp to transport the tablespace(s) across platforms, along with the RMAN CONVERT command when the endian formats differed. Below are the steps required to perform the migration up to 11g.

Step 1: Check Platform Support and File Conversion Requirement
Step 2: Identify Tablespaces to be Transported and Verify Self-containment
Step 3: Check for Problematic Data Types
Step 4: Check for Missing Schemas and Duplicate Tablespace and Object Names
Step 5: Make Tablespaces Read-only in Source Database
Step 6: Extract Metadata from Source Database (We could use either data pump or original export to do this)
Step 7: Copy Files to Target Server and Convert if Necessary (Conversion involves RMAN)
Step 8: Import Metadata into Target Database (This step is sometimes called “plugging in” the tablespaces. Again we can use data pump or original import).
Step 9: Copy Additional Objects to Target Database as Desired

Oracle Database 12c includes a very easy and novel way to do the same thing: the ability to transport tablespaces across platforms using just RMAN and (optionally compressed) RMAN backup sets!

Let me show you how we can transport a tablespace from Oracle Linux to Oracle Solaris, which is an example of a cross-platform migration between different endian formats: Solaris is a big-endian platform, whereas Linux is little endian.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production

PLATFORM_ID PLATFORM_NAME                                      ENDIAN_FORMAT
----------- -------------------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                            Big
          2 Solaris[tm] OE (64-bit)                            Big
         10 Linux IA (32-bit)                                  Little
         11 Linux IA (64-bit)                                  Little
         13 Linux x86 64-bit                                   Little

SQL> SELECT tablespace_name, segment_type, COUNT(*),
  2         SUM(bytes) / 1024 / 1024 mb
  3  FROM   dba_segments
  4  WHERE  owner = 'DIXIT'
  5  GROUP  BY tablespace_name, segment_type
  6  ORDER  BY 1, 2 DESC;

TABLESPACE_NAME SEGMENT_TYPE   COUNT(*)      MB
--------------- ------------ ---------- -------
IND1            INDEX                88  1353.4
TAB1            TABLE                41  4079.6
TAB1            LOBSEGMENT            3     0.4
TAB1            LOBINDEX              3     0.2
TAB1            INDEX                53   106.4

Make Tablespaces Read-only in Source Database
With today’s filers and sophisticated storage systems, it is often possible to take a filer “snapshot” or split a mirror in order to get a copy of the data files very quickly. Extracting metadata is also quick. So, on a system with a good storage system, tablespaces may only need to be read-only for a few minutes.
NOTE: In 12c we can use a procedure that keeps the downtime to a minimum with the ‘Incremental Cross-Platform Transportable Tablespaces’. It also uses RMAN transportable backupsets but is a slightly more complicated procedure.

We put the tablespaces into read-only mode with the following statements:

SQL> ALTER TABLESPACE tab1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE ind1 READ ONLY;
Tablespace altered.

In order to create a TTS backup we have two options available: the RMAN BACKUP FOR TRANSPORT command or the BACKUP TO PLATFORM command.

What's the difference between the two?
Answer: the difference between these two RMAN statements is where the datafile conversion takes place. Use BACKUP FOR TRANSPORT if the conversion is to be performed on the target system, and BACKUP TO PLATFORM if the conversion is to be performed on the source system.

For test purposes we will perform the datafile conversion on the source, using the BACKUP TO PLATFORM statement with the name of the target platform as its argument.
We also have to provide some additional information, such as where RMAN should place the backup sets: RMAN creates one backup set with the datafile(s) and a second backup set with the metadata dump that expdp creates on RMAN's behalf. For this test I will create a compressed transportable backup set.

$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Aug 21 10:49:57 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> backup to platform 'Solaris[tm] OE (64-bit)' as compressed backupset
2> tablespace xtransport format '/tmp/dbfilebackups.bck'
3> datapump format '/tmp/infometaexpdp.bck';
Starting backup at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces…
EXPDP> Starting "SYS"."TRANSPORT_EXP_TUNEDB_Y7OJ":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TRANSPORT_EXP_V121_pyAn" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TRANSPORT_EXP_TUNEDB_Y7OJ is:
EXPDP> /u01/app/oracle/product/12.1.0.2.0/db_1/dbs/backup_transporttbs_tunedb_181881.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace XTRANSPORT:
EXPDP> /u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
EXPDP> Job "SYS"."TRANSPORT_EXP_TUNEDB_Y7OJ" successfully completed at Sun Aug 21 10:53:55 2016 elapsed 0 00:04:03
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/dbfilebackups.bck tag=TAG201698188T888 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/infometaexpdp.bck tag=TAG201678777U998 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:06
Finished backup at 21-AUG-16

Recovery Manager complete.

So, in short, RMAN performed the following activities:
– identified the tablespaces to be transported and verified that they are self-contained;
– extracted the metadata from the source database using expdp;
– created a compressed backup set containing the tablespace's datafile(s);
– created a second backup set containing the metadata dump.

Now it's time to restore the transportable backup set on the target!!

$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Aug 21 12:39:13 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> restore from platform 'Solaris[tm] OE (64-bit)'
2> foreign tablespace IND1, TAB1 to new
3> from backupset '/tmp/dbfilebackups.bck'
4> dump file from backupset '/tmp/infometaexpdp.bck';
Starting restore at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=67 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace IND1, TAB1
channel ORA_DISK_1: reading from backup piece /tmp/dbfilebackups.bck
channel ORA_DISK_1: restoring foreign file 9 to /u01/db/tunedb/data/tunedb/datafile/o1_mf_testtransport_ab77hho11_.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/dbfilebackups.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0.2.0/db/dbs/o1_mf_ttftest_dixit_.dmp
channel ORA_DISK_1: reading from backup piece /tmp/infometaexpdp.bck
channel ORA_DISK_1: foreign piece handle=/tmp/infometaexpdp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04

Performing import of metadata…
IMPDP> Master table "SYS"."TRANSDIXIT_IMPORT_tunedb_g7aahu" successfully loaded/unloaded
IMPDP> Starting "SYS"."TRANSDIXIT_IMPORT_tunedb_g7aahu":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYS"."TRANSDIXIT_IMPORT_tunedb_g7aahu" successfully completed at Sun Aug 21 12:42:10 2016 elapsed 0 00:02:03
Import completed

Finished restore at 21-AUG-16

Recovery Manager complete.

Judging from the RMAN restore log, it is clear that RMAN completed the following steps:
– restored the foreign tablespace's datafile(s) from the datafile backup set;
– restored the tablespace metadata dump from the metadata backup set;
– imported the tablespace metadata using impdp.
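
One follow-up step the post does not show: after the plug-in the transported tablespaces are left read-only on the target, so you would typically switch them back before use (a sketch, using the tablespace names from this example):

SQL> ALTER TABLESPACE tab1 READ WRITE;
SQL> ALTER TABLESPACE ind1 READ WRITE;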

Hope That Helps
Prashant Dixit


Days of 'catbundle PSU apply' are over, let's welcome 'DATAPATCH' in 12c!!

Posted by FatDBA on August 19, 2016

Hi Mates,
With the introduction of Oracle 12c, the SQL commands belonging to a patch are no longer installed by catbundle.sql but by the "datapatch" tool, located in the OPatch directory. In addition, datapatch first checks whether the requirements for the installation are met.

Let me put it more simply: datapatch is the new tool that automates the post-patch SQL actions for RDBMS patches. So in 12c you no longer use 'catbundle.sql psu apply'; it is all done using datapatch.

With Enterprise Manager and OPatchAuto we get further automation of database patching, as they call datapatch automatically after applying the binary patch.

Enterprise Manager: starting with version 12.1, Enterprise Manager calls datapatch to complete the post-patch actions upon any restart of a 12c or later database.

OPatchAuto: OPatchAuto calls datapatch to complete the post-patch actions after installing the binary patch and restarting the database.

OPatch: datapatch integration with OPatch is not possible, because OPatch is executed while the database is down, whereas datapatch requires the database to be open to complete its work.
When patches are installed or rolled back using OPatch, datapatch must be invoked explicitly if the patch readme instructs you to do so.

RAC specific: in a RAC environment, after the binary patch has been applied on all nodes, run datapatch from only one node to complete the post-install SQL deployment for the PSU; datapatch does not need to be run on every node.

Below are the steps you would normally perform while applying a patch (the original post illustrated each step with screenshots, which are not reproduced here):

1. Apply the patch to the Oracle Home binaries (steps 1 and 2 in the original post were screenshots only, presumably showing the OPatch binary apply).
2. Restart your database once the patch has been successfully applied to the binaries.
3. Apply the patch to the database using the datapatch utility from the ORACLE_HOME/OPatch directory.
4. Query the database to check the patch apply status, for example:

select * from DBA_REGISTRY_SQLPATCH;
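
For reference, datapatch is invoked from the OPatch directory; a minimal sketch of the call (standard utility flags, verify against your patch readme):

cd $ORACLE_HOME/OPatch
./datapatch -verbose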

Hope It Helps
Prashant Dixit


 