Tales From A Lazy Fat DBA

Disclaimer!

    FatDBA or Oracle 'Ant' is an independent web blog/site. The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned in this website may be the trademarks of their respective owners.

Posts Tagged ‘12c’

Auto Stats Gathering in Oracle 12c & Something Interesting :)

Posted by FatDBA on December 14, 2017

Hi Fellas,
Starting with Oracle 12c, a new feature automatically collects optimizer statistics when you perform bulk loads using either of two methods:
– CREATE TABLE AS SELECT (CTAS)
– INSERT INTO … SELECT (into an empty table, using a direct-path insert)

SQL> explain plan for create table dixittab as select * from scottisdead;
Explained.
 
SQL> select * from table(DBMS_XPLAN.DISPLAY);
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 14312189
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |               |   500K|  8812K|   612   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | DIXITTAB      |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |               |   500K|  8812K|   371   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SCOTTISDEAD   |   500K|  8812K|   371   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------
10 rows selected. 

In the execution plan above you'll see the new operation named "OPTIMIZER STATISTICS GATHERING" at Id 2.
Let's verify that the stats were actually collected.

SQL> select table_name, last_analyzed from user_tables where table_name = 'DIXITTAB';
 
TABLE_NAME       LAST_ANALYZED
---------------- -------------
DIXITTAB         12-DEC-17

Yup, stats were collected!
Stats are auto-collected the same way during the other bulk-load method (INSERT INTO … SELECT), as sketched below.
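One nuance worth remembering for the second method: the target table must be empty and the insert must be direct-path. A minimal sketch of the idea (reusing the table names from above for illustration):

SQL> truncate table dixittab;

SQL> insert /*+ APPEND */ into dixittab select * from scottisdead;
-- loading into an empty segment: stats are gathered automatically

SQL> commit;

SQL> insert /*+ APPEND */ into dixittab select * from scottisdead;
-- the segment is no longer empty, so this load will NOT gather stats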

There may be times when you want to disable this feature, for example:
– Long-running insert operations that spend a lot of time on stats gathering.
– Extremely large data sets where you don't want to collect stats during the load.

To achieve that, we have a hint that instructs Oracle not to gather table statistics:

SQL> create table dixittab as select /*+ NO_GATHER_OPTIMIZER_STATISTICS */ * from scottisdead;
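To confirm the hint did its job, check the dictionary again (a quick sketch, assuming the table was re-created with the hint as above):

SQL> select table_name, last_analyzed from user_tables where table_name = 'DIXITTAB';
-- LAST_ANALYZED should be NULL this time, since the hint suppressed stats collection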

Now, something interesting I would like to discuss…
Is there any other condition under which the stats won't be collected automatically, apart from blocking them with the NO_GATHER_OPTIMIZER_STATISTICS hint?

Let's try some conventional bulk loading using the INSERT INTO … SELECT method.
For this test I am intentionally commenting out one column on each side: the DATE_VAL column of the newly created table TABLE1 and the DATE_VALUE column of the source table SAMPLE.


SQL> create table table1 (ident number, date_val date, text_val varchar2(4000));
Table created.


SQL> explain plan for
  2  insert /*+ append */ into table1
  3  (IDENT
  4  --, DATE_VAL
  5  , TEXT_VAL)
  6  SELECT ID
  7  --, DATE_VALUE
  8  , TEXT_VALUE
  9  FROM SAMPLE;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------

Plan hash value: 1523099961
-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |        |   100K|  2539K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT    | TABLE1 |       |       |            |          |
|   2 |   TABLE ACCESS FULL| SAMPLE |   100K|  2539K|   154   (1)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

😦 😦 Why was the auto stats gathering behavior not repeated this time??

This happened because Oracle needs the insert to populate all of the table's columns for the OPTIMIZER STATISTICS GATHERING operation to kick in. Let me show you what I mean:

SQL> explain plan for
  2  insert /*+ append */ into table1
  3  (IDENT
  4  , DATE_VAL
  5  , TEXT_VAL)
  6  SELECT ID
  7  , DATE_VALUE
  8  , TEXT_VALUE
  9  FROM SAMPLE;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------

Plan hash value: 1523099961
-------------------------------------------------------------------------------------------
| Id  | Operation                        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | TABLE1 |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SAMPLE |   100K|  3320K|   154   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------

10 rows selected.

Yes, the stats were collected this time, once we included all the columns of the table.
I haven't seen this restriction documented anywhere for this new 12c feature. Hope Oracle adds it to the documentation soon 🙂
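By the way, there is a handy way to tell whether a table's statistics came from online bulk-load gathering rather than a regular DBMS_STATS run: in 12c the NOTES column of USER_TAB_COL_STATISTICS is set to STATS_ON_LOAD for such statistics. A quick sketch against our test table:

SQL> select column_name, notes from user_tab_col_statistics where table_name = 'TABLE1';
-- NOTES shows STATS_ON_LOAD for stats gathered during the bulk load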

Hope It Helps!
Prashant Dixit



DB Upgrade Failed! :( – Have to start it from scratch, really??

Posted by FatDBA on December 4, 2017

Hi Mates,

Let's recall the list of prerequisites that we perform before a database upgrade!
A few I quickly recall are: a full RMAN backup, creating a guaranteed restore point, etc.

But do these options allow us to resume a failed upgrade activity? — NO.

Let's go back to the time before Oracle 12c (e.g. 10g, 11g).

Resuming a failed upgrade was not possible: you had to rerun catupgrd.sql with the database opened in UPGRADE mode, but that starts the upgrade from the beginning and won't resume from the point where it failed last time, so the same amount of time is consumed once again:

SQL> startup upgrade
SQL> spool /home/dixit/upgradedir/catupgrdscriptout.log
SQL> @?/rdbms/admin/catupgrd.sql

Now let's talk about 12c. Starting with 12c Release 1 we have the all-new Parallel Upgrade Utility (catctl.pl). You still have to open your database in UPGRADE mode, and the upgrade is driven by this new Perl script.
As for restarting a failed upgrade, here we get a small relief: you can restart the upgrade from the phase where it crashed or stopped, using the -p option.

Example:

Serial Phase #:56 [UPGR] Files:1 Time: 3s
******************* Migration ******************
Restart Phase #:57 Files: 1 Time: 0s
Serial Phase #:58 Files: 1 Time: 102s
Restart Phase #:59 Files: 1 Time: 1s
Serial Phase #:60 Files: 1 Time: 221s
Restart Phase #:61 Files: 1 Time: 0s
Serial Phase #:62 Files: 1 Time: 25s
Restart Phase #:63 Files: 1 Time: 0s
Serial Phase #:64 Files: 1 Time: 2s
Serial Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/db_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/db_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/db_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > catupgrd_datapatch_upgrade.log 2> catupgrd_datapatch_upgrade.err FAILED

As we can see, the upgrade failed at phase number 65, so we can restart it from that phase by passing the phase number to -p; the upgrade process won't repeat the steps that already completed successfully.

$ORACLE_HOME/perl/bin/perl catctl.pl -p 65 -l /home/dixit/upgradedir catupgrd.sql

******************* Migration ******************
Serial Phase #:65 Files: 1 Time: 28s
Serial Phase #:66 Files: 1 Time: 3921s
Serial Phase #:68 Files: 1 Time: 0s
Serial Phase #:69 Files: 1 Time: 57s
Serial Phase #:70 Files: 1 Time: 1027s
Serial Phase #:71 Files: 1 Time: 3s
Serial Phase #:72 Files: 1 Time: 0s
Serial Phase #:73 Files: 1 Time: 18s
Grand Total Time: 6459s
LOG FILES: (catupgrd*.log)
Upgrade Summary Report Located in:
/u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/JANI/upgrade/upg_summary.log
Grand Total Upgrade Time: [0d:0h:40m:39s]

Now, let's discuss the main objective of this new blog entry.
Starting with Oracle 12c Release 2 we get even more control over upgrade activities: with this release you can resume the upgrade from exactly where it stopped last time. Oracle introduced a new -R flag for catctl.pl to achieve this, so you don't even have to worry about the phase number 🙂

$ORACLE_HOME/perl/bin/perl catctl.pl -R -l /home/dixit/upgradedir catupgrd.sql
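Whichever way the upgrade is resumed, once it finishes, a simple sanity check (a sketch using the standard dictionary view) is to confirm that all components are VALID and show the new version:

SQL> select comp_id, version, status from dba_registry order by comp_id;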

Hope It Helps
Prashant Dixit


The all-new Parallel Upgrade Utility in 12c.

Posted by FatDBA on October 24, 2017

With Oracle Database 12c comes the all-new Parallel Upgrade Utility, catctl.pl. This utility replaces the catupgrd.sql script that was used in earlier releases.
Although you can still use the catupgrd.sql script, it is deprecated starting with Oracle Database 12c and will be removed in future releases.
Oracle recommends that database upgrades be performed with the new Parallel Upgrade Utility, catctl.pl.

If you choose to run the catupgrd.sql script instead of running catctl.pl, doing so now requires an additional input argument as follows:

SQL> @catupgrd.sql PARALLEL=NO

If you run catupgrd.sql without the parameter, then Oracle displays the following error message:

NOTE:

The catupgrd.sql script is being deprecated in the 12.1 release of Oracle Database. Customers are encouraged to use catctl.pl as the replacement for catupgrd.sql when upgrading the database dictionary.

To run the upgrade with the Parallel Upgrade Utility instead (here with four parallel processes):

cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catupgrd.sql
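If you want the upgrade logs in a specific directory, catctl.pl also accepts the -l option (the path below is illustrative):

$ORACLE_HOME/perl/bin/perl catctl.pl -n 4 -l /home/dixit/upgradedir catupgrd.sql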

Refer to the Oracle Database Upgrade Guide for more information.

Hope It Helps
Prashant Dixit


Cross Platform Migrations: ‘As Easy As Pie’ in Oracle 12c

Posted by FatDBA on August 22, 2016

The legendary transportable tablespace feature was introduced in Oracle 8i to make it convenient to move large amounts of data between databases. Starting with Oracle 10g, this useful feature was enhanced with cross-platform support, which allowed one or more tablespaces to be transported between databases deployed on different hardware platforms, even between platforms with different endian formats.

So, up to 11g, the migration activity involved RMAN, EXPDP, and IMPDP to transport tablespace(s) across platforms, together with the RMAN CONVERT command. Below are the steps required to perform the migration up to 11g.

Step 1: Check Platform Support and File Conversion Requirement
Step 2: Identify Tablespaces to be Transported and Verify Self-containment (see the sketch after this list)
Step 3: Check for Problematic Data Types
Step 4: Check for Missing Schemas and Duplicate Tablespace and Object Names
Step 5: Make Tablespaces Read-only in Source Database
Step 6: Extract Metadata from Source Database (We could use either data pump or original export to do this)
Step 7: Copy Files to Target Server and Convert if Necessary (Conversion involves RMAN)
Step 8: Import Metadata into Target Database (This step is sometimes called “plugging in” the tablespaces. Again we can use data pump or original import).
Step 9: Copy Additional Objects to Target Database as Desired
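For instance, the self-containment check in Step 2 can be done with the DBMS_TTS package (a minimal sketch, using the tablespace names that appear later in this post):

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('TAB1,IND1', TRUE);

SQL> select * from transport_set_violations;
-- no rows returned means the tablespace set is self-contained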

With Oracle Database 12c comes a very easy and novel way to do the same: the ability to transport tablespaces across platforms using just RMAN and RMAN (compressed) backup sets!

Let me show you how we can transport a tablespace from Oracle Linux to Oracle Solaris, which is a cross-platform migration between different endian formats: Solaris is a big-endian platform, whereas Linux is little endian.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production

SQL> select platform_id, platform_name, endian_format
  2  from v$transportable_platform;

PLATFORM_ID PLATFORM_NAME                                      ENDIAN_FORMAT
----------- -------------------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                            Big
          2 Solaris[tm] OE (64-bit)                            Big
         10 Linux IA (32-bit)                                  Little
         11 Linux IA (64-bit)                                  Little
         13 Linux x86 64-bit                                   Little
(output trimmed to the platforms relevant here)

SQL> SELECT tablespace_name, segment_type, COUNT(*),
  2  SUM (bytes) / 1024 / 1024 mb
  3  FROM dba_segments
  4  WHERE owner = 'DIXIT'
  5  GROUP BY tablespace_name, segment_type
  6  ORDER BY 1, 2 DESC;

TABLESPACE_NAME SEGMENT_TYPE   COUNT(*)      MB
--------------- ------------ ---------- -------
IND1            INDEX                88  1353.4
TAB1            TABLE                41  4079.6
TAB1            LOBSEGMENT            3     0.4
TAB1            LOBINDEX              3     0.2
TAB1            INDEX                53   106.4

Make Tablespaces Read-only in Source Database
With today’s filers and sophisticated storage systems, it is often possible to take a filer “snapshot” or split a mirror in order to get a copy of the data files very quickly. Extracting metadata is also quick. So, on a system with a good storage system, tablespaces may only need to be read-only for a few minutes.
NOTE: In 12c we can use a procedure that keeps the downtime to a minimum with the ‘Incremental Cross-Platform Transportable Tablespaces’. It also uses RMAN transportable backupsets but is a slightly more complicated procedure.

We put the tablespaces into read-only mode with the following statements:

SQL> ALTER TABLESPACE tab1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE ind1 READ ONLY;
Tablespace altered.

To create a transportable tablespace backup we have two RMAN options: BACKUP FOR TRANSPORT and BACKUP TO PLATFORM.

What's the difference between the two?
Answer: where the datafile conversion takes place. Use the BACKUP FOR TRANSPORT command if the datafile conversion is to be performed on the target system, and the BACKUP TO PLATFORM command if the conversion is to be performed on the source system.

For this test we will convert the datafiles on the source, using the BACKUP TO PLATFORM command with the name of the target platform as its argument.
We also have to provide some additional information, such as where RMAN should place the backup sets: RMAN will create one backup set with the datafile(s) and a second backup set with the metadata dump that EXPDP creates on RMAN's behalf. For this test I will create a compressed transportable backup set.

$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Aug 21 10:49:57 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> backup to platform 'Solaris[tm] OE (64-bit)' as compressed backupset
2> tablespace xtransport format '/tmp/dbfilebackups.bck'
3> datapump format '/tmp/infometaexpdp.bck';
Starting backup at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces…
EXPDP> Starting "SYS"."TRANSPORT_EXP_TUNEDB_Y7OJ":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TRANSPORT_EXP_V121_pyAn" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TRANSPORT_EXP_TUNEDB_Y7OJ is:
EXPDP> /u01/app/oracle/product/12.1.0.2.0/db_1/dbs/backup_transporttbs_tunedb_181881.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace XTRANSPORT:
EXPDP> /u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
EXPDP> Job "SYS"."TRANSPORT_EXP_TUNEDB_Y7OJ" successfully completed at Sun Aug 21 10:53:55 2016 elapsed 0 00:04:03
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/dbfilebackups.bck tag=TAG201698188T888 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/db/v121/data/V121/datafile/o1_mf_ttftest_dixit_.dbf
channel ORA_DISK_1: starting piece 1 at 21-AUG-16
channel ORA_DISK_1: finished piece 1 at 21-AUG-16
piece handle=/tmp/infometaexpdp.bck tag=TAG201678777U998 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:06
Finished backup at 21-AUG-16

Recovery Manager complete.

So, in short, RMAN performed the following activities:
– Verified that the tablespaces to be transported are self-contained (TRANSPORT_SET_CHECK).
– Extracted metadata from the source database using EXPDP.
– Created a compressed backup set containing the tablespace's datafile.
– Created a second backup set containing the metadata dump.

Now it's time to restore the transportable backup set!

$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Aug 21 12:39:13 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
connected to target database: tunedb (DBID=1989879787)

RMAN> restore from platform 'Solaris[tm] OE (64-bit)'
2> foreign tablespace IND1, TAB1 to new
3> from backupset '/tmp/dbfilebackups.bck'
4> dump file from backupset '/tmp/infometaexpdp.bck';
Starting restore at 21-AUG-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=67 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace IND1, TAB1
channel ORA_DISK_1: reading from backup piece /tmp/dbfilebackups.bck
channel ORA_DISK_1: restoring foreign file 9 to /u01/db/tunedb/data/tunedb/datafile/o1_mf_testtransport_ab77hho11_.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/dbfilebackups.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0.2.0/db/dbs/o1_mf_ttftest_dixit_.dmp
channel ORA_DISK_1: reading from backup piece /tmp/infometaexpdp.bck
channel ORA_DISK_1: foreign piece handle=/tmp/infometaexpdp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04

Performing import of metadata…
IMPDP> Master table "SYS"."TRANSDIXIT_IMPORT_tunedb_g7aahu" successfully loaded/unloaded
IMPDP> Starting "SYS"."TRANSDIXIT_IMPORT_tunedb_g7aahu":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYS"."TRANSDIXIT_IMPORT_tunedb_g7aahu" successfully completed at Sun Aug 21 12:42:10 2016 elapsed 0 00:02:03
Import completed

Finished restore at 21-AUG-16

Recovery Manager complete.

Going by the RMAN restore log, it is clear that RMAN completed the following steps:
– Restored the foreign tablespace's datafile from the datafile backup set.
– Restored the metadata dump file from the metadata backup set.
– Imported the tablespace metadata using IMPDP.
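One last step worth remembering (a sketch, assuming the same tablespace names): the plugged-in tablespaces arrive in read-only mode, so once the import completes, make them read-write on the target:

SQL> ALTER TABLESPACE tab1 READ WRITE;

SQL> ALTER TABLESPACE ind1 READ WRITE;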

Hope That Helps
Prashant Dixit


Days of ‘catbundle PSU apply’ are over, let's welcome ‘DATAPATCH’ in 12c!!

Posted by FatDBA on August 19, 2016

Hi Mates,
With the introduction of Oracle 12c, the SQL commands belonging to a patch are no longer installed by catbundle.sql but by the "datapatch" tool, located in the OPatch directory. In addition, datapatch first checks whether the requirements for the installation are met.

To put it more simply: datapatch is the new tool that automates post-patch SQL actions for RDBMS patches. So in 12c you no longer run ‘catbundle.sql psu apply’; this is all done using datapatch.
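Invoking it is straightforward; a minimal sketch (run from the Oracle Home that owns the database, after the binary patch is in place):

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose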

With Enterprise Manager and OPatchAuto we get further automation of database patching, as they call datapatch automatically after applying the binary patch.

Enterprise Manager: Starting with version 12.1, Enterprise Manager calls datapatch to complete post-patch actions upon any 12c or later database restart.

OPatchAuto: OPatchAuto calls datapatch to complete post-patch actions upon installation of the binary patch and restart of the database.

OPatch: Datapatch integration with OPatch is not possible, as OPatch is executed while the database is down, whereas datapatch requires the database to be open to complete its work.
When patches are installed or rolled back using OPatch, datapatch needs to be invoked explicitly if the patch readme instructs you to do so.

RAC specific: In a RAC environment, after the binary patch has been applied on all nodes, run datapatch to complete the post-install SQL deployment for the PSU from one node only. Datapatch need not be run on all the nodes.

Below are the steps that you normally perform while applying a patch (screenshots omitted):

1. Apply the binary patch to the Oracle Home using OPatch.
2. Restart your database once the patch is successfully applied to the binaries.
3. Apply the patch to the database using the datapatch utility from the ORACLE_HOME/OPatch directory.
4. Query the database to check which PSU has been applied, using the following SQL statement:

select * from DBA_REGISTRY_SQLPATCH;
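For a more readable view of the patch history, you can pick specific columns (a sketch; these columns exist in the 12c DBA_REGISTRY_SQLPATCH view):

SQL> select patch_id, action, status, description
  2  from dba_registry_sqlpatch
  3  order by action_time;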

Hope It Helps
Prashant Dixit


 