Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

What are Standby Redo Log Files in a Data Guard Environment?

Posted by FatDBA on February 21, 2013

What are Standby Redo Log Files and why are they important?

Recently my good friend Sowmya asked me about some internals of Standby Redo Log Files, and I soon found myself in a bit of a tight spot. So here I decided to put some light on this very basic thing one has to start with in Oracle Data Guard.

Standby Redo Logs: They are very similar to Online Redo Logs, with the only difference being that Standby Redo Logs are used to hold redo data received from the Primary DB. Standby Redo Logs are used only if you are using the Log Writer (LGWR) as the log transport mode to the Standby site.

When do we use Standby Redo Log Files:
*  When using the Real-Time Apply service.
*  When using the Maximum Protection or Maximum Availability protection modes.

Points To Remember:
* You should create standby redo logs on both the standby and the primary database, which is helpful in case of switchovers (it is not mandatory on the Primary database).
* The standby redo logs should be at least as big as the largest online redo log.
* You can create standby redo logs after creating the standby environment.

* In case of disaster and complete loss of the primary database, an entire redo log's worth of data will not be lost, because the standby redo log file on the standby site will contain some or all of the transactions contained in the lost redo log file. This has been one of the most useful enhancements in Data Guard technology, as it minimizes the loss of data while operating in MAXIMUM PERFORMANCE mode.
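As a rule of thumb from the Data Guard documentation, create one more standby redo log group per thread than the number of online redo log groups, each the same size as your online logs. A quick check of what already exists (v$standby_log is the standard view; output will vary per system):

SQL> SELECT group#, thread#, sequence#, status FROM v$standby_log;

Groups showing UNASSIGNED are idle and ready to receive redo.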

Example:
Creating standby redo log files:
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo03.log') SIZE 50M;

SQL> select group#, member, type from v$logfile;

GROUP#                          MEMBER                                                                            TYPE
------     ----------------------------------------------------------------     -------
4       /u01/app/oracle/oradata/DB11G/standby_redo01.log                      STANDBY
3       /u01/app/oracle/oradata/DB11G/standby_redo02.log                      STANDBY
5       /u01/app/oracle/oradata/DB11G/standby_redo03.log                      STANDBY

And here I've added online redo log files on the standby DB end, so it is ready for a future switchover:

ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo01.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo02.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo03.log') SIZE 50M;

Pic Credits: Google/Oracle.com

Posted in Advanced | Tagged: , | 1 Comment »

Enterprise Cloud Infrastructure for Dummies (Oracle Special Edition – PDF Format) E-Book

Posted by FatDBA on February 19, 2013

Get your own copy of Enterprise Cloud Infrastructure for Dummies, Oracle Special Edition.
Source: Oracle.com

Link: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11847803&src=7618000&Act=8


Posted in Advanced, Basics | Tagged: | Leave a Comment »

DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT

Posted by FatDBA on February 8, 2013

Well, there are a whole lot of changes when you start comparing Oracle 10g and 11g. DB_ULTRA_SAFE is one of the new parameters introduced in 11g which provides a mechanism to avoid block corruptions in your database. This parameter actually controls the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM & DB_LOST_WRITE_PROTECT parameters collectively.
Click on the links provided below to read more about these three initialization parameters.

DB_BLOCK_CHECKING

DB_BLOCK_CHECKSUM

DB_LOST_WRITE_PROTECT

Posted in Advanced | Tagged: | 2 Comments »

Image Copies VS BackupSets

Posted by FatDBA on February 7, 2013

Difference between Image Copies and BackupSets (RMAN):

Image Copies could be of two types:
1. RMAN Image Copies
2. O.S Image Copies

RMAN Image Copies
Use the RMAN copy command to create an image copy. If the original file needs to be replaced, and if the image copy is of a datafile, then you do not need to restore the copy. Instead, Oracle provides a switch command to point the control file at the copy and update the recovery catalog to indicate that the copy has been switched. Issuing the switch command in this case is equivalent to issuing the SQL statement ALTER DATABASE RENAME DATAFILE. You can then perform media recovery to make the copy current. Image copies have the advantage of being checked for corrupt blocks as they are read by RMAN.
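Below is a minimal sketch of that restore-less switch flow, assuming a hypothetical datafile 4 and backup path (in 10g and later, BACKUP AS COPY supersedes the older COPY command):

RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '/u01/backup/users01.dbf';
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';   # the file must be offline before the switch
RMAN> SWITCH DATAFILE 4 TO COPY;                 # repoint the control file at the copy
RMAN> RECOVER DATAFILE 4;                        # make the copy current
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';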

O/S Image Copies
Oracle supports image copies created by mechanisms other than RMAN, also known as O/S copies. For example, a copy of a datafile that you make with the UNIX cp command is an O/S copy. You must catalog such O/S copies with RMAN before using them with the restore or switch commands.
You can create an O/S copy when the database is open or closed. If the database is open and the datafile is not offline normal, then you must place the tablespace in hot backup mode, that is, issue the SQL statement ALTER TABLESPACE BEGIN BACKUP before creating the copy.
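A rough sketch of an O/S copy taken while the database is open (the tablespace name and paths are hypothetical):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
$ cp /u01/app/oracle/oradata/DB11G/users01.dbf /u01/backup/users01.dbf
SQL> ALTER TABLESPACE users END BACKUP;
RMAN> CATALOG DATAFILECOPY '/u01/backup/users01.dbf';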

BackupSets:
A backup set consists of one or more files in an RMAN-specific format, known as backup pieces. By default, a backup set consists of one backup piece. When backing up datafiles to backup sets, RMAN is able to skip some datafile blocks that do not currently contain data, reducing the size of backup sets and the time required to create them. This behavior, known as unused block compression, means that backups of datafiles as backup sets are generally smaller than image copy backups and take less time to write.
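For example, with standard RMAN syntax (the compressed variant ties in with the binary compression point below):

RMAN> BACKUP AS BACKUPSET DATABASE;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;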

Main Differences:
•RMAN supports binary compression of backup sets, where the backup set contents are compressed before being written to disk using a compression algorithm tuned for compression of datafiles and archived log files. But with Image copy no compression is done.

•A backup set never contains empty blocks. As RMAN passes through the datafiles, it simply skips blocks that have never been used. But image copy contains empty blocks. This means that backups of datafiles as backup sets are generally smaller than image copy backups and take less time to write.

•Incremental backups cannot be taken as image copies; they can only be taken as backup sets. (An image copy can, however, serve as the level 0 base that incremental backups are merged into, as in an incrementally updated backup strategy.)

•RMAN can back up backup sets directly to tape, if you have installed the RMAN drivers for your tape library. But RMAN can't back up image copies directly to tape.

Posted in Advanced | Tagged: | 4 Comments »

Shrinking Segments

Posted by FatDBA on February 7, 2013

Shrinking Segments:
A table shrink operation runs in two phases. The first phase does the compaction: rows are moved to the left part of the segment as much as possible. Internally, rows are moved in packets to avoid locking issues. After the rows have been moved, the second phase of the shrink operation starts: the high-water mark (HWM) is adjusted and the unused space is released. The COMPACT clause is useful if you have long-running queries that might span the shrink operation and attempt to read from blocks that have been reclaimed. When you specify the SHRINK SPACE COMPACT clause, the progress of the shrink operation is saved in the bitmap blocks of the corresponding segment. This means that the next time a shrink operation is executed on the same segment, the Oracle database server remembers what has been done already. You can then reissue the SHRINK SPACE clause without the COMPACT clause during off-peak hours to complete the second phase.
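A minimal sketch of the two-phase approach on a hypothetical table (row movement must be enabled first, and segment shrink requires an ASSM tablespace):

SQL> ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
SQL> ALTER TABLE scott.emp SHRINK SPACE COMPACT;  -- phase 1 only: compact the rows
SQL> ALTER TABLE scott.emp SHRINK SPACE;          -- phase 2: adjust the HWM and release space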


Posted in Advanced | Tagged: , | Leave a Comment »

DEFERRED_SEGMENT_CREATION and DB_ULTRA_SAFE Parameters (New to 11gr2)

Posted by FatDBA on February 7, 2013

Deferred segment creation (DEFERRED_SEGMENT_CREATION):

Oracle Database 11gR2 includes a new space allocation method. When you create a non-partitioned heap table, the table segment creation is deferred to the first row insert. This functionality is enabled by default with the DEFERRED_SEGMENT_CREATION initialization parameter set to TRUE.

Advantages of this new space allocation method:
1. A significant amount of disk space can be saved for applications that create hundreds or thousands of tables upon installation, many of which might never be populated.
2. The application installation time is reduced.

When you insert the first row into the table, the segments are created for the base table, its LOB columns, and its indexes. During segment creation, cursors on the table are invalidated. These operations have a small additional impact on performance.

A new parameter, DEFERRED_SEGMENT_CREATION, controls the feature. It can be enabled/disabled at the session or system level (it doesn't require an instance restart).
Alter session set deferred_segment_creation=true;
Alter system set deferred_segment_creation=true;

Deferred segment creation is enabled by default. It’s also possible to enable/disable it for a single table by specifying the deferred segment creation clause.
CREATE TABLE…. SEGMENT CREATION IMMEDIATE
CREATE TABLE… SEGMENT CREATION DEFERRED

Example:

SQL> SHOW PARAMETERS deferred_segment_creation
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
deferred_segment_creation            boolean     TRUE

SQL> CREATE TABLE seg_test(c number, d varchar2(500));
Table created.
SQL> SELECT segment_name FROM user_segments;
no rows selected

Inserting rows and creating segments:
SQL> INSERT INTO seg_test VALUES(1, 'aaaaaaa');
1 row created.

SQL> SELECT segment_name FROM user_segments;
SEGMENT_NAME
-------------------------------------------------------
SEG_TEST

DB_ULTRA_SAFE is a new parameter introduced in 11g.
It provides a combined mechanism to offer protection from various possible data corruptions and provides critical high availability advantages for Oracle Database. Setting the DB_ULTRA_SAFE initialization parameter configures the appropriate data protection block checking level in the database. It controls DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT.

DB_BLOCK_CHECKING: DB_BLOCK_CHECKING controls whether or not Oracle performs block checking for database blocks. Oracle checks a block by going through the data in the block, making sure it is logically self-consistent. Block checking can often prevent memory and data corruption. Block checking typically causes 1% to 10% overhead, depending on workload and the parameter value. The more updates or inserts in a workload, the more expensive it is to turn on block checking. You should set DB_BLOCK_CHECKING to FULL if the performance overhead is acceptable. For backward compatibility the use of FALSE (implying OFF) and TRUE (implying FULL) is preserved.
DB_BLOCK_CHECKSUM: detects redo and data block corruptions, catching corruption on the primary database and protecting the standby database. This parameter requires minimal CPU resources.
DB_LOST_WRITE_PROTECT (introduced with 11gR1): enables or disables a physical standby database's ability to detect lost write corruptions on both the primary and the physical standby database.

DB_ULTRA_SAFE Parameter can be set to 3 different values: OFF, DATA_ONLY and DATA_AND_INDEX. Default value is OFF.

Here are the descriptions of these values:
OFF: It will not change the values of DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT if they are explicitly set; otherwise all of them are set to their default values.

DATA_ONLY: It will set DB_BLOCK_CHECKING to medium, DB_LOST_WRITE_PROTECT to typical and DB_BLOCK_CHECKSUM to full.

DATA_AND_INDEX: It will set DB_BLOCK_CHECKING to full, DB_LOST_WRITE_PROTECT to typical, and DB_BLOCK_CHECKSUM to full.

The only difference between DATA_AND_INDEX and DATA_ONLY is DB_BLOCK_CHECKING. When DB_BLOCK_CHECKING is set to FULL, Oracle also performs semantic checks on index blocks.
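A quick sketch of enabling it. Note that DB_ULTRA_SAFE itself is a static parameter, so unlike the three underlying parameters it needs SCOPE=SPFILE and an instance restart:

SQL> ALTER SYSTEM SET db_ultra_safe=DATA_ONLY SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> SHOW PARAMETER db_block_checking

After the restart, DB_BLOCK_CHECKING should report MEDIUM, per the DATA_ONLY mapping above.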

Note: Please check the blog post from Francisco Munoz Alvarez (Oracle ACE Director) on the DB_ULTRA_SAFE parameter: http://oraclenz.wordpress.com/2009/12/16/db_ultra_safe-a-new-gem-for-high-availability/

Posted in Advanced | Tagged: , | 1 Comment »

Data Guard Configuration Attributes (LOG_ARCHIVE_DEST_n)

Posted by FatDBA on January 29, 2013

Today I will try to explain some of the Data Guard configuration attributes that we use with the LOG_ARCHIVE_DEST_n parameter:

AFFIRM and NOAFFIRM  |    ALTERNATE    |    COMPRESSION    |     DELAY        |       LOCATION and SERVICE         |          MANDATORY       |        MAX_CONNECTIONS        |         MAX_FAILURE       |        REOPEN       |        SYNC and ASYNC         |         VALID_FOR       |      NOREGISTER

AFFIRM and NOAFFIRM

Controls whether a redo transport destination acknowledges received redo data before or after writing it to the standby redo log:

AFFIRM—specifies that a redo transport destination acknowledges received redo data after writing it to the standby redo log.
NOAFFIRM—specifies that a redo transport destination acknowledges received redo data before writing it to the standby redo log.

  •  If neither the AFFIRM nor the NOAFFIRM attribute is specified, the default is AFFIRM when the SYNC attribute is specified and NOAFFIRM when the ASYNC attribute is specified.

Examples
The following example shows the AFFIRM attribute for a remote destination.

LOG_ARCHIVE_DEST_3='SERVICE=stby1 SYNC AFFIRM'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

ALTERNATE
Specifies an alternate archiving destination to be used when the original destination fails.

  • The ALTERNATE attribute is optional. If an alternate destination is not specified, then redo transport services do not automatically change to another destination if the original destination fails.
  • You can specify only one alternate destination for each LOG_ARCHIVE_DEST_n parameter, but several enabled destinations can share the same alternate destination.
  • Any destination can be designated as an alternate destination, given the following restrictions:

– At least one local mandatory destination is enabled.
– The number of enabled destinations must meet the defined LOG_ARCHIVE_MIN_SUCCEED_DEST parameter value.

  • A destination cannot be its own alternate.

Examples
In the sample initialization parameter file, LOG_ARCHIVE_DEST_1 automatically fails over to LOG_ARCHIVE_DEST_2 on the next archival operation if an error occurs or the device becomes full.

Example –  Automatically Failing Over to an Alternate Destination

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 MANDATORY ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='LOCATION=/disk2 MANDATORY'
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE

COMPRESSION
The COMPRESSION attribute is used to specify whether redo data is transmitted to a redo transport destination in compressed form or uncompressed form when resolving redo data gaps.
* Redo transport compression is a feature of the Oracle Advanced Compression option. You must purchase a license for this option before using the redo transport compression feature.
The following example shows the COMPRESSION attribute with the LOG_ARCHIVE_DEST_n parameter.
LOG_ARCHIVE_DEST_3='SERVICE=denver SYNC COMPRESSION=ENABLE'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

DELAY
Specifies a time lag between when redo data is archived on a standby site and when the archived redo log file is applied to the standby database.

Valid values: >= 0 minutes
Default value: 30 minutes
Requires attributes: SERVICE

  •  The DELAY attribute indicates the archived redo log files at the standby destination are not available for recovery until the specified time interval has expired. The time interval is expressed in minutes, and it starts when the redo data is successfully transmitted to, and archived at, the standby site.
  • The DELAY attribute may be used to protect a standby database from corrupted or erroneous primary data. However, there is a tradeoff because during failover it takes more time to apply all of the redo up to the point of corruption.
  • If you have real-time apply enabled, any delay that you set will be ignored.
  • You can override the specified delay interval at the standby site, as follows:

For a physical standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;

For a logical standby database:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;

The following example shows how to specify the DELAY attribute for this configuration:
LOG_ARCHIVE_DEST_1='LOCATION=/oracle/dbs/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=stbyB SYNC AFFIRM'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=stbyC DELAY=120'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

LOCATION and SERVICE
Each destination must specify either the LOCATION or the SERVICE attribute to identify either a local disk directory or a remote database destination where redo transport services can transmit redo data.

  • Either the LOCATION or the SERVICE attribute must be specified. There is no default.
  • You can specify up to nine additional local or remote destinations.
  • For the LOCATION attribute, you can specify one of the following:

LOCATION=local_disk_directory

This specifies a unique directory path name for a disk directory on the system that hosts the database. This is the local destination for archived redo log files.
LOCATION=USE_DB_RECOVERY_FILE_DEST (to use the flash recovery area as the archiving destination)

  • When you specify a SERVICE attribute:

You identify remote destinations by specifying the SERVICE attribute with a valid Oracle Net service name (SERVICE=net_service_name) that identifies the remote Oracle database instance to which the redo data will be sent.

Example 1   Specifying the LOCATION Attribute

LOG_ARCHIVE_DEST_2='LOCATION=/disk1/oracle/oradata/payroll/arch/'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

Example 2   Specifying the SERVICE Attribute

LOG_ARCHIVE_DEST_3='SERVICE=stby1'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MANDATORY
Specifies that filled online log files must be successfully archived to the destination before they can be reused.

  • The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter (where n is an integer from 1 to 10) specifies the number of destinations that must archive successfully before online redo log files can be overwritten.
  • You must have at least one local destination, which you can declare MANDATORY or leave as optional.
  • At least one local destination is operationally treated as mandatory, because the minimum value for the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter is 1.

Examples
The following example shows the MANDATORY attribute:

LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=denver MANDATORY'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MAX_CONNECTIONS
Enables multiple network connections to be used when sending an archived redo log file to a redo transport destination. Using multiple network connections can improve redo transport performance over high-latency network links.

Valid values: 1 to 5
Default value: 1

  • The MAX_CONNECTIONS attribute is optional. If it is specified, it is only used when redo transport services use ARCn processes for archival.
  • If MAX_CONNECTIONS is set to 1 (the default), redo transport services use a single ARCn process to transmit redo data to the remote destination.
  • If MAX_CONNECTIONS is set to a value greater than 1, redo transport services use multiple ARCn processes to transmit redo in parallel to archived redo log files at the remote destination. Each archiver (ARCn) process uses a separate network connection.
  • With multiple ARCn processes, redo transmission occurs in parallel, thus increasing the speed at which redo is transmitted to the remote destination.
  • Any standby database using archiver (ARCn) processes will not use standby redo logs if the MAX_CONNECTIONS attribute is specified. Thus, such destinations:

–  Cannot use real-time apply
– Cannot be configured as a redo forwarding destination

Example:

LOG_ARCHIVE_DEST_3='SERVICE=denver MAX_CONNECTIONS=3'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MAX_FAILURE
Controls the consecutive number of times redo transport services attempt to reestablish communication and transmit redo data to a failed destination before the primary database gives up on the destination.

  •  The MAX_FAILURE attribute is optional. By default, there are an unlimited number of archival attempts to the failed destination.
  • This attribute is useful for providing failure resolution for destinations to which you want to retry transmitting redo data after a failure, but not retry indefinitely.
  • When you specify the MAX_FAILURE attribute, you must also set the REOPEN attribute. Once the specified number of consecutive attempts is exceeded, the destination is treated as if the REOPEN attribute was not specified.
  • Once the failure count is greater than or equal to the value set for the MAX_FAILURE attribute, the REOPEN attribute value is implicitly set to zero, which causes redo transport services to transport redo data to an alternate destination (defined with the ALTERNATE attribute) on the next archival operation.

The following example allows redo transport services up to three consecutive archival attempts, tried every 5 seconds, to the arc_dest destination. If the archival operation fails after the third attempt, the destination is treated as if the REOPEN attribute was not specified.

LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=5 MAX_FAILURE=3'
LOG_ARCHIVE_DEST_STATE_1=ENABLE

REOPEN
Specifies the minimum number of seconds before redo transport services should try to reopen a failed destination.

  • Default Value 300 seconds
  • The REOPEN attribute is optional.
  • Redo transport services attempt to reopen failed destinations at log switch time.
  • Redo transport services check if the time of the last error plus the REOPEN interval is less than the current time. If it is, redo transport services attempt to reopen the destination.
  • REOPEN applies to all errors, not just connection failures. These errors include, but are not limited to, network failures, disk errors, and quota exceptions.

Example:

LOG_ARCHIVE_DEST_3='SERVICE=stby1 MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

VALID_FOR
Specifies whether redo data will be written to a destination, based on the following factors:
–   Whether the database is currently running in the primary or the standby role
–   Whether online redo log files, standby redo log files, or both are currently being archived on the database at this destination

  •  Default Value VALID_FOR=(ALL_LOGFILES, ALL_ROLES)
  • The VALID_FOR attribute is optional. However, Oracle recommends that the VALID_FOR attribute be specified for each redo transport destination at each database in a Data Guard configuration so that redo transport continues after a role transition to any standby database in the configuration.
  • To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role):

The redo_log_type keyword identifies the destination as valid for archiving one of the following:

ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
STANDBY_LOGFILE—This destination is valid only when archiving standby redo log files.
ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.
The database_role keyword identifies the role in which this destination is valid for archiving:

PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
STANDBY_ROLE—This destination is valid only when the database is running in the standby role.
ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.

LOG_ARCHIVE_DEST_1='LOCATION=/disk1/oracle/oradata VALID_FOR=(ALL_LOGFILES, ALL_ROLES)'

NOREGISTER
Indicates that the location of the archived redo log file should not be recorded at the corresponding destination.

  • The NOREGISTER attribute is optional if the standby database destination is a part of a Data Guard configuration.
  • The NOREGISTER attribute is required if the destination is not part of a Data Guard configuration.

LOG_ARCHIVE_DEST_5='NOREGISTER'
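To verify how these attributes landed on a running instance, a quick sanity check against the standard v$archive_dest view (column list trimmed for readability):

SQL> SELECT dest_id, status, binding, destination FROM v$archive_dest WHERE status <> 'INACTIVE';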

Posted in Advanced | Tagged: , | Leave a Comment »

Standby: Quick Switchover with Physical Database.

Posted by FatDBA on January 27, 2013

Oracle 10g Data Guard – Quick Switchover with Physical Standby Database

You have a standby database which is set up using Data Guard and works properly. You may want to test a switchover, or perform one to reduce primary database downtime during OS or hardware upgrades. A switchover allows the primary database to switch roles with its standby database. There is no data loss during a switchover. You can switch back to the original primary database later by performing another switchover.

In case of primary database failure, you will need to perform failover to transition the standby database to the primary role. After a failover, the original primary database can no longer participate in the Data Guard configuration. So if the original Primary database is still accessible, you should always consider a switchover first.

This document only covers switchover involving a physical standby database. In this example, the original primary database is called PRIM and the original standby database is called STAN.

I. Before Switchover:

1. As I always recommend, test the Switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following commands on Primary database and Standby database to find out:

SQL> select sequence#, applied from v$archived_log;
Perform a log switch (ALTER SYSTEM SWITCH LOGFILE) if necessary.

In order to apply redo data to the standby database as soon as it is received, use real-time apply, as shown below.
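On the standby, real-time apply for a physical standby is started as follows (this assumes standby redo logs are in place):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;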

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, Switch the original physical standby db STAN to primary role;
Open another prompt and connect to SQLPLUS:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;

4. After step 3 completes:
– If you are using Oracle Database 10g release 1, you will have to Shut down and restart the new primary database STAN.
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

– If you are using Oracle Database 10g release 2, you can open the new Primary database STAN:
SQL>ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM.
SQL>ALTER SYSTEM SWITCH LOGFILE;

 

FAST-START Failover:
Another way to perform role transitions automatically is by using the Data Guard Broker (DGMGRL). FAST-START Failover is a widely accepted solution for performing automatic failover in case of production faults/errors.

It requires:
1. The protection mode should be Maximum Availability.
2. A FAST-START Failover Observer.
3. A production and a standby database.
A minimal broker sketch follows this list.
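Here it is (the credentials and service name are hypothetical, and the observer normally runs on a third host, separate from both primary and standby):

$ dgmgrl sys/password@PRIM
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;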


 


Posted in Advanced | Tagged: , | Leave a Comment »

Recovery Error: ORA-01207: file is more recent than control file – old control file

Posted by FatDBA on January 21, 2013

Solution steps for when you start receiving ORA-01207.

Reason: The control file change sequence number in the data file is greater than the number in the control file. This implies that the wrong control file is being used. Note that repeatedly causing this error can make it stop happening without correcting the real problem. Every attempt to open the database will advance the control file change sequence number until it is great enough.

Scenario: In my scenario one of our SIT boxes lost all copies of the multiplexed control files, and a couple of minutes later the database crashed. While trying to fix the problem, we discovered that we didn't have backup copies of the control files available, but we did have a mount point backup which holds all backup information. However, due to issues with our SMARTS (monitoring tool) backup script, the control file backup had failed to update. So we didn't know that the available control file backup was too old and had a very old sequence number registered.

So, after restoring the control file from that outdated backup, we started receiving this error message during database OPEN:

SQL> alter database open;

ORA-00283: recovery session canceled due to errors
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/app/oracle/oradata/rac10g/system01.dbf'
ORA-01207: file is more recent than control file – old control file

Solution: recreate the control file.

1) dump controlfile to trace
SQL> alter database backup controlfile to trace as '/tmp/ccc.sql';

2) startup database nomount
SQL> startup nomount

3) create the control file
SQL> @/tmp/ccc.sql
Control file created.

SQL> recover database using backup controlfile ;
ORA-00279: change 1904510 generated at 12/11/2012 15:46:23 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 1904510 for thread 1 is in sequence #62

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+DATA/orcl/onlinelog/group_1.261.801074115
ORA-00310: archived log contains sequence 61; sequence 62 required
ORA-00334: archived log: '+DATA/orcl/onlinelog/group_1.261.801074115'

Now it is asking for log sequence 62, which is not available under the archive destination:
ASMCMD> ls
thread_1_seq_52.314.801739289
thread_1_seq_53.294.801743225
thread_1_seq_54.295.801743259
thread_1_seq_55.285.801743403
thread_1_seq_56.283.801757235
thread_1_seq_57.282.801758303
thread_1_seq_58.281.801759635
thread_1_seq_59.286.801761231
thread_1_seq_60.305.801762327
thread_1_seq_61.296.801762385

We only have seq:61 available.

Hint: It could still be inside the redo log groups, not yet archived before the crash or control file loss.
Hence we'll check both of the available redo groups to find seq 62:
+DATA/orcl/onlinelog/group_1.261.801074115
+DATA/orcl/onlinelog/group_2.262.801074117
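Once the database is mounted (as below), a quick way to see which group currently holds which sequence is the standard v$log view; the group reporting sequence# 62 (CURRENT or ACTIVE) is the one to feed to the recovery prompt:

SQL> SELECT group#, sequence#, status FROM v$log;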

SQL> startup  mount
ORACLE instance started.

Total System Global Area 1636814848 bytes
Fixed Size                  2213856 bytes
Variable Size            1140852768 bytes
Database Buffers          486539264 bytes
Redo Buffers                7208960 bytes
Database mounted.
SQL> recover database using backup controlfile ;
ORA-00279: change 1904510 generated at 12/11/2012 15:46:23 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 1904510 for thread 1 is in sequence #62

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+DATA/orcl/onlinelog/group_1.261.801074115
ORA-00310: archived log contains sequence 61; sequence 62 required
ORA-00334: archived log: '+DATA/orcl/onlinelog/group_1.261.801074115'

SQL> recover database using backup controlfile ;
ORA-00279: change 1904510 generated at 12/11/2012 15:46:23 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 1904510 for thread 1 is in sequence #62

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+DATA/orcl/onlinelog/group_2.262.801074117
Log applied.
Media recovery complete.

This shows it was available in redo group 2 (which could be the CURRENT or an ACTIVE group), and it was successfully applied.

Finally, we open the DB in RESETLOGS mode.

SQL> alter database open resetlogs ;
Database altered.

Posted in Advanced | Tagged: , | Leave a Comment »

TKProf Utility.

Posted by FatDBA on January 17, 2013

TKPROF:
--------------------------
TKPROF is an Oracle-provided trace-reading utility which converts raw SQL trace files into a more human-readable format.
It requires one to enable SQL_TRACE and set TIMED_STATISTICS, which makes Oracle write trace files with extra timing information.

Example:
SQL> alter session set timed_statistics = true;
Session altered.

SQL> alter session set sql_trace = true;
Session altered.

I’ve created a query to find my Oracle Server process id:
SELECT p.spid oracle_dedicated_process, s.process clientpid FROM v$process p, v$session s WHERE p.addr = s.paddr AND s.sid = (select sid from v$mystat where rownum = 1);

ORACLE_DEDIC CLIENTPID
------------ ------------
11653        11636

SQL> select * from scott.emp where job='CLERK';

EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH      CLERK           7902 17-DEC-80        800                    20
7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
7900 JAMES      CLERK           7698 03-DEC-81        950                    30
7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

[oracle@localhost udump]$ ls -ltr
total 128

-rw-r----- 1 oracle oinstall   635 Jan 17 23:05 orcl_ora_11331.trc
-rw-r----- 1 oracle oinstall   692 Jan 17 23:05 orcl_ora_11358.trc
-rw-r----- 1 oracle oinstall 21454 Jan 17 23:08 orcl_ora_11359.trc
-rw-r----- 1 oracle oinstall  2862 Jan 17 23:31 orcl_ora_11653.trc

[oracle@localhost udump]$ tkprof orcl_ora_11653.trc Prashant_tkprof_study.txt

TKPROF: Release 10.2.0.1.0 - Production on Thu Jan 17 23:32:41 2013
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

[oracle@localhost udump]$ ls -ltr
total 145

-rw-r--r-- 1 oracle oinstall  5925 Jan 17 23:32 Prashant_tkprof_study.txt

 
[oracle@localhost udump]$ cat Prashant_tkprof_study.txt

TKPROF: Release 10.2.0.1.0 - Production on Thu Jan 17 23:32:41 2013

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Trace file: orcl_ora_11653.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace = true

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
********************************************************************************

SELECT p.spid oracle_dedicated_process, s.process clientpid
FROM
v$process p, v$session s WHERE p.addr = s.paddr AND s.sid = (select sid from
v$mystat where rownum = 1)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
1  NESTED LOOPS  (cr=0 pr=0 pw=0 time=2133 us)
1   HASH JOIN  (cr=0 pr=0 pw=0 time=2511 us)
1    FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=643 us)
1     COUNT STOPKEY (cr=0 pr=0 pw=0 time=20 us)
1      FIXED TABLE FULL X$KSUMYSTA (cr=0 pr=0 pw=0 time=14 us)
1       FIXED TABLE FULL X$KSUSGIF (cr=0 pr=0 pw=0 time=4 us)
20    FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=43 us)
1   FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=6 us)

********************************************************************************

select *
from
scott.emp where job='CLERK'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          8          0           4
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          8          0           4

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
4  TABLE ACCESS FULL EMP (cr=8 pr=0 pw=0 time=388 us)

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.01       0.01          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0          8          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        9      0.01       0.02          0          8          0           5

Misses in library cache during parse: 2
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        0      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0

3  user  SQL statements in session.
0  internal SQL statements in session.
3  SQL statements in session.
********************************************************************************
Trace file: orcl_ora_11653.trc
Trace file compatibility: 10.01.00
Sort options: default

1  session in tracefile.
3  user  SQL statements in trace file.
0  internal SQL statements in trace file.
3  SQL statements in trace file.
3  unique SQL statements in trace file.
48  lines in trace file.
35  elapsed seconds in trace file.
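One closing note: the report above used the default sort order. TKPROF also accepts sort and filter options; for example (sort keys and the sys flag are standard options, file names are from this session):

$ tkprof orcl_ora_11653.trc sorted_report.txt sort=exeela,fchela sys=no

Here exeela/fchela order statements by elapsed execute and fetch time, so the most expensive SQL appears first, and sys=no drops the SYS-recursive noise.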

Posted in Advanced | Tagged: , , | Leave a Comment »