Tales From A Lazy Fat DBA

It's all about databases & their performance, troubleshooting & much more…. ¯\_(ツ)_/¯

Archive for January, 2013

Data Guard Configuration Attributes (LOG_ARCHIVE_DEST_n)

Posted by FatDBA on January 29, 2013

Today I will try to explain some of the Data Guard configuration attributes that are used with the LOG_ARCHIVE_DEST_n parameter:

AFFIRM and NOAFFIRM | ALTERNATE | COMPRESSION | DELAY | LOCATION and SERVICE | MANDATORY | MAX_CONNECTIONS | MAX_FAILURE | REOPEN | SYNC and ASYNC | VALID_FOR | NOREGISTER

AFFIRM and NOAFFIRM

Controls whether a redo transport destination acknowledges received redo data before or after writing it to the standby redo log:

AFFIRM—specifies that a redo transport destination acknowledges received redo data after writing it to the standby redo log.
NOAFFIRM—specifies that a redo transport destination acknowledges received redo data before writing it to the standby redo log.

  •  If neither the AFFIRM nor the NOAFFIRM attribute is specified, the default is AFFIRM when the SYNC attribute is specified and NOAFFIRM when the ASYNC attribute is specified.

Examples
The following example shows the AFFIRM attribute for a remote destination.

LOG_ARCHIVE_DEST_3='SERVICE=stby1 SYNC AFFIRM'
LOG_ARCHIVE_DEST_STATE_3=ENABLE
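
For comparison, a minimal sketch of an asynchronous remote destination (the service name stby2 is illustrative); with ASYNC, NOAFFIRM is the default and is spelled out here only for clarity:

LOG_ARCHIVE_DEST_3='SERVICE=stby2 ASYNC NOAFFIRM'
LOG_ARCHIVE_DEST_STATE_3=ENABLE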

ALTERNATE
Specifies an alternate archiving destination to be used when the original destination fails.

  • The ALTERNATE attribute is optional. If an alternate destination is not specified, then redo transport services do not automatically change to another destination if the original destination fails.
  • You can specify only one alternate destination for each LOG_ARCHIVE_DEST_n parameter, but several enabled destinations can share the same alternate destination.
  • Any destination can be designated as an alternate destination, given the following restrictions:

- At least one local mandatory destination is enabled.
- The number of enabled destinations must meet the defined LOG_ARCHIVE_MIN_SUCCEED_DEST parameter value.

  • A destination cannot be its own alternate.

Examples
In the sample initialization parameter file, LOG_ARCHIVE_DEST_1 automatically fails over to LOG_ARCHIVE_DEST_2 on the next archival operation if an error occurs or the device becomes full.

Example –  Automatically Failing Over to an Alternate Destination

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 MANDATORY ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='LOCATION=/disk2 MANDATORY'
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE

COMPRESSION
The COMPRESSION attribute is used to specify whether redo data is transmitted to a redo transport destination in compressed form or uncompressed form when resolving redo data gaps.
* Redo transport compression is a feature of the Oracle Advanced Compression option. You must purchase a license for this option before using the redo transport compression feature.
The following example shows the COMPRESSION attribute with the LOG_ARCHIVE_DEST_n parameter.
LOG_ARCHIVE_DEST_3='SERVICE=denver SYNC COMPRESSION=ENABLE'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

DELAY
Specifies a time lag between when redo data is archived on a standby site and when the archived redo log file is applied to the standby database.

Valid values: >= 0 minutes
Default value: 30 minutes
Requires attributes: SERVICE

  •  The DELAY attribute indicates the archived redo log files at the standby destination are not available for recovery until the specified time interval has expired. The time interval is expressed in minutes, and it starts when the redo data is successfully transmitted to, and archived at, the standby site.
  • The DELAY attribute may be used to protect a standby database from corrupted or erroneous primary data. However, there is a tradeoff because during failover it takes more time to apply all of the redo up to the point of corruption.
  • If you have real-time apply enabled, any delay that you set will be ignored.
  • You can override the specified delay interval at the standby site, as follows:

For a physical standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;

For a logical standby database:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;

The following example shows how to specify the DELAY attribute for this configuration:
LOG_ARCHIVE_DEST_1='LOCATION=/oracle/dbs/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=stbyB SYNC AFFIRM'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=stbyC DELAY=120'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

LOCATION and SERVICE
Each destination must specify either the LOCATION or the SERVICE attribute to identify either a local disk directory or a remote database destination where redo transport services can transmit redo data.

  • Either the LOCATION or the SERVICE attribute must be specified. There is no default.
  • You can specify up to nine additional local or remote destinations.
  • For the LOCATION attribute, you can specify one of the following:

LOCATION=local_disk_directory

This specifies a unique directory path name for a disk directory on the system that hosts the database. This is the local destination for archived redo log files.
LOCATION=USE_DB_RECOVERY_FILE_DEST

This specifies the recovery area (the directory set by the DB_RECOVERY_FILE_DEST parameter) as the local destination for archived redo log files.
  • When you specify a SERVICE attribute:

You identify remote destinations by specifying the SERVICE attribute with a valid Oracle Net service name (SERVICE=net_service_name) that identifies the remote Oracle database instance to which the redo data will be sent.

Example 1   Specifying the LOCATION Attribute

LOG_ARCHIVE_DEST_2='LOCATION=/disk1/oracle/oradata/payroll/arch/'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

Example 2   Specifying the SERVICE Attribute

LOG_ARCHIVE_DEST_3='SERVICE=stby1'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MANDATORY
Specifies that filled online log files must be successfully archived to the destination before they can be reused.

  • The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter (where n is an integer from 1 to 10) specifies the number of destinations that must archive successfully before online redo log files can be overwritten.
  • You must have at least one local destination, which you can declare MANDATORY or leave as optional.
  • At least one local destination is operationally treated as mandatory, because the minimum value for the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter is 1.

Examples
The following example shows the MANDATORY attribute:

LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=denver MANDATORY'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MAX_CONNECTIONS
Enables multiple network connections to be used when sending an archived redo log file to a redo transport destination. Using multiple network connections can improve redo transport performance over high-latency network links.

Valid values: 1 to 5
Default value: 1

  • The MAX_CONNECTIONS attribute is optional. If it is specified, it is only used when redo transport services use ARCn processes for archival.
  • If MAX_CONNECTIONS is set to 1 (the default), redo transport services use a single ARCn process to transmit redo data to the remote destination.
  • If MAX_CONNECTIONS is set to a value greater than 1, redo transport services use multiple ARCn processes to transmit redo in parallel to archived redo log files at the remote destination. Each archiver (ARCn) process uses a separate network connection.
  • With multiple ARCn processes, redo transmission occurs in parallel, thus increasing the speed at which redo is transmitted to the remote destination.
  • Any standby database using archiver (ARCn) processes will not use standby redo logs if the MAX_CONNECTIONS attribute is specified. Thus, such destinations:

- Cannot use real-time apply
- Cannot be configured as a redo forwarding destination

Example:

LOG_ARCHIVE_DEST_3='SERVICE=denver MAX_CONNECTIONS=3'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MAX_FAILURE
Controls the consecutive number of times redo transport services attempt to reestablish communication and transmit redo data to a failed destination before the primary database gives up on the destination.

  •  The MAX_FAILURE attribute is optional. By default, there are an unlimited number of archival attempts to the failed destination.
  • This attribute is useful for providing failure resolution for destinations to which you want to retry transmitting redo data after a failure, but not retry indefinitely.
  • When you specify the MAX_FAILURE attribute, you must also set the REOPEN attribute. Once the specified number of consecutive attempts is exceeded, the destination is treated as if the REOPEN attribute was not specified.
  • Once the failure count is greater than or equal to the value set for the MAX_FAILURE attribute, the REOPEN attribute value is implicitly set to zero, which causes redo transport services to transport redo data to an alternate destination (defined with the ALTERNATE attribute) on the next archival operation.

The following example allows redo transport services up to three consecutive archival attempts, tried every 5 seconds, to the arc_dest destination. If the archival operation fails after the third attempt, the destination is treated as if the REOPEN attribute was not specified.

LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=5 MAX_FAILURE=3'
LOG_ARCHIVE_DEST_STATE_1=ENABLE

REOPEN
Specifies the minimum number of seconds before redo transport services should try to reopen a failed destination.

  • Default Value 300 seconds
  • The REOPEN attribute is optional.
  • Redo transport services attempt to reopen failed destinations at log switch time.
  • Redo transport services check if the time of the last error plus the REOPEN interval is less than the current time. If it is, redo transport services attempt to reopen the destination.
  • REOPEN applies to all errors, not just connection failures. These errors include, but are not limited to, network failures, disk errors, and quota exceptions.

Example:

LOG_ARCHIVE_DEST_3='SERVICE=stby1 MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

VALID_FOR
Specifies whether redo data will be written to a destination, based on the following factors:
- Whether the database is currently running in the primary or the standby role
- Whether online redo log files, standby redo log files, or both are currently being archived on the database at this destination

  •  Default Value VALID_FOR=(ALL_LOGFILES, ALL_ROLES)
  • The VALID_FOR attribute is optional. However, Oracle recommends that the VALID_FOR attribute be specified for each redo transport destination at each database in a Data Guard configuration so that redo transport continues after a role transition to any standby database in the configuration.
  • To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role):

The redo_log_type keyword identifies the destination as valid for archiving one of the following:

ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
STANDBY_LOGFILE—This destination is valid only when archiving standby redo log files.
ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.
The database_role keyword identifies the role in which this destination is valid for archiving:

PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
STANDBY_ROLE—This destination is valid only when the database is running in the standby role.
ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.

LOG_ARCHIVE_DEST_1='LOCATION=/disk1/oracle/oradata VALID_FOR=(ALL_LOGFILES, ALL_ROLES)'
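
A hedged sketch of a typical role-based pair (the service name stby1 is illustrative): the local destination archives in every role, while the remote destination ships redo only while the database runs in the primary role:

LOG_ARCHIVE_DEST_1='LOCATION=/disk1/oracle/oradata VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
LOG_ARCHIVE_DEST_2='SERVICE=stby1 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'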

NOREGISTER
Indicates that the location of the archived redo log file should not be recorded at the corresponding destination.

  • The NOREGISTER attribute is optional if the standby database destination is a part of a Data Guard configuration.
  • The NOREGISTER attribute is required if the destination is not part of a Data Guard configuration.

LOG_ARCHIVE_DEST_5='NOREGISTER'



Standby: Quick Switchover with Physical Standby Database.

Posted by FatDBA on January 27, 2013

Oracle 10g Data Guard – Quick Switchover with Physical Standby Database

If you have a standby database that is set up with Data Guard and working properly, you may want to test switchover, or perform a switchover to reduce primary database downtime during OS or hardware upgrades. A switchover allows the primary database to switch roles with its standby database. There is no data loss during a switchover. You can switch back to the original primary database later by performing another switchover.

In case of primary database failure, you will need to perform failover to transition the standby database to the primary role. After a failover, the original primary database can no longer participate in the Data Guard configuration. So if the original Primary database is still accessible, you should always consider a switchover first.

This document only covers switchover involving a physical standby database. In this example, the original primary database is called PRIM and the original standby database is called STAN.

I. Before Switchover:

1. As I always recommend, test the Switchover first on your testing systems before working on Production.

2. Verify the primary database instance is open and the standby database instance is mounted.

3. Verify there are no active users connected to the databases.

4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following commands on Primary database and Standby database to find out:

SQL> select sequence#, applied from v$archived_log;
Perform a SWITCH LOGFILE if necessary.
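
It is also worth confirming switchover readiness on the primary (a standard pre-switchover check, not part of the original steps; expect TO STANDBY or SESSIONS ACTIVE):

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;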

In order to apply redo data to the standby database as soon as it is received, use real-time apply, as sketched below.
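
A minimal sketch of starting real-time apply on a physical standby (it requires standby redo logs to be configured):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;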

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role.
Open another prompt and connect to SQL*Plus:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g Release 1, you will have to shut down and restart the new primary database STAN.
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

- If you are using Oracle Database 10g Release 2, you can open the new primary database STAN:
SQL>ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM.
SQL>ALTER SYSTEM SWITCH LOGFILE;

 

FAST-START Failover:
Another way to perform role transitions automatically is to use the Data Guard Broker (DGMGRL). Fast-Start Failover is a widely accepted solution for performing automatic failover when the production database hits faults/errors; a DGMGRL sketch follows the list below.

It requires:
1. The protection mode should be Maximum Availability.
2. A Fast-Start Failover observer.
3. A production database and a standby database.
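
A minimal DGMGRL sketch, assuming a broker configuration is already in place (the connect string is illustrative; the observer is normally started from a third host):

DGMGRL> CONNECT sys@PRIM
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;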


 

Read More: Oracle white paper on Fast-Start Failover best practices (faststartfailoverbestp-131997)


Recovery Error: ORA-01207: file is more recent than control file - old control file

Posted by FatDBA on January 21, 2013

Solution steps for when you start receiving ORA-01207.

Reason: The control file change sequence number in the data file is greater than the number in the control file. This implies that the wrong control file is being used. Note that repeatedly causing this error can make it stop happening without correcting the real problem. Every attempt to open the database will advance the control file change sequence number until it is great enough.

Scenario: In my case, one of our SIT boxes lost all multiplexed copies of the control file, and a couple of minutes later the database crashed. While trying to fix the problem we discovered that we did not have backup copies of the control file available, but we did have a mount-point backup that held all backup information. Due to issues with our monitoring tool (SMARTS), the backup script had failed to keep the control file backup current, so we did not know that the available control file backup was too old and had a very old change sequence number registered.

So, after restoring the control file from that outdated backup, we started receiving this error message during database OPEN:

SQL> alter database open;

ORA-00283: recovery session canceled due to errors
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/app/oracle/oradata/rac10g/system01.dbf'
ORA-01207: file is more recent than control file - old control file

Solution: recreate the control file.

1) dump controlfile to trace
SQL> alter database backup controlfile to trace as '/tmp/ccc.sql';

2) startup database nomount
SQL> startup nomount

3) create the control file
SQL> @/tmp/ccc.sql
Control file created.

SQL> recover database using backup controlfile ;
ORA-00279: change 1904510 generated at 12/11/2012 15:46:23 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 1904510 for thread 1 is in sequence #62

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+DATA/orcl/onlinelog/group_1.261.801074115
ORA-00310: archived log contains sequence 61; sequence 62 required
ORA-00334: archived log: '+DATA/orcl/onlinelog/group_1.261.801074115'

Now it is asking for log sequence 62, which is not available under the archive destination:
ASMCMD> ls
thread_1_seq_52.314.801739289
thread_1_seq_53.294.801743225
thread_1_seq_54.295.801743259
thread_1_seq_55.285.801743403
thread_1_seq_56.283.801757235
thread_1_seq_57.282.801758303
thread_1_seq_58.281.801759635
thread_1_seq_59.286.801761231
thread_1_seq_60.305.801762327
thread_1_seq_61.296.801762385

We only have seq:61 available.

Hint: it could still be inside the redo log groups, not yet archived before the crash or control file loss.
Hence we'll check both of the available redo groups to find sequence 62 (a query to locate it is sketched after the group names below):
+DATA/orcl/onlinelog/group_1.261.801074115
+DATA/orcl/onlinelog/group_2.262.801074117
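
A quick way to see which online log group holds the needed sequence (run with the database mounted):

SQL> SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$LOG;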

SQL> startup  mount
ORACLE instance started.

Total System Global Area 1636814848 bytes
Fixed Size                  2213856 bytes
Variable Size            1140852768 bytes
Database Buffers          486539264 bytes
Redo Buffers                7208960 bytes
Database mounted.
SQL> recover database using backup controlfile ;
ORA-00279: change 1904510 generated at 12/11/2012 15:46:23 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 1904510 for thread 1 is in sequence #62

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+DATA/orcl/onlinelog/group_1.261.801074115
ORA-00310: archived log contains sequence 61; sequence 62 required
ORA-00334: archived log: '+DATA/orcl/onlinelog/group_1.261.801074115'

SQL> recover database using backup controlfile ;
ORA-00279: change 1904510 generated at 12/11/2012 15:46:23 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 1904510 for thread 1 is in sequence #62

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+DATA/orcl/onlinelog/group_2.262.801074117
Log applied.
Media recovery complete.

This shows the required sequence was still available in redo group 2 (the current or an active group), and it was applied successfully.

Finally, we open the database in RESETLOGS mode.

SQL> alter database open resetlogs ;
Database altered.


TKProf Utility.

Posted by FatDBA on January 17, 2013

TKPROF:
--------------------------
TKProf is an Oracle-provided trace-reading utility that converts raw SQL trace files into a more human-readable format. It requires you to enable SQL_TRACE; setting TIMED_STATISTICS additionally makes Oracle record timing information in the trace files.

Example:
SQL> alter session set timed_statistics = true;
Session altered.

SQL> alter session set sql_trace = true;
Session altered.
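
To make the resulting trace file easier to locate, you can also tag its name (the identifier string below is illustrative):

SQL> alter session set tracefile_identifier = 'fatdba_tkprof';
Session altered.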

I’ve created a query to find my Oracle Server process id:
SELECT p.spid oracle_dedicated_process, s.process clientpid FROM v$process p, v$session s WHERE p.addr = s.paddr AND s.sid = (select sid from v$mystat where rownum = 1);

ORACLE_DEDIC CLIENTPID
------------ ------------
11653        11636

SQL> select * from scott.emp where job='CLERK';

EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH      CLERK           7902 17-DEC-80        800                    20
7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
7900 JAMES      CLERK           7698 03-DEC-81        950                    30
7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

[oracle@localhost udump]$ ls -ltr
total 128

-rw-r----- 1 oracle oinstall   635 Jan 17 23:05 orcl_ora_11331.trc
-rw-r----- 1 oracle oinstall   692 Jan 17 23:05 orcl_ora_11358.trc
-rw-r----- 1 oracle oinstall 21454 Jan 17 23:08 orcl_ora_11359.trc
-rw-r----- 1 oracle oinstall  2862 Jan 17 23:31 orcl_ora_11653.trc

[oracle@localhost udump]$ tkprof orcl_ora_11653.trc Prashant_tkprof_study.txt

TKPROF: Release 10.2.0.1.0 - Production on Thu Jan 17 23:32:41 2013
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

[oracle@localhost udump]$ ls -ltr
total 145

-rw-r--r-- 1 oracle oinstall  5925 Jan 17 23:32 Prashant_tkprof_study.txt

 
[oracle@localhost udump]$ cat Prashant_tkprof_study.txt

TKPROF: Release 10.2.0.1.0 - Production on Thu Jan 17 23:32:41 2013

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Trace file: orcl_ora_11653.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace = true

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
********************************************************************************

SELECT p.spid oracle_dedicated_process, s.process clientpid
FROM
v$process p, v$session s WHERE p.addr = s.paddr AND s.sid = (select sid from
v$mystat where rownum = 1)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  NESTED LOOPS  (cr=0 pr=0 pw=0 time=2133 us)
      1   HASH JOIN  (cr=0 pr=0 pw=0 time=2511 us)
      1    FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=643 us)
      1     COUNT STOPKEY (cr=0 pr=0 pw=0 time=20 us)
      1      FIXED TABLE FULL X$KSUMYSTA (cr=0 pr=0 pw=0 time=14 us)
      1       FIXED TABLE FULL X$KSUSGIF (cr=0 pr=0 pw=0 time=4 us)
     20    FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=43 us)
      1   FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=6 us)

********************************************************************************

select *
from
 scott.emp where job='CLERK'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          8          0           4
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          8          0           4

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      4  TABLE ACCESS FULL EMP (cr=8 pr=0 pw=0 time=388 us)

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.01       0.01          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0          8          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        9      0.01       0.02          0          8          0           5

Misses in library cache during parse: 2
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        0      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0

3  user  SQL statements in session.
0  internal SQL statements in session.
3  SQL statements in session.
********************************************************************************
Trace file: orcl_ora_11653.trc
Trace file compatibility: 10.01.00
Sort options: default

1  session in tracefile.
3  user  SQL statements in trace file.
0  internal SQL statements in trace file.
3  SQL statements in trace file.
3  unique SQL statements in trace file.
48  lines in trace file.
35  elapsed seconds in trace file.
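
A usage tip: TKProf can also sort the report so the most expensive statements come first, and can exclude recursive SYS statements. A hedged sketch (the output file name is illustrative):

[oracle@localhost udump]$ tkprof orcl_ora_11653.trc sorted_report.txt sort=prsela,exeela,fchela sys=no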


KEEP Clause * Error: ORA-19811: cannot have files in DB_RECOVERY_FILE_DEST with keep attributes*

Posted by FatDBA on January 13, 2013

KEEP option with RMAN:
If you have a backup retention policy configured and want to override it, the 'KEEP FOREVER | KEEP UNTIL TIME' clause of Recovery Manager will help you achieve that. There are certain conditions that need to be fulfilled before using the FOREVER clause.

Conditions:
1. You must be connected to a recovery catalog (required for the KEEP FOREVER clause; not needed for KEEP UNTIL TIME).
2. Oracle will not allow you to save the backup pieces/sets inside the FRA. You have to direct the backup to another location.

Let’s take an example:

[oracle@localhost ~]$ rman target / catalog rman/oracle90@orcl;
RMAN> backup current controlfile keep forever;

Starting backup at 13-JAN-13
current log archived

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 01/13/2013 18:35:23
ORA-19811: cannot have files in DB_RECOVERY_FILE_DEST with keep attributes

Solution:
RMAN> backup current controlfile keep forever format '/u01/%U';

Starting backup at 13-JAN-13
current log archived

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/20nvb5gk_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/21nvb5gm_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

current log archived
using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=66 RECID=39 STAMP=804623897
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/22nvb5gq_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/23nvb5gr_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JAN-13
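
For completeness, a hedged sketch of the KEEP UNTIL TIME variant, which does not require a recovery catalog connection (the 30-day retention and the format location are illustrative):

RMAN> backup database keep until time 'sysdate+30' format '/u01/%U';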


ORA-03113: end-of-file on communication channel **During ASM Instance startup**

Posted by FatDBA on January 8, 2013

Faced one more end-of-file communication error, this time in one of my test ASM environments. Below are the steps I tried to fix the problem.
Error:
[oracle@localhost bin]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jan 8 20:00:25 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> ORA-03113: end-of-file on communication channel
1st Step:

I checked the CRS (Cluster Ready Services; Oracle Restart on a standalone server) resources of the instance. Using crs_stat you can get a list of all resources registered with CRS, with their respective state and target details.
(Always look at STATE and TARGET, and investigate resources that are in the OFFLINE state.)

Example:
[oracle@localhost bin]$  crs_stat -t

Name           Type           Target    State     Host
------------------------------------------------------------
ora.cssd       ora.cssd.type  OFFLINE   OFFLINE
ora.diskmon    ora....on.type OFFLINE   OFFLINE
ora.asm        ora....on      ONLINE    OFFLINE

* This clearly shows that the ora.asm resource is broken and is OFFLINE right now.

2nd Step:
Forcibly deconfigure the CRS stack as the root user.
(You'll find the Perl script in the Grid home's crs/install directory.)

Example:
[root@localhost install]# ./rootcrs.pl -deconfig -force
2013-01-08 19:53:59: Parsing the host name
2013-01-08 19:53:59: Checking for super user privileges
2013-01-08 19:53:59: User has super user privileges
Using configuration parameter file: ./crsconfig_params
Usage: srvctl <command> <object> [<options>]
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons|eons
For detailed help on each command and object and its options use:
srvctl <command> -h or
srvctl <command> <object> -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
ACFS-9200: Supported
CRS-2613: Could not find resource 'ora.registry.acfs'.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4013: This command is not supported in a single-node configuration.
CRS-4000: Command Stop failed, or completed with errors.
You must kill crs processes or reboot the system to properly
cleanup the processes started by Oracle clusterware
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Third Step:
Re-run the root.sh script using the root account. This will configure OCR (Oracle Cluster Registry) keys for the oracle user and restart ohasd (Oracle High Availability Services daemon).

* Oracle Clusterware is run by Cluster Ready Services (CRS), which includes the Oracle Cluster Registry (OCR); the OCR records and maintains cluster and node membership information, and records disk membership information when ASM is configured on non-RAC systems.

Example:
[root@localhost dbhome_1]# ./root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-01-08 19:55:58: Checking for super user privileges
2013-01-08 19:55:58: User has super user privileges
2013-01-08 19:55:58: Parsing the host name
Using configuration parameter file: /u01/app/oracle/product/11.2.0/dbhome_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node localhost successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

acfsroot: ACFS-9302: No installation files found at /u01/app/oracle/product/11.2.0/dbhome_1/install/usm/EL5/i386/2.6.18-8/2.6.18-8.el5uek-i686/bin.

localhost     2013/01/08 19:56:20     /u01/app/oracle/product/11.2.0/dbhome_1/cdata/localhost/backup_20130108_195620.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
You have new mail in /var/spool/mail/root

Step 4th:
Start the cssd and diskmon services.

* The Oracle Cluster Synchronization Services daemon (ocssd) provides basic 'group services' support. It is also required in a single-instance configuration when Automatic Storage Management (ASM) is in use.
* diskmon (disk monitor daemon) is responsible for performing monitoring activities. The diskmon daemon is always started when ocssd starts.

[oracle@localhost ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/bin
[oracle@localhost bin]$ ./crsctl start resource ora.cssd
CRS-2672: Attempting to start 'ora.cssd' on 'localhost'
CRS-2672: Attempting to start 'ora.diskmon' on 'localhost'
CRS-2676: Start of 'ora.diskmon' on 'localhost' succeeded
CRS-2676: Start of 'ora.cssd' on 'localhost' succeeded
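
As a quick sanity check at this point, you can re-run the same status command used in the 1st step; the resources should now report ONLINE:

[oracle@localhost bin]$ crs_stat -t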

Step 5th:
Let’s try to Mount the ASM Diskgroup.
ASMCMD> mount DATA

(If not mounted, you'll probably receive error code ORA-15110: 'no diskgroups mounted', which will be fixed once you mount the disk group.)

Step 6th:
Finally start your ASM instance.

[oracle@localhost bin]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jan 8 20:00:25 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Automatic Storage Management option

SQL>

Done…


ORA-03113: end-of-file on communication channel **Error due to insufficient Disk Space in ASM DGroup**

Posted by FatDBA on January 7, 2013

Steps to fix an ASM-related issue/error when an instance configured on ASM disk groups fails to start.

Error Message:
ORA-03113: end-of-file on communication channel

When (Live/Real-Time Situation):
[oracle@localhost trace]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Jan 7 20:43:52 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1337720 bytes
Variable Size             239077000 bytes
Database Buffers          289406976 bytes
Redo Buffers                5840896 bytes
Database mounted.

ORA-03113: end-of-file on communication channel
Process ID: 10910
Session ID: 21 Serial number: 3

How to track the error message:
As always, check the alert log for that instance.

Snippets from the alert log:
Errors in file /u01/app/oracle/diag/rdbms/etr/etr/trace/etr_ora_6969.trc:
ORA-19816: WARNING: Files may exist in db_recovery_file_dest that are not known to database.
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15041: diskgroup "DATA" space exhausted
*************************************************************
WARNING: A file of type ARCHIVED LOG may exist in
db_recovery_file_dest that is not known to the database.
Use the RMAN command CATALOG RECOVERY AREA to re-catalog
any such files. If files cannot be cataloged, then manually
delete them using OS command. This is most likely the
result of a crash during file creation.
*************************************************************
Errors in file /u01/app/oracle/diag/rdbms/etr/etr/trace/etr_ora_6969.trc:
ORA-19504: failed to create file "+DATA"
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15041: diskgroup "DATA" space exhausted
ARCH: Error 19504 Creating archive log file to '+DATA'

Results: the alert log clearly points to space exhaustion with the error message 'ORA-15041: diskgroup "DATA" space exhausted'.
Let's understand this error message using the OERR utility:

[oracle@localhost ~]$ oerr ORA 15041
15041, 00000, "diskgroup \"%s\" space exhausted"
// *Cause:  The diskgroup ran out of space.
// *Action: Add more disks to the diskgroup, or delete some existing files.

Let’s Verify space of ASM Disk Group:
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME       STATE    TYPE     TOTAL_MB    FREE_MB
---------- -------- ------ ---------- ----------
DATA       MOUNTED  NORMAL       4000         17

Only 17 MB of free space is left in the disk group.
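
The same information can be cross-checked from the ASM command line; lsdg prints, among other columns, the total and free megabytes per disk group:

ASMCMD> lsdg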

Resolution:
Create a new raw disk and alter the disk group to add the newly created raw device, providing some extra space.

SQL> alter diskgroup DATA add disk '/dev/raw/raw6';
Diskgroup altered.

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME       STATE    TYPE     TOTAL_MB    FREE_MB
---------- -------- ------ ---------- ----------
DATA       MOUNTED  NORMAL       5000       1015

Let’s try to start the same instance again:

SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1337720 bytes
Variable Size             239077000 bytes
Database Buffers          289406976 bytes
Redo Buffers                5840896 bytes
Database mounted.

Database opened.

Fixed.


Row Chaining & Row Migration

Posted by FatDBA on January 5, 2013

Row Chaining:
Occurs when a row is too large to fit into a single data block. In this case Oracle stores the data for the row in a chain of two or more data blocks. Chaining occurs when a row is inserted or updated, and typically happens with rows that contain big data types, e.g. LOBs.
Example: suppose you have a DB_BLOCK_SIZE of 8K (the standard) and you want to insert a row of more than 8K into a block. Oracle then uses another data block to fit the remaining portion of the row; this is known as row chaining, and it happens as soon as the row is INSERTED.

(Figure: row chaining)

Row Migration:
A migrated row, on the other hand, is a row that has been updated to be larger than it initially was; if as a result it no longer fits into its original block, the row itself is moved to a new block, but a header (a kind of stub pointer) remains in the original location. This is needed so that any indexes on the table can still find the row using the original ROWIDs stored in them.
Migration always happens as a result of an UPDATE.

(Figure: row migration)

To identify row chaining and row migration (a query against USER_TABLES is sketched below):
1. USER_TABLES
2. Using the view V$SYSSTAT
3. Using ANALYZE
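
A quick sketch of the first approach, using the test table created later in this post (CHAIN_CNT is only populated after an ANALYZE ... COMPUTE STATISTICS):

SQL> analyze table chain_row_test compute statistics;
SQL> select table_name, num_rows, chain_cnt from user_tables where table_name = 'CHAIN_ROW_TEST';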

How to avoid and eliminate chained/migrated rows? (a combined sketch follows this list)
====================================================
1-ALTER TABLE … MOVE command.
You can MOVE the table by using ALTER TABLE … MOVE statement that enables you to relocate data of a non partitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE … MOVE statement with the COMPRESS keyword to store the new segment using table compression.

Note: Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid, and new statistics should be collected after moving the table.

2-Increase PCTFREE.
The PCTFREE parameter sets the minimum percentage of a data block to be reserved as free space for possible updates to rows that already exist in that block. If PCTFREE has been set to a low value, there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for updates. You need to increase PCTFREE to avoid migrated rows: if you leave more free space available in the block for updates, the row will have more room to grow.

3- Export/Import can be used as an approach to eliminate migrated rows.

4- Avoid creating tables with more than 255 columns.
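
A minimal sketch combining approaches 1 and 2 (the index name is hypothetical; remember that MOVE marks the table's indexes UNUSABLE, so they must be rebuilt):

SQL> alter table chain_row_test move pctfree 30;
Table altered.

SQL> alter index chain_row_test_idx rebuild;
Index altered.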

 

------- T.E.S.T --------

SQL> @$ORACLE_HOME/rdbms/admin/utlchain.sql
Table created.

SQL> desc chained_rows;

Name                              Null?    Type
--------------------------------- -------- ----------------------------
OWNER_NAME                                 VARCHAR2(30)
TABLE_NAME                                 VARCHAR2(30)
CLUSTER_NAME                               VARCHAR2(30)
PARTITION_NAME                             VARCHAR2(30)
SUBPARTITION_NAME                          VARCHAR2(30)
HEAD_ROWID                                 ROWID
ANALYZE_TIMESTAMP                          DATE

SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

no rows selected

SQL> create table chain_row_test (a varchar(4000),b varchar(4000), c varchar(4000));
Table created.

SQL> insert into chain_row_test (a,b,c) values ( 1, rpad('*',40000000,'*'), rpad('*',2300000,'*') );
1 row created.

SQL> commit ;
Commit complete.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
         0

SQL> analyze table chain_row_test list chained rows into chained_rows ;
Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

  COUNT(*)
----------
         1

SQL> select dbms_rowid.rowid_block_number(rowid) from chain_row_test;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
                              508711


Recovering Parameter File

Posted by FatDBA on January 2, 2013

Today while working I discovered that both the pfile and spfile of one of my test databases were missing, which we realized after a bounce, when the instance failed to start and threw an error about the parameter file missing from its previous location.

ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/10.2.0/dbhome_2/dbs/initorcl.ora'

I tried starting the database from the RMAN console to recover the spfile from the latest autobackups, but Recovery Manager cannot restore the spfile until the database is at least in NOMOUNT mode, which normally requires a parameter file. To get around this, Oracle allows RMAN to start the instance in NOMOUNT mode with the help of a DUMMY parameter file, which it creates in the default location ($ORACLE_HOME/dbs).

The purpose of this DUMMY file is to aid restoration of the spfile from the available backups. The file contains the following information (10.2.0.1 database):

db_name=DUMMY
remote_login_passwordfile=exclusive
compatible=10.2.0.1.0
sga_target=150M
_dummy_instance=TRUE

connected to target database (not started)
RMAN> startup nomount;
OR
RMAN> startup force nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area     159019008 bytes

Fixed Size                     1335192 bytes
Variable Size                 75497576 bytes
Database Buffers              79691776 bytes
Redo Buffers                   2494464 bytes

Now you can retrieve the missing server parameter file from your backups; a minimal sketch follows.
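
A minimal sketch, assuming control file autobackups exist in the default location (the DBID value is illustrative; SET DBID is needed because the dummy instance does not yet know which database it belongs to):

RMAN> set dbid 1234567890;
RMAN> restore spfile from autobackup;
RMAN> startup force;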

Other ways to recover the parameter file when no backups are available:
1. The alert log:
The database writes its non-default initialization parameters to the alert log at every startup.

2. Recent statistics reports such as AWR (Automatic Workload Repository):
An AWR report contains a snapshot of the initialization parameters at the end of the report.

 


Incrementally Updated Backup

Posted by FatDBA on January 1, 2013

Incrementally Updated Backups

Using this feature all changes between the SCN of the original image copy and the SCN of the incremental backup are applied to the image copy, winding it forward to make the equivalent of a new database image copy without the overhead of such a backup. The following example shows how this can be used.

RUN {
RECOVER COPY OF DATABASE WITH TAG 'incr_backup' UNTIL TIME 'SYSDATE-7';
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
}

The RECOVER COPY… line will not do anything until the script has been running for more than 7 days. The BACKUP INCREMENTAL line will perform a complete backup (level 0) the first day it is run, with all subsequent backups being level 1 incremental backups. After 7 days, the RECOVER COPY… line will start to take effect, merging all incremental backups older than 7 days into the level 0 backup, effectively moving the level 0 backup forward. The effect of this is that you will permanently have a 7 day recovery window with a 7 day old level 0 backup and 6 level 1 incremental backups. Notice that the tag must be used to identify which incremental backups apply to which image copies.

If you wanted to keep your image copy as up to date as possible you might do the following.

RUN {
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'incr_backup';
}

In this example the incremental backup is merged into the image copy as soon as it is completed.
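
The payoff of maintaining a rolled-forward image copy is restore speed: instead of copying files back from backup sets, you can point the database directly at the image copies. A hedged sketch, to be run with the database mounted (not part of the original example):

RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;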


 