Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Archive for December, 2012

Standby Scenario: Recovering After a Network Failure

Posted by FatDBA on December 9, 2012

http://docs.oracle.com/cd/A84870_01/doc/server.816/a76995/standbys.htm#30520


RMAN-06056: RMAN Engine failed to access any datafile (How to Skip)

Posted by FatDBA on December 9, 2012

Today, while trying to take a full database backup, I encountered an error saying RMAN failed to access one of the datafiles. I found that the datafile was not of any use, so I took it offline, but RMAN still would not let me back up the database. Here is one way to fix this issue when you are sure the datafile is not needed.

RMAN> backup database;

Starting backup at 09-DEC-12
using channel ORA_DISK_1
could not read file header for datafile 9 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 12/09/2012 14:28:13
RMAN-06056: could not access datafile 9

RMAN> backup database skip inaccessible;

Starting backup at 09-DEC-12
using channel ORA_DISK_1
could not access datafile 9
skipping inaccessible file 9
RMAN-06060: WARNING: skipping datafile compromises tablespace RMAN recoverability
RMAN-06060: WARNING: skipping datafile compromises tablespace RMAN recoverability
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile fno=00006 name=/u01/app/oracle/aliflaila
input datafile fno=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile fno=00008 name=/u01/app/oracle/oradata/orcl/undotbs007.dbf
input datafile fno=00002 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile fno=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
input datafile fno=00007 name=/u01/app/oracle/oradata/orcl/users02.dbf
channel ORA_DISK_1: starting piece 1 at 09-DEC-12
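
For completeness, here is roughly how a datafile gets offlined in this situation, and how you can clean up afterwards so SKIP INACCESSIBLE is no longer needed. This is a sketch only: datafile 9 matches the error above, but the tablespace name useless_tbs is hypothetical.

SQL> -- in NOARCHIVELOG mode the datafile must be offlined with OFFLINE DROP
SQL> alter database datafile 9 offline drop;

SQL> -- once you are certain nothing in its tablespace is needed, drop it entirely
SQL> drop tablespace useless_tbs including contents and datafiles;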



Keep Buffer Pool & Recycle Buffer Pool

Posted by FatDBA on December 9, 2012


Let's discuss two of the lesser-known sections of the Database Buffer Cache: the Keep Buffer Pool and the Recycle Buffer Pool.

Keep Buffer Pool
Keep Buffer Pool
Frequently accessed data should be kept in the Keep buffer pool, which retains blocks in memory so that the next request for the same data can be served from memory. This avoids disk reads and improves performance. Usually small objects are the best candidates. The pool is created with the DB_KEEP_CACHE_SIZE initialization parameter; if DB_KEEP_CACHE_SIZE is not set, no Keep buffer pool is created. Often an application has a few very critical objects, such as indexes or small look-up tables, that are small enough to fit in the buffer cache but are quickly pushed out by other objects. This is the perfect case for DB_KEEP_CACHE_SIZE: the KEEP buffer pool is aptly named, as it is intended for objects that take absolute priority in the cache. Use the following syntax to create a Keep buffer pool of 40 MB:
DB_KEEP_CACHE_SIZE=40M
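
Creating the pool only reserves the memory; objects must be assigned to it explicitly through their storage clause. A minimal sketch (the table name emp_lookup is hypothetical):

SQL> alter table emp_lookup storage (buffer_pool keep);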

Recycle Buffer Pool
Blocks loaded into the Recycle buffer pool are removed from memory as soon as they are no longer being used. It is useful for objects that are accessed rarely: since there is no further need for such blocks, the memory they occupy is quickly made available for other data. The RECYCLE buffer pool is best used to "protect" the default buffer pool from being consumed by randomly accessed blocks of data.

For example, with Automatic Shared Memory Management (ASMM) enabled, the freed memory can be assigned to other SGA components. Use the following syntax to create a Recycle buffer pool of 20 MB:

DB_RECYCLE_CACHE_SIZE=20M
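
As with the Keep pool, objects are placed there through the storage clause. A sketch (the table name audit_trail is hypothetical):

SQL> alter table audit_trail storage (buffer_pool recycle);

SQL> -- verify which pool your tables are assigned to
SQL> select table_name, buffer_pool from user_tables where buffer_pool <> 'DEFAULT';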


NOCHECKSUM Parameter: RMAN Backups

Posted by FatDBA on December 9, 2012

NOCHECKSUM (an RMAN BACKUP option) controls how data block checksums are handled during backups. (Despite the similar-sounding name, NOFILENAMECHECK is unrelated to block corruption: it only disables datafile name comparison during DUPLICATE.)

By default a checksum is calculated for every block read from a datafile and stored in the backup or image copy. If you use the NOCHECKSUM option, then checksums are not calculated. If the block already contains a checksum, however, then the checksum is validated and stored in the backup. If the validation fails, then the block is marked corrupt in the backup.

By default, the BACKUP command computes a checksum for each block and stores it in the backup. The BACKUP command ignores the values of DB_BLOCK_CHECKSUM because this initialization parameter applies to data files in the database, not backups. If you specify the NOCHECKSUM option, then RMAN does not perform a checksum of the blocks when writing the backup.

You cannot disable checksums for data files in the SYSTEM tablespace even if DB_BLOCK_CHECKSUM=false.

“To speed up the process of copying, you can use the NOCHECKSUM parameter. By default, RMAN computes a checksum for each block backed up, and stores it with the backup. When the backup is restored, the checksum is verified.”
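
A minimal sketch of the option in use; skipping checksums trades a corruption safety net for a small speed gain, so use it sparingly:

RMAN> backup database nochecksum;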


Scenario: Recovering a dropped Table.

Posted by FatDBA on December 7, 2012

One not-uncommon error is the accidental dropping of a table from your database. In general, the fastest and simplest solution is to use the flashback drop feature to reverse the drop. Sometimes, however, you cannot use flashback drop, for example because it is disabled or the table was dropped with the PURGE option; in that case, fall back to the procedure below.
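
When flashback drop is available, recovery is a one-liner (a sketch; the table name emp_hist is hypothetical):

SQL> flashback table emp_hist to before drop;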

To recover a table that has been accidentally dropped, use the following procedure:

1. If possible, keep the database that experienced the user error online and available for use. Back up all datafiles of the existing database in case an error is made during the remaining steps of this procedure.

2. Restore a database backup to an alternate location, then perform incomplete recovery of this backup using a restored backup control file, to the point just before the table was dropped.

3. Export the lost data from the temporary, restored version of the database using an Oracle export utility. In this case, export the accidentally dropped table.

4. Use an Oracle import utility to import the data back into the production database (see the sketch after this list).

5. Delete the files of the temporary copy of the database to conserve space.
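
A sketch of steps 3 and 4 using the classic exp/imp utilities; the connect strings, schema, and file names are all hypothetical:

$ exp system/manager@temp_db tables=scott.emp file=emp.dmp            # step 3: from the restored copy
$ imp system/manager@prod_db tables=scott.emp file=emp.dmp ignore=y   # step 4: into production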


Redo Logs

Posted by FatDBA on December 7, 2012

Redo Log Files are used in a circular fashion.

  • One log file is written in sequential fashion until it is filled, and then the second redo log begins to fill.  This is known as a Log Switch.
  • When the last redo log is written, the database begins overwriting the first redo log again.


  • The Redo Log file to which LGWR is actively writing is called the current log file.
  • Log files required for instance recovery are categorized as active log files.
  • Log files no longer needed for instance recovery are categorized as inactive log files.

  • When archiving is enabled, active log files cannot be overwritten by LGWR until ARCn has archived them.
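
You can see these categories for yourself in V$LOG (a quick check):

SQL> select group#, sequence#, archived, status from v$log;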

 

Log Writer Failure

What if LGWR cannot write to a Redo Log File or Group?  Possible failures and the results are:

  1. At least one Redo Log File in a Group can be written – Unavailable Redo Log Group members are marked as Invalid, a LGWR trace file is generated, and an entry is written to the alert file – processing of the database proceeds normally while ignoring the invalid Redo Log Group members.
  2. LGWR cannot write to a Redo Log Group because it is pending archiving – Database operation halts until the Redo Log Group becomes available (could be through turning off archiving) or is archived.
  3. A Redo Log Group is unavailable due to media failure – Oracle generates an error message and the database instance shuts down.  During media recovery, if the database did not archive the bad Redo Log, use this command to clear it without archiving so the bad Redo Log can be dropped (substitute the affected group number for n):

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP n;

  4. A Redo Log Group fails while LGWR is writing to the members – Oracle generates an error message and the database instance shuts down.  Check whether the disk drive needs to be turned back on or media recovery is required.  If it is just the drive, turn it back on and Oracle will perform automatic instance recovery.

Sometimes a Redo Log File in a Group becomes corrupted while a database instance is in operation.

  • Database activity halts because archiving cannot continue.
  • Clear the Redo Log Files in a Group (here Group #2) with the statement:

ALTER DATABASE CLEAR LOGFILE GROUP 2;
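
Clearing an unarchived log leaves a gap in the archive stream, so earlier backups can no longer be used for complete recovery; take a fresh full backup right afterwards. To confirm the group is usable again:

SQL> select group#, status from v$log where group# = 2;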

 

Minimum size for an On-line Redo Log File is 4MB.
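
For reference, creating a new group with an explicit size looks like this (the group number and path are hypothetical):

SQL> alter database add logfile group 4 ('/u01/app/oracle/oradata/orcl/redo04.log') size 50m;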

 


Something Default: Control Files Backup

Posted by FatDBA on December 7, 2012

If you create a backup of your control file to trace and have not mentioned the AS clause, which lets you choose any directory of your choice for the backup of the CF, the trace file goes to a default location: in 10g this is always the User Dump Destination, and in 11g it is the ADR (Automatic Diagnostic Repository).

Snippets from my alert log soon after I fired the command:

alter database backup controlfile to trace
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12795.trc
Completed: alter database backup controlfile to trace
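
To pick the location yourself, add the AS clause; for a binary copy of the control file, use the TO form instead (both paths here are just examples):

SQL> alter database backup controlfile to trace as '/tmp/cf_backup.sql';
SQL> alter database backup controlfile to '/tmp/control.bkp';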


Limit of ARCn and DBWRn processes.

Posted by FatDBA on December 2, 2012

The maximum number of Archiver (ARCn) processes allowed in an Oracle 11g DB is 1-30.  Let's check by trying to set the parameter log_archive_max_processes beyond that limit:

SQL> alter system set log_archive_max_processes=35;
alter system set log_archive_max_processes=35
*
ERROR at line 1:
ORA-00068: invalid value 35 for parameter log_archive_max_processes, must be
between 1 and 30

Maximum number of Database Writer (DBWRn) processes:

DB_WRITER_PROCESSES = 2 * number of CPUs
Range of values: 1 to 20 in Oracle 10g, but 1 to 36 in Oracle 11g.

SQL> alter system set db_writer_processes=20 scope=spfile sid='orcl';
System altered.
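
db_writer_processes is a static parameter, hence scope=spfile; the new value takes effect only after an instance restart. To verify the current settings:

SQL> show parameter log_archive_max_processes
SQL> show parameter db_writer_processes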
