Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Archive for February, 2013

Database Migration vs Upgrade ?

Posted by FatDBA on February 25, 2013

While talking with one of the new DBAs, I found him uncomfortable and perplexed when we started discussing database Migration and Upgrade activities. Well, I've seen that there is a lot of confusion in the mind of any DBA who is new to administration.

Alright, so today I would like to distinguish between these two activities, each of which can easily be considered one of the major activities of any DBA.

Okay.

The terms Migration and Upgrade both describe converting an existing database to a new software release. The major difference I've found is this:

In the case of an Upgrade, you shut down the database, start it up with the new software release, and then run the provided SQL and PL/SQL scripts to update Oracle's internal dictionary tables and other objects. Oracle also provides 'downgrade' scripts that can be run to revert the changes made to the database during the upgrade process.
In broader terms, an Upgrade is a new release of the same software for an existing system.

Example: a database version change from Oracle 7.3.3 to Oracle 7.3.4 is an upgrade, as are 8.1.6 -> 8.1.7 and 8.1.7.0 -> 8.1.7.4.
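As a rough illustration, here is a minimal sketch of the classic manual upgrade flow for a 10g/11g-era target (script names vary by release; utlu112i.sql is the pre-upgrade information tool when going to 11.2, and ? expands to the ORACLE_HOME):

-- Run the pre-upgrade information tool against the old database:
SQL> @?/rdbms/admin/utlu112i.sql

-- Then start the database from the new Oracle home and run the catalog upgrade:
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql   -- recompile invalid objects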

Migration:
In some cases, Migration describes the process of moving data from a non-Oracle database into an Oracle database. More generally, a migration moves a database:

* From one destination/location to a new one.
* From one RDBMS to another (heterogeneous), example: Sybase to Oracle.
* From one major release to another (homogeneous), example: Oracle 9i to Oracle 10g.
Example: going from Oracle 9i to Oracle 10g is a migration, as are 7 -> 8 and 8 -> 9.
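For a homogeneous migration, one common route is export/import. Here is a minimal Data Pump sketch for a 10g-or-later source (the directory object, schema and file names are made up for illustration; a 9i source would need the older exp/imp utilities instead):

-- On the source database:
SQL> CREATE DIRECTORY dp_dir AS '/u01/export';
$ expdp system SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log

-- On the target database, after copying scott.dmp across:
$ impdp system SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_imp.log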


Posted in Advanced | Tagged: , | 1 Comment »

Snapshot Standby ?

Posted by FatDBA on February 22, 2013

Writing …… Will Soon Update.

Posted in Advanced | Tagged: , | Leave a Comment »

Oracle Data Guard Evolution.

Posted by FatDBA on February 21, 2013

Oracle Data Guard's improvements from version 8i (where it was first introduced) until now:

ORACLE 8i
————————-
* Read-Only Standby Database
* Managed recovery
* Remote archiving redo log files

ORACLE 9i
————————-
* “Zero Data Loss” Integration
* Data Guard Broker and Data Guard Manager GUI
* Switchover and Failover operations
* Automatic synchronous redo transfer
* Logical Standby Database
* Maximum Protection

ORACLE 10g
————————-
* Real-Time Apply
* Fast-Start Failover
* Asynchronous redo transfer
* Flashback Database

ORACLE 11g
————————
* Active Standby Database (Active Data Guard)
* Snapshot Standby
* Heterogeneous platform support (e.g., production on Linux, standby on Windows)

Posted in Advanced | Tagged: , | 1 Comment »

What are Standby Redo log Files in Data Guard Environment ?

Posted by FatDBA on February 21, 2013

What are Standby Redo Log Files and why they are important ?

Recently, one of my good friends, Sowmya, asked me about some internals of standby redo log files, and I soon found myself in a bit of a spot. So I decided to shed some light on this very basic thing that anyone starting with Oracle Data Guard should know.

Standby Redo Logs: They are very similar to online redo logs, with the difference that standby redo logs hold redo data received from the primary database. Standby redo logs are used only when the Log Writer (LGWR) is the redo transport mechanism to the standby site.

[Figure: redo data flowing from the primary database into the standby redo logs]
When do we use standby redo log files:
* When using the Real-Time Apply service.
* When running in Maximum Protection or Maximum Availability mode.

Points To Remember:
* You should create standby redo logs on both the standby and the primary database, which helps in case of switchovers (they are not mandatory on the primary database).
* Each standby redo log should be at least as big as the largest online redo log.
* You can add standby redo logs after the standby environment has been created.

* In case of a disaster and complete loss of the primary database, an entire redo log's worth of data will not be lost, because the standby redo log file at the standby site will contain some or all of the transactions from the lost online redo log file. This has been one of the most useful enhancements in Data Guard technology, as it minimizes data loss while operating in MAXIMUM PERFORMANCE mode.
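To follow the sizing rule above, you can check the current online redo logs on the primary. A minimal sketch (Oracle's usual guideline is one more standby log group per thread than the number of online log groups):

SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log;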

Example:
Creating standby redo log files on the primary database:
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo03.log') SIZE 50M;

SQL> select group#, member, type from v$logfile;

GROUP#  MEMBER                                             TYPE
------  -------------------------------------------------  -------
     4  /u01/app/oracle/oradata/DB11G/standby_redo01.log   STANDBY
     3  /u01/app/oracle/oradata/DB11G/standby_redo02.log   STANDBY
     5  /u01/app/oracle/oradata/DB11G/standby_redo03.log   STANDBY

Here I've also added online redo log files at the standby end (the standby needs its own online redo logs for the day it takes over the primary role after a switchover):

ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo01.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo02.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo03.log') SIZE 50M;
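You can verify the standby redo logs at either site through the V$STANDBY_LOG view; a minimal sketch:

SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb, status FROM v$standby_log;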

Pic Credits: Google/Oracle.com

Posted in Advanced | Tagged: , | 1 Comment »

Enterprise Cloud Infrastructure for Dummies (Oracle Special Edition – PDF Format) E-Book

Posted by FatDBA on February 19, 2013

Get your own copy of Enterprise Cloud Infrastructure for Dummies, Oracle Special Edition.
Source: Oracle.com

Link: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11847803&src=7618000&Act=8

1653970

Posted in Advanced, Basics | Tagged: | Leave a Comment »

DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT

Posted by FatDBA on February 8, 2013

Well, there are a whole lot of changes when you start comparing Oracle 10g and 11g. DB_ULTRA_SAFE is one of the new parameters introduced in 11g; it provides a mechanism to avoid block corruption in your database. This parameter actually controls three parameters collectively: DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT.
Click the links below to read more about these three initialization parameters.

DB_BLOCK_CHECKING

DB_BLOCK_CHECKSUM

DB_LOST_WRITE_PROTECT

Posted in Advanced | Tagged: | 2 Comments »

Image Copies VS BackupSets

Posted by FatDBA on February 7, 2013

Difference between Image Copies and BackupSets (RMAN):

Image Copies could be of two types:
1. RMAN Image Copies
2. O/S Image Copies

RMAN Image Copies
Use the RMAN COPY command to create an image copy. If the original file needs to be replaced, and the image copy is of a datafile, then you do not need to restore the copy. Instead, Oracle provides the SWITCH command to point the control file at the copy and update the recovery catalog to indicate that the copy has been switched. Issuing SWITCH in this case is equivalent to issuing the SQL statement ALTER DATABASE RENAME FILE. You can then perform media recovery to make the copy current. RMAN image copies have the added advantage that blocks are checked for corruption as RMAN reads them.
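A minimal sketch of the copy-and-switch flow described above (datafile number 4 and the destination path are made up for illustration):

RMAN> COPY DATAFILE 4 TO '/u02/backup/users01.dbf';

-- Later, if the original datafile is lost:
RMAN> SQL "ALTER DATABASE DATAFILE 4 OFFLINE";
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL "ALTER DATABASE DATAFILE 4 ONLINE";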

O/S Image Copies
Oracle supports image copies created by mechanisms other than RMAN, also known as O/S copies. For example, a copy of a datafile that you make with the UNIX cp command is an O/S copy. You must catalog such O/S copies with RMAN before using them with the restore or switch commands.
You can create an O/S copy when the database is open or closed. If the database is open and the datafile is not offline normal, then you must place the tablespace in hot backup mode, that is, issue the SQL statement ALTER TABLESPACE BEGIN BACKUP before creating the copy.
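A minimal sketch of taking and cataloging an O/S copy while the database is open (the tablespace name and paths are made up for illustration):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
$ cp /u01/app/oracle/oradata/DB11G/users01.dbf /u02/backup/users01.dbf
SQL> ALTER TABLESPACE users END BACKUP;

RMAN> CATALOG DATAFILECOPY '/u02/backup/users01.dbf';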

BackupSets:
A backup set consists of one or more files in an RMAN-specific format, known as backup pieces. By default, a backup set consists of one backup piece. When backing up datafiles to backup sets, RMAN is able to skip some datafile blocks that do not currently contain data, reducing the size of backup sets and the time required to create them. This behavior, known as unused block compression, means that backups of datafiles as backup sets are generally smaller than image copy backups and take less time to write.

Main Differences:
• RMAN supports binary compression of backup sets: the backup set contents are compressed before being written to disk, using a compression algorithm tuned for datafiles and archived log files. No such compression is done for image copies.

• A backup set never contains blocks that have never been used: as RMAN passes through the datafiles, it simply skips them. An image copy, by contrast, contains every block, used or not. This means that backups of datafiles as backup sets are generally smaller than image copy backups and take less time to write.

• Incremental level 1 backups can only be created as backup sets. An image copy cannot itself be incremental, although it can serve as the level 0 base that RMAN rolls forward with incrementals (incrementally updated backups).

• RMAN can write backup sets directly to tape if you have installed the media-management drivers for your tape library, but image copies can only be created on disk.
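As a rough illustration of the compression difference, both of these are standard RMAN commands; the first produces compressed backup pieces, the second plain datafile copies on disk:

RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
RMAN> BACKUP AS COPY DATABASE;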

Posted in Advanced | Tagged: | 4 Comments »

Shrinking Segments

Posted by FatDBA on February 7, 2013

Shrinking Segments:
A table shrink operation works in two phases. The first phase does the compaction: rows are moved toward the beginning of the segment as much as possible, and internally the rows are moved in packets to avoid locking issues. Once the rows have been moved, the second phase starts: the high-water mark (HWM) is adjusted and the unused space is released.

The COMPACT clause is useful if you have long-running queries that might span the shrink operation and attempt to read from blocks that have been reclaimed. When you specify the SHRINK SPACE COMPACT clause, the progress of the shrink operation is saved in the bitmap blocks of the corresponding segment, so the next time a shrink operation is executed on the same segment, the Oracle database server remembers what has already been done. You can then reissue the SHRINK SPACE clause without the COMPACT clause during off-peak hours to complete the second phase.
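A minimal sketch of the two-phase flow on a hypothetical EMP table (segment shrink requires the segment to be in an ASSM tablespace and row movement to be enabled):

SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> ALTER TABLE emp SHRINK SPACE COMPACT;   -- phase 1 only: compact the rows
SQL> ALTER TABLE emp SHRINK SPACE;           -- complete: adjust the HWM and release the space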


Posted in Advanced | Tagged: , | Leave a Comment »

DEFERRED_SEGMENT_CREATION and DB_ULTRA_SAFE Parameters (New in 11gR2)

Posted by FatDBA on February 7, 2013

Deferred segment creation (DEFERRED_SEGMENT_CREATION):

Oracle Database 11gR2 includes a new space allocation method. When you create a non-partitioned heap table, the table segment creation is deferred to the first row insert. This functionality is enabled by default with the DEFERRED_SEGMENT_CREATION initialization parameter set to TRUE.

Advantages of this new space allocation method:
1. A significant amount of disk space can be saved for applications that create hundreds or thousands of tables upon installation, many of which might never be populated.
2. The application installation time is reduced.

When you insert the first row into the table, the segments are created for the base table, its LOB columns, and its indexes. During segment creation, cursors on the table are invalidated. These operations have a small additional impact on performance.

The new DEFERRED_SEGMENT_CREATION parameter controls this feature. It can be enabled or disabled at the session or system level (no instance restart is required).
ALTER SESSION SET deferred_segment_creation=TRUE;
ALTER SYSTEM SET deferred_segment_creation=TRUE;

Deferred segment creation is enabled by default. It’s also possible to enable/disable it for a single table by specifying the deferred segment creation clause.
CREATE TABLE…. SEGMENT CREATION IMMEDIATE
CREATE TABLE… SEGMENT CREATION DEFERRED

Example:

SQL> SHOW PARAMETERS deferred_segment_creation
NAME                                 TYPE        VALUE
———————————— ———– ——
deferred_segment_creation            boolean     TRUE

SQL> CREATE TABLE seg_test(c number, d varchar2(500));
Table created.
SQL> SELECT segment_name FROM user_segments;
no rows selected

Inserting rows and creating segments:
SQL> INSERT INTO seg_test VALUES(1, 'aaaaaaa');
1 row created.

SQL> SELECT segment_name FROM user_segments;
SEGMENT_NAME
——————————————————-
SEG_TEST
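For contrast, a minimal sketch of forcing immediate allocation for a single table (the table name is made up; its segment shows up in USER_SEGMENTS straight away):

SQL> CREATE TABLE seg_test2(c NUMBER) SEGMENT CREATION IMMEDIATE;
Table created.

SQL> SELECT segment_name FROM user_segments WHERE segment_name = 'SEG_TEST2';
SEGMENT_NAME
-------------------------------------------------------
SEG_TEST2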

DB_ULTRA_SAFE is a new parameter brought in with 11g.
It provides a combined mechanism to protect against various possible data corruptions and offers critical high-availability advantages for Oracle Database. Setting the DB_ULTRA_SAFE initialization parameter configures the appropriate data-protection block-checking level in the database. It controls DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT.

DB_BLOCK_CHECKING: DB_BLOCK_CHECKING controls whether or not Oracle performs block checking for database blocks. Oracle checks a block by going through the data in the block, making sure it is logically self-consistent. Block checking can often prevent memory and data corruption. Block checking typically causes 1% to 10% overhead, depending on workload and the parameter value. The more updates or inserts in a workload, the more expensive it is to turn on block checking. You should set DB_BLOCK_CHECKING to FULL if the performance overhead is acceptable. For backward compatibility the use of FALSE (implying OFF) and TRUE (implying FULL) is preserved.
DB_BLOCK_CHECKSUM: detects redo and data block corruption; it catches corruption on the primary database and protects the standby database. This parameter requires minimal CPU resources.
DB_LOST_WRITE_PROTECT (also introduced with 11gR1): enables or disables a physical standby database's ability to detect lost-write corruption on both the primary and the physical standby database.

The DB_ULTRA_SAFE parameter can be set to three different values: OFF, DATA_ONLY and DATA_AND_INDEX. The default value is OFF.

Here are the descriptions of these values:
OFF: It will not change the values of DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT if they are explicitly set; otherwise, all of them take their default values.

DATA_ONLY: It will set DB_BLOCK_CHECKING to medium, DB_LOST_WRITE_PROTECT to typical and DB_BLOCK_CHECKSUM to full.

DATA_AND_INDEX: It will set DB_BLOCK_CHECKING to full, DB_LOST_WRITE_PROTECT to typical, and DB_BLOCK_CHECKSUM to full.

The only difference between DATA_AND_INDEX and DATA_ONLY is DB_BLOCK_CHECKING. When DB_BLOCK_CHECKING is set to FULL, Oracle also performs semantic checks for index blocks.
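A minimal sketch of enabling it and checking the three derived parameters (DB_ULTRA_SAFE is a static parameter, so it has to be set in the spfile and takes effect after a restart):

SQL> ALTER SYSTEM SET db_ultra_safe=DATA_ONLY SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> SHOW PARAMETER db_block_checking
SQL> SHOW PARAMETER db_block_checksum
SQL> SHOW PARAMETER db_lost_write_protect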

Note: Please check the blog post from Francisco Munoz Alvarez (Oracle ACE Director) on the DB_ULTRA_SAFE parameter: http://oraclenz.wordpress.com/2009/12/16/db_ultra_safe-a-new-gem-for-high-availability/

Posted in Advanced | Tagged: , | 1 Comment »

 