Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

KEEP Clause * Error: ORA-19811: cannot have files in DB_RECOVERY_FILE_DEST with keep attributes*

Posted by FatDBA on January 13, 2013

KEEP option with RMAN:
If you have a retention policy configured and want to exempt a particular backup from it, Recovery Manager offers the 'KEEP FOREVER | KEEP UNTIL TIME' clause. A few conditions must be met before the KEEP FOREVER clause can be used.

Conditions:
1. You must be connected to a recovery catalog (required for KEEP FOREVER; not needed for KEEP UNTIL TIME).
2. Oracle will not allow the long-term backup pieces/sets to be stored inside the FRA (Fast Recovery Area), so you have to direct the backup to a different location.

Let’s take an example:

[oracle@localhost ~]$ rman target / catalog rman/oracle90@orcl;
RMAN> backup current controlfile keep forever;

Starting backup at 13-JAN-13
current log archived

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 01/13/2013 18:35:23
ORA-19811: cannot have files in DB_RECOVERY_FILE_DEST with keep attributes

Solution:
RMAN> backup current controlfile keep forever format '/u01/%U';

Starting backup at 13-JAN-13
current log archived

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/20nvb5gk_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/21nvb5gm_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

current log archived
using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=66 RECID=39 STAMP=804623897
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/22nvb5gq_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 13-JAN-13
channel ORA_DISK_1: finished piece 1 at 13-JAN-13
piece handle=/u01/23nvb5gr_1_1 tag=TAG20130113T183811 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JAN-13
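
If you only need to retain a backup beyond the retention policy for a fixed period, KEEP UNTIL TIME avoids the recovery catalog requirement. A minimal sketch (the '/u01/%U' path simply mirrors the example above; adjust it for your system, and the destination must still be outside the FRA):

# Hedged sketch: retain a control file backup for ~90 days without a catalog.
BACKUP CURRENT CONTROLFILE
  KEEP UNTIL TIME 'SYSDATE+90'
  FORMAT '/u01/%U';

# Verify the keep attributes afterwards.
LIST BACKUP OF CONTROLFILE;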

Posted in Advanced | Tagged: , , | Leave a Comment »

ORA-03113: end-of-file on communication channel **During ASM Instance startup**

Posted by FatDBA on January 8, 2013

Faced one more end-of-file communication error, this time on one of my test ASM environments. Below are the steps I tried in order to fix the problem.
Error:
[oracle@localhost bin]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jan 8 20:00:25 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> ORA-03113: end-of-file on communication channel
1st Step:

I checked the status of the Oracle Restart / Cluster Ready Services (CRS) resources for the instance. Using crs_stat you get a list of all resources registered with CRS, along with their target and current state.
(Always look at the STATE and TARGET columns and work on the resources that are OFFLINE.)

Example:
[oracle@localhost bin]$  crs_stat -t

Name           Type           Target    State     Host
————————————————————
ora.cssd       ora.cssd.type  OFFLINE   OFFLINE
ora.diskmon    ora….on.type OFFLINE   OFFLINE
ora.asm        ora….on      ONLINE    OFFLINE

* This clearly shows that the ora.asm resource is broken and is OFFLINE right now.

2nd Step:
Forcibly de-configure the CRS stack as the root user.
(You'll find the Perl script in the Grid home, under crs/install.)

Example:
[root@localhost install]# ./rootcrs.pl -deconfig -force
2013-01-08 19:53:59: Parsing the host name
2013-01-08 19:53:59: Checking for super user privileges
2013-01-08 19:53:59: User has super user privileges
Using configuration parameter file: ./crsconfig_params
Usage: srvctl <command> <object> [<options>]
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons|eons
For detailed help on each command and object and its options use:
srvctl <command> -h or
srvctl <command> <object> -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
ACFS-9200: Supported
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4013: This command is not supported in a single-node configuration.
CRS-4000: Command Stop failed, or completed with errors.
You must kill crs processes or reboot the system to properly
cleanup the processes started by Oracle clusterware
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Third Step:
Re-run the root.sh script as root. This re-creates the OCR (Oracle Cluster Registry) keys for the oracle user and restarts ohasd (the Oracle High Availability Services daemon).

* Oracle Clusterware is run by Cluster Ready Services (CRS), which includes the Oracle Cluster Registry (OCR); the OCR records and maintains cluster and node membership information. On non-RAC systems with ASM it records the disk and resource configuration used by Oracle Restart.

Example:
[root@localhost dbhome_1]# ./root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file “dbhome” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file “oraenv” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file “coraenv” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-01-08 19:55:58: Checking for super user privileges
2013-01-08 19:55:58: User has super user privileges
2013-01-08 19:55:58: Parsing the host name
Using configuration parameter file: /u01/app/oracle/product/11.2.0/dbhome_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘oracle’, privgrp ‘oinstall’..
Operation successful.
CRS-4664: Node localhost successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

acfsroot: ACFS-9302: No installation files found at /u01/app/oracle/product/11.2.0/dbhome_1/install/usm/EL5/i386/2.6.18-8/2.6.18-8.el5uek-i686/bin.

localhost     2013/01/08 19:56:20     /u01/app/oracle/product/11.2.0/dbhome_1/cdata/localhost/backup_20130108_195620.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
‘UpdateNodeList’ was successful.
You have new mail in /var/spool/mail/root

Step 4th:
Start cssd and diskmon services.

* The Oracle Cluster Synchronization Services daemon (ocssd) provides basic group-services support. It is also required in a single-instance configuration if Automatic Storage Management (ASM) is in use.
* diskmon (disk monitor daemon) is responsible for disk monitoring activities; the diskmon daemon is always started when ocssd starts.

[oracle@localhost ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/bin
[oracle@localhost bin]$ ./crsctl start resource ora.cssd
CRS-2672: Attempting to start ‘ora.cssd’ on ‘localhost’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘localhost’
CRS-2676: Start of ‘ora.diskmon’ on ‘localhost’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘localhost’ succeeded

Step 5th:
Let's try to mount the ASM diskgroup.
ASMCMD> mount DATA

(If the diskgroup is not mounted, clients typically receive ORA-15110: 'no diskgroups mounted'; that goes away once the diskgroup is mounted.)
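
Alternatively, the diskgroup state can be confirmed with SQL from the ASM instance; a quick hedged check (assuming the diskgroup is named DATA, as above):

-- Run against the ASM instance (connect / as sysasm).
-- STATE should report MOUNTED once "mount DATA" has succeeded.
SELECT name, state, total_mb, free_mb
FROM   v$asm_diskgroup
WHERE  name = 'DATA';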

Step 6th:
Finally start your ASM instance.

[oracle@localhost bin]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jan 8 20:00:25 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Automatic Storage Management option

SQL>

Done…

Posted in Advanced | Tagged: , | Leave a Comment »

ORA-03113: end-of-file on communication channel **Error due to insufficient Disk Space in ASM DGroup**

Posted by FatDBA on January 7, 2013

Steps to fix an ASM-related error where an instance whose files live in ASM disk groups fails to start.

Error Message:
ORA-03113: end-of-file on communication channel

When (Live/Real-Time Situation):
[oracle@localhost trace]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Jan 7 20:43:52 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1337720 bytes
Variable Size             239077000 bytes
Database Buffers          289406976 bytes
Redo Buffers                5840896 bytes
Database mounted.

ORA-03113: end-of-file on communication channel
Process ID: 10910
Session ID: 21 Serial number: 3

How to track the error message:
As always, the alert log of the instance.

Snippets from the alert log:
Errors in file /u01/app/oracle/diag/rdbms/etr/etr/trace/etr_ora_6969.trc:
ORA-19816: WARNING: Files may exist in db_recovery_file_dest that are not known to database.
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15041: diskgroup “DATA” space exhausted
*************************************************************
WARNING: A file of type ARCHIVED LOG may exist in
db_recovery_file_dest that is not known to the database.
Use the RMAN command CATALOG RECOVERY AREA to re-catalog
any such files. If files cannot be cataloged, then manually
delete them using OS command. This is most likely the
result of a crash during file creation.
*************************************************************
Errors in file /u01/app/oracle/diag/rdbms/etr/etr/trace/etr_ora_6969.trc:
ORA-19504: failed to create file “+DATA”
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15041: diskgroup “DATA” space exhausted
ARCH: Error 19504 Creating archive log file to ‘+DATA’

Results: The alert log clearly points to space exhaustion, with the message 'ORA-15041: diskgroup "DATA" space exhausted'.
Let's understand this error message using the OERR utility:

[oracle@localhost ~]$ oerr ORA 15041
15041, 00000, “diskgroup \”%s\” space exhausted”
// *Cause:  The diskgroup ran out of space.
// *Action: Add more disks to the diskgroup, or delete some existing files.

Let’s Verify space of ASM Disk Group:
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME         STATE        TYPE      TOTAL_MB    FREE_MB
------------ ------------ ------ ----------- ----------
DATA         MOUNTED      NORMAL        4000         17

Only 17 MB of free space is left in the disk group.

Resolution:
Create a new raw disk and alter the diskgroup to add the newly created device, giving the group some extra space.

SQL> alter diskgroup DATA add disk '/dev/raw/raw6';
Diskgroup altered.
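
Adding a disk kicks off an automatic rebalance. A hedged way to watch its progress (run on the ASM instance; not specific to this diskgroup):

-- No rows returned means the rebalance has completed.
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;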

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME         STATE        TYPE      TOTAL_MB    FREE_MB
------------ ------------ ------ ----------- ----------
DATA         MOUNTED      NORMAL        5000       1015

Let’s try to start the same instance again:

SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1337720 bytes
Variable Size             239077000 bytes
Database Buffers          289406976 bytes
Redo Buffers                5840896 bytes
Database mounted.

Database opened.

Fixed.

Posted in Advanced | Tagged: , | Leave a Comment »

Row Chaining & Row Migration

Posted by FatDBA on January 5, 2013

Row Chaining:
Occurs when a row is too large to fit into a single (even empty) data block. Oracle then stores the data for the row in a chain of two or more data blocks. Chaining can occur when a row is inserted or updated and typically involves rows with very large columns (LOBs, long character columns, etc.).
Example: with the standard DB_BLOCK_SIZE of 8K, if you insert a row larger than 8K, Oracle uses additional blocks to hold the remaining portion of the row. This is row chaining, and it typically happens when the row is INSERTED.

[Figure: row chaining]

Row Migration:
A migrated row, on the other hand, is a row that has grown larger through updates; if it no longer fits into its original block, the entire row is moved to a new block, while a stub (the row header acting as a forwarding pointer) remains in the original location. This is needed so that any indexes on the table can still find the row through the original ROWID stored in them.
Migration always happens as a result of an UPDATE.

[Figure: row migration]

To identify row chaining and row migration:
1. USER_TABLES (the CHAIN_CNT column, populated once the table has been analyzed)
2. The V$SYSSTAT statistic 'table fetch continued row'
3. ANALYZE TABLE ... LIST CHAINED ROWS
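
A hedged sketch of the first two checks (CHAIN_CNT is only filled in after statistics have been gathered with ANALYZE; the queries are generic, not tied to any particular table):

-- 1. Chained/migrated row count per table.
SELECT table_name, chain_cnt, num_rows
FROM   user_tables
WHERE  chain_cnt > 0;

-- 2. Instance-wide indicator: how often a fetch had to follow a chained/migrated row piece.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';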

How to avoid and eliminate Chained/Migrated rows ?
====================================================
1- ALTER TABLE ... MOVE command.
You can move the table using the ALTER TABLE ... MOVE statement, which relocates the data of a non-partitioned table (or of a partition of a partitioned table) into a new segment, optionally into a different tablespace for which you have quota. The statement also lets you modify any of the storage attributes of the table or partition, including those that cannot be modified with a plain ALTER TABLE, and it can be combined with the COMPRESS keyword to store the new segment using table compression. (See the sketch after this list.)

Note: Moving a table changes the rowids of its rows. This marks the indexes on the table UNUSABLE, and DML accessing the table through those indexes will receive an ORA-01502 error until the indexes are rebuilt (or dropped). Likewise, any statistics for the table become invalid and should be gathered again after the move.

2- Increase PCTFREE.
The PCTFREE parameter sets the minimum percentage of a data block reserved as free space for future updates to rows that already exist in that block. If PCTFREE is set too low, there is not enough room in the block for rows to grow when they are updated, and they migrate. To avoid migration, tables that are updated should have PCTFREE set high enough that updated rows still fit in their block; leaving more free space in the block gives each row more room to grow. (A hedged example is included in the sketch after this list.)

3- Export/import of the table is another way to eliminate migrated rows.

4- Avoid creating tables with more than 255 columns; rows in such tables are stored in multiple row pieces and are intrinsically chained.
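
A combined hedged sketch of options 1 and 2 (the index and tablespace names are illustrative; the table name reuses the test table created below):

-- Option 1: rebuild the segment, which removes migrated rows.
ALTER TABLE chain_row_test MOVE TABLESPACE users;

-- Indexes are left UNUSABLE by the move and must be rebuilt (index name is hypothetical).
ALTER INDEX chain_row_test_idx REBUILD;

-- Option 2: leave more room in each block for future updates (applies to new blocks only).
ALTER TABLE chain_row_test PCTFREE 30;

-- Re-gather statistics afterwards.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'CHAIN_ROW_TEST');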

 

——- T.E.S.T ——–

SQL> @?/rdbms/admin/utlchain.sql
Table Created

SQL> Desc CHAINED_ROWS;

Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
OWNER_NAME                                         VARCHAR2(30)
TABLE_NAME                                         VARCHAR2(30)
CLUSTER_NAME                                       VARCHAR2(30)
PARTITION_NAME                                     VARCHAR2(30)
SUBPARTITION_NAME                                  VARCHAR2(30)
HEAD_ROWID                                         ROWID
ANALYZE_TIMESTAMP                                  DATE

SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

no rows selected

SQL> create table chain_row_test (a varchar(4000),b varchar(4000), c varchar(4000));
Table created.

SQL> insert into chain_row_test (a,b,c) values ( 1, rpad('*',40000000,'*'), rpad('*',2300000,'*') );
1 row created.

SQL> commit ;
Commit complete.

SQL> SELECT count(*) FROM chained_rows;

COUNT(*)
———-
0

SQL> analyze table chain_row_test list chained rows into chained_rows ;
Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

COUNT(*)
———-
1

SQL> select dbms_rowid.rowid_block_number(rowid) from chain_row_test;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
————————————
508711

Posted in Advanced | Tagged: | Leave a Comment »

Recovering Parameter File

Posted by FatDBA on January 2, 2013

Today I discovered that both the pfile and the spfile of one of my test databases were missing. We realized it after a bounce, when the instance failed to start and complained that it could not find the parameter file in its default location.

ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file ‘/u01/app/oracle/product/10.2.0/dbhome_2/dbs/initorcl.ora’

I tried starting the database from the RMAN console in order to recover the spfile from the latest autobackup, which is only possible once the instance is at least in NOMOUNT mode. That would normally require a parameter file, so Oracle lets RMAN start the instance in NOMOUNT mode with the help of a DUMMY parameter file that it creates in the default location ($ORACLE_HOME/dbs).

The only purpose of this DUMMY file is to aid restoration of the spfile from an available backup. The file contains the entries below (10.2.0.1 database):

db_name=DUMMY
remote_login_passwordfile=exclusive
compatible=10.2.0.1.0
sga_target=150M
_dummy_instance=TRUE

connected to target database (not started)
RMAN> startup nomount;
OR
RMAN> startup force nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file ‘/u01/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora’

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area     159019008 bytes

Fixed Size                     1335192 bytes
Variable Size                 75497576 bytes
Database Buffers              79691776 bytes
Redo Buffers                   2494464 bytes

Now you can restore the missing server parameter file from your backups.
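
A hedged sketch of the restore itself from the RMAN prompt (the DBID is a placeholder; use your own, taken for example from the autobackup piece name or an old RMAN log, and adjust the location if your autobackups do not go to the default destination):

# With the dummy instance started in NOMOUNT (as above).
# The DBID below is a placeholder -- substitute your database's DBID.
SET DBID 1234567890;
RESTORE SPFILE FROM AUTOBACKUP;
# Restart the instance using the restored spfile.
STARTUP FORCE;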

Another way to recover the parameter file, when no backups are available at all, is through:
1. The alert log:
The database writes all non-default initialization parameters to the alert log on every startup, so the values can be copied from there into a new pfile.

2. A recent performance report such as AWR (Automatic Workload Repository), if one is available:
These reports include an initialization-parameter section (usually near the end) that can be used the same way.
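
Once the parameters have been copied out of the alert log (or an AWR report) into a plain-text pfile, the spfile can be recreated from it. A minimal sketch, assuming the hand-built file is /tmp/initorcl.ora (a hypothetical path):

SQL> startup pfile='/tmp/initorcl.ora';
SQL> create spfile from pfile='/tmp/initorcl.ora';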

 

Posted in Advanced | Tagged: , | Leave a Comment »

Incrementally Updated Backup

Posted by FatDBA on January 1, 2013

Incrementally Updated Backups

Using this feature all changes between the SCN of the original image copy and the SCN of the incremental backup are applied to the image copy, winding it forward to make the equivalent of a new database image copy without the overhead of such a backup. The following example shows how this can be used.

RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_backup' UNTIL TIME 'SYSDATE - 7';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
}

The RECOVER COPY… line will not do anything until the script has been running for more than 7 days. The BACKUP INCREMENTAL line will perform a complete backup (level 0) the first day it is run, with all subsequent backups being level 1 incremental backups. After 7 days, the RECOVER COPY… line will start to take effect, merging all incremental backups older than 7 days into the level 0 backup, effectively moving the level 0 backup forward. The effect of this is that you will permanently have a 7 day recovery window with a 7 day old level 0 backup and 6 level 1 incremental backups. Notice that the tag must be used to identify which incremental backups apply to which image copies.

If you wanted to keep your image copy as up to date as possible you might do the following.

RUN {
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
  RECOVER COPY OF DATABASE WITH TAG 'incr_backup';
}

In this example the incremental backup is merged into the image copy as soon as it is completed.
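
In practice the daily run usually also backs up the archived logs needed to roll the copy forward and cleans out what the retention policy no longer needs. A hedged extension of the first script (a sketch only; check it against your own retention policy before using it):

RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_backup' UNTIL TIME 'SYSDATE - 7';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
  # Back up archived logs that have not been backed up yet.
  BACKUP ARCHIVELOG ALL NOT BACKED UP;
}
# Remove backups and archived logs the retention policy no longer requires.
DELETE NOPROMPT OBSOLETE;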

Posted in Advanced | Tagged: | Leave a Comment »

AMM (Automatic Memory Management) & ASMM (Automatic Shared MM)

Posted by FatDBA on December 31, 2012

Evolution of Memory Management Features:

Memory management has evolved with each database release:

Oracle Database 10g
Automatic Shared Memory Management (ASMM) was introduced in 10g. You enable the automatic shared memory management feature by setting the SGA_TARGET parameter to a non-zero value.

Oracle Database 11g
Automatic Memory Management (AMM) was introduced in 11g. It enables automatic tuning of both the SGA and the PGA through two new parameters, MEMORY_MAX_TARGET and MEMORY_TARGET.

Oracle Database 11g supports various memory management methods, which are chosen by initialization parameter
settings. Oracle recommends that you enable the automatic memory management method.
1. Automatic Memory Management – For Both the SGA and Instance PGA
2. Automatic Shared Memory Management – For the SGA
3. Manual Shared Memory Management – For the SGA
4. Automatic PGA Memory Management – For the Instance PGA
5. Manual PGA Memory Management – For the Instance PGA


Automatic Memory Management – For Both the SGA and Instance PGA

Beginning with Oracle Database 11g, Oracle Database can manage the SGA memory and instance PGA memory completely automatically. You designate only the total memory size to be used by the instance, and Oracle Database dynamically exchanges memory between the SGA and the instance PGA as needed to meet processing demands.
This capability is referred to as automatic memory management. With this method the database also dynamically tunes the sizes of the individual SGA components and of the individual PGAs. To use it (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET) and, optionally, a maximum memory size initialization parameter (MEMORY_MAX_TARGET).

Switching to Automatic Memory Management

1)Check the current values configured for SGA_TARGET and PGA_AGGREGATE_TARGET.

SQL>SHOW PARAMETER TARGET

NAME                              TYPE        VALUE
--------------------------------- ----------- ----------
archive_lag_target                integer     0
db_flashback_retention_target     integer     1440
fast_start_io_target              integer     0
fast_start_mttr_target            integer     0
memory_max_target                 big integer 0
memory_target                     big integer 0
pga_aggregate_target              big integer 200M
sga_target                        big integer 500M

Add the values of PGA_AGGREGATE_TARGET and SGA_TARGET; in our case they sum to 700M.

2) Decide on the maximum amount of memory you want to allocate to the database; this determines the upper limit for the sum of the SGA and instance PGA sizes. In our case we decided on 808M.

3) Change the parameters in the initialization parameter file.

Using Spfile
==============
SQL>ALTER SYSTEM SET MEMORY_MAX_TARGET = 808M SCOPE = SPFILE;
SQL>ALTER SYSTEM SET MEMORY_TARGET = 808M SCOPE = SPFILE;
SQL>ALTER SYSTEM SET SGA_TARGET =0 SCOPE = SPFILE;
SQL>ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE = SPFILE;

Using Pfile
==============
If you have started the instance with Pfile, then edit the pfile and set the parameters manually
MEMORY_MAX_TARGET = 808M
MEMORY_TARGET = 808M
SGA_TARGET =0
PGA_AGGREGATE_TARGET = 0

In case you do not specify any value for MEMORY_MAX_TARGET and only use MEMORY_TARGET then database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of
MEMORY_MAX_TARGET.
MEMORY_MAX_TARGET is a static parameter, i.e. it cannot be changed dynamically; the instance has to be bounced to modify it, so make sure it is set to an appropriate value.

4) Shut down and start up the database.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area  845348864 bytes
Fixed Size                  1303188 bytes
Variable Size             469765484 bytes
Database Buffers          369098752 bytes
Redo Buffers                5181440 bytes

SQL> show parameter target

NAME                              TYPE        VALUE
--------------------------------- ----------- ----------
archive_lag_target                integer     0
db_flashback_retention_target     integer     1440
fast_start_io_target              integer     0
fast_start_mttr_target            integer     0
memory_max_target                 big integer 808M
memory_target                     big integer 808M
pga_aggregate_target              big integer 0
sga_target                        big integer 0
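
With AMM active you can also watch how the single memory target is being split between the SGA and PGA components; a hedged check using the standard 11g views:

-- Current AMM distribution per component.
SELECT component, current_size/1024/1024 AS current_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0;

-- Advice on whether a larger or smaller MEMORY_TARGET would help.
SELECT memory_size, memory_size_factor, estd_db_time_factor
FROM   v$memory_target_advice
ORDER  BY memory_size_factor;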

Automatic Shared Memory Management – For the SGA
If you want to exercise more direct control over the size of the SGA, you can disable automatic memory management and enable automatic shared memory management instead. This feature was introduced in 10g with the SGA_TARGET parameter. When automatic SGA memory management is enabled, the sizes of the individual SGA components are flexible and adapt to the needs of the current workload without requiring any additional configuration.

If you have already enabled Automatic Memory Management and want to switch to Automatic Shared Memory Management, follow the procedure below:

SQL>Alter system set MEMORY_TARGET=0 scope=both;
SQL>Alter system set SGA_TARGET=500M scope=both;
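
After switching, the individual SGA components can be verified; a quick hedged check (the 500M figure simply mirrors the command above):

-- SGA_TARGET now governs the SGA; MEMORY_TARGET is no longer in effect.
SHOW PARAMETER sga_target

-- Per-component auto-tuned sizes under ASMM.
SELECT component, current_size/1024/1024 AS current_mb
FROM   v$sga_dynamic_components;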

Posted in Uncategorized | Tagged: , | 4 Comments »

ORA-00845: MEMORY_TARGET not supported on this system

Posted by FatDBA on December 30, 2012

Today, while working on one of my practice machines, I started receiving an error when trying to start an instance that lives on a cooked (non-ASM) file system. The error concerns MEMORY_TARGET; to be precise, below is exactly what I was getting a second ago.

SQL> startup
ORA-00845: MEMORY_TARGET not supported on this system

Below are the steps, in sequence, that I performed to diagnose and fix the error.

1. I checked the pfile of the instance for the MEMORY_TARGET entry and its value, to see whether there was any problem with the value assigned (AMM is enabled on this machine).
*.memory_target=715128832

2. Then I looked at the error more closely with the OERR utility to find the cause of the problem and the suggested action.

[oracle@localhost dbs]$ oerr ORA 845
00845, 00000, “MEMORY_TARGET not supported on this system”
// *Cause: The MEMORY_TARGET parameter was not supported on this operating system or /dev/shm was not sized correctly on Linux.
// *Action: Refer to documentation for a list of supported operating systems. Or, size /dev/shm to be at least the SGA_MAX_SIZE on each Oracle instance running on the system.

The error clearly points towards the size of /dev/shm at the OS level and asks for it to be at least as large as the memory footprint of the running instance (SGA_MAX_SIZE), which here is 684M.

3. Now that we have the cause in hand, let's fix the problem.

First, check the current settings.

[oracle@localhost dbs]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       30G   23G  6.0G  80% /
/dev/sda1              99M   55M   40M  58% /boot
tmpfs                 664M  154M  510M  24% /dev/shm    <-- tmpfs (the in-memory file system backing /dev/shm) is only 664M: cause confirmed

And it's 664M, which is clearly below the SGA_MAX_SIZE of 684M.
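
For reference, the value /dev/shm has to accommodate can also be read from the instance itself; a hedged check (run it once the instance is up, or read the same parameters straight from the pfile):

-- Whichever is in use (MEMORY_TARGET under AMM, otherwise SGA_MAX_SIZE)
-- is what /dev/shm must be able to hold.
SHOW PARAMETER memory_target
SHOW PARAMETER sga_max_size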

Some more checks:
[root@localhost ~]# more /etc/fstab
/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0

4. Let's add some space to the tmpfs mount.
I added an explicit size of 1 GB to the /etc/fstab entry; here is the change:

[root@localhost ~]# vi /etc/fstab

[root@localhost ~]# more /etc/fstab
/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults,size=1G        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0

Alright, to make the change effective, remount /dev/shm:

[root@localhost ~]# mount -o remount /dev/shm
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
30G   23G  6.0G  80% /
/dev/sda1              99M   55M   40M  58% /boot
tmpfs                 1.0G  154M  4.9G   4% /dev/shm

D.O.N.E

5. Now we are done with the change, let’s try to start the instance once again.

[oracle@localhost ~]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2 is /u01/app/oracle

SQL> startup
ORACLE instance started.

Total System Global Area  715616256 bytes
Fixed Size                  1338924 bytes
Variable Size             482345428 bytes
Database Buffers          226492416 bytes
Redo Buffers                5439488 bytes
Database mounted.
Database opened.

It’s UP!!

Posted in Advanced | Tagged: | Leave a Comment »

Authentication: Password File and OS Based in Oracle.

Posted by FatDBA on December 28, 2012

Difference between "/ as sysdba" and "sys/password as sysdba"

-- sqlplus "/ as sysdba" means you are using OS authentication; your OS user must be a member of the dba (ORA_DBA on Windows) group, otherwise the connection will fail.
-- sqlplus sys/password@sid as sysdba means you are using password-file authentication. In this case the password file must be configured properly and REMOTE_LOGIN_PASSWORDFILE must be set to EXCLUSIVE or SHARED.

Connecting with @sid (sqlplus sys/password@sid as sysdba) also requires the listener to be up; it can be done from the server or from a client, depending on the entry in your tnsnames.ora, otherwise you'll receive ORA-12541: TNS:no listener the moment you try to log in.
sqlplus / as sysdba does not require the listener to be up, but it has to be run on the server itself.
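
A hedged sketch of setting up and verifying password-file authentication (the file name follows the ORACLE_SID, assumed here to be orcl; the password is a placeholder):

-- 1. Check that password-file authentication is allowed.
SHOW PARAMETER remote_login_passwordfile

-- 2. The password file itself is created at the OS level with the orapwd utility, e.g.
--    orapwd file=$ORACLE_HOME/dbs/orapworcl password=MyPlaceholderPwd entries=5
--    (the file is named orapw<SID>).

-- 3. Verify which users hold SYSDBA/SYSOPER through the password file.
SELECT * FROM v$pwfile_users;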

Posted in Basics | Tagged: | Leave a Comment »

ASM: File Extensions

Posted by FatDBA on December 27, 2012

If you have ASM configured in your environment, it is always hard to remember the ASM file names unless you have aliased the entries. But while working on one of my test machines with ASM configured, I noticed that the file names and extensions are actually fully qualified and self-describing rather than opaque technical strings.

These are some of the datafiles that are part of my ASM setup and live in the diskgroup.

SQL> select name from v$datafile;

NAME
——————————————————————————–
+TESTDB_DATA1/orcl/datafile/tesla.261.795016577
+TESTDB_DATA1/orcl/datafile/example12.256.794584325
+TESTDB_DATA1/orcl/datafile/dixy.257.794593881
+TESTDB_DATA1/orcl/datafile/users1111.258.794825249
+TESTDB_DATA1/orcl/datafile/text.259.794825753
+TESTDB_DATA1/orcl/datafile/test12345.260.794840557

e.g.:
Let's dissect one of the files:
+TESTDB_DATA1/orcl/datafile/dixy.257.794593881
Looking at it carefully:
+TESTDB_DATA1: the diskgroup name (+ indicates the root of the ASM directory tree)
orcl: the name of the database the file belongs to
datafile: the file type
dixy: the file type tag, which for a datafile is the tablespace name
257: the ASM file number
794593881: the file incarnation number, which distinguishes different creations of the same file number

To confirm, you can cross-check the file number and incarnation against V$ASM_ALIAS and V$ASM_FILE on the ASM instance.
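
A hedged cross-check (run on the ASM instance; the filter reuses the example file name above):

-- Map the system-generated name back to its file number and incarnation.
SELECT g.name AS diskgroup, a.name AS file_name,
       a.file_number, a.file_incarnation
FROM   v$asm_alias a
JOIN   v$asm_diskgroup g ON g.group_number = a.group_number
WHERE  UPPER(a.name) LIKE 'DIXY.%';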

Posted in Advanced | Tagged: | Leave a Comment »