Tales From A Lazy Fat DBA

It's all about databases & their performance, troubleshooting & much more … ¯\_(ツ)_/¯

Posts Tagged ‘Others’

VMWare: “Cannot open the disk … Reason: Failed to lock the file”.

Posted by FatDBA on August 25, 2013

Today, while starting one of my test machines/instances that is part of a VMware Team (the Team includes a total of 3 virtual machines), I got an error message which reads:

“Cannot open the disk: ‘H:\Machine ………….’ or one of the snapshot disks it depends on. Reason: Failed to lock the file”.

The error prevented the machine from opening and kept appearing again and again.

Cause:
From the error text, it appears to be related to the locks that machines acquire in a VMware Team environment in order to access shared resources.

Resolution:
When we start our virtual machines, VMware creates several lock folders containing .lck (lock) files, e.g. M30076.lck, Mx456.lck etc., which the machine uses during operation.
These locks are dynamic in nature and are recreated automatically after some time. So I removed/renamed all of the lock files (.lck) that existed under these folders and tried the machine again, and as I expected, it started working again.

* Note: Performing these steps on PROD systems could be dangerous and can cause serious performance/hang issues. Such steps should always be performed on the recommendation of VMware support.

Thanks
Prashant Dixit
“Sharing is Good”



Row Chaining & Row Migration

Posted by FatDBA on January 5, 2013

Row Chaining:
Occurs when a row is too large to fit into a single (even empty) data block. In this case Oracle stores the data for the row in a chain of two or more data blocks. Chaining typically happens with rows that contain big data types, e.g. LONG, LOB etc.
Example: suppose you have the standard DB_BLOCK_SIZE of 8K and you insert a row larger than 8K into a table; Oracle then uses another data block to fit the remaining portion of the row. This is known as row chaining, and it happens when the row is being INSERTED (or when an update grows it beyond the block size).

[Diagram: row chaining]

Row Migration:
A migrated row, on the other hand, is a row that has been updated to be larger than it originally was – if, as a result, it no longer fits into its original block, the entire row is moved to a new block, but a header (a kind of stub pointer) for the row remains in the original location. This is needed so that any indexes on the table can still find the row using the original ROWID stored in them.
Migration always happens as a result of an UPDATE.

[Diagram: row migration]

To Identify Row Chaining and Row Migration (see the queries sketched below):
1. USER_TABLES (the CHAIN_CNT column, populated by ANALYZE)
2. The view V$SYSSTAT (the ‘table fetch continued row’ statistic)
3. ANALYZE TABLE … LIST CHAINED ROWS
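A minimal sketch of the queries involved (YOUR_TABLE is a hypothetical placeholder; CHAINED_ROWS is the table created by utlchain.sql, as in the test further below):

-- 1. CHAIN_CNT in USER_TABLES is populated only by ANALYZE, not by DBMS_STATS
ANALYZE TABLE your_table COMPUTE STATISTICS;
SELECT table_name, chain_cnt, num_rows FROM user_tables WHERE table_name = 'YOUR_TABLE';

-- 2. Instance-wide indicator: this statistic increases every time a fetch has to
--    follow a chained/migrated row piece into another block
SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

-- 3. List the individual chained/migrated rows into the CHAINED_ROWS table
ANALYZE TABLE your_table LIST CHAINED ROWS INTO chained_rows;
SELECT head_rowid FROM chained_rows WHERE table_name = 'YOUR_TABLE';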

How to avoid and eliminate Chained/Migrated rows ?
====================================================
1-ALTER TABLE … MOVE command.
You can move the table by using the ALTER TABLE … MOVE statement, which enables you to relocate the data of a non-partitioned table, or of a partition of a partitioned table, into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE … MOVE statement with the COMPRESS keyword to store the new segment using table compression.

Note: Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table through these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and fresh statistics should be collected after moving the table.
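A minimal sketch of the whole sequence, assuming a hypothetical table MY_TAB with one index MY_TAB_IDX and quota on the USERS tablespace:

-- relocate the rows into a new segment (all rowids change)
ALTER TABLE my_tab MOVE TABLESPACE users;

-- indexes are left UNUSABLE by the move, so check and rebuild them
SELECT index_name, status FROM user_indexes WHERE table_name = 'MY_TAB';
ALTER INDEX my_tab_idx REBUILD;

-- statistics are stale after the move, so gather them again
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TAB');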

2-Increase PCTFREE.
The PCTFREE parameter sets the minimum percentage of a data block to be reserved as free space for possible updates to rows that already exist in that block. If PCTFREE is set to a low value, there is not enough room in the block for updates and rows get migrated. To avoid migration, all tables that are updated should have PCTFREE set so that there is enough space within the block for updates: the more free space you leave in the block, the more room the rows have to grow in place.
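For example (a sketch only – EMP_HIST is a hypothetical table and 30/40 are just illustrative values):

-- reserve 30% of every block for future row growth at creation time
CREATE TABLE emp_hist
( empno    NUMBER,
  comments VARCHAR2(2000) )
PCTFREE 30;

-- for an existing table, raise PCTFREE (affects blocks used from now on)
ALTER TABLE emp_hist PCTFREE 40;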

3- Export/Import (exporting the table and re-importing it, which recreates it) can also be used as an approach to eliminate migrated rows.

4- Avoid creating tables with more than 255 columns, since rows in such tables are always stored as multiple row pieces (intra-block chaining). A quick check for such tables is sketched below.
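A quick (hypothetical) check for tables in the current schema that cross the 255-column limit:

SELECT table_name, COUNT(*) AS column_count
FROM   user_tab_columns
GROUP  BY table_name
HAVING COUNT(*) > 255;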

 

——- T.E.S.T ——–

SQL> @$ORACLE_HOME/rdbms/admin/utlchain.sql
Table created.

SQL> Desc CHAINED_ROWS;

Name                                      Null?    Type
----------------------------------------- -------- ------------
OWNER_NAME                                         VARCHAR2(30)
TABLE_NAME                                         VARCHAR2(30)
CLUSTER_NAME                                       VARCHAR2(30)
PARTITION_NAME                                     VARCHAR2(30)
SUBPARTITION_NAME                                  VARCHAR2(30)
HEAD_ROWID                                         ROWID
ANALYZE_TIMESTAMP                                  DATE

SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

no rows selected

SQL> create table chain_row_test (a varchar(4000),b varchar(4000), c varchar(4000));
Table created.

SQL> insert into chain_row_test (a,b,c) values ( 1, rpad('*',40000000,'*'), rpad('*',2300000,'*') );
1 row created.

SQL> commit ;
Commit complete.

SQL> SELECT count(*) FROM chained_rows;

COUNT(*)
———-
0

SQL> analyze table chain_row_test list chained rows into chained_rows ;
Table analyzed.

SQL> SELECT count(*) FROM chained_rows;

COUNT(*)
———-
1

SQL> select dbms_rowid.rowid_block_number(rowid) from chain_row_test;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
————————————
508711


AMM (Automatic Memory Management) & ASMM (Automatic Shared Memory Management)

Posted by FatDBA on December 31, 2012

Evolution of Memory Management Features:

Memory management has evolved with each database release:

Oracle Database 10g
Automatic Shared Memory Management (ASMM) was introduced in 10g. You enable the automatic shared memory management feature by setting the SGA_TARGET parameter to a non-zero value.

Oracle Database 11g
Automatic Memory Management (AMM) was introduced in 11g. It enables automatic tuning of both the PGA and the SGA with the use of two new parameters named MEMORY_MAX_TARGET and MEMORY_TARGET.

Oracle Database 11g supports various memory management methods, which are chosen by initialization parameter
settings. Oracle recommends that you enable the automatic memory management method.
1. Automatic Memory Management – For Both the SGA and Instance PGA
2. Automatic Shared Memory Management – For the SGA
3. Manual Shared Memory Management – For the SGA
4. Automatic PGA Memory Management – For the Instance PGA
5. Manual PGA Memory Management – For the Instance PGA


Automatic Memory Management – For Both the SGA and Instance PGA

Beginning with Oracle Database 11g, Oracle Database can manage the SGA memory and instance PGA memory completely automatically. You designate only the total memory size to be used by the instance, and Oracle Database dynamically exchanges memory between the SGA and the instance PGA as needed to meet processing demands.
This capability is referred to as automatic memory management. With this memory management method, the database also dynamically tunes the sizes of the individual SGA components and of the individual PGAs. To use it (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET) and, optionally, a maximum memory size initialization parameter (MEMORY_MAX_TARGET).

Switching to Automatic Memory Management

1) Check the current values configured for SGA_TARGET and PGA_AGGREGATE_TARGET.

SQL>SHOW PARAMETER TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 0
memory_target                        big integer 0
pga_aggregate_target                 big integer 200M
sga_target                           big integer 500M

Add the values of pga_aggregate_target and sga_target. In our case they sum to 700M.

2) Decide on the maximum amount of memory you want to allocate to the database; this determines the maximum value for the sum of the SGA and instance PGA sizes. In our case we decide to set it to 808M.

3) Change the parameters in the initialization parameter file.

Using Spfile
==============
SQL>ALTER SYSTEM SET MEMORY_MAX_TARGET = 808M SCOPE = SPFILE;
SQL>ALTER SYSTEM SET MEMORY_TARGET = 808M SCOPE = SPFILE;
SQL>ALTER SYSTEM SET SGA_TARGET =0 SCOPE = SPFILE;
SQL>ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE = SPFILE;

Using Pfile
==============
If you have started the instance with Pfile, then edit the pfile and set the parameters manually
MEMORY_MAX_TARGET = 808M
MEMORY_TARGET = 808M
SGA_TARGET =0
PGA_AGGREGATE_TARGET = 0

If you do not specify a value for MEMORY_MAX_TARGET and only set MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
MEMORY_MAX_TARGET is a static parameter, i.e. it cannot be changed dynamically and the instance has to be bounced to modify its value, so ensure that you set it to an appropriate value.
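For example (values are illustrative and assume the 808M ceiling set above), once the instance is up you can resize MEMORY_TARGET on the fly as long as it stays at or below MEMORY_MAX_TARGET:

SQL>ALTER SYSTEM SET MEMORY_TARGET = 600M SCOPE = BOTH;   -- allowed: 600M <= 808M
SQL>ALTER SYSTEM SET MEMORY_TARGET = 808M SCOPE = BOTH;   -- allowed: back up to the ceiling
-- Setting MEMORY_TARGET above 808M would be rejected, since it exceeds MEMORY_MAX_TARGET.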

4) Shut down and start up the database.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area  845348864 bytes
Fixed Size                  1303188 bytes
Variable Size             469765484 bytes
Database Buffers          369098752 bytes
Redo Buffers                5181440 bytes

SQL> show parameter target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 808M
memory_target                        big integer 808M
pga_aggregate_target                 big integer 0
sga_target                           big integer 0

Automatic Shared Memory Management – For the SGA
If you want to exercise more direct control over the size of the SGA, you can disable automatic memory management and enable automatic shared memory management. This feature was introduced in 10g with a parameter known as SGA_TARGET. When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of the current workload without requiring any additional configuration. Refer to the procedure below for setting SGA_TARGET.

If you have enabled Automatic Memory Management and want to switch to Automatic Shared Memory Management, follow the procedure below:

SQL>Alter system set MEMORY_TARGET=0 scope=both;
SQL>Alter system set SGA_TARGET=500M scope=both;


Something Default: Control Files Backup

Posted by FatDBA on December 7, 2012

If you take a backup of your control file to trace and do not specify a destination (the AS 'filename' clause lets you choose any location of your choice for the backup), the trace file goes to a default location: in 10g this is always the user dump destination (USER_DUMP_DEST), while in 11g it is the trace directory inside the ADR (Automatic Diagnostic Repository).

Snippets from one of my alert logs, right after I fired the command:

alter database backup controlfile to trace
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12795.trc
Completed: alter database backup controlfile to trace
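For comparison, here is how the destination can be controlled explicitly (both are standard syntax; the paths are made-up examples):

-- write the CREATE CONTROLFILE trace script to a file of your choice
SQL> alter database backup controlfile to trace as '/u01/backup/cf_recreate.sql';

-- or take a binary copy of the current control file to a named location
SQL> alter database backup controlfile to '/u01/backup/control_bkp.ctl';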


Oracle Big Data: What’s the Hype …

Posted by FatDBA on November 5, 2012

“Oracle Big Data: A revolution that has already begun … ”

Read About It:

wp-bigdatawithoracle-1453236

bigdataappliance-datasheet-1453665

big-data-strategy-guide-1536569

managing_bd_w_hadoop_exadata


 