Tales From A Lazy Fat DBA

It’s all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Part 1: ASM Installation on 11gR2 (VMWare)

Posted by FatDBA on January 10, 2016

Hello Everyone,
Today I would like to start a series describing Oracle Automatic Storage Management (Oracle ASM) concepts and providing an overview of Oracle ASM features. The posts that follow will cover subjects like installation, configuration, administration/management, monitoring, troubleshooting, optimization etc.

In this maiden post (Part 1) I would like to discuss and elaborate on ASM installation and related areas.

Prerequisites:
I am assuming that you already have the OS ready with all required packages pre-installed before we begin our ASM installation on top of it. Apart from that, I will start right from scratch.

Step 1:
Preparing Disks or Partitions which will be used while creating the ASM diskgroups.
I’ve created 3 persistent disks, each 4 GB in size, from the VM disk (I will perform all steps in a VM environment).

This is how the VM settings will look once you are done with the disk creation.
*Forget about the fifth hard disk of 10 GB for now. I will explain its usage later in the series.

Once you have the disks created, next you’ll have to partition the newly created disks to make them usable, using the fdisk command. fdisk -l displays the newly created disks as:
/dev/sdb, /dev/sdc, /dev/sdd – each 4 GB (4294 MB) in size.

[root@localhost ~]# fdisk -l

Disk /dev/sda: 91.3 GB, 91268055040 bytes, 178257920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000aab6c

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 178257919 88615936 8e Linux LVM

Disk /dev/sdb: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Posted in Advanced, Basics | Tagged: | Leave a Comment »

runInstaller Error: An unexpected error has been detected by HotSpot Virtual Machine

Posted by FatDBA on January 6, 2016

Hello Everyone,
Installing your Oracle software using the GUI method requires calling the “runInstaller” script, and it is usually an easy step if you have the proper permissions and DISPLAY settings in place.
But here I would like to discuss a case where I spent several hours fixing an error that occurred every time I called the runInstaller script, even after setting all the required permissions and DISPLAY variables.

It failed to render the installer and created a log file under the /tmp directory with the contents shown below.

#
# An unexpected error has been detected by HotSpot Virtual Machine:
#
# SIGSEGV (0xb) at pc=0x0000003e2ce14d70, pid=4000, tid=140717162321680
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_51-b10 mixed mode)
# Problematic frame:
# C [ld-linux-x86-64.so.2+0x14d70]
#

————— T H R E A D —————

Current thread (0x000000004220d3f0): JavaThread "AWT-EventQueue-0" [_thread_in_native, id=4014]

siginfo:si_signo=11, si_errno=0, si_code=128, si_addr=0x0000000000000000

Registers:
RAX=0x0000000000000001, RBX=0x000000004216ae50, RCX=0x000000009eba2203, RDX=0x000000000fabfbff
RSP=0x00007ffb44792278, RBP=0x00007ffb447923c0, RSI=0x0000000000000000, RDI=0x0000000000000058
R8 =0x0000000000000000, R9 =0x0000000000000000, R10=0x00007ffb447921f0, R11=0x000000004216ae50
R12=0x00007ffb447923e8, R13=0x0000000041f85330, R14=0x0000000000000000, R15=0x0000000000000000
RIP=0x0000003e2ce14d70, EFL=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000000
TRAPNO=0x000000000000000d

Top of Stack: (sp=0x00007ffb44792278)
0x00007ffb44792278: 0000003e2ce0aaea 0000000000000000

Signal Handlers:
SIGSEGV: [libjvm.so+0x67ed60], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGBUS: [libjvm.so+0x67ed60], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGFPE: [libjvm.so+0x582020], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGPIPE: [libjvm.so+0x582020], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGILL: [libjvm.so+0x582020], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGUSR2: [libjvm.so+0x583ed0], sa_mask[0]=0x00000000, sa_flags=0x14000004
SIGHUP: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGINT: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGQUIT: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGTERM: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004

————— S Y S T E M —————

OS:Red Hat Enterprise Linux Server release 6.0 (Santiago)

uname:Linux 2.6.32-573.12.1.el6.x86_64 #1 SMP Tue Dec 15 06:42:08 PST 2015 x86_64
libc:glibc 2.12 NPTL 2.12
rlimit: STACK 10240k, CORE 0k, NPROC 16384, NOFILE 65536, AS infinity
load average:0.09 0.06 0.08

CPU:total 1 em64t

Memory: 4k page, physical 2046684k(69828k free), swap 2031612k(2031612k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (1.5.0_51-b10) for linux-amd64, built on Jun 6 2013 09:59:46 by java_re with gcc 3.2.2 (SuSE Linux)

time: Sat Jan 2 23:09:21 2016
elapsed time: 2 seconds

The workaround to the problem is to set the LD_BIND_NOW environment variable to the value 1, as shown below, and re-launch the installer.

bash-4.1$ export LD_BIND_NOW=1
bash-4.1$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB. Actual 27339 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4031 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-01_03-29-40AM. Please wait …
bash-4.1$

This bug seems to have been reported on 11.2.0.1 & 11.2.0.3.
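Some background on why this works (my understanding, not something stated in the crash log): LD_BIND_NOW=1 tells the glibc dynamic linker (ld-linux-x86-64.so.2, the exact “Problematic frame” in the dump above) to resolve all symbols at program startup instead of lazily at first call, so the JVM never goes through the lazy PLT resolution path that was crashing. The eager-binding effect is easy to observe on any glibc system:

```shell
# LD_DEBUG=statistics makes the dynamic linker print relocation
# statistics to stderr; with LD_BIND_NOW=1 all relocations are
# processed eagerly at startup rather than deferred.
LD_DEBUG=statistics LD_BIND_NOW=1 /bin/true 2>&1 | head -n 6
```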

Hope That Helps
Prashant Dixit

Posted in Advanced, Basics | Tagged: , | 3 Comments »

Linux YUM – Error: Cannot retrieve repository metadata (repomd.xml) for repository

Posted by FatDBA on January 6, 2016

Some time back I got an error message from YUM even though the installation and configuration had completed successfully.
It failed every time I called any of the YUM commands, with the error message “Cannot retrieve repository metadata (repomd.xml) for repository”.

[root@Fatdba ~]# yum list
Loaded plugins: refresh-packagekit
Repository ol6_latest is listed more than once in the configuration
Repository ol6_ga_base is listed more than once in the configuration
ftp://obiftp/YUM_local/GDS/obi/6.1/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 – “The requested URL returned error: 502”
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base_el6_local. Please verify its path and try again

Solution:
Try the following sequence of steps to fix this problem.

$ sudo su -
# cd /etc/yum.repos.d
# rm -f *
# wget http://public-yum.oracle.com/public-yum-ol6.repo    # this needs an Internet connection
# yum clean all
# yum makecache

Hope That Helps
Prashant Dixit

Posted in Uncategorized | Leave a Comment »

Oracle GI 11.2 Installation on RHEL 7 – Error: ohasd failed to start the Clusterware.

Posted by FatDBA on January 6, 2016

Recently, as part of a solution I proposed for a new infrastructure for one of our customers, the project team came to me with an error they encountered during the GRID Infrastructure installation, specifically soon after executing the ‘root.sh’ script. This was actually an 11.2.0.4 Grid Infrastructure software installation on Oracle Enterprise Linux 7, the latest release from Oracle Corp.

While troubleshooting the problem I experienced much pain getting it to install. The installation process fails when the root.sh script is run.
* For a Grid Infrastructure for a Standalone Server configuration, the system instead asks you to run the following command as the root user:
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

The error reported is:

[root@localhost /]# /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node localhost successfully pinned.
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2016-01-01 02:14:46.806:
[client(11401)]CRS-2101:The OLR was formatted using version 3.
2016-01-01 02:14:49.572:
[client(11424)]CRS-1001:The OCR was formatted using version 3.

ohasd failed to start at /u01/app/11.2.0/grid/crs/install/roothas.pl line 377, line 4.

I hunted through various blog posts and even Oracle Metalink initially, but all of them were of little to no use.
Finally, I stumbled across an apparently poorly indexed (and titled) support note (1951613.1) that made reference to a RHEL 7 specific patch. The patch number is 18370031.

So, a patch download and a new installation process later, I was finally able to get the GI installer to properly register the ohasd services. In the end, I was glad it was a patch that resolved the issue, since (in theory) Oracle will support it. I was surprised that the Oracle Support tech was not able to locate the patch 🙂

Applying the patch is a little different from the usual flow. You have to run the GI installer to the point where it instructs you to run root.sh. Before you run root.sh, you use OPatch to install the provided patch. Then, finally, you run root.sh.

Below are the steps performed during the fix.
1. First I had to deinstall the previous GRID configuration (where I had got that error message after root.sh execution).
– During the deinstallation process it will ask you to execute a few scripts, which will ultimately help you deinstall the entire Oracle Restart stack.
2. Download, unzip and apply the patch using OPatch.
3. Execute the root.sh script once you have applied the patch.
4. Check the services status using crs_stat.

 

Step 1:

Deinstall previous GRID configuration
[root@localhost deinstall]# su – oracle
Last login: Fri Jan 1 02:17:02 EST 2016 on pts/1
[oracle@localhost ~]$ cd /u01/app/11.2.0/grid/deinstall
[oracle@localhost deinstall]$ ./deinstall

Checking for required files and bootstrapping …
Please wait …
Location of logs /tmp/deinstall2016-01-01_02-30-16AM/logs/

Posted in Advanced | Tagged: | 26 Comments »

Sorry folks, I have been a little busy lately!!!

Posted by FatDBA on December 3, 2015

Sorry I haven’t been blogging lately or posting the next part of the story. Soon I’ll try to post more and try to help.

Thanks
Prashant “FatDBA” Dixit

Posted in Uncategorized | Leave a Comment »

Statistics in Oracle!

Posted by FatDBA on May 5, 2015

In this post I’ll try to summarize all sorts of statistics in Oracle. I strongly recommend reading the full article, as it contains information you may find valuable in understanding Oracle statistics.

#####################################
Database | Schema | Table | Index Statistics
#####################################

Gather Database Statistics:
=======================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>100, METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',
    CASCADE => TRUE,
    DEGREE => 4,
    OPTIONS => 'GATHER STALE',
    GATHER_SYS => TRUE,
    STATTAB => 'PROD_STATS');

CASCADE => TRUE : Gathers statistics on the indexes as well. If not used, Oracle will decide whether to collect index statistics or not.
DEGREE => 4 : Degree of parallelism.
OPTIONS:
       =>'GATHER' : Gathers statistics on all objects in the schema.
       =>'GATHER AUTO' : Oracle determines which objects need new statistics and determines how to gather them.
       =>'GATHER STALE' : Gathers statistics on stale objects; will return a list of stale objects.
       =>'GATHER EMPTY' : Gathers statistics on objects that have no statistics; will return a list of such objects.
       =>'LIST AUTO' : Returns a list of objects to be processed with GATHER AUTO.
       =>'LIST STALE' : Returns a list of stale objects as determined by looking at the *_tab_modifications views.
       =>'LIST EMPTY' : Returns a list of objects which currently have no statistics.
GATHER_SYS => TRUE : Gathers statistics on the objects owned by the SYS user.
STATTAB => 'PROD_STATS' : Table that will save the current statistics. See the SAVE & IMPORT STATISTICS section (third from last in this post).

Note: All the above parameters are valid for all kinds of statistics (schema, table, …) except GATHER_SYS.
Note: Skewed data means the data inside a column is not uniform; one or more particular values are repeated much more often than the other values in the same column. For example, take the gender column in an employee table with two values (male/female): in a construction or security services company, where most of the employees are a male workforce, the gender column is likely to be skewed, but in an entity like a hospital, where the number of males almost equals the number of females in the workforce, the gender column is likely not to be skewed.
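As a quick, purely hypothetical illustration (the table and column names below are made up), you can eyeball skewness with a simple GROUP BY before deciding to gather a SKEWONLY histogram on that column:

```sql
-- Count occurrences per value; one dominant value = skewed column.
SQL> select gender, count(*) from employees group by gender;

-- If skewed, a histogram on just that column helps the optimizer:
SQL> exec dbms_stats.gather_table_stats('HR','EMPLOYEES', method_opt=>'FOR COLUMNS GENDER SIZE SKEWONLY');
```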

For faster execution:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What’s new?
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE : lets Oracle estimate the sample size itself, which almost always gives excellent results (DEFAULT).
Removed “METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'” : gathering histograms on all columns is not recommended.
Removed “cascade => TRUE” : lets Oracle determine whether index statistics should be collected or not.
Doubled the degree to 8, but this depends on the number of CPUs on the machine and the acceptable CPU overhead while gathering DB statistics.

Starting from Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database having stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To Enable Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
    END;
    /

In case you want to Disable Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
    END;
    /

To check the tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS or the (*_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS) views, you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh their parent table mon_mods_all$ with recent data from the SGA. Otherwise you have to wait for an internal Oracle job that refreshes that table once a day from 10g onwards [except for 10gR2], every 15 minutes in 10gR2, or every 3 hours in 9i and earlier, or until you manually run one of the GATHER_*_STATS procedures.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Statistics:
======================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
ownname => 'SCOTT',
estimate_percent => 10,
degree => 1,
cascade => TRUE,
options => 'GATHER STALE');

Gather TABLE Statistics:
====================
Check table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
    ownname => 'SCOTT',
    tabname => 'EMP',
    degree => 2,
    cascade => TRUE,
    METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /

CASCADE => TRUE : Gathers statistics on the indexes as well. If not used, Oracle will determine whether to collect them or not.
DEGREE => 2 : Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE : (DEFAULT) Auto-sets the sample size % for skewed (distinct) values (more accurate and faster than setting a manual sample size).
METHOD_OPT => : For gathering histograms:
 FOR COLUMNS SIZE AUTO : You can specify one column instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT : Prevents deletion of histograms; collects them only for columns that already have histograms.
 FOR ALL COLUMNS : Collects histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY : Collects histograms for columns that have skewed values (you should test skewness first).
 FOR ALL INDEXED COLUMNS : Collects histograms for indexed columns only.

Note: Truncating a table will not update the table statistics; it will only reset the High Water Mark. You have to re-gather statistics on that table.

Inside the “DBA BUNDLE” there is a script called “gather_stats.sh” that will help you easily & safely gather statistics on a specific schema or table, plus it provides advanced features such as backing up and restoring statistics in case you need to fall back.

Gather Index Statistics:
===================
SQL> exec DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT', indname => 'EMP_I',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);

####################
Fixed OBJECTS Statistics
####################

What are Fixed objects:
—————————-
-Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
-If statistics are not gathered on fixed objects, the optimizer will use predefined default values for the statistics. These defaults may lead to inaccurate execution plans.
-Statistics on fixed objects are not gathered automatically, nor when gathering DB stats.

How frequent to gather stats on fixed objects?
——————————————————-
Only once, for a representative workload, unless you hit one of these cases:

– After a major database or application upgrade.
– After implementing a new module.
– After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
– Poor performance/Hang encountered while querying dynamic views e.g. V$ views.

Note:
– It’s recommended to gather the fixed object stats during peak hours (system is busy), or after the peak hours while the sessions are still connected (even if they are idle), to guarantee that the fixed object tables have been populated and the statistics represent the DB activity well.
– Also note that performance degradation may be experienced while the statistics are being gathered.
– Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:
———————————————

First, check the last analyzed date:
------------------------------------
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
       from dba_tab_statistics where table_name='X$KGLDP';
Second, export the current fixed stats to a table (in case you need to revert back):
------------------------------------
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
       ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');

SQL> EXEC dbms_stats.export_fixed_objects_stats
       (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');
Third, gather the fixed objects stats:
------------------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;

Note:
In case you experience bad performance on fixed tables after gathering the new statistics, delete the new stats and import the old ones back:

SQL> exec dbms_stats.delete_fixed_objects_stats();
SQL> exec DBMS_STATS.import_fixed_objects_stats
       (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');

#################
SYSTEM STATISTICS
#################

What is system statistics:
——————————-
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to
effectively cost each operation in an execution plan. They were introduced in Oracle 9i.

Why gathering system statistics:
—————————————-
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
———————————–
This simulates a workload (not the real one, just a simulation) and will not collect full statistics. It is less accurate than WORKLOAD statistics, but if you can’t capture statistics during a typical workload, you can use noworkload statistics.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics:
——————————-
This will gather statistics during the current workload (which is supposed to be representative of the actual system I/O and CPU workload on the DB).
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends (after 1, 2, 3… hours, or whatever you choose), stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use a time interval (in minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60);

Check the system values collected:
——————————————-
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;

cpuspeedNW : The noworkload CPU speed (average number of CPU cycles per second).
ioseektim  : The sum of seek time, latency time, and OS overhead time.
iotfrspeed : I/O transfer speed; tells the optimizer how fast the DB can read data in a single read request.
cpuspeed   : CPU speed during a workload statistics collection.
maxthr     : The maximum I/O throughput.
slavethr   : Average parallel slave I/O throughput.
sreadtim   : The single block read time; the average time for a random single block read.
mreadtim   : The average time (seconds) for a sequential multiblock read.
mbrc       : The average multiblock read count in blocks.

Notes:
-When gathering NOWORKLOAD statistics it will gather (cpuspeedNW, ioseektim, iotfrspeed) system statistics only.
-Above values can be modified manually using DBMS_STATS.SET_SYSTEM_STATS procedure.
-According to Oracle, collecting workload statistics doesn’t impose an additional overhead on your system.
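Since the note above mentions DBMS_STATS.SET_SYSTEM_STATS but the post doesn’t show it, here is a small sketch; the parameter names come from the sys.aux_stats$ listing above, and the values are purely illustrative:

```sql
-- Manually override individual system statistics (illustrative values only).
SQL> exec dbms_stats.set_system_stats('MBRC', 16);
SQL> exec dbms_stats.set_system_stats('SREADTIM', 5);
SQL> exec dbms_stats.set_system_stats('MREADTIM', 10);
```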

Delete system statistics:
——————————
SQL> execute dbms_stats.delete_system_stats();

####################
Data Dictionary Statistics
####################

Facts:
——-
> Dictionary tables are the tables owned by SYS and residing in the SYSTEM tablespace.
> In 9i, gathering data dictionary statistics is normally not required unless performance issues are detected.
> In 10g, statistics on the dictionary tables are maintained via the automatic statistics gathering job run during the nightly maintenance window.

If you choose to switch off that job for the application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using the procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');

When to gather Dictionary statistics:
———————————————
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
———————————————
SQL> select table_name, last_analyzed from dba_tables
where owner='SYS' and table_name like '%$' order by 2;

Gather Dictionary Statistics:
———————————–
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
->Will gather stats on 20% of SYS schema tables.
or…
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
->Will gather stats on 100% of SYS schema tables.
or…
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(gather_sys=>TRUE);
->Will gather stats on the whole DB+SYS schema.

################
Extended Statistics “11g onwards”
################

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on column function:
====================================
If you run a query having in the WHERE clause a function like UPPER/LOWER, the optimizer will be thrown off and an index on that column will not be used:
SQL> select count(*) from EMP where lower(ename) = 'scott';

In order to make optimizer work with function based terms you need to gather extended stats:

1-Create extended stats:
>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SCOTT','EMP','(lower(ENAME))') from dual;

2-Gather histograms:
>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
—-
*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
(ownname => 'SCOTT', tabname => 'EMP',
method_opt => 'for all columns size skewonly for columns (lower(ENAME))');
end;
/

To check the Existence of extended statistics on a table:
———————————————————————-
SQL> select extension_name,extension from dba_stat_extensions where owner='SCOTT' and table_name='EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB (LOWER("ENAME"))

Drop extended stats on column function:
——————————————————
SQL> exec dbms_stats.drop_extended_stats('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on column group: -related columns-
=================================
Certain columns in a table that appear together in join or WHERE conditions are often correlated, e.g. (country, state). You may want to make the optimizer aware of the relationship between two or more such columns, instead of it using separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can more accurately estimate how the columns relate when they are used together in the WHERE clause of a SQL statement. E.g., columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the value of state_name is always influenced by country_id.
If extra columns are referenced in the WHERE clause along with the column group, the optimizer will still make use of the column group statistics.

1- create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SH','CUSTOMERS','(country_id,cust_state_province)') from dual;
2- Re-gather stats/histograms for the table so the optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SH','CUSTOMERS', method_opt=> 'for all columns size skewonly');

OR

*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
(ownname => 'SH', tabname => 'CUSTOMERS',
method_opt => 'for all columns size skewonly for columns (country_id,cust_state_province)');
end;
/

Drop extended stats on column group:
————————————————–
SQL> exec dbms_stats.drop_extended_stats('SH','CUSTOMERS','(country_id,cust_state_province)');

#########
Histograms
#########

What are Histograms?
—————————–
> Histograms hold data about the values within a column, i.e. the number of occurrences of a specific value or range.
> Used by the CBO to decide how best to run a query, e.g. use an index, an index fast full scan, or a full table scan.
> Usually used on columns whose data repeats frequently, like a country or city column.
> Gathering histograms on a column with all-distinct values (e.g. a PK) is useless, because values are not repeated.
> Two types of histograms can be gathered:
-Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255 (e.g. the number of countries is always less than 254).
-Height-balanced histograms: similar to frequency histograms in their design, but used when the number of distinct values is greater than 254.
See an example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which by default doesn’t collect histograms; it deletes them if you don’t use the METHOD_OPT parameter).
> Mainly gathered on foreign key columns and columns used in the WHERE clause.
> Help in SQL multi-table joins.
> Column histograms, like statistics, are stored in the data dictionary.
> If an application exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling Oracle histogram generation.

Cautions:
– Do not create them on Columns that are not being queried.
– Do not create them on every column of every table.
– Do not create them on the primary key column of a table.

Verify the existence of histograms:
———————————————
SQL> select column_name,histogram from dba_tab_col_statistics
where owner='SCOTT' and table_name='EMP';

Creating Histograms:
—————————
e.g.
SQL> Exec dbms_stats.gather_schema_stats
(ownname => 'SCOTT',
estimate_percent => dbms_stats.auto_sample_size,
method_opt => 'for all columns size auto',
degree => 7);

method_opt:
FOR COLUMNS SIZE AUTO : Fastest. You can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT : Prevents deletion of histograms; collects them only for columns that already have histograms.
FOR ALL COLUMNS : Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY : Collects histograms for columns that have skewed values.
FOR ALL INDEXED COLUMNS : Collects histograms for indexed columns only.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the Histograms or not.

Check the existence of Histograms:
SQL> select column_name, count(*) from dba_tab_histograms
where OWNER='SCOTT' and table_name='EMP' group by column_name;

Drop Histograms: 11g
———————-
e.g.
SQL> Exec dbms_stats.delete_column_stats
(ownname=>'SH', tabname=>'SALES',
colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');

Stop gather Histograms: 11g
——————————
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs
('SH', 'SALES', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 1 PROD_ID');
>Will continue to collect histograms as usual on all columns in the SALES table, except for the PROD_ID column.

Drop Histograms: 10g
———————-
e.g.
SQL> exec dbms_stats.delete_column_stats(user,'T','USERNAME');

################################
Save/IMPORT & RESTORE STATISTICS:
################################
====================
Export /Import Statistics:
====================
In this way statistics are exported into a table, then imported later from that table.

1-Create the STATS table:
-----------------------------
SQL> Exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'prod_stats', tblspace => 'USERS');

2-Export statistics to the STATS table:
---------------------------------------
For Database stats:
SQL> Exec dbms_stats.export_database_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'STATS_TABLE_OWNER');
For Table:
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats(ownname => 'SCOTT', tabname => 'EMP', stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats(ownname => 'SCOTT', indname => 'PK_EMP', stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats(ownname=>'SCOTT', tabname=>'EMP', colname=>'EMPNO', stattab=>'prod_stats');

3-Import statistics from the PROD_STATS table into the dictionary:
------------------------------------------------------------------
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS
(stattab => 'prod_stats', statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS
(stattab => 'prod_stats', statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_Dictionary_STATS
(stattab => 'prod_stats', statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS
(stattab => 'prod_stats', statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS
(ownname => 'SCOTT', stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats
(ownname => 'SCOTT', stattab => 'prod_stats', tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats
(ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.import_COLUMN_stats
(ownname=>'SCOTT', tabname=>'EMP', colname=>'EMPNO', stattab=>'prod_stats');

4-Drop the STATS table:
-----------------------
SQL> Exec dbms_stats.DROP_STAT_TABLE(stattab => 'prod_stats', ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are saved automatically in the SYSAUX tablespace for 31 days by default.

Restore Dictionary stats as of timestamp:
——————————————————
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1);

Restore Database stats as of timestamp:
—————————————————-
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1);

Restore SYSTEM stats as of timestamp:
—————————————————-
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1);

Restore FIXED OBJECTS stats as of timestamp:
—————————————————————-
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1);

Restore SCHEMA stats as of timestamp:
—————————————
SQL> Exec dbms_stats.restore_SCHEMA_stats
(ownname=>'SYSADM', AS_OF_TIMESTAMP=>sysdate-1);
OR:
SQL> Exec dbms_stats.restore_schema_stats
(ownname=>'SYSADM', AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
————————————————
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS
(ownname=>'SYSADM', tabname=>'T01POHEAD', AS_OF_TIMESTAMP=>sysdate-1);

=========
Advanced:
=========

To check the current stats history retention period (days) and the oldest time statistics can be restored from:
——————————————————————-
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> select dbms_stats.get_stats_history_availability from dual;

To modify the current stats history retention period (days):
——————————————————————-
SQL> Exec dbms_stats.alter_stats_history_retention(60);
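To see which saved statistics versions are still available for a restore, you can query the statistics history view. A quick sketch; the SCOTT.EMP names are just examples:

SQL> select table_name, to_char(stats_update_time,'DD-MON-YYYY HH24:MI:SS') saved_at
from dba_tab_stats_history
where owner='SCOTT' and table_name='EMP'
order by stats_update_time;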

Purge statistics older than 10 days:
——————————————
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Procedure to reclaim space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space is not reclaimed automatically when you purge stats; you must reclaim it manually using the steps below:

Check Stats tables size:
>>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb,
segment_name,segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc
/

Check Stats indexes size:
>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type
from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type order by 1 asc
/

Move Stats tables in same tablespace:
>>>>>
select 'alter table '||segment_name||' move tablespace
SYSAUX;' from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='TABLE'
/

Rebuild stats indexes:
>>>>>>
select 'alter index '||segment_name||' rebuild online;'
from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='INDEX'
/

Check for un-usable indexes:
>>>>>
select di.index_name,di.index_type,di.status from
dba_indexes di , dba_tables dt
where di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc
/

Delete Statistics:
==============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS ();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS ();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS ();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS ();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT');
For Table stats and its indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats(ownname=>'SCOTT', tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats(ownname => 'SCOTT', indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats(ownname =>'SCOTT', tabname=>'EMP', colname=>'EMPNO');

Note: These deletions can be rolled back by restoring statistics with the corresponding DBMS_STATS.RESTORE_* procedure, as long as the statistics history is still available.

Pending Statistics: "11g onwards"
===============
What is Pending Statistics:
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them system-wide until you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');
Note: Any new statistics gathered on the database will be marked PENDING until you change this preference back to true:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');

Gather statistics as you normally would:
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable the use of pending statistics in your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;
Any SQL statement you run in that session will now use the new pending statistics.
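Before publishing, the pending numbers themselves can be inspected, or discarded if you decide against them. A quick sketch, reusing the SH.SALES example above:

SQL> select table_name, num_rows, blocks, last_analyzed
from dba_tab_pending_stats
where owner='SH' and table_name='SALES';

SQL> Exec DBMS_STATS.DELETE_PENDING_STATS('SH','SALES');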

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS();

Once you finish, don't forget to set the global PUBLISH preference back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
>If you don't, all newly gathered statistics on the database will be marked PENDING, which may confuse you or any other DBA working on this database who is unaware of the change.

Posted in Uncategorized | Leave a Comment »

Huge Archive/Redo Generation in System!

Posted by FatDBA on April 14, 2015

On one of our production databases we saw a huge surge in archive log generation, which flooded the archive destination and started hampering the performance and availability of the system.
The stats below clearly show the hourly archival generation, which rose from an average of about 25 archives a day to a peak of 609 redo log files a day.

DB DATE       TOTAL  H00 H01 H02 H03 H04 H05 H06 H07 H08 H09 H10 H11 H12 H13 H14 H15 H16 H17 H18 H19 H20 H21 H22 H23
— ———- —— — — — — — — — — — — — — — — — — — — — — — — — —
1 2015-04-04      5   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   0   2   0   1
1 2015-04-05     19   0   0   2   0   0   3   1   0   0   1   0   2   1   0   2   0   1   2   0   1   0   2   1   0
1 2015-04-06     27   1   0   2   0   1   2   0   1   0   1   1   2   1   1   2   1   2   2   0   1   1   2   2   1
1 2015-04-07     33   0   1   2   0   1   2   0   1   1   0   1   2   1   0   2   1   1   3   1   1   2   4   3   3
1 2015-04-08    136   3   3   5   3   3   5   4   4   4   4   5   7   5   6   8   7   7   8   7   7   7   9   8   7
1 2015-04-09    284   8   9  10   9   9  11   9   9  10  11  11  14  12  12  14  13  12  14  13  17  14  15  15  13
1 2015-04-10    345  14  14  16  14  13  14  13  12  13  13  13  16  13  14  15  14  16  16  15  14  15  17  16  15
1 2015-04-11    428  16  16  17  16  16  17  17  16  18  17  18  19  18  18  19  18  18  20  18  18  20  20  19  19
1 2015-04-12    609  19  19  21  21  21  22  21  21  21  21  20  23  22  29  30  31  32  34  31  30  30  31  30  29
1 2015-04-13    277  25  24  25  25  25  26  25  25  25  25  24   3   0   0   0   0   0   0   0   0   0   0   0   0

During the investigation we found two sessions, SID 3 and SID 13, generating a huge amount of redo during the period.

select s.username, s.osuser, s.status, s.sql_id, sr.* from
(select sid, round(value/1024/1024) as "RedoSize(MB)"
from v$statname sn, v$sesstat ss
where sn.name = 'redo size'
and ss.statistic# = sn.statistic#
order by value desc) sr,
v$session s
where sr.sid = s.sid
and rownum <= 10;

USERNAME   OSUSER                         STATUS       SQL_ID               SID RedoSize(MB)
———- —————————— ———— ————- ———- ————
oracle                         ACTIVE                              1            0
testuser    testadm                         INACTIVE     apnx8grhadf80          2            0
testuser    testadm                         ACTIVE                             3        90037
testuser    testadm                         INACTIVE                            6            0
testuser    testadm                         INACTIVE                            7            0
testuser    testadm                         INACTIVE     apnx8grhadf80          8            0
testuser    testadm                         INACTIVE                            9            0
testuser    testadm                         INACTIVE                           10            8
testuser    testadm                         ACTIVE       14f48saw6n9d1         13       189923
testuser    testadm                         INACTIVE                           15            0
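As a lighter-weight cross-check, V$SESS_IO tracks cumulative block changes per session, which is a rough proxy for redo generation. A sketch:

select * from (
select s.sid, s.username, s.status, i.block_changes
from v$session s, v$sess_io i
where s.sid = i.sid
order by i.block_changes desc
) where rownum <= 10;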

Let's investigate and dig deep!
Alright, the first step should be to collect details of the objects that are changing frequently and altering the most database blocks. The script below helps achieve that.

prompt  To show all segment level statistics in one screen
prompt
set lines 140 pages 100
col owner format A12
col object_name format A30
col statistic_name format A30
col object_type format A10
col value format 99999999999
col perc format 99.99
undef statistic_name
break on statistic_name
with segstats as (
select * from (
select inst_id, owner, object_name, object_type , value ,
rank() over (partition by inst_id, statistic_name order by value desc ) rnk , statistic_name
from gv$segment_statistics
where value > 0 and statistic_name like '%'||'&&statistic_name'||'%'
) where rnk < 31
) ,
sumstats as ( select inst_id, statistic_name, sum(value) sum_value from gv$segment_statistics group by statistic_name, inst_id)
select a.inst_id, a.statistic_name, a.owner, a.object_name, a.object_type, a.value, (a.value/b.sum_value)*100 perc
from segstats a , sumstats b
where a.statistic_name = b.statistic_name
and a.inst_id = b.inst_id
order by a.statistic_name, a.value desc
/

INST_ID|STATISTIC_NAME                |OWNER       |OBJECT_NAME                   |OBJECT_TYP|       VALUE|  PERC
———-|——————————|————|——————————|———-|————|——
1|db block changes              |testuser     |PZ214                          |TABLE     |  2454298704| 71.83
1                               |testuser     |T94                           |TABLE     |    23416784|   .69
1                               |testuser     |PZ978                          |TABLE     |    19604784|   .57
1                               |testuser     |PZ919                          |TABLE     |    18204160|   .53
1                               |testuser     |T85                           |TABLE     |    15616624|   .46
1                               |testuser     |IH94                          |INDEX     |    14927984|   .44
1                               |testuser     |IPZ978                         |INDEX     |    14567840|   .43
1                               |testuser     |I296_1201032811_1             |INDEX     |    14219072|   .42
1                               |testuser     |PZ796                          |TABLE     |    13881712|   .41
1                               |testuser     |H94                           |TABLE     |    13818416|   .40
1                               |testuser     |I312_3_1                      |INDEX     |    12247776|   .36
1                               |testuser     |I312_6_1                      |INDEX     |    11906992|   .35
1                               |testuser     |I312_7_1                      |INDEX     |    11846864|   .35
1                               |testuser     |IPZ412                         |INDEX     |    11841360|   .35
1                               |testuser     |I178_1201032811_1             |INDEX     |    11618160|   .34
1                               |testuser     |PZ972                          |TABLE     |    11611392|   .34
1                               |testuser     |H312                          |TABLE     |    11312656|   .33
1                               |testuser     |IPZ796                         |INDEX     |    11292912|   .33
1                               |testuser     |I188_1101083000_1             |INDEX     |     9772816|   .29
1                               |testuser     |PZ412                          |TABLE     |     9646864|   .28
1                               |testuser     |IH312                         |INDEX     |     9040944|   .26
1                               |testuser     |I189_1201032712_1             |INDEX     |     8739376|   .26
1                               |testuser     |SYS_IL0000077814C00044$$      |INDEX     |     8680976|   .25
1                               |testuser     |I119_1000727019_1             |INDEX     |     8629808|   .25
1                               |testuser     |I119_1101082009_1             |INDEX     |     8561520|   .25
1                               |testuser     |I312_1705081004_1             |INDEX     |     8536656|   .25
1                               |testuser     |I216_1201032712_1             |INDEX     |     8306016|   .24
1                               |testuser     |I119_1404062203_1             |INDEX     |     8289520|   .24
1                               |testuser     |PZ988                          |TABLE     |     8156352|   .24
1                               |testuser     |I85_1703082001_1              |INDEX     |     8126528|   .24

In this scenario the LogMiner utility is of great help. Below is the method to quickly mine an archived log.

SQL> begin
sys.dbms_logmnr.ADD_LOGFILE ('/vol2/oracle/arc/testdb/1_11412_833285103.arc');
end;
/
begin
sys.dbms_logmnr.START_LOGMNR;
end;
/
PL/SQL procedure successfully completed.

I was initially using a hard-coded 512 bytes for the redo block size. You can use the following SQL statement to identify the actual redo block size.

SQL> select max(lebsz) from x$kccle;

MAX(LEBSZ)
———-
512

1 row selected.

I always prefer to create a table from the data in the v$logmnr_contents dynamic performance view rather than accessing the view directly, which can be hazardous: its contents are only visible to the session that started LogMiner and disappear once the session ends.

SQL> CREATE TABLE redo_analysis_212_2 nologging AS
SELECT data_obj#, oper,
rbablk * le.bsz + rbabyte curpos,
lead(rbablk*le.bsz+rbabyte,1,0) over (order by rbasqn, rbablk, rbabyte) nextpos
FROM
( SELECT DISTINCT data_obj#, operation oper, rbasqn, rbablk, rbabyte
FROM v$logmnr_contents
ORDER BY rbasqn, rbablk, rbabyte
) ,
(SELECT MAX(lebsz) bsz FROM x$kccle ) le
/

Table created.

Next you can query the table to get the mining details.

set lines 120 pages 40
column data_obj# format 9999999999
column oper format A15
column object_name format A60
column total_redo format 99999999999999
compute sum label 'Total Redo size' of total_redo on report
break on report
spool /tmp/redo_212_2.lst
select data_obj#, oper, obj_name, sum(redosize) total_redo
from
(
select data_obj#, oper, obj.name obj_name , nextpos-curpos-1 redosize
from redo_analysis_212_2 redo1, sys.obj$ obj
where (redo1.data_obj# = obj.obj# (+) )
and nextpos != 0 -- for the boundary condition
and redo1.data_obj# != 0
union all
select data_obj#, oper, 'internal ' , nextpos-curpos redosize
from redo_analysis_212_2 redo1
where redo1.data_obj# = 0
and nextpos != 0
)
group by data_obj#, oper, obj_name
order by 4
/

DATA_OBJ#|OPER           |OBJ_NAME                      |     TOTAL_REDO
———–|—————|——————————|—————
78236|UPDATE         |PZ716                          |         132584
78227|INTERNAL       |PZ214                          |         142861
738603|DIRECT INSERT  |WRH$_ACTIVE_SESSION_HISTORY   |         170764
78546|INSERT         |PZ412                          |         179476
78101|UPDATE         |PZ989                          |         191276
78546|LOB_WRITE      |PZ412                          |         220850
78546|UPDATE         |PZ412                          |         314460
78038|UPDATE         |PZ972                          |         322060
77814|UPDATE         |PZ919                          |         375863
78227|LOB_WRITE      |PZ214                          |         399417
77814|LOB_WRITE      |PZ919                          |         407572
0|START          |internal                      |         760604
0|COMMIT         |internal                      |        2654020
78227|UPDATE         |PZ214                          |      452580201
|               |                              |—————
Total Redo |               |                              |      461746150

259 rows selected.
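Once the analysis is done, it is good practice to close the LogMiner session to release its resources, and to drop the work table when it is no longer needed. A sketch, with the table name matching the one created above:

SQL> Exec DBMS_LOGMNR.END_LOGMNR;
SQL> drop table redo_analysis_212_2 purge;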

SQL> select OWNER,OBJECT_NAME,OBJECT_TYPE,CREATED,LAST_DDL_TIME,STATUS,TIMESTAMP from dba_objects where OBJECT_ID=78227;
rows will be truncated

OWNER       |OBJECT_NAME                                                 |OBJECT_TYP|CREATED    ||STATUS
————|————————————————————|———-|———–|———–|———–
testuser     |PZ214                                                        |TABLE     |04-DEC-2013||VALID

1 row selected.

The mining results make it clearly visible that out of the roughly 460 MB of archived log that was mined, about 450 MB was generated by UPDATEs on the object PZ214. Now we have enough proof in hand to share with the application/development teams so they can investigate the issue.

After a parallel investigation, we ultimately found that a feature enabled at the application end had caused this redo swamp; once it was rectified, the issue was fixed.

DB DATE       TOTAL  H00 H01 H02 H03 H04 H05 H06 H07 H08 H09 H10 H11 H12 H13 H14 H15 H16 H17 H18 H19 H20 H21 H22 H23
— ———- —— — — — — — — — — — — — — — — — — — — — — — — — —
1 2015-04-04      5   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   0   2   0   1
1 2015-04-05     19   0   0   2   0   0   3   1   0   0   1   0   2   1   0   2   0   1   2   0   1   0   2   1   0
1 2015-04-06     27   1   0   2   0   1   2   0   1   0   1   1   2   1   1   2   1   2   2   0   1   1   2   2   1
1 2015-04-07     33   0   1   2   0   1   2   0   1   1   0   1   2   1   0   2   1   1   3   1   1   2   4   3   3
1 2015-04-08    136   3   3   5   3   3   5   4   4   4   4   5   7   5   6   8   7   7   8   7   7   7   9   8   7
1 2015-04-09    284   8   9  10   9   9  11   9   9  10  11  11  14  12  12  14  13  12  14  13  17  14  15  15  13
1 2015-04-10    345  14  14  16  14  13  14  13  12  13  13  13  16  13  14  15  14  16  16  15  14  15  17  16  15
1 2015-04-11    428  16  16  17  16  16  17  17  16  18  17  18  19  18  18  19  18  18  20  18  18  20  20  19  19
1 2015-04-12    609  19  19  21  21  21  22  21  21  21  21  20  23  22  29  30  31  32  34  31  30  30  31  30  29
1 2015-04-13    371  25  24  25  25  25  26  25  25  25  25  24  28  26  19  10   1   2   3   2   1   1   2   2   0
1 2015-04-14      7   1   0   2   0   1   2   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

Posted in Uncategorized | 1 Comment »

Evaluating Storage Performance!

Posted by FatDBA on March 10, 2015

Being a DBA obviously means you need to possess outstanding knowledge of core DBA subjects, but nowadays knowledge of storage, network and OS is a big plus. It helps a modern-day DBA troubleshoot quickly and avoid countless hours wasted chasing "database issues" when in fact there aren't any.

This time I discuss steps one can take when there are waits like log file sync, db file async io submit, log file parallel write, control file parallel write, and many of the other parallel and sequential write events observed in a system.

Quick Fast Disk Test Results (DD Command):
———————————————
[oracle@dixitdb111 datafiles]$ time sh -c "dd if=/dev/zero of=dd-test-file bs=20k count=1000000 && sync"
1000000+0 records in
1000000+0 records out
8192000000 bytes (8.2 GB) copied, 26.4959 seconds, 309 MB/s

real 0m44.318s
user 0m0.278s
sys 0m19.410s
You have new mail in /var/spool/mail/oracle

[oracle@dixitdb111 datafiles]$ ls -ltrh
-rw-r–r– 1 oracle oinstall 7.7G Jan 30 11:59 dd-test-file

ORION
————-

ORION (ORacle IO Numbers) imitates the type of I/O performed by Oracle databases, which makes it possible to measure I/O performance for a storage configuration without actually installing Oracle. It is now included in the "$ORACLE_HOME/bin" directory of Database/Grid installations.

There are many options available for an ORION run, e.g. oltp, olap and more.
Linked below is a beautiful explanation by Alex Gorbachev (ACED, IOUG, OakTable member and renowned blogger):
Link: http://www.uyoug.org.uy/eventos2013/OTNLAD2013-Benchmarking-Oracle-IO-Performance-with-ORION-by-Alex-Gorbachev.pdf

[oracle@dixitdb111 bin]$ ./orion -run oltp
ORION: ORacle IO Numbers -- Version 11.2.0.3.0
orion_20150203_0800
Calibration will take approximately 2 minutes.
Using a large value for -cache_size may take longer.

The run produces a few .csv and text files with the I/O results, from which you can build charts/graphs, plus tabular records for the run.

Reference:
————–
_hist.csv    - Histogram of I/O latencies.
_iops.csv    - Performance results of small I/Os, in IOPS.
_lat.csv     - Latency of small I/Os, in microseconds.
_mbps.csv    - Performance results of large I/Os, in MBPS.
_summary.txt - Summary of the input parameters, along with the minimum small I/O latency (in secs), the maximum MBPS, and the maximum IOPS observed.
_trace.txt   - Extended, unprocessed output.

I/O calibration is another of those magical options. This feature enables you to assess the performance of the storage subsystem and determine whether I/O performance problems are caused by the database or by the storage subsystem. Unlike external I/O calibration tools that issue I/Os sequentially, the I/O calibration feature of Oracle Database issues I/Os randomly against Oracle datafiles, producing results that more closely match the actual performance of the database.

SELECT d.name,
i.asynch_io
FROM v$datafile d,
v$iostat_file i
WHERE d.file# = i.file_no
AND i.filetype_name = 'Data File';

NAME ASYNCH_IO
—————————————————————– ———
/dbmnt1/dixitdb/datafiles/system01.dbf ASYNC_ON
/dbmnt2/dixitdb/datafiles/undotbs1_01.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/sysaux01.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/users01.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem01.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem02.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem03.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem04.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem05.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem06.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem07.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem08.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem09.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem10.dbf ASYNC_ON
/dbmnt2/dixitdb/datafiles/unicode1tbs01.dbf ASYNC_ON
/dbmnt2/dixitdb/datafiles/unicode2atbs01.dbf ASYNC_ON
/dbmnt2/dixitdb/datafiles/r11testtbs.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem11.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem12.dbf ASYNC_ON
/dbmnt2/dixitdb/datafiles/artest.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem13.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem14.dbf ASYNC_ON
/dbmnt1/dixitdb/datafiles/dbsystem15.dbf ASYNC_ON
/dbmnt3/dixitdb/datafiles/dbsystem16.dbf ASYNC_ON

* The calibration run below is resource intensive. Load/CPU spikes are expected while it runs.

SET SERVEROUTPUT ON
DECLARE
l_latency PLS_INTEGER;
l_iops PLS_INTEGER;
l_mbps PLS_INTEGER;
BEGIN
DBMS_RESOURCE_MANAGER.calibrate_io (num_physical_disks => 1,
max_latency => 20,
max_iops => l_iops,
max_mbps => l_mbps,
actual_latency => l_latency);

DBMS_OUTPUT.put_line('Max IOPS = ' || l_iops);
DBMS_OUTPUT.put_line('Max MBPS = ' || l_mbps);
DBMS_OUTPUT.put_line('Latency = ' || l_latency);
END;
/

Max IOPS = 610
Max MBPS = 67
Latency = 19

==================
Calibration runs can be monitored using the V$IO_CALIBRATION_STATUS view.

SET LINESIZE 100
COLUMN start_time FORMAT A20
COLUMN end_time FORMAT A20

SELECT TO_CHAR(start_time, 'DD-MON-YYY HH24:MI:SS') AS start_time,
TO_CHAR(end_time, 'DD-MON-YYY HH24:MI:SS') AS end_time,
max_iops,
max_mbps,
max_pmbps,
latency,
num_physical_disks AS disks
FROM dba_rsrc_io_calibrate;

START_TIME END_TIME MAX_IOPS MAX_MBPS MAX_PMBPS LATENCY DISKS
——————– ——————– ———- ———- ———- ———- ———-
30-JAN-015 09:49:10 30-JAN-015 09:53:14 610 67 27 19 1

Thanks
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

File is DELETED but didn’t reclaim space in filesystem – LINUX

Posted by FatDBA on March 9, 2015

There are times when, even after you delete a file in Linux, the space is not reclaimed in the filesystem.
The command below shows a few files that have been deleted from the system but are still held open and still occupying space on disk.

[oracle@dixitdb053 /proc]# /usr/sbin/lsof |grep -i deleted
oracle 11441 oracle 19w REG 253,2 1810 3129788 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_diag_11441.trc (deleted)
oracle 11441 oracle 20w REG 253,2 126 3129795 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_diag_11441.trm (deleted)
oracle 30157 oracle 19w REG 253,2 14182 3129684 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_vkrm_30157.trc (deleted)
dd 32592 oracle 1w REG 253,2 24238593024 3129587 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_ora_31778.trm (deleted)

Here the deleted file lived under the /opt/oracle mount point, which is still 86% full.

[oracle@dixitdb053 /proc/11441/fd]# df -hk
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
31614888 6127104 23856156 21% /
/dev/mapper/VolGroup02-LogVol05
36124288 29165480 5123800 86% /opt/oracle
/dev/sda1 101086 15711 80156 17% /boot
tmpfs 1956756 0 1956756 0% /dev/shm
113.11.88.199:/vol/dixitdb053_BO
110100480 58354656 51745824 54% /db01
113.11.88.199:/vol/vol_dixitdb053_backup
104857600 55011648 49845952 53% /backup

The space will only be reclaimed automatically once the process holding the deleted file open closes its file descriptor (or exits).
Below are the steps to release the space instantly, without killing the process.


Steps:
=============

1. Go to the /proc directory, which contains a subdirectory for every active process (named by PID).
2. Change into the fd directory of the PID that is holding the deleted file open:

[oracle@dixitdb053 /proc/11441/fd]# cd /proc/32592
[oracle@dixitdb053 /proc/32592]# cd fd
[oracle@dixitdb053 /proc/32592/fd]# pwd
/proc/32592/fd

3. List the file descriptors and look for a symlink that ends with (deleted):

[oracle@dixitdb053 /proc/32592/fd]# ls -ltrh
total 0
lrwx—— 1 oracle oinstall 64 Mar 7 07:57 2 -> /dev/pts/1
l-wx—— 1 oracle oinstall 64 Mar 7 08:25 1 -> /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_ora_31778.trm (deleted)
lr-x—— 1 oracle oinstall 64 Mar 7 08:25 0 -> /dev/zero

4. Truncate the file descriptor number shown on that line using the shell's > redirection; this releases the space instantly.

[oracle@dixitdb053 /proc/32592/fd]# > 1

[oracle@dixitdb053 /proc/32592/fd]# ls -ltrh
total 0
lrwx—— 1 oracle oinstall 64 Mar 7 07:57 2 -> /dev/pts/1
l-wx—— 1 oracle oinstall 64 Mar 7 08:25 1 -> /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_ora_31778.trm (deleted)
lr-x—— 1 oracle oinstall 64 Mar 7 08:25 0 -> /dev/zero

5. Verify once again by listing the open deleted files, to confirm the space has been released.

The size of the deleted file has now been brought down to 0 and the space is reclaimed.

[oracle@dixitdb053 /proc/32592/fd]# /usr/sbin/lsof |grep -i deleted
oracle 11441 oracle 19w REG 253,2 0 3129788 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_diag_11441.trc (deleted)
oracle 11441 oracle 20w REG 253,2 0 3129795 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_diag_11441.trm (deleted)
oracle 30157 oracle 19w REG 253,2 14792 3129684 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_vkrm_30157.trc (deleted)
dd 32592 oracle 1w REG 253,2 0 3129587 /opt/oracle/diag/rdbms/dixitdb/dixitdb/trace/dixitdb_ora_31778.trm (deleted)

And usage of the /opt/oracle mount point went down too, dropping to 17%.

[oracle@dixitdb053 /proc/32592/fd]# df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
31G 5.9G 23G 21% /
/dev/mapper/VolGroup02-LogVol05
35G 5.3G 28G 17% /opt/oracle
/dev/sda1 99M 16M 79M 17% /boot
tmpfs 1.9G 0 1.9G 0% /dev/shm
113.11.88.199:/vol/dixitdb053_BO
105G 56G 50G 54% /db01
113.11.88.199:/vol/vol_dixitdb053_backup
100G 53G 48G 53% /backup

Thanks
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

Some Serious Fun with the Database!!

Posted by FatDBA on March 4, 2015

Reading Alert log using SQL Terminal:
select message_text from X$DBGALERTEXT where rownum <= 10;

MESSAGE_TEXT
—————————————————————————————————-
Adjusting the default value of parameter parallel_max_servers
from 320 to 120 due to the value of parameter processes (150)
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = 64 KB

Total Shared Global Region in Large Pages = 0 KB (0%)

Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
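You can also filter the same view for errors only, e.g. to pull the last day's ORA- messages. A sketch; note that X$ views are undocumented and require a SYSDBA connection:

SQL> select originating_timestamp, message_text
from x$dbgalertext
where message_text like '%ORA-%'
and originating_timestamp > sysdate - 1
order by originating_timestamp;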

How to print a message to the alert log:
SQL> exec dbms_system.ksdwrt(1, 'This message goes to trace file in the udump location');
PL/SQL procedure successfully completed.

SQL> exec dbms_system.ksdwrt(2, 'This message goes to the alert log');
PL/SQL procedure successfully completed.

SQL> exec dbms_system.ksdwrt(3, 'This message goes to the alert log and trace file in the udump location');
PL/SQL procedure successfully completed.

SQL> exec dbms_system.ksdwrt(2, 'ORA-0000118111: Testing monitoring tool');
PL/SQL procedure successfully completed.

Thanks
Prashant Dixit

Posted in Advanced | Leave a Comment »