Tales From A Lazy Fat DBA

Fan of Oracle DB & Performance, PostgreSQL & Cassandra … \,,/


Archive for the ‘Uncategorized’ Category

Datastax Certified Cassandra Administrator, some tips & more

Posted by FatDBA on August 21, 2020

Hi Guys,

With the sharp rise of NoSQL databases, many organizations are making the transition from traditional databases to distributed, high-performance databases like ‘Cassandra’. Cassandra has become one of Apache’s most popular projects. Though there are multiple NoSQL databases on the market, few combine its feature set: peer-to-peer architecture, high availability and fault tolerance, column-based storage, high performance, schema-less design, tunable consistency, great analytical possibilities, easy scale-up & scale-down, fully distributed, and the list goes on and on.

Cassandra has already proved its mettle and is magical for IoT, sensor data, event-based and time-series data, voucher generation systems, and other data models. DataStax provides best-in-class database management software and a wide range of services with 24×7 support to get more from your Cassandra. Alongside come some really cool features and tools, e.g. OpsCenter (GUI), NodeSync (for anti-entropy repairs), great Solr integration, dsetool (similar to nodetool but with more capabilities), sstableloader, the pre-flight check tool, yaml file comparison tools, stress tools, extra commands such as dsefs, and many more.
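If you go down the admin track, it is worth getting comfortable early with the everyday health-check commands on the course VM. A typical session might look like the sketch below; these are standard nodetool/dsetool invocations (output omitted here), with dsetool being DSE-only:

$ nodetool status            # ring overview: 'UN' = Up/Normal for every node
$ nodetool info              # per-node uptime, heap usage and cache details
$ nodetool describecluster   # cluster name, snitch and schema agreement
$ dsetool ring               # DSE-specific ring view, includes the workload type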

DataStax is a pioneer here, and they have their own Cassandra certification path/track to prove you have valid credentials to work with the Cassandra database, either as a developer or as an administrator. Now the question comes: where to start? In fact, many of you have asked me about my latest credential, ‘DataStax Apache Cassandra 3.x Administrator Associate‘; I was getting questions like how to prepare, how to book the exam, and many other related questions. So this post will cover how to prepare for and book the exam, along with a few tips.

I always prefer to go point by point to keep things ordered and easy to digest.

1. Create your account on Datastax Academy.
Link: https://auth.cloud.datastax.com/auth/realms/CloudUsers/login-actions/registration?client_id=absorb&tab_id=lv4-57nRbu4

2. Go to the ‘Catalog’ option to look through the available courses.
You have to choose between the Administrator (3-course curriculum) and the Developer (3-course curriculum) track. I completed the ADMIN path, which has three courses: DS101 (Introduction), DS201 (Foundations) and DS210 (Operations with Apache Cassandra). All of the courses are beautifully designed and contain a large number of demos, presentations, guides, quizzes and a pre-built Ubuntu VM where you can do all the exercises.

Though the presentations and program cover every major topic and parameter, if you want to read in more depth they have their own documentation collection, accessible through https://docs.datastax.com/en/landing_page/doc/landing_page/current.html or https://cassandra.apache.org/doc/latest/

Note: There are a few other specialized courses available within the catalog too, e.g. Kafka connectors, DSE Graph, DSE Analytics, DSE Search etc.

3. Other learning platforms
Github: https://github.com/datastax
Can be very useful, especially if you are preparing for the developer track.
Youtube: Full of great presentations, videos and some precious workshops and demos.
https://www.youtube.com/user/DataStaxMedia
Twitter: For news (about webinars etc.), press releases and other exciting information.
https://twitter.com/DataStax (@DataStax)

4. All set!
Once you are done with all three courses under the ADMIN track, you are ready for the certification. Go to the ‘DataStax Certification’ widget within the catalog and book your exam by creating your profile on their certification website.
https://certification.mettl.com/datastax/applicant/signup

Currently they are giving out one free exam voucher each, issued at the end of the series to workshop participants.

5. Once registered, you have to choose your exam type – Admin or Developer.
Both exams have 60 questions that you have to complete within 90 minutes; the exam fee (right now) is $145.
Note: It’s a good idea to check your system compatibility before the exam; for more details follow their official guidelines.

So, don’t wait – go enroll for the course, grab the chance of a free certification attempt and, more importantly, stand out from the crowd. These widely accepted and recognized credentials will help in your continued professional development, are an ideal way to gain a greater understanding of your industry, and will enhance your knowledge and skills. It also offers excellent chances to network with other Cassandra geeks.

Hope It Helps!
Prashant Dixit

Posted in Basics, Uncategorized | Tagged: | Leave a Comment »

root.sh failing while installing 12cR2 on RHEL7 “Failed to create keys in the OLR” – Does your hostname start with a number?

Posted by FatDBA on July 29, 2019

Hi Guys,

I know it’s been too long since I last posted; that was due to some site authentication issues and some personal priorities. Here I am, back with new issues, all related to performance, administration, troubleshooting, optimization and other subjects.

This time I would like to share an issue I faced while installing Oracle 12c Release 2 (yes, I still do installations, sometimes 🙂 ) on a brand new RHEL7 box, where everything was good until I ran root.sh, which failed with a weird error that initially gave no hint about the underlying problem.
Initially I wondered whether this qualified as a post and deserved a place here, but I spent a few days identifying the cause and hours with support, so I just want to save all that time for those of you who might be facing the same issue and searching Google 🙂

So let’s get started!
This is exactly what I got when I ran the root.sh script:



[root@8811913-monkey-db1:/u011/app1/12.2.0.1/grid]# ./root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u011/app1/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u011/app1/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u011/app1/12.2.0.1/crsdata/8811913-monkey-db1/crsconfig/roothas_2019-02-18_00-59-22AM.log
Site name (8811913-monkey-db1) is invalid.clscfg -localadd -z  [-avlookup]
                 -p property1:value1,property2:value2...

  -avlookup       - Specify if the operation is during clusterware upgrade
  -z   - Specify the site GUID for this node
  -p propertylist - list of cluster properties and its value pairs

 Adds keys in OLR for the HASD.
WARNING: Using this tool may corrupt your cluster configuration. Do not
         use unless you positively know what you are doing.

 Failed to create keys in the OLR, rc = 100, Message:


2019/02/18 00:59:28 CLSRSC-188: Failed to create keys in Oracle Local Registry
Died at /u011/app1/12.2.0.1/grid/crs/install/oraolr.pm line 552.
The command '/u011/app1/12.2.0.1/grid/perl/bin/perl -I/u011/app1/12.2.0.1/grid/perl/lib -I/u011/app1/12.2.0.1/grid/crs/install /u011/app1/12.2.0.1/grid/crs/install/roothas.pl ' execution failed


The error simply said that the script failed to ‘create the keys in OLR’ – the keys it was attempting to add for HASD. I verified all the runtime logs created at the time, but they too gave no idea about the problem. That is when I had to engage Oracle customer support, and I came to know that this all happened due to a new bug (BUG 26581118 – ALLOW HOSTNAME WITH NUMERIC VALUE) that comes into the picture when your hostname starts with a numeral; it applies on RHEL7 and is specific to Oracle 12c Release 2.

Oracle suggested a bug fix (Patch Number: 26751067) for this issue. This is a MERGE patch and fixes both bugs 25499276 & 26581118. One more thing: you have to apply this patch before running the root.sh script.
So let me quickly show how to do that (removing all redundant sections).



[oracle@8811913-monkey-db1:/u011/app1/12.2.0.1/grid/OPatch]$ ./opatch napply -oh /u011/app1/12.2.0.1/grid -local 26751067/26751067/
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2019, Oracle Corporation.  All rights reserved.

...
......

Patch 26751067 successfully applied.
Log file location: /u011/app1/12.2.0.1/grid/cfgtoollogs/opatch/opatch2019-02-18_01-05-41AM_1.log

OPatch succeeded.
[oracle@8811913-monkey-db1:/u011/app1/12.2.0.1/grid/OPatch]$
[oracle@8811913-monkey-db1:/u011/app1/12.2.0.1/grid/OPatch]$


I ran root.sh again after patching and it went smoothly.
BTW, in case you don’t want to do all this, simply change the hostname and put a letter in front of it, i.e. 8811913 –> A8811913 – that’s it!
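For reference, on RHEL7 the rename itself is a one-liner; a minimal sketch using this demo box’s name (also update /etc/hosts and any listener/tnsnames entries that still carry the old name):

# hostnamectl set-hostname A8811913-monkey-db1
# vi /etc/hosts        <-- fix the old 8811913-monkey-db1 entries here too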

Hope It Helps!

Thanks
Prashant Dixit

Posted in troubleshooting, Uncategorized | Tagged: | 1 Comment »

Oracle DB Security Assessment Tool (DBSAT)

Posted by FatDBA on March 2, 2018

Hi Everyone,

I would like to discuss a request that came from one of my earlier projects: identify sensitive data (tables, objects etc.) within their databases so that external policies could be enforced later on. The customer only permitted us to use an inbuilt or Oracle-branded audit tool, not any third-party security/compliance auditing tools.

So we landed on Oracle’s inbuilt database security assessment tool, named DBSAT.
DBSAT has three components: Collector, Reporter, and Discoverer. The Collector and Reporter work together to discover risk areas and produce the final assessment report, while the Discoverer scans the database for sensitive data and produces its report in HTML and CSV formats.
You can use DBSAT report findings to:

– Fix immediate short-term risks
– Implement a comprehensive security strategy
– Support your regulatory compliance program
– Promote security best practices

Let’s see what it is and how to use it.

Step 1: Unzip the package.

[oracle@dixitlab software]$ unzip dbsat.zip
Archive: dbsat.zip
inflating: dbsat
inflating: dbsat.bat
inflating: sat_reporter.py
inflating: sat_analysis.py
inflating: sat_collector.sql
inflating: xlsxwriter/app.py
inflating: xlsxwriter/chart_area.py
inflating: xlsxwriter/chart_bar.py
inflating: xlsxwriter/chart_column.py
inflating: xlsxwriter/chart_doughnut.py
inflating: xlsxwriter/chart_line.py
inflating: xlsxwriter/chart_pie.py
inflating: xlsxwriter/chart.py
inflating: xlsxwriter/chart_radar.py
inflating: xlsxwriter/chart_scatter.py
inflating: xlsxwriter/chartsheet.py
inflating: xlsxwriter/chart_stock.py
inflating: xlsxwriter/comments.py
inflating: xlsxwriter/compat_collections.py
inflating: xlsxwriter/compatibility.py
inflating: xlsxwriter/contenttypes.py
inflating: xlsxwriter/core.py
inflating: xlsxwriter/custom.py
inflating: xlsxwriter/drawing.py
inflating: xlsxwriter/format.py
inflating: xlsxwriter/__init__.py
inflating: xlsxwriter/packager.py
inflating: xlsxwriter/relationships.py
inflating: xlsxwriter/shape.py
inflating: xlsxwriter/sharedstrings.py
inflating: xlsxwriter/styles.py
inflating: xlsxwriter/table.py
inflating: xlsxwriter/theme.py
inflating: xlsxwriter/utility.py
inflating: xlsxwriter/vml.py
inflating: xlsxwriter/workbook.py
inflating: xlsxwriter/worksheet.py
inflating: xlsxwriter/xmlwriter.py
inflating: xlsxwriter/LICENSE.txt
inflating: Discover/bin/discoverer.jar
inflating: Discover/lib/ojdbc6.jar
inflating: Discover/conf/sample_dbsat.config
inflating: Discover/conf/sensitive_en.ini

Step 2: Configure the ‘dbsat.config’ file.
Next you have to configure the main config file (dbsat.config), available under the Discover/conf directory.

[oracle@dixitlab conf]$ pwd
/home/oracle/software/Discover/conf

[oracle@dixitlab conf]$ ls -ltrh
total 20K
-rwxrwxrwx. 1 oracle oinstall 13K Jan 16 22:58 sensitive_en.ini
-rwxrwxrwx. 1 oracle oinstall 2.4K Mar 1 22:12 dbsat.config

A few of the important parameters are given below.
vi dbsat.config

DB_HOSTNAME = localhost
DB_PORT = 1539
DB_SERVICE_NAME =tunedb
SENSITIVE_PATTERN_FILES = sensitive_en.ini >>>>> This parameter uses the sensitive_en.ini file for the English-language patterns; it contains 75 patterns,
e.g. CREDIT_CARD_NUMBER, CARD_SECURITY_PIN, MEDICAL_INFORMATION, SOCIAL_SECURITY_NUMBER etc.

Step 3: Run the discoverer against the database to collect the information.

[oracle@dixitlab software]$ $(dirname $(dirname $(readlink -f $(which javac))))    --- To check the JAVAHOME.
-bash: /usr/java/jdk1.8.0_131: is a directory
[oracle@dixitlab software]$ export JAVA_HOME=/usr/java/jdk1.8.0_131

[oracle@dixitlab conf]$ cd ../..
[oracle@dixitlab software]$ ./dbsat discover -c Discover/conf/sample_dbsat.config tunedb_data

Database Security Assessment Tool version 2.0.1 (December 2017)

This tool is intended to assist you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.

Enter username: system
Enter password:
Connection Successful- Retrying regarding "tunedb" as SID
DBSAT Discover ran successfully.
Calling /usr/bin/zip to encrypt the generated reports...

Enter password:
Verify password:
zip warning: tunedb_data_report.zip not found or empty
adding: tunedb_data_discover.html (deflated 88%)
adding: tunedb_data_discover.csv (deflated 84%)
Zip completed successfully.

The audit reports are created under the tool directory.
A sample report is attached with this post:

https://1drv.ms/f/s!Arob5fjpN041ga58isTgjF-wBPLI0A
tunedb_data – Oracle Database Security Risk Assessment

Hope It Helps
Prashant Dixit

Posted in Uncategorized | Tagged: , | Leave a Comment »

Active Data Guard (ADG) is included in the Oracle GoldenGate license on Enterprise Edition.

Posted by FatDBA on August 22, 2016

The license for Oracle GoldenGate includes a full use license for Oracle Active Data Guard, and a full use license for XStream in the Oracle Database.

Active Data Guard is a superset of Data Guard capabilities included with Oracle Enterprise Edition and can be purchased as the Active Data Guard Option for Oracle Database Enterprise Edition. It is included with every Oracle GoldenGate license, offering customers the ability to acquire the complete set of advanced Oracle replication capabilities with a single purchase.
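A quick way to confirm that a standby is actually running in Active Data Guard mode (open read-only while redo apply is on) is to check v$database on the standby; a minimal sketch:

SQL> select database_role, open_mode from v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY READ ONLY WITH APPLY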

Posted in Uncategorized | 1 Comment »

Linux YUM – Error: Cannot retrieve repository metadata (repomd.xml) for repository

Posted by FatDBA on January 6, 2016

Some time back I got an error message from YUM even though the installation and configuration had gone successfully:
every time I called any of the YUM commands, it failed with the message “Cannot retrieve repository metadata (repomd.xml) for repository”.

[root@Fatdba ~]# yum list
Loaded plugins: refresh-packagekit
Repository ol6_latest is listed more than once in the configuration
Repository ol6_ga_base is listed more than once in the configuration
ftp://obiftp/YUM_local/GDS/obi/6.1/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 – “The requested URL returned error: 502”
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base_el6_local. Please verify its path and try again

Solution:
Try the following sequence of steps to fix this problem.

$ sudo su -
# cd /etc/yum.repos.d
# rm -f *
# wget http://public-yum.oracle.com/public-yum-ol6.repo     <-- this step needs an Internet connection
# yum clean all
# yum makecache

Hope That Helps
Prashant Dixit

Posted in Uncategorized | Leave a Comment »

Sorry folks, I have been a little busy lately!!!

Posted by FatDBA on December 3, 2015

Sorry if I haven’t been blogging lately or posting up the next part of the story. Soon I’ll try to post more and try to help.

Thanks
Prashant “FatDBA” Dixit

Posted in Uncategorized | Leave a Comment »

Statistics in Oracle!

Posted by FatDBA on May 5, 2015

In this post I’ll try to summarize all sorts of statistics in Oracle. I strongly recommend reading the full article, as it contains information you may find valuable in understanding Oracle statistics.

#####################################
Database | Schema | Table | Index Statistics
#####################################

Gather Database Statistics:
=======================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
    ESTIMATE_PERCENT=>100,
    METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',
    CASCADE => TRUE,
    DEGREE => 4,
    OPTIONS => 'GATHER STALE',
    GATHER_SYS => TRUE,
    STATTAB => 'PROD_STATS');

CASCADE => TRUE: Gather statistics on the indexes as well. If not used, Oracle will decide whether to collect index statistics or not.
DEGREE => 4: Degree of parallelism.
OPTIONS:
       =>'GATHER': Gathers statistics on all objects in the schema.
       =>'GATHER AUTO': Oracle determines which objects need new statistics, and determines how to gather them.
       =>'GATHER STALE': Gathers statistics on stale objects; will return a list of stale objects.
       =>'GATHER EMPTY': Gathers statistics on objects that have no statistics; will return a list of those objects.
       =>'LIST AUTO': Returns a list of objects to be processed with GATHER AUTO.
       =>'LIST STALE': Returns a list of stale objects as determined by looking at the *_tab_modifications views.
       =>'LIST EMPTY': Returns a list of objects which currently have no statistics.
GATHER_SYS => TRUE: Gathers statistics on the objects owned by the SYS user.
STATTAB => 'PROD_STATS': Table that will save the current statistics; see the SAVE & IMPORT STATISTICS section, third from last in this post.

Note: All the above parameters are valid for all kinds of statistics (schema, table, ...) except GATHER_SYS.
Note: Skewed data means the data inside a column is not uniform: one or more particular values are repeated much more often than the other values in the same column. For example, take the gender column in an employee table with two values (male/female): in a construction or security services company, where most of the workforce is male, the gender column is likely to be skewed, but in an entity like a hospital, where the number of male and female employees is almost equal, the gender column is likely not to be skewed.
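To make that concrete, a quick frequency count exposes skew right away; a hypothetical EMPLOYEE table from such a company is assumed here:

SQL> select gender, count(*) from employee group by gender;

GENDER   COUNT(*)
------ ----------
M            9500
F             500   <-- heavily skewed; a histogram on this column helps the optimizer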

For faster execution:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What’s new here?
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE => Lets Oracle estimate the sample size, including for skewed values; it always gives excellent results (the DEFAULT).
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => histograms are not recommended to be gathered on all columns.
Removed "CASCADE => TRUE" => lets Oracle determine whether index statistics are to be collected or not.
Doubled the "DEGREE => 8" => but this depends on the number of CPUs on the machine and the CPU overhead you can accept while gathering DB statistics.

Starting from Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database that have stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To enable the Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
    END;
    /

In case you want to disable the Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
    END;
    /

To check the tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED, STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS or the (*_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS) views, you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh their parent table mon_mods_all$ with recent data from the SGA, or wait for the internal Oracle job that refreshes that table (once a day from 10g onwards [except for 10gR2], every 15 minutes in 10gR2, every 3 hours in 9i and earlier), or run one of the GATHER_*_STATS procedures, which flushes it as well.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Statistics:
======================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
ownname =>'SCOTT',
estimate_percent=>10,
degree=>1,
cascade=>TRUE,
options=>'GATHER STALE');

Gather TABLE Statistics:
====================
Check table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
    ownname => 'SCOTT',
    tabname => 'EMP',
    degree => 2,
    cascade => TRUE,
    METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /

CASCADE => TRUE: Gather statistics on the indexes as well. If not used, Oracle will determine whether to collect them or not.
DEGREE => 2: Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE: (DEFAULT) Auto-sets the sample size % for skew (distinct) values (more accurate and faster than setting a manual sample size).
METHOD_OPT => : For gathering histograms:
 FOR COLUMNS SIZE AUTO: You can specify one column instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT: Prevents deletion of histograms and collects them only for columns that already have histograms.
 FOR ALL COLUMNS: Collects histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY: Collects histograms for columns that have skewed values; skewness is tested first.
 FOR ALL INDEXED COLUMNS: Collects histograms only for columns that have indexes.

Note: Truncating a table will not update the table statistics, it will only reset the High Water Mark; you have to re-gather statistics on that table.
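A minimal sketch of that gotcha, using the standard SCOTT.EMP demo table:

SQL> truncate table scott.emp;
SQL> select num_rows, last_analyzed from dba_tab_statistics
     where owner='SCOTT' and table_name='EMP';            -- still shows the pre-truncate figures
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP');   -- only now will num_rows drop to 0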

Inside the “DBA BUNDLE” there is a script called “gather_stats.sh”; it will help you easily & safely gather statistics on a specific schema or table, plus it provides advanced features such as backing up/restoring the new statistics in case of a fallback.

Gather Index Statistics:
===================
SQL> exec DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT', indname => 'EMP_I',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);

####################
Fixed OBJECTS Statistics
####################

What are Fixed objects:
----------------------------
- Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
- If statistics are not gathered on fixed objects, the optimizer will use predefined default values for the statistics. These defaults may lead to inaccurate execution plans.
- Statistics on fixed objects are not gathered automatically, nor within database stats gathering.
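A quick sanity check to see whether fixed object stats have ever been gathered on your DB (a zero count means they never were):

SQL> select count(*) from dba_tab_statistics
     where object_type='FIXED TABLE' and last_analyzed is not null;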

How frequently should you gather stats on fixed objects?
-------------------------------------------------------
Only once, for a representative workload, unless you hit one of these cases:

- After a major database or application upgrade.
- After implementing a new module.
- After changing the database configuration, e.g. changing the size of memory pools (sga, pga, ...).
- Poor performance/hangs encountered while querying dynamic views, e.g. V$ views.

Note:
- It's recommended to gather the fixed object stats during peak hours (while the system is busy), or after the peak hours while the sessions are still connected (even if they are idle), to guarantee that the fixed object tables have been populated and the statistics represent the DB activity well.
- Also note that performance degradation may be experienced while the statistics are being gathered.
- Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:
———————————————

First, check the last analyzed date:
------------------------------------
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
       from dba_tab_statistics where table_name='X$KGLDP';
Second, export the current fixed stats to a table (in case you need to revert back):
-------------------------------------------------------------------------------------
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
       ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');

SQL> EXEC dbms_stats.export_fixed_objects_stats
       (stattab=>'STATS_TABLE_NAME', statown=>'OWNER');
Third, gather the fixed objects stats:
--------------------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;

Note:
In case you experience bad performance on fixed tables after gathering the new statistics:

SQL> exec dbms_stats.delete_fixed_objects_stats();
SQL> exec DBMS_STATS.import_fixed_objects_stats
       (stattab =>'STATS_TABLE_NAME', statown =>'OWNER');

#################
SYSTEM STATISTICS
#################

What are system statistics:
-------------------------------
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to
effectively cost each operation in an execution plan. Introduced in Oracle 9i.

Why gather system statistics:
----------------------------------------
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
-----------------------------------
This simulates a workload (not the real one, but a simulation) and will not collect full statistics. It's less accurate than WORKLOAD statistics, but if you can't capture the statistics during a typical workload, you can use noworkload statistics.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics:
-------------------------------
This gathers statistics during the current workload (which is supposed to be representative of the actual system I/O and CPU workload on the DB).
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends, after 1, 2, 3... hours or whatever, stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use a time interval (minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval', 60);

Check the system values collected:
——————————————-
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;

cpuspeedNW: Shows the noworkload CPU speed (average number of CPU cycles per second).
ioseektim:  The sum of seek time, latency time, and OS overhead time.
iotfrspeed: I/O transfer speed; tells the optimizer how fast the DB can read data in a single read request.
cpuspeed:   Stands for CPU speed during a workload statistics collection.
maxthr:     The maximum I/O throughput.
slavethr:   Average parallel slave I/O throughput.
sreadtim:   The Single Block Read Time statistic; the average time for a random single block read.
mreadtim:   The average time (seconds) for a sequential multiblock read.
mbrc:       The average multiblock read count in blocks.

Notes:
- When gathering NOWORKLOAD statistics, only the (cpuspeedNW, ioseektim, iotfrspeed) system statistics are gathered.
- The above values can be modified manually using the DBMS_STATS.SET_SYSTEM_STATS procedure.
- According to Oracle, collecting workload statistics doesn't impose additional overhead on your system.
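For example, a minimal sketch of setting two of those values by hand (the figures below are made up; use values measured on your own storage):

SQL> exec dbms_stats.set_system_stats('MBRC', 16);
SQL> exec dbms_stats.set_system_stats('SREADTIM', 5);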

Delete system statistics:
——————————
SQL> execute dbms_stats.delete_system_stats();

####################
Data Dictionary Statistics
####################

Facts:
-------
> Dictionary tables are the tables owned by SYS and residing in the system tablespace.
> Normally, data dictionary statistics in 9i are not required unless performance issues are detected.
> In 10g, statistics on the dictionary tables are maintained via the automatic statistics gathering job run during the nightly maintenance window.

If you choose to switch off that job for the application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using the procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');

When to gather Dictionary statistics:
———————————————
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
———————————————
SQL> select table_name, last_analyzed from dba_tables
where owner='SYS' and table_name like '%$' order by 2;

Gather Dictionary Statistics:
-----------------------------------
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
-> Will gather stats on 20% of the SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
-> Will gather stats on 100% of the SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(gather_sys=>TRUE);
-> Will gather stats on the whole DB + SYS schema.

################
Extended Statistics “11g onwards”
################

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on a column function:
====================================
If you run a query with a function like upper/lower on a column in the WHERE clause, the optimizer's estimates will be off and an index on that column will not be used:
SQL> select count(*) from EMP where lower(ename) = 'scott';

In order to make the optimizer work with such function-based terms, you need to gather extended stats:

1- Create extended stats:
>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SCOTT','EMP','(lower(ENAME))') from dual;

2- Gather histograms:
>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
----
* You can also do it in one step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
(ownname => 'SCOTT', tabname => 'EMP',
method_opt => 'for all columns size skewonly for columns (lower(ENAME))');
end;
/

To check the existence of extended statistics on a table:
----------------------------------------------------------------------
SQL> select extension_name, extension from dba_stat_extensions where owner='SCOTT' and table_name = 'EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB (LOWER("ENAME"))

Drop extended stats on a column function:
------------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on a column group (related columns):
=================================
Certain columns in a table that are used together in WHERE clauses are correlated, e.g. (country, state). You want to make the optimizer aware of the relationship between two or more such columns, instead of letting it use separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can determine the relation between the columns more accurately when they are used together in the WHERE clause of a SQL statement. E.g. columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the value of state_name is always influenced by country_id.
If extra columns are referenced in the WHERE clause along with the column group, the optimizer will still make use of the column group statistics.

1- Create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SH','CUSTOMERS','(country_id,cust_state_province)') from dual;
2- Re-gather stats/histograms for the table so the optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SH','customers', method_opt=> 'for all columns size skewonly');

OR

* You can also do it in one step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
(ownname => 'SH', tabname => 'CUSTOMERS',
method_opt => 'for all columns size skewonly for columns (country_id,cust_state_province)');
end;
/

Drop extended stats on a column group:
--------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats('SH','CUSTOMERS','(country_id,cust_state_province)');

#########
Histograms
#########

What are Histograms?
-----------------------------
> Histograms hold data about the values within a column in a table: the number of occurrences of a specific value/range.
> They are used by the CBO to decide how to access the data, e.g. an index fast full scan versus a table full scan.
> They are usually used against columns whose data repeats frequently, like a country or city column.
> Gathering histograms on a column holding only distinct values (e.g. a PK) is useless, because values are not repeated.
> Two types of histograms can be gathered:
- Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255 (e.g. the number of countries is always less than 254).
- Height-balanced histograms: similar to frequency histograms in their design, but used when the number of distinct values is > 254.
See an example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which by default doesn't collect histograms; it deletes them if you didn't use the METHOD_OPT parameter).
> Mainly gathered on foreign key columns and columns appearing in WHERE clauses.
> They help in SQL multi-table joins.
> Column histograms, like statistics, are stored in the data dictionary.
> If the application exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling histogram generation.

Cautions:
– Do not create them on Columns that are not being queried.
– Do not create them on every column of every table.
– Do not create them on the primary key column of a table.

Verify the existence of histograms:
———————————————
SQL> select column_name, histogram from dba_tab_col_statistics
where owner='SCOTT' and table_name='EMP';

Creating Histograms:
—————————
e.g.
SQL> Exec dbms_stats.gather_schema_stats
(ownname => 'SCOTT',
estimate_percent => dbms_stats.auto_sample_size,
method_opt => 'for all columns size auto',
degree => 7);

method_opt:
FOR COLUMNS SIZE AUTO => Fastest; you can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT => Prevents deletion of histograms and collects them only for columns that already have histograms.
FOR ALL COLUMNS => Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY => Collects histograms for columns that have skewed values.
FOR ALL INDEXED COLUMNS => Collects histograms for columns that have indexes.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the Histograms or not.

Check the existence of Histograms:
SQL> select column_name, count(*) from dba_tab_histograms
where OWNER='SCOTT' and table_name='EMP' group by column_name;

Drop Histograms: 11g
———————-
e.g.
SQL> Exec dbms_stats.delete_column_stats
(ownname=>'SH', tabname=>'SALES',
colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');

Stop gathering Histograms: 11g
------------------------------
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs
('SH', 'SALES', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 1 PROD_ID');
> It will continue to collect histograms as usual on all columns in the SALES table except for the PROD_ID column.

Drop Histograms: 10g
———————-
e.g.
SQL> exec dbms_stats.delete_column_stats(user,'T','USERNAME');

################################
Save/IMPORT & RESTORE STATISTICS:
################################
====================
Export /Import Statistics:
====================
In this way, statistics are exported into a table and then imported later from that table.

1- Create the STATS table:
---------------------------
SQL> Exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'prod_stats', tblspace => 'USERS');

2- Export statistics to the STATS table:
-----------------------------------------
For Database stats:
SQL> Exec dbms_stats.export_database_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'STATS_TABLE_OWNER');
For Table:
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats(ownname => 'SCOTT', tabname => 'EMP', stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats(ownname => 'SCOTT', indname => 'PK_EMP', stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats(ownname=>'SCOTT', tabname=>'EMP', colname=>'EMPNO', stattab=>'prod_stats');

3- Import statistics from the PROD_STATS table into the dictionary:
---------------------------------------------------------------------------------
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS(stattab => 'prod_stats', statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'prod_stats', statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_DICTIONARY_STATS(stattab => 'prod_stats', statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS(stattab => 'prod_stats', statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats(ownname => 'SCOTT', stattab => 'prod_stats', tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats(ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.import_COLUMN_stats(ownname=>'SCOTT', tabname=>'EMP', colname=>'EMPNO', stattab=>'prod_stats');

4- Drop the STATS table:
--------------------------
SQL> Exec dbms_stats.DROP_STAT_TABLE(stattab => 'prod_stats', ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are saved automatically in SYSAUX for 31 days.

Restore Dictionary stats as of timestamp:
------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1);

Restore Database stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1);

Restore SYSTEM stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1);

Restore FIXED OBJECTS stats as of timestamp:
----------------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1);

Restore SCHEMA stats as of timestamp:
---------------------------------------
SQL> Exec dbms_stats.restore_SCHEMA_stats
(ownname=>'SYSADM', AS_OF_TIMESTAMP=>sysdate-1);
OR:
SQL> Exec dbms_stats.restore_schema_stats
(ownname=>'SYSADM', AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS
(ownname=>'SYSADM', tabname=>'T01POHEAD', AS_OF_TIMESTAMP=>sysdate-1);

=========
Advanced:
=========

To Check current Stats history retention period (days):
——————————————————————-
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> select dbms_stats.get_stats_history_availability from dual;
To modify current Stats history retention period (days):
——————————————————————-
SQL> Exec dbms_stats.alter_stats_history_retention(60);

Purge statistics older than 10 days:
——————————————
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Procedure to claim space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space is not reclaimed automatically when you purge stats; you must reclaim it manually using the procedure below:

Check stats tables size:
>>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb,
segment_name, segment_type from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name, segment_type order by 1 asc
/

Check stats indexes size:
>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name, segment_type
from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name, segment_type order by 1 asc
/
Move stats tables within the same tablespace:
>>>>>
select 'alter table '||segment_name||' move tablespace SYSAUX;'
from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='TABLE'
/
Rebuild stats indexes:
>>>>>>
select 'alter index '||segment_name||' rebuild online;'
from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='INDEX'
/

Check for unusable indexes:
>>>>>
select di.index_name, di.index_type, di.status from
dba_indexes di, dba_tables dt
where di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc
/

Delete Statistics:
==============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS('SCOTT');
For Table stats and its indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats(ownname=>'SCOTT', tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats(ownname => 'SCOTT', indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats(ownname=>'SCOTT', tabname=>'EMP', colname=>'EMPNO');

Note: These deletions can be rolled back by restoring the stats using the relevant DBMS_STATS.RESTORE_* procedure.

Pending Statistics: “11g onwards”
===============
What are Pending Statistics?
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them system-wide, unless you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');
Note: Any statistics gathered on the database from now on will be marked PENDING, unless you change the previous parameter back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');

Gather statistics as you used to do:
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable the use of pending statistics in your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;
Then any SQL statement you run will use the new pending statistics...
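Before publishing, you can inspect what is sitting in the pending state; DBA_TAB_PENDING_STATS is the documented view for this, and the schema used here is the one from the example above:

SQL> select table_name, num_rows, last_analyzed
     from dba_tab_pending_stats where owner='SH';

If the tests go badly, you can discard the pending stats with DBMS_STATS.DELETE_PENDING_STATS('SH','SALES') instead of publishing them.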

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS();

Once you finish, don't forget to return the global PUBLISH preference to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
> If you don't, all newly gathered statistics on the database will be marked as PENDING, which may confuse you, or any DBA working on this DB who is not aware of that parameter change.

Posted in Uncategorized | Leave a Comment »

Huge Archive/Redo Generation in System!

Posted by FatDBA on April 14, 2015

On one of our production databases we found huge numbers of archives being generated, which in turn flooded the entire system and started hampering its performance and availability.
Below are the stats which clearly show the hourly archive generation in the system, which had risen from an average of around 25 archives/day to a maximum of 609 redo files a day.

DB DATE       TOTAL  H00 H01 H02 H03 H04 H05 H06 H07 H08 H09 H10 H11 H12 H13 H14 H15 H16 H17 H18 H19 H20 H21 H22 H23
— ———- —— — — — — — — — — — — — — — — — — — — — — — — — —
1 2015-04-04      5   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   0   2   0   1
1 2015-04-05     19   0   0   2   0   0   3   1   0   0   1   0   2   1   0   2   0   1   2   0   1   0   2   1   0
1 2015-04-06     27   1   0   2   0   1   2   0   1   0   1   1   2   1   1   2   1   2   2   0   1   1   2   2   1
1 2015-04-07     33   0   1   2   0   1   2   0   1   1   0   1   2   1   0   2   1   1   3   1   1   2   4   3   3
1 2015-04-08    136   3   3   5   3   3   5   4   4   4   4   5   7   5   6   8   7   7   8   7   7   7   9   8   7
1 2015-04-09    284   8   9  10   9   9  11   9   9  10  11  11  14  12  12  14  13  12  14  13  17  14  15  15  13
1 2015-04-10    345  14  14  16  14  13  14  13  12  13  13  13  16  13  14  15  14  16  16  15  14  15  17  16  15
1 2015-04-11    428  16  16  17  16  16  17  17  16  18  17  18  19  18  18  19  18  18  20  18  18  20  20  19  19
1 2015-04-12    609  19  19  21  21  21  22  21  21  21  21  20  23  22  29  30  31  32  34  31  30  30  31  30  29
1 2015-04-13    277  25  24  25  25  25  26  25  25  25  25  24   3   0   0   0   0   0   0   0   0   0   0   0   0

During the investigation we found two sessions, with SIDs 3 and 13, generating a huge amount of redo data during the period.

select s.username, s.osuser, s.status, s.sql_id, sr.* from
(select sid, round(value/1024/1024) as "RedoSize(MB)"
from v$statname sn, v$sesstat ss
where sn.name = 'redo size'
and ss.statistic# = sn.statistic#
order by value desc) sr,
v$session s
where sr.sid = s.sid
and rownum <= 10

USERNAME   OSUSER                         STATUS       SQL_ID               SID RedoSize(MB)
———- —————————— ———— ————- ———- ————
oracle                         ACTIVE                              1            0
testuser    testadm                         INACTIVE     apnx8grhadf80          2            0
testuser    testadm                         ACTIVE                             3        90037
testuser    testadm                         INACTIVE                            6            0
testuser    testadm                         INACTIVE                            7            0
testuser    testadm                         INACTIVE     apnx8grhadf80          8            0
testuser    testadm                         INACTIVE                            9            0
testuser    testadm                         INACTIVE                           10            8
testuser    testadm                         ACTIVE       14f48saw6n9d1         13       189923
testuser    testadm                         INACTIVE                           15            0

Let's investigate and jump in deep!
Alright, the first step should be collecting details of the objects which are changing frequently and altering db blocks. The script below will help achieve that.

prompt To show all segment level statistics in one screen
prompt
set lines 140 pages 100
col owner format A12
col object_name format A30
col statistic_name format A30
col object_type format A10
col value format 99999999999
col perc format 99.99
undef statistic_name
break on statistic_name
with segstats as (
select * from (
select inst_id, owner, object_name, object_type, value,
rank() over (partition by inst_id, statistic_name order by value desc) rnk, statistic_name
from gv$segment_statistics
where value > 0 and statistic_name like '%'||'&&statistic_name'||'%'
) where rnk < 31
),
sumstats as (select inst_id, statistic_name, sum(value) sum_value from gv$segment_statistics group by statistic_name, inst_id)
select a.inst_id, a.statistic_name, a.owner, a.object_name, a.object_type, a.value, (a.value/b.sum_value)*100 perc
from segstats a, sumstats b
where a.statistic_name = b.statistic_name
and a.inst_id = b.inst_id
order by a.statistic_name, a.value desc
/

INST_ID|STATISTIC_NAME                |OWNER       |OBJECT_NAME                   |OBJECT_TYP|       VALUE|  PERC
———-|——————————|————|——————————|———-|————|——
1|db block changes              |testuser     |PZ214                          |TABLE     |  2454298704| 71.83
1                               |testuser     |T94                           |TABLE     |    23416784|   .69
1                               |testuser     |PZ978                          |TABLE     |    19604784|   .57
1                               |testuser     |PZ919                          |TABLE     |    18204160|   .53
1                               |testuser     |T85                           |TABLE     |    15616624|   .46
1                               |testuser     |IH94                          |INDEX     |    14927984|   .44
1                               |testuser     |IPZ978                         |INDEX     |    14567840|   .43
1                               |testuser     |I296_1201032811_1             |INDEX     |    14219072|   .42
1                               |testuser     |PZ796                          |TABLE     |    13881712|   .41
1                               |testuser     |H94                           |TABLE     |    13818416|   .40
1                               |testuser     |I312_3_1                      |INDEX     |    12247776|   .36
1                               |testuser     |I312_6_1                      |INDEX     |    11906992|   .35
1                               |testuser     |I312_7_1                      |INDEX     |    11846864|   .35
1                               |testuser     |IPZ412                         |INDEX     |    11841360|   .35
1                               |testuser     |I178_1201032811_1             |INDEX     |    11618160|   .34
1                               |testuser     |PZ972                          |TABLE     |    11611392|   .34
1                               |testuser     |H312                          |TABLE     |    11312656|   .33
1                               |testuser     |IPZ796                         |INDEX     |    11292912|   .33
1                               |testuser     |I188_1101083000_1             |INDEX     |     9772816|   .29
1                               |testuser     |PZ412                          |TABLE     |     9646864|   .28
1                               |testuser     |IH312                         |INDEX     |     9040944|   .26
1                               |testuser     |I189_1201032712_1             |INDEX     |     8739376|   .26
1                               |testuser     |SYS_IL0000077814C00044$$      |INDEX     |     8680976|   .25
1                               |testuser     |I119_1000727019_1             |INDEX     |     8629808|   .25
1                               |testuser     |I119_1101082009_1             |INDEX     |     8561520|   .25
1                               |testuser     |I312_1705081004_1             |INDEX     |     8536656|   .25
1                               |testuser     |I216_1201032712_1             |INDEX     |     8306016|   .24
1                               |testuser     |I119_1404062203_1             |INDEX     |     8289520|   .24
1                               |testuser     |PZ988                          |TABLE     |     8156352|   .24
1                               |testuser     |I85_1703082001_1              |INDEX     |     8126528|   .24

In this scenario the LogMiner utility is of great help. Below is the method to immediately mine an archived log with ease.

SQL> begin
sys.dbms_logmnr.ADD_LOGFILE ('/vol2/oracle/arc/testdb/1_11412_833285103.arc');
end;
/
begin
sys.dbms_logmnr.START_LOGMNR;
end;
/
PL/SQL procedure successfully completed.

I was using a hard-coded 512 bytes for the redo block size; you can use the following SQL statement to identify the actual redo block size.

SQL> select max(lebsz) from x$kccle;

MAX(LEBSZ)
———-
512

1 row selected.

I always prefer to create a table by querying the data from the v$logmnr_contents dynamic performance view, rather than accessing the view directly, which always makes things hazardous.

SQL> CREATE TABLE redo_analysis_212_2 nologging AS
SELECT data_obj#, oper,
rbablk * le.bsz + rbabyte curpos,
lead(rbablk*le.bsz+rbabyte,1,0) over (order by rbasqn, rbablk, rbabyte) nextpos
FROM
( SELECT DISTINCT data_obj#, operation oper, rbasqn, rbablk, rbabyte
FROM v$logmnr_contents
ORDER BY rbasqn, rbablk, rbabyte
) ,
(SELECT MAX(lebsz) bsz FROM x$kccle ) le
/

Table created.

Next you can query the table to get the mining details.

set lines 120 pages 40
column data_obj# format 9999999999
column oper format A15
column object_name format A60
column total_redo format 99999999999999
compute sum label 'Total Redo size' of total_redo on report
break on report
spool /tmp/redo_212_2.lst
select data_obj#, oper, obj_name, sum(redosize) total_redo
from
(
select data_obj#, oper, obj.name obj_name, nextpos-curpos-1 redosize
from redo_analysis_212_2 redo1, sys.obj$ obj
where (redo1.data_obj# = obj.obj# (+))
and nextpos != 0 -- for the boundary condition
and redo1.data_obj# != 0
union all
select data_obj#, oper, 'internal ', nextpos-curpos redosize
from redo_analysis_212_2 redo1
where redo1.data_obj# = 0
and nextpos != 0
)
group by data_obj#, oper, obj_name
order by 4
/

DATA_OBJ#|OPER           |OBJ_NAME                      |     TOTAL_REDO
———–|—————|——————————|—————
78236|UPDATE         |PZ716                          |         132584
78227|INTERNAL       |PZ214                          |         142861
738603|DIRECT INSERT  |WRH$_ACTIVE_SESSION_HISTORY   |         170764
78546|INSERT         |PZ412                          |         179476
78101|UPDATE         |PZ989                          |         191276
78546|LOB_WRITE      |PZ412                          |         220850
78546|UPDATE         |PZ412                          |         314460
78038|UPDATE         |PZ972                          |         322060
77814|UPDATE         |PZ919                          |         375863
78227|LOB_WRITE      |PZ214                          |         399417
77814|LOB_WRITE      |PZ919                          |         407572
0|START          |internal                      |         760604
0|COMMIT         |internal                      |        2654020
78227|UPDATE         |PZ214                          |      452580201
|               |                              |—————
Total Redo |               |                              |      461746150

259 rows selected.

SQL> select OWNER,OBJECT_NAME,OBJECT_TYPE,CREATED,LAST_DDL_TIME,STATUS,TIMESTAMP from dba_objects where OBJECT_ID='78227';
rows will be truncated

OWNER       |OBJECT_NAME                                                 |OBJECT_TYP|CREATED    ||STATUS
————|————————————————————|———-|———–|———–|———–
testuser     |PZ214                                                        |TABLE     |04-DEC-2013||VALID

1 row selected.

The mining results make it clearly visible that, out of the 460 MB archivelog that was mined, 450 MB was occupied by UPDATEs on the object PZ214. Now we had enough proof in our hands to share with the application/development teams to investigate the issue.

After a parallel investigation, we ultimately found that a feature enabled at the application end had caused this redo swamp in the system; it was later rectified, which fixed the issue.

DB DATE       TOTAL  H00 H01 H02 H03 H04 H05 H06 H07 H08 H09 H10 H11 H12 H13 H14 H15 H16 H17 H18 H19 H20 H21 H22 H23
— ———- —— — — — — — — — — — — — — — — — — — — — — — — — —
1 2015-04-04      5   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   0   2   0   1
1 2015-04-05     19   0   0   2   0   0   3   1   0   0   1   0   2   1   0   2   0   1   2   0   1   0   2   1   0
1 2015-04-06     27   1   0   2   0   1   2   0   1   0   1   1   2   1   1   2   1   2   2   0   1   1   2   2   1
1 2015-04-07     33   0   1   2   0   1   2   0   1   1   0   1   2   1   0   2   1   1   3   1   1   2   4   3   3
1 2015-04-08    136   3   3   5   3   3   5   4   4   4   4   5   7   5   6   8   7   7   8   7   7   7   9   8   7
1 2015-04-09    284   8   9  10   9   9  11   9   9  10  11  11  14  12  12  14  13  12  14  13  17  14  15  15  13
1 2015-04-10    345  14  14  16  14  13  14  13  12  13  13  13  16  13  14  15  14  16  16  15  14  15  17  16  15
1 2015-04-11    428  16  16  17  16  16  17  17  16  18  17  18  19  18  18  19  18  18  20  18  18  20  20  19  19
1 2015-04-12    609  19  19  21  21  21  22  21  21  21  21  20  23  22  29  30  31  32  34  31  30  30  31  30  29
1 2015-04-13    371  25  24  25  25  25  26  25  25  25  25  24  28  26  19  10   1   2   3   2   1   1   2   2   0
1 2015-04-14      7   1   0   2   0   1   2   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

Posted in Uncategorized | 1 Comment »

Opatch Failed error code 73: OUI-67073: UtilSession failed: Prerequisite check “CheckActiveFilesAndExecutables”

Posted by FatDBA on March 2, 2015

Issue:
Upgrade error from 11.2.0.2 to 11.2.0.4

Error Description:
Oracle SPU/CPU patch deployment using OPatch failed with the following error message.
Following executables are active :
/u01/app/oracle/product/11.2.0.2/home/lib/libclntsh.so.11.1
UtilSession failed: Prerequisite check “CheckActiveFilesAndExecutables” failed.
Log file location: /u01/app/oracle/product/11.2.0.2/home/cfgtoollogs/opatch/opatch2014-9-14_12-10-00PM.log

OPatch failed with error code 73

Cause:
Some files are locked or some processes are still running while the patch is being applied; these must be stopped before OPatch can proceed.

Full Error in log:

[Mar 1, 2015 4:19:20 PM] Finish fuser command /sbin/fuser /u01/app/oracle/product/11.2.0.2/home/lib/libclntsh.so.11.1 at Fri Nov 22 14:10:20 CET 2014
[Mar 1, 2015 4:19:20 PM] Following executables are active:
/u01/app/oracle/product/11.2.0.2/home/lib/libclntsh.so.11.1
[Mar 1, 2015 4:19:20 PM] Prerequisite check “CheckActiveFilesAndExecutables” failed.
The details are:
Following executables are active:
/u01/app/oracle/product/11.2.0.2/home/lib/libclntsh.so.11.1
[Mar 1, 2015 4:19:20 PM] OUI-67073:UtilSession failed: Prerequisite check “CheckActiveFilesAndExecutables” failed.
[Mar 1, 2015 4:19:20 PM] Finishing UtilSession at Fri Nov 22 14:10:20 CET 2014

Solution Description
==================================

This error is simple to fix. First, make sure the database and the listener are down.

Solution 1:
Some processes are still running. To find them, try:
$ ps -ef | grep db_name
then kill each of the listed processes, e.g.:
$ kill -9 1196

Solution 2:
Check which process is locking the library file using the command below, and kill it:

$ /sbin/fuser /u01/app/oracle/product/11.2.0.2/home/lib/libclntsh.so.11.1
/u01/app/oracle/product/11.2.0.2/home/lib/libclntsh.so.11.1: 1196m 2215m

$ kill -9 1196

Now run opatch apply again; it should complete without any issues this time. A combined sketch of the whole cleanup follows.
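
For reference, a minimal shell sketch of the entire cleanup, assuming the same Oracle home path as above (fuser -k sends SIGKILL to every process holding the file open, so run it only after the database and listener are down):

$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/home
$ /sbin/fuser -k $ORACLE_HOME/lib/libclntsh.so.11.1    # kill whatever still holds the library open
$ /sbin/fuser $ORACLE_HOME/lib/libclntsh.so.11.1       # verify nothing is listed anymore
$ $ORACLE_HOME/OPatch/opatch apply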


Hope That Helps
Prashant Dixit

Posted in Uncategorized | Tagged: , , | Leave a Comment »

Using _ALLOW_RESETLOGS_CORRUPTION in case of corruption: How to recover & open the database?

Posted by FatDBA on March 2, 2015

Recently we found a TEST RAC database down and unavailable; we tried to start it but received communication errors that left us clueless. With an urgent POC activity scheduled on this database, we started investigating the root cause of the error right away.
ORA-03113: end-of-file on communication channel

[oracle@testdbdixit ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Thu Feb 26 01:04:23 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.
SQL> startup
ORA-03113: end-of-file on communication channel

We then tried to open the database in MOUNT mode, and it reached that phase easily without any error.

SQL> STARTUP MOUNT;
ORACLE instance started.

Total System Global Area 1.6034E+10 bytes
Fixed Size 2269072 bytes
Variable Size 2449473648 bytes
Database Buffers 1.3556E+10 bytes
Redo Buffers 26480640 bytes
Database mounted.

But recovery of any kind failed: the database was running in NOARCHIVELOG mode, and being a test instance, there were no RMAN backups configured either … #TotalDisaster 😦 😦

SQL> ALTER DATABASE RECOVER DATABASE UNTIL CANCEL;
ALTER DATABASE RECOVER DATABASE UNTIL CANCEL
*
ERROR at line 1:
ORA-00279: change 7311130 generated at 02/25/2015 22:00:18 needed for thread 2
ORA-00289: suggestion : +FRA
ORA-15173: entry 'ARCHIVELOG' does not exist in directory 'DIXITDB'
ORA-00280: change 7311130 for thread 2 is in sequence #207

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 233
Current log sequence 234

The cancel-based incomplete recovery itself went through, but it warned that the SYSTEM datafile would still be inconsistent if we tried to open the database in RESETLOGS mode.

SQL> ALTER DATABASE RECOVER CANCEL;
ALTER DATABASE RECOVER CANCEL
*
ERROR at line 1:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DATA/DIXITDB/datafile/system.256.871197447'

As expected, OPEN RESETLOGS failed too.

SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DATA/DIXITDB/datafile/system.256.871197447'

Resolution:
====================

*Note: Underscore (hidden/undocumented) parameters should only be used with the consent of Oracle Support, and should always be tried and tested in a sandbox environment before being applied in production.

There is a hidden parameter, _ALLOW_RESETLOGS_CORRUPTION=TRUE, which allows us to open the database even though it is not properly recovered.
ALTER SYSTEM SET "_allow_resetlogs_corruption"=TRUE SCOPE=SPFILE;
Tip: Also change undo_management to "MANUAL".

After the two changes in the spfile you can open the database with:

sqlplus "/ as sysdba"
startup force

Note: There is no 100% guarantee that setting _ALLOW_RESETLOGS_CORRUPTION=TRUE will open the database. However, once the database does open, we must immediately rebuild it: (1) perform a full database export, (2) create a brand new, separate database, and (3) import the fresh export dump into it. This can be tedious and time consuming, but once the new database is up we expect minimal, perhaps even zero, data loss. Before trying this option, ensure you have a good and valid backup of the current database. A rough sketch of the rebuild follows.
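
A minimal Data Pump sketch of that rebuild, where the directory object, dump file name, and credentials are all hypothetical:

$ expdp system FULL=Y DIRECTORY=dump_dir DUMPFILE=full_db.dmp LOGFILE=full_exp.log
# create the brand new database (e.g. with DBCA), then load the dump into it:
$ impdp system FULL=Y DIRECTORY=dump_dir DUMPFILE=full_db.dmp LOGFILE=full_imp.log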

Previous Settings:
SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

SQL> alter system set undo_management=manual scope=spfile;
System altered.

SQL> ALTER SYSTEM SET "_allow_resetlogs_corruption"=TRUE SCOPE=SPFILE;
System altered.

SQL> shut immediate;

And after setting all the requisite parameters, we finally saw the 'Database opened' message at the SQL prompt … 🙂 🙂

SQL> startup force;
ORACLE instance started.

Total System Global Area 1.6034E+10 bytes
Fixed Size 2269072 bytes
Variable Size 2449473648 bytes
Database Buffers 1.3556E+10 bytes
Redo Buffers 26480640 bytes
Database mounted.
Database opened.

SQL> ALTER DATABASE OPEN RESETLOGS;
Database altered.

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 1
Current log sequence 2

SQL> alter system switch logfile;
System altered.

And a new incarnation of the database was created once it was opened in RESETLOGS mode.

SQL> select INCARNATION#,RESETLOGS_TIME,STATUS, RESETLOGS_ID from v$database_incarnation;

INCARNATION# RESETLOGS STATUS  RESETLOGS_ID
------------ --------- ------- ------------
           1 24-AUG-13 PARENT     824297850
           2 09-FEB-15 PARENT     871197521
           3 26-FEB-15 CURRENT    872646322

In Short:
=====================
1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) Startup mount.
3) Recover the database.
4) Alter database open resetlogs.
5) Set undo_management to "MANUAL" in the init.ora file.
6) Startup the database.
7) Create a new undo tablespace (changing UNDO_MANAGEMENT back to AUTO is a strict need for prod databases; see the sketch below).
8) Change undo_management to "AUTO" and undo_tablespace to the new tablespace.
9) Bounce the database.
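
A minimal SQL sketch of steps 7 to 9, where the tablespace name, datafile location, and size are hypothetical:

SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE '+DATA' SIZE 2G AUTOEXTEND ON;
SQL> ALTER SYSTEM SET undo_management=AUTO SCOPE=SPFILE;
SQL> ALTER SYSTEM SET undo_tablespace=UNDOTBS2 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP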

Hope That Helps!
Prashant Dixit

Posted in Uncategorized | Tagged: , | 6 Comments »

 