Tales From A Lazy Fat DBA

Fan of Oracle DB & Performance, PostgreSQL & Cassandra … \,,/


Archive for the ‘Uncategorized’ Category

Could not send replication command “TIMELINE_HISTORY”: ERROR: could not open file pg_wal/00xxxx.history

Posted by FatDBA on October 20, 2020

Hi All,

Have you ever encountered a situation where the timeline history (TIMELINE_HISTORY) file was deleted by mistake, or removed on purpose because it looked old, and a new backup then failed? Quite a few replication and backup-tool (Barman & BART) issues can surface when that file goes missing from the pg_wal directory. In this post I would like to discuss a problem we encountered while taking a BART backup on EDB version 10.

These timeline history files are quite important: pg_basebackup uses them to follow the latest timeline present on the primary, just as it can follow new timelines appearing in an archive WAL directory. In short, a history file records which timeline the cluster branched off from and when. These files are necessary for the system to pick the right WAL segment files when recovering from an archive that contains multiple timelines, so it is important to keep them in the pg_wal directory.
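As a side note, if you want to confirm which timeline your cluster is currently on, and therefore which .history file pg_basebackup will ask for, pg_controldata can tell you. A minimal sketch, assuming /edb/as10/as10/data (the data directory used later in this post) is yours:

pg_controldata /edb/as10/as10/data | grep -i timeline
# A "Latest checkpoint's TimeLineID" of 2 corresponds to the pg_wal/00000002.history file requested below.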


[enterprisedb@fatdba ~]$ bart -c /usr/edb-bart-1.1/etc/bart.cfg BACKUP -s edbserver --backup-name MAINFULLBKP_10-13-20
INFO:  DebugTarget - getVar(checkDiskSpace.bytesAvailable)
INFO:  creating full backup using pg_basebackup for server 'edbserver'
INFO:  creating backup for server 'edbserver'
INFO:  backup identifier: '1602788909136'
ERROR: backup failed for server 'edbserver' 

pg_basebackup: could not send replication command "TIMELINE_HISTORY": ERROR:  could not open file "pg_wal/00000002.history": No such file or directory 

1633701/1633701 kB (100%), 2/2 tablespaces
pg_basebackup: child process exited with error 1
pg_basebackup: removing data directory "/edbbackup/edbserver/1602788909136"
 

The file is indeed not there in the pg_wal directory.


[enterprisedb@fatdba ~]$ cd /edb/as10/as10/data/pg_wal/
[enterprisedb@fatdba pg_wal]$ ls
0000000200000005000000EA  0000000200000005000000EB.00000060.backup  0000000200000005000000ED  archive_status
0000000200000005000000EB  0000000200000005000000EC                  0000000200000005000000EE
 

If the file has gone missing or been moved, you can create a brand new empty file; the respective utility will use it and populate it with metadata soon after. So, to quickly resolve the issue, let's create one.

 [enterprisedb@fatdba pg_wal]$ touch 00000002.history
[enterprisedb@fatdba pg_wal]$
[enterprisedb@fatdba pg_wal]$ ls *hist*
00000002.history 

Let’s try to take the backup once again.


[enterprisedb@fatdba pg_wal]$ bart -c /usr/edb-bart-1.1/etc/bart.cfg BACKUP -s edbserver --backup-name MAINFULLBKP_10-13-20
INFO:  DebugTarget - getVar(checkDiskSpace.bytesAvailable)
INFO:  creating full backup using pg_basebackup for server 'edbserver'
INFO:  creating backup for server 'edbserver'
INFO:  backup identifier: '1602789425665'
INFO:  backup completed successfully
INFO:
BART VERSION: 2.5.5
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1602789425665
BACKUP NAME: MAINFULLBKP_10-13-20
BACKUP PARENT: none
BACKUP LOCATION: /edbbackup/edbserver/1602789425665
BACKUP SIZE: 1.57 GB
BACKUP FORMAT: tar
BACKUP TIMEZONE: Europe/Berlin
XLOG METHOD: stream
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 1
 Oid     Name      Location
 42250   UNKNOWN   /edb/as10/as10/data_test/pg_tblspc

START WAL LOCATION: 0000000200000005000000ED
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2020-10-15 21:17:05 CEST
STOP TIME: 2020-10-15 21:17:38 CEST
TOTAL DURATION: 33 sec(s)


[enterprisedb@fatdba pg_wal]$  bart -c /usr/edb-bart-1.1/etc/bart.cfg SHOW-BACKUPS
 SERVER NAME   BACKUP ID       BACKUP NAME            BACKUP PARENT   BACKUP TIME                BACKUP SIZE   WAL(s) SIZE   WAL FILES   STATUS

 edbserver     1602789425665   MAINFULLBKP_10-13-20   none            2020-10-15 21:17:38 CEST   1.57 GB       16.00 MB      1           active
 

And it worked.


Hope It Helped!
Prashant Dixit


pg_dump: aborting because of server version mismatch — pg_restore: [archiver] unsupported version (1.13) in file header

Posted by FatDBA on October 16, 2020

Hi Guys,

First of all, this isn't really a bug, but something you should always take care of when you have multiple PostgreSQL versions or flavors running on the same host; otherwise you might encounter some really strange errors. A few examples: you have community PostgreSQL and EDB running side by side, or you have two different installations (versions) on the same server.

This can cause some basic commands to fail. In the example below, the pg_dump utility threw a "server version mismatch" error, and it was all because this is a POC box with more than three different PostgreSQL installations (community PostgreSQL and EDB), all of different versions. Two of them are EDB installations with the same user 'enterprisedb' and one is the community version. So you either set your bash_profile smartly, or you try something like what I will discuss next.
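For completeness, here is what the bash_profile route could look like. A minimal sketch, assuming you want the community PostgreSQL 11 binaries under /usr/pgsql-11 (a path that shows up in the find output later in this post) to win over the other installations:

# ~/.bash_profile (sketch): put the desired PostgreSQL bin directory first in PATH
export PATH=/usr/pgsql-11/bin:$PATH
# source ~/.bash_profile (or re-login), then 'which pg_dump' should point to the intended binary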

Okay, so this was the exact error what I have got when I tried to call pg_dump.


-bash-4.1$ pg_dump -p 6001 -U postgres -t classes > /tmp/classestable_psql_commdb_dump.dmp
pg_dump: server version: 11.9; pg_dump version: 8.4.20
pg_dump: aborting because of server version mismatch
 

There are multiple errors you might encounter because of multiple installations on the same host.
Below, pg_restore failed with an 'unsupported version' error.


-bash-4.1$ pg_restore -d postgresqlnew -h 10.0.0.144 -U postgres /tmp/commpsql_fulldbbkp.dmp
pg_restore: [archiver] unsupported version (1.13) in file header
 

This seems strange at first, because the pg_restore and psql utilities report exactly the same version; the real issue is that the dump was written by a much newer pg_dump, whose archive format (1.13) the old 8.4 pg_restore does not understand.


-bash-4.1$ pg_restore --version
pg_restore (PostgreSQL) 8.4.20
-bash-4.1$ psql --version
psql (PostgreSQL) 8.4.20
 

Okay, let's find how many pg_dump binaries exist on this host and where they are located.


-bash-4.1$ find / -name pg_dump -type f 2>/dev/null
/opt/edb/as10_BACKP_10042020SATIND/bin/pg_dump
/edb/as10/as10/bin/pg_dump
/usr/bin/pg_dump
/usr/pgsql-11/bin/pg_dump
/usr/edb/as10/bin/pg_dump
/usr/edb/as11/bin/pg_dump
 

So, we have several different pg_dump binaries here, all in different locations, and I know which version I would like to call. I can create a symbolic link to get rid of this error and avoid typing the full/absolute path every time.


-bash-4.1$ sudo ln -s /usr/pgsql-11/bin/pg_dump /usr/bin/pg_dump --force
[sudo] password for postgres:
 

Great, it's done. You can do the same for pg_restore too. Now let's run the same command again to take a backup of the single table named 'classes'.


-bash-4.1$ pg_dump -p 6001 -U postgres -t classes > /tmp/classestable_psql_commdb_dump.dmp

-bash-4.1$ ls -ll /tmp/classes*
-rw-r--r--. 1 postgres postgres 915 Oct 15 11:41 /tmp/classestable_psql_commdb_dump.dmp
-bash-4.1$
 

And it worked as expected.
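For pg_restore, the equivalent symlink would follow the same approach as above (a sketch, reusing the same paths):

sudo ln -s /usr/pgsql-11/bin/pg_restore /usr/bin/pg_restore --force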

Hope It Helped!
Prashant Dixit


EDB PostgreSQL BART Error: tablespace_path is not set

Posted by FatDBA on October 16, 2020

Today I would like to discuss an issue we faced while doing a BART restore of an EDB PostgreSQL 11 instance. This was a new system in the realization phase (before delivery to the customer). During one of the tests the restore failed with a message about the 'tablespace_path' value. I knew I had a tablespace in this system, but I initially thought BART would take care of it on its own; that was not the case.

Below is the error I encountered during the test.


[enterprisedb@fatdba archived_wals]$ bart -c /usr/edb-bart-1.1/etc/bart.cfg RESTORE -s edbserver -i 1602187005158 -p /edb/as10/as10/data/
INFO:  restoring backup '1602187005158' of server 'edbserver'
ERROR: "tablespace_path" is not set
[enterprisedb@fatdba archived_wals]$
 

Okay, let's first check the tablespace details; we can use the \db+ meta-command to get that information, which gives us the tablespace name, location and size.
Next, let's go inside the pg_tblspc directory and see what is there.
Note: The last two entries are the default tablespaces, so no need to worry about them.


enterprisedb=# \db+
                                              List of tablespaces
    Name    |    Owner     |           Location           | Access privileges | Options |  Size   | Description
------------+--------------+------------------------------+-------------------+---------+---------+-------------
 newtblspc  | dixit        | /home/enterprisedb/newtblspc |                   |         | 52 kB   |
 pg_default | enterprisedb |                              |                   |         | 1362 MB |
 pg_global  | enterprisedb |                              |                   |         | 774 kB  |
(3 rows)

[enterprisedb@fatdba pg_tblspc]$ pwd
/edb/as10/as10/data_test/pg_tblspc
[enterprisedb@fatdba pg_tblspc]$ ls -ltrh
total 4.0K
lrwxrwxrwx. 1 enterprisedb enterprisedb   28 May  5 17:58 42250 -> /home/enterprisedb/newtblspc
drwx------. 3 enterprisedb enterprisedb 4.0K Oct  8 21:56 PG_10_201707211
 

Okay, so we have a soft link for the tablespace under the pg_tblspc directory inside the data directory, with OID 42250.
Now that we have all the information, it is time to add the requisite parameter to the bart.cfg file so that tablespaces are considered, as shown below.
Format: tablespace_path = OID_1=tablespace_path_1;OID_2=tablespace_path_2 …
Example: tablespace_path = 42250=/edb/as10/as10/data_test/pg_tblspc
Note: the directory specified in tablespace_path should exist and be empty at the time you perform the BART RESTORE operation.

Now let's modify our BART configuration file; it will look something like the below with the 'tablespace_path' option set.


[EDBSERVER]
host = 10.0.0.144
port = 5444
user = enterprisedb
backup_name = mktg_%year-%month-%dayT%hour:%minute
cluster_owner = enterprisedb
description = "EDB PROD server"
archive_command='scp %p enterprisedb@10.0.0.144:/edbbackup/edbserver/archived_wals/%f'
tablespace_path = 42250=/edb/as10/as10/data_test/pg_tblspc
allow_incremental_backups=enabled
 

All set for the restore now, let’s try that.


[enterprisedb@fatdba pg_tblspc]$ bart -c /usr/edb-bart-1.1/etc/bart.cfg RESTORE -s edbserver -i 1602187005158 -p /edb/as10/as10/data/
INFO:  restoring backup '1602187005158' of server 'edbserver'
WARNING: tablespace restore path is not empty (/edb/as10/as10/data_test/pg_tblspc), restoring anyway
INFO:  base backup restored
INFO:  writing recovery.conf file
INFO:  WAL file(s) will be streamed from the BART host
INFO:  archiving is disabled
INFO:  permissions set on $PGDATA
INFO:  restore completed successfully
[enterprisedb@fatdba pg_tblspc]$
[enterprisedb@fatdba pg_tblspc]$
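As a quick sanity check after the restore (a small sketch, using the same paths as above), you can confirm that the tablespace soft link and the recovery.conf written by BART are in place before starting the instance:

ls -l /edb/as10/as10/data/pg_tblspc/
cat /edb/as10/as10/data/recovery.conf    # controls how WALs are streamed from the BART host at startup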
 

This is fixed.

Hope That Helped!
Prashant Dixit


Datastax Certified Cassandra Administrator, some tips & more

Posted by FatDBA on August 21, 2020

Hi Guys,

With the sharp rise of NoSQL databases, many organizations are making the transition from traditional databases to distributed, high-performance databases like Cassandra, which has become one of Apache's most popular projects. Though there are multiple NoSQL databases available in the market, few offer the same combination of features: peer-to-peer architecture, high availability and fault tolerance, a column-based model, high performance, a schemaless design, tunable consistency, great analytical possibilities, easy scale-up and scale-down, a fully distributed design, and the list goes on.

Cassandra has already proved its mettle and is a great fit for IoT, sensor data, event-based and time-series data, voucher generation systems and other data models. DataStax provides best-in-class database management software and a wide range of services with 24x7 support to get more from your Cassandra, along with some really cool features and tools, e.g. OpsCenter (GUI), NodeSync (for anti-entropy repairs), great Solr integration, dsetool (similar to nodetool but with more capabilities), sstableloader, a pre-flight check tool, YAML file compare tools, stress tools, extra commands such as dsefs, and many more.

DataStax is a pioneer here and has its own certification track to prove you have valid credentials to work with a Cassandra database, either as a developer or as an administrator. Now the question is where to start. In fact, many of you have asked me about my latest credential, 'Datastax Apache Cassandra 3.x Administrator Associate'; I was getting questions about how to prepare, how to book the exam and other related things. So this post will cover how to prepare and book the exam, along with a few tips.

I would always prefer to go point wise to make things more ordered and easy to digest.

1. Create your account on Datastax Academy.
Link: https://auth.cloud.datastax.com/auth/realms/CloudUsers/login-actions/registration?client_id=absorb&tab_id=lv4-57nRbu4

2. Go to the option ‘Catalog’ to lookout for courses available.
You have to choose between the Administrator (3-course curriculum) and Developer (3-course curriculum) tracks. I completed the ADMIN path, which has three courses: DS101 (Introduction), DS201 (Foundations) and DS210 (Operations with Apache Cassandra). All of the courses are beautifully designed and contain a large number of demos, presentations, guides, quizzes and a pre-built Ubuntu VM where you can do all the exercises.

Though the presentations and program cover every major topic and parameter, if you want to read in more depth they have their own documentation collection, accessible at https://docs.datastax.com/en/landing_page/doc/landing_page/current.html or https://cassandra.apache.org/doc/latest/

Note: There are a few other specialized courses available in the catalog too, e.g. Kafka connectors, DSE Graph, DSE Analytics, DSE Search etc.

3. Other learning platforms
Github: https://github.com/datastax
It can be very useful, especially if you are preparing for the developer track.
Youtube: Full of some great presentations, videos and some precious workshops and demos.
https://www.youtube.com/user/DataStaxMedia
Twitter: For news (about webinars etc.), press releases and other exciting information.
https://twitter.com/DataStax (@DataStax)

4. All set!
Once you are done with all three courses in the ADMIN track, you are ready for the certification. Go to the 'Datastax Certification' widget within the catalog and book your exam by creating your profile on their certification website.
https://certification.mettl.com/datastax/applicant/signup

Currently they are giving out free exam vouchers; these are issued at the end of the workshop series to its participants.

5. Once registered you have to choose your exam type – Admin or Developer.
Both exams have 60 questions to complete within 90 minutes; the exam fee (right now) is $145.
Note: It's a good idea to check your system compatibility before the exam; for more details follow their official guidelines.

So don't wait: go enroll in the courses, grab the chance to take the certification for free, and, more importantly, stand out from the crowd. These widely accepted and recognized credentials will help your continued professional development, are an ideal way to gain a greater understanding of your industry, and enhance your knowledge and skills. They also offer excellent chances to network with Cassandra geeks.

Hope It Helps!
Prashant Dixit


root.sh failing while installing 12cR2 on RHEL7 "Failed to create keys in the OLR" – Does your hostname start with a number?

Posted by FatDBA on July 29, 2019

Hi Guys,

I know it's been too long since I last posted; that was due to some site authentication issues and some personal priorities. I am back with new issues, all related to performance, administration, troubleshooting, optimization and other subjects.

This time I would like to share an issue I faced while installing Oracle 12c Release 2 (yes, I still do installations sometimes 🙂 ) on a brand new RHEL7 box, where everything was fine until I ran root.sh, which failed with a weird error that initially gave no hint about the underlying problem.
Initially I wondered whether this qualifies as a post and deserves a place here, but I actually spent a few days identifying the cause, plus hours with support, so I want to save all that time for anyone facing the same issue and searching on Google 🙂

So let's get started!
This is exactly what I got when I ran the root.sh script.



[root@8811913-monkey-db1:/u011/app1/12.2.0.1/grid]# ./root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u011/app1/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u011/app1/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u011/app1/12.2.0.1/crsdata/8811913-monkey-db1/crsconfig/roothas_2019-02-18_00-59-22AM.log
Site name (8811913-monkey-db1) is invalid.clscfg -localadd -z  [-avlookup]
                 -p property1:value1,property2:value2...

  -avlookup       - Specify if the operation is during clusterware upgrade
  -z   - Specify the site GUID for this node
  -p propertylist - list of cluster properties and its value pairs

 Adds keys in OLR for the HASD.
WARNING: Using this tool may corrupt your cluster configuration. Do not
         use unless you positively know what you are doing.

 Failed to create keys in the OLR, rc = 100, Message:


2019/02/18 00:59:28 CLSRSC-188: Failed to create keys in Oracle Local Registry
Died at /u011/app1/12.2.0.1/grid/crs/install/oraolr.pm line 552.
The command '/u011/app1/12.2.0.1/grid/perl/bin/perl -I/u011/app1/12.2.0.1/grid/perl/lib -I/u011/app1/12.2.0.1/grid/crs/install /u011/app1/12.2.0.1/grid/crs/install/roothas.pl ' execution failed


The error simply said that the script failed to create the keys in the OLR; these are the HASD keys it was attempting to add. I verified all the runtime logs created at the time, but they too gave no idea about the problem. That is when I had to engage Oracle customer support, and I came to know that this all happened due to a bug (BUG 26581118 – ALLOW HOSTNAME WITH NUMERIC VALUE) that comes into play when the hostname starts with a number; it is specific to RHEL7 and Oracle 12c Release 2.

Oracle suggested a bug fix (patch 26751067) for this issue. This is a MERGE patch and fixes both Bug 25499276 and Bug 26581118. One more thing: you have to apply this patch before running the root.sh script.
Let me quickly show how to do that (output trimmed of redundant sections).



[oracle@8811913-monkey-db1:/u011/app1/12.2.0.1/grid/OPatch]$ ./opatch napply -oh /u011/app1/12.2.0.1/grid -local 26751067/26751067/
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2019, Oracle Corporation.  All rights reserved.

...
......

Patch 26751067 successfully applied.
Log file location: /u011/app1/12.2.0.1/grid/cfgtoollogs/opatch/opatch2019-02-18_01-05-41AM_1.log

OPatch succeeded.
[oracle@8811913-monkey-db1:/u011/app1/12.2.0.1/grid/OPatch]$
[oracle@8811913-monkey-db1:/u011/app1/12.2.0.1/grid/OPatch]$


Ran the root.sh after patching and it went smooth.
BTW, in case you don't want to do all this, simply change the hostname so it starts with a letter, e.g. 8811913-monkey-db1 –> A8811913-monkey-db1. That's it!
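On RHEL7 that rename is a one-liner with hostnamectl (a sketch; the new name below is just the example from above, so pick whatever fits your naming standard):

hostnamectl set-hostname A8811913-monkey-db1
# remember to update /etc/hosts and any DNS entries to match before re-running root.sh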

Hope It Helps!

Thanks
Prashant Dixit


Oracle DB Security Assessment Tool (DBSAT)

Posted by FatDBA on March 2, 2018

Hi Everyone,

I would like to discuss a request that came from one of my earlier projects: identify sensitive data (tables, objects etc.) within their databases so that external policies could be enforced later on. The customer only permitted us to use an inbuilt or Oracle-branded audit tool, not any third-party security/compliance auditing tools.

So we ended up using Oracle's built-in Database Security Assessment Tool, DBSAT.
DBSAT has three components: Collector, Reporter, and Discoverer. The Collector and Reporter work together to discover risk areas and produce the final assessment report in HTML and CSV formats, while the Discoverer scans the database for sensitive data (a minimal collect/report invocation is sketched after the list below).
You can use the DBSAT report findings to:

– Fix immediate short-term risks
– Implement a comprehensive security strategy
– Support your regulatory compliance program
– Promote security best practices
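For context, the Collector and Reporter mentioned above are invoked in much the same way as the Discoverer shown later in this post. A minimal sketch, where the connect string and output file name are only placeholders:

./dbsat collect system@tunedb tunedb_assessment        # Collector: gathers configuration data, prompts for passwords
./dbsat report tunedb_assessment                       # Reporter: produces the assessment report files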

Let's see what it is and how to use it.

Step 1: Unzip the package.

[oracle@dixitlab software]$ unzip dbsat.zip
Archive: dbsat.zip
inflating: dbsat
inflating: dbsat.bat
inflating: sat_reporter.py
inflating: sat_analysis.py
inflating: sat_collector.sql
inflating: xlsxwriter/app.py
inflating: xlsxwriter/chart_area.py
inflating: xlsxwriter/chart_bar.py
inflating: xlsxwriter/chart_column.py
inflating: xlsxwriter/chart_doughnut.py
inflating: xlsxwriter/chart_line.py
inflating: xlsxwriter/chart_pie.py
inflating: xlsxwriter/chart.py
inflating: xlsxwriter/chart_radar.py
inflating: xlsxwriter/chart_scatter.py
inflating: xlsxwriter/chartsheet.py
inflating: xlsxwriter/chart_stock.py
inflating: xlsxwriter/comments.py
inflating: xlsxwriter/compat_collections.py
inflating: xlsxwriter/compatibility.py
inflating: xlsxwriter/contenttypes.py
inflating: xlsxwriter/core.py
inflating: xlsxwriter/custom.py
inflating: xlsxwriter/drawing.py
inflating: xlsxwriter/format.py
inflating: xlsxwriter/__init__.py
inflating: xlsxwriter/packager.py
inflating: xlsxwriter/relationships.py
inflating: xlsxwriter/shape.py
inflating: xlsxwriter/sharedstrings.py
inflating: xlsxwriter/styles.py
inflating: xlsxwriter/table.py
inflating: xlsxwriter/theme.py
inflating: xlsxwriter/utility.py
inflating: xlsxwriter/vml.py
inflating: xlsxwriter/workbook.py
inflating: xlsxwriter/worksheet.py
inflating: xlsxwriter/xmlwriter.py
inflating: xlsxwriter/LICENSE.txt
inflating: Discover/bin/discoverer.jar
inflating: Discover/lib/ojdbc6.jar
inflating: Discover/conf/sample_dbsat.config
inflating: Discover/conf/sensitive_en.ini

Step 2: Configure the DBSAT configuration file.
Next you have to configure the main config file (dbsat.config), available under the Discover/conf directory.

[oracle@dixitlab conf]$ pwd
/home/oracle/software/Discover/conf

[oracle@dixitlab conf]$ ls -ltrh
total 20K
-rwxrwxrwx. 1 oracle oinstall 13K Jan 16 22:58 sensitive_en.ini
-rwxrwxrwx. 1 oracle oinstall 2.4K Mar 1 22:12 dbsat.config

A few of the important parameters are given below.
vi dbsat.config

DB_HOSTNAME = localhost
DB_PORT = 1539
DB_SERVICE_NAME = tunedb
SENSITIVE_PATTERN_FILES = sensitive_en.ini   >>>>> This parameter points to the sensitive_en.ini file, which holds the English-language patterns (75 of them),
e.g. CREDIT_CARD_NUMBER, CARD_SECURITY_PIN, MEDICAL_INFORMATION, SOCIAL_SECURITY_NUMBER etc.

 

Step 3: Run the discoverer against the database to collect the information.

[oracle@dixitlab software]$ $(dirname $(dirname $(readlink -f $(which javac))))    --- To check the JAVAHOME.
-bash: /usr/java/jdk1.8.0_131: is a directory
[oracle@dixitlab software]$ export JAVA_HOME=/usr/java/jdk1.8.0_131

[oracle@dixitlab conf]$ cd ../..
[oracle@dixitlab software]$ ./dbsat discover -c Discover/conf/sample_dbsat.config tunedb_data

Database Security Assessment Tool version 2.0.1 (December 2017)

This tool is intended to assist in you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.

Enter username: system
Enter password:
Connection Successful- Retrying regarding "tunedb" as SID
DBSAT Discover ran successfully.
Calling /usr/bin/zip to encrypt the generated reports...

Enter password:
Verify password:
zip warning: tunedb_data_report.zip not found or empty
adding: tunedb_data_discover.html (deflated 88%)
adding: tunedb_data_discover.csv (deflated 84%)
Zip completed successfully.

The assessment reports are now created under the tool directory.
A sample report is attached with this post.
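To view the results locally, just unzip the encrypted archive it produced (a sketch; file names as generated in the output above) and open the HTML report in a browser:

unzip tunedb_data_report.zip        # prompts for the password chosen during report generation
# then open tunedb_data_discover.html (and the CSV) from the extracted files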

https://1drv.ms/f/s!Arob5fjpN041ga58isTgjF-wBPLI0A
tunedb_data – Oracle Database Security Risk Assessment

Hope It Helps
Prashant Dixit


Active Data Guard (ADG) is included in the Oracle GoldenGate license on EE edition.

Posted by FatDBA on August 22, 2016

The license for Oracle GoldenGate includes a full use license for Oracle Active Data Guard, and a full use license for XStream in the Oracle Database.

Active Data Guard is a superset of Data Guard capabilities included with Oracle Enterprise Edition and can be purchased as the Active Data Guard Option for Oracle Database Enterprise Edition. It is included with every Oracle GoldenGate license, offering customers the ability to acquire the complete set of advanced Oracle replication capabilities with a single purchase.


Linux YUM – Error: Cannot retrieve repository metadata (repomd.xml) for repository

Posted by FatDBA on January 6, 2016

Some time back I got an error from YUM even though the installation and configuration had gone through successfully.
It failed every time I called any YUM command, with the message "Cannot retrieve repository metadata (repomd.xml) for repository".

[root@Fatdba ~]# yum list
Loaded plugins: refresh-packagekit
Repository ol6_latest is listed more than once in the configuration
Repository ol6_ga_base is listed more than once in the configuration
ftp://obiftp/YUM_local/GDS/obi/6.1/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 – “The requested URL returned error: 502”
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base_el6_local. Please verify its path and try again

Solution:
Try the following sequence of steps to fix this problem.

$ sudo su -
# cd /etc/yum.repos.d
# rm -f *
# wget http://public-yum.oracle.com/public-yum-ol6.repo     (this step needs an Internet connection)
# yum clean all
# yum makecache
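Once the cache is rebuilt, a quick check (not from the original session) confirms the repositories now resolve:

# yum repolist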

Hope That Helps
Prashant Dixit


Sorry folks, I have been a little busy lately!!!

Posted by FatDBA on December 3, 2015

Sorry I haven't been blogging lately or posting the next part of the story. Soon I'll try to post more and help out.

Thanks
Prashant “FatDBA” Dixit


Statistics in Oracle!

Posted by FatDBA on May 5, 2015

In this post I'll try to summarize all sorts of statistics in Oracle. I strongly recommend reading the full article, as it contains information you may find valuable in understanding Oracle statistics.

#####################################
Database | Schema | Table | Index Statistics
#####################################

Gather Database Statistics:
=======================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',
    CASCADE => TRUE,
    degree => 4,
    OPTIONS => 'GATHER STALE',
    GATHER_SYS => TRUE,
    STATTAB => 'PROD_STATS');

CASCADE => TRUE : Gather statistics on the indexes as well. If not used, Oracle will decide whether to collect index statistics or not.
DEGREE => 4 : Degree of parallelism.
OPTIONS:
       =>'GATHER' : Gathers statistics on all objects in the schema.
       =>'GATHER AUTO' : Oracle determines which objects need new statistics and how to gather them.
       =>'GATHER STALE' : Gathers statistics on stale objects; returns a list of stale objects.
       =>'GATHER EMPTY' : Gathers statistics on objects that have no statistics; returns a list of those objects.
        =>'LIST AUTO' : Returns a list of objects to be processed with GATHER AUTO.
        =>'LIST STALE' : Returns a list of stale objects as determined by looking at the *_tab_modifications views.
        =>'LIST EMPTY' : Returns a list of objects which currently have no statistics.
GATHER_SYS => TRUE : Gathers statistics on the objects owned by the 'SYS' user.
STATTAB => 'PROD_STATS' : Table that will save the current statistics. See the SAVE/IMPORT & RESTORE STATISTICS section in the last third of this post.

Note: All the above parameters are valid for all kinds of statistics (schema, table, ...) except GATHER_SYS.
Note: Skewed data means the data inside a column is not uniform; one or more particular values are repeated much more often than the other values in the same column. For example, take the gender column in an employee table with two values (male/female): in a construction or security services company, where most employees are male, the gender column is likely to be skewed, but in an entity like a hospital, where the number of male and female employees is almost equal, the column is likely not skewed.

For faster execution:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What's new?
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE => Lets Oracle estimate the sample size for skewed values; it almost always gives excellent results (DEFAULT).
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => gathering histograms on all columns is not recommended.
Removed "cascade => TRUE" => lets Oracle determine whether index statistics are to be collected or not.
Doubled "degree => 8" => but this depends on the number of CPUs on the machine and the CPU overhead you can accept while gathering DB statistics.

Starting from Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database that have stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To Enable Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
    END;
    /

In case you want to Disable Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
    END;
    /

To check the tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS or the (*_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS) views, you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh their parent table mon_mods_all$ from recent SGA data, or wait for the internal Oracle job that refreshes that table (once a day from 10g onwards [except 10gR2], every 15 minutes in 10gR2, or every 3 hours in 9i and earlier), or run one of the GATHER_*_STATS procedures manually.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Statistics:
======================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
ownname =>'SCOTT',
estimate_percent=>10,
degree=>1,
cascade=>TRUE,
options=>'GATHER STALE');

Gather TABLE Statistics:
====================
Check the table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
    ownname => 'SCOTT',
    tabname => 'EMP',
    degree => 2,
    cascade => TRUE,
    METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /

CASCADE => TRUE : Gather statistics on the indexes as well. If not used, Oracle will determine whether to collect them or not.
DEGREE => 2 : Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE : (DEFAULT) Automatically sets the sample size % for skewed (distinct) values; more accurate and faster than setting a manual sample size.
METHOD_OPT => : For gathering histograms:
 FOR COLUMNS SIZE AUTO : You can specify a single column instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT : Prevents deletion of histograms; collects them only for columns that already have histograms.
 FOR ALL COLUMNS : Collects histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY : Collects histograms for columns that have skewed values (test for skewness first).
FOR ALL INDEXED COLUMNS : Collects histograms only for columns that have indexes.

Note: Truncating a table will not update the table statistics; it only resets the High Water Mark. You have to re-gather statistics on that table.

Inside the "DBA BUNDLE" there is a script called "gather_stats.sh" which helps you easily and safely gather statistics on a specific schema or table, plus it provides advanced features such as backing up / restoring the new statistics in case of fallback.

Gather Index Statistics:
===================
SQL> exec DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT',indname => 'EMP_I',
estimate_percent =>DBMS_STATS.AUTO_SAMPLE_SIZE);

####################
Fixed OBJECTS Statistics
####################

What are Fixed objects:
—————————-
- Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
- If statistics are not gathered on fixed objects, the optimizer uses predefined default values for them. These defaults may lead to inaccurate execution plans.
- Statistics on fixed objects are not gathered automatically, nor as part of gathering database stats.

How frequent to gather stats on fixed objects?
——————————————————-
Only one time for a representative workload unless you’ve one of these cases:

– After a major database or application upgrade.
– After implementing a new module.
– After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
– Poor performance/Hang encountered while querying dynamic views e.g. V$ views.

Note:
- It's recommended to gather fixed object stats during peak hours (while the system is busy), or just after the peak hours while sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent the DB activity well.
- Also note that performance degradation may be experienced while the statistics are being gathered.
- Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:
———————————————

First Check the last analyzed date:
—— ———————————–
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
       from dba_tab_statistics where table_name='X$KGLDP';
Second Export the current fixed stats in a table: (in case you need to revert back)
——- ———————————–
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
       ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');

SQL> EXEC dbms_stats.export_fixed_objects_stats
       (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');
Third Gather the fixed objects stats:
——-  ————————————
SQL> exec dbms_stats.gather_fixed_objects_stats;

Note:
In case you experience bad performance on fixed tables after gathering the new statistics:

SQL> exec dbms_stats.delete_fixed_objects_stats();
SQL> exec DBMS_STATS.import_fixed_objects_stats
       (stattab =>'STATS_TABLE_NAME',STATOWN =>'OWNER');

#################
SYSTEM STATISTICS
#################

What is system statistics:
——————————-
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to
effectively cost each operation in an execution plan. They were introduced in Oracle 9i.

Why gathering system statistics:
—————————————-
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
———————————–
This simulates a workload (not the real one, just a simulation) and does not collect full statistics. It is less accurate than "WORKLOAD statistics", but if you can't capture statistics during a typical workload you can use noworkload statistics.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics:
——————————-
This gathers statistics during the current workload (which is supposed to be representative of the actual system I/O and CPU workload on the DB).
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends, after 1, 2, 3... hours or whatever, stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use a time interval (minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60);

Check the system values collected:
——————————————-
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;

cpuspeedNW:  Shows the noworkload CPU speed, (average number of CPU cycles per second).
ioseektim:    The sum of seek time, latency time, and OS overhead time.
iotfrspeed:  I/O transfer speed,tells optimizer how fast the DB can read data in a single read request.
cpuspeed:      Stands for CPU speed during a workload statistics collection.
maxthr:          The maximum I/O throughput.
slavethr:      Average parallel slave I/O throughput.
sreadtim:     The Single Block Read Time statistic shows the average time for a random single block read.
mreadtim:     The average time (seconds) for a sequential multiblock read.
mbrc:             The average multiblock read count in blocks.

Notes:
-When gathering NOWORKLOAD statistics it will gather (cpuspeedNW, ioseektim, iotfrspeed) system statistics only.
-Above values can be modified manually using DBMS_STATS.SET_SYSTEM_STATS procedure.
-According to Oracle, collecting workload statistics doesn’t impose an additional overhead on your system.

Delete system statistics:
——————————
SQL> execute dbms_stats.delete_system_stats();

####################
Data Dictionary Statistics
####################

Facts:
——-
> Dictionary tables are the tables owned by SYS and residing in the system tablespace.
> Normally data dictionary statistics in 9i are not required unless performance issues are detected.
> In 10g, statistics on the dictionary tables are maintained via the automatic statistics gathering job run during the nightly maintenance window.

If you choose to switch off that job for the application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using the procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');

When to gather Dictionary statistics:
———————————————
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
———————————————
SQL> select table_name, last_analyzed from dba_tables
where owner='SYS' and table_name like '%$' order by 2;

Gather Dictionary Statistics:
———————————–
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
-> Will gather stats on 20% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
-> Will gather stats on 100% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(gather_sys=>TRUE);
-> Will gather stats on the whole DB + SYS schema.

################
Extended Statistics “11g onwards”
################

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on column function:
====================================
If you run a query with a function like upper/lower applied to a column in the WHERE clause, the optimizer will not use an index on that column:
SQL> select count(*) from EMP where lower(ename) = 'scott';

In order to make the optimizer work with function-based terms you need to gather extended stats:

1- Create extended stats:
>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SCOTT','EMP','(lower(ENAME))') from dual;

2- Gather histograms:
>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
—-
*You can also do it in one step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
(ownname => 'SCOTT',tabname => 'EMP',
method_opt => 'for all columns size skewonly for
columns (lower(ENAME))');
end;
/

To check the existence of extended statistics on a table:
———————————————————————-
SQL> select extension_name,extension from dba_stat_extensions where owner='SCOTT' and table_name = 'EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB (LOWER("ENAME"))

Drop extended stats on a column function:
——————————————————
SQL> exec dbms_stats.drop_extended_stats('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on a column group (related columns):
=================================
Certain columns in a table that are used together in join or WHERE conditions are correlated, e.g. (country, state). You want to make the optimizer aware of the relationship between two or more such columns instead of it using separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can determine more accurately how the columns relate when they are used together in the WHERE clause of a SQL statement. For example, columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the value of state_name is always influenced by country_id.
If extra columns are referenced in the WHERE clause along with the column group, the optimizer will still make use of the column group statistics.

1- Create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)') from dual;
2- Re-gather stats/histograms for the table so the optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats ('SH','customers',method_opt=> 'for all columns size skewonly');

OR

*You can also do it in one step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
(ownname => 'SH',tabname => 'CUSTOMERS',
method_opt => 'for all columns size skewonly for
columns (country_id,cust_state_province)');
end;
/

Drop extended stats on a column group:
————————————————–
SQL> exec dbms_stats.drop_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)');

#########
Histograms
#########

What are Histograms?
—————————–
> Histograms hold data about the distribution of values within a column of a table: the number of occurrences of a specific value/range.
> They are used by the CBO to decide whether a query should use an index (fast full scan) or a full table scan.
> They are usually used on columns whose data is repeated frequently, like a country or city column.
> Gathering histograms on a column with only distinct values (e.g. a PK) is useless because values are not repeated.
> Two types of histograms can be gathered:
-Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255 (e.g. the number of countries is always less than 254).
-Height-balanced histograms: similar to frequency histograms in design, but used when the number of distinct values is > 254.
See an example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which by default doesn't collect histograms; it deletes them if you didn't use the relevant parameter).
> Mainly gathered on foreign key columns and columns used in the WHERE clause.
> They help in SQL multi-table joins.
> Column histograms, like statistics, are stored in the data dictionary.
> If the application exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling histogram generation.

Cautions:
– Do not create them on Columns that are not being queried.
– Do not create them on every column of every table.
– Do not create them on the primary key column of a table.

Verify the existence of histograms:
———————————————
SQL> select column_name,histogram from dba_tab_col_statistics
where owner='SCOTT' and table_name='EMP';

Creating Histograms:
—————————
e.g.
SQL> Exec dbms_stats.gather_schema_stats
(ownname => 'SCOTT',
estimate_percent => dbms_stats.auto_sample_size,
method_opt => 'for all columns size auto',
degree => 7);

method_opt:
FOR COLUMNS SIZE AUTO                 => Fastest. You can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT     => Prevents deletion of histograms and collects them only for columns that already have histograms.
FOR ALL COLUMNS => Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY => Collects histograms for columns that have skewed values.
FOR ALL INDEXED COLUMNS      => Collects histograms for columns that have indexes.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the histograms or not.

Check the existence of histograms:
SQL> select column_name, count(*) from dba_tab_histograms
where OWNER='SCOTT' and table_name='EMP' group by column_name;

Drop Histograms: 11g
———————-
e.g.
SQL> Exec dbms_stats.delete_column_stats
(ownname=>'SH', tabname=>'SALES',
colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');

Stop gathering Histograms: 11g
——————————
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs
('SH', 'SALES','METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO,FOR COLUMNS SIZE 1 PROD_ID');
> This will continue to collect histograms as usual on all columns in the SALES table except for the PROD_ID column.

Drop Histograms: 10g
———————-
e.g.
SQL> exec dbms_stats.delete_column_stats(user,'T','USERNAME');

################################
Save/IMPORT & RESTORE STATISTICS:
################################
====================
Export /Import Statistics:
====================
In this way statistics will be exported into a table and then imported later from that table.

1- Create the STATS TABLE:
–  —————————–
SQL> Exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'prod_stats', tblspace => 'USERS');

2- Export statistics to the STATS table:
—————————————————
For Database stats:
SQL> Exec dbms_stats.export_database_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'STATS_TABLE_OWNER');
For Table:
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats(ownname => 'SCOTT',tabname => 'EMP',stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats(ownname => 'SCOTT',indname => 'PK_EMP',stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

3-Import statistics from PROD_STATS table to the dictionary:
———————————————————————————
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS
(stattab => 'prod_stats',statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS
(stattab => 'prod_stats',statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_Dictionary_STATS
(stattab => 'prod_stats',statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS
(stattab => 'prod_stats',statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS
(ownname => 'SCOTT',stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats
( ownname => 'SCOTT', stattab => 'prod_stats',tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats
( ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.import_COLUMN_stats
(ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

4- Drop the STAT Table:
————————–
SQL> Exec dbms_stats.DROP_STAT_TABLE (stattab => 'prod_stats',ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are saved automatically in SYSAUX for 31 days.

Restore Dictionary stats as of timestamp:
——————————————————
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1);

Restore Database stats as of timestamp:
—————————————————-
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1);

Restore SYSTEM stats as of timestamp:
—————————————————-
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1);

Restore FIXED OBJECTS stats as of timestamp:
—————————————————————-
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1);

Restore SCHEMA stats as of timestamp:
—————————————
SQL> Exec dbms_stats.restore_SCHEMA_stats
(ownname=>'SYSADM',AS_OF_TIMESTAMP=>sysdate-1);
OR:
SQL> Exec dbms_stats.restore_schema_stats
(ownname=>'SYSADM',AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
————————————————
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS
(ownname=>'SYSADM', tabname=>'T01POHEAD',AS_OF_TIMESTAMP=>sysdate-1);

=========
Advanced:
=========

To Check current Stats history retention period (days):
——————————————————————-
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> select dbms_stats.get_stats_history_availability from dual;
To modify current Stats history retention period (days):
——————————————————————-
SQL> Exec dbms_stats.alter_stats_history_retention(60);

Purge statistics older than 10 days:
——————————————
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Procedure to claim space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space will not be reclaimed automatically when you purge stats; you must reclaim it manually using this procedure:

Check Stats tables size:
>>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb,
segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc
/

Check Stats indexes size:
>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 120
select sum(bytes/1024/1024) Mb, segment_name,segment_type
from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type order by 1 asc
/
Move Stats tables in same tablespace:
>>>>>
select 'alter table '||segment_name||'  move tablespace
SYSAUX;' from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='TABLE'
/
Rebuild stats indexes:
>>>>>>
select 'alter index '||segment_name||'  rebuild online;'
from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='INDEX'
/

Check for un-usable indexes:
>>>>>
select  di.index_name,di.index_type,di.status  from
dba_indexes di , dba_tables dt
where  di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc
/

Delete Statistics:
==============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS ();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS ();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS ();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS ();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT');
For Table stats and its indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats(ownname=>'SCOTT',tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats(ownname => 'SCOTT',indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats(ownname =>'SCOTT',tabname=>'EMP',colname=>'EMPNO');

Note: These deletions can be rolled back by restoring the statistics with the DBMS_STATS.RESTORE_* procedures.

Pending Statistics:  “11g onwards”
===============
What is Pending Statistics:
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them "system wide" until you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');
Note: Any statistics gathered on the database from now on will be marked PENDING unless you change the above parameter back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');

Gather statistics as you used to do:
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable the use of pending statistics for your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;
Then any SQL statement you run will use the new pending statistics.

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS();

Once you finish, don't forget to set the global PUBLISH preference back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
> If you don't, all newly gathered statistics on the database will be marked as PENDING, which may confuse you or any DBA working on this DB who is not aware of that parameter change.


 