Tales From A Lazy Fat DBA

Den of an Oracle DB Performance freak & a fan of PostgreSQL, Cassandra & MySQL …

Disclaimer!

FatDBA or Oracle ‘Ant’ is an independent web-blog/site. The experiences, test cases, views, and opinions expressed in this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned in this website may be the trademarks of their respective owners.

Posts Tagged ‘awr’

Visualize your database performance statistics using Tableau.

Posted by FatDBA on January 28, 2018

Hi Everyone,

Today’s post is all about producing some interactive data visualizations of your system/database statistics using one of the most popular BI tools, Tableau.
There are lots of other tools that can be used for this kind of data analysis, e.g. Excel, MicroStrategy Analytics, Domo and QlikView, but I always find Tableau the easiest way to do such things, especially during my database audits, 360 health reviews and troubleshooting task-forces.

Tableau offers a suite of tools that includes online, desktop and server versions. All of these versions provide an easy-to-use drag-and-drop interface that can help you quickly turn your data into business insights. Like many other data analytics and visualization tools, Tableau can connect to local or remote data in many different formats.

Okay, after that short introduction to the tool, it’s time to do some tests with it.
I have divided the process into three steps, discussed below.

First: Data Collection
Collect your AWR reports in TEXT format; they will later be parsed to create a CSV file.
There are many tools/scripts available online to generate multiple AWR reports for your database. Awesome work was already done by FlashDBA (Download): you can use his script to generate batch AWR reports, and yes, in TEXT format ONLY!
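If you don’t want a full-blown script, a rough sketch like the one below also works: it loops over a snapshot range and spools one TEXT report per consecutive snap pair through the documented DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT table function. The snap range 445-499 and instance number 1 are placeholders from my lab; adjust them for your own system.

#!/bin/bash
# Rough sketch (NOT FlashDBA's script): spool one TEXT AWR report per
# consecutive snapshot pair. Adjust the snap range and instance number.
for SNAP in $(seq 445 499); do
  END=$((SNAP + 1))
  sqlplus -s "/ as sysdba" <<EOF > "awrrpt_1_${SNAP}_${END}.txt"
set pagesize 0 linesize 300 heading off feedback off trimout on
select output
from   table(dbms_workload_repository.awr_report_text(
         (select dbid from v\$database), 1, ${SNAP}, ${END}));
exit
EOF
done

Either way, you should end up with a directory full of TEXT reports.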

Example:

[oracle@dixitlab AWR]$ ls -ltr
total 12852
-rw-r--r--. 1 oracle oinstall 225031 Jan 27 21:25 awrrpt_1_445_446.txt
-rw-r--r--. 1 oracle oinstall 255010 Jan 27 21:26 awrrpt_1_446_447.txt
-rw-r--r--. 1 oracle oinstall 253217 Jan 27 21:26 awrrpt_1_447_448.txt
-rw-r--r--. 1 oracle oinstall 253217 Jan 27 21:26 awrrpt_1_448_449.txt
-rw-r--r--. 1 oracle oinstall 244229 Jan 27 21:26 awrrpt_1_449_450.txt
........
.................

Second: Data Fold or Compression.
Next, you need a tool to fold your AWR reports into a CSV. Here once again FlashDBA did a marvelous job: he wrote a fabulous script that parses your AWR text files and produces a single consolidated CSV.
You can download the script from his GitHub (Download Link)

To run the script, you pass it the filename pattern of your reports and redirect the output to a CSV, as shown below.

[oracle@dixitlab AWR]$ ./awr-parser.sh awr*.txt > tunedbperftests.csv

Info : Parsing file awrrpt_1_445_446.txt at 2018-01-27 21:32:49
Info : Parsing file awrrpt_1_446_447.txt at 2018-01-27 21:32:53
Info : Parsing file awrrpt_1_447_448.txt at 2018-01-27 21:32:56
Info : Parsing file awrrpt_1_448_449.txt at 2018-01-27 21:33:01
Info : Parsing file awrrpt_1_449_450.txt at 2018-01-27 21:33:07
Info : Parsing file awrrpt_1_450_451.txt at 2018-01-27 21:33:15
Info : Parsing file awrrpt_1_451_452.txt at 2018-01-27 21:33:21
....
........
Info : Parsing file awrrpt_1_499_500.txt at 2018-01-27 21:36:56
Info : No more files found
Info :
Info : ______SUMMARY______
Info : Files found : 55
Info : Files processed : 55
Info : Processing errors : 0
Info :
Info : Completed with 0 errors
[oracle@dixitlab AWR]$

With that, you are done parsing the reports and have the final CSV, which we will use to play around with in Tableau.
Contents of the parsed file:

Filename	Database Name	Instance Number	Instance Name	Database Version	Cluster	Hostname	Host OS	Num CPUs	Server Memory (GB)	DB Block Size	Begin Snap	Begin Time	End Snap	End Time	Elapsed Time (mins)	DB Time (mins)	Average Active Sessions	Busy Flag	Logical Reads/sec
awrrpt_1_445_446.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	445	1/26/2018 21:57	446	1/26/2018 23:00	62.24	6.33	0.1	N	2629.5
awrrpt_1_446_447.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	446	1/26/2018 23:00	447	1/27/2018 0:00	60.19	12.18	0.2	N	13973.4
awrrpt_1_447_448.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	447	1/27/2018 0:00	448	1/27/2018 1:00	60.15	13.52	0.2	N	14055.8
awrrpt_1_448_449.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	448	1/27/2018 1:00	449	1/27/2018 2:00	60.15	10.13	0.1	N	11597.4
awrrpt_1_449_450.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	449	1/27/2018 2:00	450	1/27/2018 3:00	60.16	0.03	0	N	65.4
awrrpt_1_450_451.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	450	1/27/2018 3:00	451	1/27/2018 4:00	60.12	0.02	0	N	70.3
awrrpt_1_452_453.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	452	1/27/2018 5:00	453	1/27/2018 6:00	60.13	0.69	0	N	189.8
awrrpt_1_453_454.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	453	1/27/2018 6:00	454	1/27/2018 7:00	60.13	2.88	0	N	2439.1
awrrpt_1_454_455.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	454	1/27/2018 7:00	455	1/27/2018 8:00	60.14	12.57	0.2	N	14027.3
awrrpt_1_455_456.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	455	1/27/2018 8:00	456	1/27/2018 9:00	60.14	10.11	0.1	N	13916.6
awrrpt_1_456_457.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	456	1/27/2018 9:00	457	1/27/2018 10:00	60.14	10.26	0.1	N	13941.5
awrrpt_1_457_458.txt	TUNEDB	PRIMARY	tunedb	EE	1	dixitlab.localdo	Linux x86 64-bit	1	2.77	8K	457	1/27/2018 10:00	458	1/27/2018 10:50	50.25	18.67	0.3	N	14118.9

Third: Data Representation using Tableau.
Okay, so here we have the final parsed CSV of all those TEXT AWR reports, named ‘tunedbperftests.csv’, and we are ready to play around and learn.

Immediately after launching, you will see a couple of options for Data Sources on the left. Choose the text file option as the source and browse to the CSV to load it.

Next, you will see all the rows of your data source file (tunedbperftests.csv in our case).

Next, click on Worksheet, your personal area to play in.

Tableau then divides the data into two main types: dimensions and measures. Dimensions are usually fields that cannot be aggregated; measures, as the name suggests, are fields that can be measured, aggregated or used in mathematical operations. Dimension fields are usually used for row or column headings, while measures are usually plotted or used to size markers.


The next tab is for analytical functions, e.g. if you want to add a constant, average, median or any other reference lines to your graph/chart.

All good now, so we are all set to plot performance charts from the DB historical information that we have collected in the CSV and loaded into Tableau. Let’s plot average hard parses per second, average DB time, average parses and average transactions against time (hourly rate of BEGIN TIME as a measure).

So, using the above area graph, you have plotted the average metric usage on the database during a time period.

Next, I will visualize one of the most prominent DB wait events observed in the database during the probe (data collection) period, ‘DPR’ or ‘direct path read’, and will plot a bar graph against time (hourly BEGIN TIME).

Some more stats visualizations, this time the ‘top 5 waits’ and their hourly frequency.

There are lots of other things you can do with your statistics: you can plot your data as squares, side-by-side circles, polygons, pie charts, Gantt bars, line graphs, area graphs, box-and-whisker plots, highlight tables and many more.

So, use your imagination, and you can visualize your database statistics using Tableau!

Questions are welcome. Happy reading! 🙂 🙂

Hope It Helps
Prashant Dixit


Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »

AAS or Average Number Of Active Sessions – The first thing to look at in an AWR & its uses.

Posted by FatDBA on January 27, 2018

Hi All,

Today’s post is all about answering the question ‘What is the very first thing one should look for in an AWR report?‘. I have been asked so many times about the first statistic I prefer to look at when troubleshooting a performance problem, so I thought of answering it by writing this post with some real-world examples.

And the answer is ‘AAS‘, or ‘Average Number of Active Sessions’. It’s the first thing I always look for when reading AWR reports, because it gives you a quick idea of how busy the system is and the workload it is handling.

Okay, so first let’s understand what an ‘active session’ is: in simple words, it is a session currently spending time in the database (i.e. from v$session where status=’ACTIVE’).
Now, what exactly is AAS? It’s the rate of change of DB time over clock time, calculated with the standard formula DB Time / Elapsed Time.
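For instance, a quick point-in-time peek at the sessions that are active right now (note this is a single sample of currently active user sessions, not AAS):

-- Count user sessions that are actively working in the database right now.
select count(*) as active_sessions
from   v$session
where  status = 'ACTIVE'
and    type   = 'USER';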

Let’s calculate the value for one of my systems.

Host Name	        Platform	       CPUs   
dixitLab1.fatdba.com	Linux x86 64-bit	16

AAS in this case: 1024.72/60.04 => 17.067 average active sessions during the one-hour snap interval (a DB Time of 1024.72 minutes over an elapsed time of 60.04 minutes).
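By the way, you don’t need to open individual AWR reports to trend this number. A minimal sketch of a query against the AWR repository (assuming a single-instance database; ‘DB time’ in DBA_HIST_SYS_TIME_MODEL is a cumulative microsecond counter, so it has to be delta-ed between snaps, and the delta goes negative across an instance restart):

-- AAS per snap interval = delta of 'DB time' (microseconds) divided by
-- the wall-clock seconds of the interval.
select s.snap_id,
       s.begin_interval_time,
       round( (tm.value - lag(tm.value) over (order by s.snap_id)) / 1e6 /
              ( (cast(s.end_interval_time as date) -
                 cast(s.begin_interval_time as date)) * 86400 ), 2 ) aas
from   dba_hist_snapshot s
       join dba_hist_sys_time_model tm
            on  tm.snap_id         = s.snap_id
            and tm.dbid            = s.dbid
            and tm.instance_number = s.instance_number
where  tm.stat_name = 'DB time'
order  by s.snap_id;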

Let’s further decode the magical figures of AAS.
We always use the CPU count as the yardstick for comparing AAS. A few rules of thumb for this comparison are given below.
– If AAS is higher than the number of CPUs you have, there is a problem; e.g. in the above example we have an AAS of 17 against a CPU count of 16, hence we could have performance problems and need to investigate.
– If the value is much higher than the number of CPUs, there is a choke-point in the database.

You could also use AAS as one of the axes when plotting your graphs, lines and Manhattans, to compare it with CPU consumption and quickly pinpoint the pain areas and time slots. Let’s understand and use it through a scenario.

Assume one fine day you get a call from the monitoring team: they have observed huge spikes in system resource usage, and many of the other metrics on the dashboard are red. As usual, lots of fingers and eyes start pointing towards you and the DBA team.

Now you, as the DBA, quickly generate the AWR report for that specific time frame to understand the system behavior, and observe a huge workload on the database with an AAS of 305 (for a 2-node RAC database with 128 CPUs collectively), along with big peaks for Application class waits (e.g. row lock contentions), User I/O class waits (e.g. db file sequential read, read by other session) and some Network class waits (e.g. SQL*Net message from dblink).

Now you want to understand the trend of wait classes in the database over the last few days. Here you can use the DBA_HIST_ACTIVE_SESS_HISTORY view to collect historical statistics, which you can then use to plot charts in Excel, Tableau, etc., as sketched below.
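A minimal sketch of such a query (it leans on the fact that DBA_HIST_ACTIVE_SESS_HISTORY keeps roughly one sample every 10 seconds, so each row stands for about 10 seconds of active session time; the 7-day window is arbitrary):

-- Hourly AAS per wait class over the last 7 days, ready to export and plot.
select trunc(sample_time, 'HH24')      sample_hour,
       nvl(wait_class, 'ON CPU')      wait_class,
       round(count(*) * 10 / 3600, 2) aas
from   dba_hist_active_sess_history
where  sample_time > sysdate - 7
group  by trunc(sample_time, 'HH24'), nvl(wait_class, 'ON CPU')
order  by sample_hour, wait_class;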

I have collected similar stats using the ASH view and plotted a graph with a few of my data representation tools to understand this transient variation in system performance.

Here you see a sudden spike in DB wait classes (especially User I/O, Cluster, Application and Network) on March 5th, with the average number of active sessions (AAS), stacked for both RAC nodes, at around 305, which compared with the total number of CPUs (64+64=128) is extremely high.

After further investigation you understand that it’s the application class wait ‘enq: TX - row lock contention‘ that is the primary cause of this high system resource utilization.

The graph below represents AAS waiting on the application class event ‘enq: TX - row lock contention’ per instance, where we can see the same thing happening: a steady level, then a sudden rise in row locking contention.

And you have identified the major sources contributing to this row locking during the probe period of the last 7 days. You can join dba_hist_active_sess_history with dba_hist_snapshot to get this historical information; read my previous article on how to pull this past information from the AWR repository.
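Even without the snapshot join, a minimal starting point could look like this (again using the 10-second ASH samples; the 7-day window and top-10 cut are arbitrary):

-- Top SQL_IDs waiting on 'enq: TX - row lock contention' in the last 7 days.
select *
from  ( select sql_id, count(*) as ash_samples
        from   dba_hist_active_sess_history
        where  event = 'enq: TX - row lock contention'
        and    sample_time > sysdate - 7
        group  by sql_id
        order  by ash_samples desc )
where  rownum <= 10;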

Now that you have narrowed down the problem and identified the problematic SQLs with their total contribution, you can start the query optimization/tuning to fix the issue.
There are lots of other data representations you can build using AAS as one of the graph axes, e.g. AAS on CPU and top wait events; I will discuss those in further posts.

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | 2 Comments »

Parse CPU to Parse Elapsd % – Let’s clear the perplexity!

Posted by FatDBA on January 26, 2018

Happy Republic Day!


Hi Mates,

I see a lot of confusion, mix-ups and perplexity among DBAs about a few of the metrics under the ‘Instance Efficiency %’ section of AWR reports, especially the metric “Parse CPU to Parse Elapsd %”. In today’s post I will try to explain this metric in detail, which will help you understand it in depth and clear the muddiness.

A few of the lines you might have read about this metric:
“we should always look for as low a number as possible for this metric …”
“Try to achieve the impossible value of zero for this one“

– Even some of the Metalink notes are misleading.

Scenario:

Below is a snippet from one of my test boxes.

Note: Just looking at the instance efficiency ratios can be very dangerous, and I advise first starting with the Load Profile, top 5 waits and so on.

Okay, so my definition of this statistic differs from what you might judge from the name of the metric: this datum signals the delay/wait incurred while parsing SQL queries during the snap interval.
In our example the value is 1.37%, which means that for every CPU second spent parsing we spent about 72.99 seconds of wall-clock time (100/1.37). This can happen for various reasons, e.g. latching or other contention between sessions.

The ideal value for this stat, like the rest of the ratios, is 100% (yes, that’s correct!!).

Let’s see how this value was calculated and what it considers while deducing that final figure of 1.37%.
It takes “parse time cpu / parse time elapsed * 100” to get the ‘Parse CPU to Parse Elapsed’ figure.


Statistic	                              Total	per Second     per Trans
------------------------------------------------------------------------------------------------
parse time cpu                                  398	      0.11	0.01
parse time elapsed                           29,055	      8.07	0.81
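
So the arithmetic here is 398 / 29,055 * 100 ≈ 1.37%. If you want to eyeball the ratio yourself from the cumulative instance statistics (remember the AWR figure is computed from the delta between two snapshots, while V$SYSSTAT is cumulative since instance startup), a minimal sketch:

-- Current cumulative 'Parse CPU to Parse Elapsd %' since instance startup.
select round(100 * max(decode(name, 'parse time cpu',     value)) /
                   max(decode(name, 'parse time elapsed', value)), 2)
       as parse_cpu_to_parse_elapsed_pct
from   v$sysstat
where  name in ('parse time cpu', 'parse time elapsed');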


So, in short, every time this metric drops below 100%, it means the database was waiting on something that slowed down parse times.
If you want to dig deeper, trace the session using 10046 tracing at level 8 to see where the rest of the parse time is being spent; I’m leaving the analysis itself for readers to test.
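The standard incantation for that, run from the session you want to diagnose:

-- Enable extended SQL trace with wait events (level 8) ...
alter session set events '10046 trace name context forever, level 8';
-- ... run the parse-heavy workload, then switch the trace off:
alter session set events '10046 trace name context off';
-- The trace file lands in the diagnostic trace directory; query
-- v$diag_info ('Default Trace File') for its exact location.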


Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | Leave a Comment »
