Tales From A Lazy Fat DBA

Den of an Oracle DB Performance freak & a fan of PostgreSQL & Cassandra …. \,,/


Disclaimer!

    FatDBA or Oracle 'Ant' is an independent blog. The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of, and does not represent, Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned on this website may be trademarks of their respective owners.

Posts Tagged ‘performance’

All about Oracle 12c Database In-Memory!

Posted by FatDBA on December 27, 2017

Hi Mates,

Today I will discuss the Database In-Memory option of Oracle 12c databases. I know I am a little late in writing about this feature, but I recently implemented it for one of my customers in the Latin America region, and that gave me the exposure to understand it better.

So, let's start with what is most probably my last post of the year!

Recently I got a chance to test and benchmark the performance gains in one of our pre-prod environments. The feature applies to tablespaces, tables, materialized views, and (sub)partitions (except objects owned by SYS and stored in the SYSTEM or SYSAUX tablespaces). To understand the feature, I would first like to shed some light on the two ways Oracle stores tables, both on disk and in memory: the conventional 'row format' and the all-new In-Memory 'column format'.

Row arrangement: the same old traditional method of storing data in row format. This is best for OLTP systems, as queries run faster with this approach: all of the columns of a record are fetched quickly.
Column-based arrangement: records are kept in a separate column store. This proves good for OLAP systems, where a large set of rows is scanned but only a few columns are needed.

Hence, based on the above two data arrangements, it is clear that the row-based method is best for DML, while the column-based arrangement is good when selecting a large portion of the data; both methods have their own pros and cons. But starting with Oracle 12.1.0.2 we have the new 'DB In-Memory' feature, which uses the best of both approaches: it keeps data in memory in both row and column arrangements, and the optimizer automatically routes each query to the appropriate format based on the workload (OLTP versus analytical processing).

The In-Memory feature uses the IM column store, a new occupant of the SGA called the In-Memory Area. The size of this In-Memory Area is controlled by the INMEMORY_SIZE parameter.

The IM column store is populated by worker processes (e.g. w001, w002, and so on); each worker process fills the IM Compression Units, or IMCUs.

So, enough of the background and internals; let's jump to the real work and check how this thing works in practice.
First, check the minimum compatibility of the database:

COMPATIBLE = 12.1.0.0.0
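To check the current setting, either of the following will do (a trivial sketch):

SQL> show parameter compatible
SQL> select name, value from v$parameter where name = 'compatible';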

Let's first enable the In-Memory column store. There are quite a few related parameters; they are listed below.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_adg_enabled                 boolean     TRUE
inmemory_clause_default              string
inmemory_expressions_usage           string      ENABLE
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
inmemory_virtual_columns             string      MANUAL
optimizer_inmemory_aware             boolean     TRUE

Before I show how to enable it, I would first like to show you the database startup details with In-Memory disabled.

SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             771753984 bytes
Database Buffers          402653184 bytes
Redo Buffers                8155136 bytes
Database mounted.
Database opened.
SQL>

Okay, now let's enable it!
A question might be asked about the criteria for adding objects to the pool. In my opinion the best candidates are very hot data and large segments (not less than 1 MB) that serve analytical queries; collect the evidence from sources like Oracle segment statistics, column-scan counts, AWR reports, etc.
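A sketch of how one might shortlist candidates from segment statistics; the 'logical reads' statistic, the 1 MB floor, and the top-10 cut-off here are illustrative assumptions, not a fixed recipe:

-- Hottest segments by logical reads that are at least 1 MB in size
SELECT st.owner, st.object_name, st.value AS logical_reads
FROM   v$segment_statistics st, dba_segments s
WHERE  st.statistic_name = 'logical reads'
AND    st.owner          = s.owner
AND    st.object_name    = s.segment_name
AND    s.bytes           > 1024*1024
ORDER  BY st.value DESC
FETCH FIRST 10 ROWS ONLY;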

SQL> alter system set inmemory_size=250m scope=spfile;

System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             822085632 bytes
Database Buffers           83886080 bytes
Redo Buffers                8155136 bytes
In-Memory Area            268435456 bytes
Database mounted.
Database opened.

Now I will try to load some stuff into memory.

SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate 
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                DISABLED

00:14:42 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE                         >>>>>>> 1MB pool is used to store the column formatted data.
64KB POOL                     50331648          0 DONE                         >>>>>>> 64 KB pool is used to store the metadata about its residents.

Now I am moving the discussed table into memory.

exec DBMS_INMEMORY.POPULATE(schema_name => 'DIXIT', table_name => 'TEST1');

or using:

00:15:01 SQL> alter table test1 inmemory;

Table altered.


00:17:59 SQL> SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                ENABLED  NONE     AUTO            FOR QUERY LOW     NO DUPLICATE




00:20:17 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE
64KB POOL                     50331648          0 DONE


00:18:25 SQL> select owner, segment_name, populate_status from v$im_segments;

no rows selected

The above results show that no segment has been added to the pool yet; with the default priority of NONE, population is triggered only when the object is first scanned. So let's query the table and then check the results again.

00:20:26 SQL> select count(*) from test1;

  COUNT(*)
----------
   1000000



00:20:48 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 POPULATING
64KB POOL                     50331648          0 POPULATING


Okay, now it's doing something: the status has changed from DONE to POPULATING. Let's repeat the same command again.

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016   49283072 DONE
64KB POOL                     50331648     458752 DONE

Okay, it's in the pool now! Let's check the In-Memory area and see what we have inside it.

00:23:56 SQL> select owner, segment_name, populate_status from v$im_segments;

OWNER      SEGMENT_NAME         POPULATE_STAT
---------- -------------------- -------------
DIXIT      TEST1                COMPLETED

To confirm the table is fully populated, we look at v$im_segments_detail, which lets us compare the number of blocks in memory with the number of blocks in the TEST1 table.

SELECT m.inst_id, 
       m.blocksinmem, 
       m.datablocks 
FROM   v$im_segments_detail m, 
       user_objects o 
WHERE  m.dataobj = o.object_id 
AND    o.object_name = 'TEST1';

   INST_ID BLOCKSINMEM DATABLOCKS
---------- ----------- ----------
         1        6922       6922

Let's see if any change is captured in the execution plan of the query.

SQL> explain plan for select count(*) from test1;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3896847026

-----------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    71   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE             |       |     1 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| TEST1 |  1000K|    71   (2)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

We now have a new operation in the plan, 'TABLE ACCESS INMEMORY FULL'. It tells us the object was accessed directly from the IM column store.

You can compare the response times with/without the In-Memory Store by simply disabling In-Memory functionality in your session:

ALTER SESSION SET inmemory_query = ENABLE;
ALTER SESSION SET inmemory_query = DISABLE;

You can remove an object from the pool at any time with the NO INMEMORY clause; for instance, on our demo table:
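SQL> ALTER TABLE test1 NO INMEMORY;

There are a few other options to use with In-Memory. Enabling the attribute at the tablespace level: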

SQL> ALTER TABLESPACE example INMEMORY;

Enabling the In-Memory attribute on the TEST1 table but excluding the "dest_id" column:

SQL> ALTER TABLE TEST1 INMEMORY NO INMEMORY(dest_id);

Enabling the In-Memory option for table TEST1 and setting the priority to CRITICAL (the other options are HIGH, MEDIUM, and LOW). With this set on an object, it is populated immediately after the database is opened rather than on first access.

SQL> ALTER TABLE TEST1 INMEMORY PRIORITY CRITICAL;

There are a few other options, like compression (objects are compressed during population), joins, and scans; they will be covered in a separate post, or you can read the official In-Memory documentation to understand them.
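As a quick, hedged sketch of the compression clause: MEMCOMPRESS levels range from FOR DML through FOR QUERY LOW/HIGH to FOR CAPACITY LOW/HIGH, and the level below is just an example, not a recommendation.

SQL> ALTER TABLE TEST1 INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;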

Before closing, I would like to discuss a little of how the In-Memory option works in a RAC environment.

When a user queries the database In-Memory in RAC, a serial query will only access the data populated on its own node: IMCUs (IM Compression Units) do not travel over the interconnect or via Cache Fusion, so the query fetches the remainder of the data from disk.
Parallel execution helps here, as it starts multiple processes and ensures that at least one parallel server slave is allocated on each RAC instance. For that we have to set parallel_degree_policy (Auto DOP) to AUTO, which makes the query coordinator IMCU-aware so that it automatically starts parallel server processes on the correct instances.
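A minimal sketch of that setting at the system level (it can also be set per session):

SQL> ALTER SYSTEM SET parallel_degree_policy = AUTO;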

Now, after all this discussion, it is time to share my final words on the subject.
Caching and pooling have existed in Oracle, and in most other databases, for a very long time; we already have areas like the KEEP/RECYCLE pools and the result cache to hold cached data or results. So why In-Memory now?
My answer: the Oracle In-Memory column store loads objects into memory in a compressed columnar format, which makes scans perform much better than on-disk reads and leads to real performance boosts. Use it when there is a SQL performance issue, every tuning method you have tried has failed to fix the problem, and the In-Memory performance benefit can outweigh the additional Oracle license costs.

Hope It Helps
Prashant Dixit



Auto Stats Gathering in Oracle 12c & Something Interesting :)

Posted by FatDBA on December 14, 2017

Hi Fellas,
Starting with Oracle 12c, there is a new feature that collects statistics automatically when you perform bulk loads using either of two methods:
– CREATE TABLE AS SELECT (CTAS)
– INSERT INTO … SELECT (into an empty table, using a direct-path insert).

SQL> explain plan for create table dixittab as select * from scottisdead;
Explained.
 
SQL> select * from table(DBMS_XPLAN.DISPLAY);
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 14312189
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |               |   500K|  8812K|   612   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | DIXITTAB      |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |               |   500K|  8812K|   371   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SCOTTISDEAD   |   500K|  8812K|   371   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------
10 rows selected. 

In the execution plan above you'll see the new operation named "OPTIMIZER STATISTICS GATHERING" at Id 2.
Let's verify that the stats were collected.

SQL> select table_name, last_analyzed from user_tables where table_name = 'DIXITTAB';
 
TABLE_NAME       LAST_ANALYZED
---------------- -------------
DIXITTAB         12-DEC-17

Yup, stats were collected!
In the same way, stats are auto-collected during the other bulk-load method (INSERT INTO … SELECT).

There may be times when you want to disable this feature, in situations like:
– long/huge insert operations that spend a lot of time on STATS GATHERING;
– an extremely large dataset where you don't want to collect stats.

To achieve that, we have a hint that instructs Oracle not to gather table statistics.

SQL> create table dixittab as select /*+ NO_GATHER_OPTIMIZER_STATISTICS */ * from scottisdead;
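The same hint should work for the direct-path INSERT flavor as well; a sketch on the same demo tables:

SQL> insert /*+ APPEND NO_GATHER_OPTIMIZER_STATISTICS */ into dixittab
     select * from scottisdead;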

Now, something interesting I would like to discuss…
Is there any other condition under which the stats won't be collected automatically, apart from barring it with the NO_GATHER_OPTIMIZER_STATISTICS hint?

Let's try some conventional bulk loading using the INSERT INTO … SELECT method.
For this test I am intentionally commenting out a few of the columns in both tables: the DATE_VAL column of the newly created table TABLE1 and DATE_VALUE of the source table SAMPLE.


SQL> create table table1 (ident number, date_val date, text_val varchar2(4000));
Table created.


SQL> explain plan for insert /*+ append */ into table1
(IDENT
--, DATE_VAL
, TEXT_VAL)
SELECT ID
--, DATE_VALUE
, TEXT_VALUE
FROM SAMPLE; 

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------

Plan hash value: 1523099961
-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | INSERT STATEMENT   |        |   100K|  2539K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT    | TABLE1 |       |       |            |          |
|   2 |   TABLE ACCESS FULL| SAMPLE |   100K|  2539K|   154   (1)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

😦 😦 Why was the auto stats gathering behavior not repeated this time??

This happened because Oracle needs all the columns of the table to be included in the statement for the OPTIMIZER STATISTICS GATHERING operation to kick in. Let me show you what I mean:

SQL> explain plan for insert /*+ append */ into table1
(IDENT
, DATE_VAL
, TEXT_VAL)
SELECT ID
, DATE_VALUE
, TEXT_VALUE
FROM SAMPLE;  

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------

Plan hash value: 1523099961
-------------------------------------------------------------------------------------------
| Id  | Operation                        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | TABLE1 |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |        |   100K|  3320K|   154   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | SAMPLE |   100K|  3320K|   154   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------

10 rows selected.

Yes, the stats were collected this time, once we included all of the table's columns.
I haven't seen any documentation on this restriction of the new 12c feature; I hope Oracle adds it to the documentation soon 🙂

Hope It Helps!
Prashant Dixit


Optimizer making wrong decisions. Histogram, is that you?

Posted by FatDBA on December 4, 2017

Hey Mates,

I would like to discuss a scenario I faced some time back with one of my customers, where a query (part of the monthly application maintenance) started taking a huge amount of time to complete.
During the analysis I found it doing some extremely expensive full table scans on a fat table when it shouldn't. Let me elaborate on what the issue was.

I read the execution plan and the extended optimizer traces (event 10053) to understand the decisions made by the CBO.

The traces gave me a hint: the optimizer was able to accurately estimate the cardinality for all of the predicates as long as the string was 32 bytes or less, but at one operation step in the execution plan the estimates were badly skewed. The CBO had decided to 'return every row' irrespective of the string value, and that was happening because the histogram contained only the first 32 characters of the string data.

Since the histogram considered only the first 32 characters of the column value, the optimizer believed this particular value, "siebelm01-vip@2002:101:11f::101:1", occurred 5776 times in the table, so it ignored the index and used a full table scan. Oracle considers only the first 32 characters of a column when generating histograms, and in such cases, as we saw in the aftermath, the optimizer will make wrong decisions.
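You can confirm what kind of histogram sits on the column before dropping it; a quick sketch against the dictionary, using the owner/table/column names from this case:

SELECT column_name, histogram, num_buckets
FROM   dba_tab_col_statistics
WHERE  owner = 'SIEBELADMIN'
AND    table_name = 'TX2121'
AND    column_name = 'C131HJA12';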

So, as an immediate solution to the problem, I deleted the histogram on this specific column, and that fixed the query plan change issue.

BEGIN
dbms_stats.delete_column_stats(ownname=>'SIEBELADMIN', tabname=>'TX2121', colname=>'C131HJA12', col_stat_type=>'HISTOGRAM');
END;
/

Other fixes:
– For longer strings (32 characters or more), when doing text searches, use Oracle Text and the CONTAINS operator in the SQL logic.

Hope That Helps!
Prashant Dixit


How to use “SQL Patch” technique to fix Query performance issues.

Posted by FatDBA on November 30, 2017

Hi Guys,

In my last post, "How to fix SQL Plan issues using OUTLINE DATA.", we learned how to fix the CBO's wrong cost estimates using the OUTLINE DATA of the optimal plan. But there are a few issues with that approach: it only works for ad-hoc statements and fails to handle situations such as:

– the SQL coming from a bundled application, so the statement cannot be modified;
– wanting a smaller set of hints that still leads to the better execution plan;
– cases where it is not possible to use SQL baselines.

Also, as discussed in that same post where we attached the full list of OUTLINE DATA, it is a little complicated, dirty, and disordered to add the full outline set to the SQL statement,
e.g. select /*+ …full outline here…*/

So, how do we attach just the required set of hints to force the better plan for the SQL?

Let's do a test to explain this subject in detail.

SQL text: select count(*) from dixit.test;
Problem statement: the query takes a huge amount of time to complete the COUNT of rows in the TEST table, doing a full table scan despite available indexes.

Querying the DBA_HIST_SQLSTAT view, I saw a total of 2 execution plans available, and the query was somehow picking the wrong (costliest) plan.
So, let's grab the OUTLINE data of the better plan first.

select * from table(dbms_xplan.display_awr('SQL_ID', PHV, format => 'ADVANCED'));
OR
EXPLAIN PLAN for select count(*) from dixit.test;
select * from table(dbms_xplan.display(format=>'+OUTLINE'));

Outline Data
————-
/*+
BEGIN_OUTLINE_DATA
INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.2.0.4')
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Let's create a SQL patch for the statement so that it is forced to use just the required hint.

begin
  dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select count(*) from dixit.test',
    hint_text => 'INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1")',
    name      => 'test_sql_patch_dixit');
end;
/

I ran the query once again and, as expected, this time it finished with a much smaller elapsed time and used an index scan in place of the expensive FTS.
I also noticed a new note just below the execution plan.

Note
--------
- SQL patch "test_sql_patch_dixit" used for this statement
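You can also list the patches known to the database; a quick sketch using the DBA_SQL_PATCHES dictionary view:

SELECT name, status, created
FROM   dba_sql_patches;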

In case you want to drop the patch:

begin
DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'test_sql_patch_dixit');
end;
/


HOPE IT HELPS!
Prashant Dixit


How to fix SQL Plan issues using OUTLINE DATA.

Posted by FatDBA on November 30, 2017

Hi Mates,
Today I would like to discuss 'SQL plan stability', specifically situations where the performance of a SQL statement degrades and it performs poorly in one environment while working absolutely fine in the others.
I have experienced such issues a lot over the past few years while working as a performance consultant on many projects, and I recently encountered a similar case that I would like to share.
In short, the situation for the problematic SQL is given below.

The same request reportedly takes 268 sec in SIT:
• I ran it in SIT again –> 500 secs
• I also ran it in DEV –> 20 secs
• I ran it in preSIT –> 5 secs

Below are the details for the ill-behaved SQL.

SQL Text:
SELECT a.port_inst_id AS mdf_es_port_inst_id, a.status AS mdf_es_port_status, d.port_inst_id AS mdf_ls_port_inst_id, d.status AS mdf_ls_port_status, regexp_substr( c.descr, :"SYS_B_00", :"SYS_B_01", :"SYS_B_02" ) AS naming_area, a.status AS mdf_es_port_status2, DECODE( a.reservation_inst_id, NULL, :"SYS_B_03", :"SYS_B_04" ) AS mdf_es_active_reservation, a.port_access_id AS lic_identity, egi_get_uda_value( a.equip_inst_id, :"SYS_B_05", :"SYS_B_06", :"SYS_B_07" ) AS kc, b.name AS coordinate, a.port_hum_id egi_port_name, egi_get_uda_value( d.equip_inst_id, :"SYS_B_08", :"SYS_B_09", :"SYS_B_10" ) AS cabinet_name, e.name AS termination_block_name, d.port_access_id AS pair_name, egi_get_uda_value( d.equip_inst_id, :"SYS_B_11", :"SYS_B_12", :"SYS_B_13" ) AS signing_system, egi_get_uda_value( a.port_inst_id, :"SYS_B_14", :"SYS_B_15", :"SYS_B_16" ) AS processor_exchange, a.circ_path_inst_id AS es_circ_path_inst_id, a.next_path_inst_id AS es_next_path_inst_id, d.circ_path_inst_id AS ls_circ_path_inst_id, d.next_path_inst_id AS ls_next_path_inst_id, DECODE( d.role, :"SYS_B_17", :"SYS_B_18", :"SYS_B_19", :"SYS_B_20", :"SYS_B_21", :"SYS_B_22", :"SYS_B_23", :"SYS_B_24", NULL ) AS cabling_type FROM epa a, card_inst b, equip_inst c, epa d, card_inst e, xxx_xxxx_xxxxxxxx f WHERE a.port_inst_id = f.port_inst_id AND f.val_attr_inst_id = ( SELECT val_attr_inst_id FROM xxx_xxxx_xxxxxxxx WHERE group_name = :"SYS_B_25" AND attr_name = :"SYS_B_26" ) AND a.card_inst_id = b.card_inst_id AND b.type = :"SYS_B_27" AND a.equip_inst_id = c.equip_inst_id AND a.z_wired_port_inst_id = d.port_inst_id (+) AND d.card_inst_id = e.card_inst_id (+) AND regexp_substr( a.port_access_id, :"SYS_B_28", :"SYS_B_29", :"SYS_B_30" ) = :"SYS_B_31" AND to_number(regexp_substr( a.port_access_id, :"SYS_B_32", :"SYS_B_33", :"SYS_B_34" ) ) >= to_number(:"SYS_B_35") AND to_number(regexp_substr( a.port_access_id, :"SYS_B_36", :"SYS_B_37", :"SYS_B_38" ) ) <= to_number(:"SYS_B_39") AND f.attr_value = :"SYS_B_40" ORDER BY :"SYS_B_41"

I see around 5 different PHVs (plans) for this statement; the last two, 18733286 and 2202641467, are the worst plans here, while the first one, PHV 1663132818, is the best among them all.
1663132818 has by far the lowest elapsed time, CPU usage, IO time, LIOs, disk reads, rows processed, and cost.
** From the DBA_HIST_SQLSTAT view.

Let's collect the OUTLINE and query block details of the statement; this can be done using:
select * from table(dbms_xplan.display_awr('SQL_ID', PHV, format => 'ADVANCED'));

Query Block Name / Object Alias (identified by operation id):
————————————————————-

1 - SEL$841DDE77
10 - SEL$841DDE77 / VAL_ATTR_NAME@SEL$2
11 - SEL$841DDE77 / VAL_ATTR_NAME@SEL$2
12 - SEL$841DDE77 / F@SEL$1
13 - SEL$841DDE77 / F@SEL$1
14 - SEL$841DDE77 / A@SEL$1
15 - SEL$841DDE77 / A@SEL$1
16 - SEL$841DDE77 / C@SEL$1
17 - SEL$841DDE77 / C@SEL$1
18 - SEL$841DDE77 / D@SEL$1
19 - SEL$841DDE77 / D@SEL$1
20 - SEL$841DDE77 / E@SEL$1
21 - SEL$841DDE77 / E@SEL$1
22 - SEL$841DDE77 / B@SEL$1
23 - SEL$841DDE77 / B@SEL$1

Outline Data
————-

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('optimizer_index_cost_adj' 10)
FIRST_ROWS(1)
OUTLINE_LEAF(@"SEL$841DDE77")
MERGE(@"SEL$683B0107")
OUTLINE(@"SEL$C772B8D1")
UNNEST(@"SEL$2")
OUTLINE(@"SEL$683B0107")
OUTLINE(@"SEL$7511BFD2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" ("VAL_ATTR_NAME"."GROUP_NAME" "VAL_ATTR_NAME"."ATTR_NAME"))
INDEX_RS_ASC(@"SEL$841DDE77" "F"@"SEL$1" ("PORT_ATTR_SETTINGS"."ATTR_VALUE"))
BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$841DDE77" "F"@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "A"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "C"@"SEL$1" ("EQUIP_INST"."EQUIP_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "D"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "E"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
INDEX(@"SEL$841DDE77" "B"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
LEADING(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" "F"@"SEL$1" "A"@"SEL$1" "C"@"SEL$1" "D"@"SEL$1" "E"@"SEL$1"
"B"@"SEL$1")
USE_NL(@"SEL$841DDE77" "F"@"SEL$1")
USE_NL(@"SEL$841DDE77" "A"@"SEL$1")
USE_NL(@"SEL$841DDE77" "C"@"SEL$1")
USE_NL(@"SEL$841DDE77" "D"@"SEL$1")
USE_NL(@"SEL$841DDE77" "E"@"SEL$1")
USE_NL(@"SEL$841DDE77" "B"@"SEL$1")
NLJ_BATCHING(@"SEL$841DDE77" "B"@"SEL$1")
END_OUTLINE_DATA
*/

Let’s use the outline of the good plan as a hint to force the good execution plan to be used and check the execution time for the query.
Run the query from SQLPLUS using the hint:

SELECT
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('optimizer_index_cost_adj' 10)
FIRST_ROWS(1)
OUTLINE_LEAF(@"SEL$841DDE77")
MERGE(@"SEL$683B0107")
OUTLINE(@"SEL$C772B8D1")
UNNEST(@"SEL$2")
OUTLINE(@"SEL$683B0107")
OUTLINE(@"SEL$7511BFD2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" ("VAL_ATTR_NAME"."GROUP_NAME" "VAL_ATTR_NAME"."ATTR_NAME"))
INDEX_RS_ASC(@"SEL$841DDE77" "F"@"SEL$1" ("PORT_ATTR_SETTINGS"."ATTR_VALUE"))
BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$841DDE77" "F"@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "A"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "C"@"SEL$1" ("EQUIP_INST"."EQUIP_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "D"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "E"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
INDEX(@"SEL$841DDE77" "B"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
LEADING(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" "F"@"SEL$1" "A"@"SEL$1" "C"@"SEL$1" "D"@"SEL$1" "E"@"SEL$1" "B"@"SEL$1")
USE_NL(@"SEL$841DDE77" "F"@"SEL$1")
USE_NL(@"SEL$841DDE77" "A"@"SEL$1")
USE_NL(@"SEL$841DDE77" "C"@"SEL$1")
USE_NL(@"SEL$841DDE77" "D"@"SEL$1")
USE_NL(@"SEL$841DDE77" "E"@"SEL$1")
USE_NL(@"SEL$841DDE77" "B"@"SEL$1")
NLJ_BATCHING(@"SEL$841DDE77" "B"@"SEL$1")
END_OUTLINE_DATA
*/

A.PORT_INST_ID AS MDF_ES_PORT_INST_ID, A.STATUS AS MDF_ES_PORT_STATUS, D.PORT_INST_ID AS MDF_LS_PORT_INST_ID, D.STATUS AS MDF_LS_PORT_STATUS, REGEXP_SUBSTR(C.DESCR,'[^ ]+', 1,1) AS NAMING_AREA, A.STATUS AS MDF_ES_PORT_STATUS2, DECODE(A.RESERVATION_INST_ID, NULL, 'N', 'Y') AS MDF_ES_ACTIVE_RESERVATION, A.PORT_ACCESS_ID AS LIC_IDENTITY, EGI_GET_UDA_VALUE(A.EQUIP_INST_ID,'EQUIPMENT','Eqp Legacy Names','TAD-N') AS KC, B.NAME AS COORDINATE, A.PORT_HUM_ID EGI_PORT_NAME, EGI_GET_UDA_VALUE(D.EQUIP_INST_ID,'EQUIPMENT','Eqp Legacy Names','TAD-N') AS CABINET_NAME, E.NAME AS TERMINATION_BLOCK_NAME, D.PORT_ACCESS_ID AS PAIR_NAME, EGI_GET_UDA_VALUE(D.EQUIP_INST_ID,'EQUIPMENT','Eqp Legacy Names','SIGNSYST') AS SIGNING_SYSTEM, EGI_GET_UDA_VALUE(A.PORT_INST_ID,'PORT','Copper Port Info','Processor Exchange') AS PROCESSOR_EXCHANGE, A.CIRC_PATH_INST_ID AS ES_CIRC_PATH_INST_ID, A.NEXT_PATH_INST_ID AS ES_NEXT_PATH_INST_ID, D.CIRC_PATH_INST_ID AS LS_CIRC_PATH_INST_ID, D.NEXT_PATH_INST_ID AS LS_NEXT_PATH_INST_ID, DECODE(D.ROLE,'Primary','P','Secondary','S','Spare','R','Junction','F',NULL) AS CABLING_TYPE FROM EPA A, CARD_INST B, EQUIP_INST C, EPA D, CARD_INST E, PORT_ATTR_SETTINGS F WHERE A.PORT_INST_ID =F.PORT_INST_ID AND F.VAL_ATTR_INST_ID= (SELECT VAL_ATTR_INST_ID FROM VAL_ATTR_NAME WHERE GROUP_NAME='Copper Port Info' AND ATTR_NAME ='Processor Exchange' ) AND A.CARD_INST_ID =B.CARD_INST_ID AND B.TYPE ='ES' AND A.EQUIP_INST_ID =C.EQUIP_INST_ID AND A.Z_WIRED_PORT_INST_ID =D.PORT_INST_ID(+) AND D.CARD_INST_ID =E.CARD_INST_ID(+) AND REGEXP_SUBSTR(A.PORT_ACCESS_ID,'[^-]+', 1,1) =:B4 AND TO_NUMBER(REGEXP_SUBSTR(A.PORT_ACCESS_ID,'[^-]+', 1,2))>=TO_NUMBER(:B3 ) AND TO_NUMBER(REGEXP_SUBSTR(A.PORT_ACCESS_ID,'[^-]+', 1,2))<=TO_NUMBER(:B2 ) AND F.ATTR_VALUE =:B1 ORDER BY 8

I executed the query in SIT and it now takes no more than 5 seconds with those outlines; that is really good 😊

Now that the best plan is identified and tested, the question comes: how do we fix this permanently?

This can be done using an Oracle-provided script named "coe_xfr_sql_profile.sql", which is part of the popular SQLTXPLAIN (SQLT) tool.
Download it from the My Oracle Support website (Direct Link: https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=458593899434965&parent=DOCUMENT&sourceId=1955195.1&id=215187.1&_afrWindowMode=0&_adf.ctrl-state=11co9htqxp_387#aref_section13)

To correct the cost estimates so the CBO adopts the good execution plan, we can follow the steps below.
• Run the script coe_xfr_sql_profile.sql as a SYSDBA user, providing the sql_id and the good plan hash value (PHV):
SQL> START coe_xfr_sql_profile.sql SQL_ID PHV

• The step above generates a script named in the format "coe_xfr_sql_profile_SQL_ID_PLAN_HASH_VALUE.sql", i.e. including the sql_id and plan hash value.

• If you would like the profile to apply to all literals, modify the force_match parameter in the generated script from false to true: force_match => TRUE
This ensures the profile is used even when different literals appear in the SQL query.

• Run the generated script as a SYSDBA user to correct the optimizer cost estimates used in the execution plan for the sql_id, encouraging it to use the desired plan obtained from the stated plan hash value.

• Next, check whether the profile was accepted successfully:
select SQL_ID, SQL_PROFILE, PLAN_HASH_VALUE from V$SQL where SQL_ID='SQL_ID';

• If the profile later needs to be disabled or dropped, use the SQL_PROFILE name returned by the query above.

To disable the profile:
EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE(Name => '', Attribute_Name => 'STATUS', Value => 'DISABLED');

To drop the profile:
EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE(Name => '');
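To see the profiles currently in place, a quick sketch; the DBA_SQL_PROFILES view also shows whether force matching is enabled:

SELECT name, status, force_matching
FROM   dba_sql_profiles;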

This way we can test and fix plan stability issues with any SQL statement!

Hope It Helps!
Prashant Dixit


CKPT process blocking table gather stats session intermittently … Why ?

Posted by FatDBA on November 1, 2017

Hi Folks,
Today I would like to share an experience from a customer's production system, where we hit a weird situation: gather-stats sessions were getting intermittently blocked by the CKPT database background process, sometimes staying blocked for more than 30 minutes.

We were getting the "enq: RO – fast object reuse" wait contention when gathering schema/table statistics in parallel using the DBMS_STATS package with DEGREE > 1.

During the analysis I generated a system state dump and saw a clear blocking situation on the enqueue RO-00010059-00000001.

A snippet from the system state dump:

Resource Holder State
Enq RO-00010059-00000001 14: waiting for ‘rdbms ipc message’
Enq RO-00010059-00000001 89: 89: is waiting for 14: 89:

The workaround for the problem is either of these two solutions, sketched after this list:
– Flush the buffer cache.
Note that flushing the buffer cache causes dirty blocks to be written to disk and will have some performance impact.
– Set the parameter "_db_fast_obj_truncate" to FALSE.
This reverts to the 9i way of invalidating buffers in the buffer cache.
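A minimal sketch of both workarounds; the underscore parameter should only be changed after checking with Oracle Support, and with SCOPE=SPFILE it takes effect after a restart:

-- Workaround 1: flush the buffer cache (expect a short burst of disk writes)
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

-- Workaround 2: revert to the 9i-style buffer invalidation
SQL> ALTER SYSTEM SET "_db_fast_obj_truncate" = FALSE SCOPE=SPFILE;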

Hope That Helps
Prashant Dixit


What’s new in coming few days here on FATDBA ….

Posted by FatDBA on October 24, 2017

Coming up: some of the performance troubleshooting I have done using specialized tools/scripts – SystemTap, Caliper, strace, ptrace, a few profilers, representations using PGraph and FlameGraphs, and a few of the beautifully written scripts/tools from legends like Brendan Gregg (www.brendangregg.com/), Luca Canali (cern.ch/canali), etc.


DB Sick/Hung/Slow :( …. How to troubleshoot using ‘Real Time ADDM’ ??

Posted by FatDBA on October 24, 2017

The new emergency monitoring feature "Real Time ADDM" (RT ADDM, under the Performance menu of the EM Cloud Control console) allows DBAs to connect to a non-responsive or hung/stalled database. Now, someone might point out that this was earlier achieved using the '-prelim' option, which allows the execution of ORADEBUG commands; but if you remember, that route is not very flexible and permits only limited arguments like hanganalyze, ashdump, etc.
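For comparison, the old-school preliminary connection looks roughly like this (a sketch; level 3 for hanganalyze and 10 minutes of ASH are just common choices):

$ sqlplus -prelim / as sysdba
SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
SQL> oradebug dump ashdump 10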

This all-new 12c feature goes one step further: it connects directly to the SGA (using a lightweight, lock/latch-free connection), runs hang analysis against the Active Session History data, and lets you view blockers and other in-memory performance statistics.

So this is quite helpful at times when we have…
– sick systems;
– a very slow DB;
– a DB hung due to contention for resources;
– DBAs who are unable to log in to the database.

So, before that moment when we are all set to bounce the database, this option is available to take a look inside and understand what the system is doing.

Hope It Helps
Prashant Dixit


How to tune the IO contentions related with the Compaction in Cassandra ?

Posted by FatDBA on August 20, 2017

Hi Fellas,
Back, and this time with some performance tuning scope for Cassandra around the 'compaction' process.
Before I proceed, I would like to explain a bit about compaction in Cassandra: what exactly it is, and why it is a necessary evil.

Compaction in Cassandra refers to the operation of merging multiple SSTables into a single new one. Typically, compaction is done in a database for two primary reasons:

– To reduce the storage usage.
– To improve read performance by merging keys and obtaining a consolidated index.

For example, in Apache Cassandra, data files are merged periodically to form compacted SSTables.

There is a good chance of contention in the database due to compaction activity, as compaction increases I/O contention on SSTable data reads. Writing data in Cassandra is generally fast and the write impact may not be seen, but reading from SSTables becomes slow when I/O contention rises due to compaction activity, degrading the performance of the database.

First, I would like to discuss how to identify compaction-related contention on the database.
– We can use the "nodetool tablestats" command (or the old "nodetool cfstats") to monitor the SSTables. Below is a sample result from one of the Cassandra database servers; two things to watch here:
– whether the SSTable count keeps growing, as that points to contention between SSTable reads and the compaction process;
– reads generally slowing down, for the obvious reason that data is spread or fragmented across many SSTables while compaction runs continuously in the background.

%nodetool tablestats -H dixit.playlist
Keyspace: dixit
Read Count: 182849
Read Latency: 0.11363755339104945 ms.
Write Count: 435355
Write Latency: 0.01956930550929701 ms.
Pending Flushes: 0
Table: standard1
SSTable count: 2
Space used (live): 51.62 MB
Space used (total): 51.62 MB
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 302.36 KB
SSTable Compression Ratio: 0.0
Number of keys (estimate): 376390
Memtable cell count: 200120
Memtable data size: 45.16 MB
Memtable off heap memory used: 0 bytes
Memtable switch count: 2
Local read count: 182849
Local read latency: 0.125 ms
Local write count: 435355
Local write latency: 0.022 ms
Pending flushes: 0
Bloom filter false positives: 11
Bloom filter false ratio: 0.00000
Bloom filter space used: 265.81 KB
Bloom filter off heap memory used: 265.8 KB
Index summary off heap memory used: 36.57 KB
Compression metadata off heap memory used: 0 bytes
Compacted partition minimum bytes: 216 bytes
Compacted partition maximum bytes: 258 bytes
Compacted partition mean bytes: 258 bytes
Average live cells per slice (last five minutes): 1.0
Maximum live cells per slice (last five minutes): 1
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1

Below is the command to check compaction statistics; here you need to look at 'pending tasks' and the total bytes in progress.

$ nodetool compactionstats
pending tasks: 5
compaction type keyspace table completed total unit progress
Compaction Keyspace1 Standard1 282310680 302170540 bytes 93.43%
Compaction Keyspace1 Standard1 58457931 307520780 bytes 19.01%
Active compaction remaining time : 0h00m16s

Solutions to the problem:
1. The first one is quite simple – avoid merge-heavy update/delete requests.
2. Reduce the frequency of in-memory object (memtable) flushes.

This can be done by increasing the size of the memtables, which stops the database from flushing so frequently.
– Fewer flushes lead to fewer SSTables to compact.
– Less compaction reduces the I/O contention, which in turn improves reads.
– There are a couple of parameters you can adjust in your cassandra.yaml file to control flushing, i.e. memtable_flush_after_mins, memtable_throughput_in_mb, memtable_operations_in_millions.

3. One more solution, which only applies on systems where this IO stress is infrequent: reduce the compaction thread priority, which reduces the IO.
Lowering the priority slows down the compaction writes, so only use it if compaction doesn't run frequently.

Add the line below to the cassandra-env.sh file (under the conf folder) to lower the compaction priority.

JVM_OPTS="$JVM_OPTS -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Dcassandra.compaction.priority=1"

One last line I would like to add:
when IO is a genuine problem, you will need to add more nodes or replace the disks with better-performing, higher-IO disks.

Hope It Helps
Prashant Dixit


CELL_OFFLOAD_PROCESSING in Non-Exadata Environments — High 'ksv master wait' and 'ASM File Metadata Operation' Wait Events

Posted by FatDBA on January 27, 2016

Some time back, while performing a performance analysis for one of our databases, I encountered a curious wait event, 'ASM file metadata operation'. Further analysis showed it occurred especially during operations such as DROP TABLESPACE or, as in this case, when using Data Pump. Although the server was basically not doing anything and most of the CPU cycles were idle, there were these strange waits on 'ASM file metadata operation', along with high waits on 'ksv master wait'.

Below is a graphical depiction of the situation.
[image: asm1]

An image highlighting one of the major spikes during the period:
[image: asm2]

Reason:
The CELL_OFFLOAD_PROCESSING parameter, which enables smart-scan offloading, defaults to TRUE. In an Exadata environment that is the suitable setting; in a non-Exadata environment, which this was, it causes performance issues: processes on the RDBMS side wait on a reply from ASM, which is trying to deliver smart-scan results.

Solution:
The following solutions are available for non-Exadata databases:
1. For the quickest relief, use the workaround; it does not negatively impact non-Exadata databases. The parameter is set on the database instance:
alter system set cell_offload_processing = false;
2. Upgrade to 12.1, when available; OR
3. Apply the 11.2.0.3 patch set; OR
4. Apply one-off patch 11800170, if available for your RDBMS and Grid homes.
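To verify the change afterwards (a trivial sketch):

SQL> show parameter cell_offload_processing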


Hope That Helps
Prashant Dixit


 