Tales From A Lazy Fat DBA

It's all about Databases & their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Posts Tagged ‘SQL’

Quarantine a SQL? This can save you from a system performance pandemic …

Posted by FatDBA on July 6, 2021

Hi Guys,

Last weekend I was asked to examine a performance problem where the customer was observing sporadic spikes in CPU and physical I/O (PIO) scans and had no idea how to handle the situation. They had already identified the problematic SQL and asked me to take a look. While doing the initial investigation on the system, I found that Resource Manager was enabled and was killing the ill SQL every time it tried to breach the limits set on CPU and other resources.

I spent some time understanding the query's behavior and performance, and found this was a case of a SQL plan (PHV) flip, where the query optimizer toggles between a good and a bad plan. This being a time-sensitive issue, the customer needed a quick fix before I could identify the reason behind the PHV flip, so we had to find a way to stop the SQL from executing with the bad PHVs. This was an Oracle 19c EE system running on Exadata, so I immediately proposed a solution using the new 'SQL Quarantine' feature, which prevents the reuse of an execution plan that was terminated by Resource Manager for exceeding resource limits.

Let me show you how to do that. This is just one use case: you can quarantine a SQL by SQL ID (covering all of its execution plans), by SQL ID plus a specific plan hash value (PHV), or by SQL text. Here I will demonstrate quarantining a SQL for all of its execution plans.

Note: This feature is currently available only with Enterprise Edition on Engineered Systems, like Exadata and Exadata Cloud Service, but for this demo I will be enabling the '_exadata_feature_on' parameter on my sandbox setup.

** Please don't touch this parameter in your production environments if you're not running Oracle EE on top of Exadata or Exadata Cloud Service.
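For reference, the @hidden script I call below is just a small lookup of underscore parameters against the x$ fixed tables; what follows is a rough sketch of such a script (run as SYSDBA, sandbox only), plus the spfile change and bounce needed to flip the parameter. Treat the exact column and fixed-table names as an assumption and verify on your own system first.

-- Assumed sketch of a hidden-parameter lookup, similar to the @hidden script used below.
select a.ksppinm  parameter,
       b.ksppstvl session_value,
       c.ksppstvl instance_value,
       a.ksppdesc descr
  from x$ksppi a, x$ksppcv b, x$ksppsv c
 where a.indx = b.indx
   and a.indx = c.indx
   and a.ksppinm like '%&param%'
 order by a.ksppinm;

-- Flipping the underscore parameter needs an spfile change plus a restart (done below).
alter system set "_exadata_feature_on" = true scope=spfile;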

[oracle@canttowinsec ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 3 23:33:41 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           FALSE                     FALSE                     Exadata Feature On

SQL>
SQL>


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>


SQL> startup
ORACLE instance started.

Total System Global Area  922745216 bytes
Fixed Size                  8903040 bytes
Variable Size             843055104 bytes
Database Buffers           67108864 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL>



SQL>
SQL> @hidden
Enter value for param: exadata_feature_on
old   5:    and a.ksppinm like '%&param%'
new   5:    and a.ksppinm like '%exadata_feature_on%'

Parameter                                     Session Value             Instance Value            descr
--------------------------------------------- ------------------------- ------------------------- ------------------------------------------------------------
_exadata_feature_on                           TRUE                      TRUE                      Exadata Feature On

SQL>




-- For this demo I have created one table with 1000000 rows. 

SQL> select count(*) from bigtab;

  COUNT(*)
----------
   1000000

Next, check the SQL_ID, PHV and text of the SQL for which you want to set up this QUARANTINE feature, and then create the quarantine configuration for it. You can set it for SQL ID, SQL ID + PHV, or for SQL TEXT.

SQL> select sql_id, plan_hash_value, sql_text from v$sqlarea where sql_text like '%select * from bigtab%';

SQL_ID        PLAN_HASH_VALUE
------------- ---------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
6fwqzurbc8y7k       441133017
select * from bigtab

SQL>


-- You can create a quarantine configuration by SQL_ID (optionally for one specific plan).
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id');
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id', PLAN_HASH_VALUE => '&PLAN_HASH_VALUE');

-- Or by SQL text using CREATE_QUARANTINE_BY_SQL_TEXT.
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(SQL_TEXT => to_clob('select count(*) from emp'));
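As a complete, minimal sketch of the SQL text variant (the statement text here is only an example and must match the target SQL exactly):

-- Sketch: create a quarantine configuration by SQL text.
DECLARE
  quarantine_config VARCHAR2(128);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(
                          SQL_TEXT => to_clob('select * from bigtab'));
  DBMS_OUTPUT.PUT_LINE('Quarantine config created: ' || quarantine_config);
END;
/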



-- This command creates a quarantine configuration for SQL ID '6fwqzurbc8y7k' and all its execution plans.
SQL>
SQL> DECLARE
quarantine_sql VARCHAR2(30);
BEGIN
quarantine_sql :=   DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '6fwqzurbc8y7k');
END;
/  

PL/SQL procedure successfully completed.

SQL>



-- Check if quarantine config/profile is created.

SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time,CREATED, ENABLED from dba_sql_quarantine;


NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           ALWAYS               ALWAYS                         04-JUL-21 12.09.25.567422 AM   YES


Next we will use the DBMS_SQLQ.ALTER_QUARANTINE procedure to put limits on resource usage, i.e. elapsed time, CPU time, I/O in MB, number of physical I/O (PIO) requests, and number of logical I/O (LIO) requests.

SQL>
SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'CPU_TIME',
       PARAMETER_VALUE => '5');

    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_2akx3bq44wvc5',
       PARAMETER_NAME  => 'ELAPSED_TIME',
       PARAMETER_VALUE => '10');
END;
/  2    3    4    5    6    7    8    9   10   11   12

PL/SQL procedure successfully completed.

SQL>


-- Let's verify the limits we have set against this SQL Quarantine profile.

SQL> select NAME, SQL_TEXT, ELAPSED_TIME, cpu_time, CREATED, ENABLED from dba_sql_quarantine;

NAME                           SQL_TEXT                       ELAPSED_TIME         CPU_TIME                       CREATED                        ENA
------------------------------ ------------------------------ -------------------- ------------------------------ ------------------------------ ---
SQL_QUARANTINE_2akx3bq44wvc5   select * from bigtab           10                   5                              04-JUL-21 12.12.41.918609 AM   YES

Next we will setup the RESOURCE MANAGER.

-- Create a pending area.
SQL> begin
 dbms_resource_manager.create_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

-- Create a consumer group with name 'DIXITPOC_GROUP'.
SQL> begin
 dbms_resource_manager.create_consumer_group(CONSUMER_GROUP=>'DIXITPOC_GROUP',COMMENT=>'To test SQL quarantine feature for one SQL');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.


-- Now bind the consumer group with the user 'DIXIT', I mean this could be any group through which you will execute the query.
SQL> begin
 dbms_resource_manager.set_consumer_group_mapping(attribute => 'ORACLE_USER',value => 'DIXIT',consumer_group =>'DIXITPOC_GROUP' );
 end;
  /  2    3    4

PL/SQL procedure successfully completed.


-- Create a resource plan with name 'POC_FOR_QUARANTINE'.
SQL> begin
dbms_resource_manager.create_plan(plan => 'POC_FOR_QUARANTINE',comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds');
end;
/  2    3    4

PL/SQL procedure successfully completed.


-- Create a plan directive by allocating resource plan 'POC_FOR_QUARANTINE' to consumer group 'DIXITPOC_GROUP'. I am setting execution limit of 5 seconds for the SQL.
SQL> begin
 dbms_resource_manager.create_plan_directive(
 plan => 'POC_FOR_QUARANTINE',
 group_or_subplan => 'DIXITPOC_GROUP',
 comment => 'Kill SQL and its other execution plans when exceeding CPU and Elapsed seconds',
 switch_group=>'CANCEL_SQL',
 switch_time => 5,
 switch_estimate=>false);
 end;
 /  2    3    4    5    6    7    8    9   10

PL/SQL procedure successfully completed.




-- Allocate full resources to rest of the sessions which are not part of this plan.
SQL> begin
 dbms_resource_manager.create_plan_directive(PLAN=> 'POC_FOR_QUARANTINE', GROUP_OR_SUBPLAN=>'OTHER_GROUPS',COMMENT=>'leave rest as is');
 end;
 /  2    3    4

PL/SQL procedure successfully completed.



-- Validate and submit pending area.
SQL> begin
 dbms_resource_manager.validate_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL> begin
 dbms_resource_manager.submit_pending_area();
 end;
 /  2    3    4

PL/SQL procedure successfully completed.

SQL>


-- Grant switch privilege to the DIXIT user which will switch it to 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager_privs.grant_switch_consumer_group('DIXIT','DIXITPOC_GROUP',false);
end;
/  2    3    4

PL/SQL procedure successfully completed.

-- Create initial consumer group for DIXIT user for 'DIXITPOC_GROUP' group.
SQL> begin
dbms_resource_manager.set_initial_consumer_group('DIXIT','DIXITPOC_GROUP');
end;
/  2    3    4

PL/SQL procedure successfully completed.

SQL>

Next, set the Resource Manager plan at the database level: we have to set RESOURCE_MANAGER_PLAN to the name of the plan, ‘POC_FOR_QUARANTINE’.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string
SQL> 


SQL> alter system set RESOURCE_MANAGER_PLAN = 'POC_FOR_QUARANTINE';
System altered.

SQL> show parameter resource

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     5
resource_manager_plan                string      POC_FOR_QUARANTINE
SQL>

Execute the SQL for the first time; you will get ‘active time limit exceeded’ because Resource Manager cancels the call.

SQL>
SQL> select * from bigtab;

ERROR at line 2:
ORA-00040: active time limit exceeded - call aborted

Now, if you run the SQL a second time, you will see the quarantine configuration being used, and it will abort the execution of the SQL.

SQL>
SQL> select * from bigtab;

ERROR at line 2:
ORA-56955: quarantined plan used
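A side note, as a rough sketch (column names as per the 19c documentation, so verify on your version): you can confirm that quarantine kicked in from V$SQL, and drop the configuration once the plan flip is properly fixed.

-- Cursors marked as quarantined and the executions avoided because of it.
select sql_id, plan_hash_value, sql_quarantine, avoided_executions
  from v$sql
 where sql_quarantine is not null;

-- Remove the quarantine configuration when it is no longer needed.
exec DBMS_SQLQ.DROP_QUARANTINE('SQL_QUARANTINE_2akx3bq44wvc5');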

Hope It Helped
Prashant Dixit


Posted in Advanced, troubleshooting | Tagged: , , , , | 1 Comment »

Script to measure the Source & Contribution of any Wait Event within AWR snapshots.

Posted by FatDBA on January 1, 2018

🤘 Happy New Year Everyone! 🤘

So here I am with the maiden post of this year.
Today I will discuss a script that can be handy and useful while you are investigating a performance problem in the database.
In fact, this is one of my favorite scripts, which I always consider running once we have identified the leading wait event and the time frame of the spike, or of any transient variation in system behavior. This script helps you measure the source and the contribution, or impact, of any specific wait event.

The script is a join between the AWR history views (dba_hist_active_sess_history and dba_hist_snapshot) and tells you, for each source (SQL ID, program, module), how many ASH samples (a proxy for Average Active Sessions, AAS) were waiting on this event during the period and what percentage of the total that represents.

Let's work on a use case.
Suppose you have a system with heavy User I/O, as confirmed by a report or tool (AWR, ASH, a script, OEM, etc.).

For example, using AWR we found that the User I/O wait-class event ‘db file scattered read’ was leading the ‘Top Foreground Wait Events’ chart during the window when the system had the high User I/O load.

                                           Total Wait       Avg   % DB Wait
Event                                Waits Time (sec)      Wait   time Class
------------------------------ ----------- ---------- --------- ------ --------
db file scattered read              10,282       82.8    41.50ms  43.9 User I/O

So now you have identified the leading wait event and the time frame with the maximum intensity or frequency. Let's call the script.

Script Inputs:
Example:

Enter value for event_class:
User I/O
Enter value for event_name: db file scattered read
Enter value for begin_snap: 193
Enter value for end_snap: 200
Enter value for dbid: 2896132084

WITH
events AS (
SELECT /*+ MATERIALIZE NO_MERGE */
       SUBSTR(TRIM(h.sql_id||' '||h.program||' '||
       CASE h.module WHEN h.program THEN NULL ELSE h.module END), 1, 128) source,
       h.dbid,
       COUNT(*) samples
  FROM dba_hist_active_sess_history h,
       dba_hist_snapshot s
 WHERE h.wait_class = TRIM('&Event_Class') AND h.event = TRIM('&Event_Name')
   AND h.snap_id BETWEEN &Begin_Snap AND &End_Snap
   AND h.dbid = &dbid
   AND s.snap_id = h.snap_id
   AND s.dbid = h.dbid
   AND s.instance_number = h.instance_number
 GROUP BY
       h.sql_id,
       h.program,
       h.module,
       h.dbid
 ORDER BY
       3 DESC
),
total AS (
SELECT SUM(samples) samples,
       SUM(CASE WHEN ROWNUM > 15 THEN samples ELSE 0 END) others
  FROM events
)
SELECT e.source,
       e.samples,
       ROUND(100 * e.samples / t.samples, 1) percent,
       (SELECT DBMS_LOB.SUBSTR(s.sql_text, 1000, 1) FROM dba_hist_sqltext s WHERE s.sql_id = SUBSTR(e.source, 1, 13) AND s.dbid = e.dbid AND ROWNUM = 1) sql_text
  FROM events e,
       total t
 WHERE ROWNUM <= 15
   AND ROUND(100 * e.samples / t.samples, 1) > 0.1
 UNION ALL
SELECT 'Others',
       others samples,
       ROUND(100 * others / samples, 1) percent,
       NULL sql_text
  FROM total
 WHERE others > 0
   AND ROUND(100 * others / samples, 1) > 0.1;

Below is the output of the query.

SOURCE                                                                      SAMPLES      PERCENT       SQL_TEXT
------------------------------------------------------------------------  ---------- ---------------  -------------------------------
5av23g8w7f3ka sqlplus@dixitlab.localdomain (TNS V1-V3)                          1          45.23       select * from dixit.bigtab
9a7gbkahasj1a sqlplus@dixitlab.localdomain (TNS V1-V4)                          1           5          select * from dixit.gianttable
OTHERS                                                                          14         49.77

Okay, so the result shows that one SQL statement, with SQL ID 5av23g8w7f3ka, was responsible for more than 45% of these waits.
So now you have the evidence and can start troubleshooting that specific statement to reduce the User I/O, for example by pulling its AWR history as shown below.
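As a possible next step (a sketch, not part of the script above), you can pull the plans and per-snapshot statistics of the culprit SQL straight from AWR:

-- All plans AWR captured for the offending SQL_ID.
select * from table(dbms_xplan.display_awr('5av23g8w7f3ka', null, null, 'ADVANCED'));

-- Per-snapshot execution statistics for the same SQL_ID.
select snap_id, plan_hash_value, executions_delta, disk_reads_delta, buffer_gets_delta
  from dba_hist_sqlstat
 where sql_id = '5av23g8w7f3ka'
 order by snap_id;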

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , , | 3 Comments »

How to use “SQL Patch” technique to fix Query performance issues.

Posted by FatDBA on November 30, 2017

Hi Guys,

In my last post, “How to fix SQL Plan issues using OUTLINE DATA”, we learned how to fix wrong cost estimates by the CBO using the OUTLINE DATA of an available optimal plan. But there are a few issues with that approach: it really only works for ad-hoc statements and fails to handle situations like these:

– What do you do when the SQL comes from a bundled application and cannot be modified?
– How do you get to a smaller set of hints that still leads to the better execution plan?
– What if it is not possible to use SQL Plan Baselines?

As discussed in that same post, where we attached the full list of OUTLINE DATA, it looks a little complicated, dirty and disordered to add the full outline set to the SQL statement.
ex: select /*+ …full outline here…*/

So, how do we add only the required set of HINTS to force the better plan for the SQL?

Let's do a test to explain this subject in detail.

SQL TEXT: select count(*) from dixit.test;
Problem Statement: The query takes a huge amount of time to complete the COUNT of rows for the TEST table and does a FULL TABLE SCAN despite available indexes.

Now, after querying the DBA_HIST_SQLSTAT view, I saw a total of 2 execution plans available, and the query is somehow picking the wrong (costliest) plan.
So, let's grab the OUTLINE data of the better plan first.

select * from table(dbms_xplan.display_awr('SQL_ID', PHV, format => 'ADVANCED'));
OR
EXPLAIN PLAN for select count(*) from dixit.test;
select * from table(dbms_xplan.display(format=>'+OUTLINE'));

Outline Data
————-
/*+
BEGIN_OUTLINE_DATA
INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.2.0.4')
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Let's create a SQL PATCH for the statement to force it to use the required hint.

begin
  dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select count(*) from dixit.test',
    hint_text => 'INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1")',
    name      => 'test_sql_patch_dixit');
end;
/
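Note that DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH is the internal route used on 11g/12.1. From 12.2 onwards there is a documented DBMS_SQLDIAG.CREATE_SQL_PATCH function that returns the patch name; a rough sketch (parameter names as per the documented API) would be:

-- 12.2+ public API sketch: returns the generated patch name.
DECLARE
  patch_name VARCHAR2(128);
BEGIN
  patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
                   sql_text  => to_clob('select count(*) from dixit.test'),
                   hint_text => 'INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1")',
                   name      => 'test_sql_patch_dixit');
END;
/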

I ran the query once again and, as expected, it completed this time with a much lower elapsed time and used an index scan in place of the expensive FTS.
I also noticed a new NOTE coming just below the execution plan.

Note
--------
- SQL patch "test_sql_patch_dixit" used for this statement

In case you want to delete the patch:

begin
DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'test_sql_patch_dixit');
end;
/


HOPE IT HELPS!
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

How to fix SQL Plan issues using OUTLINE DATA.

Posted by FatDBA on November 30, 2017

Hi Mates,
Today I would like to discuss ‘SQL Plan Stability’, specifically situations where the performance of a SQL statement degrades and it performs poorly in one environment while working absolutely fine in others.
I have experienced such issues a lot during the past few years while working as a performance consultant on many projects, but I recently encountered a similar issue again and would like to discuss it.
In short, the situation for the problematic SQL is given below.

The same request reportedly takes 268 sec in SIT:
• I ran it in SIT again –> 500 secs
• I ran it also in DEV –> 20 secs
• I ran it in preSIT –> 5 secs

Below are the details for the given Ill-SQL.

SQL Text:
SELECT a.port_inst_id AS mdf_es_port_inst_id, a.status AS mdf_es_port_status, d.port_inst_id AS mdf_ls_port_inst_id, d.status AS mdf_ls_port_status, regexp_substr( c.descr, :"SYS_B_00", :"SYS_B_01", :"SYS_B_02" ) AS naming_area, a.status AS mdf_es_port_status2, DECODE( a.reservation_inst_id, NULL, :"SYS_B_03", :"SYS_B_04" ) AS mdf_es_active_reservation, a.port_access_id AS lic_identity, egi_get_uda_value( a.equip_inst_id, :"SYS_B_05", :"SYS_B_06", :"SYS_B_07" ) AS kc, b.name AS coordinate, a.port_hum_id egi_port_name, egi_get_uda_value( d.equip_inst_id, :"SYS_B_08", :"SYS_B_09", :"SYS_B_10" ) AS cabinet_name, e.name AS termination_block_name, d.port_access_ id AS pair_name, egi_get_uda_value( d.equip_inst_id, :"SYS_B_11", :"SYS_B_12", :"SYS_B_13" ) AS signing_system, egi_get_uda_value( a.port_inst_id, :"SYS_B_14", :"SYS_B_15", :"SYS_B_16" ) AS processor_exchange, a.circ_path_inst_id AS es_circ_path_inst_id, a.next_path_inst_id AS es_next_path_inst_id, d.circ_path_inst_id AS ls_circ_path_inst_id, d.next_path_inst_id AS ls_next_path_inst_id, DECODE( d.role, :"SYS_B_17", :"SYS_B_18", :"SYS_B_19", :"SYS_B_20", :"SYS_B_21", :"SYS_B_22", :"SYS_B_23", :"SYS_B_24", NULL ) AS cabling_type FROM epa a, card_inst b, equip_inst c, epa d, card_inst e, xxx_xxxx_xxxxxxxx f WHERE a.port_inst_id = f.port_inst_id AND f.val_attr_inst_id = ( SELECT val_attr_inst_id FROM xxx_xxxx_xxxxxxxx WHERE group_name = :"SYS_B_25" AND attr_name = :"SYS_B_26" ) AND a.card_inst_id = b.card_inst_id AND b.type = :"SYS_B_27" AND a.equip_inst_id = c.equip_inst_id AND a.z_wired_port_inst_id = d.port_inst_id (+) AND d.card_inst_id = e.card_inst_id (+) AND regexp_substr( a.port_access_id, :"SYS_B_28", :"SYS_B_29", :"SYS_B_30" ) = :"SYS_B_31" AND to_number(regexp_substr( a.port_access_id, :"SYS_B_32", :"SYS_B_33", :"SYS_B_34" ) ) >= to_number(: "SYS_B_35") AND to_number(regexp_substr( a.port_access_id, :"SYS_B_36", :"SYS_B_37", :"SYS_B_38" ) ) <= to_number(:"SYS_B_39") AND f.attr_value = :"SYS_B_40" ORDER BY :"SYS_B_41"

I see around 5 different PHVs (plans) for this statement; the last two, 18733286 and 2202641467, are the worst plans here, while the first one, PHV 1663132818, is the best among them all.
1663132818 has by far the lowest elapsed time, CPU usage, IO time, LIOs, disk reads, rows processed and cost.
** From the DBA_HIST_SQLSTAT view.

Let's collect the OUTLINE and Query Block details of the statement; these can be pulled using:
select * from table(dbms_xplan.display_awr('SQL_ID', PHV, format => 'ADVANCED'));

Query Block Name / Object Alias (identified by operation id):
————————————————————-

1 - SEL$841DDE77
10 - SEL$841DDE77 / VAL_ATTR_NAME@SEL$2
11 - SEL$841DDE77 / VAL_ATTR_NAME@SEL$2
12 - SEL$841DDE77 / F@SEL$1
13 - SEL$841DDE77 / F@SEL$1
14 - SEL$841DDE77 / A@SEL$1
15 - SEL$841DDE77 / A@SEL$1
16 - SEL$841DDE77 / C@SEL$1
17 - SEL$841DDE77 / C@SEL$1
18 - SEL$841DDE77 / D@SEL$1
19 - SEL$841DDE77 / D@SEL$1
20 - SEL$841DDE77 / E@SEL$1
21 - SEL$841DDE77 / E@SEL$1
22 - SEL$841DDE77 / B@SEL$1
23 - SEL$841DDE77 / B@SEL$1

Outline Data
————-

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('optimizer_index_cost_adj' 10)
FIRST_ROWS(1)
OUTLINE_LEAF(@"SEL$841DDE77")
MERGE(@"SEL$683B0107")
OUTLINE(@"SEL$C772B8D1")
UNNEST(@"SEL$2")
OUTLINE(@"SEL$683B0107")
OUTLINE(@"SEL$7511BFD2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" ("VAL_ATTR_NAME"."GROUP_NAME" "VAL_ATTR_NAME"."ATTR_NAME"))
INDEX_RS_ASC(@"SEL$841DDE77" "F"@"SEL$1" ("PORT_ATTR_SETTINGS"."ATTR_VALUE"))
BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$841DDE77" "F"@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "A"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "C"@"SEL$1" ("EQUIP_INST"."EQUIP_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "D"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "E"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
INDEX(@"SEL$841DDE77" "B"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
LEADING(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" "F"@"SEL$1" "A"@"SEL$1" "C"@"SEL$1" "D"@"SEL$1" "E"@"SEL$1"
"B"@"SEL$1")
USE_NL(@"SEL$841DDE77" "F"@"SEL$1")
USE_NL(@"SEL$841DDE77" "A"@"SEL$1")
USE_NL(@"SEL$841DDE77" "C"@"SEL$1")
USE_NL(@"SEL$841DDE77" "D"@"SEL$1")
USE_NL(@"SEL$841DDE77" "E"@"SEL$1")
USE_NL(@"SEL$841DDE77" "B"@"SEL$1")
NLJ_BATCHING(@"SEL$841DDE77" "B"@"SEL$1")
END_OUTLINE_DATA
*/

Let’s use the outline of the good plan as a hint to force the good execution plan to be used and check the execution time for the query.
Run the query from SQLPLUS using the hint:

SELECT
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('optimizer_index_cost_adj' 10)
FIRST_ROWS(1)
OUTLINE_LEAF(@"SEL$841DDE77")
MERGE(@"SEL$683B0107")
OUTLINE(@"SEL$C772B8D1")
UNNEST(@"SEL$2")
OUTLINE(@"SEL$683B0107")
OUTLINE(@"SEL$7511BFD2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" ("VAL_ATTR_NAME"."GROUP_NAME" "VAL_ATTR_NAME"."ATTR_NAME"))
INDEX_RS_ASC(@"SEL$841DDE77" "F"@"SEL$1" ("PORT_ATTR_SETTINGS"."ATTR_VALUE"))
BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$841DDE77" "F"@"SEL$1")
INDEX_RS_ASC(@"SEL$841DDE77" "A"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "C"@"SEL$1" ("EQUIP_INST"."EQUIP_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "D"@"SEL$1" ("EPA"."PORT_INST_ID"))
INDEX_RS_ASC(@"SEL$841DDE77" "E"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
INDEX(@"SEL$841DDE77" "B"@"SEL$1" ("CARD_INST"."CARD_INST_ID"))
LEADING(@"SEL$841DDE77" "VAL_ATTR_NAME"@"SEL$2" "F"@"SEL$1" "A"@"SEL$1" "C"@"SEL$1" "D"@"SEL$1" "E"@"SEL$1"
"B"@"SEL$1")
USE_NL(@"SEL$841DDE77" "F"@"SEL$1")
USE_NL(@"SEL$841DDE77" "A"@"SEL$1")
USE_NL(@"SEL$841DDE77" "C"@"SEL$1")
USE_NL(@"SEL$841DDE77" "D"@"SEL$1")
USE_NL(@"SEL$841DDE77" "E"@"SEL$1")
USE_NL(@"SEL$841DDE77" "B"@"SEL$1")
NLJ_BATCHING(@"SEL$841DDE77" "B"@"SEL$1")
END_OUTLINE_DATA
*/

A.PORT_INST_ID AS MDF_ES_PORT_INST_ID, A.STATUS AS MDF_ES_PORT_STATUS, D.PORT_INST_ID AS MDF_LS_PORT_INST_ID, D.STATUS AS MDF_LS_PORT_STATUS, REGEXP_SUBSTR(C.DESCR,'[^ ]+’, 1,1) AS NAMING_AREA, A.STATUS AS MDF_ES_PORT_STATUS2, DECODE(A.RESERVATION_INST_ID, NULL, ‘N’, ‘Y’) AS MDF_ES_ACTIVE_RESERVATION, A.PORT_ACCESS_ID AS LIC_IDENTITY, EGI_GET_UDA_VALUE(A.EQUIP_INST_ID,’EQUIPMENT’,’Eqp Legacy Names’,’TAD-N’) AS KC, B.NAME AS COORDINATE, A.PORT_HUM_ID EGI_PORT_NAME, EGI_GET_UDA_VALUE(D.EQUIP_INST_ID,’EQUIPMENT’,’Eqp Legacy Names’,’TAD-N’) AS CABINET_NAME, E.NAME AS TERMINATION_BLOCK_NAME, D.PORT_ACCESS_ID AS PAIR_NAME, EGI_GET_UDA_VALUE(D.EQUIP_INST_ID,’EQUIPMENT’,’Eqp Legacy Names’,’SIGNSYST’) AS SIGNING_SYSTEM, EGI_GET_UDA_VALUE(A.PORT_INST_ID,’PORT’,’Copper Port Info’,’Processor Exchange’) AS PROCESSOR_EXCHANGE, A.CIRC_PATH_INST_ID AS ES_CIRC_PATH_INST_ID, A.NEXT_PATH_INST_ID AS ES_NEXT_PATH_INST_ID, D.CIRC_PATH_INST_ID AS LS_CIRC_PATH_INST_ID, D.NEXT_PATH_INST_ID AS LS_NEXT_PATH_INST_ID, DECODE(D.ROLE,’Primary’,’P’,’Secondary’,’S’,’Spare’,’R’,’Junction’,’F’,NULL) AS CABLING_TYPE FROM EPA A, CARD_INST B, EQUIP_INST C, EPA D, CARD_INST E, PORT_ATTR_SETTINGS F WHERE A.PORT_INST_ID =F.PORT_INST_ID AND F.VAL_ATTR_INST_ID= (SELECT VAL_ATTR_INST_ID FROM VAL_ATTR_NAME WHERE GROUP_NAME=’Copper Port Info’ AND ATTR_NAME =’Processor Exchange’ ) AND A.CARD_INST_ID =B.CARD_INST_ID AND B.TYPE =’ES’ AND A.EQUIP_INST_ID =C.EQUIP_INST_ID AND A.Z_WIRED_PORT_INST_ID =D.PORT_INST_ID(+) AND D.CARD_INST_ID =E.CARD_INST_ID(+) AND REGEXP_SUBSTR(A.PORT_ACCESS_ID,'[^-]+’, 1,1) =:B4 AND TO_NUMBER(REGEXP_SUBSTR(A.PORT_ACCESS_ID,'[^-]+’, 1,2))>=TO_NUMBER(:B3 ) AND TO_NUMBER(REGEXP_SUBSTR(A.PORT_ACCESS_ID,'[^-]+’, 1,2))<=TO_NUMBER(:B2 ) AND F.ATTR_VALUE =:B1 ORDER BY 8

I’ve executed the query in SIT and it’s taking no more than 5 seconds now with those OUTLINES, that is reeeeally good 😊

Now when the best plan is identified and tested, Question comes “HOW TO FIX THIS PERMANENTLY ??”

This can be done using an Oracle-provided script named “coe_xfr_sql_profile.sql”, which is part of the popular SQLTXPLAIN (SQLT) tool.
Download it from My Oracle Support (Direct Link: https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=458593899434965&parent=DOCUMENT&sourceId=1955195.1&id=215187.1&_afrWindowMode=0&_adf.ctrl-state=11co9htqxp_387#aref_section13)

In order to fix or correct the cost estimates for CBO to adopt a good execution plan we can follow below steps.
• Run the script coe_xfr_sql_profile.sql as SYSDBA user providing the sql_id and the good Plan Hash Value (PHV)
SQL> START coe_xfr_sql_profile.sql SQL_ID PHV

• The above step generates a script named in the format coe_xfr_sql_profile_SQL_ID_PLAN_HASH_VALUE.sql, i.e. including the sql_id and plan hash value.
Example: coe_xfr_sql_profile_SQLID_PHV.sql

• If you would like the profile to apply to all literals, modify the force_match parameter in the generated script from false to true: force_match => TRUE.
This ensures the profile will be used even when different literals appear in the SQL query.

• Run the generated script as SYSDBA user in order to correct the optimizer cost estimates used in the execution plan for the sql_id and encourage it to use the desired plan obtained from the stated plan hash value.

• Next we can check whether the profile was accepted successfully (a dictionary-side check is also sketched after this list):
select SQL_ID, SQL_PROFILE, PLAN_HASH_VALUE from V$SQL where SQL_ID='SQL ID';

• In case the profile needs to be disabled or dropped, do the following.
The profile can be disabled or dropped as follows, using the SQL_PROFILE name returned from the query above:

EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE(Name => '', Attribute_Name => 'STATUS', Value => 'DISABLED');

To drop the profile:
EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE(Name => '');
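To double-check from the dictionary side as well, here is a small hedged example; coe_xfr-generated profiles are usually named coe_<sql_id>_<plan_hash_value>:

-- Profile status and force-matching setting from the dictionary.
select name, status, force_matching, created
  from dba_sql_profiles
 where name like 'coe%';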

This way we can test and fix plan stability issues with any SQL statement!

Hope It Helps!
Prashant Dixit

Posted in Advanced | Tagged: , , | 4 Comments »

Worst Execution Plan ever … Elapsed time of 999:59:59 :(

Posted by FatDBA on July 12, 2014

A few days back I came across one of the worst execution plans that I have ever seen in my whole life: a query taking 999 hours, 59 minutes and 59 seconds to complete, at least as the optimizer estimated it.

Look at this shocker below.
1. Elapsed time to complete the parent UPDATE operation: 999:59:59
2. With only 167K rows to be processed.
3. The cost is coming out huge too: 32G

ORIGINAL PLAN:
--------------
Plan hash value: 2191276670

--------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT             |                   |   167K|  7685K|    32G  (1)|999:59:59 |
|   1 |  UPDATE                      | XXXXX             |       |       |            |          |
|   2 |   NESTED LOOPS OUTER         |                   |   167K|  7685K| 16333   (1)| 00:03:49 |
|   3 |    TABLE ACCESS FULL         | XXXXX             |   167K|  5069K| 16317   (1)| 00:03:49 |
|*  4 |    INDEX UNIQUE SCAN         | IBXXXX            |     1 |    16 |     0   (0)| 00:00:01 |
|*  5 |   TABLE ACCESS BY INDEX ROWID| TXXXX             |     1 |    43 |   194K  (1)| 00:45:17 |
|*  6 |    INDEX SKIP SCAN           | IXXXX_490008000_1 |   191K|       |  2261   (1)| 00:00:32 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
—————————————————

4 - access("H"."C1"="BXXXX"."C1"(+))
5 - filter("XXXX01238500"=:B1 AND "XXXX00426800"='Incident Resolution Time')
6 - access("XXXX00365100"=4)
    filter("XXXX00365100"=4)

I tried the below steps to eradicate the problem.
1. Tried to restructure the SQL statement, but it was of no use.

2. Made a composite index on the impacted table XXXXX. With that, the run time only came down to 2 hours and 40 minutes. Although that execution time could be acceptable, the team still wanted to shrink the change window as much as possible, because in total it would take 5 hours 20 minutes across the run of the 2 queries.

3. With that in mind, and after reading a few performance documents, we tried to apply some fixes in case we were hitting a bug during the run of the queries.
After applying all possible fixes, the problem endured.

SQL> select optimizer_feature_enable,count(*) from v$system_fix_control group by optimizer_feature_enable order by 1 asc;

OPTIMIZER_FEATURE_ENABLE    COUNT(*)
------------------------- ----------
10.1.0                             3
10.1.0.3                           1
10.1.0.5                           2
10.2.0.1                           7
10.2.0.2                          12
10.2.0.3                          11
10.2.0.4                          73
10.2.0.5                         112
11.1.0.6                          40
11.1.0.7                          19
11.2.0.1                          67

OPTIMIZER_FEATURE_ENABLE    COUNT(*)
------------------------- ----------
11.2.0.2                         106
11.2.0.3                          75
8.0.0                             63
8.1.6                              1
8.1.7                              2
9.2.0                              8
9.2.0.8                            3
47

19 rows selected.

I tried some blind applies to narrow down the exact bug and ended up applying a few of the bug fixes.
Note: I always advocate executing the same on your test servers first, or performing an impact analysis, before touching hidden parameters or fix controls.

SQL> select * from v$system_fix_control where optimizer_feature_enable='11.2.0.3' and description like '%skip%';

BUGNO      VALUE SQL_FEATURE                              DESCRIPTION                                                      OPTIMIZER_FEATURE_ENABLE       EVENT IS_DEFAULT
---------- ---------- ---------------------------------------- ---------------------------------------------------------------- ------------------------- ---------- ----------
10080014          1 QKSFM_CBO_10080014                       allow skip scan costing for PRIOR join in CONNECT BY query       11.2.0.3                           0         1
9569678          1 QKSFM_CBO_9569678                        skip identity operator when generating NULL IS NOT NULL          11.2.0.3                           0         1
11744086          1 QKSFM_PLACE_GROUP_BY_11744086            skip SCG for query block with no aggregating columns             11.2.0.3                           0         1
9227576          1 QKSFM_CBO_9227576                        allow skip scan costing for semi/anti-join                       11.2.0.3                           0         1

SQL> alter session set "_fix_control"='9814067:OFF';
Session altered.

SQL> alter session set "_fix_control"='10038373:OFF';
Session altered.

SQL> alter session set "_fix_control"='8893626:OFF';
Session altered.

Ended up with no impact on the queries.

3. Used one SQL HINT: optimizer_features_enable('10.1.0')
This forces the query optimizer to use the behaviour of the older, compatible version 10.1.0 while creating the plan (placement of the hint is sketched below).
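The sketch below only shows where the hint goes; the table and column names are placeholders, since the real objects are masked in the plans shown here.

-- Sketch only: force pre-10.2 optimizer behaviour for this one statement.
UPDATE /*+ optimizer_features_enable('10.1.0') */ xxxxx h
   SET h.target_col = (SELECT t.source_col
                         FROM txxxx t
                        WHERE t.key_col = h.key_col);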

Look at the plan soon after the change.

-------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 728359962

------------------------------------------------------------------------------
| Id  | Operation           | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT    |        |   167K|  7685K| 16361   (1)| 00:03:50 |
|   1 |  UPDATE             | XXXXX  |       |       |            |          |
|   2 |   NESTED LOOPS OUTER|        |   167K|  7685K| 16361   (1)| 00:03:50 |
|   3 |    TABLE ACCESS FULL| XXXXX  |   167K|  5069K| 16268   (1)| 00:03:48 |
|*  4 |    INDEX UNIQUE SCAN| IBXXXX |     1 |    16 |     0   (0)| 00:00:01 |
|*  5 |   TABLE ACCESS FULL | TXXXX  |     1 |    43 |  7099   (1)| 00:01:40 |
------------------------------------------------------------------------------

WHOA!!! Execution time with some realistic values now, and the query completed in only 3 minutes after using the HINT.

Thanks
Prashant Dixit

Posted in Advanced | Tagged: , | 3 Comments »

TKProf Utility.

Posted by FatDBA on January 17, 2013

TKPROF:
————————–
TKProf is an Oracle-provided trace-reading utility that converts raw SQL trace files into a much more human-readable format.
It requires you to enable SQL_TRACE and set TIMED_STATISTICS, which makes Oracle write trace files with extra timing information.

Example:
SQL> alter session set timed_statistics = true;
Session altered.

SQL> alter session set sql_trace = true;
Session altered.

I’ve created a query to find my Oracle Server process id:
SELECT p.spid oracle_dedicated_process, s.process clientpid FROM v$process p, v$session s WHERE p.addr = s.paddr AND s.sid = (select sid from v$mystat where rownum = 1);

ORACLE_DEDIC CLIENTPID
———— ————
11653        11636

SQL> select * from scott.emp where job='CLERK';

EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
———- ———- ——— ———- ——— ———- ———- ———-
7369 SMITH      CLERK           7902 17-DEC-80        800                    20
7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
7900 JAMES      CLERK           7698 03-DEC-81        950                    30
7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

[oracle@localhost udump]$ ls -ltr
total 128

-rw-r----- 1 oracle oinstall   635 Jan 17 23:05 orcl_ora_11331.trc
-rw-r----- 1 oracle oinstall   692 Jan 17 23:05 orcl_ora_11358.trc
-rw-r----- 1 oracle oinstall 21454 Jan 17 23:08 orcl_ora_11359.trc
-rw-r----- 1 oracle oinstall  2862 Jan 17 23:31 orcl_ora_11653.trc

[oracle@localhost udump]$ tkprof orcl_ora_11653.trc Prashant_tkprof_study.txt

TKPROF: Release 10.2.0.1.0 – Production on Thu Jan 17 23:32:41 2013
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

[oracle@localhost udump]$ ls -ltr
total 145

-rw-r--r-- 1 oracle oinstall  5925 Jan 17 23:32 Prashant_tkprof_study.txt

 
[oracle@localhost udump]$ cat Prashant_tkprof_study.txt

TKPROF: Release 10.2.0.1.0 – Production on Thu Jan 17 23:32:41 2013

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Trace file: orcl_ora_11653.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace = true

call     count       cpu    elapsed       disk      query    current        rows
——- ——  ——– ———- ———- ———- ———-  ———-
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
——- ——  ——– ———- ———- ———- ———-  ———-
total        1      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
********************************************************************************

SELECT p.spid oracle_dedicated_process, s.process clientpid
FROM
v$process p, v$session s WHERE p.addr = s.paddr AND s.sid = (select sid from
v$mystat where rownum = 1)

call     count       cpu    elapsed       disk      query    current        rows
——- ——  ——– ———- ———- ———- ———-  ———-
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
——- ——  ——– ———- ———- ———- ———-  ———-
total        4      0.01       0.01          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
——-  —————————————————
1  NESTED LOOPS  (cr=0 pr=0 pw=0 time=2133 us)
1   HASH JOIN  (cr=0 pr=0 pw=0 time=2511 us)
1    FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=643 us)
1     COUNT STOPKEY (cr=0 pr=0 pw=0 time=20 us)
1      FIXED TABLE FULL X$KSUMYSTA (cr=0 pr=0 pw=0 time=14 us)
1       FIXED TABLE FULL X$KSUSGIF (cr=0 pr=0 pw=0 time=4 us)
20    FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=43 us)
1   FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=6 us)

********************************************************************************

select *
from
scott.emp where job='CLERK'

call     count       cpu    elapsed       disk      query    current        rows
——- ——  ——– ———- ———- ———- ———-  ———-
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          8          0           4
——- ——  ——– ———- ———- ———- ———-  ———-
total        4      0.00       0.00          0          8          0           4

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
——-  —————————————————
4  TABLE ACCESS FULL EMP (cr=8 pr=0 pw=0 time=388 us)

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
——- ——  ——– ———- ———- ———- ———-  ———-
Parse        2      0.01       0.01          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0          8          0           5
——- ——  ——– ———- ———- ———- ———-  ———-
total        9      0.01       0.02          0          8          0           5

Misses in library cache during parse: 2
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
——- ——  ——– ———- ———- ———- ———-  ———-
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
——- ——  ——– ———- ———- ———- ———-  ———-
total        0      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0

3  user  SQL statements in session.
0  internal SQL statements in session.
3  SQL statements in session.
********************************************************************************
Trace file: orcl_ora_11653.trc
Trace file compatibility: 10.01.00
Sort options: default

1  session in tracefile.
3  user  SQL statements in trace file.
0  internal SQL statements in trace file.
3  SQL statements in trace file.
3  unique SQL statements in trace file.
48  lines in trace file.
35  elapsed seconds in trace file.
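A small hedged tip, not used in the session above: tkprof accepts a few useful options, for example sorting statements by elapsed time and excluding recursive SYS SQL (the output file name below is just an example).

tkprof orcl_ora_11653.trc Prashant_tkprof_sorted.txt sort=prsela,exeela,fchela sys=no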

Posted in Advanced | Tagged: , , | Leave a Comment »

ALTER SYSTEM SWITCH LOGFILE vs ALTER SYSTEM ARCHIVELOG CURRENT vs ALTER SYSTEM ARCHIVE LOG ALL

Posted by FatDBA on November 18, 2012

Both ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVELOG CURRENT will force a log switch, but they do it in different ways.

ARCHIVELOG CURRENT waits for the archiving to complete, and this can take several minutes for multi-gigabyte redo logs.  Conversely, the ALTER SYSTEM SWITCH LOGFILE command is very fast and returns control to the caller in less than a second, while ALTER SYSTEM ARCHIVELOG CURRENT pauses until the write has finished.

  • ALTER SYSTEM SWITCH LOGFILE is asynchronous:  the command returns quickly to the invoking program because the writing of the redo log to the OS filesystem is done in the background.  There is a small risk in cases where the ARCH process cannot complete writing the archived log, such as when the archive destination is out of space.  It is also risky because the calling script may move on to a subsequent step, assuming that the redo has been written; some scripts place a SLEEP 60 command in their backup script to allow time for the archiving to complete, but this is not a best practice.
  • ALTER SYSTEM ARCHIVELOG CURRENT is synchronous:  it is slower to return because the command waits until the online redo log has been completely archived to the filesystem.  This command is safer because it waits for the OS to acknowledge (ACK) that the archived log has been successfully written.  Hence, ALTER SYSTEM ARCHIVELOG CURRENT is the best practice for production backup scripts with RMAN.
  • ALTER SYSTEM ARCHIVE LOG ALL archives all full online redo log files that have not yet been archived.  If there is nothing left to archive and you still run the command, you'll receive the error message below.

*
ERROR at line 1:
ORA-00271: there are no logs that need archiving
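As a side note (typical usage rather than something demonstrated above), production backup scripts generally rely on the synchronous form, either directly or through RMAN; the documented SQL spelling is ARCHIVE LOG as two words.

-- Typical pre/post-backup call:
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- Or let RMAN handle the log switch and archiving as part of the backup:
-- RMAN> BACKUP DATABASE PLUS ARCHIVELOG;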

Posted in Advanced | Tagged: | 1 Comment »

SQLPLUS: Command Prompt customized.

Posted by FatDBA on November 15, 2012

Some tips and tricks for SQLPLUS terminal:

———————————————————-
SQL Prompt With Time
———————————————————-
SQL> set time on
09:50:16 SQL>
09:50:19 SQL> select sysdate from dual;

SYSDATE
———
20-APR-11

09:50:34 SQL>

———————————————————-
SQL Prompt With USER
———————————————————-
SQL> set sqlprompt "_USER> "
SCOTT>
SCOTT>
SCOTT> select sysdate from dual;

SYSDATE
———
20-APR-11

SCOTT>

———————————————————-
SQL Prompt With SID
———————————————————-
SQL> set sqlprompt "_CONNECT_IDENTIFIER> "
mydb>
mydb>
mydb> select sysdate from dual;

SYSDATE
———
20-APR-11

mydb>

———————————————————-
SQL Prompt With USER@SID
———————————————————-
SQL> set sqlprompt "_USER'@'_CONNECT_IDENTIFIER SQL> "
SCOTT@mydb SQL>
SCOTT@mydb SQL>
SCOTT@mydb SQL> select sysdate from dual;

SYSDATE
———
20-APR-11

SCOTT@mydb SQL>

———————————————————-
SQL Prompt With DATE
———————————————————-
SQL> set sqlprompt "_DATE SQL> "
20-APR-11 SQL>
20-APR-11 SQL>
20-APR-11 SQL> select sysdate from dual;

SYSDATE
———
20-APR-11

20-APR-11 SQL>
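To make any of these settings stick for every SQL*Plus session, they can go into the site profile glogin.sql (or a user login.sql on your SQLPATH); a small example of what that file could contain:

-- $ORACLE_HOME/sqlplus/admin/glogin.sql
set time on
set sqlprompt "_USER'@'_CONNECT_IDENTIFIER SQL> "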

 
Many More Soon!!!

Posted in Basics | Tagged: | Leave a Comment »

What are Oracle Managed File (OMF) ?

Posted by FatDBA on September 11, 2012

What are Oracle-Managed Files?
Using Oracle-managed files simplifies the administration of an Oracle database. Oracle-managed files eliminate the need for you, the DBA, to directly manage the operating system files comprising an Oracle database. You specify operations in terms of database objects rather than filenames.

– Provides default location, name and size
– OMF is still optional. Normal file creation techniques still available

Oracle internally uses standard file system interfaces to create and delete files as needed for the following database structures:

Tablespaces
Online redo log files
Control files

First you have to enable OMF by setting the parameter db_create_file_dest.
Example:
SQL> show parameter db_create

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_file_dest                  string

Here I'm trying to create a tablespace named 'tb1', but without OMF enabled you'll receive an error message asking you to provide the DATAFILE/TEMPFILE clause.

SQL> create tablespace tb1;
create tablespace tb1
                    *
ERROR at line 1:
ORA-02199: missing DATAFILE/TEMPFILE clause
Now alter the parameter db_create_file_dest:
SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata/' scope=both;

System altered.
SQL> show parameter db_create_file_dest;

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_file_dest                  string      /u01/app/oracle/oradata/

Let's try to create the tablespace in the same fashion we tried earlier.
SQL> create tablespace tb1;

Tablespace created.
Done.
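Similarly, online redo logs and control files can be OMF-managed through the db_create_online_log_dest_n parameters, and dropping an OMF tablespace removes its datafiles automatically. A short hedged example (paths are illustrative):

-- OMF destination for online redo logs and control files.
alter system set db_create_online_log_dest_1='/u01/app/oracle/oradata/' scope=both;

-- Dropping an OMF tablespace also deletes its underlying datafiles from disk.
drop tablespace tb1;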

Posted in Advanced | Tagged: , , | 2 Comments »

Explain Plan.

Posted by FatDBA on August 23, 2012

Explain Plan is a great way to tune your queries.
As a bonus for using Explain Plan, you will learn more about how the DBMS works “behind the scenes”, enabling you to write efficient queries the first time around.

Explain Plan records the “plan” that Oracle devises to execute your query, without actually running it. By examining this plan, you can find out whether Oracle is picking the right indexes and joining your tables in the most efficient manner. There are a few different ways to utilize Explain Plan.

The first thing you will need to do is make sure you have a table called plan_table available in your schema.

If this table is not there, run this script provided by Oracle to create it:
$ORACLE_HOME/rdbms/admin/utlxplan.sql  .. for UNIX platforms, and
%ORACLE_HOME%\rdbms\admin\utlxplan.sql .. for Windows platforms

SQL> select * from plan_table;

no rows selected

SQL> explain plan for select * from etr where team='cis';
Explained.

SQL>  select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
——————————————————————————–
Plan hash value: 3490786915

—————————————————————————-
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
—————————————————————————-
|   0 | SELECT STATEMENT  |        |     2 |    30 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| ETR |     2 |    30 |     3   (0)| 00:00:01 |
—————————————————————————-

Predicate Information (identified by operation id):
—————————————————

PLAN_TABLE_OUTPUT
——————————————————————————–

1 - filter("TEAM"='cis')

13 rows selected.
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
——————————————————————————–
Plan hash value: 3490786915

—————————————————————————-
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
—————————————————————————-
|   0 | SELECT STATEMENT  |        |     2 |    30 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| ETR |     2 |    30 |     3   (0)| 00:00:01 |
—————————————————————————-

Predicate Information (identified by operation id):
—————————————————

PLAN_TABLE_OUTPUT
——————————————————————————–

1 - filter("TEAM"='cis')

13 rows selected.
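A closely related option worth knowing (a hedged aside, not covered above): once a statement has actually been executed, DBMS_XPLAN.DISPLAY_CURSOR shows the real plan from the shared pool rather than the predicted one.

-- Actual plan of the last statement executed in this session.
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));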

Posted in Advanced | Tagged: , | Leave a Comment »

 