Tales From A Lazy Fat DBA

It’s all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


How Oracle 23ai Fixes Bad Plans Instantly.. A Live Demo of Real-Time SPM

Posted by FatDBA on November 23, 2025

Recently, I did a quick round of testing in my lab with Oracle Database 23ai, focusing on its new SQL Plan Management (SPM) enhancements, and the results were impressive.

In 23ai, SPM can actually detect and fix bad execution plans in real time. It monitors a running query, compares the performance of the newly generated plan against the best ones it has seen before (tracked in the Automatic SQL Tuning Set), and if the new plan performs worse, it automatically switches back to the proven baseline.

No evolve task. No manual verification. No DBA intervention.
This feature, Real-Time SQL Plan Management, was tested on an Oracle 23ai lab build (pre-release environment). While 23ai itself isn’t generally available on-prem (its successor, Oracle 26ai, now carries these capabilities), the behavior shown here is identical in both 23ai and 26ai, and it is already live and enabled by default in Oracle Autonomous Database.
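
In case you want to check or flip the relevant switches yourself, here is a minimal sketch of the kind of configuration involved. DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', …) is the documented lever behind automatic plan evolution (already set on Autonomous); treat the exact value used here ('AUTO') as something to confirm against the documentation for your own build.

-- Minimal sketch: inspect the current SPM task configuration ...
SELECT parameter_name, parameter_value
FROM   dba_sql_plan_management_config
ORDER  BY parameter_name;

-- ... and ask for fully automatic plan evolution/verification
-- (value to confirm on your release; optimizer_use_sql_plan_baselines must also stay TRUE)
EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'AUTO');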

Let’s see it live!!!

SET ECHO ON FEEDBACK ON HEADING ON LINES 300 PAGES 500 TIMING ON
SET SERVEROUTPUT ON
SET LONG 2000000 LONGCHUNKSIZE 2000000 TRIMSPOOL ON

PROMPT === Create tables
CREATE TABLE fact_sales (
  sales_id     NUMBER GENERATED BY DEFAULT AS IDENTITY,
  prod_id      NUMBER NOT NULL,
  cust_id      NUMBER NOT NULL,
  date_id      DATE   NOT NULL,
  channel_id   NUMBER NOT NULL,
  qty          NUMBER NOT NULL,
  amount       NUMBER NOT NULL
);

CREATE TABLE dim_product  ( prod_id NUMBER PRIMARY KEY, category VARCHAR2(40), brand VARCHAR2(40) );
CREATE TABLE dim_customer ( cust_id NUMBER PRIMARY KEY, region   VARCHAR2(30), segment VARCHAR2(30) );
CREATE TABLE dim_channel  ( channel_id NUMBER PRIMARY KEY, channel_name VARCHAR2(30) );

Table created.
Table created.
Table created.
Table created.



PROMPT === Load dimensions
DECLARE
BEGIN
  FOR i IN 1..100000 LOOP
    INSERT INTO dim_product VALUES(i, CASE WHEN MOD(i,10)=0 THEN 'Premium' ELSE 'Standard' END, 'Brand_'||MOD(i,200));
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_product rows inserted: '||TO_CHAR(100000));

  FOR i IN 1..500000 LOOP
    INSERT INTO dim_customer VALUES(i,
      CASE MOD(i,5) WHEN 0 THEN 'NA' WHEN 1 THEN 'EU' WHEN 2 THEN 'APAC' WHEN 3 THEN 'ME' ELSE 'LATAM' END,
      CASE MOD(i,4) WHEN 0 THEN 'ENT' WHEN 1 THEN 'SMB' WHEN 2 THEN 'MID' ELSE 'CONSUMER' END);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_customer rows inserted: '||TO_CHAR(500000));

  FOR i IN 1..6 LOOP
    INSERT INTO dim_channel VALUES(i, 'CH_'||i);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_channel rows inserted: 6');

  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Commit complete.');
END;
/
SHOW ERRORS

dim_product rows inserted: 100000
dim_customer rows inserted: 500000
dim_channel rows inserted: 6
Commit complete.
PL/SQL procedure successfully completed.





PROMPT === Load fact 
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 0..59 LOOP  -- 60 days x 80k = ~4.8M
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/
SHOW ERRORS

fact_sales rows inserted:  4,800,000
PL/SQL procedure successfully completed.






PROMPT === Indexes + Stats
CREATE INDEX fs_prod    ON fact_sales(prod_id);
CREATE INDEX fs_cust    ON fact_sales(cust_id);
CREATE INDEX fs_date    ON fact_sales(date_id);
CREATE INDEX fs_channel ON fact_sales(channel_id);

Index created.
Index created.
Index created.
Index created.




EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_PRODUCT',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CUSTOMER',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CHANNEL',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'FACT_SALES',CASCADE=>TRUE);

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

In this first stage of testing, I created a mini data warehouse environment inside Oracle 23ai to simulate a realistic analytical workload: three dimension tables (DIM_PRODUCT, DIM_CUSTOMER, DIM_CHANNEL) and one large fact table (FACT_SALES) containing roughly 4.8 million rows of transactional data spread across 60 business days.

After loading the data, I collected fresh optimizer statistics on all tables and their indexes so that the optimizer starts with a clear and accurate view of the data distribution before the query and plan testing phase begins. This baseline setup is essential because, in the next steps, I’ll demonstrate how Real-Time SQL Plan Management (SPM) reacts when execution plans change, both for the better and for the worse.
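
Before moving on, a quick sanity check that the optimizer really does have fresh numbers to work with; this is a plain dictionary query, nothing 23ai-specific:

-- Confirm row counts and analyze timestamps after the stats gather
COLUMN table_name FORMAT A15
SELECT table_name,
       num_rows,
       blocks,
       TO_CHAR(last_analyzed,'YYYY-MM-DD HH24:MI:SS') AS last_analyzed
FROM   user_tab_statistics
WHERE  table_name IN ('FACT_SALES','DIM_PRODUCT','DIM_CUSTOMER','DIM_CHANNEL')
ORDER  BY table_name;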


-- What date range do we have?
SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 01-MAR-22



-- Rows per month (quick sanity check)
SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03      80000

3 rows selected.



-- Lets add the remaining days so Apr to Jun exist (60..180 --> Mar 2 .. Jun 30)
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 60..180 LOOP
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Additional fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/

SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 30-JUN-22


SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03    2480000
2022-04    2400000
2022-05    2480000
2022-06    2400000







VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- Optional: flush shared pool to force a fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;
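
Quick aside: flushing the entire shared pool is fine in a lab, but on a shared system you would rather purge just the one cursor. A sketch of the more surgical route (run with DBA privileges; the sql_id substitution variable is a hypothetical placeholder for whatever cursor you want to evict):

-- Purge a single cursor instead of flushing the whole shared pool
DECLARE
  v_name VARCHAR2(100);
BEGIN
  -- DBMS_SHARED_POOL.PURGE expects 'ADDRESS,HASH_VALUE' for cursors (flag 'C')
  SELECT address||','||hash_value
  INTO   v_name
  FROM   v$sqlarea
  WHERE  sql_id = '&target_sql_id';
  DBMS_SHARED_POOL.PURGE(v_name, 'C');
END;
/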

-- Run the query
SELECT p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id = f.prod_id
JOIN   dim_customer c ON c.cust_id = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4

2 rows selected.

Elapsed: 00:00:21.33


-- 0) (optional but recommended) collect row-source stats
ALTER SESSION SET statistics_level = ALL;

-- 1) run the query with a gather hint (guarantees ALLSTATS LAST works)
VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)
EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

ALTER SYSTEM FLUSH SHARED_POOL;

SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4




-- get the SQL_ID & CHILD_NUMBER of the query you just ran
COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
2t8da8zxfzp50            0      1151134654 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id          => '2t8da8zxfzp50',
         cursor_child_no => 0,
         format          => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2t8da8zxfzp50, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category,
c.region,        SUM(f.amount) AS rev FROM   fact_sales f JOIN
dim_product  p ON p.prod_id  = f.prod_id JOIN   dim_customer c ON
c.cust_id  = f.cust_id JOIN   dim_channel  ch ON ch.channel_id =
f.channel_id WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                 AND TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category =
'Premium'   AND  c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 1151134654

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |              |      1 |        |       |      2 |00:00:05.79 |   59198 |       |       |          |
|   1 |  HASH GROUP BY                          |              |      1 |      2 |   102 |      2 |00:00:05.79 |   59198 |  1200K|  1200K|  654K (0)|
|*  2 |   HASH JOIN                             |              |      1 |  36018 |  1793K|    145K|00:00:05.75 |   59198 |    23M|  4013K|   26M (0)|
|*  3 |    HASH JOIN                            |              |      1 |  36018 |  1477K|    363K|00:00:05.26 |   57865 |  1995K|  1995K| 3065K (0)|
|   4 |     VIEW                                | VW_GBF_17    |      1 |  50000 |   390K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  5 |      FILTER                             |              |      1 |        |       |  10000 |00:00:00.01 |     396 |       |       |          |
|*  6 |       TABLE ACCESS FULL                 | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  7 |     HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.33 |   57469 |  1922K|  1922K| 1411K (0)|
|*  8 |      TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |       |       |          |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.70 |   57467 |       |       |          |
|* 10 |       INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:00.97 |   17336 |       |       |          |
|* 11 |    TABLE ACCESS FULL                    | DIM_CUSTOMER |      1 |    200K|  1757K|    200K|00:00:00.07 |    1333 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$B734A6F8
   4 - SEL$FB6C8AF9 / VW_GBF_17@SEL$393E5B1C
   5 - SEL$FB6C8AF9
   6 - SEL$FB6C8AF9 / P@SEL$1
   8 - SEL$B734A6F8 / CH@SEL$3
   9 - SEL$B734A6F8 / F@SEL$1
  10 - SEL$B734A6F8 / F@SEL$1
  11 - SEL$B734A6F8 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$FB6C8AF9")
      ELIM_GROUPBY(@"SEL$2B4A2561")
      OUTLINE_LEAF(@"SEL$B734A6F8")
      PLACE_GROUP_BY(@"SEL$EE94F965" ( "P"@"SEL$1" ) 17)
      OUTLINE(@"SEL$2B4A2561")
      ELIM_GROUPBY(@"SEL$85CD04AE")
      OUTLINE(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$85CD04AE")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$393E5B1C")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$B734A6F8" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$B734A6F8" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$B734A6F8" "F"@"SEL$1")
      NO_ACCESS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      FULL(@"SEL$B734A6F8" "C"@"SEL$2")
      LEADING(@"SEL$B734A6F8" "CH"@"SEL$3" "F"@"SEL$1" "VW_GBF_17"@"SEL$393E5B1C" "C"@"SEL$2")
      USE_HASH(@"SEL$B734A6F8" "F"@"SEL$1")
      USE_HASH(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH(@"SEL$B734A6F8" "C"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH_AGGREGATION(@"SEL$B734A6F8")
      FULL(@"SEL$FB6C8AF9" "P"@"SEL$1")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C"."CUST_ID"="F"."CUST_ID")
   3 - access("ITEM_1"="F"."PROD_ID")
   5 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   6 - filter("P"."CATEGORY"='Premium')
   7 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   8 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
  10 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  11 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT"*"ITEM_2")[22]
   2 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   4 - (rowset=256) "ITEM_1"[NUMBER,22], "ITEM_2"[NUMBER,2]
   5 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   6 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   7 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22],
       "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   8 - "CH"."CHANNEL_ID"[NUMBER,22]
   9 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
  10 - "F".ROWID[ROWID,10]
  11 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - this is an adaptive plan


122 rows selected.



Here I just set the reporting window using bind variables and flushed the shared pool to make sure Oracle parsed the query fresh. The optimizer picked a clean and efficient plan: an index range scan on FS_DATE to grab only the required time slice from FACT_SALES, hash joins to the small dimension tables, and a hash group by at the end.

All the filters from the WHERE clause show up correctly in the predicate section, and the runtime stats (about 59K buffers and roughly 5.8 seconds of row-source time, with no physical reads or temp reported) confirm it’s a solid in-memory run. The note only tells us this is an adaptive plan at this stage; the important part is that this becomes my “good plan”, the one Real-Time SPM will capture as the accepted baseline and automatically revert to if a slower plan shows up later.
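
One extra check worth doing at this point: V$SQL exposes a SQL_PLAN_BASELINE column, so once a baseline has been captured you can confirm, per cursor, whether the optimizer actually ran under it. A small sketch using the sql_id identified above:

-- Did this cursor execute under a SQL plan baseline?
COLUMN sql_plan_baseline FORMAT A30
SELECT sql_id, child_number, plan_hash_value, sql_plan_baseline
FROM   v$sql
WHERE  sql_id = '2t8da8zxfzp50';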

VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- 1) regress stats on DIM_CUSTOMER
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER,'DIM_CUSTOMER');

PL/SQL procedure successfully completed.
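
Side note if you replay this in your own lab: deleting the dictionary stats is easy to undo afterwards, either by re-gathering or by restoring the previous statistics from history. A sketch, assuming the stats history retention window still covers the gather done earlier:

-- Run these only AFTER the regression demo, otherwise you undo the misestimate.
-- Option 1: simply re-gather
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CUSTOMER',CASCADE=>TRUE);

-- Option 2: restore the statistics exactly as they were an hour ago (adjust the offset)
EXEC DBMS_STATS.RESTORE_TABLE_STATS(USER,'DIM_CUSTOMER', SYSTIMESTAMP - INTERVAL '1' HOUR);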


-- 2) force fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;
System altered.


-- 3) run the query (convert binds to DATEs)
SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category, c.region, SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4



COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
7v44tqagjad7k            0      2441995847 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '7v44tqagjad7k', 
         cursor_child_no => 0,
         format => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7v44tqagjad7k, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category, c.region,
SUM(f.amount) AS rev FROM   fact_sales f JOIN   dim_product  p ON
p.prod_id  = f.prod_id JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id WHERE  f.date_id
BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')                      AND
TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category = 'Premium'   AND
c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 2441995847

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |              |      1 |        |       |      2 |00:00:09.07 |   59199 |  14384 |  14384 |       |       |          |         |
|   1 |  HASH GROUP BY                           |              |      1 |  36018 |  2743K|      2 |00:00:09.07 |   59199 |  14384 |  14384 |  1200K|  1200K|  661K (0)|         |
|*  2 |   FILTER                                 |              |      1 |        |       |    145K|00:00:09.04 |   59199 |  14384 |  14384 |       |       |          |         |
|*  3 |    HASH JOIN                             |              |      1 |  36018 |  2743K|    145K|00:00:09.02 |   59199 |  14384 |  14384 |  2402K|  2402K| 3124K (0)|         |
|*  4 |     TABLE ACCESS FULL                    | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN                            |              |      1 |  40000 |  2500K|   1454K|00:00:08.67 |   58803 |  14384 |  14384 |   234M|    14M|   40M (1)|     116M|
|*  6 |      HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.60 |   57469 |      0 |      0 |  1922K|  1922K| 1348K (0)|         |
|*  7 |       TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |      0 |      0 |       |       |          |         |
|   8 |       TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.83 |   57467 |      0 |      0 |       |       |          |         |
|*  9 |        INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:01.04 |   17336 |      0 |      0 |       |       |          |         |
|* 10 |      TABLE ACCESS FULL                   | DIM_CUSTOMER |      1 |    212K|  6229K|    200K|00:00:00.05 |    1334 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$EE94F965
   4 - SEL$EE94F965 / P@SEL$1
   7 - SEL$EE94F965 / CH@SEL$3
   8 - SEL$EE94F965 / F@SEL$1
   9 - SEL$EE94F965 / F@SEL$1
  10 - SEL$EE94F965 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$EE94F965" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$EE94F965" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$EE94F965" "F"@"SEL$1")
      FULL(@"SEL$EE94F965" "C"@"SEL$2")
      FULL(@"SEL$EE94F965" "P"@"SEL$1")
      LEADING(@"SEL$EE94F965" "CH"@"SEL$3" "F"@"SEL$1" "C"@"SEL$2" "P"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "F"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "C"@"SEL$2")
      USE_HASH(@"SEL$EE94F965" "P"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$EE94F965" "P"@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$EE94F965")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   3 - access("P"."PROD_ID"="F"."PROD_ID")
   4 - filter("P"."CATEGORY"='Premium')
   5 - access("C"."CUST_ID"="F"."CUST_ID")
   6 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   7 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
   9 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  10 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT")[22]
   2 - "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   4 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   5 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30],
       "C"."REGION"[VARCHAR2,30]
   6 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   7 - "CH"."CHANNEL_ID"[NUMBER,22]
   8 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   9 - "F".ROWID[ROWID,10]
  10 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan


110 rows selected.







-- Real-Time SPM evidence
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SESSION SET optimizer_use_sql_plan_baselines     = TRUE;

COLUMN last_verified            FORMAT A19
COLUMN foreground_last_verified FORMAT A19
COLUMN accepted                 FORMAT A8
COLUMN enabled                  FORMAT A8
COLUMN origin                   FORMAT A20
COLUMN sql_handle               FORMAT A20
COLUMN plan_name                FORMAT A25

SELECT sql_handle,
       plan_name,
       origin,
       accepted,
       enabled,
       TO_CHAR(last_verified,'YYYY-MM-DD HH24:MI:SS')            AS last_verified,
       TO_CHAR(foreground_last_verified,'YYYY-MM-DD HH24:MI:SS') AS fg_verified
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SUM(f.amount)%'
ORDER  BY last_verified DESC;

SQL_HANDLE           PLAN_NAME                 ORIGIN               ACCEPTED ENABLED LAST_VERIFIED        FG_VERIFIED
-------------------- ------------------------- -------------------- -------- ------- -------------------- --------------------
SQL_bf3d9f8a42       SQL_PLAN_2t8da8zxfzp50    AUTO-CAPTURE         YES      YES     2025-11-24 14:06:19  2025-11-24 14:06:19
SQL_bf3d9f8a42       SQL_PLAN_7v44tqagjad7k    FOREGROUND-CAPTURE   NO       NO      2025-11-24 14:06:18  2025-11-24 14:06:18

-- SQL_PLAN_2t8da8zxfzp50 ---> the good baseline plan, automatically captured, accepted and enabled by Real-Time SPM.

-- SQL_PLAN_7v44tqagjad7k ----> the regressed plan, captured during foreground execution but rejected (ACCEPTED=NO) as it underperformed compared to the verified baseline.
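
If you want to see exactly which plan the accepted baseline pins, DBMS_XPLAN can render it straight from the SQL Management Base. A quick sketch, using the sql_handle as reported above:

-- Display the stored baseline plan(s) for this handle
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
         sql_handle => 'SQL_bf3d9f8a42',
         format     => 'TYPICAL'));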




In the good baseline plan (SQL_ID 2t8da8zxfzp50, plan hash 1151134654), the optimizer uses an FS_DATE index range scan with batched ROWID fetches from FACT_SALES, then hash-joins (F ↔ VW_GBF_17(P) ↔ C) with a tiny full scan of CH (3 rows) and finishes with a hash group by. It’s fully in-memory: about 59,198 buffers, ~5.79s elapsed, 0 reads, 0 temp, with predicates correctly slicing DATE_ID. This is the efficient baseline.

After deleting stats on DIM_CUSTOMER, the regressed plan (SQL_ID 7v44tqagjad7k, plan hash 2441995847) keeps the same general shape and the FS_DATE index range scan, but the row-source sizing changes due to misestimates. It still hash-joins and groups, yet now needs ~59,199 buffers, ~9.07s, ~14,384 reads, and ~116MB of temp (with dynamic sampling at level 2). In other words, a similar plan shape, but degraded cardinality estimates drive extra I/O and temp spills, about a 56% slowdown.

The numbers tell the story: Oracle tried the new plan once, learned it was bad, and reverted. This isn’t magic or marketing. It’s just Oracle 23ai doing what every good DBA does: watching, measuring, and reverting fast when things go south. Real-Time SPM uses the same cost and runtime data we always trusted from AWR, but now the optimizer reacts within seconds instead of waiting for us.

On large production systems, it won’t prevent every performance issue, but it dramatically reduces the window where a regression hurts. It’s safe to enable, requires no extra license beyond Enterprise Edition, and it works right out of the box in Autonomous DB.

In Short

  • 23ai on-prem Enterprise Edition and Autonomous DB both support Real-Time SPM.
  • It compares runtime metrics from the Automatic SQL Tuning Set (see the query after this list).
  • When a new plan runs slower, Oracle reinstates the previous good baseline automatically.
  • You’ll see it immediately in DBA_SQL_PLAN_BASELINES with ORIGIN = 'FOREGROUND-CAPTURE'.
  • You can watch it happen live through execution plans and elapsed times.
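
And for the Automatic SQL Tuning Set bullet, the ASTS is just a regular SQL tuning set named SYS_AUTO_STS (owned by SYS), so you can peek at the runtime metrics it has collected for our statement with an ordinary DBA_SQLSET_STATEMENTS query, sketched below:

-- Peek at what the Automatic SQL Tuning Set has recorded for the test statement
SELECT sql_id, plan_hash_value, executions, elapsed_time, buffer_gets
FROM   dba_sqlset_statements
WHERE  sqlset_owner = 'SYS'
AND    sqlset_name  = 'SYS_AUTO_STS'
AND    sql_text LIKE '%SUM(f.amount)%';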

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/


From Painful Manual LOB Shrink to Automatic SecureFiles Shrink

Posted by FatDBA on November 15, 2025

I’ve been working with LOBs for years now, and trust me, shrinking them has always been a headache. Anyone who has ever tried ALTER TABLE … SHRINK SPACE on a big SecureFiles LOB knows the pain … blocking sessions, unexpected waits, and sometimes that lovely ORA-1555 popping up at the worst time. Every DBA eventually gets into that situation where a LOB segment is 200 GB on disk but only 10 GB of real data remains. You delete rows… but the space never comes back unless you manually shrink it, which itself can cause more issues.
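
For anyone who has not had the pleasure, this is the kind of manual command we are talking about, usually scheduled in a quiet window and babysat while it runs. Generic syntax only, shown here against the demo table created below; depending on your version the LOB-level shrink may be restricted for SecureFiles, which is exactly why people end up doing moves and reorgs instead:

-- The old manual way: reclaim space from a SecureFiles LOB by hand
ALTER TABLE tst_securefile_lob MODIFY LOB (lob_data) (SHRINK SPACE);

-- or shrink the table and all its dependent segments in one go
ALTER TABLE tst_securefile_lob ENABLE ROW MOVEMENT;
ALTER TABLE tst_securefile_lob SHRINK SPACE CASCADE;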

With the introduction of Automatic SecureFiles Shrink, Oracle really made a DBA’s life easier. This feature, which first came out in 23ai, quietly frees up unused LOB space in the background without disrupting your workload. I wanted to see how it behaves in a real scenario, so I set up a small lab and tested it out. Here’s the whole experiment, raw and simple.

Let’s do a demo and see how this new feature works. I spun up a fresh PDB and made a small tablespace just for this test. Nothing fancy.

CREATE TABLESPACE lobts
  DATAFILE '/u02/oradata/LOBTS01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M;

Tablespace created.



-- Table with a securefile LOB based column.
CREATE TABLE tst_securefile_lob
(
    id NUMBER,
    lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE (
    TABLESPACE lobts
    CACHE
);

Table created.




SELECT table_name, column_name, securefile
FROM   user_lobs
WHERE  table_name='TST_SECUREFILE_LOB';

TABLE_NAME           COLUMN_NAME   SECUREFILE
-------------------  ------------  ----------
TST_SECUREFILE_LOB   LOB_DATA      YES


Next, I inserted a good amount of junk data, around 10,000 rows of random CLOB strings. I wanted the LOB segment to be big enough to show clear differences after the shrink.

BEGIN
  FOR r IN 1 .. 10 LOOP
    INSERT INTO tst_securefile_lob (id, lob_data)
    SELECT r*100000 + level,
           TO_CLOB(DBMS_RANDOM.STRING('x', 32767))
    FROM dual
    CONNECT BY level <= 1000;
    COMMIT;
  END LOOP;
END;
/

PL/SQL procedure successfully completed.



SELECT COUNT(*) FROM tst_securefile_lob;

  COUNT(*)
----------
     10000




SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE  ul.table_name = 'TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$     131072     1024.00




-- After stats and a quick check in USER_SEGMENTS, the LOB segment was showing a nice chunky size. 
-- Then I deleted almost everything
-- Now the table has very few rows left, but the LOB segment is still the same size. As usual.
DELETE FROM tst_securefile_lob
WHERE id < 900000;
COMMIT;

9990 rows deleted. 
Commit complete.
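
The show_securefile_space procedure called below is a small custom helper that is not reproduced in this post. A minimal sketch of such a helper, assuming it simply wraps the SecureFiles overload of DBMS_SPACE.SPACE_USAGE and prints the block counts, would look like this:

-- Minimal sketch of a helper around DBMS_SPACE.SPACE_USAGE (SecureFiles overload)
CREATE OR REPLACE PROCEDURE show_securefile_space (
  p_owner   IN VARCHAR2,
  p_segment IN VARCHAR2
) AS
  l_seg_blocks   NUMBER;  l_seg_bytes   NUMBER;
  l_used_blocks  NUMBER;  l_used_bytes  NUMBER;
  l_exp_blocks   NUMBER;  l_exp_bytes   NUMBER;
  l_unexp_blocks NUMBER;  l_unexp_bytes NUMBER;
BEGIN
  -- Positional arguments: owner, segment, type, then the eight OUT block/byte counters
  DBMS_SPACE.SPACE_USAGE(p_owner, p_segment, 'LOB',
                         l_seg_blocks,   l_seg_bytes,
                         l_used_blocks,  l_used_bytes,
                         l_exp_blocks,   l_exp_bytes,
                         l_unexp_blocks, l_unexp_bytes);
  DBMS_OUTPUT.PUT_LINE('Segment blocks      = '||l_seg_blocks  ||'  bytes='||l_seg_bytes);
  DBMS_OUTPUT.PUT_LINE('Used blocks         = '||l_used_blocks ||'  bytes='||l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Expired blocks      = '||l_exp_blocks  ||'  bytes='||l_exp_bytes);
  DBMS_OUTPUT.PUT_LINE('Unexpired blocks    = '||l_unexp_blocks||'  bytes='||l_unexp_bytes);
END show_securefile_space;
/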


-- Checking LOB Internal Usage (Before Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 131072  bytes=1073741824
Used blocks         =  10240  bytes=83886080
Expired blocks      = 110592  bytes=905969664
Unexpired blocks    =  10240  bytes=83886080
-- This clearly shows almost the entire segment is expired/free but not reclaimed.


-- Checking Auto Shrink Statistics (Before Enabling)
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE '%Auto SF SHK%'
ORDER  BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               0
Auto SF SHK successful                       0
Auto SF SHK total number of tasks            0

Turning on Automatic Shrink — By default this feature is OFF, so I enabled it:

EXEC DBMS_SPACE.SECUREFILE_SHRINK_ENABLE;

PL/SQL procedure successfully completed.

That’s literally it. No parameters, no tuning, nothing else. Just enable.

Automatic SecureFiles Shrink does not run immediately after you delete data. Oracle requires that a SecureFiles LOB segment be idle for a specific amount of time before it becomes eligible for shrinking, and the default idle-time limit is 1,440 minutes (24 hours). “Idle” means that no DML or preallocation activity has occurred on that LOB during that period. Once the segment meets this condition, Oracle considers it during its automatic background shrink task, which is part of AutoTask and runs every 30 minutes with a defined processing window.

When the task executes, it attempts to shrink eligible segments, but it does so gradually and in small increments using a trickle-based approach, rather than reclaiming all possible space in a single operation. This incremental behavior is deliberate: it reduces the impact on running workloads and avoids heavy reorganization all at once. Only segments that meet all selection criteria, such as having sufficient free space above the defined thresholds and not using RETENTION MAX, are processed. Because of this incremental design and the eligibility rules, the full space reclamation process can span multiple background cycles.

Some of the internal hidden/underscore parameters that control these limits are listed below. Don’t touch them unless Oracle Support asks you to, or you’re playing in a lab system.

Parameter                                    Default_Value    Session_Value    Instance_Value   IS_SESSION_MODIFIABLE   IS_SYSTEM_MODIFIABLE
------------------------------------------- ---------------  ---------------- ---------------- ----------------------- ---------------------
_ktsls_autoshrink_seg_idle_seconds            86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_seg_pen_seconds             86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_trickle_mb                  5                5                5                FALSE                   IMMEDIATE
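
For reference, the usual way to read hidden parameters like these is the classic X$ fixed-table query run as SYS; a generic sketch (look, don't touch):

-- Generic sketch: list the auto-shrink related underscore parameters (run as SYS)
SELECT a.ksppinm  AS parameter,
       a.ksppdesc AS description,
       b.ksppstvl AS session_value,
       c.ksppstvl AS instance_value
FROM   x$ksppi a, x$ksppcv b, x$ksppsv c
WHERE  a.indx = b.indx
AND    a.indx = c.indx
AND    a.ksppinm LIKE '\_ktsls\_autoshrink%' ESCAPE '\'
ORDER  BY a.ksppinm;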

Okay, let’s check the post-change output and see what automatic LOB shrink did to our test object.

SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE ul.table_name='TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$      40960      320.00

Note: From ~1024 MB down to ~320 MB. Auto shrink worked     🙂



-- DBMS_SPACE Usage (After Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 40960  bytes=335544320
Used blocks         =  9216  bytes=75497472
Expired blocks      =  6144  bytes=50331648
Unexpired blocks    =  9216  bytes=75497472

Note: Expired blocks dropped from ~110K to ~6K. This confirms auto shrink freed most of the fragmented space.






-- After the task is run.
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%Auto SF SHK%'
ORDER BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               1
Auto SF SHK successful                       1
Auto SF SHK total number of tasks            1




SELECT owner,
       segment_name,
       shrunk_bytes,
       attempts,
       last_shrink_time
FROM v$securefile_shrink;

OWNER      SEGMENT_NAME               SHRUNK_BYTES    ATTEMPTS   LAST_SHRINK_TIME
---------  ------------------------   -------------   ---------  ---------------------------
PRASHANT   SYS_LOB0001234567C00002$      744947712          1     14-NOV-2025 06:32:15

Note:  Oracle automatically reclaimed ~710 MB of wasted LOB space.

This feature is simple, it’s safe, and it saves DBAs from doing manual shrink maintenance again and again. It’s not a fast feature (it’s slow and polite on purpose), but it works exactly as expected.

If your system has LOBs (EBS attachments, documents, JSON, logs, media files, etc.), you should absolutely enable this. Let Oracle handle the boring part.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/
