Tales From A Lazy Fat DBA

It's all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


How Oracle 23ai Fixes Bad Plans Instantly: A Live Demo of Real-Time SPM

Posted by FatDBA on November 23, 2025

Recently, I did a quick round of testing in my lab with Oracle Database 23ai, focusing on its new SQL Plan Management (SPM) enhancements, and the results were impressive.

In 23ai, SPM can actually detect and fix bad execution plans in real time. It monitors a running query, compares the performance of the newly generated plan against the best ones it has seen before (tracked in the Automatic SQL Tuning Set), and if the new plan performs worse, it automatically switches back to the proven baseline.

No evolve task. No manual verification. No DBA intervention.
This feature, Real-Time SQL Plan Management, was tested on an Oracle 23ai lab build (pre-release environment). While 23ai itself isn't generally available on-prem (its successor, Oracle 26ai, now carries these capabilities), the behavior shown here is identical in 23ai and 26ai, and it is already live and enabled by default in Oracle Autonomous Database.
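Before running the demo, it can help to look at how the automatic SPM evolve task is configured on a conventional (non-Autonomous) instance. The sketch below is hedged: the task name SYS_AUTO_SPM_EVOLVE_TASK and the DBA_ADVISOR_PARAMETERS view exist in recent releases, but the exact parameter names and defaults can vary by version, so treat this as a starting point rather than a definitive check.

-- Inspect the automatic SPM evolve task configuration (sketch;
-- parameter names may differ between releases)
SELECT parameter_name, parameter_value
FROM   dba_advisor_parameters
WHERE  task_name = 'SYS_AUTO_SPM_EVOLVE_TASK'
AND    parameter_name IN ('ALTERNATE_PLAN_BASELINE',
                          'ALTERNATE_PLAN_SOURCE',
                          'ALTERNATE_PLAN_LIMIT');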

Let’s see it live!!!

SET ECHO ON FEEDBACK ON HEADING ON LINES 300 PAGES 500 TIMING ON
SET SERVEROUTPUT ON
SET LONG 2000000 LONGCHUNKSIZE 2000000 TRIMSPOOL ON

PROMPT === Create tables
CREATE TABLE fact_sales (
  sales_id     NUMBER GENERATED BY DEFAULT AS IDENTITY,
  prod_id      NUMBER NOT NULL,
  cust_id      NUMBER NOT NULL,
  date_id      DATE   NOT NULL,
  channel_id   NUMBER NOT NULL,
  qty          NUMBER NOT NULL,
  amount       NUMBER NOT NULL
);

CREATE TABLE dim_product  ( prod_id NUMBER PRIMARY KEY, category VARCHAR2(40), brand VARCHAR2(40) );
CREATE TABLE dim_customer ( cust_id NUMBER PRIMARY KEY, region   VARCHAR2(30), segment VARCHAR2(30) );
CREATE TABLE dim_channel  ( channel_id NUMBER PRIMARY KEY, channel_name VARCHAR2(30) );

Table created.
Table created.
Table created.
Table created.



PROMPT === Load dimensions
BEGIN
  FOR i IN 1..100000 LOOP
    INSERT INTO dim_product VALUES(i, CASE WHEN MOD(i,10)=0 THEN 'Premium' ELSE 'Standard' END, 'Brand_'||MOD(i,200));
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_product rows inserted: '||TO_CHAR(100000));

  FOR i IN 1..500000 LOOP
    INSERT INTO dim_customer VALUES(i,
      CASE MOD(i,5) WHEN 0 THEN 'NA' WHEN 1 THEN 'EU' WHEN 2 THEN 'APAC' WHEN 3 THEN 'ME' ELSE 'LATAM' END,
      CASE MOD(i,4) WHEN 0 THEN 'ENT' WHEN 1 THEN 'SMB' WHEN 2 THEN 'MID' ELSE 'CONSUMER' END);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_customer rows inserted: '||TO_CHAR(500000));

  FOR i IN 1..6 LOOP
    INSERT INTO dim_channel VALUES(i, 'CH_'||i);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_channel rows inserted: 6');

  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Commit complete.');
END;
/
SHOW ERRORS

dim_product rows inserted: 100000
dim_customer rows inserted: 500000
dim_channel rows inserted: 6
Commit complete.
PL/SQL procedure successfully completed.





PROMPT === Load fact 
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 0..59 LOOP  -- 60 days x 80k = ~4.8M
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/
SHOW ERRORS

fact_sales rows inserted:  4,800,000
PL/SQL procedure successfully completed.






PROMPT === Indexes + Stats
CREATE INDEX fs_prod    ON fact_sales(prod_id);
CREATE INDEX fs_cust    ON fact_sales(cust_id);
CREATE INDEX fs_date    ON fact_sales(date_id);
CREATE INDEX fs_channel ON fact_sales(channel_id);

Index created.
Index created.
Index created.
Index created.




EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_PRODUCT',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CUSTOMER',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CHANNEL',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'FACT_SALES',CASCADE=>TRUE);

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

In this first stage of my testing, I have created a mini data warehouse environment inside Oracle 23ai to simulate a realistic analytical workload. I built three dimension tables (DIM_PRODUCT, DIM_CUSTOMER, DIM_CHANNEL) and one large fact table (FACT_SALES) containing roughly 4.8 million rows of transactional data spread across 60 business days.

After loading the data, I have collected fresh optimizer statistics on all tables and their indexes to ensure that the Oracle Optimizer starts with a clear and accurate view of data distribution before I begin the query and plan testing phase. This baseline setup is essential because, in the next steps, I’ll demonstrate how Real-Time SQL Plan Management (SPM) reacts when execution plans change both for the better and for the worse.
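A quick sanity check I like to run after gathering stats (nothing 23ai-specific here) is to confirm that NUM_ROWS and LAST_ANALYZED look sane for all four tables before moving on:

-- Verify fresh optimizer statistics on the demo tables
SELECT table_name,
       num_rows,
       TO_CHAR(last_analyzed,'YYYY-MM-DD HH24:MI:SS') AS last_analyzed
FROM   user_tables
WHERE  table_name IN ('FACT_SALES','DIM_PRODUCT','DIM_CUSTOMER','DIM_CHANNEL')
ORDER  BY table_name;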


-- What date range do we have?
SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 01-MAR-22



-- Rows per month (quick sanity check)
SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03      80000

3 rows selected.



-- Let's add the remaining days so Apr to Jun exist (days 60..180 --> Mar 2 .. Jun 30)
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 60..180 LOOP
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Additional fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/

SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 30-JUN-22


SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03    2480000
2022-04    2400000
2022-05    2480000
2022-06    2400000







VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- Optional: flush shared pool to force a fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;

-- Run the query
SELECT p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id = f.prod_id
JOIN   dim_customer c ON c.cust_id = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4

2 rows selected.

Elapsed: 00:00:21.33


-- 0) (optional but recommended) collect row-source stats
ALTER SESSION SET statistics_level = ALL;

-- 1) run the query with a gather hint (guarantees ALLSTATS LAST works)
VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)
EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

ALTER SYSTEM FLUSH SHARED_POOL;

SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4




-- get the SQL_ID & CHILD_NUMBER of the query you just ran
COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
2t8da8zxfzp50            0      1151134654 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id          => '2t8da8zxfzp50',
         cursor_child_no => 0,
         format          => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2t8da8zxfzp50, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category,
c.region,        SUM(f.amount) AS rev FROM   fact_sales f JOIN
dim_product  p ON p.prod_id  = f.prod_id JOIN   dim_customer c ON
c.cust_id  = f.cust_id JOIN   dim_channel  ch ON ch.channel_id =
f.channel_id WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                 AND TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category =
'Premium'   AND  c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 1151134654

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |              |      1 |        |       |      2 |00:00:05.79 |   59198 |       |       |          |
|   1 |  HASH GROUP BY                          |              |      1 |      2 |   102 |      2 |00:00:05.79 |   59198 |  1200K|  1200K|  654K (0)|
|*  2 |   HASH JOIN                             |              |      1 |  36018 |  1793K|    145K|00:00:05.75 |   59198 |    23M|  4013K|   26M (0)|
|*  3 |    HASH JOIN                            |              |      1 |  36018 |  1477K|    363K|00:00:05.26 |   57865 |  1995K|  1995K| 3065K (0)|
|   4 |     VIEW                                | VW_GBF_17    |      1 |  50000 |   390K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  5 |      FILTER                             |              |      1 |        |       |  10000 |00:00:00.01 |     396 |       |       |          |
|*  6 |       TABLE ACCESS FULL                 | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  7 |     HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.33 |   57469 |  1922K|  1922K| 1411K (0)|
|*  8 |      TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |       |       |          |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.70 |   57467 |       |       |          |
|* 10 |       INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:00.97 |   17336 |       |       |          |
|* 11 |    TABLE ACCESS FULL                    | DIM_CUSTOMER |      1 |    200K|  1757K|    200K|00:00:00.07 |    1333 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$B734A6F8
   4 - SEL$FB6C8AF9 / VW_GBF_17@SEL$393E5B1C
   5 - SEL$FB6C8AF9
   6 - SEL$FB6C8AF9 / P@SEL$1
   8 - SEL$B734A6F8 / CH@SEL$3
   9 - SEL$B734A6F8 / F@SEL$1
  10 - SEL$B734A6F8 / F@SEL$1
  11 - SEL$B734A6F8 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$FB6C8AF9")
      ELIM_GROUPBY(@"SEL$2B4A2561")
      OUTLINE_LEAF(@"SEL$B734A6F8")
      PLACE_GROUP_BY(@"SEL$EE94F965" ( "P"@"SEL$1" ) 17)
      OUTLINE(@"SEL$2B4A2561")
      ELIM_GROUPBY(@"SEL$85CD04AE")
      OUTLINE(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$85CD04AE")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$393E5B1C")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$B734A6F8" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$B734A6F8" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$B734A6F8" "F"@"SEL$1")
      NO_ACCESS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      FULL(@"SEL$B734A6F8" "C"@"SEL$2")
      LEADING(@"SEL$B734A6F8" "CH"@"SEL$3" "F"@"SEL$1" "VW_GBF_17"@"SEL$393E5B1C" "C"@"SEL$2")
      USE_HASH(@"SEL$B734A6F8" "F"@"SEL$1")
      USE_HASH(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH(@"SEL$B734A6F8" "C"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH_AGGREGATION(@"SEL$B734A6F8")
      FULL(@"SEL$FB6C8AF9" "P"@"SEL$1")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C"."CUST_ID"="F"."CUST_ID")
   3 - access("ITEM_1"="F"."PROD_ID")
   5 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   6 - filter("P"."CATEGORY"='Premium')
   7 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   8 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
  10 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  11 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT"*"ITEM_2")[22]
   2 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   4 - (rowset=256) "ITEM_1"[NUMBER,22], "ITEM_2"[NUMBER,2]
   5 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   6 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   7 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22],
       "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   8 - "CH"."CHANNEL_ID"[NUMBER,22]
   9 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
  10 - "F".ROWID[ROWID,10]
  11 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - this is an adaptive plan


122 rows selected.



Here I just set the reporting window using bind variables and flushed the shared pool to make sure Oracle parses the query fresh. The optimizer picked a clean and efficient plan: an index range scan on FS_DATE to grab only the required time slice from FACT_SALES, then hash joins to the small dimension tables and a hash group by at the end.

All the filters from the WHERE clause show up correctly in the Predicate Information section, and the runtime stats (about 59k buffers, ~5.8 seconds elapsed, zero physical reads, zero temp) confirm a solid in-memory run. The note at the bottom marks it as an adaptive plan. This is now my "good plan", the one Real-Time SPM will lock onto and automatically revert to if a slower plan shows up later.

VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- 1) regress stats on DIM_CUSTOMER
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER,'DIM_CUSTOMER');

PL/SQL procedure successfully completed.


-- 2) force fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;
System altered.


-- 3) run the query (convert binds to DATEs)
SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category, c.region, SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4



COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
7v44tqagjad7k            0      2441995847 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '7v44tqagjad7k', 
         cursor_child_no => 0,
         format => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7v44tqagjad7k, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category, c.region,
SUM(f.amount) AS rev FROM   fact_sales f JOIN   dim_product  p ON
p.prod_id  = f.prod_id JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id WHERE  f.date_id
BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')                      AND
TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category = 'Premium'   AND
c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 2441995847

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |              |      1 |        |       |      2 |00:00:09.07 |   59199 |  14384 |  14384 |       |       |          |         |
|   1 |  HASH GROUP BY                           |              |      1 |  36018 |  2743K|      2 |00:00:09.07 |   59199 |  14384 |  14384 |  1200K|  1200K|  661K (0)|         |
|*  2 |   FILTER                                 |              |      1 |        |       |    145K|00:00:09.04 |   59199 |  14384 |  14384 |       |       |          |         |
|*  3 |    HASH JOIN                             |              |      1 |  36018 |  2743K|    145K|00:00:09.02 |   59199 |  14384 |  14384 |  2402K|  2402K| 3124K (0)|         |
|*  4 |     TABLE ACCESS FULL                    | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN                            |              |      1 |  40000 |  2500K|   1454K|00:00:08.67 |   58803 |  14384 |  14384 |   234M|    14M|   40M (1)|     116M|
|*  6 |      HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.60 |   57469 |      0 |      0 |  1922K|  1922K| 1348K (0)|         |
|*  7 |       TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |      0 |      0 |       |       |          |         |
|   8 |       TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.83 |   57467 |      0 |      0 |       |       |          |         |
|*  9 |        INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:01.04 |   17336 |      0 |      0 |       |       |          |         |
|* 10 |      TABLE ACCESS FULL                   | DIM_CUSTOMER |      1 |    212K|  6229K|    200K|00:00:00.05 |    1334 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$EE94F965
   4 - SEL$EE94F965 / P@SEL$1
   7 - SEL$EE94F965 / CH@SEL$3
   8 - SEL$EE94F965 / F@SEL$1
   9 - SEL$EE94F965 / F@SEL$1
  10 - SEL$EE94F965 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$EE94F965" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$EE94F965" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$EE94F965" "F"@"SEL$1")
      FULL(@"SEL$EE94F965" "C"@"SEL$2")
      FULL(@"SEL$EE94F965" "P"@"SEL$1")
      LEADING(@"SEL$EE94F965" "CH"@"SEL$3" "F"@"SEL$1" "C"@"SEL$2" "P"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "F"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "C"@"SEL$2")
      USE_HASH(@"SEL$EE94F965" "P"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$EE94F965" "P"@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$EE94F965")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   3 - access("P"."PROD_ID"="F"."PROD_ID")
   4 - filter("P"."CATEGORY"='Premium')
   5 - access("C"."CUST_ID"="F"."CUST_ID")
   6 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   7 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
   9 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  10 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT")[22]
   2 - "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   4 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   5 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30],
       "C"."REGION"[VARCHAR2,30]
   6 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   7 - "CH"."CHANNEL_ID"[NUMBER,22]
   8 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   9 - "F".ROWID[ROWID,10]
  10 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan


110 rows selected.







-- Real-Time SPM evidence
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SESSION SET optimizer_use_sql_plan_baselines     = TRUE;

COLUMN last_verified            FORMAT A19
COLUMN foreground_last_verified FORMAT A19
COLUMN accepted                 FORMAT A8
COLUMN enabled                  FORMAT A8
COLUMN origin                   FORMAT A20
COLUMN sql_handle               FORMAT A20
COLUMN plan_name                FORMAT A25

SELECT sql_handle,
       plan_name,
       origin,
       accepted,
       enabled,
       TO_CHAR(last_verified,'YYYY-MM-DD HH24:MI:SS')            AS last_verified,
       TO_CHAR(foreground_last_verified,'YYYY-MM-DD HH24:MI:SS') AS fg_verified
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SUM(f.amount)%'
ORDER  BY last_verified DESC;

SQL_HANDLE           PLAN_NAME                 ORIGIN               ACCEPTED ENABLED LAST_VERIFIED        FG_VERIFIED
-------------------- ------------------------- -------------------- -------- ------- -------------------- --------------------
SQL_bf3d9f8a42       SQL_PLAN_2t8da8zxfzp50    AUTO-CAPTURE         YES      YES     2025-11-24 14:06:19  2025-11-24 14:06:19
SQL_bf3d9f8a42       SQL_PLAN_7v44tqagjad7k    FOREGROUND-CAPTURE   NO       NO      2025-11-24 14:06:18  2025-11-24 14:06:18

-- SQL_PLAN_2t8da8zxfzp50 ---> the good baseline plan, automatically captured, accepted and enabled by Real-Time SPM.

-- SQL_PLAN_7v44tqagjad7k ----> the regressed plan, captured during foreground execution but rejected (ACCEPTED=NO) as it underperformed compared to the verified baseline.




In the good baseline plan (SQL_ID 2t8da8zxfzp50, plan hash 1151134654), the optimizer uses an FS_DATE index range scan with batched ROWID fetches from FACT_SALES, hash-joins the result to VW_GBF_17 (the grouped DIM_PRODUCT view) and DIM_CUSTOMER, does a tiny full scan of DIM_CHANNEL (3 rows), and finishes with a hash group by. It is fully in-memory: about 59,198 buffers, ~5.79s elapsed, zero physical reads, zero temp, with predicates correctly slicing DATE_ID. This is the efficient baseline.

After deleting stats on DIM_CUSTOMER, the regressed plan (SQL_ID 7v44tqagjad7k, plan hash 2441995847) keeps the same general shape and the FS_DATE index range scan, but cardinality misestimates change the row-source sizing. It still hash-joins and groups, yet now needs ~59,199 buffers, ~9.07s elapsed, ~14,384 physical reads, and ~116MB of temp (with dynamic sampling at level 2). In other words, the same outline with degraded cardinality estimates drives extra I/O and temp usage: about a 56% slowdown.

The numbers tell the story: Oracle tried the new plan once, learned it was bad, and reverted. This isn't magic or marketing. It's just Oracle 23ai doing what every good DBA does: watching, measuring, and reverting fast when things go south. Real-Time SPM uses the same cost and runtime data we always trusted from AWR, but now the optimizer reacts within seconds instead of waiting for us.

On large production systems, it won’t prevent every performance issue, but it dramatically reduces the window where a regression hurts. It’s safe to enable, requires no extra license beyond Enterprise Edition, and it works right out of the box in Autonomous DB.
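For completeness, on a conventional instance the automatic SPM evolve task can be switched on through DBMS_SPM.CONFIGURE. This is a sketch: DBMS_SPM.CONFIGURE and the DBA_SQL_MANAGEMENT_CONFIG view are documented, but the parameter values governing the real-time behavior may differ between releases, so verify against your version's docs before relying on it.

-- Enable the automatic SPM evolve task (sketch; check your release)
EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'ON');

-- Confirm the current setting
SELECT parameter_name, parameter_value
FROM   dba_sql_management_config
WHERE  parameter_name = 'AUTO_SPM_EVOLVE_TASK';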

In Short

  • On-prem Enterprise Edition (23ai/26ai) and Autonomous DB both support Real-Time SPM.
  • It compares runtime metrics from the Automatic SQL Tuning Set.
  • When a new plan runs slower, Oracle reinstates the previous good baseline automatically.
  • You’ll see it immediately in DBA_SQL_PLAN_BASELINES with ORIGIN = 'FOREGROUND-CAPTURE'.
  • You can watch it happen live through execution plans and elapsed times.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/


From Painful Manual LOB Shrink to Automatic SecureFiles Shrink

Posted by FatDBA on November 15, 2025

I’ve been working with LOBs for years now, and trust me, shrinking them has always been a headache. Anyone who has ever tried ALTER TABLE … SHRINK SPACE on a big SecureFiles LOB knows the pain … blocking sessions, unexpected waits, and sometimes that lovely ORA-1555 popping up at the worst time. Every DBA eventually gets into that situation where a LOB segment is 200 GB on disk but only 10 GB of real data remains. You delete rows… but the space never comes back unless you manually shrink it, which itself can cause more issues.

With the introduction of Automatic SecureFiles Shrink, Oracle really made a DBA’s life easier. This feature, which first came out in 23ai, quietly frees up unused LOB space in the background without disrupting your workload. I wanted to see how it behaves in a real scenario, so I set up a small lab and tested it out. Here’s the whole experiment, raw and simple.

Let's do a demo to understand how this new feature works. I spun up a fresh PDB and made a small tablespace just for this test. Nothing fancy.

CREATE TABLESPACE lobts
  DATAFILE '/u02/oradata/LOBTS01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M;

Tablespace created.



-- Table with a securefile LOB based column.
CREATE TABLE tst_securefile_lob
(
    id NUMBER,
    lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE (
    TABLESPACE lobts
    CACHE
);

Table created.




SELECT table_name, column_name, securefile
FROM   user_lobs
WHERE  table_name='TST_SECUREFILE_LOB';

TABLE_NAME           COLUMN_NAME   SECUREFILE
-------------------  ------------  ----------
TST_SECUREFILE_LOB   LOB_DATA      YES


Next, I inserted a good amount of junk data, around 10,000 rows of random CLOB strings. I wanted the LOB segment to be big enough to show clear differences after the shrink.

BEGIN
  FOR r IN 1 .. 10 LOOP
    INSERT INTO tst_securefile_lob (id, lob_data)
    SELECT r*100000 + level,
           TO_CLOB(DBMS_RANDOM.STRING('x', 32767))
    FROM dual
    CONNECT BY level <= 1000;
    COMMIT;
  END LOOP;
END;
/

PL/SQL procedure successfully completed.



SELECT COUNT(*) FROM tst_securefile_lob;

  COUNT(*)
----------
     10000




SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE  ul.table_name = 'TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$     131072     1024.00




-- After gathering stats, a quick check in USER_SEGMENTS showed a nice chunky LOB segment.
-- Then I deleted almost everything.
-- The table now has very few rows left, but the LOB segment is still the same size. As usual.
DELETE FROM tst_securefile_lob
WHERE id < 900000;
COMMIT;

9990 rows deleted. 
Commit complete.
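The show_securefile_space procedure used in the next step is not a built-in; it's a small helper I keep around in the lab. A minimal sketch built on the SecureFiles overload of DBMS_SPACE.SPACE_USAGE (procedure and variable names here are my own) could look like this:

```sql
-- Lab helper: print SecureFiles LOB space usage via the SecureFiles
-- overload of DBMS_SPACE.SPACE_USAGE. Names are illustrative.
CREATE OR REPLACE PROCEDURE show_securefile_space (
  p_owner   IN VARCHAR2,
  p_segment IN VARCHAR2
) AS
  l_seg_blocks    NUMBER;  l_seg_bytes    NUMBER;
  l_used_blocks   NUMBER;  l_used_bytes   NUMBER;
  l_exp_blocks    NUMBER;  l_exp_bytes    NUMBER;
  l_unexp_blocks  NUMBER;  l_unexp_bytes  NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner       => p_owner,
    segment_name        => p_segment,
    segment_type        => 'LOB',
    segment_size_blocks => l_seg_blocks,
    segment_size_bytes  => l_seg_bytes,
    used_blocks         => l_used_blocks,
    used_bytes          => l_used_bytes,
    expired_blocks      => l_exp_blocks,
    expired_bytes       => l_exp_bytes,
    unexpired_blocks    => l_unexp_blocks,
    unexpired_bytes     => l_unexp_bytes);
  DBMS_OUTPUT.PUT_LINE('Segment blocks      = ' || l_seg_blocks   || '  bytes=' || l_seg_bytes);
  DBMS_OUTPUT.PUT_LINE('Used blocks         = ' || l_used_blocks  || '  bytes=' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Expired blocks      = ' || l_exp_blocks   || '  bytes=' || l_exp_bytes);
  DBMS_OUTPUT.PUT_LINE('Unexpired blocks    = ' || l_unexp_blocks || '  bytes=' || l_unexp_bytes);
END;
/
```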


-- Checking LOB Internal Usage (Before Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 131072  bytes=1073741824
Used blocks         =  10240  bytes=83886080
Expired blocks      = 110592  bytes=905969664
Unexpired blocks    =  10240  bytes=83886080
-- This clearly shows almost the entire segment is expired/free but not reclaimed.


-- Checking Auto Shrink Statistics (Before Enabling)
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE '%Auto SF SHK%'
ORDER  BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               0
Auto SF SHK successful                       0
Auto SF SHK total number of tasks            0

Turning on Automatic Shrink — By default this feature is OFF, so I enabled it:

EXEC DBMS_SPACE.SECUREFILE_SHRINK_ENABLE;

PL/SQL procedure successfully completed.

That’s literally it. No parameters, no tuning, nothing else. Just enable.

Automatic SecureFiles Shrink does not run immediately after you delete data. Oracle requires that a SecureFiles LOB segment be idle for a specific amount of time before it becomes eligible for shrinking, and the default idle-time limit is 1,440 minutes (24 hours). “Idle” means that no DML or preallocation activity has occurred on that LOB during that period. Once the segment meets this condition, Oracle considers it during its automatic background shrink task, which is part of AutoTask and runs every 30 minutes with a defined processing window.

When the task executes, it attempts to shrink eligible segments, but it does so gradually and in small increments using a trickle-based approach, rather than reclaiming all possible space in a single operation. This incremental behavior is deliberate: it reduces impact on running workloads and avoids heavy reorganization all at once. Only segments that meet all selection criteria .. such as having sufficient free space above the defined thresholds and not using RETENTION MAX … are processed. Because of this incremental design and the eligibility rules, the full space reclamation process can span multiple background cycles.

Some internal hidden/underscore parameters control these limits (don't touch them unless Oracle Support asks you to, or you're working on a lab system):

Parameter                                    Default_Value    Session_Value    Instance_Value   IS_SESSION_MODIFIABLE   IS_SYSTEM_MODIFIABLE
------------------------------------------- ---------------  ---------------- ---------------- ----------------------- ---------------------
_ktsls_autoshrink_seg_idle_seconds            86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_seg_pen_seconds             86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_trickle_mb                  5                5                5                FALSE                   IMMEDIATE
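For reference, output like the table above typically comes from the classic hidden-parameter query against the X$ views; this must be run as SYS, and hidden parameter names can change between releases:

```sql
-- Classic hidden-parameter lookup (run as SYS).
-- Lists the auto-shrink underscore parameters with their values.
SELECT p.ksppinm  AS parameter,
       s.ksppstvl AS session_value,
       v.ksppstvl AS instance_value,
       p.ksppdesc AS description
FROM   x$ksppi  p,
       x$ksppcv s,
       x$ksppsv v
WHERE  p.indx = s.indx
AND    p.indx = v.indx
AND    p.ksppinm LIKE '\_ktsls\_autoshrink%' ESCAPE '\';
```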

Okay, let's check the post-change output to see what automatic LOB shrink did to our test object.

SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE ul.table_name='TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$      40960      320.00

Note: From ~1024 MB down to ~320 MB. Auto shrink worked     🙂



-- DBMS_SPACE Usage (After Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 40960  bytes=335544320
Used blocks         =  9216  bytes=75497472
Expired blocks      =  6144  bytes=50331648
Unexpired blocks    =  9216  bytes=75497472

Note: Expired blocks dropped from ~110K to ~6K. This confirms auto shrink freed most of the fragmented space.






-- After the task is run.
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%Auto SF SHK%'
ORDER BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               1
Auto SF SHK successful                       1
Auto SF SHK total number of tasks            1




SELECT owner,
       segment_name,
       shrunk_bytes,
       attempts,
       last_shrink_time
FROM v$securefile_shrink;

OWNER      SEGMENT_NAME               SHRUNK_BYTES    ATTEMPTS   LAST_SHRINK_TIME
---------  ------------------------   -------------   ---------  ---------------------------
PRASHANT   SYS_LOB0001234567C00002$      744947712          1     14-NOV-2025 06:32:15

Note:  Oracle automatically reclaimed ~710 MB of wasted LOB space.

This feature is simple, it's safe, and it saves DBAs from doing manual shrink maintenance again and again. It isn't a fast feature (it's slow and polite on purpose), but it works exactly as expected.

If your system has LOBs (EBS attachments, documents, JSON, logs, media files, etc.), you should absolutely enable this. Let Oracle handle the boring part.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »

fatdba explores Vector Search in Oracle 23ai

Posted by FatDBA on July 23, 2025

So Oracle rolled out 23ai a while back and, like every major release, it came packed with some really cool and interesting features. One that definitely caught my eye was Vector Search. I couldn't resist diving in… recently I explored it in depth and would like to share a few thoughts on the subject.

You see, we’ve been doing LIKE '%tax policy%' since forever. But now, Oracle’s SQL has become more powerful. Not only does it match words … it matches meaning.

So here’s me trying to explain what vector search is, how Oracle does it, why you’d care, and some examples that’ll hopefully make it click.

What’s Vector Search, Anyway?

Alright, imagine this:

You have a table of products. You search for “lightweight laptop for travel”.
Some entries say “ultrabook”, others say “portable notebook”, and none mention “lightweight” or “travel”. Old-school SQL would’ve said: “No Matches Found”

But with vector search, it gets it. Oracle turns all that text into math .. basically, a long list of numbers called a vector … and compares meanings instead of words.

So What’s a Vector?

When we say “vector” in vector search, we’re not talking about geometry class. In the world of AI and databases, a vector is just a long list of numbers … each number representing some aspect or feature of the original input (like a sentence, product description, image, etc.).

Here’s a basic example:
[0.12, -0.45, 0.88, …, 0.03]
This is a vector … maybe a 512 or 1536-dimension one .. depending on the embedding model used (like OpenAI, Oracle’s built-in model, Cohere, etc.).

Each number in this list is abstract, but together they represent the essence or meaning of your data.

Let’s say you have these two phrases:
“Apple is a tech company”
“iPhone maker based in California”

Now, even though they don’t share many words, they mean nearly the same thing. When passed through an embedding model, both phrases are converted into vectors:

Vector A: [0.21, -0.32, 0.76, …, 0.02]
Vector B: [0.22, -0.30, 0.74, …, 0.01]

They look very close … and that’s exactly the point.
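You can see this numerically right in SQL. As a sketch, assuming the 23ai TO_VECTOR constructor and VECTOR_DISTANCE function are available in your build, a near-identical pair gives a cosine distance close to 0, while an unrelated pair drifts toward 1 (or beyond):

```sql
-- Cosine distance: ~0 = same direction (similar meaning),
-- larger values = less similar. Vectors here are made up.
SELECT VECTOR_DISTANCE(TO_VECTOR('[0.21, -0.32, 0.76]'),
                       TO_VECTOR('[0.22, -0.30, 0.74]'),
                       COSINE) AS close_pair,
       VECTOR_DISTANCE(TO_VECTOR('[0.21, -0.32, 0.76]'),
                       TO_VECTOR('[-0.90, 0.10, -0.40]'),
                       COSINE) AS far_pair
FROM   dual;
```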

What Oracle 23ai Gives You

  • A new VECTOR datatype (yeah!)
  • AI_VECTOR() function to convert text into vectors
  • VECTOR_INDEX to make search blazing fast
  • VECTOR_DISTANCE() to measure similarity
  • It’s all native in SQL ..no need for another vector DB bolted on

Let’s Build Something Step-by-Step

We’ll build a simple product table and do a vector search on it.

Step 1: Create the table

CREATE TABLE products (
  product_id     NUMBER PRIMARY KEY,
  product_name   VARCHAR2(100),
  description    VARCHAR2(1000),
  embedding      VECTOR(1536)
);

1536? Yeah, that’s the number of dimensions from Oracle’s built-in embedding model. Depends on which one you use.

Step 2: Generate vector embeddings

UPDATE products
SET embedding = ai_vector('text_embedding', description);

This’ll take the description, pass it through Oracle’s AI model, and give you a vector. Magic.

Step 3: Create the vector index

CREATE VECTOR INDEX product_vec_idx
ON products (embedding)
ORGANIZATION NEIGHBOR PARTITIONS
DISTANCE COSINE;

This speeds up the similarity comparisons … much like an index does for normal WHERE clauses.

Step 4: Semantic Search in SQL

SELECT product_id, product_name, 
       VECTOR_DISTANCE(embedding, ai_vector('text_embedding', 'light laptop for designers')) AS score
FROM products
ORDER BY score
FETCH FIRST 5 ROWS ONLY;

Now we’re searching for meaning, not words.

VECTOR_DISTANCE Breakdown

You can use different math behind the scenes:

VECTOR_DISTANCE(v1, v2, COSINE)
VECTOR_DISTANCE(v1, v2, EUCLIDEAN)
VECTOR_DISTANCE(v1, v2, DOT)

Cosine is the usual go-to for text. Oracle handles the rest for you.

Use Cases You’ll Actually Care About

1. Semantic Product Search — “Fast shoes for runners” => shows “Nike Vaporfly”, even if it doesn’t say “fast”.

2. Similar Document Retrieval — Find all NDAs that look like this one (even with totally different words).

3. Customer Ticket Suggestion — Auto-suggest resolutions from past tickets. Saves your support team hours.

4. Content Recommendation — “People who read this also read…” kind of stuff. Easy to build now.

5. Risk or Fraud Pattern Matching — Find transactions that feel like fraud ..even if the details don’t match 1:1.

I know it might sound a little confusing, so let's do one more example: Legal Document Matching.

CREATE TABLE legal_docs (
  doc_id       NUMBER PRIMARY KEY,
  title        VARCHAR2(255),
  content      CLOB,
  content_vec  VECTOR(1536)
);

Update vectors:

UPDATE legal_docs
SET content_vec = ai_vector('text_embedding', content);

Now find similar docs:

SELECT doc_id, title
FROM legal_docs
ORDER BY VECTOR_DISTANCE(content_vec, ai_vector('text_embedding', 'confidentiality in government contracts'))
FETCH FIRST 10 ROWS ONLY;

That’s it. You’re officially building an AI-powered legal search engine.

Things to Know

  • Creating vectors can be heavy .. batch it.
  • Indexing speeds up similarity search a lot.
  • Combine with normal filters for best results:
SELECT * FROM products
WHERE category = 'laptop'
ORDER BY VECTOR_DISTANCE(embedding, ai_vector('text_embedding', 'gaming laptop under 1kg'))
FETCH FIRST 5 ROWS ONLY;

Final Thoughts from fatdba

I'm honestly impressed. Oracle took something that felt like ML black magic and put it right into SQL. No external service. No complicated setup. Just regular SQL, but smarter.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/

Posted in Uncategorized | Tagged: , , , , , , , , , , , , , , | Leave a Comment »

All new Oracle 23ai new views for enhanced Data Pump troubleshooting

Posted by FatDBA on April 2, 2025

Oracle 23ai (the release formerly known as Oracle 23c) introduces several powerful enhancements aimed at improving observability, diagnostics, and performance debugging. Among these are three new dynamic performance views designed specifically to help DBAs and developers troubleshoot Data Pump performance issues more efficiently and in real time.

When dealing with large data exports or imports using Oracle Data Pump, performance bottlenecks or unexplained hangs can be frustrating and time-consuming to investigate. Until now, DBAs had to rely heavily on Data Pump log files, trace files, and session-level v$ views to diagnose problems. With the introduction of the following views, Oracle has taken a major step toward simplifying that process:

The three new views are: GV$DATAPUMP_PROCESS_INFO, GV$DATAPUMP_PROCESSWAIT_INFO and GV$DATAPUMP_SESSIONWAIT_INFO

These views provide real-time information about the state of Data Pump processes, their wait events, and any session-level contentions. GV$ views return cluster-wide information in a RAC environment, while V$ views return information specific to the current instance.

  • GV$DATAPUMP_PROCESS_INFO – This view shows the current Data Pump processes, including both master and worker processes. It provides basic information like the program name, session ID, username, job name, status, and system process ID.
SELECT * FROM V$DATAPUMP_PROCESS_INFO;

CUR_DATE             PROGRAM               SESSIONID  STATUS  USERNAME  JOBNAME             SPID
-------------------  --------------------  ---------  ------  --------  ------------------  -------
2023-01-09 13:56:07  ude@orcl (TNS V1-V3)         42  ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891480
2023-01-09 13:56:07  oracle@orcl (DW00)           48  ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891500

  • GV$DATAPUMP_PROCESSWAIT_INFO – This view helps detect contention between Data Pump processes. It shows which sessions are waiting, what events they are waiting for, and which other sessions may be blocking them.
    SELECT * FROM GV$DATAPUMP_PROCESSWAIT_INFO;
    
    WAITING_SESSION  HOLDING_SESSION  EVENT                 PROGRAM_WAITSESSION  PROGRAM_HOLDINGSESSION
    ---------------  ---------------  --------------------  -------------------  ----------------------
                174               57  enq: TM - contention  oracle@orcl (DM00)   oracle@orcl (DW00)
    
    • GV$DATAPUMP_SESSIONWAIT_INFO – Provides deep insights into session-level waits during Data Pump operations, including how long the session has been in the wait state and what it’s waiting on.
    SELECT * FROM GV$DATAPUMP_SESSIONWAIT_INFO;
    
    WAITING_SESSION  EVENT                 DP_SECONDS_IN_WAIT  DP_STATE_IN_WAIT  DP_P1TEXT  DP_P1
    ---------------  --------------------  ------------------  ----------------  ---------  -----
                 46  enq: TM - contention                8086  WAITING           name       mode
    
    

    Before Oracle 23ai, debugging Data Pump jobs required checking logs, trace files, and manual session analysis. These new views provide real-time visibility into what each Data Pump process is doing, what it is waiting on, and where it might be blocked.

    Use Cases:

    1. If a Data Pump job appears hung, GV$DATAPUMP_PROCESSWAIT_INFO can help identify which process is waiting and what it is waiting on.
    2. If sessions are slow or idle, GV$DATAPUMP_SESSIONWAIT_INFO provides detailed timing and wait reasons.
    3. If parallel execution is used, GV$DATAPUMP_PROCESS_INFO shows how many worker processes are active and whether they are all functioning as expected.
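    Building on use case 1, a quick triage query can map the blocker reported by GV$DATAPUMP_PROCESSWAIT_INFO back to GV$SESSION to see who is holding things up. This is a sketch; the column names are taken from the sample output above, and in RAC you would also match on INST_ID:

```sql
-- Who is blocking my Data Pump worker, and what is that session doing?
-- Hypothetical triage query; add INST_ID to the join condition in RAC.
SELECT w.waiting_session,
       w.holding_session,
       w.event,
       s.username AS holder_user,
       s.sql_id   AS holder_sql_id,
       s.status   AS holder_status
FROM   gv$datapump_processwait_info w
JOIN   gv$session s
  ON   s.sid = w.holding_session;
```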

    The new Data Pump views in Oracle 23ai are a significant step forward for real-time performance diagnostics. These views provide valuable insights that were previously hard to obtain, allowing DBAs to troubleshoot and optimize Data Pump operations with much more confidence and clarity.

    Hope It Helped!
    Prashant Dixit

    Posted in Uncategorized | Tagged: , , , | Leave a Comment »

    Materialized Views: Concurrent Refresh in Oracle 23c

    Posted by FatDBA on December 28, 2024

    Oracle Database 23c introduces a powerful new feature allowing concurrent refresh of materialized views. This enhancement improves performance by enabling multiple users to refresh the same materialized view at the same time.

    Before Oracle 23c, refreshing a materialized view (to reflect changes in the base tables) could only happen sequentially. This meant:

    • If multiple users triggered a refresh at the same time, Oracle handled them one by one.
    • This could cause delays in environments where real-time data updates were needed.

    For example, during month-end reporting, if multiple users tried to refresh a sales summary MV, they had to wait for each other, impacting productivity.

    Oracle 23c allows concurrent refreshes, enabling multiple users to refresh the materialized view simultaneously. This improves efficiency, especially for fast-refresh MVs, which update incrementally. Let’s walk through a detailed example to demonstrate how to create, configure, and utilize concurrent refresh.

    CREATE TABLE sales (
      sale_id NUMBER(10),
      sale_date DATE,
      amount NUMBER(10, 2),
      customer_id NUMBER(10),
      CONSTRAINT sales_pk PRIMARY KEY (sale_id)
    );
    
    CREATE TABLE customers (
      customer_id NUMBER(10),
      customer_name VARCHAR2(100),
      CONSTRAINT customers_pk PRIMARY KEY (customer_id)
    );
    
    
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO customers VALUES (2, 'Bob');
    
    INSERT INTO sales VALUES (101, '01-JAN-2024', 500, 1);
    INSERT INTO sales VALUES (102, '05-JAN-2024', 300, 1);
    INSERT INTO sales VALUES (103, '10-JAN-2024', 450, 2);
    
    

    Next will create MV logs as Materialized view logs track changes to base tables, enabling fast refresh.

    CREATE MATERIALIZED VIEW LOG ON sales
    WITH ROWID, SEQUENCE (sale_id, sale_date, amount, customer_id)
    INCLUDING NEW VALUES;
    
    CREATE MATERIALIZED VIEW LOG ON customers
    WITH ROWID, SEQUENCE (customer_id, customer_name)
    INCLUDING NEW VALUES;
    
    
    --- Create MView with concurrent refresh option.
    CREATE MATERIALIZED VIEW sales_summary_mv
    REFRESH FAST ON COMMIT
    ENABLE CONCURRENT REFRESH
    AS
    SELECT c.customer_name,
           COUNT(s.sale_id) AS total_sales,
           SUM(s.amount) AS total_amount
    FROM sales s
    JOIN customers c ON s.customer_id = c.customer_id
    GROUP BY c.customer_name;
    
    -- collect stats to reflect row changes.
    exec dbms_stats.gather_table_stats(null, 'sales_summary_mv');
    
    

    Now we can trigger concurrent refresh.

    EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F');  -- Fast refresh
    EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'C');  -- Complete refresh
    
    
    -- In our case the concurrent_refresh_enabled returns Y, means concurrent refresh is enabled.
    SELECT mview_name, concurrent_refresh_enabled
    FROM user_mviews
    WHERE mview_name = 'SALES_SUMMARY_MV';
    
    MVIEW_NAME                     CONCURRENT_REFRESH_ENABLED
    ------------------------------ ------------------------------
    SALES_SUMMARY_MV              Y
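    To actually exercise the concurrency, you can fire two refreshes of the same MV at the same time from background jobs. A quick sketch using DBMS_SCHEDULER (the job names are arbitrary); before 23c the second refresh would simply queue behind the first:

```sql
-- Launch two fast refreshes of the same MV concurrently via scheduler jobs.
BEGIN
  FOR i IN 1 .. 2 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'MV_REFRESH_JOB_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN DBMS_MVIEW.REFRESH(''SALES_SUMMARY_MV'', ''F''); END;',
      enabled    => TRUE,    -- start immediately
      auto_drop  => TRUE);   -- drop the job after it completes
  END LOOP;
END;
/
```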
    

    Oracle 23c’s concurrent refresh for materialized views is a game-changer for data-intensive applications. By allowing multiple users to refresh views in parallel, businesses can significantly boost performance and ensure faster reporting.

    Hope It Helped!
    Prashant Dixit

    Posted in Uncategorized | Tagged: , , , , , , | Leave a Comment »

    Effortless LOB Segment Renaming in Oracle 23ai: A Game-Changer for Database Management

    Posted by FatDBA on December 28, 2024

    Oracle Database 23ai introduces a groundbreaking feature that simplifies the renaming of LOB segments. This enhancement offers database administrators a more efficient way to manage large data objects without resorting to complex workarounds.

    LOBs play a critical role in storing massive data types such as images, documents, and videos. As databases grow and evolve, LOB segment names can become outdated or misaligned with business needs. Previously, renaming required duplicating LOBs, migrating data, and deleting old segments—an inefficient process. Before Oracle 23ai:

    • Create a New LOB Column: A new LOB column had to be created in the table with the desired segment name.
    • Migrate Data: Data from the old LOB segment had to be copied to the new LOB column.
    • Drop Old LOB: Once the migration was completed, the old LOB column and segment were dropped.
    • Rename Indirectly: This indirect process often required downtime and posed risks of data loss or corruption during migration.

    Example (Pre-23ai Approach):

    ALTER TABLE media ADD (file_data_temp BLOB);
    UPDATE media SET file_data_temp = file_data;
    ALTER TABLE media DROP COLUMN file_data;
    ALTER TABLE media RENAME COLUMN file_data_temp TO file_data;
    

    This method was cumbersome and inefficient for large databases … With Oracle 23ai, renaming LOB segments is now as simple as running a single command 🙂 🙂

    Renaming LOB segments can be achieved with the following straightforward SQL command:

    ALTER TABLE <table_name> MODIFY LOB (<lob_column>) (RENAME TO <new_segment_name>);

    Suppose you manage a media table with a LOB column file_data whose segment is currently named FILE_DATA_LOB. To rename it to MEDIA_ARCHIVE_LOB:

    ALTER TABLE media MODIFY LOB (file_data) (RENAME TO MEDIA_ARCHIVE_LOB);

    Practical Applications:

    • Data Reorganization: Update segment names to align with evolving data categories.
    • Simplifying Audits: Ensure LOB segment names clearly represent their purpose.
    • Consistency Enforcement: Standardize naming conventions across different LOB segments.

    Important Considerations

    • Dependencies: Check for application-level dependencies that reference LOB segment names.
    • User Privileges: Ensure you have the required ALTER permissions.
    • Tablespace Integrity: Segment renaming does not affect the tablespace or physical location.

    Automating LOB Segment Renaming

    For larger environments, automating LOB segment renaming can save significant time. The following PL/SQL script demonstrates how to rename multiple segments programmatically:

    BEGIN
      FOR r IN (SELECT table_name, column_name FROM user_lobs) LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || r.table_name || 
                         ' MODIFY LOB (' || r.column_name || 
                         ') (RENAME TO ' || r.column_name || '_UPDATEDLOB)';
      END LOOP;
    END;
    /
    

    This script appends _UPDATEDLOB to all LOB segments in user tables.
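    Afterwards, you can verify the new segment names straight from USER_LOBS:

```sql
-- Confirm the renamed LOB segment names.
SELECT table_name, column_name, segment_name
FROM   user_lobs
ORDER  BY table_name, column_name;
```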

    Final Thoughts

    The ability to rename LOB segments directly in Oracle Database 23ai marks a significant step forward in database administration. By eliminating complex workarounds, this feature enhances efficiency and ensures database environments remain well-organized. Whether standardizing segment names or adapting to shifting business needs, this capability helps administrators maintain cleaner, more streamlined data structures.

    Hope It Helped!
    Prashant Dixit

    Posted in Uncategorized | Tagged: , , , , | Leave a Comment »

    Oracle 23ai Tip: Use SESSION_EXIT_ON_PACKAGE_STATE_ERROR to Prevent Silent Data Corruption

    Posted by FatDBA on December 28, 2024

    Oracle Database 23ai introduces a new parameter, SESSION_EXIT_ON_PACKAGE_STATE_ERROR, designed to enhance session management and prevent potential data corruption by enforcing a hard session exit when the session state becomes invalidated.

    Why SESSION_EXIT_ON_PACKAGE_STATE_ERROR Matters ?

    In typical Oracle database environments, stateful PL/SQL packages, MLE modules, or environments may be modified while sessions actively use them. This can lead to errors such as:

    • ORA-04068: Can occur when a PL/SQL package body is recompiled, invalidating the session state.
    • ORA-4106 / ORA-4107: Can be raised when an MLE module or environment is altered via DDL, invalidating the session.

    By default, the session remains active and throws an error when the invalid package or module is called. However, many applications may not properly handle these errors, leading to silent data corruption or unexpected behavior.

    The SESSION_EXIT_ON_PACKAGE_STATE_ERROR parameter mitigates this risk by forcing an immediate session exit instead of raising an error.
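    To see the default (FALSE) behavior for yourself, you can reproduce ORA-04068 in a lab: create a stateful package, touch its state in session 1, recompile the body from session 2, then call it again from session 1. A minimal sketch (object names are my own):

```sql
-- Session 1: create a package with state and initialize it.
CREATE OR REPLACE PACKAGE pkg_state AS
  g_counter NUMBER := 0;     -- package state, held per session
  PROCEDURE bump;
END;
/
CREATE OR REPLACE PACKAGE BODY pkg_state AS
  PROCEDURE bump IS BEGIN g_counter := g_counter + 1; END;
END;
/
EXEC pkg_state.bump;         -- session 1 now holds package state

-- Session 2: recompile the body, invalidating session 1's state.
ALTER PACKAGE pkg_state COMPILE BODY;

-- Session 1: the next call raises ORA-04068 (existing state discarded).
-- With SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE the session exits instead.
EXEC pkg_state.bump;
```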

    Some of the benefits of using this parameter:

    • Prevents Data Corruption: By terminating sessions with invalid state, the risk of silent data corruption is reduced.
    • Simplifies Error Handling: Many applications are better at handling session disconnects than catching specific errors like ORA-04068.
    • Consistency Across Sessions: Ensures that all sessions dealing with modified packages or MLE modules are treated consistently, minimizing inconsistencies.

    How SESSION_EXIT_ON_PACKAGE_STATE_ERROR Works

    When SESSION_EXIT_ON_PACKAGE_STATE_ERROR is set to TRUE, the following behavior is enforced:

    1. PL/SQL Package Modification:
      • If a stateful PL/SQL package is modified, any active session that tries to invoke the package receives ORA-04068. With this parameter set to TRUE, the session exits immediately instead of raising the error.
    2. MLE Module or Environment Modification:
      • If an MLE module or environment is modified via DDL, active sessions receive ORA-4106 or ORA-4107. With SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE, these sessions are forcibly disconnected.
    3. Application Handling:
      • Most applications are designed to capture session disconnects and reestablish connections, streamlining recovery from session invalidation.

    Use Cases

    • High-Availability Environments: In systems where continuous uptime is critical, preventing data corruption is paramount.
    • Distributed Applications: Applications spread across multiple environments that frequently modify PL/SQL packages or MLE modules benefit from session termination to maintain data integrity.
    • Oracle RAC Deployments: Different instances in an Oracle RAC environment can independently configure this parameter, allowing fine-grained control based on workload requirements.

    Configuring SESSION_EXIT_ON_PACKAGE_STATE_ERROR:

    Examples:
    ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
    ALTER SESSION SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
    ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE SCOPE = SPFILE;

    Considerations

    • Default Behavior: By default, this parameter is set to FALSE, meaning sessions will raise errors rather than exit.
    • Testing and Validation: Test this configuration in lower environments to ensure application compatibility.
    • Session Management: Monitor session disconnects to ensure that forced exits do not disrupt critical workflows.

    Conclusion

    SESSION_EXIT_ON_PACKAGE_STATE_ERROR is a powerful new feature in Oracle Database 23ai that enhances session management by enforcing session termination on package or module state invalidation. By using this parameter, Oracle environments can significantly reduce the risk of data corruption and streamline error handling processes across diverse applications. Whether managing PL/SQL packages or MLE modules, this parameter offers greater control and reliability for database administrators and developers both.

    Hope It Helped!
    Prashant Dixit

    Posted in Uncategorized | Tagged: , , , , , , | Leave a Comment »

    How I see vector indexing and search in Oracle 23ai

    Posted by FatDBA on November 1, 2024

    Hi everyone!

    It’s been a while since Oracle 23c (or 23c AI) was released, and there’s already a ton of great content out there on one of its coolest new features: Vector Indexes and Searches. This feature has opened up exciting possibilities for AI-driven applications, from advanced recommendations to similarity searches within databases. Now, it’s finally my turn to dive in and put it to the test!

    Vector search in Oracle 23c AI is a feature designed for searching, comparing, and finding similarities between data represented as vectors (numerical arrays). This capability is particularly useful for AI and machine learning applications, where data like images, text, and products are often encoded as vector embeddings. Oracle’s vector search enables the database to efficiently process these embeddings, allowing users to perform similarity-based queries within the database. Oracle’s vector search in 23c AI makes it possible to bring advanced similarity-based search and recommendation capabilities directly into the database, facilitating powerful new applications that leverage the latest in AI and machine learning.

    The base or the foundation of this feature is VECTOR data type, a powerful tool for storing and managing vector embeddings directly within the database. This capability allows Oracle to support advanced AI and machine learning applications, such as similarity searches and recommendation engines, by efficiently handling multi-dimensional arrays that represent data features, semantics, or other complex characteristics.

    Key Advantages of the VECTOR Data Type

    1. Compact Multi-Dimensional Array Storage:
    • VECTOR allows fixed-length arrays of numbers (often floating-point) to be stored compactly in a single column. A 100-dimensional vector, for instance, might represent an embedding for text, an image, or a user profile, capturing nuanced data in a manageable form.
    2. Efficient Similarity Calculations:
    • With Oracle 23c AI, VECTOR data can be processed for similarity searches directly in the database. Functions like VECTOR_DISTANCE make it easy to calculate similarity scores or distances between vectors, enabling Oracle to quickly identify similar items without needing external tools or complex data transformations.
    3. Tailored for AI and Machine Learning:
    • VECTOR is ideal for applications that require semantic searches, such as finding similar products, media, or documents, and for AI-driven recommendations. This makes it well-suited for Natural Language Processing (NLP), recommendation engines, and image recognition tasks.
    4. Support for Various Distance Metrics:
    • Oracle’s VECTOR data type is compatible with common distance metrics, such as Euclidean and cosine similarity, which measure how close or far two vectors are from each other. This enables efficient and flexible similarity calculations, essential for applications relying on vector comparisons.
    5. Scalability with Vector Indexing:
    • Oracle 23c AI allows for VECTOR data indexing through VECTOR_INDEX, optimizing performance for vector-based queries. This makes real-time similarity searches on large datasets feasible, supporting applications that need quick response times, such as real-time AI insights and personalized recommendations.

    The VECTOR data type in Oracle 23c AI is a robust addition for data scientists and developers, enabling high-performance similarity searches, efficient data storage, and a foundation for complex AI and machine learning workloads directly within the Oracle database.

    Alright, now that the stage is set, let's see how Vector Indexes and Searches bring a whole new dimension to Oracle 23c AI. Here's a detailed, step-by-step demo that simulates a product recommendation system, using vector embeddings to represent product features.

    1. Set Up the Environment

    Ensure you’re using Oracle 23c AI, as vector indexes and the VECTOR data type are new features available in this release.

    2. Create a Table with Vector Embeddings

    In this example, each product is represented by a 10-dimensional vector embedding. This vector could represent a combination of product attributes, like color, category, brand, and other numerical encodings.

    CREATE TABLE products (
        product_id NUMBER PRIMARY KEY,
        product_name VARCHAR2(100),
        product_description VARCHAR2(255),
        product_embedding VECTOR(10) -- Vector with 10 dimensions
    );
    

    3. Insert Sample Data

    Populate the table with sample products and their vector embeddings. Here’s an example where each vector represents the product’s characteristics:

    INSERT INTO products (product_id, product_name, product_description, product_embedding) VALUES 
        (1, 'Laptop A', 'High-performance laptop for professionals', TO_VECTOR('[0.5, 0.2, 0.7, 0.1, 0.4, 0.3, 0.8, 0.6, 0.9, 0.3]')),
        (2, 'Smartphone B', 'Latest smartphone with advanced features', TO_VECTOR('[0.6, 0.1, 0.8, 0.5, 0.2, 0.7, 0.4, 0.3, 0.6, 0.1]')),
        (3, 'Tablet C', 'Compact and powerful tablet for multimedia', TO_VECTOR('[0.4, 0.9, 0.5, 0.7, 0.3, 0.6, 0.2, 0.8, 0.1, 0.9]')),
        (4, 'Laptop D', 'Affordable laptop with reliable performance', TO_VECTOR('[0.7, 0.6, 0.4, 0.9, 0.8, 0.2, 0.5, 0.3, 0.1, 0.4]')),
        (5, 'Headphones E', 'Noise-canceling headphones with high fidelity', TO_VECTOR('[0.3, 0.8, 0.1, 0.9, 0.6, 0.4, 0.7, 0.2, 0.5, 0.6]'));
    

    4. Create a Vector Index for Efficient Searches

    Creating a vector index allows Oracle to search these embeddings efficiently based on similarity, which is ideal for recommendation systems, image retrieval, and text similarity searches.

    CREATE VECTOR INDEX product_vector_idx ON products (product_embedding)
        ORGANIZATION INMEMORY NEIGHBOR GRAPH  -- HNSW-style in-memory graph index
        DISTANCE COSINE;
    

    5. Perform a Vector Similarity Search

    With the vector index in place, you can now perform searches to find products that are similar to a given vector. For example, suppose a user has shown interest in products that correspond to a vector representation of [0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]. You can use this vector to find products with similar embeddings.

    SELECT product_name, product_description, product_embedding
    FROM products
    ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
    FETCH FIRST 5 ROWS ONLY;
    

    In this query:

    • The VECTOR_DISTANCE function calculates the similarity (distance) between the target vector and each product’s embedding. Sorting by VECTOR_DISTANCE allows you to retrieve the closest matches.
    • This example fetches the top 5 most similar products.
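
    To have the optimizer actually use the vector index for an approximate nearest-neighbor search rather than an exact scan, 23ai also supports an approximate fetch clause. A minimal sketch of the same query (APPROX trades a small amount of accuracy for speed):

    SELECT product_name, product_description
    FROM products
    ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]'))
    FETCH APPROX FIRST 5 ROWS ONLY;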

    6. Use Case: Personalized Product Recommendation

    This setup is ideal for applications such as a product recommendation engine. If a user profile is represented as a vector, you can run a vector search to find products similar to that profile, producing highly relevant recommendations.
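
    A minimal sketch of that idea; the user_profiles table, its user_id and profile_embedding (a VECTOR(10)) columns, and the bind variable are hypothetical:

    SELECT p.product_name, p.product_description
    FROM products p
    JOIN user_profiles u ON u.user_id = :current_user  -- hypothetical profile table
    ORDER BY VECTOR_DISTANCE(p.product_embedding, u.profile_embedding)
    FETCH FIRST 3 ROWS ONLY;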

    7. Handling Updates to Vector Embeddings

    If a product’s features change, you may need to update its embedding. Here’s how you could update the vector for a specific product:

    UPDATE products
    SET product_embedding = TO_VECTOR('[0.7, 0.3, 0.9, 0.5, 0.6, 0.4, 0.8, 0.2, 0.3, 0.5]')
    WHERE product_id = 2;
    

    8. Advanced Use Case: Nearest Neighbor Search with Filtering

    Let’s extend the search by filtering for specific criteria. For example, if you only want to recommend products that contain “Laptop” in the name, you can add a WHERE clause to the query.

    SELECT product_name, product_description, product_embedding
    FROM products
    WHERE product_name LIKE '%Laptop%'
    ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
    FETCH FIRST 5 ROWS ONLY;
    

    9. Understanding the Vector Distance Metric

    Before wrapping up, it’s worth being clear about what vector similarity actually means:

    • Vector Distance (Similarity): This metric measures how similar or dissimilar two vectors are; a smaller distance implies a higher similarity. Oracle 23ai’s VECTOR_DISTANCE function calculates this distance to identify closely related items, making it ideal for applications in recommendations, content search, and clustering.
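
    A quick way to build intuition is to compare the metrics on two orthogonal vectors; the commented values are the mathematically expected results:

    SELECT VECTOR_DISTANCE(TO_VECTOR('[1, 0]'), TO_VECTOR('[0, 1]'), COSINE)    AS cosine_dist,    -- 1 (no directional overlap)
           VECTOR_DISTANCE(TO_VECTOR('[1, 0]'), TO_VECTOR('[0, 1]'), EUCLIDEAN) AS euclidean_dist  -- ~1.414, i.e. sqrt(2)
    FROM dual;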

    10. Additional Notes for Optimization and Use Cases

    • Index Maintenance: Like other indexes, a vector index should be maintained for performance. Monitor index usage and rebuild it when necessary on large datasets.
    • Scalability: Oracle 23ai’s vector search is optimized for large-scale deployments, making it suitable for extensive recommendation systems, content-based searches, and personalized user experiences.

    This walkthrough covered Oracle 23ai’s new vector search and indexing features end to end, from creating a VECTOR column to running filtered similarity searches.

    Hope It Helped!
    Prashant Dixit

    Posted in Uncategorized | Leave a Comment »

    Adapting Data Pump for the Cloud: Trailer Blocks in 23ai

    Posted by FatDBA on November 1, 2024

    Oracle Database 23ai introduces a new trailer block format for Data Pump dump files, moving away from the traditional header block approach used in earlier versions like 21c. This new structure is designed specifically to support cloud storage environments, particularly within Oracle Cloud Infrastructure (OCI), and makes it easier to manage and transfer Data Pump files to object stores.

    What’s Different with Trailer Blocks?

    Trailer blocks are written only once the export process completes, contrasting with header blocks, which required upfront metadata and continuous updates throughout the job. This deferred approach means trailer blocks accumulate necessary metadata during the export process, allowing it to be stored in a more cloud-compatible format at the end. The switch to trailer blocks by default enables seamless storage of dump files in cloud object stores and reduces the limitations associated with local-only file systems.

    Note: Trailer block files are compatible only with Oracle Database 23ai and later. To ensure backward compatibility with previous Oracle versions, you can use the VERSION parameter to revert to the legacy header block format.

    How to Use Trailer Blocks with Cloud Object Storage

    When running Data Pump in Oracle 23ai, trailer blocks allow straightforward use with object storage by configuring these essential parameters:

    1. Authenticate with Cloud Credentials: Ensure you have a credential object in place to establish the required connection with your object store.
    2. Configure DUMPFILE and CREDENTIAL Parameters: List export files using DUMPFILE and specify the credential to direct these files to the cloud.
    3. Set Log File Storage: Use the DEFAULT_DIRECTORY parameter to control log file storage. Failing to set up a valid cloud credential for object store URIs will lead to errors in dump file writing.
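
    Putting those three pieces together, here is a hedged expdp sketch; the region, namespace, bucket, and credential name below are placeholders, and the URI shape follows the OCI object storage convention:

    expdp system@FREEPDB1 schemas=hr \
        credential=my_obj_store_cred \
        dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/mybucket/o/exp_hr_%U.dmp \
        default_directory=DATA_PUMP_DIR \
        logfile=exp_hr.log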

    Steps for Cloud Storage Access

    To export files directly to a cloud object store, ensure you have cloud credentials stored in an Oracle Wallet, with the location specified in your sqlnet.ora configuration. You will also need to add a certificate for authentication using orapki:

    orapki wallet add -wallet <path_to_wallet> -trusted_cert -cert <certificate_file> -pwd <wallet_password>

    This setup allows Data Pump to use the CREDENTIAL parameter with the export command, establishing secure access to cloud storage.

    Header Blocks vs. Trailer Blocks

    In Oracle Database versions prior to 23c, Data Pump files used header blocks, where metadata was embedded at the start of each dump file. This format was limited to local storage and required metadata updates during export, which could be resource-intensive.

    With Oracle Database 23ai, trailer blocks are the default. This approach shifts metadata to the end of the file, streamlining the process for cloud storage. By using the VERSION parameter, you can control whether to retain the newer trailer block format or revert to header blocks based on compatibility needs.
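
    For example, to produce a dump that a pre-23ai database can import, you could pin the legacy format via the VERSION parameter (directory and file names below are illustrative):

    expdp hr directory=DATA_PUMP_DIR dumpfile=legacy_format.dmp version=21 schemas=hr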

    Types of Trailer Blocks in Oracle 23ai

    Oracle 23ai introduces two types of trailer blocks for handling Data Pump files:

    • Disk-Based Trailer Blocks: Directly written at the end of each primary table’s data in the dump file, providing a compact storage format.
    • Table-Based Trailer Blocks: Stored externally within the export’s primary table, conserving space by omitting individual trailer blocks for each data object. This approach efficiently manages storage while maintaining essential metadata for cloud-based and local export jobs.

    Trailer blocks make Data Pump in Oracle Database 23ai a flexible and cloud-ready solution, enabling efficient data movement across hybrid and cloud infrastructures. This updated approach supports larger cloud-based workflows by improving performance and storage efficiency for Data Pump exports.

    Hope It Helped!
    Prashant Dixit

    Posted in Uncategorized | Leave a Comment »

    Oracle 23ai DBMS_HCHECK is your new best friend for Data Dictionary health

    Posted by FatDBA on September 22, 2024

    Today, while experimenting with Oracle 23ai new features, I came across a particularly helpful package called DBMS_DICTIONARY_CHECK (known as DBMS_HCHECK in early 23c releases). This tool proves to be both critical and incredibly useful for identifying and resolving data dictionary inconsistencies, which can sometimes be indirectly linked to errors like ORA-00600.

    I tested both options, FULL and CRITICAL, and found them highly effective. Both procedures are quick and perform a deep validation of the data dictionary; they print the results on the screen and also write the output to a trace file on the operating system for further review. The FULL procedure conducts an exhaustive check, while the CRITICAL option focuses on only the most essential checks, making it quicker to execute. One of the most valuable aspects of these procedures is their ability not just to identify issues but also to repair them if you set the repair=>TRUE parameter.

    The results of the execution are reported as:

    CRITICAL: Requires immediate attention and fixing.
    FAIL: High-priority issue that needs resolution.
    WARN: Non-urgent but worth resolving.
    PASS: No issues detected.

    It brings back memories of the days when you had to manually download Oracle’s hcheck.sql script to analyze your data dictionary for known problems. Now, with this feature integrated directly into the database, it makes the process significantly more convenient and efficient.

    Note: In early Oracle 23c releases (up to 23.3), this package was called DBMS_HCHECK; from 23.4 (23ai) onward it is named DBMS_DICTIONARY_CHECK.
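
    If you are unsure which variant your database ships, a quick dictionary query shows the installed package name:

    SELECT owner, object_name
    FROM   dba_objects
    WHERE  object_type = 'PACKAGE'
    AND    object_name IN ('DBMS_HCHECK', 'DBMS_DICTIONARY_CHECK');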

    This integrated approach makes monitoring and maintaining the database’s health much simpler and more automated.

    Below is how you perform a full check on the data dictionary.

    SQL>  set serveroutput on size unlimited
    SQL> execute dbms_dictionary_check.full
    dbms_dictionary_check on 22-SEP-2024 05:04:51
    ----------------------------------------------
    Catalog Version 23.0.0.0.0 (2300000000)
    db_name: FREE
    Is CDB?: YES CON_ID: 3 Container: FREEPDB1
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
                                       Catalog       Fixed
    Procedure Name                     Version    Vs Release    Timestamp
    Result
    ------------------------------ ... ---------- -- ---------- --------------
    ------
    .- OIDOnObjCol                 ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- LobNotInObj                 ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- SourceNotInObj              ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- OversizedFiles              ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- PoorDefaultStorage          ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- PoorStorage                 ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- TabPartCountMismatch        ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- TabComPartObj               ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- Mview                       ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- ValidDir                    ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- DuplicateDataobj            ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
    .- ObjSyn                      ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ObjSeq                      ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ValidateSeg                 ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- IndexSeg                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- IndexPartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- IndexSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- TableSeg                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- TablePartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- TableSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- PartCol                     ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- IndPartObj                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- DuplicateBlockUse           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- FetUet                      ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- Uet0Check                   ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- SeglessUET                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ValidInd                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ValidTab                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- IcolDepCnt                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ObjIndDobj                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- TrgAfterUpgrade             ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ObjType0                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ValidOwner                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- StmtAuditOnCommit           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- PublicObjects               ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- SegFreelist                 ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ValidDepends                ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- CheckDual                   ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ObjectNames                 ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
    .- ChkIotTs                    ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- NoSegmentIndex              ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- NextObject                  ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- DroppedROTS                 ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- FilBlkZero                  ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- DbmsSchemaCopy              ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- IdnseqObj                   ... 2300000000 >  1201000000 09/22 05:04:53 PASS
    .- IdnseqSeq                   ... 2300000000 >  1201000000 09/22 05:04:53 PASS
    .- ObjError                    ... 2300000000 >  1102000000 09/22 05:04:53 PASS
    .- ObjNotLob                   ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- SegNotInDeferredStg         ... 2300000000 >  1102000000 09/22 05:04:53 PASS
    .- SystemNotRfile1             ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- DictOwnNonDefaultSYSTEM     ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- ValidateTrigger             ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- ObjNotTrigger               ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:04:53 PASS
    .- OBJRecycleBin               ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- LobSeg                      ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- ObjLogicalConstraints       ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    .- ValidateFile                ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
    ---------------------------------------
    22-SEP-2024 05:04:53  Elapsed: 2 secs
    ---------------------------------------
    Found 0 potential problem(s) and 0 warning(s)
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
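
    And here is the quicker CRITICAL variant, which validates only the most essential areas; its much shorter output follows:

    SQL> set serveroutput on size unlimited
    SQL> execute dbms_dictionary_check.critical
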
    dbms_dictionary_check on 22-SEP-2024 05:05:17
    ----------------------------------------------
    Catalog Version 23.0.0.0.0 (2300000000)
    db_name: FREE
    Is CDB?: YES CON_ID: 3 Container: FREEPDB1
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
                                       Catalog       Fixed
    Procedure Name                     Version    Vs Release    Timestamp
    Result
    ------------------------------ ... ---------- -- ---------- --------------
    ------
    .- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:05:17 PASS
    .- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:05:17 PASS
    .- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:05:17 PASS
    .- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:05:17 PASS
    ---------------------------------------
    22-SEP-2024 05:05:17  Elapsed: 0 secs
    ---------------------------------------
    Found 0 potential problem(s) and 0 warning(s)
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    dbms_dictionary_check on 22-SEP-2024 05:11:55
    ----------------------------------------------
    Catalog Version 23.0.0.0.0 (2300000000)
    db_name: FREE
    Is CDB?: YES CON_ID: 3 Container: FREEPDB1
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
                                       Catalog       Fixed
    Procedure Name                     Version    Vs Release    Timestamp
    Result
    ------------------------------ ... ---------- -- ---------- --------------
    ------
    .- OIDOnObjCol                 ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- LobNotInObj                 ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- SourceNotInObj              ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- OversizedFiles              ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- PoorDefaultStorage          ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- PoorStorage                 ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- TabPartCountMismatch        ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- TabComPartObj               ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- Mview                       ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- ValidDir                    ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- DuplicateDataobj            ... 2300000000 <=  *All Rel* 09/22 05:11:55 PASS
    .- ObjSyn                      ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ObjSeq                      ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidateSeg                 ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- IndexSeg                    ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- IndexPartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- IndexSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- TableSeg                    ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- TablePartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- TableSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- PartCol                     ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- IndPartObj                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- DuplicateBlockUse           ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- FetUet                      ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- Uet0Check                   ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- SeglessUET                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidInd                    ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidTab                    ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- IcolDepCnt                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ObjIndDobj                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- TrgAfterUpgrade             ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ObjType0                    ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidOwner                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- StmtAuditOnCommit           ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- PublicObjects               ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- SegFreelist                 ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidDepends                ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- CheckDual                   ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ObjectNames                 ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ChkIotTs                    ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- NoSegmentIndex              ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- NextObject                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- DroppedROTS                 ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- FilBlkZero                  ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- DbmsSchemaCopy              ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- IdnseqObj                   ... 2300000000 >  1201000000 09/22 05:11:56 PASS
    .- IdnseqSeq                   ... 2300000000 >  1201000000 09/22 05:11:56 PASS
    .- ObjError                    ... 2300000000 >  1102000000 09/22 05:11:56 PASS
    .- ObjNotLob                   ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- SegNotInDeferredStg         ... 2300000000 >  1102000000 09/22 05:11:56 PASS
    .- SystemNotRfile1             ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- DictOwnNonDefaultSYSTEM     ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidateTrigger             ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ObjNotTrigger               ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:11:56 PASS
    .- OBJRecycleBin               ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- LobSeg                      ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ObjLogicalConstraints       ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    .- ValidateFile                ... 2300000000 <=  *All Rel* 09/22 05:11:56 PASS
    ---------------------------------------
    22-SEP-2024 05:11:56  Elapsed: 1 secs
    ---------------------------------------
    Found 0 potential problem(s) and 0 warning(s)
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
    PL/SQL procedure successfully completed.
    
    SQL>
    
    

    And this is how you can run a quick CRITICAL-only check on the data dictionary.

    SQL> set serveroutput on size unlimited
    SQL> execute dbms_dictionary_check.critical
    dbms_dictionary_check on 22-SEP-2024 05:15:42
    ----------------------------------------------
    Catalog Version 23.0.0.0.0 (2300000000)
    db_name: FREE
    Is CDB?: YES CON_ID: 3 Container: FREEPDB1
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
                                       Catalog       Fixed
    Procedure Name                     Version    Vs Release    Timestamp      Result
    ------------------------------ ... ---------- -- ---------- -------------- ------
    .- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:15:42 PASS
    .- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:15:42 PASS
    .- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:15:42 PASS
    .- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:15:42 PASS
    ---------------------------------------
    22-SEP-2024 05:15:42  Elapsed: 0 secs
    ---------------------------------------
    Found 0 potential problem(s) and 0 warning(s)
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
    PL/SQL procedure successfully completed.
    
    

    It also writes a trace report to your DIAG directory.

    -rw-r-----. 1 oracle oinstall  15K Sep 22 05:15 FREE_ora_3941_DICTCHECK.trm
    -rw-r-----. 1 oracle oinstall  35K Sep 22 05:15 FREE_ora_3941_DICTCHECK.trc
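
    If you're not sure where that trace directory lives on your system, you can ask the database itself. The sketch below queries V$DIAG_INFO, a standard dynamic performance view; the two row names used are the documented ones, but verify them on your release:

    ```sql
    -- 'Diag Trace' is the trace directory for this instance;
    -- 'Default Trace File' is the trace file for the current session.
    SELECT name, value
    FROM   v$diag_info
    WHERE  name IN ('Diag Trace', 'Default Trace File');
    ```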

    And below is how you can even repair issues or inconsistencies in the data dictionary, by passing repair=>TRUE. In my case there aren’t any, but this is how you would do it.

    SQL>
    SQL>
    SQL> EXECUTE dbms_dictionary_check.full(repair=>TRUE)
    dbms_dictionary_check on 22-SEP-2024 05:13:36
    ----------------------------------------------
    Catalog Version 23.0.0.0.0 (2300000000)
    db_name: FREE
    Is CDB?: YES CON_ID: 3 Container: FREEPDB1
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
                                       Catalog       Fixed
    Procedure Name                     Version    Vs Release    Timestamp      Result
    ------------------------------ ... ---------- -- ---------- -------------- ------
    .- OIDOnObjCol                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- LobNotInObj                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- SourceNotInObj              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- OversizedFiles              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- PoorDefaultStorage          ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- PoorStorage                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- TabPartCountMismatch        ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- TabComPartObj               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- Mview                       ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidDir                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- DuplicateDataobj            ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjSyn                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjSeq                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidateSeg                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- IndexSeg                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- IndexPartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- IndexSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- TableSeg                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- TablePartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- TableSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- PartCol                     ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- IndPartObj                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- DuplicateBlockUse           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- FetUet                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- Uet0Check                   ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- SeglessUET                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidInd                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidTab                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- IcolDepCnt                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjIndDobj                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- TrgAfterUpgrade             ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjType0                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidOwner                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- StmtAuditOnCommit           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- PublicObjects               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- SegFreelist                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidDepends                ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- CheckDual                   ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjectNames                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ChkIotTs                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- NoSegmentIndex              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- NextObject                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- DroppedROTS                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- FilBlkZero                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- DbmsSchemaCopy              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- IdnseqObj                   ... 2300000000 >  1201000000 09/22 05:13:36 PASS
    .- IdnseqSeq                   ... 2300000000 >  1201000000 09/22 05:13:36 PASS
    .- ObjError                    ... 2300000000 >  1102000000 09/22 05:13:36 PASS
    .- ObjNotLob                   ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- SegNotInDeferredStg         ... 2300000000 >  1102000000 09/22 05:13:36 PASS
    .- SystemNotRfile1             ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- DictOwnNonDefaultSYSTEM     ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidateTrigger             ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjNotTrigger               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:13:36 PASS
    .- OBJRecycleBin               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- LobSeg                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ObjLogicalConstraints       ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    .- ValidateFile                ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
    ---------------------------------------
    22-SEP-2024 05:13:36  Elapsed: 0 secs
    ---------------------------------------
    Found 0 potential problem(s) and 0 warning(s)
    Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc
    
    PL/SQL procedure successfully completed.
    
    SQL>
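
    To recap, these are the three invocations demonstrated in this post, exactly as run above:

    ```sql
    -- Full run: all dictionary checks, report-only (repair defaults to FALSE)
    EXECUTE dbms_dictionary_check.full

    -- Quick run: only the small subset of critical checks
    EXECUTE dbms_dictionary_check.critical

    -- Full run that also attempts to repair any inconsistencies found
    EXECUTE dbms_dictionary_check.full(repair=>TRUE)
    ```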
    
    

    Hope It Helped!
    Prashant Dixit
