Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


How Oracle 23ai Fixes Bad Plans Instantly.. A Live Demo of Real-Time SPM

Posted by FatDBA on November 23, 2025

Recently, I did a quick round of testing in my lab with Oracle Database 23ai, focusing on its new SQL Plan Management (SPM) enhancements, and the results were impressive.

In 23ai, SPM can actually detect and fix bad execution plans in real time. It monitors a running query, compares the performance of the newly generated plan against the best ones it has seen before (tracked in the Automatic SQL Tuning Set), and if the new plan performs worse, it automatically switches back to the proven baseline.

No evolve task. No manual verification. No DBA intervention.
This feature, Real-Time SQL Plan Management, was tested on an Oracle 23ai lab build (pre-release environment). While 23ai itself isn't generally available on-prem (its successor, Oracle 26ai, now carries these capabilities), the behavior shown here is identical in both 23ai and 26ai, and it is already live and enabled by default in Oracle Autonomous Database.
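
If you want to see how SPM is configured on your own system before trying this, DBA_SQL_MANAGEMENT_CONFIG lists the relevant knobs. A quick sketch (the exact parameter names, e.g. AUTO_SPM_EVOLVE_TASK, can vary a bit between releases):

-- What SPM-related settings are currently in effect on this database?
SELECT parameter_name, parameter_value
FROM   dba_sql_management_config
ORDER  BY parameter_name;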

Let’s see it live!!!

SET ECHO ON FEEDBACK ON HEADING ON LINES 300 PAGES 500 TIMING ON
SET SERVEROUTPUT ON
SET LONG 2000000 LONGCHUNKSIZE 2000000 TRIMSPOOL ON

PROMPT === Create tables
CREATE TABLE fact_sales (
  sales_id     NUMBER GENERATED BY DEFAULT AS IDENTITY,
  prod_id      NUMBER NOT NULL,
  cust_id      NUMBER NOT NULL,
  date_id      DATE   NOT NULL,
  channel_id   NUMBER NOT NULL,
  qty          NUMBER NOT NULL,
  amount       NUMBER NOT NULL
);

CREATE TABLE dim_product  ( prod_id NUMBER PRIMARY KEY, category VARCHAR2(40), brand VARCHAR2(40) );
CREATE TABLE dim_customer ( cust_id NUMBER PRIMARY KEY, region   VARCHAR2(30), segment VARCHAR2(30) );
CREATE TABLE dim_channel  ( channel_id NUMBER PRIMARY KEY, channel_name VARCHAR2(30) );

Table created.
Table created.
Table created.
Table created.



PROMPT === Load dimensions
DECLARE
BEGIN
  FOR i IN 1..100000 LOOP
    INSERT INTO dim_product VALUES(i, CASE WHEN MOD(i,10)=0 THEN 'Premium' ELSE 'Standard' END, 'Brand_'||MOD(i,200));
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_product rows inserted: '||TO_CHAR(100000));

  FOR i IN 1..500000 LOOP
    INSERT INTO dim_customer VALUES(i,
      CASE MOD(i,5) WHEN 0 THEN 'NA' WHEN 1 THEN 'EU' WHEN 2 THEN 'APAC' WHEN 3 THEN 'ME' ELSE 'LATAM' END,
      CASE MOD(i,4) WHEN 0 THEN 'ENT' WHEN 1 THEN 'SMB' WHEN 2 THEN 'MID' ELSE 'CONSUMER' END);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_customer rows inserted: '||TO_CHAR(500000));

  FOR i IN 1..6 LOOP
    INSERT INTO dim_channel VALUES(i, 'CH_'||i);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_channel rows inserted: 6');

  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Commit complete.');
END;
/
SHOW ERRORS

dim_product rows inserted: 100000
dim_customer rows inserted: 500000
dim_channel rows inserted: 6
Commit complete.
PL/SQL procedure successfully completed.





PROMPT === Load fact 
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 0..59 LOOP  -- 60 days x 80k = ~4.8M
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/
SHOW ERRORS

fact_sales rows inserted:  4,800,000
PL/SQL procedure successfully completed.






PROMPT === Indexes + Stats
CREATE INDEX fs_prod    ON fact_sales(prod_id);
CREATE INDEX fs_cust    ON fact_sales(cust_id);
CREATE INDEX fs_date    ON fact_sales(date_id);
CREATE INDEX fs_channel ON fact_sales(channel_id);

Index created.
Index created.
Index created.
Index created.




EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_PRODUCT',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CUSTOMER',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CHANNEL',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'FACT_SALES',CASCADE=>TRUE);

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

In this first stage of my testing, I created a mini data warehouse environment inside Oracle 23ai to simulate a realistic analytical workload. I built three dimension tables (DIM_PRODUCT, DIM_CUSTOMER, DIM_CHANNEL) and one large fact table (FACT_SALES) containing roughly 4.8 million rows of transactional data spread across 60 business days.

After loading the data, I collected fresh optimizer statistics on all tables and their indexes to ensure that the Oracle Optimizer starts with a clear and accurate view of data distribution before I begin the query and plan testing phase. This baseline setup is essential because, in the next steps, I'll demonstrate how Real-Time SQL Plan Management (SPM) reacts when execution plans change, both for the better and for the worse.


-- What date range do we have?
SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 01-MAR-22



-- Rows per month (quick sanity check)
SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03      80000

3 rows selected.



-- Let's add the remaining days so Apr to Jun exist (d = 60..180 --> Mar 2 .. Jun 30)
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 60..180 LOOP
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Additional fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/

SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 30-JUN-22


SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03    2480000
2022-04    2400000
2022-05    2480000
2022-06    2400000







VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- Optional: flush shared pool to force a fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;

-- Run the query
SELECT p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id = f.prod_id
JOIN   dim_customer c ON c.cust_id = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4

2 rows selected.

Elapsed: 00:00:21.33


-- 0) (optional but recommended) collect row-source stats
ALTER SESSION SET statistics_level = ALL;

-- 1) run the query with a gather hint (guarantees ALLSTATS LAST works)
VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)
EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

ALTER SYSTEM FLUSH SHARED_POOL;

SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4




-- get the SQL_ID & CHILD_NUMBER of the query you just ran
COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
2t8da8zxfzp50            0      1151134654 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id          => '2t8da8zxfzp50',
         cursor_child_no => 0,
         format          => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2t8da8zxfzp50, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category,
c.region,        SUM(f.amount) AS rev FROM   fact_sales f JOIN
dim_product  p ON p.prod_id  = f.prod_id JOIN   dim_customer c ON
c.cust_id  = f.cust_id JOIN   dim_channel  ch ON ch.channel_id =
f.channel_id WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                 AND TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category =
'Premium'   AND  c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 1151134654

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |              |      1 |        |       |      2 |00:00:05.79 |   59198 |       |       |          |
|   1 |  HASH GROUP BY                          |              |      1 |      2 |   102 |      2 |00:00:05.79 |   59198 |  1200K|  1200K|  654K (0)|
|*  2 |   HASH JOIN                             |              |      1 |  36018 |  1793K|    145K|00:00:05.75 |   59198 |    23M|  4013K|   26M (0)|
|*  3 |    HASH JOIN                            |              |      1 |  36018 |  1477K|    363K|00:00:05.26 |   57865 |  1995K|  1995K| 3065K (0)|
|   4 |     VIEW                                | VW_GBF_17    |      1 |  50000 |   390K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  5 |      FILTER                             |              |      1 |        |       |  10000 |00:00:00.01 |     396 |       |       |          |
|*  6 |       TABLE ACCESS FULL                 | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  7 |     HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.33 |   57469 |  1922K|  1922K| 1411K (0)|
|*  8 |      TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |       |       |          |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.70 |   57467 |       |       |          |
|* 10 |       INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:00.97 |   17336 |       |       |          |
|* 11 |    TABLE ACCESS FULL                    | DIM_CUSTOMER |      1 |    200K|  1757K|    200K|00:00:00.07 |    1333 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$B734A6F8
   4 - SEL$FB6C8AF9 / VW_GBF_17@SEL$393E5B1C
   5 - SEL$FB6C8AF9
   6 - SEL$FB6C8AF9 / P@SEL$1
   8 - SEL$B734A6F8 / CH@SEL$3
   9 - SEL$B734A6F8 / F@SEL$1
  10 - SEL$B734A6F8 / F@SEL$1
  11 - SEL$B734A6F8 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$FB6C8AF9")
      ELIM_GROUPBY(@"SEL$2B4A2561")
      OUTLINE_LEAF(@"SEL$B734A6F8")
      PLACE_GROUP_BY(@"SEL$EE94F965" ( "P"@"SEL$1" ) 17)
      OUTLINE(@"SEL$2B4A2561")
      ELIM_GROUPBY(@"SEL$85CD04AE")
      OUTLINE(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$85CD04AE")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$393E5B1C")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$B734A6F8" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$B734A6F8" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$B734A6F8" "F"@"SEL$1")
      NO_ACCESS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      FULL(@"SEL$B734A6F8" "C"@"SEL$2")
      LEADING(@"SEL$B734A6F8" "CH"@"SEL$3" "F"@"SEL$1" "VW_GBF_17"@"SEL$393E5B1C" "C"@"SEL$2")
      USE_HASH(@"SEL$B734A6F8" "F"@"SEL$1")
      USE_HASH(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH(@"SEL$B734A6F8" "C"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH_AGGREGATION(@"SEL$B734A6F8")
      FULL(@"SEL$FB6C8AF9" "P"@"SEL$1")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C"."CUST_ID"="F"."CUST_ID")
   3 - access("ITEM_1"="F"."PROD_ID")
   5 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   6 - filter("P"."CATEGORY"='Premium')
   7 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   8 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
  10 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  11 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT"*"ITEM_2")[22]
   2 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   4 - (rowset=256) "ITEM_1"[NUMBER,22], "ITEM_2"[NUMBER,2]
   5 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   6 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   7 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22],
       "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   8 - "CH"."CHANNEL_ID"[NUMBER,22]
   9 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
  10 - "F".ROWID[ROWID,10]
  11 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - this is an adaptive plan


122 rows selected.



Here I just set the reporting window using bind variables and flushed the shared pool to make sure Oracle parses the query fresh. The optimizer picked a clean and efficient plan: an index range scan on FS_DATE to grab only the required time slice from FACT_SALES, followed by hash joins to the small dimension tables and a hash group by at the end.

All the filters from the WHERE clause show up correctly in the predicate section, and the runtime stats (about 59k buffer gets, ~5.8 seconds elapsed, zero physical reads) confirm it's a solid in-memory run. This is now my "good plan", the one Real-Time SPM will capture as the accepted baseline (we'll see it in DBA_SQL_PLAN_BASELINES shortly) and automatically revert to if a slower plan shows up later.

VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- 1) regress stats on DIM_CUSTOMER
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER,'DIM_CUSTOMER');

PL/SQL procedure successfully completed.


-- 2) force fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;
System altered.


-- 3) run the query (convert binds to DATEs)
SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category, c.region, SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4



COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
7v44tqagjad7k            0      2441995847 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '7v44tqagjad7k', 
         cursor_child_no => 0,
         format => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7v44tqagjad7k, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category, c.region,
SUM(f.amount) AS rev FROM   fact_sales f JOIN   dim_product  p ON
p.prod_id  = f.prod_id JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id WHERE  f.date_id
BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')                      AND
TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category = 'Premium'   AND
c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 2441995847

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |              |      1 |        |       |      2 |00:00:09.07 |   59199 |  14384 |  14384 |       |       |          |         |
|   1 |  HASH GROUP BY                           |              |      1 |  36018 |  2743K|      2 |00:00:09.07 |   59199 |  14384 |  14384 |  1200K|  1200K|  661K (0)|         |
|*  2 |   FILTER                                 |              |      1 |        |       |    145K|00:00:09.04 |   59199 |  14384 |  14384 |       |       |          |         |
|*  3 |    HASH JOIN                             |              |      1 |  36018 |  2743K|    145K|00:00:09.02 |   59199 |  14384 |  14384 |  2402K|  2402K| 3124K (0)|         |
|*  4 |     TABLE ACCESS FULL                    | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN                            |              |      1 |  40000 |  2500K|   1454K|00:00:08.67 |   58803 |  14384 |  14384 |   234M|    14M|   40M (1)|     116M|
|*  6 |      HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.60 |   57469 |      0 |      0 |  1922K|  1922K| 1348K (0)|         |
|*  7 |       TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |      0 |      0 |       |       |          |         |
|   8 |       TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.83 |   57467 |      0 |      0 |       |       |          |         |
|*  9 |        INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:01.04 |   17336 |      0 |      0 |       |       |          |         |
|* 10 |      TABLE ACCESS FULL                   | DIM_CUSTOMER |      1 |    212K|  6229K|    200K|00:00:00.05 |    1334 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$EE94F965
   4 - SEL$EE94F965 / P@SEL$1
   7 - SEL$EE94F965 / CH@SEL$3
   8 - SEL$EE94F965 / F@SEL$1
   9 - SEL$EE94F965 / F@SEL$1
  10 - SEL$EE94F965 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$EE94F965" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$EE94F965" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$EE94F965" "F"@"SEL$1")
      FULL(@"SEL$EE94F965" "C"@"SEL$2")
      FULL(@"SEL$EE94F965" "P"@"SEL$1")
      LEADING(@"SEL$EE94F965" "CH"@"SEL$3" "F"@"SEL$1" "C"@"SEL$2" "P"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "F"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "C"@"SEL$2")
      USE_HASH(@"SEL$EE94F965" "P"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$EE94F965" "P"@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$EE94F965")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   3 - access("P"."PROD_ID"="F"."PROD_ID")
   4 - filter("P"."CATEGORY"='Premium')
   5 - access("C"."CUST_ID"="F"."CUST_ID")
   6 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   7 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
   9 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  10 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT")[22]
   2 - "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   4 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   5 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30],
       "C"."REGION"[VARCHAR2,30]
   6 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   7 - "CH"."CHANNEL_ID"[NUMBER,22]
   8 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   9 - "F".ROWID[ROWID,10]
  10 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan


110 rows selected.







-- Real-Time SPM evidence
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SESSION SET optimizer_use_sql_plan_baselines     = TRUE;

COLUMN last_verified            FORMAT A19
COLUMN foreground_last_verified FORMAT A19
COLUMN accepted                 FORMAT A8
COLUMN enabled                  FORMAT A8
COLUMN origin                   FORMAT A20
COLUMN sql_handle               FORMAT A20
COLUMN plan_name                FORMAT A25

SELECT sql_handle,
       plan_name,
       origin,
       accepted,
       enabled,
       TO_CHAR(last_verified,'YYYY-MM-DD HH24:MI:SS')            AS last_verified,
       TO_CHAR(foreground_last_verified,'YYYY-MM-DD HH24:MI:SS') AS fg_verified
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SUM(f.amount)%'
ORDER  BY last_verified DESC;

SQL_HANDLE           PLAN_NAME                 ORIGIN               ACCEPTED ENABLED LAST_VERIFIED        FG_VERIFIED
-------------------- ------------------------- -------------------- -------- ------- -------------------- --------------------
SQL_bf3d9f8a42       SQL_PLAN_2t8da8zxfzp50    AUTO-CAPTURE         YES      YES     2025-11-24 14:06:19  2025-11-24 14:06:19
SQL_bf3d9f8a42       SQL_PLAN_7v44tqagjad7k    FOREGROUND-CAPTURE   NO       NO      2025-11-24 14:06:18  2025-11-24 14:06:18

-- SQL_PLAN_2t8da8zxfzp50 ---> the good baseline plan, automatically captured, accepted and enabled by Real-Time SPM.

-- SQL_PLAN_7v44tqagjad7k ----> the regressed plan, captured during foreground execution but rejected (ACCEPTED=NO) as it underperformed compared to the verified baseline.




In the good baseline plan (SQL_ID 2t8da8zxfzp50, plan hash 1151134654), the optimizer uses an FS_DATE index range scan with batched ROWID fetches from FACT_SALES, then hash-joins (F ↔ VW_GBF_17(P) ↔ C) with a tiny full scan of CH (3 rows), and finishes with a hash group by. It's fully in-memory: about 59,198 buffers, ~5.79s elapsed, 0 reads, 0 temp, with predicates correctly slicing DATE_ID. This is the efficient baseline. After deleting stats on DIM_CUSTOMER, the regressed plan (SQL_ID 7v44tqagjad7k, plan hash 2441995847) keeps the same general shape and the FS_DATE index range scan, but row-source sizing changes due to misestimates; it still hash-joins and groups, yet now needs ~59,199 buffers, ~9.07s, ~14,384 reads, and ~116MB of temp (dynamic sampling level=2). In other words, it's the same outline, but degraded cardinality estimates drive extra I/O and temp usage, about a 56% slowdown …
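
To put hard numbers side by side, I also like to pull the two cursors straight from V$SQL. A simple sketch, reusing the SQL_IDs captured above:

-- Side-by-side runtime comparison of the two cursors
SELECT sql_id,
       plan_hash_value,
       executions,
       ROUND(elapsed_time/1e6,2) AS elapsed_sec,
       buffer_gets,
       disk_reads
FROM   v$sql
WHERE  sql_id IN ('2t8da8zxfzp50','7v44tqagjad7k');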

The numbers tell the story: Oracle tried the new plan once, learned it was bad, and reverted. This isn't magic or marketing. It's just Oracle 23ai doing what every good DBA does: watching, measuring, and reverting fast when things go south. Real-Time SPM uses the same cost and runtime data we always trusted from AWR, but now the optimizer reacts within seconds instead of waiting for us.

On large production systems, it won’t prevent every performance issue, but it dramatically reduces the window where a regression hurts. It’s safe to enable, requires no extra license beyond Enterprise Edition, and it works right out of the box in Autonomous DB.
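
If you want to see the runtime evidence the comparison is based on, the Automatic SQL Tuning Set is queryable like any other tuning set. A rough sketch (SYS_AUTO_STS is the auto-maintained set on recent releases; adjust the SQL_IDs to your own):

-- What the Automatic SQL Tuning Set has recorded for our statement
SELECT sqlset_name, sql_id, plan_hash_value,
       executions, elapsed_time, buffer_gets
FROM   dba_sqlset_statements
WHERE  sqlset_name = 'SYS_AUTO_STS'
AND    sql_id IN ('2t8da8zxfzp50','7v44tqagjad7k');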

In Short

  • 23ai on-prem Enterprise Edition and Autonomous DB both support Real-Time SPM.
  • It compares runtime metrics from the Automatic SQL Tuning Set.
  • When a new plan runs slower, Oracle reinstates the previous good baseline automatically.
  • You’ll see it immediately in DBA_SQL_PLAN_BASELINES with ORIGIN = 'FOREGROUND-CAPTURE'.
  • You can watch it happen live through execution plans and elapsed times.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/


From Painful Manual LOB Shrink to Automatic SecureFiles Shrink

Posted by FatDBA on November 15, 2025

I’ve been working with LOBs for years now, and trust me, shrinking them has always been a headache. Anyone who has ever tried ALTER TABLE … SHRINK SPACE on a big SecureFiles LOB knows the pain … blocking sessions, unexpected waits, and sometimes that lovely ORA-1555 popping up at the worst time. Every DBA eventually gets into that situation where a LOB segment is 200 GB on disk but only 10 GB of real data remains. You delete rows… but the space never comes back unless you manually shrink it, which itself can cause more issues.
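
For contrast, this is roughly what the manual route looks like, the thing this feature saves you from. A sketch with hypothetical table/column/tablespace names; the exact restrictions depend on your version and LOB settings:

-- Option 1: shrink the LOB segment in place (can block DML and run for a long time)
ALTER TABLE big_docs MODIFY LOB (doc_body) (SHRINK SPACE);

-- Option 2: rebuild the LOB into a fresh segment (needs extra space while it copies)
ALTER TABLE big_docs MOVE LOB (doc_body) STORE AS SECUREFILE (TABLESPACE lobts);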

With the introduction of Automatic SecureFiles Shrink, Oracle really made a DBA’s life easier. This feature, which first came out in 23ai, quietly frees up unused LOB space in the background without disrupting your workload. I wanted to see how it behaves in a real scenario, so I set up a small lab and tested it out. Here’s the whole experiment, raw and simple.

Let's do a demo and understand how this new feature works. I spun up a fresh PDB and made a small tablespace just for this test. Nothing fancy.

CREATE TABLESPACE lobts
  DATAFILE '/u02/oradata/LOBTS01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M;

Tablespace created.



-- Table with a securefile LOB based column.
CREATE TABLE tst_securefile_lob
(
    id NUMBER,
    lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE (
    TABLESPACE lobts
    CACHE
);

Table created.




SELECT table_name, column_name, securefile
FROM   user_lobs
WHERE  table_name='TST_SECUREFILE_LOB';

TABLE_NAME           COLUMN_NAME   SECUREFILE
-------------------  ------------  ----------
TST_SECUREFILE_LOB   LOB_DATA      YES


Next, I inserted a good amount of junk data around 10,000 rows of random CLOB strings. I wanted the LOB segment to be big enough to see clear differences after shrink.

BEGIN
  FOR r IN 1 .. 10 LOOP
    INSERT INTO tst_securefile_lob (id, lob_data)
    SELECT r*100000 + level,
           TO_CLOB(DBMS_RANDOM.STRING('x', 32767))
    FROM dual
    CONNECT BY level <= 1000;
    COMMIT;
  END LOOP;
END;
/

PL/SQL procedure successfully completed.



SELECT COUNT(*) FROM tst_securefile_lob;

  COUNT(*)
----------
     10000




SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE  ul.table_name = 'TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$     131072     1024.00




-- After stats and a quick check in USER_SEGMENTS, the LOB segment was showing a nice chunky size. 
-- Then I deleted almost everything
-- Now the table will have very few rows left, but the LOB segment was still the same size. As usual.
DELETE FROM tst_securefile_lob
WHERE id < 900000;
COMMIT;

9990 rows deleted. 
Commit complete.
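
Before checking the LOB's internal usage, one note: show_securefile_space used below is not a built-in, it's just a tiny helper I keep around that wraps the SecureFiles overload of DBMS_SPACE.SPACE_USAGE. A minimal sketch of it (run with SERVEROUTPUT on):

-- Minimal helper: dump SecureFiles space usage for a LOB segment
CREATE OR REPLACE PROCEDURE show_securefile_space (
    p_owner   IN VARCHAR2,
    p_segment IN VARCHAR2)
AS
  l_seg_blocks   NUMBER; l_seg_bytes   NUMBER;
  l_used_blocks  NUMBER; l_used_bytes  NUMBER;
  l_exp_blocks   NUMBER; l_exp_bytes   NUMBER;
  l_unexp_blocks NUMBER; l_unexp_bytes NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
      segment_owner       => p_owner,
      segment_name        => p_segment,
      segment_type        => 'LOB',
      segment_size_blocks => l_seg_blocks,
      segment_size_bytes  => l_seg_bytes,
      used_blocks         => l_used_blocks,
      used_bytes          => l_used_bytes,
      expired_blocks      => l_exp_blocks,
      expired_bytes       => l_exp_bytes,
      unexpired_blocks    => l_unexp_blocks,
      unexpired_bytes     => l_unexp_bytes);
  DBMS_OUTPUT.PUT_LINE('Segment blocks      = '||l_seg_blocks  ||'  bytes='||l_seg_bytes);
  DBMS_OUTPUT.PUT_LINE('Used blocks         = '||l_used_blocks ||'  bytes='||l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Expired blocks      = '||l_exp_blocks  ||'  bytes='||l_exp_bytes);
  DBMS_OUTPUT.PUT_LINE('Unexpired blocks    = '||l_unexp_blocks||'  bytes='||l_unexp_bytes);
END show_securefile_space;
/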


-- Checking LOB Internal Usage (Before Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 131072  bytes=1073741824
Used blocks         =  10240  bytes=83886080
Expired blocks      = 110592  bytes=905969664
Unexpired blocks    =  10240  bytes=83886080
-- This clearly shows almost the entire segment is expired/free but not reclaimed.


-- Checking Auto Shrink Statistics (Before Enabling)
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE '%Auto SF SHK%'
ORDER  BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               0
Auto SF SHK successful                       0
Auto SF SHK total number of tasks            0

Turning on Automatic Shrink — By default this feature is OFF, so I enabled it:

EXEC DBMS_SPACE.SECUREFILE_SHRINK_ENABLE;

PL/SQL procedure successfully completed.

That’s literally it. No parameters, no tuning, nothing else. Just enable.

Automatic SecureFiles Shrink does not run immediately after you delete data. Oracle requires that a SecureFiles LOB segment be idle for a specific amount of time before it becomes eligible for shrinking, and the default idle-time limit is 1,440 minutes (24 hours). “Idle” means that no DML or preallocation activity has occurred on that LOB during that period. Once the segment meets this condition, Oracle considers it during its automatic background shrink task, which is part of AutoTask and runs every 30 minutes with a defined processing window.

When the task executes, it attempts to shrink eligible segments, but it does so gradually and in small increments using a trickle-based approach, rather than reclaiming all possible space in a single operation. This incremental behavior is deliberate: it reduces impact on running workloads and avoids heavy reorganization all at once. Only segments that meet all selection criteria .. such as having sufficient free space above the defined thresholds and not using RETENTION MAX … are processed. Because of this incremental design and the eligibility rules, the full space reclamation process can span multiple background cycles.

Some of the internal hidden/underscore parameters that control these limits (don't touch them unless Oracle Support asks you to, or you are working on a lab system):

Parameter                                    Default_Value    Session_Value    Instance_Value   IS_SESSION_MODIFIABLE   IS_SYSTEM_MODIFIABLE
------------------------------------------- ---------------  ---------------- ---------------- ----------------------- ---------------------
_ktsls_autoshrink_seg_idle_seconds            86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_seg_pen_seconds             86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_trickle_mb                  5                5                5                FALSE                   IMMEDIATE

Okay, let's check the post-change outputs and see what automatic LOB shrink did to our test object.

SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE ul.table_name='TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$      40960      320.00

Note: From ~1024 MB down to ~320 MB. Auto shrink worked     🙂



-- DBMS_SPACE Usage (After Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 40960  bytes=335544320
Used blocks         =  9216  bytes=75497472
Expired blocks      =  6144  bytes=50331648
Unexpired blocks    =  9216  bytes=75497472

Note: Expired blocks dropped from ~110k --> ~6k. This confirms auto shrink freed most of the fragmented space.






-- After the task is run.
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%Auto SF SHK%'
ORDER BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               1
Auto SF SHK successful                       1
Auto SF SHK total number of tasks            1




SELECT owner,
       segment_name,
       shrunk_bytes,
       attempts,
       last_shrink_time
FROM v$securefile_shrink;

OWNER      SEGMENT_NAME               SHRUNK_BYTES    ATTEMPTS   LAST_SHRINK_TIME
---------  ------------------------   -------------   ---------  ---------------------------
PRASHANT   SYS_LOB0001234567C00002$      744947712          1     14-NOV-2025 06:32:15

Note:  Oracle automatically reclaimed ~710 MB of wasted LOB space.

This feature is simple, it's safe, and it saves DBAs from doing manual shrink maintenance again and again. It's not a fast feature; it's slow and polite on purpose, but it works exactly as expected.

If your system has LOBs (EBS attachments, documents, JSON, logs, media files, etc.), you should absolutely enable this. Let Oracle handle the boring part.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/


MySQL OPTIMIZE TABLE – Disk Space Reclaim Defragmentation and Common Myths

Posted by FatDBA on August 8, 2025

When working with MySQL databases, one common task is reclaiming disk space and defragmenting tables. The typical solution that most of us have turned to is OPTIMIZE TABLE. While this sounds like a simple, quick fix, there are a few myths and things we often overlook that can lead to confusion. Let’s break it down.

The Basics: OPTIMIZE TABLE and ALTER TABLE

To reclaim space or defragment a table in MySQL, the go-to commands are usually:

  • OPTIMIZE TABLE <table_name>; or OPTIMIZE TABLE <table_name_1>, <table_name_2>; or, from the shell, sudo mysqlcheck -o <schema> <table> -u <username> -p

But before we dive into the myths, let’s clarify what happens when you run these commands.

OPTIMIZE TABLE Overview

  • OPTIMIZE TABLE is essentially a shorthand for ALTER TABLE <table_name> ENGINE=InnoDB for InnoDB tables. It works by rebuilding the table, compacting data, and reclaiming unused space.
  • In MySQL 5.6.17 and later, the command works online, meaning it allows concurrent reads and writes during the rebuild, with some exceptions (brief locking during initial and final stages). Prior to 5.6.17, the table was locked for the entire duration of the operation, causing application downtime.
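
If you prefer spelling the rebuild out yourself, the same effect can be requested through ALTER TABLE with the online DDL clauses. A sketch against a hypothetical table (online DDL, i.e. MySQL 5.6 and later; LOCK=NONE makes the statement fail with an error instead of quietly blocking writers if a fully online rebuild isn't possible):

-- Explicit InnoDB rebuild, equivalent in effect to OPTIMIZE TABLE
ALTER TABLE my_large_table ENGINE=InnoDB, ALGORITHM=INPLACE, LOCK=NONE;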

Myth #1: OPTIMIZE TABLE Is Always Quick

  • No: OPTIMIZE TABLE can indeed take a long time for large tables, especially if there are a lot of inserts, deletes, or updates. This is true when rebuilding the table. For larger datasets, the I/O load can be significant.
mysql> OPTIMIZE TABLE my_large_table;
+---------------------+----------+----------+-----------------+
| Table               | Op       | Msg_type | Msg_text        |
+---------------------+----------+----------+-----------------+
| mydb.my_large_table | optimize | ok       | Table optimized |
+---------------------+----------+----------+-----------------+

In the output, the values under each column heading would show:

  • Table: The table that was optimized (e.g., yourdb.customers).
  • Op: The operation performed (optimize).
  • Msg_type: Type of message, usually status.
  • Msg_text: The result of the operation, such as OK or a specific message (e.g., “Table is already up to date”).

If the table is already optimized or doesn’t require optimization, the output might look like this:

+------------------+----------+----------+-----------------------------+
| Table            | Op       | Msg_type | Msg_text                    |
+------------------+----------+----------+-----------------------------+
| yourdb.customers | optimize | note     | Table is already up to date |
+------------------+----------+----------+-----------------------------+


Real-Time Example Validation:

  • MySQL logs can show something like this:

[Note] InnoDB: Starting online optimize table my_large_table
[Note] InnoDB: Table optimized successfully

However, for larger tables, it is critical to consider the additional I/O load during the rebuild. For example:
[Note] InnoDB: Rebuilding index my_large_table_idx
[Note] InnoDB: Table rebuild completed in 300 seconds

Note: To get more detailed information, it's good to check the PROCESSLIST or the slow query log (if enabled).

Myth #2: OPTIMIZE TABLE Doesn’t Block Other Operations

  • Yes/No: This myth is partly true and partly false depending on the MySQL version.
  • For MySQL 5.5 and earlier: The table is locked for writes, but concurrent reads are allowed.
  • For MySQL 5.6.16 and earlier: Same as above .. concurrent reads are allowed, but writes are blocked.
  • For MySQL 5.6.17 and later: Concurrent reads and writes are allowed during the rebuild process, but the table still needs to be briefly locked during the initial and final phases. There is a brief lock required to start the process, which is often overlooked.

Real-Time Example for MySQL 5.6.17+:

[Note] InnoDB: Starting online optimize table my_large_table
[Note] InnoDB: Table optimized successfully

Although reads and writes are allowed during this process, you might still experience short bursts of lock at the start and end of the operation.

Myth #3: You Don’t Need to Worry About Disk Space

  • No: You need sufficient disk space before running OPTIMIZE TABLE. If you’re running low on space, you could encounter errors or performance issues during the rebuild process.
  • There are a few known bugs that can surface when disk space is insufficient. Additionally, temporary disk space is required during the rebuild process, and running OPTIMIZE TABLE with insufficient space could fail silently, leading to issues down the line.

Best Practice:
Ensure that your disk has at least as much free space as the table you’re optimizing, as a copy of the table is created temporarily during the rebuild.
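
A quick way to see both how big a table currently is and how much fragmented space a rebuild might give back is information_schema.tables. A sketch with a hypothetical schema name (DATA_FREE is only an approximation for InnoDB, but it's good enough to spot obvious fragmentation):

-- Current size and reclaimable (fragmented) space per table
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024)                     AS reclaimable_mb
FROM   information_schema.tables
WHERE  table_schema = 'mydb'
ORDER  BY data_free DESC;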

Myth #4: ALTER TABLE with Row Format Is Always the Solution

  • No: ALTER TABLE ... ROW_FORMAT=COMPRESSED or other formats can help optimize space, but it may not always result in savings, especially for certain data types (like BLOBs or large text fields). It can also introduce overhead on the CPU if you’re using compression.

In some cases, switching to a compressed format can actually increase the size of the table, depending on the type of data stored.

Real-Time Example:

  • For a table like customer_data: ALTER TABLE customer_data ROW_FORMAT=COMPRESSED; Depending on the types of columns and data (e.g., BLOBs or TEXT), compression might not always yield the expected results.
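
If you do experiment with compression, keep an eye on how often InnoDB actually manages to compress pages; a poor success rate means you're paying the CPU overhead for little benefit. A quick sketch using the standard compression counters:

-- Compression efficiency per page size: a low ok_pct means many pages
-- fail to compress and have to be split and re-compressed (extra CPU).
SELECT page_size,
       compress_ops,
       compress_ops_ok,
       ROUND(100 * compress_ops_ok / NULLIF(compress_ops, 0), 1) AS ok_pct
FROM   information_schema.INNODB_CMP;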

Myth #5: You Only Need to Optimize Tables When They Get Slow

  • No: This is another common misconception. Regular optimization is crucial to ensure long-term performance, especially for heavily modified tables. Tables that undergo a lot of updates or deletions can become fragmented over time, even without obvious performance degradation.

Optimizing periodically can help prevent gradual performance loss.

Real-Time Example:

  • If you have an orders table: mysql> OPTIMIZE TABLE orders; Over time, especially with frequent UPDATE or DELETE operations, fragmentation can slow down access, even if it’s not immediately noticeable.

Main Pointers ..

  • OPTIMIZE TABLE is a helpful tool but not a one-size-fits-all solution.
  • It requires sufficient disk space and careful consideration of your MySQL version and storage engine (InnoDB vs. MyISAM).
  • In MySQL 5.6.17 and later, online optimizations are possible, but brief locking still occurs during the process.
  • For MyISAM tables, there’s no escaping the full lock during optimization.
  • Always assess the potential overhead (I/O and CPU usage) before running the operation, especially on larger datasets.

By breaking these myths, you can make better decisions when using OPTIMIZE TABLE to keep your database healthy without causing unnecessary downtime or performance hits.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


fatdba explores Vector Search in Oracle 23ai

Posted by FatDBA on July 23, 2025

So Oracle rolled out 23ai a while back, and like every major release, it came packed with some really cool and interesting features. One that definitely caught my eye was Vector Search. I couldn't resist diving in… recently I explored it in depth and would like to share a few thoughts on the subject.

You see, we’ve been doing LIKE '%tax policy%' since forever. But now, Oracle’s SQL has become more powerful. Not only does it match words … it matches meaning.

So here’s me trying to explain what vector search is, how Oracle does it, why you’d care, and some examples that’ll hopefully make it click.

What’s Vector Search, Anyway?

Alright, imagine this:

You have a table of products. You search for “lightweight laptop for travel”.
Some entries say “ultrabook”, others say “portable notebook”, and none mention “lightweight” or “travel”. Old-school SQL would’ve said: “No Matches Found”

But with vector search, it gets it. Oracle turns all that text into math .. basically, a long list of numbers called a vector … and compares meanings instead of words.

So What’s a Vector?

When we say “vector” in vector search, we’re not talking about geometry class. In the world of AI and databases, a vector is just a long list of numbers … each number representing some aspect or feature of the original input (like a sentence, product description, image, etc.).

Here’s a basic example:
[0.12, -0.45, 0.88, …, 0.03]
This is a vector … maybe a 512 or 1536-dimension one .. depending on the embedding model used (like OpenAI, Oracle’s built-in model, Cohere, etc.).

Each number in this list is abstract, but together they represent the essence or meaning of your data.

Let’s say you have these two phrases:
“Apple is a tech company”
“iPhone maker based in California”

Now, even though they don’t share many words, they mean nearly the same thing. When passed through an embedding model, both phrases are converted into vectors:

Vector A: [0.21, -0.32, 0.76, …, 0.02]
Vector B: [0.22, -0.30, 0.74, …, 0.01]

They look very close … and that’s exactly the point.
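
For the record, "close" here is usually measured with cosine similarity, which only looks at the angle between the two vectors, not their length:

  cos_sim(A, B) = (A · B) / (||A|| * ||B||)

That's the dot product of the two vectors divided by the product of their magnitudes. Values near 1 mean the two pieces of text are about the same thing; values near 0 mean they're unrelated. This is the math the database does for you when you ask for the nearest vectors.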

What Oracle 23ai Gives You

  • A new VECTOR datatype (yeah!)
  • AI_VECTOR() function to convert text into vectors
  • VECTOR_INDEX to make search blazing fast
  • VECTOR_DISTANCE() to measure similarity
  • It’s all native in SQL ..no need for another vector DB bolted on

Let’s Build Something Step-by-Step

We’ll build a simple product table and do a vector search on it.

Step 1: Create the table

CREATE TABLE products (
  product_id     NUMBER PRIMARY KEY,
  product_name   VARCHAR2(100),
  description    VARCHAR2(1000),
  embedding      VECTOR(1536)
);

1536? Yeah, that’s the number of dimensions from Oracle’s built-in embedding model. Depends on which one you use.

Step 2: Generate vector embeddings

UPDATE products
SET embedding = ai_vector('text_embedding', description);

This’ll take the description, pass it through Oracle’s AI model, and give you a vector. Magic.

Step 3: Create the vector index

CREATE VECTOR INDEX product_vec_idx
ON products (embedding)
WITH (DISTANCE METRIC COSINE);

This speeds up the similarity comparisons … much like an index does for normal WHERE clauses.

Step 4: Semantic Search in SQL

SELECT product_id, product_name, 
       VECTOR_DISTANCE(embedding, ai_vector('text_embedding', 'light laptop for designers')) AS score
FROM products
ORDER BY score
FETCH FIRST 5 ROWS ONLY;

Now we’re searching for meaning, not words.

VECTOR_DISTANCE Breakdown

You can use different math behind the scenes:

VECTOR_DISTANCE(v1, v2 USING COSINE)
VECTOR_DISTANCE(v1, v2 USING EUCLIDEAN)
VECTOR_DISTANCE(v1, v2 USING DOT_PRODUCT)

Cosine is the usual go-to for text. Oracle handles the rest for you.

Use Cases You’ll Actually Care About

1. Semantic Product Search — “Fast shoes for runners” => shows “Nike Vaporfly”, even if it doesn’t say “fast”.

2. Similar Document Retrieval — Find all NDAs that look like this one (even with totally different words).

3. Customer Ticket Suggestion — Auto-suggest resolutions from past tickets. Saves your support team hours.

4. Content Recommendation — “People who read this also read…” kind of stuff. Easy to build now.

5. Risk or Fraud Pattern Matching — Find transactions that feel like fraud ..even if the details don’t match 1:1.

I know it might sound a little confusing .. let's do one more example: Legal Document Matching

CREATE TABLE legal_docs (
  doc_id       NUMBER PRIMARY KEY,
  title        VARCHAR2(255),
  content      CLOB,
  content_vec  VECTOR(1536)
);

Update vectors:

UPDATE legal_docs
SET content_vec = ai_vector('text_embedding', content);

Now find similar docs:

SELECT doc_id, title
FROM legal_docs
ORDER BY VECTOR_DISTANCE(content_vec, ai_vector('text_embedding', 'confidentiality in government contracts'))
FETCH FIRST 10 ROWS ONLY;

That’s it. You’re officially building an AI-powered legal search engine.

Things to Know

  • Creating vectors can be heavy .. batch it.
  • Indexing speeds up similarity search a lot.
  • Combine with normal filters for best results:
SELECT * FROM products
WHERE category = 'laptop'
ORDER BY VECTOR_DISTANCE(embedding, ai_vector('text_embedding', 'gaming laptop under 1kg'))
FETCH FIRST 5 ROWS ONLY;

Final Thoughts from fatdba

I'm honestly impressed. Oracle took something that felt like ML black magic and put it right in SQL. No external service. No complicated setups. Just regular SQL, but smarter.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Diagnosing a MySQL database performance Issue Using MySQLTuner.

Posted by FatDBA on July 20, 2025

A few weeks ago, we ran into a pretty nasty performance issue on one of our production-like MySQL databases. It started with slow application response times and ended with my phone blowing up with alerts. Something was clearly wrong, and while I suspected some bad queries or config mismatches, I needed a fast way to get visibility into what was really happening under the hood.

This is where MySQLTuner came to the rescue, again 🙂 I’ve used this tool in the past, and honestly, it’s one of those underrated gems for DBAs and sysadmins. It’s a Perl script that inspects your MySQL configuration and runtime status and then gives you a human-readable report with recommendations.

Let me walk you through how I used it to identify and fix the problem ..step by step .. including actual command output, what I changed, and the final outcome.

Step 1: Getting MySQLTuner

First things first, if you don’t already have MySQLTuner installed, just download it:

wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl

You don’t need to install anything. Just run it like this:

./mysqltuner.pl --user=root --pass='YourStrongPassword'

(Note: Avoid running this in peak traffic hours on prod unless you’re sure about your load and risk.)

Step 2: Sample Output Snapshot

Here’s a portion of what I got when I ran it:

 >>  MySQLTuner 2.6.20 
 >>  Run with '--help' for additional options and output filtering

[OK] Currently running supported MySQL version 5.7.43
[!!] Switch to 64-bit OS - MySQL cannot use more than 2GB of RAM on 32-bit systems
[OK] Operating on 64-bit Linux

-------- Performance Metrics -------------------------------------------------
[--] Up for: 3d 22h 41m  (12M q [35.641 qps], 123K conn, TX: 92G, RX: 8G)
[--] Reads / Writes: 80% / 20%
[--] Binary logging is enabled (GTID MODE: ON)
[--] Total buffers: 3.2G global + 2.8M per thread (200 max threads)
[OK] Maximum reached memory usage: 4.2G (27.12% of installed RAM)
[!!] Slow queries: 15% (1M/12M)
[!!] Highest connection usage: 98% (197/200)
[!!] Aborted connections: 2.8K
[!!] Temporary tables created on disk: 37% (1M on disk / 2.7M total)

-------- MyISAM Metrics ------------------------------------------------------
[!!] Key buffer used: 17.2% (89M used / 512M cache)
[!!] Key buffer size / total MyISAM indexes: 512.0M/800.0M

-------- InnoDB Metrics ------------------------------------------------------
[OK] InnoDB buffer pool / data size: 2.0G/1.5G
[OK] InnoDB buffer pool instances: 1
[--] InnoDB Read buffer efficiency: 99.92% (925M hits / 926M total)
[!!] InnoDB Write log efficiency: 85.10% (232417 hits / 273000 total)
[!!] InnoDB log waits: 28

-------- Recommendations -----------------------------------------------------
General recommendations:
    Control warning line(s) size by reducing joins or increasing packet size
    Increase max_connections slowly if needed
    Reduce or eliminate persistent connections
    Enable the slow query log to troubleshoot bad queries
    Consider increasing the InnoDB log file size
    Query cache is deprecated and should be disabled

Variables to adjust:
    max_connections (> 200)
    key_buffer_size (> 512M)
    innodb_log_file_size (>= 512M)
    tmp_table_size (> 64M)
    max_heap_table_size (> 64M)

Step 3: What I Observed

Here’s what stood out for me:

1. Too many slow queries — 15% of all queries were slow. That’s a huge red flag. This wasn’t being logged properly either — the slow query log was off.

2. Disk-based temporary tables — 37% of temporary tables were being written to disk. This kills performance during joins and sorts.

3. Connections hitting limit — 197 out of 200 max connections used at peak. Close to saturation ..possibly causing application timeouts.

4. MyISAM key buffer inefficient — Key buffer was too small for the amount of MyISAM index data (yes, we still have a couple of legacy MyISAM tables).

5. InnoDB log file too small — Frequent log flushing and waits were indicated, meaning innodb_log_file_size wasn’t enough for our write load.

Step 4: Actions I Took

Here’s what I changed based on the output and a quick double-check of our workload patterns:

– Enabled Slow Query Log

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

And updated /etc/my.cnf:

slow_query_log = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time = 1

– Increased tmp_table_size and max_heap_table_size:

tmp_table_size = 128M
max_heap_table_size = 128M

(This reduced the % of temp tables going to disk.)

– Raised innodb_log_file_size:

innodb_log_file_size = 512M
innodb_log_files_in_group = 2

Caution: You need to shut down MySQL cleanly and delete old redo logs before applying this change.
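
For the record, the sequence I followed for the redo log change looked roughly like this (a sketch with hypothetical service names/paths; on 5.6.8 and later InnoDB recreates the redo log files itself at startup when the configured size changes, so removing the old files by hand may not even be necessary):

-- Make the final shutdown flush everything before touching the redo log size
SET GLOBAL innodb_fast_shutdown = 0;
-- Then from the OS shell:
--   systemctl stop mysqld
--   vi /etc/my.cnf            -- set innodb_log_file_size = 512M
--   systemctl start mysqld    -- InnoDB rebuilds the redo logs at the new size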

– Raised key_buffer_size:

key_buffer_size = 1G

We still had some legacy MyISAM usage and this definitely helped reduce read latency.

– Upped the max_connections a bit (but also discussed with devs about app-level connection pooling):

max_connections = 300

Step 5: Post-Change Observations

After making these changes and restarting MySQL (for some of the changes to take effect), here’s what I observed:

  • CPU dropped by ~15% at peak hours.
  • Threads_running dropped significantly, meaning less contention.
  • Temp table usage on disk dropped to 12%.
  • Slow query log started capturing some really bad queries, which were fixed in the app code within a few days.
  • No more aborted connections or connection errors from the app layer.

Final Thoughts

MySQLTuner is not a magic bullet, but it’s one of those tools that gives you quick, actionable insights without the need to install big observability stacks or pay for enterprise APM tools. I’d strongly suggest any MySQL admin or engineer dealing with production performance issues keep this tool handy.

It’s also good for periodic health checks, even if you’re not in a crisis. Run it once a month or so, and you’ll catch slow config drifts or usage pattern changes.

Resources

If you’ve had a similar experience or used MySQLTuner in your infra, I’d love to hear what kind of findings you had. Drop them in the comments or message me directly. Happy tuning! 🙂

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/

Posted in Uncategorized | Tagged: , , , , , , , , , | Leave a Comment »

How I see vector indexing and search in Oracle 23ai

Posted by FatDBA on November 1, 2024

Hi everyone!

It’s been a while since Oracle 23ai (originally released as 23c) came out, and there’s already a ton of great content out there on one of its coolest new features: Vector Indexes and Searches. This feature has opened up exciting possibilities for AI-driven applications, from advanced recommendations to similarity searches within databases. Now, it’s finally my turn to dive in and put it to the test!

Vector search in Oracle 23ai is designed for searching, comparing, and finding similarities between data represented as vectors (numerical arrays). This capability is particularly useful for AI and machine learning applications, where data like images, text, and products are often encoded as vector embeddings. Oracle’s vector search lets the database process these embeddings efficiently, so similarity-based queries, recommendations, and searches can run directly inside the database without shipping data to external tools.

The foundation of this feature is the VECTOR data type, a powerful tool for storing and managing vector embeddings directly within the database. It allows Oracle to support advanced AI and machine learning applications, such as similarity searches and recommendation engines, by efficiently handling multi-dimensional arrays that represent data features, semantics, or other complex characteristics.

Key Advantages of the VECTOR Data Type

  1. Compact Multi-Dimensional Array Storage:
  • VECTOR allows fixed-length arrays of numbers (often floating-point) to be stored compactly in a single column. A 100-dimensional vector, for instance, might represent an embedding for text, an image, or a user profile, capturing nuanced data in a manageable form.
  2. Efficient Similarity Calculations:
  • With Oracle 23ai, VECTOR data can be processed for similarity searches directly in the database. Functions like VECTOR_DISTANCE make it easy to calculate similarity scores or distances between vectors, enabling Oracle to quickly identify similar items without needing external tools or complex data transformations.
  3. Tailored for AI and Machine Learning:
  • VECTOR is ideal for applications that require semantic searches, such as finding similar products, media, or documents, and for AI-driven recommendations. This makes it well-suited for Natural Language Processing (NLP), recommendation engines, and image recognition tasks.
  4. Support for Various Distance Metrics:
  • Oracle’s VECTOR data type is compatible with common distance metrics, such as Euclidean and cosine similarity, which measure how close or far two vectors are from each other. This enables efficient and flexible similarity calculations, essential for applications relying on vector comparisons.
  5. Scalability with Vector Indexing:
  • Oracle 23ai allows VECTOR data to be indexed with vector indexes (CREATE VECTOR INDEX), optimizing performance for vector-based queries. This makes real-time similarity searches on large datasets feasible, supporting applications that need quick response times, such as real-time AI insights and personalized recommendations.

The VECTOR data type in Oracle 23ai is a robust addition for data scientists and developers, enabling high-performance similarity searches, efficient data storage, and a foundation for complex AI and machine learning workloads directly within the Oracle database.

Alright, now that the stage is set, let’s see how Vector Indexes and Searches can bring a whole new dimension to Oracle 23ai. Below is a detailed, step-by-step demo of vector search and indexing in Oracle Database 23ai. The example simulates a product recommendation system that uses vector embeddings to represent product features.

1. Set Up the Environment

Make sure you’re on Oracle 23ai, as vector indexes and the VECTOR data type are new features in this release.
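
A quick way to confirm the release before trying the demo (a generic check, nothing vector-specific):

SELECT banner_full FROM v$version;  -- should report an Oracle Database 23ai release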

2. Create a Table with Vector Embeddings

In this example, each product is represented by a 10-dimensional vector embedding. This vector could represent a combination of product attributes, like color, category, brand, and other numerical encodings.

CREATE TABLE products (
    product_id NUMBER PRIMARY KEY,
    product_name VARCHAR2(100),
    product_description VARCHAR2(255),
    product_embedding VECTOR(10) -- Vector with 10 dimensions
);

3. Insert Sample Data

Populate the table with sample products and their vector embeddings. Here’s an example where each vector represents the product’s characteristics:

-- Vector values are supplied as bracketed string literals and converted with TO_VECTOR
INSERT INTO products (product_id, product_name, product_description, product_embedding) VALUES 
    (1, 'Laptop A', 'High-performance laptop for professionals', TO_VECTOR('[0.5, 0.2, 0.7, 0.1, 0.4, 0.3, 0.8, 0.6, 0.9, 0.3]')),
    (2, 'Smartphone B', 'Latest smartphone with advanced features', TO_VECTOR('[0.6, 0.1, 0.8, 0.5, 0.2, 0.7, 0.4, 0.3, 0.6, 0.1]')),
    (3, 'Tablet C', 'Compact and powerful tablet for multimedia', TO_VECTOR('[0.4, 0.9, 0.5, 0.7, 0.3, 0.6, 0.2, 0.8, 0.1, 0.9]')),
    (4, 'Laptop D', 'Affordable laptop with reliable performance', TO_VECTOR('[0.7, 0.6, 0.4, 0.9, 0.8, 0.2, 0.5, 0.3, 0.1, 0.4]')),
    (5, 'Headphones E', 'Noise-canceling headphones with high fidelity', TO_VECTOR('[0.3, 0.8, 0.1, 0.9, 0.6, 0.4, 0.7, 0.2, 0.5, 0.6]'));

4. Create a Vector Index for Efficient Searches

Creating a vector index allows Oracle to search these embeddings efficiently based on similarity, which is ideal for recommendation systems, image retrieval, and text similarity searches.

CREATE VECTOR INDEX product_vector_idx ON products (product_embedding)
    ORGANIZATION NEIGHBOR PARTITIONS
    DISTANCE COSINE
    WITH TARGET ACCURACY 95;

5. Perform a Vector Similarity Search

With the vector index in place, you can now perform searches to find products that are similar to a given vector. For example, suppose a user has shown interest in products that correspond to a vector representation of [0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]. You can use this vector to find products with similar embeddings.

SELECT product_name, product_description, product_embedding
FROM products
ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
FETCH FIRST 5 ROWS ONLY;

In this query:

  • The VECTOR_DISTANCE function calculates the similarity (distance) between the target vector and each product’s embedding. Sorting by VECTOR_DISTANCE allows you to retrieve the closest matches.
  • This example fetches the top 5 most similar products.
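
One caveat worth noting: a plain exact ORDER BY … FETCH FIRST query may not use the vector index at all. To actually leverage the index, Oracle 23ai also supports approximate similarity search; below is a minimal sketch of the same query asking for an approximate top-N (the trade-off is a small loss of accuracy for much better performance on large tables):

SELECT product_name, product_description
FROM products
ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]'))
FETCH APPROX FIRST 5 ROWS ONLY;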

6. Use Case: Personalized Product Recommendation

This setup is ideal for applications such as a product recommendation engine. If you have a user profile represented as a vector, you could use a vector search to find products similar to that profile, offering highly relevant recommendations.

7. Handling Updates to Vector Embeddings

If a product’s features change, you may need to update its embedding. Here’s how you could update the vector for a specific product:

UPDATE products
SET product_embedding = TO_VECTOR('[0.7, 0.3, 0.9, 0.5, 0.6, 0.4, 0.8, 0.2, 0.3, 0.5]')
WHERE product_id = 2;

8. Advanced Use Case: Nearest Neighbor Search with Filtering

Let’s extend the search by filtering for specific criteria. For example, if you only want to recommend products that contain “Laptop” in the name, you can add a WHERE clause to the query.

SELECT product_name, product_description, product_embedding
FROM products
WHERE product_name LIKE '%Laptop%'
ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
FETCH FIRST 5 ROWS ONLY;

9. Understanding the Vector Distance Metric

Before wrapping up, it’s worth being clear about what vector similarity actually means:

  • Vector Distance (Similarity): This metric measures how similar or dissimilar two vectors are. A smaller distance implies a higher similarity. Oracle 23ai’s VECTOR_DISTANCE function calculates this distance to identify closely related items, making it ideal for applications in recommendations, content search, and clustering.
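
For illustration, the metric can also be passed explicitly as the third argument to VECTOR_DISTANCE; a small sketch comparing cosine and Euclidean distance against the same target vector used in the demo above:

SELECT product_name,
       VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]'), COSINE)    AS cosine_dist,
       VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]'), EUCLIDEAN) AS euclidean_dist
FROM   products
ORDER BY cosine_dist;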

10. Additional Notes for Optimization and Use Cases

  • Index Maintenance: Like other indexes, a vector index should be maintained for performance. Be sure to monitor index usage and rebuild it if necessary for large datasets.
  • Scalability: Oracle 23ai’s vector search is optimized for large-scale deployments, making it suitable for extensive recommendation systems, content-based searches, and personalized user experiences.

That wraps up the walkthrough of Oracle 23ai’s new vector search and indexing features. If you’d like to see a specific scenario covered in more detail, let me know in the comments!

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , , , , | Leave a Comment »

Oracle 23ai DBMS_HCHECK is your new best friend for Data Dictionary health

Posted by FatDBA on September 22, 2024

Today, while experimenting with Oracle 23ai new features, I came across a particularly helpful package called DBMS_DICTIONARY_CHECK (known as DBMS_HCHECK in the earlier 23c releases). It is incredibly useful for identifying and resolving data dictionary issues, which can sometimes be indirectly linked to errors like ORA-00600 or other inconsistencies or unexpected changes in the data dictionary.

I tested both options—FULL and CRITICAL—and found them highly effective. Both procedures are quick, perform a deep validation of the data dictionary, and while they print the results on screen, they also write the output to a trace file on the operating system for further review. The FULL procedure conducts an exhaustive check, while the CRITICAL option focuses only on the most essential checks, making it quicker to execute. One of the most valuable aspects of these procedures is their ability not just to identify issues but also to repair them if you set the repair=>TRUE parameter.

The results of the execution are reported as:

CRITICAL: Requires immediate attention and fixing.
FAIL: High-priority issue that needs resolution.
WARN: Non-urgent but worth resolving.
PASS: No issues detected.

It brings back memories of the days when you had to manually download Oracle’s hcheck.sql script to analyze the data dictionary for known problems. Now that this functionality is integrated directly into the database, the process is significantly more convenient and efficient.

Note: In the initial Oracle 23c releases this package was named DBMS_HCHECK, while in Oracle 23ai it has been renamed to DBMS_DICTIONARY_CHECK.

This integrated approach makes monitoring and maintaining the database’s health much simpler and more automated.

Below is how you perform a full check on the data dictionary.

SQL>  set serveroutput on size unlimited
SQL> execute dbms_dictionary_check.full
dbms_dictionary_check on 22-SEP-2024 05:04:51
----------------------------------------------
Catalog Version 23.0.0.0.0 (2300000000)
db_name: FREE
Is CDB?: YES CON_ID: 3 Container: FREEPDB1
Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc

                                   Catalog       Fixed
Procedure Name                     Version    Vs Release    Timestamp
Result
------------------------------ ... ---------- -- ---------- --------------
------
.- OIDOnObjCol                 ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- LobNotInObj                 ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- SourceNotInObj              ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- OversizedFiles              ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- PoorDefaultStorage          ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- PoorStorage                 ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- TabPartCountMismatch        ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- TabComPartObj               ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- Mview                       ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- ValidDir                    ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- DuplicateDataobj            ... 2300000000 <=  *All Rel* 09/22 05:04:51 PASS
.- ObjSyn                      ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ObjSeq                      ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ValidateSeg                 ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- IndexSeg                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- IndexPartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- IndexSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- TableSeg                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- TablePartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- TableSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- PartCol                     ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- IndPartObj                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- DuplicateBlockUse           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- FetUet                      ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- Uet0Check                   ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- SeglessUET                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ValidInd                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ValidTab                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- IcolDepCnt                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ObjIndDobj                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- TrgAfterUpgrade             ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ObjType0                    ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ValidOwner                  ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- StmtAuditOnCommit           ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- PublicObjects               ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- SegFreelist                 ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ValidDepends                ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- CheckDual                   ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ObjectNames                 ... 2300000000 <=  *All Rel* 09/22 05:04:52 PASS
.- ChkIotTs                    ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- NoSegmentIndex              ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- NextObject                  ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- DroppedROTS                 ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- FilBlkZero                  ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- DbmsSchemaCopy              ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- IdnseqObj                   ... 2300000000 >  1201000000 09/22 05:04:53 PASS
.- IdnseqSeq                   ... 2300000000 >  1201000000 09/22 05:04:53 PASS
.- ObjError                    ... 2300000000 >  1102000000 09/22 05:04:53 PASS
.- ObjNotLob                   ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- SegNotInDeferredStg         ... 2300000000 >  1102000000 09/22 05:04:53 PASS
.- SystemNotRfile1             ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- DictOwnNonDefaultSYSTEM     ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- ValidateTrigger             ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- ObjNotTrigger               ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:04:53 PASS
.- OBJRecycleBin               ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- LobSeg                      ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- ObjLogicalConstraints       ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
.- ValidateFile                ... 2300000000 <=  *All Rel* 09/22 05:04:53 PASS
---------------------------------------
22-SEP-2024 05:04:53  Elapsed: 2 secs
---------------------------------------
Found 0 potential problem(s) and 0 warning(s)
Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc

PL/SQL procedure successfully completed.

SQL>

This is how you can run a quick CRITICAL check on the data dictionary.

SQL> set serveroutput on size unlimited
SQL> execute dbms_dictionary_check.critical
dbms_dictionary_check on 22-SEP-2024 05:15:42
----------------------------------------------
Catalog Version 23.0.0.0.0 (2300000000)
db_name: FREE
Is CDB?: YES CON_ID: 3 Container: FREEPDB1
Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc

                                   Catalog       Fixed
Procedure Name                     Version    Vs Release    Timestamp
Result
------------------------------ ... ---------- -- ---------- --------------
------
.- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:15:42 PASS
.- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:15:42 PASS
.- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:15:42 PASS
.- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:15:42 PASS
---------------------------------------
22-SEP-2024 05:15:42  Elapsed: 0 secs
---------------------------------------
Found 0 potential problem(s) and 0 warning(s)
Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc

PL/SQL procedure successfully completed.

It also writes a report (trace file) to your DIAG directory.

-rw-r-----. 1 oracle oinstall  15K Sep 22 05:15 FREE_ora_3941_DICTCHECK.trm
-rw-r-----. 1 oracle oinstall  35K Sep 22 05:15 FREE_ora_3941_DICTCHECK.trc
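
If you’re not sure where that trace file lands on your system, one generic way to find the trace directory (not specific to DBMS_DICTIONARY_CHECK) is to query V$DIAG_INFO:

SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Default Trace File');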

And below is how you can even repair issues or inconsistencies in the data dictionary. In my case there weren’t any, but this is how you would do it.

SQL>
SQL>
SQL> EXECUTE dbms_dictionary_check.full(repair=>TRUE)
dbms_dictionary_check on 22-SEP-2024 05:13:36
----------------------------------------------
Catalog Version 23.0.0.0.0 (2300000000)
db_name: FREE
Is CDB?: YES CON_ID: 3 Container: FREEPDB1
Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc

                                   Catalog       Fixed
Procedure Name                     Version    Vs Release    Timestamp
Result
------------------------------ ... ---------- -- ---------- --------------
------
.- OIDOnObjCol                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- LobNotInObj                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- SourceNotInObj              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- OversizedFiles              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- PoorDefaultStorage          ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- PoorStorage                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- TabPartCountMismatch        ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- TabComPartObj               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- Mview                       ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidDir                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- DuplicateDataobj            ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjSyn                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjSeq                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidateSeg                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- UndoSeg                     ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- IndexSeg                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- IndexPartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- IndexSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- TableSeg                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- TablePartitionSeg           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- TableSubPartitionSeg        ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- PartCol                     ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- IndPartObj                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- DuplicateBlockUse           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- FetUet                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- Uet0Check                   ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- SeglessUET                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidInd                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidTab                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- IcolDepCnt                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjIndDobj                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- TrgAfterUpgrade             ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjType0                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidOwner                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- StmtAuditOnCommit           ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- PublicObjects               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- SegFreelist                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidDepends                ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- CheckDual                   ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjectNames                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ChkIotTs                    ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- NoSegmentIndex              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- NextObject                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- DroppedROTS                 ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- FilBlkZero                  ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- DbmsSchemaCopy              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- IdnseqObj                   ... 2300000000 >  1201000000 09/22 05:13:36 PASS
.- IdnseqSeq                   ... 2300000000 >  1201000000 09/22 05:13:36 PASS
.- ObjError                    ... 2300000000 >  1102000000 09/22 05:13:36 PASS
.- ObjNotLob                   ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- MaxControlfSeq              ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- SegNotInDeferredStg         ... 2300000000 >  1102000000 09/22 05:13:36 PASS
.- SystemNotRfile1             ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- DictOwnNonDefaultSYSTEM     ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidateTrigger             ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjNotTrigger               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- InvalidTSMaxSCN             ... 2300000000 >  1202000000 09/22 05:13:36 PASS
.- OBJRecycleBin               ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- LobSeg                      ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ObjLogicalConstraints       ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- SysSequences                ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
.- ValidateFile                ... 2300000000 <=  *All Rel* 09/22 05:13:36 PASS
---------------------------------------
22-SEP-2024 05:13:36  Elapsed: 0 secs
---------------------------------------
Found 0 potential problem(s) and 0 warning(s)
Trace File: /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_3941_DICTCHECK.trc

PL/SQL procedure successfully completed.

SQL>

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »