Tales From A Lazy Fat DBA

It's all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


From Painful Manual LOB Shrink to Automatic SecureFiles Shrink

Posted by FatDBA on November 15, 2025

I’ve been working with LOBs for years now, and trust me, shrinking them has always been a headache. Anyone who has ever tried ALTER TABLE … SHRINK SPACE on a big SecureFiles LOB knows the pain … blocking sessions, unexpected waits, and sometimes that lovely ORA-1555 popping up at the worst time. Every DBA eventually gets into that situation where a LOB segment is 200 GB on disk but only 10 GB of real data remains. You delete rows… but the space never comes back unless you manually shrink it, which itself can cause more issues.

With the introduction of Automatic SecureFiles Shrink, Oracle really made a DBA’s life easier. This feature, which first came out in 23ai, quietly frees up unused LOB space in the background without disrupting your workload. I wanted to see how it behaves in a real scenario, so I set up a small lab and tested it out. Here’s the whole experiment, raw and simple.

Let's do a demo and understand how this new feature works. I spun up a fresh PDB and made a small tablespace just for this test. Nothing fancy.

CREATE TABLESPACE lobts
  DATAFILE '/u02/oradata/LOBTS01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M;

Tablespace created.



-- Table with a securefile LOB based column.
CREATE TABLE tst_securefile_lob
(
    id NUMBER,
    lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE (
    TABLESPACE lobts
    CACHE
);

Table created.




SELECT table_name, column_name, securefile
FROM   user_lobs
WHERE  table_name='TST_SECUREFILE_LOB';

TABLE_NAME           COLUMN_NAME   SECUREFILE
-------------------  ------------  ----------
TST_SECUREFILE_LOB   LOB_DATA      YES


Next, I inserted a good amount of junk data: around 10,000 rows of random CLOB strings. I wanted the LOB segment to be big enough to show a clear difference after the shrink.

BEGIN
  FOR r IN 1 .. 10 LOOP
    INSERT INTO tst_securefile_lob (id, lob_data)
    SELECT r*100000 + level,
           TO_CLOB(DBMS_RANDOM.STRING('x', 32767))
    FROM dual
    CONNECT BY level <= 1000;
    COMMIT;
  END LOOP;
END;
/

PL/SQL procedure successfully completed.



SELECT COUNT(*) FROM tst_securefile_lob;

  COUNT(*)
----------
     10000




SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE  ul.table_name = 'TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$     131072     1024.00




-- After stats and a quick check in USER_SEGMENTS, the LOB segment was showing a nice chunky size. 
-- Then I deleted almost everything
-- Now the table will have very few rows left, but the LOB segment was still the same size. As usual.
DELETE FROM tst_securefile_lob
WHERE id < 900000;
COMMIT;

9990 rows deleted. 
Commit complete.


-- Checking LOB Internal Usage (Before Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 131072  bytes=1073741824
Used blocks         =  10240  bytes=83886080
Expired blocks      = 110592  bytes=905969664
Unexpired blocks    =  10240  bytes=83886080
-- This clearly shows almost the entire segment is expired/free but not reclaimed.
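The show_securefile_space helper called above is not a built-in; it's a small wrapper of mine. A possible sketch of it, assuming it simply wraps the SecureFiles overload of DBMS_SPACE.SPACE_USAGE and prints the four counters:

```sql
-- Sketch of the show_securefile_space helper used in this post; it wraps
-- the SecureFiles overload of DBMS_SPACE.SPACE_USAGE.
CREATE OR REPLACE PROCEDURE show_securefile_space (
  p_owner   IN VARCHAR2,
  p_segment IN VARCHAR2)
AS
  l_seg_blocks    NUMBER;  l_seg_bytes    NUMBER;
  l_used_blocks   NUMBER;  l_used_bytes   NUMBER;
  l_exp_blocks    NUMBER;  l_exp_bytes    NUMBER;
  l_unexp_blocks  NUMBER;  l_unexp_bytes  NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner       => p_owner,
    segment_name        => p_segment,
    segment_type        => 'LOB',
    segment_size_blocks => l_seg_blocks,
    segment_size_bytes  => l_seg_bytes,
    used_blocks         => l_used_blocks,
    used_bytes          => l_used_bytes,
    expired_blocks      => l_exp_blocks,
    expired_bytes       => l_exp_bytes,
    unexpired_blocks    => l_unexp_blocks,
    unexpired_bytes     => l_unexp_bytes);

  DBMS_OUTPUT.PUT_LINE('Segment blocks      = ' || l_seg_blocks   || '  bytes=' || l_seg_bytes);
  DBMS_OUTPUT.PUT_LINE('Used blocks         = ' || l_used_blocks  || '  bytes=' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Expired blocks      = ' || l_exp_blocks   || '  bytes=' || l_exp_bytes);
  DBMS_OUTPUT.PUT_LINE('Unexpired blocks    = ' || l_unexp_blocks || '  bytes=' || l_unexp_bytes);
END;
/
```

Run it with SET SERVEROUTPUT ON so the counters actually print.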


-- Checking Auto Shrink Statistics (Before Enabling)
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE '%Auto SF SHK%'
ORDER  BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               0
Auto SF SHK successful                       0
Auto SF SHK total number of tasks            0

Turning on Automatic Shrink — By default this feature is OFF, so I enabled it:

EXEC DBMS_SPACE.SECUREFILE_SHRINK_ENABLE;

PL/SQL procedure successfully completed.

That’s literally it. No parameters, no tuning, nothing else. Just enable.

Automatic SecureFiles Shrink does not run immediately after you delete data. Oracle requires that a SecureFiles LOB segment be idle for a specific amount of time before it becomes eligible for shrinking, and the default idle-time limit is 1,440 minutes (24 hours). “Idle” means that no DML or preallocation activity has occurred on that LOB during that period. Once the segment meets this condition, Oracle considers it during its automatic background shrink task, which is part of AutoTask and runs every 30 minutes with a defined processing window.

When the task executes, it attempts to shrink eligible segments, but it does so gradually and in small increments using a trickle-based approach, rather than reclaiming all possible space in a single operation. This incremental behavior is deliberate: it reduces impact on running workloads and avoids heavy reorganization all at once. Only segments that meet all selection criteria .. such as having sufficient free space above the defined thresholds and not using RETENTION MAX … are processed. Because of this incremental design and the eligibility rules, the full space reclamation process can span multiple background cycles.
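Two related knobs worth knowing: the feature has a matching disable call if you ever need to pause it (say, around a huge batch load), and v$securefile_shrink records what the background task has actually done:

```sql
-- Pause automatic SecureFiles shrink (re-enable later with SECUREFILE_SHRINK_ENABLE)
EXEC DBMS_SPACE.SECUREFILE_SHRINK_DISABLE;

-- See which segments the background task has shrunk so far
SELECT owner, segment_name, shrunk_bytes, attempts, last_shrink_time
FROM   v$securefile_shrink;
```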

Some of the internal hidden/underscore parameters that control these limits (don't change them unless Oracle Support asks you to, or you are playing on a lab system):

Parameter                                    Default_Value    Session_Value    Instance_Value   IS_SESSION_MODIFIABLE   IS_SYSTEM_MODIFIABLE
------------------------------------------- ---------------  ---------------- ---------------- ----------------------- ---------------------
_ktsls_autoshrink_seg_idle_seconds            86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_seg_pen_seconds             86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_trickle_mb                  5                5                5                FALSE                   IMMEDIATE
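To peek at the current values of those underscore parameters you have to go through the x$ tables as SYS (again, lab systems only); the classic x$ksppi/x$ksppcv join does the job:

```sql
-- Hidden parameter values; requires SYS access
SELECT a.ksppinm  AS parameter,
       b.ksppstvl AS value
FROM   x$ksppi  a,
       x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm LIKE '_ktsls_autoshrink%';
```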

Okay, let's check the post-change output and see what automatic LOB shrink did to our test object.

SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE ul.table_name='TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$      40960      320.00

Note: From ~1024 MB down to ~320 MB. Auto shrink worked     🙂



-- DBMS_SPACE Usage (After Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 40960  bytes=335544320
Used blocks         =  9216  bytes=75497472
Expired blocks      =  6144  bytes=50331648
Unexpired blocks    =  9216  bytes=75497472

Note: Expired blocks dropped from ~110k to ~6k. This confirms auto shrink freed most of the fragmented space.






-- After the task is run.
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%Auto SF SHK%'
ORDER BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               1
Auto SF SHK successful                       1
Auto SF SHK total number of tasks            1




SELECT owner,
       segment_name,
       shrunk_bytes,
       attempts,
       last_shrink_time
FROM v$securefile_shrink;

OWNER      SEGMENT_NAME               SHRUNK_BYTES    ATTEMPTS   LAST_SHRINK_TIME
---------  ------------------------   -------------   ---------  ---------------------------
PRASHANT   SYS_LOB0001234567C00002$      744947712          1     14-NOV-2025 06:32:15

Note:  Oracle automatically reclaimed ~710 MB of wasted LOB space.

This feature, It’s simple, it’s safe, and it saves DBAs from doing manual shrink maintenance again and again. It’s not a fast feature it’s slow and polite on purpose but it works exactly as expected.

If your system has LOBs (EBS attachments, documents, JSON, logs, media files, etc.), you should absolutely enable this. Let Oracle handle the boring part.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/


A New ACE Chapter…Oracle ACE Apprentice.. and I wasn’t even in Vegas

Posted by FatDBA on October 18, 2025

This year I couldn’t attend Oracle AI World in Las Vegas, and I was honestly feeling a bit bad about missing it, especially when timelines were full of shiny posts, selfies, and of course swag pics 😄

But while going through the updates, I came across something that actually made me happier than any conference badge … Oracle officially launched a new ACE tier called “Oracle ACE Apprentice”, and this is a proper new program, not the old structure.

The idea behind the Apprentice tier is simple but powerful: give people who are learning, sharing, speaking, or contributing (even in small ways) a proper entry ramp into the ACE ecosystem. It's for early contributors who want to grow into ACE Pro / ACE Director later, but need an official "start of journey" recognition and guidance.

Earlier, people used to wait until they became a heavy community contributor before applying. Now Oracle has created a pathway instead of a gate which I personally think is the most modern and inclusive update they’ve made to the ACE program.

How can someone join?

  • There is a simple pre-registration form for ACE Apprentice
  • You need to use the same email as your Oracle University (certification) account
  • After joining, you continue contributing — blogs, talks, code samples, community help etc.
  • And when your impact grows, you can later nominate yourself for ACE Pro

Very clean and very beginner-friendly. Honestly, this is going to bring a lot more new talent into the ecosystem.

For more details: https://ace.oracle.com/ords/r/ace/oracle-aces/join-ace

I have been in the ACE club for a while now, and it's really nice to see the program evolve with this new energy, especially at a time when Oracle is pushing the AI era forward. So yeah, I missed Vegas this year… but I still ended up with good ACE news in the most unexpected way 😎

Looks like next year I’ll need to be there in person again, this time wearing two badges: ACE Pro + ACE Program Fanboy 😂

Hope It Helped!
Prashant Dixit


Stop Writing Purge Scripts. DBMS_ILM Is the Hidden Hero You Ignored

Posted by FatDBA on September 14, 2025

Enterprise databases accumulate vast amounts of historical data over time: journal lines in Oracle E-Business Suite (EBS), payroll checks in PeopleSoft HCM, closed service requests in CRM, or audit logs in custom ERP systems. Left unchecked, this data creates storage bloat, longer backup windows, degraded query performance, and costly migrations. Traditionally, DBAs wrote custom purge scripts to handle "cold" data. But Oracle introduced a smarter, automated way: DBMS_ILM. With DBMS_ILM and the broader Information Lifecycle Management (ILM) framework, you can define policies that automatically compress, move, or archive data as it ages, turning Oracle into a self-managing database.

Historical Timeline

  • Oracle 11g (2007–2012)
    • ILM was more of a concept than a feature.
    • DBMS_ILM package existed but was very limited, focusing mainly on manual compression/move operations.
  • Oracle 12c Release 1 (2013)
    • Heat Map introduced (tracks row/segment usage).
    • Automatic Data Optimization (ADO) became available.
    • DBMS_ILM matured to support policy-driven movement and compression.
    • Licensing Note: Compression features required Advanced Compression Option (ACO).
  • Oracle 12c Release 2 (2016)
    • Enhanced policy management, subpartition-level actions.
    • Support for Hybrid Columnar Compression (HCC) in engineered systems.
  • Oracle 18c / 19c (2018–2019)
    • DBMS_ILM + ADO stabilized as enterprise-ready.
    • Background ILM jobs integrated with DBMS_SCHEDULER.
    • Adopted widely in ERP/CRM archiving projects.
  • Oracle 21c / 23c (2021–2023)
    • Minor refinements but ILM model remained stable.
    • Greater focus on integration with Exadata and Cloud tiering.
    • In 23ai (23c), DBMS_ILM is still the core API for lifecycle policies, with Oracle pushing more cloud-managed automation in Autonomous DB.

Licensing is often misunderstood, so let’s break it down clearly:

Feature                                               Oracle Version   License Requirement
----------------------------------------------------  ---------------  -----------------------------------------------
Heat Map (track row/segment usage)                    12c+             Enterprise Edition (included)
Basic ILM actions (MOVE partitions, mark READ ONLY)   12c+             Enterprise Edition (included)
Compression policies (OLTP, Advanced Row Compression) 12c+             Enterprise Edition + Advanced Compression (ACO)
Hybrid Columnar Compression (HCC)                     11g+             Exadata, ZFS, or ACFS + ACO
Automatic tiering to external storage                 12c+             ILM Pack / Engineered Systems
Autonomous DB ILM policies                            19c+             Included in Autonomous service pricing

Golden Rule:

  • Moving cold partitions to an archive tablespace = FREE (EE)
  • Compressing cold partitions = PAID (ACO)
  • Using Exadata HCC or tiering = Requires Engineered System licensing

Always validate with:

SELECT name, detected_usages, currently_used, last_usage_date
FROM   dba_feature_usage_statistics
WHERE  name LIKE '%Compression%' OR name LIKE '%Heat Map%' OR name LIKE '%ILM%';

This query helps confirm whether your test triggered paid options.

DBMS_ILM Components

  • Heat Map
    • Tracks data access (reads/writes).
    • Provides metrics for ILM decisions.
  • Automatic Data Optimization (ADO)
    • Executes policies when conditions are met.
    • Works on segments, partitions, or rows.
  • DBMS_ILM Package
    • API to create, enable, execute, and monitor ILM policies.
    • Provides manual overrides for testing.

Use Cases (ERP/CRM/PeopleSoft)

  1. PeopleSoft Payroll (PAY_CHECKS)
    • Compress after 2 years.
    • Move to ARCHIVE tablespace after 7 years.
  2. Oracle EBS GL (GL_JE_LINES)
    • Move journals > 10 years old.
    • Keep recent 3 years online.
  3. CRM Tickets
    • Close/compress tickets > 5 years.
    • Keep only 2 years active for customer service SLAs.
  4. Custom Banking ERP (TXN_AUDIT_LOG)
    • Auto-compress audit logs after 36 months.
    • Mark READ ONLY after 7 years.

Detailed Demo

6.1. Enable Heat Map

ALTER SYSTEM SET HEAT_MAP=ON SCOPE=BOTH;
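A quick sanity check that the parameter actually took effect:

```sql
SELECT name, value
FROM   v$parameter
WHERE  name = 'heat_map';
```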

6.2. Create Archive Tablespace

CREATE TABLESPACE ARCHIVE_TS 
DATAFILE '/u01/app/oracle/oradata/ARCHIVE01.dbf' SIZE 20G AUTOEXTEND ON;

6.3. Create Partitioned Table

CREATE TABLE psft.pay_checks
(
  check_id    NUMBER,
  emp_id      NUMBER,
  amount      NUMBER,
  check_date  DATE
)
PARTITION BY RANGE (check_date)
(
  PARTITION p2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')),
  PARTITION p2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
  PARTITION p2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

6.4. Insert Data

INSERT INTO psft.pay_checks
SELECT level, mod(level,1000), dbms_random.value(1000,5000),
       ADD_MONTHS(SYSDATE, -level)
FROM dual CONNECT BY level <= 200000;
COMMIT;

6.5. Add Policy

-- ILM policies are attached with DDL rather than a PL/SQL API (DBMS_ILM
-- executes and monitors them): tier partitions whose data hasn't been
-- modified for 7 years down to the archive tablespace.
ALTER TABLE psft.pay_checks
  ILM ADD POLICY TIER TO archive_ts
  SEGMENT AFTER 7 YEARS OF NO MODIFICATION;

6.6. Execute Policy

DECLARE
  v_task_id NUMBER;
BEGIN
  DBMS_ILM.EXECUTE_ILM(
    owner       => 'PSFT',
    object_name => 'PAY_CHECKS',
    task_id     => v_task_id);
  DBMS_OUTPUT.PUT_LINE('ILM task id: ' || v_task_id);
END;
/
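One practical tip for DEV/TEST: you don't want to wait 7 years for the condition to mature. DBMS_ILM_ADMIN.CUSTOMIZE_ILM can switch policy-age evaluation from days to seconds so a lab run completes in minutes (never do this in production):

```sql
-- DEV/TEST only: evaluate ILM policy ages in seconds instead of days
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
    DBMS_ILM_ADMIN.POLICY_TIME,
    DBMS_ILM_ADMIN.ILM_POLICY_IN_SECONDS);
END;
/
```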

Monitoring ILM

-- Check policies and the objects they are attached to
SELECT policy_name, action_type, scope, tier_tablespace,
       condition_type, condition_days
FROM   user_ilmdatamovementpolicies;

SELECT policy_name, object_name, enabled
FROM   user_ilmobjects;

-- Monitor ILM tasks
SELECT task_id, state, creation_time, start_time, completion_time
FROM   user_ilmtasks;

-- Check job results
SELECT task_id, job_name, job_state
FROM   user_ilmresults;

Verify Partition Location

SELECT table_name, partition_name, tablespace_name
FROM   dba_tab_partitions
WHERE  table_name='PAY_CHECKS';

Heat Map Access Tracking

SELECT object_name, segment_write_time, full_scan, lookup_scan
FROM   dba_heat_map_segment
WHERE  object_name = 'PAY_CHECKS';

Troubleshooting

Issue                  Likely Cause           Fix
---------------------  ---------------------  -------------------------------
Policy doesn't fire    Heat Map off           ALTER SYSTEM SET HEAT_MAP=ON
Partitions not moved   Table not partitioned  Partition by DATE
ORA-4031 errors        Memory pressure        Tune ILM job resource usage
Compression errors     Missing ACO license    Remove compression actions

Best Practices

  1. Always partition large tables by date before using ILM.
  2. Run in DEV and validate with ERP/CRM reports before PROD rollout.
  3. Align with compliance/legal retention rules (SOX, GDPR, HIPAA).
  4. Schedule ILM jobs in maintenance windows.
  5. Monitor dba_feature_usage_statistics to avoid license violations.
  6. Combine ILM with purging strategies for truly obsolete data.

Conclusion

Introduced in Oracle 11g as a concept and matured in 12c, DBMS_ILM is now a stable enterprise feature that brings policy-driven automation to Oracle databases.

  • For PeopleSoft Payroll, it can transparently archive checks older than 7 years.
  • For EBS Financials, it can compress historical journal entries.
  • For CRM logs, it can move cold data to cheaper storage.

When deployed with proper licensing (EE + ACO where needed), DBMS_ILM transforms data management from ad-hoc DBA scripts into a compliant, auditable, and automated lifecycle system.

It is not just a DBA feature… it’s a data governance framework, critical for ERP/CRM modernization and cloud migrations.

If you want to test it, start by enabling Heat Map in your DEV/TEST system, track your largest ERP/CRM tables for 30 days, then design your first MOVE-only ILM policy (no compression). Once comfortable, expand to compression (if licensed) and tiering strategies.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Early Access, Cool Swag, and the Best Community Ever – The Oracle ACE Story

Posted by FatDBA on August 11, 2025

I was inducted into the Oracle ACE Program back in 2021, and honestly, it has been one of the most rewarding chapters of my career. Over the years, I’ve had the chance to connect directly with Oracle teams, work with passionate community members, and be part of projects that pushed me to learn, share, and grow.

Last year at Vegas, Oracle CloudWorld 2024, I met so many of these like-minded Oracle professionals in person. It was incredible. The hallway conversations, the deep-dive technical sessions, the after-hours chats over coffee — those are moments you simply can’t replicate online. The energy of being surrounded by people who live and breathe Oracle technology was something else entirely.

Sadly, I’ll be missing Oracle CloudWorld this year because of other travel commitments 😦 It’s a bit of a heartbreaker, but it also makes me appreciate just how special these opportunities are when you do get them.

That said, my journey as an Oracle ACE has been about far more than conferences. From early access to technology, to direct collaboration with Oracle’s product teams, to the friendships formed with other ACEs — the benefits go well beyond the badge on my LinkedIn profile … I also have to give a huge shout-out to the Oracle ACE Program team – Jen Nicholson, Oana Bonu, and Cassandra Call — for the constant encouragement, guidance, and for keeping the ACE community so well-connected and engaged. Their support plays a massive role in making the program such a great experience.


So, you have probably seen someone proudly showing off that little red Oracle ACE badge on their LinkedIn profile.
It looks cool, but what does it actually mean? Is it just a bragging sticker or is there something real behind it?

It turns out it is a whole lot more than a badge.
It feels like joining an invite-only club where the currency is knowledge, the meetings happen in Slack channels, and the secret handshake is swapping tuning tips over coffee at a conference.

One of my favorite perks is the Beta Program eligibility. This is where you get to play with Oracle’s new features before the rest of the world even knows they exist. Imagine Oracle telling you, “Here is a new 23ai feature, go ahead and test it, break it, tell us what you think.” You run your tests, push it to the limit, maybe even crash it completely in your lab and then send feedback straight to the people building it. It is like being a database taste tester. Sometimes it is exciting, sometimes it is messy, and sometimes you discover something so odd you are half-expecting them to name the bug after you.

Then there are the five thousand dollars in Oracle Cloud credits. This is one of the most underrated perks. You basically get your own sandbox to build whatever you want. It could be Autonomous Databases, test clusters, proof-of-concepts, machine learning experiments, or even a fully populated database of every coffee type known to humankind just to test JSON search performance. You can go wild without worrying about the cost or your manager asking why the cloud bill suddenly tripled.

And of course, Oracle CloudWorld. As an ACE, you can get passes to attend. This is where it feels like Disneyland for Oracle professionals. You get hands-on labs, deep dives into technology you have never even heard of, and those hallway conversations that somehow end up teaching you more than any formal session. You meet engineers who wrote the features you use every day, swap ideas with other ACEs, and sometimes find yourself in late-night debates about why people still write SELECT * in production code.

Another important benefit is the direct connection to Oracle Product Management. It does not mean they will fix your SR in two minutes, but it does mean you can talk directly to the people who design and own the products. Got a question about hybrid partitioned tables or JSON indexing or Vector Search/Indexing or …. ? You can skip the long escalation chain and ask the folks who built it. It is like having a hotline for database issues, except your contact is probably in a polo shirt and speaks PL/SQL.

There is also the social media boost. If you are blogging, making YouTube tutorials, or speaking at events, Oracle helps amplify your work. They will share your content, feature it in newsletters, and suddenly your small weekend blog post is getting attention from DBAs all over the world. You wake up to dozens of comments and connection requests from people who just read your take on optimizer statistics.

Now, I would be lying if I did not mention the swag. Hoodies, jackets, mugs, backpacks. And not the cheap conference giveaway kind. This is the sort of gear that makes colleagues raise an eyebrow and ask, “Where did you get that?” The ACE hoodie is an instant conversation starter at events. I have met people simply because they spotted it from across the room and came over to talk.

If you speak at conferences, there is also travel support. This can be a game changer, especially for events far from home. If you have ever wanted to present to a large audience but travel costs made you think twice, this can tip the balance and make it possible.

The biggest thing for me though is the community. You are surrounded by people who love this stuff as much as you do. People who understand why you are excited about a new optimizer hint. People who send you screenshots of strange execution plans just for fun. People who will happily spend part of their evening helping you troubleshoot a tricky replication setup because they have been there before. It is not just networking. It is friendship, mentorship, and constant learning all rolled into one.

Being an Oracle ACE comes with many perks such as early access to features, cloud credits, event passes, direct connections to product management, social media amplification, great swag, and travel support. The real magic is in the access, the learning, and the people you meet along the way.

If you enjoy sharing your knowledge, writing, presenting, or helping other Oracle users, keep doing it. One day that email saying “Congratulations, you are an Oracle ACE” might land in your inbox. That is when the real adventure starts.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Oracle AWR Scripts Decoded .. No More Guessing!

Posted by FatDBA on July 29, 2025

Recently, someone asked me why there are so many AWR-like files in the directory and whether they are as useful as the well-known awrrpt.sql. I took the opportunity to explain what I knew about them and their purpose. Since I thought it could be helpful, I decided to share this insight with my readers as well.

If you’re into performance tuning in Oracle, very likey you’ve already used AWR reports. But then you open this directory: $ORACLE_HOME/rdbms/admin …. …and boom – you’re hit with a list of cryptic scripts: awrrpt.sql, awrgdrpi.sql, awrsqrpt.sql, awrextr.sql

What do they all do?
When should you use which one?
Why are they named like 90s DOS files 🙂 ?

Let’s keep it short and sharp. Here’s your point-to-point breakdown of the most important AWR scripts.

Before you go running any of these scripts – make sure you have the Oracle Diagnostic Pack license.
AWR stuff is not free.

I grouped them into logical chunks – reports, comparisons, SQLs, data movement, etc.

Performance Reports

These are the most common AWR reports you run to analyze performance between 2 snapshots.

Script         What it does
-------------  ------------------------------------------------------------------
awrrpt.sql     Generates an AWR report for the current instance (single instance)
awrrpti.sql    Same as above, but lets you select another DBID or instance (RAC)
awrgrpt.sql    AWR Global Report: a full RAC-wide view
awrgrpti.sql   Same as above, but lets you pick another DBID/instance

Example:
You’re troubleshooting high CPU on node 2 of your RAC? Use awrrpti.sql.

Comparison Reports

These help you compare two different time ranges – maybe before and after a patch, or different load periods.

Script         What it does
-------------  --------------------------------------------------------------
awrddrpt.sql   Compares two AWR periods (diff report) for a single instance
awrddrpi.sql   Same as above, for another DBID/instance
awrgdrpt.sql   Global RAC diff report (current RAC)
awrgdrpi.sql   Global RAC diff report (another DBID/instance)

Use these when you wanna say, “Hey, this new code made the DB slower… prove it!”

SQL Reports

Want to see what a particular SQL is doing? These are your tools.

Script         What it does
-------------  ----------------------------------------------------
awrsqrpt.sql   SQL report for a specific SQL_ID (current instance)
awrsqrpi.sql   Same thing, but lets you pick another DBID/instance

You’ll be surprised how useful this is when hunting bad queries.

Export / Import Scripts

Sometimes, you need to take AWR data from one system and analyze it somewhere else (like test or dev).

Script        What it does
------------  -------------------------------
awrextr.sql   Export AWR data using Data Pump
awrload.sql   Import AWR data using Data Pump

This is actually gold when working on performance issues across environments.

Helper / Utility Scripts

These are mostly helper scripts to define input or make reports more automated or interactive.

Script           What it does
---------------  ---------------------------------------------------
perfhubrpt.sql   Generates an interactive Performance Hub report
awrinpnm.sql     Input name helper for AWR
awrinput.sql     Gathers inputs before running AWR reports
awrddinp.sql     Input helper for diff reports
awrgdinp.sql     Input helper for RAC diff reports

What’s with these weird script names?

Yeah, all these awrsqrpi.sql, awrgdrpt.sql, etc. – they look like random garbage at first.
But there’s actually some logic.

Here’s how to decode them:

Abbreviation   Means
-------------  ---------------------------------------------
awr            Automatic Workload Repository
rpt / rp       Report
i              Lets you select a specific instance or DBID
g              Global report for RAC
d / dd         Diff report (comparing two snapshot periods)
sq             SQL
inp            Input helper

So awrsqrpi.sql = AWR SQL Report for a different instance
And awrgdrpi.sql = AWR Global Diff Report for another DBID/instance

So Which Script Should I Use?

Here’s a quick cheat sheet:

Task                                  Script
------------------------------------  -----------------------------
Normal AWR report (single instance)   awrrpt.sql
AWR report for RAC (global view)      awrgrpt.sql
SQL performance report                awrsqrpt.sql
Compare two AWR periods               awrddrpt.sql
Export/import AWR data                awrextr.sql and awrload.sql

If you’re doing anything with RAC – prefer the ones with g in them.
If you’re automating – use the *inp*.sql files.

Final Thoughts

Yes, the names are ugly.
Yes, the syntax is old-school.
But honestly? These AWR scripts are still some of the best tools you have for DB performance analysis.

Just remember:

  • Don’t use them without a valid license
  • Learn the naming pattern once – and it gets way easier
  • Practice running different ones on test systems

And next time someone complains, “The database is slow” … you know exactly which script to run.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


fatdba explores Vector Search in Oracle 23ai

Posted by FatDBA on July 23, 2025

So Oracle rolled out 23ai a while back and, like every major release, it came packed with some really cool and interesting features. One that definitely caught my eye was Vector Search. I couldn't resist diving in, and having recently explored it in depth, I'd like to share a few thoughts on the subject.

You see, we’ve been doing LIKE '%tax policy%' since forever. But now, Oracle’s SQL has become more powerful. Not only does it match words … it matches meaning.

So here’s me trying to explain what vector search is, how Oracle does it, why you’d care, and some examples that’ll hopefully make it click.

What’s Vector Search, Anyway?

Alright, imagine this:

You have a table of products. You search for “lightweight laptop for travel”.
Some entries say “ultrabook”, others say “portable notebook”, and none mention “lightweight” or “travel”. Old-school SQL would’ve said: “No Matches Found”

But with vector search, it gets it. Oracle turns all that text into math .. basically, a long list of numbers called a vector … and compares meanings instead of words.

So What’s a Vector?

When we say “vector” in vector search, we’re not talking about geometry class. In the world of AI and databases, a vector is just a long list of numbers … each number representing some aspect or feature of the original input (like a sentence, product description, image, etc.).

Here’s a basic example:
[0.12, -0.45, 0.88, …, 0.03]
This is a vector: maybe a 512- or 1536-dimension one, depending on the embedding model used (OpenAI, an ONNX model loaded into Oracle, Cohere, etc.).

Each number in this list is abstract, but together they represent the essence or meaning of your data.

Let’s say you have these two phrases:
“Apple is a tech company”
“iPhone maker based in California”

Now, even though they don’t share many words, they mean nearly the same thing. When passed through an embedding model, both phrases are converted into vectors:

Vector A: [0.21, -0.32, 0.76, …, 0.02]
Vector B: [0.22, -0.30, 0.74, …, 0.01]

They look very close … and that’s exactly the point.

What Oracle 23ai Gives You

  • A new VECTOR datatype (yeah!)
  • VECTOR_EMBEDDING() function to convert text into vectors (using an embedding model loaded into the database)
  • VECTOR_INDEX to make search blazing fast
  • VECTOR_DISTANCE() to measure similarity
  • It’s all native in SQL ..no need for another vector DB bolted on

Let’s Build Something Step-by-Step

We’ll build a simple product table and do a vector search on it.

Step 1: Create the table

CREATE TABLE products (
  product_id     NUMBER PRIMARY KEY,
  product_name   VARCHAR2(100),
  description    VARCHAR2(1000),
  embedding      VECTOR(1536)
);

1536? Yeah, that’s the number of dimensions from Oracle’s built-in embedding model. Depends on which one you use.

Step 2: Generate vector embeddings

UPDATE products
SET embedding = VECTOR_EMBEDDING(doc_model USING description AS data);

This’ll take the description, pass it through the embedding model (here doc_model, an ONNX model you’d first load into the database with DBMS_VECTOR.LOAD_ONNX_MODEL), and give you a vector. Magic.

Step 3: Create the vector index

CREATE VECTOR INDEX product_vec_idx
ON products (embedding)
ORGANIZATION NEIGHBOR PARTITIONS
DISTANCE COSINE;

This speeds up the similarity comparisons … much like an index does for normal WHERE clauses.

Step 4: Semantic Search in SQL

SELECT product_id, product_name,
       VECTOR_DISTANCE(embedding,
                       VECTOR_EMBEDDING(doc_model USING 'light laptop for designers' AS data),
                       COSINE) AS score
FROM products
ORDER BY score
FETCH FIRST 5 ROWS ONLY;

Now we’re searching for meaning, not words.
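Under the hood that query is just a nearest-neighbour ranking: embed the search phrase, measure its distance to every stored vector, keep the closest rows. Here is a rough Python sketch of that logic, with tiny made-up 3-dimensional vectors standing in for real embeddings:

```python
import math

def cosine_distance(v1, v2):
    """1 - cosine similarity: 0 means same direction, larger means less similar."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norms = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return 1.0 - dot / norms

# Toy stand-in for the products table: (id, name, embedding).
products = [
    (1, "gaming laptop",         [0.90, 0.10, 0.30]),
    (2, "lightweight ultrabook", [0.20, 0.90, 0.10]),
    (3, "desktop workstation",   [0.80, 0.00, 0.60]),
]

def top_k(query_vec, rows, k=2):
    """Equivalent of ORDER BY VECTOR_DISTANCE(...) FETCH FIRST k ROWS ONLY."""
    scored = [(cosine_distance(query_vec, emb), pid, name) for pid, name, emb in rows]
    return [(pid, name) for _, pid, name in sorted(scored)[:k]]

# Pretend this is the embedding of "light laptop for designers".
query = [0.25, 0.85, 0.15]
print(top_k(query, products))
```

The ultrabook ranks first because its vector points in nearly the same direction as the query vector, which mirrors "light" matching "lightweight" semantically even though the strings share no keyword.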

VECTOR_DISTANCE Breakdown

You can use different math behind the scenes:

VECTOR_DISTANCE(v1, v2, COSINE)
VECTOR_DISTANCE(v1, v2, EUCLIDEAN)
VECTOR_DISTANCE(v1, v2, DOT)

Cosine is the usual go-to for text. Oracle handles the rest for you.
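For intuition on how the metrics differ, here is a small Python comparison (plain arithmetic, not Oracle code) of cosine distance, Euclidean distance, and dot product for one pair of vectors:

```python
import math

v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]  # same direction as v1, but twice as long

dot = sum(a * b for a, b in zip(v1, v2))
n1 = math.sqrt(sum(a * a for a in v1))
n2 = math.sqrt(sum(b * b for b in v2))

cosine_dist = 1.0 - dot / (n1 * n2)                                    # direction only
euclidean_dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))  # straight-line gap
dot_product = dot                                                      # grows with magnitude

print(cosine_dist, euclidean_dist, dot_product)
```

Because v2 points in exactly the same direction as v1, the cosine distance comes out (near) zero even though the Euclidean distance does not. That is why cosine is the default for text: two documents about the same topic should match regardless of how "long" their embeddings happen to be.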

Use Cases You’ll Actually Care About

1. Semantic Product Search — “Fast shoes for runners” => shows “Nike Vaporfly”, even if it doesn’t say “fast”.

2. Similar Document Retrieval — Find all NDAs that look like this one (even with totally different words).

3. Customer Ticket Suggestion — Auto-suggest resolutions from past tickets. Saves your support team hours.

4. Content Recommendation — “People who read this also read…” kind of stuff. Easy to build now.

5. Risk or Fraud Pattern Matching — Find transactions that feel like fraud ..even if the details don’t match 1:1.

I know it might sound a little confusing .. let’s do one more example: Legal Document Matching

CREATE TABLE legal_docs (
  doc_id       NUMBER PRIMARY KEY,
  title        VARCHAR2(255),
  content      CLOB,
  content_vec  VECTOR(1536)
);

Update vectors:

UPDATE legal_docs
SET content_vec = VECTOR_EMBEDDING(doc_model USING content AS data);

Now find similar docs:

SELECT doc_id, title
FROM legal_docs
ORDER BY VECTOR_DISTANCE(content_vec,
                         VECTOR_EMBEDDING(doc_model USING 'confidentiality in government contracts' AS data),
                         COSINE)
FETCH FIRST 10 ROWS ONLY;

That’s it. You’re officially building an AI-powered legal search engine.

Things to Know

  • Creating vectors can be heavy .. batch it.
  • Indexing speeds up similarity search a lot.
  • Combine with normal filters for best results:
SELECT * FROM products
WHERE category = 'laptop'   -- assuming the table also has a category column
ORDER BY VECTOR_DISTANCE(embedding,
                         VECTOR_EMBEDDING(doc_model USING 'gaming laptop under 1kg' AS data),
                         COSINE)
FETCH FIRST 5 ROWS ONLY;

Final Thoughts from fatdba

I’m honestly impressed. Oracle took something that felt like ML black magic and put it right in SQL. No external service. No complicated setups. Just regular SQL, but smarter.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/

Posted in Uncategorized | Tagged: , , , , , , , , , , , , , , | Leave a Comment »

All new Oracle 23ai new views for enhanced Data Pump troubleshooting

Posted by FatDBA on April 2, 2025

Oracle 23ai (also known as Oracle 23c with AI features) introduces several powerful enhancements aimed at improving observability, diagnostics, and performance debugging. Among these enhancements are three new dynamic performance views designed specifically to help DBAs and developers troubleshoot Data Pump performance issues more efficiently and in real-time.

When dealing with large data exports or imports using Oracle Data Pump, performance bottlenecks or unexplained hangs can be frustrating and time-consuming to investigate. Until now, DBAs had to rely heavily on Data Pump log files, trace files, and session-level v$ views to diagnose problems. With the introduction of the following views, Oracle has taken a major step toward simplifying that process:

The three new views are: GV$DATAPUMP_PROCESS_INFO, GV$DATAPUMP_PROCESSWAIT_INFO and GV$DATAPUMP_SESSIONWAIT_INFO

These views provide real-time information about the state of Data Pump processes, their wait events, and any session-level contentions. GV$ views return cluster-wide information in a RAC environment, while V$ views return information specific to the current instance.

  • GV$DATAPUMP_PROCESS_INFO – This view shows the current Data Pump processes, including both master and worker processes. It provides basic information like the program name, session ID, username, job name, status, and system process ID.
SELECT * FROM V$DATAPUMP_PROCESS_INFO;

CUR_DATE             PROGRAM               SESSIONID  STATUS  USERNAME  JOBNAME             SPID
-------------------  --------------------  ---------  ------  --------  ------------------  -------
2023-01-09 13:56:07  ude@orcl (TNS V1-V3)  42         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891480
2023-01-09 13:56:07  oracle@orcl (DW00)    48         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891500

  • GV$DATAPUMP_PROCESSWAIT_INFO – This view helps detect contention between Data Pump processes. It shows which sessions are waiting, what events they are waiting for, and which other sessions may be blocking them.
SELECT * FROM GV$DATAPUMP_PROCESSWAIT_INFO;

WAITING_SESSION  HOLDING_SESSION  EVENT                 PROGRAM_WAITSESSION  PROGRAM_HOLDINGSESSION
---------------  ---------------  --------------------  -------------------  ----------------------
174              57               enq: TM - contention  oracle@orcl (DM00)   oracle@orcl (DW00)

  • GV$DATAPUMP_SESSIONWAIT_INFO – Provides deep insights into session-level waits during Data Pump operations, including how long the session has been in the wait state and what it’s waiting on.
SELECT * FROM GV$DATAPUMP_SESSIONWAIT_INFO;

WAITING_SESSION  EVENT                 DP_SECONDS_IN_WAIT  DP_STATE_IN_WAIT  DP_P1TEXT  DP_P1
---------------  --------------------  ------------------  ----------------  ---------  -----
46               enq: TM - contention  8086                WAITING           name       mode
    

Before Oracle 23ai, debugging Data Pump jobs required checking logs, trace files, and manual session analysis. These new views provide real-time visibility into what each Data Pump process is doing, what it is waiting on, and where it might be blocked.

Use Cases:

1. If a Data Pump job appears hung, GV$DATAPUMP_PROCESSWAIT_INFO can help identify which process is waiting and what it is waiting on.
2. If sessions are slow or idle, GV$DATAPUMP_SESSIONWAIT_INFO provides detailed timing and wait reasons.
3. If parallel execution is used, GV$DATAPUMP_PROCESS_INFO shows how many worker processes are active and whether they are all functioning as expected.

The new Data Pump views in Oracle 23ai are a significant step forward for real-time performance diagnostics. These views provide valuable insights that were previously hard to obtain, allowing DBAs to troubleshoot and optimize Data Pump operations with much more confidence and clarity.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

Leveraging SQLT to transfer execution plans between SQL IDs using coe_load_sql_profile.sql

Posted by FatDBA on March 11, 2025

Hi All,

Have you used coe_load_sql_profile.sql before? A lot of people use coe_xfr_sql_profile.sql from SQLT, and while both scripts deal with SQL Profiles in Oracle, their purposes and use cases differ. coe_xfr_sql_profile.sql is used to export and migrate an existing SQL Profile from one system to another, ensuring performance stability across environments. coe_load_sql_profile.sql is used to create a new SQL Profile by capturing the execution plan from a modified SQL query and applying it to the original query, forcing it to use the optimized plan.

Let me first say a little more about the toolkit. Oracle SQLT (SQLTXPLAIN) is a powerful tool designed to help DBAs analyze and troubleshoot SQL performance issues, and all of the above-mentioned scripts are part of the kit provided by Oracle and written by none other than Carlos Sierra.

A common question DBAs encounter is: Can we plug the execution plan of one SQL ID into another SQL ID? …. The answer is YES! This can be accomplished using the SQLT script coe_load_sql_profile.sql. In this blog, we will explore how to use this script to achieve plan stability by enforcing a preferred execution plan across different SQL IDs. The script examines both memory and AWR for the text of the SQL IDs you pass, then queries GV$SQL_PLAN and DBA_HIST_SQL_PLAN to extract the plan hash value of the modified SQL. Once it has collected that information, it loops through the modified SQL’s execution plan to extract its optimizer hints. Finally, it creates a SQL Profile using DBMS_SQLTUNE.IMPORT_SQL_PROFILE.

Let’s give a quick demo … assume we have two SQL statements:

SQL ID 1: 78a1nbdabcba (Original SQL) …. SQL ID 2: 9na182nn2bnn (Modified SQL)
Both queries are logically similar but produce different execution plans.
Our goal is to take the execution plan from SQL ID 1 and apply it to SQL ID 2.

    connect system/monkey123
    SQL> @coe_load_sql_profile.sql 
    or 
    SQL> START coe_load_sql_profile.sql <ORIGINAL_SQL_ID> <MODIFIED_SQL_ID>
    
    
    Parameter 1:
    ORIGINAL_SQL_ID (required)
    
    Enter value for 1: 78a1nbdabcba
    
    Parameter 2:
    MODIFIED_SQL_ID (required)
    
    Enter value for 2: 9na182nn2bnn
    
    
         PLAN_HASH_VALUE          AVG_ET_SECS
    -------------------- --------------------
              1181381381                 .003
    
    Parameter 3:
    PLAN_HASH_VALUE (required)
    
    Enter value for 3: 1181381381
    
    Values passed to coe_load_sql_profile:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ORIGINAL_SQL_ID: "78a1nbdabcba"
    MODIFIED_SQL_ID: "9na182nn2bnn"
    PLAN_HASH_VALUE: "1181381381"
    
    .
    .
    .
    
    ORIGINAL:78a1nbdabcba MODIFIED:9na182nn2bnn PHV:1181381381 SIGNATURE:16731003137917309319 CREATED BY COE_LOAD_SQL_PROFILE.SQL
    SQL>SET ECHO OFF;
    
    ****************************************************************************
    * Enter password to export staging table STGTAB_SQLPROF_78a1nbdabcba
    ****************************************************************************
    
    Export: Release 19.0.0- Production on Sun Mar 08 14:45:47 2012
    
    Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.
    
    Password:
    .
    .
    .
    
    coe_load_sql_profile completed.
    
    
    
    Run original query
    SQL> select ename from DIXIT where ename='Name';
    
    Plan hash value: 1181381381
    
     ----------------------------------------------------------------
     | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)|
     ----------------------------------------------------------------
     |   0 | SELECT STATEMENT  |        |     1 |     6 |     3   (0)|
     |*  1 |  TABLE ACCESS FULL| DIXIT  |     1 |     6 |     3   (0)|
     ----------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("ENAME"='Name')
    
    Note
    -----
       - SQL profile "78a1nbdabcba_1181381381" used for this statement
       
    


What are your experiences with enforcing execution plans in Oracle?
Let me know in the comments!

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , | 4 Comments »

Oracle ACE Program: My Personal Journey …

Posted by FatDBA on June 23, 2024

Hi All,

This year has been incredibly successful in terms of professional achievements and personal growth. One of the major milestones of my career was being recognized as an Oracle Ace Pro for 2023-2024. This honor was awarded for my contributions to the database community through my blogs, YouTube channels, code repositories, social media posts, Oracle forums, and in-person presentations. Receiving this recognition has not only validated my efforts but also inspired me to further deepen my engagement and share more knowledge with the community.

I remember how it all happened – I have been blogging since 2010 when I started my blog, fatdba.com, originally known as oracleant.com. Initially, the blog served as a personal archive where I documented the issues I encountered daily and the innovative solutions I devised. It was a convenient reference for recurring problems. However, as more people began to use my blog and post questions, I started taking it more seriously. Blogging transitioned from a simple tool for self-help into my passion and hobby.

Over time, I began posting about some of the trickiest performance and stability issues, which were greatly appreciated by the community. This positive feedback motivated me to expand my presence to other platforms. I launched YouTube channels where I share videos focused mainly on database performance and tuning. As code repositories gained popularity, I created a GitHub repository to share all my Oracle codes and scripts that I use for performance and administrative tasks. This too was well-received by the community, further fueling my dedication to sharing knowledge and helping others in the field.

I always remember when one of my professional friends from India was awarded and posted that shiny crystal trophy on social media. At that time, I didn’t have much idea about the selection criteria, how to get involved, or how to get referred. I wondered if I was even eligible for such an honor. Intrigued and inspired, I started researching the program and reaching out to community members. Through consistent effort, learning, and sharing my knowledge, I gradually became more involved in the Oracle community. I still vividly remember joining this esteemed group of like-minded individuals in 2022 (for the period 2021-2022) as an Ace Associate, thanks to a referral from a friend, Nassyam, who is an Oracle Ace Director.

Something more about the program and other areas ..

The Oracle ACE Program is a prestigious recognition initiative set up by Oracle Corporation to acknowledge and reward members of the Oracle community who are experts and enthusiasts of Oracle technologies. The program aims to foster a network of highly skilled professionals who actively share their knowledge and expertise through various channels. Here are the key details about the Oracle ACE Program:

There are different levels of recognition:

• Oracle ACE Associate: The entry-level recognition for individuals who demonstrate strong knowledge and contributions to the Oracle community.
• Oracle ACE Pro: Mid-level recognition for professionals who have consistently contributed through writing, presenting, and sharing their expertise. This advanced recognition is for those who have demonstrated a significant level of expertise and contribution over time, often including speaking at major conferences and publishing detailed technical content.
• Oracle ACE Director: The highest level of recognition, awarded to individuals who not only contribute extensively to the community but also influence the direction of Oracle technologies and engage directly with Oracle’s product management and development teams.

The selection criteria are simple, and some examples are:

• Community Contributions: Active involvement in the Oracle community through blogging, social media, forums, technical articles, webinars, and public speaking.
• Technical Expertise: Demonstrated deep technical knowledge and skills in Oracle products and technologies.
• Leadership and Mentorship: Providing guidance and mentorship to other community members and helping to grow the community.

Benefits of the Program:

• Recognition: Acknowledgment of one’s expertise and contributions, enhancing professional credibility and reputation. You will get your profile listed on Oracle’s Ace portal.
• Networking Opportunities: Access to a global network of Oracle experts, enabling collaboration and knowledge exchange.
• Exclusive Opportunities: Invitations to Oracle events, webinars, and meetings with Oracle product teams, as well as early access to Oracle products and updates.
• Professional Growth: Opportunities to further enhance skills and knowledge through interaction with other experts and participation in advanced technical discussions and events. Free OCI credits, Certification credits etc.

The main motive of the program is to create a vibrant community of advocates who help promote the use and understanding of Oracle technologies worldwide.

If you have any questions about the Oracle ACE Program, feel free to reach out. I will do my best to explain the selection criteria and address any other queries related to the topic. You can email me at prashant@fatdba.com or send me a message on LinkedIn.

Oracle Ace Page : https://ace.oracle.com/pls/apex/ace_program/r/oracle-aces/home
Quick short video : https://www.youtube.com/watch?v=z57AD7en-BY&t=1s

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , | Leave a Comment »