Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Early Access, Cool Swag, and the Best Community Ever – The Oracle ACE Story

Posted by FatDBA on August 11, 2025

I was inducted into the Oracle ACE Program back in 2021, and honestly, it has been one of the most rewarding chapters of my career. Over the years, I’ve had the chance to connect directly with Oracle teams, work with passionate community members, and be part of projects that pushed me to learn, share, and grow.

Last year in Las Vegas, at Oracle CloudWorld 2024, I met so many of these like-minded Oracle professionals in person. It was incredible. The hallway conversations, the deep-dive technical sessions, the after-hours chats over coffee — those are moments you simply can’t replicate online. The energy of being surrounded by people who live and breathe Oracle technology was something else entirely.

Sadly, I’ll be missing Oracle CloudWorld this year because of other travel commitments 😦 It’s a bit of a heartbreaker, but it also makes me appreciate just how special these opportunities are when you do get them.

That said, my journey as an Oracle ACE has been about far more than conferences. From early access to technology, to direct collaboration with Oracle’s product teams, to the friendships formed with other ACEs — the benefits go well beyond the badge on my LinkedIn profile. I also have to give a huge shout-out to the Oracle ACE Program team – Jen Nicholson, Oana Bonu, and Cassandra Call – for the constant encouragement, guidance, and for keeping the ACE community so well-connected and engaged. Their support plays a massive role in making the program such a great experience.


So, you have probably seen someone proudly showing off that little red Oracle ACE badge on their LinkedIn profile.
It looks cool, but what does it actually mean? Is it just a bragging sticker or is there something real behind it?

It turns out it is a whole lot more than a badge.
It feels like joining an invite-only club where the currency is knowledge, the meetings happen in Slack channels, and the secret handshake is swapping tuning tips over coffee at a conference.

One of my favorite perks is the Beta Program eligibility. This is where you get to play with Oracle’s new features before the rest of the world even knows they exist. Imagine Oracle telling you, “Here is a new 23ai feature, go ahead and test it, break it, tell us what you think.” You run your tests, push it to the limit, maybe even crash it completely in your lab and then send feedback straight to the people building it. It is like being a database taste tester. Sometimes it is exciting, sometimes it is messy, and sometimes you discover something so odd you are half-expecting them to name the bug after you.

Then there are the five thousand dollars in Oracle Cloud credits. This is one of the most underrated perks. You basically get your own sandbox to build whatever you want. It could be Autonomous Databases, test clusters, proof-of-concepts, machine learning experiments, or even a fully populated database of every coffee type known to humankind just to test JSON search performance. You can go wild without worrying about the cost or your manager asking why the cloud bill suddenly tripled.

And of course, Oracle CloudWorld. As an ACE, you can get passes to attend. This is where it feels like Disneyland for Oracle professionals. You get hands-on labs, deep dives into technology you have never even heard of, and those hallway conversations that somehow end up teaching you more than any formal session. You meet engineers who wrote the features you use every day, swap ideas with other ACEs, and sometimes find yourself in late-night debates about why people still write SELECT * in production code.

Another important benefit is the direct connection to Oracle Product Management. It does not mean they will fix your SR in two minutes, but it does mean you can talk directly to the people who design and own the products. Got a question about hybrid partitioned tables or JSON indexing or Vector Search/Indexing or …. ? You can skip the long escalation chain and ask the folks who built it. It is like having a hotline for database issues, except your contact is probably in a polo shirt and speaks PL/SQL.

There is also the social media boost. If you are blogging, making YouTube tutorials, or speaking at events, Oracle helps amplify your work. They will share your content, feature it in newsletters, and suddenly your small weekend blog post is getting attention from DBAs all over the world. You wake up to dozens of comments and connection requests from people who just read your take on optimizer statistics.

Now, I would be lying if I did not mention the swag. Hoodies, jackets, mugs, backpacks. And not the cheap conference giveaway kind. This is the sort of gear that makes colleagues raise an eyebrow and ask, “Where did you get that?” The ACE hoodie is an instant conversation starter at events. I have met people simply because they spotted it from across the room and came over to talk.

If you speak at conferences, there is also travel support. This can be a game changer, especially for events far from home. If you have ever wanted to present to a large audience but travel costs made you think twice, this can tip the balance and make it possible.

The biggest thing for me though is the community. You are surrounded by people who love this stuff as much as you do. People who understand why you are excited about a new optimizer hint. People who send you screenshots of strange execution plans just for fun. People who will happily spend part of their evening helping you troubleshoot a tricky replication setup because they have been there before. It is not just networking. It is friendship, mentorship, and constant learning all rolled into one.

Being an Oracle ACE comes with many perks such as early access to features, cloud credits, event passes, direct connections to product management, social media amplification, great swag, and travel support. The real magic is in the access, the learning, and the people you meet along the way.

If you enjoy sharing your knowledge, writing, presenting, or helping other Oracle users, keep doing it. One day that email saying “Congratulations, you are an Oracle ACE” might land in your inbox. That is when the real adventure starts.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


MySQL OPTIMIZE TABLE – Disk Space Reclaim Defragmentation and Common Myths

Posted by FatDBA on August 8, 2025

When working with MySQL databases, one common task is reclaiming disk space and defragmenting tables. The typical solution that most of us have turned to is OPTIMIZE TABLE. While this sounds like a simple, quick fix, there are a few myths and things we often overlook that can lead to confusion. Let’s break it down.

The Basics: OPTIMIZE TABLE and ALTER TABLE

To reclaim space or defragment a table in MySQL, the go-to commands are usually:

  • OPTIMIZE TABLE <table_name>; for a single table, or OPTIMIZE TABLE <table_1>, <table_2>; for several at once
  • From the shell: mysqlcheck -o <schema> <table> -u <username> -p (the -p flag prompts for the password)

But before we dive into the myths, let’s clarify what happens when you run these commands.

OPTIMIZE TABLE Overview

  • OPTIMIZE TABLE is essentially a shorthand for ALTER TABLE <table_name> ENGINE=InnoDB for InnoDB tables. It works by rebuilding the table, compacting data, and reclaiming unused space.
  • In MySQL 5.6.17 and later, the command works online, meaning it allows concurrent reads and writes during the rebuild, with some exceptions (brief locking during initial and final stages). Prior to 5.6.17, the table was locked for the entire duration of the operation, causing application downtime.

Myth #1: OPTIMIZE TABLE Is Always Quick

  • No: OPTIMIZE TABLE can indeed take a long time on large tables, especially ones with heavy insert, update, or delete churn, because the whole table is rebuilt. For larger datasets, the I/O load can be significant.
mysql> OPTIMIZE TABLE my_large_table;
+---------------------+----------+----------+----------+
| Table               | Op       | Msg_type | Msg_text |
+---------------------+----------+----------+----------+
| mydb.my_large_table | optimize | status   | OK       |
+---------------------+----------+----------+----------+

In the output, the values under each column heading would show:

  • Table: The table that was optimized (e.g., yourdb.customers).
  • Op: The operation performed (optimize).
  • Msg_type: Type of message, usually status.
  • Msg_text: The result of the operation, such as OK or a specific message (e.g., “Table is already up to date”).

If the table is already optimized or doesn’t require optimization, the output might look like this:

+------------------+----------+----------+-----------------------------+
| Table            | Op       | Msg_type | Msg_text                    |
+------------------+----------+----------+-----------------------------+
| yourdb.customers | optimize | note     | Table is already up to date |
+------------------+----------+----------+-----------------------------+

The possible Msg_type values are status, error, info, note, and warning.

Real-Time Example Validation:

  • The MySQL error log can show something like this:

[Note] InnoDB: Starting online optimize table my_large_table
[Note] InnoDB: Table optimized successfully

However, for larger tables, it is critical to consider the additional I/O load during the rebuild. For example:
[Note] InnoDB: Rebuilding index my_large_table_idx
[Note] InnoDB: Table rebuild completed in 300 seconds

Note: To get more detailed information, it’s good to check the PROCESSLIST or the slow query log (if enabled).
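
If you want to watch a rebuild in flight, a couple of quick checks help:

-- The State column shows phases like "copy to tmp table" or "altering table"
SHOW FULL PROCESSLIST;

-- Confirm whether the slow query log is enabled and where it writes
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';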

Myth #2: OPTIMIZE TABLE Doesn’t Block Other Operations

  • Yes/No: This myth is partly true and partly false depending on the MySQL version.
  • For MySQL 5.6.16 and earlier: the table is locked against writes for the entire operation, while concurrent reads are still allowed.
  • For MySQL 5.6.17 and later: Concurrent reads and writes are allowed during the rebuild process, but the table still needs to be briefly locked during the initial and final phases. There is a brief lock required to start the process, which is often overlooked.

Real-Time Example for MySQL 5.6.17+:

[Note] InnoDB: Starting online optimize table my_large_table
[Note] InnoDB: Table optimized successfully

Although reads and writes are allowed during this process, you might still experience short bursts of lock at the start and end of the operation.

Myth #3: You Don’t Need to Worry About Disk Space

  • No: You need sufficient disk space before running OPTIMIZE TABLE. If you’re running low on space, you could encounter errors or performance issues during the rebuild process.
  • There are also a few known bugs that can bite when disk space is insufficient. Additionally, temporary disk space is required during the rebuild, and running OPTIMIZE TABLE with too little of it can fail silently, leading to issues down the line.

Best Practice:
Ensure that your disk has at least as much free space as the table you’re optimizing, as a copy of the table is created temporarily during the rebuild.
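
A quick pre-check along those lines; the schema and table names below are just the ones from the earlier example:

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS table_mb,
       ROUND(data_free / 1024 / 1024)                     AS reclaimable_mb
FROM   information_schema.tables
WHERE  table_schema = 'mydb'
  AND  table_name   = 'my_large_table';

Compare table_mb against the free space on the volume (or in the shared tablespace) before kicking off the rebuild.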

Myth #4: ALTER TABLE with Row Format Is Always the Solution

  • No: ALTER TABLE ... ROW_FORMAT=COMPRESSED or other formats can help optimize space, but it may not always result in savings, especially for certain data types (like BLOBs or large text fields). It can also introduce overhead on the CPU if you’re using compression.

In some cases, switching to a compressed format can actually increase the size of the table, depending on the type of data stored.

Real-Time Example:

  • For a table like customer_data:

ALTER TABLE customer_data ROW_FORMAT=COMPRESSED;

Depending on the types of columns and data (e.g., BLOBs or TEXT), compression might not always yield the expected results.

Myth #5: You Only Need to Optimize Tables When They Get Slow

  • No: This is another common misconception. Regular optimization is crucial to ensure long-term performance, especially for heavily modified tables. Tables that undergo a lot of updates or deletions can become fragmented over time, even without obvious performance degradation.

Optimizing periodically can help prevent gradual performance loss.

Real-Time Example:

  • If you have an orders table:

mysql> OPTIMIZE TABLE orders;

Over time, especially with frequent UPDATE or DELETE operations, fragmentation can slow down access, even if it’s not immediately noticeable.
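
To decide which tables are actually worth a periodic rebuild, a rough scan of reclaimable space works well; the 100 MB threshold here is just an arbitrary starting point:

SELECT table_schema, table_name,
       ROUND(data_free / 1024 / 1024) AS reclaimable_mb
FROM   information_schema.tables
WHERE  engine = 'InnoDB'
  AND  data_free > 100 * 1024 * 1024
ORDER  BY data_free DESC;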

Main Pointers ..

  • OPTIMIZE TABLE is a helpful tool but not a one-size-fits-all solution.
  • It requires sufficient disk space and careful consideration of your MySQL version and storage engine (InnoDB vs. MyISAM).
  • In MySQL 5.6.17 and later, online optimizations are possible, but brief locking still occurs during the process.
  • For MyISAM tables, there’s no escaping the full lock during optimization.
  • Always assess the potential overhead (I/O and CPU usage) before running the operation, especially on larger datasets.

By breaking these myths, you can make better decisions when using OPTIMIZE TABLE to keep your database healthy without causing unnecessary downtime or performance hits.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Oracle AWR Scripts Decoded .. No More Guessing!

Posted by FatDBA on July 29, 2025

Recently, someone asked me why there are so many AWR-like files in the directory and whether they are as useful as the well-known awrrpt.sql. I took the opportunity to explain what I knew about them and their purpose. Since I thought it could be helpful, I decided to share this insight with my readers as well.

If you’re into performance tuning in Oracle, very likely you’ve already used AWR reports. But then you open this directory: $ORACLE_HOME/rdbms/admin … and boom – you’re hit with a list of cryptic scripts: awrrpt.sql, awrgdrpi.sql, awrsqrpt.sql, awrextr.sql

What do they all do?
When should you use which one?
Why are they named like 90s DOS files 🙂 ?

Let’s keep it short and sharp. Here’s your point-to-point breakdown of the most important AWR scripts.

Before you go running any of these scripts – make sure you have the Oracle Diagnostic Pack license.
AWR stuff is not free.

I grouped them into logical chunks – reports, comparisons, SQLs, data movement, etc.

Performance Reports

These are the most common AWR reports you run to analyze performance between 2 snapshots.

Script          What it does
awrrpt.sql      Generates AWR report for the current instance (for single-instance DBs)
awrrpti.sql     Same as above, but lets you select another DBID or instance (useful for RAC)
awrgrpt.sql     AWR Global Report – gives a full RAC-wide view
awrgrpti.sql    Same as above, but lets you pick another DBID/instance

Example:
You’re troubleshooting high CPU on node 2 of your RAC? Use awrrpti.sql.
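
The invocation is the same for all of them. A typical run from SQL*Plus looks like this (the exact prompts vary a little by version):

SQL> @?/rdbms/admin/awrrpt.sql
-- Prompts for: report type (html or text), number of days of snapshots
-- to list, begin and end snapshot IDs, and a report file name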

Comparison Reports

These help you compare two different time ranges – maybe before and after a patch, or different load periods.

Script          What it does
awrddrpt.sql    Compares two AWR snapshot ranges (diff report) – for a single instance
awrddrpi.sql    Same as above, for another DBID/instance
awrgdrpt.sql    Global RAC diff report (current RAC)
awrgdrpi.sql    Global RAC diff report (another DBID/instance)

Use these when you wanna say, “Hey, this new code made the DB slower… prove it!”

SQL Reports

Want to see what a particular SQL is doing? These are your tools.

Script          What it does
awrsqrpt.sql    SQL report for a specific SQL_ID in the current instance
awrsqrpi.sql    Same thing, but lets you pick another DBID/instance

You’ll be surprised how useful this is when hunting bad queries.

Data Export / Import

Sometimes, you need to take AWR data from one system and analyze it somewhere else (like test or dev).

Script          What it does
awrextr.sql     Export AWR data using Data Pump
awrload.sql     Import AWR data using Data Pump

This is actually gold when working on performance issues across environments.
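
For reference, it is two short interactive runs; the sketch below assumes a directory object like DATA_PUMP_DIR already exists on both systems:

-- On the source system: export a snapshot range to a Data Pump dump file
SQL> @?/rdbms/admin/awrextr.sql
-- Prompts for: DBID, number of days, begin/end snapshot IDs, directory object

-- On the destination system: load the dump file via a staging schema
SQL> @?/rdbms/admin/awrload.sql
-- Prompts for: directory object, dump file name, staging schema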

Helper / Utility Scripts

These are mostly helper scripts to define input or make reports more automated or interactive.

Script          What it does
perfhubrpt.sql  Generates a fancy interactive Performance Hub report
awrinpnm.sql    Input name helper for AWR
awrinput.sql    Gets inputs before running AWR reports
awrddinp.sql    Input helper for diff reports
awrgdinp.sql    Input helper for RAC diff reports

What’s with these weird script names?

Yeah, all these awrsqrpi.sql, awrgdrpt.sql, etc. – they look like random garbage at first.
But there’s actually some logic.

Here’s how to decode them:

Abbreviation    Means
awr             Automatic Workload Repository
rpt or rp       Report
i               Lets you select a specific instance or DBID
g               Global report for RAC
d or dd         Diff report (comparing two snapshot ranges)
sq              SQL
inp             Input helper

So awrsqrpi.sql = AWR SQL Report for a different instance
And awrgdrpi.sql = AWR Global Diff Report for another DBID/instance

So Which Script Should I Use?

Here’s a quick cheat sheet:

Task                                    Script
Normal AWR report (single instance)     awrrpt.sql
AWR report for RAC (global view)        awrgrpt.sql
SQL performance report                  awrsqrpt.sql
Compare two AWR reports                 awrddrpt.sql
Export/import AWR data                  awrextr.sql and awrload.sql

If you’re doing anything with RAC – prefer the ones with g in them.
If you’re automating – use the *inp*.sql files.

Final Thoughts

Yes, the names are ugly.
Yes, the syntax is old-school.
But honestly? These AWR scripts are still some of the best tools you have for DB performance analysis.

Just remember:

  • Don’t use them without a valid license
  • Learn the naming pattern once – and it gets way easier
  • Practice running different ones on test systems

And next time someone complains, “The database is slow” … you know exactly which script to run.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


fatdba explores Vector Search in Oracle 23ai

Posted by FatDBA on July 23, 2025

So Oracle rolled out 23ai a while back and like every major release, it came packed with some really cool and interesting features. One that definitely caught my eye was Vector Search. I couldn’t resist diving in… I recently explored it in depth and would like to share a few thoughts on the subject.

You see, we’ve been doing LIKE '%tax policy%' since forever. But now, Oracle’s SQL has become more powerful. Not only does it match words … it matches meaning.

So here’s me trying to explain what vector search is, how Oracle does it, why you’d care, and some examples that’ll hopefully make it click.

What’s Vector Search, Anyway?

Alright, imagine this:

You have a table of products. You search for “lightweight laptop for travel”.
Some entries say “ultrabook”, others say “portable notebook”, and none mention “lightweight” or “travel”. Old-school SQL would’ve said: “No Matches Found”

But with vector search, it gets it. Oracle turns all that text into math .. basically, a long list of numbers called a vector … and compares meanings instead of words.

So What’s a Vector?

When we say “vector” in vector search, we’re not talking about geometry class. In the world of AI and databases, a vector is just a long list of numbers … each number representing some aspect or feature of the original input (like a sentence, product description, image, etc.).

Here’s a basic example:
[0.12, -0.45, 0.88, …, 0.03]
This is a vector … maybe a 512 or 1536-dimension one .. depending on the embedding model used (like OpenAI, Oracle’s built-in model, Cohere, etc.).

Each number in this list is abstract, but together they represent the essence or meaning of your data.

Let’s say you have these two phrases:
“Apple is a tech company”
“iPhone maker based in California”

Now, even though they don’t share many words, they mean nearly the same thing. When passed through an embedding model, both phrases are converted into vectors:

Vector A: [0.21, -0.32, 0.76, …, 0.02]
Vector B: [0.22, -0.30, 0.74, …, 0.01]

They look very close … and that’s exactly the point.

What Oracle 23ai Gives You

  • A new VECTOR datatype (yeah!)
  • VECTOR_EMBEDDING() to convert text into vectors with an in-database ONNX embedding model
  • VECTOR_INDEX to make search blazing fast
  • VECTOR_DISTANCE() to measure similarity
  • It’s all native in SQL .. no need for another vector DB bolted on

Let’s Build Something Step-by-Step

We’ll build a simple product table and do a vector search on it.

Step 1: Create the table

CREATE TABLE products (
  product_id     NUMBER PRIMARY KEY,
  product_name   VARCHAR2(100),
  category       VARCHAR2(50),
  description    VARCHAR2(1000),
  embedding      VECTOR(1536)
);

1536? That’s the dimension count of the embedding model assumed here (OpenAI-style text embeddings are 1536-dimensional). It depends on which model you use.

Step 2: Generate vector embeddings

UPDATE products
SET embedding = VECTOR_EMBEDDING(my_embed_model USING description AS data);

This’ll take the description, run it through the embedding model you’ve loaded into the database, and give you a vector. Magic.
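
One assumption baked into Step 2: an ONNX embedding model has already been loaded into the database under the name my_embed_model. A minimal sketch of that one-time setup, with hypothetical directory, file, and model names:

BEGIN
  DBMS_VECTOR.LOAD_ONNX_MODEL(
    directory  => 'MODEL_DIR',            -- directory object pointing at the .onnx file
    file_name  => 'my_embed_model.onnx',  -- hypothetical model file
    model_name => 'MY_EMBED_MODEL');
END;
/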

Step 3: Create the vector index

CREATE VECTOR INDEX product_vec_idx
ON products (embedding)
ORGANIZATION INMEMORY NEIGHBOR GRAPH
DISTANCE COSINE;

This speeds up the similarity comparisons … much like an index does for normal WHERE clauses.

Step 4: Semantic Search in SQL

SELECT product_id, product_name,
       VECTOR_DISTANCE(embedding,
                       VECTOR_EMBEDDING(my_embed_model USING 'light laptop for designers' AS data),
                       COSINE) AS score
FROM products
ORDER BY score
FETCH FIRST 5 ROWS ONLY;

Now we’re searching for meaning, not words.

VECTOR_DISTANCE Breakdown

You can use different math behind the scenes:

VECTOR_DISTANCE(v1, v2, COSINE)
VECTOR_DISTANCE(v1, v2, EUCLIDEAN)
VECTOR_DISTANCE(v1, v2, DOT)

Cosine is the usual go-to for text. Oracle handles the rest for you.

Use Cases You’ll Actually Care About

1. Semantic Product Search — “Fast shoes for runners” => shows “Nike Vaporfly”, even if it doesn’t say “fast”.

2. Similar Document Retrieval — Find all NDAs that look like this one (even with totally different words).

3. Customer Ticket Suggestion — Auto-suggest resolutions from past tickets. Saves your support team hours.

4. Content Recommendation — “People who read this also read…” kind of stuff. Easy to build now.

5. Risk or Fraud Pattern Matching — Find transactions that feel like fraud ..even if the details don’t match 1:1.

I know it might sound a little confusing .. let’s do one more example: Legal Document Matching

CREATE TABLE legal_docs (
  doc_id       NUMBER PRIMARY KEY,
  title        VARCHAR2(255),
  content      CLOB,
  content_vec  VECTOR(1536)
);

Update vectors:

UPDATE legal_docs
SET content_vec = VECTOR_EMBEDDING(my_embed_model USING content AS data);

Now find similar docs:

SELECT doc_id, title
FROM legal_docs
ORDER BY VECTOR_DISTANCE(content_vec,
         VECTOR_EMBEDDING(my_embed_model USING 'confidentiality in government contracts' AS data),
         COSINE)
FETCH FIRST 10 ROWS ONLY;

That’s it. You’re officially building an AI-powered legal search engine.

Things to Know

  • Creating vectors can be heavy .. batch it.
  • Indexing speeds up similarity search a lot.
  • Combine with normal filters for best results:
SELECT * FROM products
WHERE category = 'laptop'
ORDER BY VECTOR_DISTANCE(embedding,
         VECTOR_EMBEDDING(my_embed_model USING 'gaming laptop under 1kg' AS data),
         COSINE)
FETCH FIRST 5 ROWS ONLY;

Final Thoughts from fatdba

I’m honestly impressed. Oracle took something that felt like ML black magic and put it right in SQL. No external service. No complicated setups. Just regular SQL, but smarter.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Diagnosing a MySQL Database Performance Issue Using MySQLTuner

Posted by FatDBA on July 20, 2025

A few weeks ago, we ran into a pretty nasty performance issue on one of our production-grade MySQL databases. It started with slow application response times and ended with my phone blowing up with alerts. Something was clearly wrong, and while I suspected some bad queries or config mismatches, I needed a fast way to get visibility into what was really happening under the hood.

This is where MySQLTuner came to the rescue, again 🙂 I’ve used this tool in the past, and honestly, it’s one of those underrated gems for DBAs and sysadmins. It’s a Perl script that inspects your MySQL configuration and runtime status and then gives you a human-readable report with recommendations.

Let me walk you through how I used it to identify and fix the problem ..step by step .. including actual command output, what I changed, and the final outcome.

Step 1: Getting MySQLTuner

First things first, if you don’t already have MySQLTuner installed, just download it:

wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl

You don’t need to install anything. Just run it like this:

./mysqltuner.pl --user=root --pass='YourStrongPassword'

(Note: Avoid running this in peak traffic hours on prod unless you’re sure about your load and risk.)

Step 2: Sample Output Snapshot

Here’s a portion of what I got when I ran it:

 >>  MySQLTuner 2.6.20 
 >>  Run with '--help' for additional options and output filtering

[OK] Currently running supported MySQL version 5.7.43
[!!] Switch to 64-bit OS - MySQL cannot use more than 2GB of RAM on 32-bit systems
[OK] Operating on 64-bit Linux

-------- Performance Metrics -------------------------------------------------
[--] Up for: 3d 22h 41m  (12M q [35.641 qps], 123K conn, TX: 92G, RX: 8G)
[--] Reads / Writes: 80% / 20%
[--] Binary logging is enabled (GTID MODE: ON)
[--] Total buffers: 3.2G global + 2.8M per thread (200 max threads)
[OK] Maximum reached memory usage: 4.2G (27.12% of installed RAM)
[!!] Slow queries: 15% (1M/12M)
[!!] Highest connection usage: 98% (197/200)
[!!] Aborted connections: 2.8K
[!!] Temporary tables created on disk: 37% (1M on disk / 2.7M total)

-------- MyISAM Metrics ------------------------------------------------------
[!!] Key buffer used: 17.2% (89M used / 512M cache)
[!!] Key buffer size / total MyISAM indexes: 512.0M/800.0M

-------- InnoDB Metrics ------------------------------------------------------
[OK] InnoDB buffer pool / data size: 2.0G/1.5G
[OK] InnoDB buffer pool instances: 1
[--] InnoDB Read buffer efficiency: 99.92% (925M hits / 926M total)
[!!] InnoDB Write log efficiency: 85.10% (232417 hits / 273000 total)
[!!] InnoDB log waits: 28

-------- Recommendations -----------------------------------------------------
General recommendations:
    Control warning line(s) size by reducing joins or increasing packet size
    Increase max_connections slowly if needed
    Reduce or eliminate persistent connections
    Enable the slow query log to troubleshoot bad queries
    Consider increasing the InnoDB log file size
    Query cache is deprecated and should be disabled

Variables to adjust:
    max_connections (> 200)
    key_buffer_size (> 512M)
    innodb_log_file_size (>= 512M)
    tmp_table_size (> 64M)
    max_heap_table_size (> 64M)

Step 3: What I Observed

Here’s what stood out for me:

1. Too many slow queries — 15% of all queries were slow. That’s a huge red flag. This wasn’t being logged properly either — the slow query log was off.

2. Disk-based temporary tables — 37% of temporary tables were being written to disk. This kills performance during joins and sorts.

3. Connections hitting limit — 197 out of 200 max connections used at peak. Close to saturation ..possibly causing application timeouts.

4. MyISAM key buffer inefficient — Key buffer was too small for the amount of MyISAM index data (yes, we still have a couple of legacy MyISAM tables).

5. InnoDB log file too small — Frequent log flushing and waits were indicated, meaning innodb_log_file_size wasn’t enough for our write load.

Step 4: Actions I Took

Here’s what I changed based on the output and a quick double-check of our workload patterns:

– Enabled Slow Query Log

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

And updated /etc/my.cnf:

slow_query_log = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time = 1

– Increased tmp_table_size and max_heap_table_size:

tmp_table_size = 128M
max_heap_table_size = 128M

(This reduced the % of temp tables going to disk.)
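
To confirm the change was actually helping, I kept an eye on the temp-table counters; the on-disk ratio is what matters here:

SHOW GLOBAL STATUS LIKE 'Created_tmp%';
-- Created_tmp_disk_tables / Created_tmp_tables is the on-disk ratio;
-- it should trend down after raising tmp_table_size and max_heap_table_size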

– Raised innodb_log_file_size:

innodb_log_file_size = 512M
innodb_log_files_in_group = 2

Caution: You need to shut down MySQL cleanly before applying this change. On MySQL 5.6.8 and later, InnoDB recreates the redo logs at the new size on the next startup; on older versions you had to remove the old log files manually.

– Raised key_buffer_size:

key_buffer_size = 1G

We still had some legacy MyISAM usage and this definitely helped reduce read latency.

– Upped the max_connections a bit (but also discussed with devs about app-level connection pooling):

max_connections = 300

Step 5: Post-Change Observations

After making these changes and restarting MySQL (for some of the changes to take effect), here’s what I observed:

  • CPU dropped by ~15% at peak hours.
  • Threads_running dropped significantly, meaning less contention.
  • Temp table usage on disk dropped to 12%.
  • Slow query log started capturing some really bad queries, which were fixed in the app code within a few days.
  • No more aborted connections or connection errors from the app layer.

Final Thoughts

MySQLTuner is not a magic bullet, but it’s one of those tools that gives you quick, actionable insights without the need to install big observability stacks or pay for enterprise APM tools. I’d strongly suggest any MySQL admin or engineer dealing with production performance issues keep this tool handy.

It’s also good for periodic health checks, even if you’re not in a crisis. Run it once a month or so, and you’ll catch slow config drifts or usage pattern changes.


If you’ve had a similar experience or used MySQLTuner in your infra, I would love to hear what kind of findings you had. Drop them in the comments or message me directly if you want to know more 🙂 Happy tuning!

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Demo on How to Use Oracle XStream Replication with a Java Client

Posted by FatDBA on June 15, 2025

Oracle XStream is a feature of Oracle Database that allows for real-time data replication between databases. Unlike Oracle GoldenGate, which is more GUI-driven and feature-rich, XStream is more code-centric. This makes it highly customizable, especially suitable when you want a Java or C-based client to control how data flows.

XStream vs GoldenGate

  • XStream is older and less frequently updated compared to Oracle GoldenGate.
  • XStream is limited to Oracle-to-Oracle replication and does not support heterogeneous environments.
  • GoldenGate supports a wide range of sources and targets, is easier to manage with its GUI-based tools, and has better support for DDL replication, integrated capture, and coordinated replication.

When and Why to Use XStream

Use XStream when:

  • You need fine-grained control using custom applications written in Java or C.
  • Your environment is Oracle-to-Oracle only.
  • You want a lightweight, programmatic replication tool that can be embedded in your application logic.
  • Licensing or infrastructure constraints rule out GoldenGate, or its licensing fees are a concern.

Do not use XStream if:

  • You require GUI-driven replication monitoring and setup.
  • You need heterogeneous replication (e.g., Oracle to PostgreSQL, MySQL, etc.).
  • Your use case demands continuous support and new feature releases.

Let’s do a quick demo on how to use it. I have set up a lab environment for the test:

  • Source Database: dixitdb on 192.168.68.79:1521
  • Target Database: targetdb on 192.168.68.85:1521

Java Application (xio.java) will:

  • Connect to XStream Outbound on the source DB
  • Connect to XStream Inbound on the target DB
  • Transfer changes using Logical Change Records (LCRs)

Source Database Setup (dixitdb)

Step 1: Enable Replication Features

ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE=BOTH;

Step 2: Ensure Archive Logging is Enabled

archive log list;

Must show: Database log mode              Archive Mode
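
If the database is still in NOARCHIVELOG mode, switch it first (this needs a short outage):

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;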

Step 3: Create and Grant User for XStream

CREATE USER xstream_admin IDENTIFIED BY Welcome123;
GRANT CONNECT, RESOURCE, DBA TO xstream_admin;

Step 4: Enable Supplemental Logging

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Step 5: Create Demo Table

CREATE TABLE xstream_admin.demo_table (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  value NUMBER
);

Step 6: Grant XStream Admin Privilege

BEGIN
  DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'XSTREAM_ADMIN',
    privilege_type => 'CAPTURE',
    grant_select_privileges => TRUE
  );
END;
/

Step 7: Create Outbound Server

BEGIN
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name => 'XOUT_SRV',
    connect_user => 'XSTREAM_ADMIN'
  );
END;
/

Check status:

SELECT capture_name, status FROM dba_capture;


Target Database Setup (targetdb)

Step 1: Create the Same User

CREATE USER xstream_admin IDENTIFIED BY Welcome123;
GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO xstream_admin;

Step 2: Setup Queue

BEGIN
  DBMS_XSTREAM_ADM.SET_UP_QUEUE(
    queue_table => 'xstream_admin.XIN_SRV',
    queue_name  => 'xstream_admin.XIN_SRV'
  );
END;
/

Step 3: Create Inbound Server

BEGIN
  DBMS_XSTREAM_ADM.CREATE_INBOUND(
    server_name =>'XIN_SRV',
    queue_name  =>'xstream_admin.XIN_SRV',
    apply_user  =>'xstream_admin',
    comment     =>'xstream_admin in'
  );
END;
/

Step 4: Create Target Table

CREATE TABLE xstream_admin.demo_table (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  value NUMBER
);

Step 5: Add Table Rules

VAR dml_rule VARCHAR2(30);
VAR ddl_rule VARCHAR2(30);
BEGIN
  DBMS_XSTREAM_ADM.ADD_TABLE_RULES(
    table_name => 'xstream_admin.demo_table',
    streams_type => 'APPLY',
    streams_name => 'XIN_SRV',
    queue_name => 'xstream_admin.XIN_SRV',
    source_database => 'dixitdb',
    dml_rule_name => :dml_rule,
    ddl_rule_name => :ddl_rule
  );
END;
/

Step 6: Start Apply Process

BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'XIN_SRV');
END;
/

Check status:
SELECT apply_name, status FROM dba_apply;

Java Application: xio.java

The xio.java program acts as a custom replication engine. It:

  • Connects to XStream Inbound and Outbound
  • Receives LCRs (Logical Change Records) from the source
  • Sends them to the target

Pre-Requisites:

  • ojdbc8-19.15.0.0.jar (Oracle JDBC driver)
  • xstreams.jar (from $ORACLE_HOME/rdbms/jlib/)

Here’s the full source of xio.java:

import oracle.streams.*;
import oracle.jdbc.internal.OracleConnection;
import oracle.jdbc.*;
import oracle.sql.*;
import java.sql.*;
import java.util.*;

public class xio
{
public static String xsinusername = null;
public static String xsinpasswd = null;
public static String xsinName = null;
public static String xsoutusername = null;
public static String xsoutpasswd = null;
public static String xsoutName = null;
public static String in_url = null;
public static String out_url = null;
public static Connection in_conn = null;
public static Connection out_conn = null;
public static XStreamIn xsIn = null;
public static XStreamOut xsOut = null;
public static byte[] lastPosition = null;
public static byte[] processedLowPosition = null;

public static void main(String args[])
{
// get connection url to inbound and outbound server
in_url = parseXSInArguments(args);
out_url = parseXSOutArguments(args);

// create connection to inbound and outbound server
in_conn = createConnection(in_url, xsinusername, xsinpasswd);
out_conn = createConnection(out_url, xsoutusername, xsoutpasswd);

// attach to inbound and outbound server
xsIn = attachInbound(in_conn);
xsOut = attachOutbound(out_conn);

// main loop to get lcrs
get_lcrs(xsIn, xsOut);

// detach from inbound and outbound server
detachInbound(xsIn);
detachOutbound(xsOut);
}

// parse the arguments to get the connection url to inbound db
public static String parseXSInArguments(String args[])
{
String trace, pref;
String orasid, host, port;

if (args.length != 12)
{
printUsage();
System.exit(0);
}

orasid = args[0];
host = args[1];
port = args[2];
xsinusername = args[3];
xsinpasswd = args[4];
xsinName = args[5];

System.out.println("xsin_host = "+host);
System.out.println("xsin_port = "+port);
System.out.println("xsin_ora_sid = "+orasid);

String in_url = "jdbc:oracle:oci:@"+host+":"+port+":"+orasid;
System.out.println("xsin connection url: "+ in_url);

return in_url;
}

// parse the arguments to get the connection url to outbound db
public static String parseXSOutArguments(String args[])
{
String trace, pref;
String orasid, host, port;

if (args.length != 12)
{
printUsage();
System.exit(0);
}

orasid = args[6];
host = args[7];
port = args[8];
xsoutusername = args[9];
xsoutpasswd = args[10];
xsoutName = args[11];


System.out.println("xsout_host = "+host);
System.out.println("xsout_port = "+port);
System.out.println("xsout_ora_sid = "+orasid);

String out_url = "jdbc:oracle:oci:@"+host+":"+port+":"+orasid;
System.out.println("xsout connection url: "+ out_url);

return out_url;
}

// print out sample program usage message
public static void printUsage()
{
System.out.println("");
System.out.println("Usage: java xio "+"<xsin_oraclesid> " + "<xsin_host> "
+ "<xsin_port> ");
System.out.println(" "+"<xsin_username> " + "<xsin_passwd> "
+ "<xsin_servername> ");
System.out.println(" "+"<xsout_oraclesid> " + "<xsout_host> "
+ "<xsout_port> ");
System.out.println(" "+"<xsout_username> " + "<xsout_passwd> "
+ "<xsout_servername> ");
}

// create a connection to an Oracle Database
public static Connection createConnection(String url,
String username,
String passwd)
{
try
{
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(url, username, passwd);
}
catch(Exception e)
{
System.out.println("fail to establish DB connection to: " +url);
e.printStackTrace();
return null;
}
}

// attach to the XStream Inbound Server
public static XStreamIn attachInbound(Connection in_conn)
{
XStreamIn xsIn = null;
try
{
xsIn = XStreamIn.attach((OracleConnection)in_conn, xsinName,
"XSDEMOINCLIENT" , XStreamIn.DEFAULT_MODE);

// use last position to decide where should we start sending LCRs
lastPosition = xsIn.getLastPosition();
System.out.println("Attached to inbound server:"+xsinName);
System.out.print("Inbound Server Last Position is: ");
if (null == lastPosition)
{
System.out.println("null");
}
else
{
printHex(lastPosition);
}
return xsIn;
}
catch(Exception e)
{
System.out.println("cannot attach to inbound server: "+xsinName);
System.out.println(e.getMessage());
e.printStackTrace();
return null;
}
}

// attach to the XStream Outbound Server
public static XStreamOut attachOutbound(Connection out_conn)
{
XStreamOut xsOut = null;

try
{
// when attach to an outbound server, client needs to tell outbound
// server the last position.
xsOut = XStreamOut.attach((OracleConnection)out_conn, xsoutName,
lastPosition, XStreamOut.DEFAULT_MODE);
System.out.println("Attached to outbound server:"+xsoutName);
System.out.print("Last Position is: ");
if (lastPosition != null)
{
printHex(lastPosition);
}
else
{
System.out.println("NULL");
}
return xsOut;
}
catch(Exception e)
{
System.out.println("cannot attach to outbound server: "+xsoutName);
System.out.println(e.getMessage());
e.printStackTrace();
return null;
}
}

// detach from the XStream Inbound Server
public static void detachInbound(XStreamIn xsIn)
{
byte[] processedLowPosition = null;
try
{
processedLowPosition = xsIn.detach(XStreamIn.DEFAULT_MODE);
System.out.print("Inbound server processed low Position is: ");
if (processedLowPosition != null)
{
printHex(processedLowPosition);
}
else
{
System.out.println("NULL");
}
}
catch(Exception e)
{
System.out.println("cannot detach from the inbound server: "+xsinName);
System.out.println(e.getMessage());
e.printStackTrace();
}
}

// detach from the XStream Outbound Server
public static void detachOutbound(XStreamOut xsOut)
{
try
{
xsOut.detach(XStreamOut.DEFAULT_MODE);
}
catch(Exception e)
{
System.out.println("cannot detach from the outbound server: "+xsoutName);
System.out.println(e.getMessage());
e.printStackTrace();
}
}

public static void get_lcrs(XStreamIn xsIn, XStreamOut xsOut)
{
if (null == xsIn)
{
System.out.println("xstreamIn is null");
System.exit(0);
}

if (null == xsOut)
{
System.out.println("xstreamOut is null");
System.exit(0);
}

try
{
while(true)
{
// receive an LCR from outbound server
LCR alcr = xsOut.receiveLCR(XStreamOut.DEFAULT_MODE);

if (xsOut.getBatchStatus() == XStreamOut.EXECUTING) // batch is active
{
assert alcr != null;
// send the LCR to the inbound server
xsIn.sendLCR(alcr, XStreamIn.DEFAULT_MODE);

// also get chunk data for this LCR if any
if (alcr instanceof RowLCR)
{
// receive chunk from outbound then send to inbound
if (((RowLCR)alcr).hasChunkData())
{
ChunkColumnValue chunk = null;
do
{
chunk = xsOut.receiveChunk(XStreamOut.DEFAULT_MODE);
xsIn.sendChunk(chunk, XStreamIn.DEFAULT_MODE);
} while (!chunk.isEndOfRow());
}
}
processedLowPosition = alcr.getPosition();
}
else // batch is end
{
assert alcr == null;
// flush the network
xsIn.flush(XStreamIn.DEFAULT_MODE);
// get the processed_low_position from inbound server
processedLowPosition =
xsIn.getProcessedLowWatermark();
// update the processed_low_position at outbound server
if (null != processedLowPosition)
xsOut.setProcessedLowWatermark(processedLowPosition,
XStreamOut.DEFAULT_MODE);
}
}
}
catch(Exception e)
{
System.out.println("exception when processing LCRs");
System.out.println(e.getMessage());
e.printStackTrace();
}
}

public static void printHex(byte[] b)
{
for (int i = 0; i < b.length; ++i)
{
System.out.print(
Integer.toHexString((b[i]&0xFF) | 0x100).substring(1,3));
}
System.out.println("");
}
}


Setup steps before running:

  • Download a compatible version of the ojdbc driver (ojdbc8-19.15.0.0.jar) from Oracle’s website
  • Copy xstreams.jar from $ORACLE_HOME/rdbms/jlib/
  • Compile the Java code to create the classes:

javac -cp "ojdbc8-19.15.0.0.jar:xstreams.jar:." xio.java

Once compiled, run the Java replication app. It connects to the outbound server, reads LCRs (Logical Change Records), sends them to the inbound server, and confirms position/flush status:

[oracle@oracleontario lib]$ java -cp "ojdbc8-19.15.0.0.jar:xstreams.jar:." xio targetdb
192.168.68.85 1521 XSTREAM_ADMIN Welcome123 XIN_SRV dixitdb 192.168.68.79 1521
XSTREAM_ADMIN Welcome123 XOUT_SRV
xsin_host = 192.168.68.85
xsin_port = 1521
xsin_ora_sid = targetdb
xsin connection url: jdbc:oracle:oci:@192.168.68.85:1521:targetdb
xsout_host = 192.168.68.79
xsout_port = 1521
xsout_ora_sid = dixitdb
xsout connection url: jdbc:oracle:oci:@192.168.68.79:1521:dixitdb
Attached to inbound server:XIN_SRV
Inbound Server Last Position is: null
Attached to outbound server:XOUT_SRV
Last Position is: NULL

Now that the code is running and both source and target are ready, let’s do an insert and see if it shows up on the target.

-- on source database
SQL> select * from xstream_admin.demo_table;

        ID NAME            VALUE
---------- ---------- ----------
        10 Maine             800

SQL> insert into xstream_admin.demo_table (ID, NAME, VALUE) values (101, 'Calgary', 100);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from xstream_admin.demo_table;

        ID NAME            VALUE
---------- ---------- ----------
        10 Maine             800
       101 Calgary           100


--- on target database
SQL> select * from xstream_admin.demo_table;

        ID NAME            VALUE
---------- ---------- ----------
        10 Maine             800
       101 Calgary           100

Hope It Helped!
Prashant Dixit


Oracle Support New AI-Powered Search

Posted by FatDBA on April 8, 2025

In a major upgrade to the user experience, Oracle Support has introduced a new AI-powered search feature on its support websites … This enhancement aims to make information discovery faster, smarter, and more relevant, solving one of the most common frustrations: finding the right document among millions.

Oracle Support’s new search interface is now powered by AI, providing:

  • Faster Search Results: As you start typing your query, the AI intelligently predicts what you’re looking for.
  • Relevant Suggestions: It offers instant access to related Knowledge Base articles, Community discussions, and Official Documentation.
  • Context-Aware Results: Instead of just matching keywords, the search engine understands the intent behind your query.
  • Cleaner Interface: A streamlined design shows results by category: Knowledge Base | Community | Documentation.

For example, when you search for “How to Find the Oracle BI Publisher version,” the AI doesn’t just return a random list … it surfaces an exact-match article explaining where to find the version info, depending on whether you use a standalone install or an embedded one.

Hope It Helped!
Prashant Dixit


All new Oracle 23ai new views for enhanced Data Pump troubleshooting

Posted by FatDBA on April 2, 2025

Oracle 23ai (also known as Oracle 23c with AI features) introduces several powerful enhancements aimed at improving observability, diagnostics, and performance debugging. Among these enhancements are three new dynamic performance views designed specifically to help DBAs and developers troubleshoot Data Pump performance issues more efficiently and in real-time.

When dealing with large data exports or imports using Oracle Data Pump, performance bottlenecks or unexplained hangs can be frustrating and time-consuming to investigate. Until now, DBAs had to rely heavily on Data Pump log files, trace files, and session-level v$ views to diagnose problems. With the introduction of the following views, Oracle has taken a major step toward simplifying that process:

The three new views are: GV$DATAPUMP_PROCESS_INFO, GV$DATAPUMP_PROCESSWAIT_INFO and GV$DATAPUMP_SESSIONWAIT_INFO

These views provide real-time information about the state of Data Pump processes, their wait events, and any session-level contentions. GV$ views return cluster-wide information in a RAC environment, while V$ views return information specific to the current instance.

  • GV$DATAPUMP_PROCESS_INFO – This view shows the current Data Pump processes, including both master and worker processes. It provides basic information like the program name, session ID, username, job name, status, and system process ID.

SELECT * FROM V$DATAPUMP_PROCESS_INFO;

CUR_DATE             PROGRAM               SESSIONID  STATUS  USERNAME  JOBNAME             SPID
2023-01-09 13:56:07  ude@orcl (TNS V1-V3)  42         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891480
2023-01-09 13:56:07  oracle@orcl (DW00)    48         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891500

  • GV$DATAPUMP_PROCESSWAIT_INFO – This view helps detect contention between Data Pump processes. It shows which sessions are waiting, what events they are waiting for, and which other sessions may be blocking them.

SELECT * FROM GV$DATAPUMP_PROCESSWAIT_INFO;

WAITING_SESSION  HOLDING_SESSION  EVENT                 PROGRAM_WAITSESSION  PROGRAM_HOLDINGDSESSION
174              57               enq: TM - contention  oracle@orcl (DM00)   oracle@orcl (DW00)

  • GV$DATAPUMP_SESSIONWAIT_INFO – Provides deep insights into session-level waits during Data Pump operations, including how long the session has been in the wait state and what it is waiting on.

SELECT * FROM GV$DATAPUMP_SESSIONWAIT_INFO;

WAITING_SESSION  EVENT                 DP_SECONDS_IN_WAIT  DP_STATE_IN_WAIT  DP_P1TEXT  DP_P1
46               enq: TM - contention  8086                WAITING           name       mode

Before Oracle 23ai, debugging Data Pump jobs required checking logs, trace files, and manual session analysis. These new views provide real-time visibility into what each Data Pump process is doing, what it is waiting on, and where it might be blocked.

Use Cases:

1. If a Data Pump job appears hung, GV$DATAPUMP_PROCESSWAIT_INFO can help identify which process is waiting and what it is waiting on.
2. If sessions are slow or idle, GV$DATAPUMP_SESSIONWAIT_INFO provides detailed timing and wait reasons.
3. If parallel execution is used, GV$DATAPUMP_PROCESS_INFO shows how many worker processes are active and whether they are all functioning as expected (see the quick cross-check below).
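
For use case 3, I like to cross-check the new views against the classic job view, which maps job names to their state and parallel degree:

SELECT owner_name, job_name, operation, state, degree, datapump_sessions
FROM   dba_datapump_jobs;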

The new Data Pump views in Oracle 23ai are a significant step forward for real-time performance diagnostics. They provide valuable insights that were previously hard to obtain, allowing DBAs to troubleshoot and optimize Data Pump operations with much more confidence and clarity.

Hope It Helped!
Prashant Dixit


Leveraging SQLT to transfer execution plans between SQL IDs using coe_load_sql_profile.sql

Posted by FatDBA on March 11, 2025

Hi All,

Have you used coe_load_sql_profile.sql before? A lot of people use coe_xfr_sql_profile.sql from SQLT, and both scripts deal with SQL Profiles in Oracle, but their purposes and use cases differ. coe_xfr_sql_profile.sql is used to export and migrate an existing SQL Profile from one system to another, ensuring performance stability across environments. coe_load_sql_profile.sql is used to create a new SQL Profile by capturing the execution plan of a modified SQL query and applying it to the original query, forcing it to use the optimized plan.

Let me first explain a little more about the toolkit. Oracle SQLT (SQLTXPLAIN) is a powerful tool designed to help DBAs analyze and troubleshoot SQL performance issues, and all the scripts mentioned above are part of the kit, provided by Oracle and written by none other than Carlos Sierra.

A common question DBAs encounter is: Can we plug the execution plan of one SQL ID into another SQL ID? The answer is YES! This can be accomplished using the SQLT script coe_load_sql_profile.sql. In this post, we will explore how to use this script to achieve plan stability by enforcing a preferred execution plan across different SQL IDs. The script searches both memory and AWR for the text of the SQL IDs you pass, then queries GV$SQL_PLAN and DBA_HIST_SQL_PLAN to extract the plan hash value of the modified SQL. Once it has collected that information, it loops through the outline hints of the modified SQL’s execution plan, and finally creates a SQL Profile using DBMS_SQLTUNE.IMPORT_SQL_PROFILE.
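
Before running it, it helps to confirm which plan hash values actually exist for the two SQL IDs, since parameter 3 of the script expects one of them:

SELECT sql_id, plan_hash_value, COUNT(*) AS snaps_seen
FROM   dba_hist_sqlstat
WHERE  sql_id IN ('78a1nbdabcba', '9na182nn2bnn')
GROUP  BY sql_id, plan_hash_value;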

Let’s give a quick demo … assume we have two SQL statements:

SQL ID 1: 78a1nbdabcba (Original SQL)
SQL ID 2: 9na182nn2bnn (Modified SQL)

Both queries are logically similar but produce different execution plans.
Our goal is to take the execution plan of SQL ID 2 (the modified SQL) and attach it to SQL ID 1 (the original), so the original statement runs with the preferred plan.

    connect system/monkey123
    SQL> @coe_load_sql_profile.sql 
    or 
    SQL> START coe_load_sql_profile.sql <ORIGINAL_SQL_ID> <MODIFIED_SQL_ID>
    
    
    Parameter 1:
    ORIGINAL_SQL_ID (required)
    
    Enter value for 1: 78a1nbdabcba
    
    Parameter 2:
    MODIFIED_SQL_ID (required)
    
    Enter value for 2: 9na182nn2bnn
    
    
         PLAN_HASH_VALUE          AVG_ET_SECS
    -------------------- --------------------
              1181381381                 .003
    
    Parameter 3:
    PLAN_HASH_VALUE (required)
    
    Enter value for 3: 1181381381
    
    Values passed to coe_load_sql_profile:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ORIGINAL_SQL_ID: "78a1nbdabcba"
    MODIFIED_SQL_ID: "9na182nn2bnn"
    PLAN_HASH_VALUE: "1181381381"
    
    .
    .
    .
    
    ORIGINAL:78a1nbdabcba MODIFIED:9na182nn2bnn PHV:1181381381 SIGNATURE:16731003137917309319 CREATED BY COE_LOAD_SQL_PROFILE.SQL
    SQL>SET ECHO OFF;
    
    ****************************************************************************
    * Enter password to export staging table STGTAB_SQLPROF_78a1nbdabcba
    ****************************************************************************
    
    Export: Release 19.0.0- Production on Sun Mar 08 14:45:47 2012
    
    Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.
    
    Password:
    .
    .
    .
    
    coe_load_sql_profile completed.
    
    
    
    Run original query
    SQL> select ename from DIXIT where ename='Name';
    
    Plan hash value: 1181381381
    
    ---------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| 
    ---------------------------------------------------------------
    |   0 | SELECT STATEMENT  |        |     1 |     6 |     3   (0)|
    |*  1 |  TABLE ACCESS FULL| DIXIT  |     1 |     6 |     3   (0)|
    ---------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("ENAME"='Name')
    
    Note
    -----
       - SQL profile "78a1nbdabcba_1181381381" used for this statement
       
    


What are your experiences with enforcing execution plans in Oracle?
Let me know in the comments!

Hope It Helped!
Prashant Dixit


Resolving an interesting timezone mismatch in database sessions after migration

Posted by FatDBA on March 8, 2025

This happened a while ago, but it was an interesting scenario during a database migration. We encountered a unique issue where the system timezone varied depending on how the database connection was established. Specifically, when connecting to the database using a service name, the session timezone was set to EST, whereas a direct connection (without using a service name) correctly showed the timezone as UTC. This discrepancy had the potential to cause severe inconsistencies in time-sensitive transactions and logging mechanisms, prompting an urgent need for resolution.

    [oracle@xxxxxx ~]$ sqlplus monkey/xxxxxxx@MONKEYD
    SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
    
    NOW
    -------------------
    02-08-2025 13:20:07  -- Displaying EST timezone
    
    SQL> !date
    Sat Feb 8 18:20:10 UTC 2025  -- Server's system timezone is UTC
    
    
    
When connecting without specifying a service name:
    [oracle@xxxxx 11.2.0.4]$ sqlplus monkey/xxxxxxx
    SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
    
    NOW
    -------------------
    02-08-2025 18:15:40  -- Correctly showing UTC timezone
    
    SQL> !date
    Sat Feb 8 18:15:42 UTC 2025  -- Server's system timezone remains UTC
    

Upon detailed investigation, we determined that the issue stemmed from the Oracle Clusterware (HAS) environment settings. Specifically:

Listener start method: when the listener was started via srvctl, database sessions picked up the incorrect timezone (EST); when the listener was started manually, the correct timezone (UTC) was applied.

We reviewed the file /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt, which defines the environment variables for Oracle Clusterware. The file contained the incorrect timezone setting: TZ=America/New_York. This setting was overriding the expected system timezone whenever the listener was started via srvctl.

The previous production environment, running Oracle 11g with Oracle Restart, had TZ=GMT-00:00, which correctly maintained UTC behavior.
The new setup on Oracle 19c had TZ=America/New_York, leading to the observed inconsistency.

To resolve the issue, we first took a backup of the affected configuration file before updating the TZ parameter:

cp /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig_env.txt \
   /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig_env.txt.

We then modified TZ=America/New_York in the environment file /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt and updated it to TZ=UTC.

Since Oracle Restart enforces the timezone setting at the environment level, a restart of services was required. This step was planned within a scheduled outage window to avoid disruption.

srvctl stop listener
srvctl stop database -d MONKEYDB
crsctl stop has
crsctl start has
srvctl start database -d MONKEYDB
srvctl start listener

After applying the changes and restarting the necessary services, we verified that the database sessions were now consistently reporting the correct timezone (UTC):

sqlplus monkey/xxxxxxx@MONKEYD
SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;

The output now correctly showed UTC time, confirming that the issue was successfully resolved.
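
A couple of extra sanity checks we ran on both connection paths:

SQL> SELECT SYSTIMESTAMP FROM DUAL;                 -- shows the OS-level UTC offset the instance sees
SQL> SELECT DBTIMEZONE, SESSIONTIMEZONE FROM DUAL;  -- database vs. session timezone settings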

This issue highlighted a subtle but critical misconfiguration that affected database session timezones in a clustered environment. The root cause was traced back to the Oracle Restart configuration file, which was enforcing the wrong timezone (America/New_York). By updating this setting to UTC and restarting the HAS services, we successfully restored consistency across all database sessions.

Moving forward, these insights will help ensure that similar issues are proactively identified and mitigated during future database migrations. Proper validation of environment settings, particularly in Oracle Clusterware and HA setups, is crucial to maintaining operational integrity and preventing timezone-related anomalies in mission-critical applications.

Hope It Helped!
Prashant Dixit
