Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more … ¯\_(ツ)_/¯

Archive for February, 2025

Resolving Performance Issues in Oracle EBS 12.1 Journal Imports Due to GL Code Combinations Exclusive Table Lock

Posted by FatDBA on February 18, 2025

A long-standing challenge in Oracle EBS 12.1 is journal imports taking an excessive amount of time due to exclusive locking on the GL.GL_CODE_COMBINATIONS table. The lock serializes the imports, slowing down the entire journal import cycle and ultimately impacting overall system performance.

The problem surfaces during the journal import process, when records are inserted into the GL_BALANCES table: the import takes an exclusive table-level lock on GL_CODE_COMBINATIONS, so only one import process can proceed at a time and subsequent imports are forced to wait. The contention shows up as waits on the event ‘enq: TM - contention’.
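
When the issue is active, one quick way to confirm it is to look for sessions stuck on this wait event and resolve the locked object from the wait parameters (for the TM enqueue, P2 carries the object ID). A minimal sketch against the standard dynamic views:

SELECT s.sid,
       s.serial#,
       s.blocking_session,
       o.owner || '.' || o.object_name AS locked_object,
       s.seconds_in_wait
FROM   v$session s
       JOIN dba_objects o ON o.object_id = s.p2
WHERE  s.event = 'enq: TM - contention';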

Understanding Profile Options: Flexfields:Validate on Server and Flexfields:Shared Table Lock

  • Flexfields:Validate on Server
    • This profile decides whether flexfield validations take place on the server or on the client.
    • Setting it to Yes or No only changes where the validations run, not what gets validated.
    • If set to Yes, validations occur on the server, reducing the client-side workload.
  • Flexfields:Shared Table Lock
    • This setting determines the locking behavior during client-side validation.
    • When set to Yes, the flexfield table (GL_CODE_COMBINATIONS) is locked in shared mode.
    • When set to No, the table is locked in exclusive mode, leading to serialization and blocking other processes.
    • Note: If Flexfields:Validate on Server is set to No, the Flexfields:Shared Table Lock profile setting takes effect.
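
To see how these two profiles are currently set at site level, the standard FND profile views can be queried (a sketch, assuming access to the APPS schema; the LIKE filters are deliberately loose because display names can vary slightly between patch levels):

SELECT p.user_profile_option_name,
       v.level_id,
       v.profile_option_value
FROM   fnd_profile_options_vl p
       JOIN fnd_profile_option_values v
            ON v.profile_option_id = p.profile_option_id
WHERE  p.user_profile_option_name LIKE 'Flexfields:%Server'
   OR  p.user_profile_option_name LIKE 'Flexfields:%Table Lock';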

How These Profiles Impact Journal Import Performance: The Journal Import program is indirectly affected by these profile settings. The issue arises when the import process attempts to create new code combinations using the flexfield validation routine.

  • If Flexfields:Validate on Server is set to No, the system relies on client-side validation, which then enforces the Flexfields:Shared Table Lock setting.
  • If Flexfields:Shared Table Lock is set to No, the system applies exclusive locks on GL_CODE_COMBINATIONS, leading to serialization of imports.

To improve journal import performance and minimize locking issues, consider the following:

  1. Set Flexfields:Validate on Server to Yes
    • This shifts validation to the server, reducing contention on GL_CODE_COMBINATIONS.
  2. Ensure Flexfields:Shared Table Lock is set to Yes (if server-side validation is disabled)
    • This prevents exclusive table-level locks and allows multiple imports to run concurrently.
  3. Optimize Code Combination Creation
    • Regularly clean up and archive unused or obsolete code combinations to reduce contention (a rough sizing query is sketched after this list).
    • Implement proper indexing on frequently accessed flexfield tables to enhance performance.
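
For the cleanup suggestion above, a rough way to gauge how many combinations are already disabled or end-dated, and are therefore archiving candidates, is a count like this (a sketch assuming the standard ENABLED_FLAG and END_DATE_ACTIVE columns):

SELECT enabled_flag,
       COUNT(*) AS combination_count
FROM   gl.gl_code_combinations
WHERE  enabled_flag = 'N'
   OR  end_date_active < SYSDATE
GROUP  BY enabled_flag;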

Hope It Helped!
Prashant Dixit


Optimizing MySQL Redo Log Performance in OCI

Posted by FatDBA on February 16, 2025

When deploying databases in the cloud, ensuring efficient redo log handling is crucial for maintaining performance and consistency. Many organizations struggle with suboptimal commit times due to increased latency introduced by virtualized cloud environments. This article explores common redo log bottlenecks and provides practical strategies to optimize transaction commit performance in cloud-based databases.

In a database, redo logs play a vital role in maintaining durability and crash recovery. Every commit operation involves writing changes to redo logs before confirming the transaction to the user. In an on-premise setup, disk latencies are relatively stable and predictable. However, in the cloud, factors like shared storage, networked disks, and hypervisor scheduling introduce additional delays that impact redo log efficiency.

To diagnose redo log-related latency, you can use Oracle wait events or MySQL’s performance schema. In Oracle, the event “log file sync” can indicate significant wait times on redo logs. Similarly, in MySQL, the following query provides insights into log-related latencies:

SELECT EVENT_NAME,
       SUM_TIMER_WAIT / 1000000000 AS wait_time_ms  -- performance_schema timers are in picoseconds
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/io/file/innodb/innodb_log_file%';

High wait times suggest potential issues with redo log performance that need addressing.
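
For the Oracle ‘log file sync’ event mentioned above, a comparable instance-wide check can be run against v$system_event (a minimal sketch):

SELECT event,
       total_waits,
       time_waited_micro / 1000 AS time_waited_ms
FROM   v$system_event
WHERE  event = 'log file sync';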

Optimizing Redo Log Writes in the Cloud

Use Low-Latency Storage Options – OCI offers various storage tiers, and selecting the right one is crucial for redo log performance. Use Block Storage with Auto-Tuning Performance or BM.DenseIO shapes for optimized log writes.

Additionally, placing redo logs on separate disks ensures that redo writes don’t compete with regular data file operations.
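
In MySQL, a quick way to see whether the redo logs currently share a device with the data files is to compare the two locations; a value of './' for the log directory means the logs live inside the data directory (innodb_log_group_home_dir is not dynamic, so moving the logs requires a my.cnf change and a restart):

SELECT @@innodb_log_group_home_dir AS redo_log_dir,
       @@datadir                   AS data_dir;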

Tune Log Buffer Size – In MySQL, increasing innodb_log_buffer_size lets large transactions buffer more redo before a flush to disk is forced, reducing the frequency of log writes:

SET GLOBAL innodb_log_buffer_size = 134217728;  -- 128M in bytes; dynamic since MySQL 8.0
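
Whether the buffer is actually a bottleneck can be judged from the Innodb_log_waits status counter, which counts how often the log buffer was too small and a flush had to be waited for; if it keeps climbing, a larger buffer helps:

SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';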

Optimize Commit Strategies – For high-throughput applications, batching commits can reduce redo log contention. Instead of committing after every single statement, applications can group multiple changes into one transaction and pay the commit (redo flush) cost once. Example in MySQL using an explicit transaction:

START TRANSACTION;
INSERT INTO orders VALUES (1, 'New Order');
INSERT INTO orders VALUES (2, 'Second Order');
COMMIT;

Increase Parallelism for Redo Processing – In MySQL 8.0.22 and later, dedicated redo log writer threads (controlled by innodb_log_writer_threads, which is ON by default) write and flush the log asynchronously. In addition, the write-ahead block size can be aligned with the underlying storage block size to avoid read-on-write penalties:

SET GLOBAL innodb_log_write_ahead_size = 8192;  -- match this to the storage block size

Leverage Cloud-Native Optimizations – OCI also offers database-specific optimizations. For Oracle workloads, Exadata smart storage features such as storage indexes cut unnecessary data-file I/O, leaving more of the I/O bandwidth for redo log writes.

Redo log performance is a key factor in ensuring efficient and reliable database transactions in cloud environments. By selecting low-latency storage, tuning log buffer settings, optimizing commit strategies, increasing parallelism, and leveraging cloud-specific optimizations, you can significantly reduce commit latency and improve overall database responsiveness.

Hope It Helped!
Prashant Dixit


ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ

Posted by FatDBA on February 16, 2025

Recently, I had a conversation with an old friend who was struggling with Data Pump import errors. He encountered the following error message:

"ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ."

The message itself provided a clear hint: the issue stemmed from a time zone (TZ) version mismatch between the source and target databases. He was trying to export data from a 19.20.0 database (DST 42) and import it into an older 19.18.0 database (DST 32).

This situation left him quite perplexed, as his database contained a large amount of time zone-dependent data, raising concerns about potential data inconsistencies and conversion errors.
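
A quick way to compare the two sides up front is to check the DST-related database properties (DST_PRIMARY_TT_VERSION in particular) on both the source and the target:

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%';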

How It All Started – Initially, when he created the database using Oracle 19.18.0, it was configured with DST version 32 by default.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat              32          0

Later, he upgraded his Oracle home to 19.20.0 and applied patches to his existing 19.18.0 database. However, the DST version remained at 32, as patching alone does not update the DST version unless explicitly modified using AutoUpgrade.

The real issue arose when he created a brand-new database using the 19.20.0 home. Upon checking its DST version, he discovered that it had been assigned a newer DST version (42), which led to the compatibility problem.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_42.dat              42          0

The Core Problem
The mismatch between DST 42 (the new source database) and DST 32 (the older target) meant that Data Pump could not safely process the time zone-based data: the target cannot correctly interpret TSTZ values written with a newer time zone file, so the import fails.

Unfortunately, there is no direct solution for this in Oracle 19c. Starting with Oracle 21c, however, the problem can be bypassed by setting the following hidden parameter, which overrides the strict time zone check:

alter system set "_datapump_bypass_tstz_check"=TRUE scope=both;

Since my friend was working with Oracle 19c, we explored several potential workarounds:

Kamil’s Solution: One promising approach was suggested by Kamil Stawiarski, who detailed a method to resolve the issue in his blog post:
Solved: ORA-39405 – Oracle Data Pump does not support importing from a source database with TSTZ. However, this method requires OS-level access, which my friend did not have in his environment, making it infeasible for him.

Preemptively Removing Newer DST Files: Another workaround we considered involved automating a cleanup process that periodically removes newer, unwanted DST files from all $ORACLE_HOME/oracore/zoneinfo directories and their subdirectories. This could be implemented as a scheduled cron job that runs regularly (or at least after each patch application) to prevent new databases from being created with newer DST files.

The downside? This approach would block access to newer time zone definitions, which could become problematic if applications or users require those updates in the future.

The Agreed Solution – After considering various options, we decided on the most practical and controlled approach:

When creating a new database, explicitly set the desired DST version at the time of creation.
This can be achieved by using DBCA (Database Configuration Assistant) or a custom script while ensuring that the ORA_TZFILE environment variable is set beforehand.

For example, using DBCA in silent mode, the database can be created against a specific DST file by pointing ORA_TZFILE at it first (the remaining -createDatabase arguments are the usual ones for your environment):

export ORA_TZFILE=timezlrg_<DESIRED DST VERSION>.dat
dbca -silent -createDatabase ...

By defining the DST version at the time of database creation, we can prevent mismatches and avoid compatibility issues when exporting/importing data across databases with different DST versions.

While there’s no perfect solution in Oracle 19c, proper planning and proactive DST version control can mitigate potential errors and ensure smoother database operations.

Hope It Helped!
Prashant Dixit
