Tales From A Lazy Fat DBA

It's all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ

Posted by FatDBA on February 16, 2025

Recently, I had a conversation with an old friend who was struggling with Data Pump import errors. He encountered the following error message:

"ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ."

The message itself provided a clear hint—the issue stemmed from a time zone (TZ) version mismatch between the source and target databases. He was trying to export data from a 19.20.0 database (DST 42) and import it into an older 19.18.0 database (DST 32).

This situation left him quite perplexed, as his database contained a large amount of time zone-dependent data, raising concerns about potential data inconsistencies and conversion errors.

How It All Started – Initially, when he created the database using Oracle 19.18.0, it was configured with DST version 32 by default.

SELECT * FROM v$timezone_file;

FILENAME             VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat              32          0

Later, he upgraded his Oracle home to 19.20.0 and applied patches to his existing 19.18.0 database. However, the DST version remained at 32, as patching alone does not update the DST version unless explicitly modified using AutoUpgrade.

The real issue arose when he created a brand-new database using the 19.20.0 home. Upon checking its DST version, he discovered that it had been assigned a newer DST version (42), which led to the compatibility problem.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_42.dat              42          0

The Core Problem
The mismatch between DST 42 (new database) and DST 32 (old database) meant that Data Pump could not process time zone-based data properly, causing the import failure.
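Before attempting such an import, the mismatch can be confirmed by comparing the active DST version on both sides. A quick dictionary check, run on source and target alike (DST_PRIMARY_TT_VERSION is the version currently in use):

```sql
-- Run on both source and target; a difference in DST_PRIMARY_TT_VERSION
-- is what triggers ORA-39405 during the Data Pump import
SELECT property_name, property_value
  FROM database_properties
 WHERE property_name LIKE 'DST_%';
```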

Unfortunately, there is no direct solution available for this issue in Oracle 19c. However, starting from Oracle 21c and later, this problem could have been easily bypassed by setting the following hidden parameter, which overrides the strict time zone check:

alter system set "_datapump_bypass_tstz_check"=TRUE scope=both;

Since my friend was working with Oracle 19c, we explored several potential workarounds:

Kamil’s Solution: One promising approach was suggested by Kamil Stawiarski, who detailed a method to resolve the issue in his blog post:
Solved: ORA-39405 – Oracle Data Pump does not support importing from a source database with TSTZ. However, this method requires OS-level access, which my friend did not have in his environment, making it an infeasible option.

Preemptively Removing Newer DST Files: Another workaround we considered involved automating a cleanup process that periodically removes newer, unwanted DST files from all $ORACLE_HOME/oracore/zoneinfo directories and their subdirectories. This could be implemented as a scheduled cron job that runs regularly (or at least after each patch application) to prevent new databases from being created with newer DST files.

The downside? This approach would block access to newer time zone definitions, which could become problematic if applications or users require those updates in the future.
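The cleanup idea above could be sketched roughly as follows. This is a hypothetical helper, not an Oracle-supplied tool: the function name, the version threshold, and the invocation path are all assumptions, and anything like this should be tested carefully before being scheduled via cron against a real Oracle home.

```shell
#!/bin/sh
# Hypothetical sketch: remove time zone files newer than a chosen DST version
# from an Oracle home's zoneinfo tree, so new databases cannot pick them up.
# Arguments: 1) zoneinfo directory, 2) highest DST version to keep.
cleanup_dst_files() {
    zdir="$1"; keep="$2"
    # Match both timezone_NN.dat and timezlrg_NN.dat, including subdirectories
    find "$zdir" -type f -name 'timez*_*.dat' | while read -r f; do
        # Pull the numeric DST version out of names like timezlrg_42.dat
        ver=$(basename "$f" .dat | sed 's/^[^_]*_//')
        if [ "$ver" -gt "$keep" ] 2>/dev/null; then
            echo "Removing $f (DST version $ver > $keep)"
            rm -f "$f"
        fi
    done
}

# Typical invocation from a cron job after each patch cycle (assumed paths):
# cleanup_dst_files "$ORACLE_HOME/oracore/zoneinfo" 32
```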

The Agreed Solution – After considering various options, we decided on the most practical and controlled approach:

When creating a new database, explicitly set the desired DST version at the time of creation.
This can be achieved by using DBCA (Database Configuration Assistant) or a custom script while ensuring that the ORA_TZFILE environment variable is set beforehand.

For example, when using DBCA in silent mode, the variable is exported first so that the new database is created with the specified time zone file:

export ORA_TZFILE=timezlrg_<DESIRED_DST_VERSION>.dat
dbca -silent -createDatabase ...

By defining the DST version at the time of database creation, we can prevent mismatches and avoid compatibility issues when exporting/importing data across databases with different DST versions.

While there’s no perfect solution in Oracle 19c, proper planning and proactive DST version control can mitigate potential errors and ensure smoother database operations.

Hope It Helped!
Prashant Dixit


Database Migration Challenges : JServer JAVA Virtual Machine gets INVALID or UPGRADING during manual upgrade

Posted by FatDBA on December 14, 2024

Migrations can be a horrifying experience: tricky, complex, time-intensive, and often riddled with unexpected challenges. This becomes even more evident when you're migrating between older database versions, where architectural and component-level changes are significant. I remember one such encounter during a migration from Oracle 11g to 19c on new infrastructure. We used RMAN DUPLICATE with the NOOPEN clause to restore the source database backup on the target before running the manual upgrade procedures. The process seemed smooth initially but soon ran into a host of issues with key database components.

The Problem

During the upgrade process, several critical components failed, leaving the database in an inconsistent state. The errors revolved around the following components:

COMP_ID   COMP_NAME                       VERSION      STATUS
--------- ------------------------------- ------------ ----------
JAVAVM    JServer JAVA Virtual Machine    11.2.0.4.0   UPGRADING
XML       Oracle XDK                      19.0.0.0.0   INVALID
CATJAVA   Oracle Database Java Packages   19.0.0.0.0   INVALID

The errors observed in dbupgrade runtime logs included:

ORA-29554: unhandled Java out of memory condition
ORA-06512: at "SYS.INITJVMAUX", line 230
ORA-06512: at line 5

ORA-29548: Java system class reported: release of Java system classes in the database (11.2.0.4.190115) does not match that of the oracle executable (19.0.0.0.0 1.8)

These errors stemmed from a failure to allocate sufficient memory during the upgrade process. The Java Virtual Machine (JVM) ran out of memory, causing cascading errors that invalidated other components like Oracle XDK and Java Database Packages (CATJAVA). This wasn’t a mere inconvenience—it meant that critical database functionality was broken, making the system unusable for applications relying on these components.

Root Cause

Upon investigation, we found that the issue was caused by using a temporary RMAN parameter file during the restore process. This parameter file contained a minimal set of initialization parameters, which were insufficient to handle the resource-intensive operations required during the upgrade, particularly for recompiling and validating Java components.

Key memory areas like the SGA, shared pool, large pool, and Java pool were inadequately configured. These areas play a crucial role during the execution of upgrade scripts such as dbupgrade, catctl.pl, or catupgrd.sql. Without sufficient memory, the upgrade process for these components failed midway, leaving them in an invalid state.
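In a similar situation, it's worth checking what the restore-time parameter file actually left you with before launching the upgrade scripts. A quick look at the relevant pools (the parameter names are standard; the values that are "enough" are environment-specific):

```sql
-- Inspect the memory pools that matter for Java component recompilation
SELECT name, display_value
  FROM v$parameter
 WHERE name IN ('sga_target', 'shared_pool_size',
                'large_pool_size', 'java_pool_size');
```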

The Fix

To resolve these issues and ensure the migration proceeded smoothly, the following steps were taken:

Step 1: Adjust Initialization Parameters

The first step was to allocate adequate memory for the Java components to prevent out-of-memory conditions. Critical parameters like the Java pool and other memory pools were adjusted to handle the load during the upgrade process:

ALTER SYSTEM SET java_jit_enabled = TRUE;
ALTER SYSTEM SET "_system_trig_enabled" = TRUE;
ALTER SYSTEM SET java_pool_size = 180M; -- Ensure at least 150 MB is allocated

Step 2: Recreate the Java Component

The next step was to drop and recreate the Java component in the database. This ensured that any inconsistencies caused by the previous upgrade failure were cleaned up:

CREATE OR REPLACE JAVA SYSTEM;

Step 3: Restart the Upgrade Scripts

After fixing the memory settings and recreating the Java component, the upgrade process was restarted using Oracle’s upgrade utilities:

  • dbupgrade: The recommended tool for 19c migrations.
  • catctl.pl: For manual control over the upgrade process.
  • catupgrd.sql: A fallback script for older methods.

Logs such as upg_summary.log were closely monitored during the process to catch any errors or exceptions in real-time.
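A lightweight way to keep an eye on those logs is a small grep wrapper. The helper below is a hypothetical sketch (the function name and the example log path are assumptions, not Oracle tooling):

```shell
#!/bin/sh
# Hypothetical helper: report ORA- errors found in an upgrade log file.
scan_upgrade_log() {
    log="$1"
    # -n prints line numbers so errors can be located quickly in large logs
    if grep -n 'ORA-[0-9]\{5\}' "$log"; then
        echo "Errors found in $log -- review before continuing."
    else
        echo "No ORA- errors found in $log"
    fi
}

# Example usage during an upgrade run (log location varies by setup):
# scan_upgrade_log /path/to/upgrade/logs/upg_summary.log
```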

Step 4: Verify the Upgrade

Once the upgrade process was completed, the status of all components was verified using the DBA_REGISTRY and DBA_REGISTRY_HISTORY views:

SELECT SUBSTR(comp_name, 1, 30) comp_name,
       SUBSTR(version, 1, 20) version,
       status
  FROM dba_registry
 ORDER BY comp_name;

Expected output:

COMP_NAME                      VERSION              STATUS
------------------------------ -------------------- ---------------
JServer JAVA Virtual Machine   19.0.0.0.0           UPGRADED

Key Takeaways

This experience highlighted several crucial lessons when handling database migrations, especially for major version upgrades like 11g to 19c:

1. Adequate Initialization Parameters Are Essential

The memory-related initialization parameters (java_pool_size, shared_pool_size, etc.) must be configured appropriately before starting the upgrade process. Using a minimal parameter file during RMAN DUPLICATE can lead to critical issues if not adjusted later.

2. Resource-Intensive Components Need Extra Attention

Components like JAVAVM, Oracle XDK, and CATJAVA are highly resource-intensive. Even slight memory misconfigurations can lead to cascading failures that disrupt the entire migration process.

3. Monitor Upgrade Logs Closely

Keeping an eye on upgrade runtime logs and the summary logs (upg_summary.log) is vital for catching errors early. This allows you to address issues promptly before they snowball into larger problems.

4. Understand Dependencies

Database components often have interdependencies. For instance, a failure in the Java Virtual Machine component affected both the Oracle XDK and CATJAVA packages. Understanding these dependencies is key to resolving issues effectively.

Conclusion

Database migrations are inherently challenging, especially when dealing with major version jumps. This particular experience from migrating Oracle 11g to 19c served as a valuable reminder of the importance of preparation, thorough testing, and paying close attention to resource configurations. With the right approach, even complex migrations can be navigated successfully, ensuring the database is ready for modern workloads and enhanced performance.

By addressing these pitfalls and being proactive, you can ensure a smoother upgrade process and avoid unnecessary downtime or functionality issues.

Let me know if this approach resonates with your migration experiences!

Hope It Helped!
Prashant Dixit
