In a major upgrade to the user experience, Oracle Support has introduced a new AI-powered search feature on its support websites … This enhancement aims to make information discovery faster, smarter, and more relevant, solving one of the most common frustrations: finding the right document among millions.
Oracle Support’s new search interface is now powered by AI, providing:
Faster Search Results: As you start typing your query, the AI intelligently predicts what you’re looking for.
Relevant Suggestions: It offers instant access to related Knowledge Base articles, Community discussions, and Official Documentation.
Context-Aware Results: Instead of just matching keywords, the search engine understands the intent behind your query.
Cleaner Interface: A streamlined design shows results by category: Knowledge Base | Community | Documentation.
In the screenshot example (see above), when you search for “How to Find the Oracle BI Publisher version,” the AI doesn’t just return a random list … it shows an exact match article explaining where to find version info, depending on whether you use a standalone install or an embedded one.
Oracle 23ai (formerly Oracle Database 23c) introduces several powerful enhancements aimed at improving observability, diagnostics, and performance debugging. Among these enhancements are three new dynamic performance views designed specifically to help DBAs and developers troubleshoot Data Pump performance issues more efficiently and in real time.
When dealing with large data exports or imports using Oracle Data Pump, performance bottlenecks or unexplained hangs can be frustrating and time-consuming to investigate. Until now, DBAs had to rely heavily on Data Pump log files, trace files, and session-level v$ views to diagnose problems. With the introduction of the following views, Oracle has taken a major step toward simplifying that process:
The three new views are GV$DATAPUMP_PROCESS_INFO, GV$DATAPUMP_PROCESSWAIT_INFO, and GV$DATAPUMP_SESSIONWAIT_INFO.
These views provide real-time information about the state of Data Pump processes, their wait events, and any session-level contentions. GV$ views return cluster-wide information in a RAC environment, while V$ views return information specific to the current instance.
GV$DATAPUMP_PROCESS_INFO – This view shows the current Data Pump processes, including both master and worker processes. It provides basic information like the program name, session ID, username, job name, status, and system process ID.
SELECT * FROM V$DATAPUMP_PROCESS_INFO;
CUR_DATE             PROGRAM               SESSIONID  STATUS  USERNAME  JOBNAME             SPID
-------------------  --------------------  ---------  ------  --------  ------------------  -------
2023-01-09 13:56:07  ude@orcl (TNS V1-V3)  42         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891480
2023-01-09 13:56:07  oracle@orcl (DW00)    48         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891500
GV$DATAPUMP_PROCESSWAIT_INFO – This view helps detect contention between Data Pump processes. It shows which sessions are waiting, what events they are waiting for, and which other sessions may be blocking them.
GV$DATAPUMP_SESSIONWAIT_INFO – Provides deep insights into session-level waits during Data Pump operations, including how long the session has been in the wait state and what it’s waiting on.
SELECT * FROM GV$DATAPUMP_SESSIONWAIT_INFO;
WAITING_SESSION  EVENT                 DP_SECONDS_IN_WAIT  DP_STATE_IN_WAIT  DP_P1TEXT  DP_P1
---------------  --------------------  ------------------  ----------------  ---------  -----
             46  enq: TM - contention                8086  WAITING           name       mode
Before Oracle 23ai, debugging Data Pump jobs required checking logs, trace files, and manual session analysis. These new views provide real-time visibility into what each Data Pump process is doing, what it is waiting on, and where it might be blocked.
Use Cases:
If a Data Pump job appears hung, GV$DATAPUMP_PROCESSWAIT_INFO can help identify which process is waiting and what it is waiting on.
If sessions are slow or idle, GV$DATAPUMP_SESSIONWAIT_INFO provides detailed timing and wait reasons.
If parallel execution is used, GV$DATAPUMP_PROCESS_INFO shows how many worker processes are active and whether they are all functioning as expected.
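To make that concrete, here is a hedged triage sketch built only from the columns visible in the sample outputs above (the job name comes from the earlier export example):

-- Which Data Pump processes are active for the job?
SELECT cur_date, program, sessionid, status, jobname, spid
FROM gv$datapump_process_info
WHERE jobname = 'SYS_EXPORT_FULL_01';

-- What are their sessions waiting on, longest waits first?
SELECT waiting_session, event, dp_seconds_in_wait, dp_state_in_wait
FROM gv$datapump_sessionwait_info
ORDER BY dp_seconds_in_wait DESC;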
The new Data Pump views in Oracle 23ai are a significant step forward for real-time performance diagnostics. These views provide valuable insights that were previously hard to obtain, allowing DBAs to troubleshoot and optimize Data Pump operations with much more confidence and clarity.
Have you used coe_load_sql_profile.sql before? A lot of people use coe_xfr_sql_profile.sql from SQLT, and while both scripts deal with SQL profiles in Oracle, their purposes and use cases differ. coe_xfr_sql_profile.sql is used to export and migrate an existing SQL Profile from one system to another, ensuring performance stability across environments. coe_load_sql_profile.sql is used to create a new SQL Profile by capturing the execution plan from a modified SQL query and applying it to the original query, forcing it to use the optimized plan.
Let me first explain a little more about the toolkit. Oracle SQLT (SQLTXPLAIN) is a powerful tool designed to help DBAs analyze and troubleshoot SQL performance issues, and both scripts mentioned above are part of the kit, provided by Oracle and written by none other than Carlos Sierra.
A common question DBAs encounter is: Can we plug the execution plan of one SQL ID into another SQL ID? …. The answer is YES! This can be accomplished using the SQLT script coe_load_sql_profile.sql. In this blog, we will explore how to use this script to achieve plan stability by enforcing a preferred execution plan across different SQL IDs. The script searches both memory and AWR for the text of the SQL IDs you pass, then queries GV$SQL_PLAN and DBA_HIST_SQL_PLAN to extract the plan hash value of the modified SQL. Once it has collected that information, it loops through the modified SQL’s execution plan to extract its optimizer hints. Finally, it creates a SQL Profile using DBMS_SQLTUNE.IMPORT_SQL_PROFILE.
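For context, here is a simplified sketch of that final step, the kind of DBMS_SQLTUNE.IMPORT_SQL_PROFILE call the script builds internally; the hint list, SQL text, and profile name below are purely illustrative:

DECLARE
  v_hints sys.sqlprof_attr;
BEGIN
  -- Outline hints harvested from the modified SQL's execution plan (illustrative)
  v_hints := sys.sqlprof_attr(
               'BEGIN_OUTLINE_DATA',
               'FULL(@"SEL$1" "DIXIT"@"SEL$1")',
               'END_OUTLINE_DATA');
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => 'select ename from DIXIT where ename=''Name''',  -- text of the ORIGINAL SQL
    profile     => v_hints,
    name        => 'PROFILE_78a1nbdabcba',                          -- illustrative name
    force_match => TRUE,
    replace     => TRUE);
END;
/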
Let’s give a quick demo … assume we have two SQL statements:
SQL ID 1: 78a1nbdabcba (Original SQL)
SQL ID 2: 9na182nn2bnn (Modified SQL)
Both queries are logically similar but produce different execution plans. Our goal is to take the execution plan from SQL ID 2 (the modified SQL) and apply it to SQL ID 1 (the original SQL).
connect system/monkey123
SQL> @coe_load_sql_profile.sql
or
SQL> START coe_load_sql_profile.sql <ORIGINAL_SQL_ID> <MODIFIED_SQL_ID>
Parameter 1:
ORIGINAL_SQL_ID (required)
Enter value for 1: 78a1nbdabcba
Parameter 2:
MODIFIED_SQL_ID (required)
Enter value for 2: 9na182nn2bnn
PLAN_HASH_VALUE AVG_ET_SECS
-------------------- --------------------
1181381381 .003
Parameter 3:
PLAN_HASH_VALUE (required)
Enter value for 3: 1181381381
Values passed to coe_load_sql_profile:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ORIGINAL_SQL_ID: "78a1nbdabcba"
MODIFIED_SQL_ID: "9na182nn2bnn"
PLAN_HASH_VALUE: "1181381381"
.
.
.
ORIGINAL:78a1nbdabcba MODIFIED:9na182nn2bnn PHV:1181381381 SIGNATURE:16731003137917309319 CREATED BY COE_LOAD_SQL_PROFILE.SQL
SQL>SET ECHO OFF;
****************************************************************************
* Enter password to export staging table STGTAB_SQLPROF_78a1nbdabcba
****************************************************************************
Export: Release 19.0.0.0.0 - Production on Sun Mar 08 14:45:47 2012
Copyright (c) 1982, 2024, Oracle and/or its affiliates. All rights reserved.
Password:
.
.
.
coe_load_sql_profile completed.
Run the original query:
SQL> select ename from DIXIT where ename='Name';
Plan hash value: 1181381381
---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 3 (0)|
|* 1 | TABLE ACCESS FULL| DIXIT | 1 | 6 | 3 (0)|
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ENAME"='Name')
Note
-----
- SQL profile "78a1nbdabcba_1181381381" used for this statement
What are your experiences with enforcing execution plans in Oracle? Let me know in the comments!
This happened a while ago, but it was an interesting scenario during a database migration. We encountered a unique issue where the system timezone varied depending on how the database connection was established. Specifically, when connecting to the database using a service name, the session timezone was set to EST, whereas a direct connection (without using a service name) correctly showed the timezone as UTC. This discrepancy had the potential to cause severe inconsistencies in time-sensitive transactions and logging mechanisms, prompting an urgent need for resolution.
[oracle@xxxxxx ~]$ sqlplus monkey/xxxxxxx@MONKEYD
SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
NOW
-------------------
02-08-2025 13:20:07 -- Displaying EST timezone
SQL> !date
Sat Feb 8 18:20:10 UTC 2025 -- Server's system timezone is UTC
When connecting without specifying a service name:
[oracle@xxxxx 11.2.0.4]$ sqlplus monkey/xxxxxxx
SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
NOW
-------------------
02-08-2025 18:15:40 -- Correctly showing UTC timezone
SQL> !date
Sat Feb 8 18:15:42 UTC 2025 -- Server's system timezone remains UTC
Upon detailed investigation, we determined that the issue stemmed from the Oracle Clusterware (HAS) environment settings. Specifically:
Listener Start Method: When the listener was started via srvctl, the database sessions picked up the incorrect timezone (EST). When the listener was started manually, the correct timezone (UTC) was applied.
We reviewed the file /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt, which is responsible for defining environmental variables for Oracle Clusterware. The file contained the incorrect timezone setting: TZ=America/New_York # Incorrect setting enforcing EST … This setting was overriding the expected system timezone when the listener was started via srvctl.
The previous production environment, running Oracle 11g with Oracle Restart, had the setting TZ=GMT-00:00, which correctly maintained UTC behavior. The new setup on Oracle 19c had TZ=America/New_York, leading to the observed inconsistency.
To resolve the issue, we first took a backup of the affected configuration file before updating the TZ parameter:
cp /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt \
   /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt.bak
We then modified TZ=America/New_York in the environment file /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt, updating it to TZ=UTC.
Since Oracle Restart enforces the timezone setting at the environment level, a restart of services was required. This step was planned within a scheduled outage window to avoid disruption.
After applying the changes and restarting the necessary services, we verified that the database sessions were now consistently reporting the correct timezone (UTC):
sqlplus monkey/xxxxxxx@MONKEYD

SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
The output now correctly showed UTC time, confirming that the issue was successfully resolved.
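As an additional sanity check (a sketch of our own, not part of the original fix), SYSTIMESTAMP prints the timezone offset directly, which makes any discrepancy between connections obvious at a glance:

SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW",
            TO_CHAR(SYSTIMESTAMP, 'MM-DD-YYYY HH24:MI:SS TZR') AS "NOW_WITH_TZ"
     FROM DUAL;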
This issue highlighted a subtle but critical misconfiguration that affected database session timezones in a clustered environment. The root cause was traced back to the Oracle Restart configuration file, which was enforcing the wrong timezone (America/New_York). By updating this setting to UTC and restarting the HAS services, we successfully restored consistency across all database sessions.
Moving forward, these insights will help ensure that similar issues are proactively identified and mitigated during future database migrations. Proper validation of environment settings, particularly in Oracle Clusterware and HA setups, is crucial to maintaining operational integrity and preventing timezone-related anomalies in mission-critical applications.
A long-standing challenge in Oracle EBS 12.1 involves journal imports taking an excessive amount of time due to exclusive locking on the GL.GL_CODE_COMBINATIONS table. This issue significantly slows down the entire journal import process by causing serialization, ultimately impacting system performance.
The problem arises during the journal import process when records are inserted into the GL_BALANCES table: the import takes an exclusive table-level lock on GL_CODE_COMBINATIONS, so only one import process can proceed at a time and subsequent imports are forced to wait. The contention manifests as waits on the event ‘enq: TM – contention’.
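To watch this contention live, here is a diagnostic sketch using standard v$ views; for ‘enq: TM – contention’, the session’s P2 parameter is the object ID of the locked table, which in this scenario should resolve to GL_CODE_COMBINATIONS:

-- Sessions currently waiting on TM enqueues, with the object they are waiting for
SELECT s.sid, s.serial#, s.event, s.seconds_in_wait, o.owner, o.object_name
FROM v$session s
JOIN dba_objects o ON o.object_id = s.p2
WHERE s.event = 'enq: TM - contention';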
Understanding Profile Options: Flexfields:Validate on Server and Flexfields:Shared Table Lock
Flexfields: Validate on Server
This profile decides if flexfield validations take place on the server or client.
Turning this profile on (Yes) or off (No) only changes where validations run, not their validity.
If set to Yes, validations occur on the server, decreasing client-side workload.
Flexfields:Shared Table Lock
This setting determines the locking behavior during client-side validation.
When set to Yes, the flexfield table (GL_CODE_COMBINATIONS) is locked in shared mode.
When set to No, the table is locked in exclusive mode, leading to serialization and blocking other processes.
Note: If Flexfields:Validate on Server is set to No, the Flexfields:Shared Table Lock profile setting takes effect.
How These Profiles Impact Journal Import Performance : The Journal Import program is indirectly affected by these profile settings. The issue arises when the import process attempts to create new code combinations using the flexfield validation routine.
If Flexfields:Validate on Server is set to No, the system relies on client-side validation, which then enforces the Flexfields:Shared Table Lock setting.
If Flexfields:Shared Table Lock is set to No, the system applies exclusive locks on GL_CODE_COMBINATIONS, leading to serialization of imports.
To improve journal import performance and minimize locking issues, consider the following:
Set Flexfields:Validate on Server to Yes
This shifts validation to the server, reducing contention on GL_CODE_COMBINATIONS.
Ensure Flexfields:Shared Table Lock is set to Yes (if server-side validation is disabled)
This prevents exclusive table-level locks and allows multiple imports to run concurrently.
Optimize Code Combination Creation
Regularly clean up and archive unused or obsolete code combinations to reduce contention.
Implement proper indexing on frequently accessed flexfield tables to enhance performance.
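To verify how these two profiles are currently set, here is a sketch against the standard FND dictionary tables; the exact joins can vary slightly between EBS releases, and values may also be set at levels other than Site:

SELECT t.user_profile_option_name,
       v.level_id,
       v.profile_option_value
FROM fnd_profile_options o
JOIN fnd_profile_options_tl t
  ON t.profile_option_name = o.profile_option_name
 AND t.language = 'US'
JOIN fnd_profile_option_values v
  ON v.profile_option_id = o.profile_option_id
 AND v.application_id = o.application_id
WHERE t.user_profile_option_name IN
      ('Flexfields:Validate on Server', 'Flexfields:Shared Table Lock');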
Oracle Database 23c introduces a powerful new feature allowing concurrent refresh of materialized views. This enhancement improves performance by enabling multiple users to refresh the same materialized view at the same time.
Before Oracle 23c, refreshing a materialized view (to reflect changes in the base tables) could only happen sequentially. This meant:
If multiple users triggered a refresh at the same time, Oracle handled them one by one.
This could cause delays in environments where real-time data updates were needed.
For example, during month-end reporting, if multiple users tried to refresh a sales summary MV, they had to wait for each other, impacting productivity.
Oracle 23c allows concurrent refreshes, enabling multiple users to refresh the materialized view simultaneously. This improves efficiency, especially for fast-refresh MVs, which update incrementally. Let’s walk through a detailed example to demonstrate how to create, configure, and utilize concurrent refresh.
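For completeness, here are hypothetical definitions of the two base tables this demo assumes (they were not part of the original example):

CREATE TABLE customers (
  customer_id   NUMBER PRIMARY KEY,
  customer_name VARCHAR2(100));

CREATE TABLE sales (
  sale_id     NUMBER PRIMARY KEY,
  sale_date   DATE,
  amount      NUMBER,
  customer_id NUMBER REFERENCES customers (customer_id));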
Next, we create materialized view logs; these track changes to the base tables, which is what enables fast refresh.
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE (sale_id, sale_date, amount, customer_id)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON customers
WITH ROWID, SEQUENCE (customer_id, customer_name)
INCLUDING NEW VALUES;
-- Create the MView with the concurrent refresh option.
CREATE MATERIALIZED VIEW sales_summary_mv
REFRESH FAST ON COMMIT
ENABLE CONCURRENT REFRESH
AS
SELECT c.customer_name,
COUNT(s.sale_id) AS total_sales,
SUM(s.amount) AS total_amount
FROM sales s
JOIN customers c ON s.customer_id = c.customer_id
GROUP BY c.customer_name;
-- collect stats to reflect row changes.
exec dbms_stats.gather_table_stats(null, 'sales_summary_mv');
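Using the hypothetical base tables defined earlier, a couple of sample changes give the refresh something to do (note that with ON COMMIT, the MV also refreshes automatically at the COMMIT below):

INSERT INTO customers VALUES (1, 'ACME Corp');
INSERT INTO sales VALUES (100, SYSDATE, 250, 1);
COMMIT;  -- ON COMMIT MV refreshes here; the manual refresh calls below also work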
Now we can trigger concurrent refresh.
EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F'); -- Fast refresh
EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'C'); -- Complete refresh
-- In our case concurrent_refresh_enabled returns Y, meaning concurrent refresh is enabled.
SELECT mview_name, concurrent_refresh_enabled
FROM user_mviews
WHERE mview_name = 'SALES_SUMMARY_MV';
MVIEW_NAME CONCURRENT_REFRESH_ENABLED
------------------------------ ------------------------------
SALES_SUMMARY_MV Y
Oracle 23c’s concurrent refresh for materialized views is a game-changer for data-intensive applications. By allowing multiple users to refresh views in parallel, businesses can significantly boost performance and ensure faster reporting.
Oracle Database 23ai introduces a groundbreaking feature that simplifies the renaming of LOB segments. This enhancement offers database administrators a more efficient way to manage large data objects without resorting to complex workarounds.
LOBs play a critical role in storing massive data types such as images, documents, and videos. As databases grow and evolve, LOB segment names can become outdated or misaligned with business needs. Previously, renaming required duplicating LOBs, migrating data, and deleting old segments—an inefficient process. Before Oracle 23ai:
Create a New LOB Column: A new LOB column had to be created in the table with the desired segment name.
Migrate Data: Data from the old LOB segment had to be copied to the new LOB column.
Drop Old LOB: Once the migration was completed, the old LOB column and segment were dropped.
Rename Indirectly: This indirect process often required downtime and posed risks of data loss or corruption during migration.
Example (Pre-23ai Approach):
ALTER TABLE media ADD (file_data_temp BLOB);
UPDATE media SET file_data_temp = file_data;
ALTER TABLE media DROP COLUMN file_data;
ALTER TABLE media RENAME COLUMN file_data_temp TO file_data;
This method was cumbersome and inefficient for large databases … With Oracle 23ai, renaming LOB segments is now as simple as running a single command 🙂 🙂
Renaming LOB segments can be achieved with the following straightforward SQL command:
ALTER TABLE <table_name> MODIFY LOB (<lob_column>) (RENAME TO <new_segment_name>);
Suppose you manage a media table with a LOB column file_data, currently named FILE_DATA_LOB. To rename it to MEDIA_ARCHIVE_LOB:
ALTER TABLE media MODIFY LOB (file_data) (RENAME TO MEDIA_ARCHIVE_LOB);
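You can confirm the result from the standard USER_LOBS dictionary view:

SELECT table_name, column_name, segment_name
FROM user_lobs
WHERE table_name = 'MEDIA';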
Practical Applications:
Data Reorganization: Update segment names to align with evolving data categories.
Simplifying Audits: Ensure LOB segment names clearly represent their purpose.
Consistency Enforcement: Standardize naming conventions across different LOB segments.
Important Considerations
Dependencies: Check for application-level dependencies that reference LOB segment names.
User Privileges: Ensure you have the required ALTER permissions.
Tablespace Integrity: Segment renaming does not affect the tablespace or physical location.
Automating LOB Segment Renaming
For larger environments, automating LOB segment renaming can save significant time. The following PL/SQL script demonstrates how to rename multiple segments programmatically:
BEGIN
  -- Loop over every LOB column owned by the current user
  FOR r IN (SELECT table_name, column_name FROM user_lobs) LOOP
    -- Rename each LOB segment to <column_name>_UPDATEDLOB
    EXECUTE IMMEDIATE 'ALTER TABLE ' || r.table_name ||
                      ' MODIFY LOB (' || r.column_name ||
                      ') (RENAME TO ' || r.column_name || '_UPDATEDLOB)';
  END LOOP;
END;
/
This script appends _UPDATEDLOB to all LOB segments in user tables.
Final Thoughts
The ability to rename LOB segments directly in Oracle Database 23ai marks a significant step forward in database administration. By eliminating complex workarounds, this feature enhances efficiency and ensures database environments remain well-organized. Whether standardizing segment names or adapting to shifting business needs, this capability helps administrators to maintain cleaner, more streamlined data structures easily.
Oracle Database 23ai introduces a new parameter, SESSION_EXIT_ON_PACKAGE_STATE_ERROR, designed to enhance session management and prevent potential data corruption by enforcing a hard session exit when the session state becomes invalidated.
Why SESSION_EXIT_ON_PACKAGE_STATE_ERROR Matters
In typical Oracle database environments, stateful PL/SQL packages, MLE modules, or environments may be modified while sessions actively use them. This can lead to errors such as:
ORA-04068: Can occur when a PL/SQL package body is recompiled, invalidating the session state.
ORA-4106 / ORA-4107: Can be raised when an MLE module or environment is altered via DDL, invalidating the session.
By default, the session remains active and throws an error when the invalid package or module is called. However, many applications may not properly handle these errors, leading to silent data corruption or unexpected behavior.
The SESSION_EXIT_ON_PACKAGE_STATE_ERROR parameter mitigates this risk by forcing an immediate session exit instead of raising an error.
Some of the benefits of using this parameter:
Prevents Data Corruption: By terminating sessions with invalid state, the risk of silent data corruption is reduced.
Simplifies Error Handling: Many applications are better at handling session disconnects than catching specific errors like ORA-04068.
Consistency Across Sessions: Ensures that all sessions dealing with modified packages or MLE modules are treated consistently, minimizing inconsistencies.
How SESSION_EXIT_ON_PACKAGE_STATE_ERROR Works
When SESSION_EXIT_ON_PACKAGE_STATE_ERROR is set to TRUE, the following behavior is enforced:
PL/SQL Package Modification:
If a stateful PL/SQL package is modified, any active session that tries to invoke the package receives ORA-04068. With this parameter set to TRUE, the session exits immediately instead of raising the error.
MLE Module or Environment Modification:
If an MLE module or environment is modified via DDL, active sessions receive ORA-4106 or ORA-4107. With SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE, these sessions are forcibly disconnected.
Application Handling:
Most applications are designed to capture session disconnects and reestablish connections, streamlining recovery from session invalidation.
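Here is a minimal sketch of the classic ORA-04068 scenario this parameter addresses (hypothetical package; run the two halves in separate sessions):

-- Session 1: create a stateful package and initialize its state
CREATE OR REPLACE PACKAGE pkg_state AS
  g_counter NUMBER := 0;   -- package-level variable makes the package stateful
  PROCEDURE bump;
END;
/
CREATE OR REPLACE PACKAGE BODY pkg_state AS
  PROCEDURE bump IS BEGIN g_counter := g_counter + 1; END;
END;
/
EXEC pkg_state.bump;   -- session 1 now holds package state

-- Session 2: recompile the body, invalidating session 1's state
ALTER PACKAGE pkg_state COMPILE BODY;

-- Back in session 1:
--   default (FALSE): the next call raises ORA-04068 (existing state discarded)
--   with SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE: the session exits instead
EXEC pkg_state.bump;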
Use Cases
High-Availability Environments: In systems where continuous uptime is critical, preventing data corruption is paramount.
Distributed Applications: Applications spread across multiple environments that frequently modify PL/SQL packages or MLE modules benefit from session termination to maintain data integrity.
Oracle RAC Deployments: Different instances in an Oracle RAC environment can independently configure this parameter, allowing fine-grained control based on workload requirements.
Configuring SESSION_EXIT_ON_PACKAGE_STATE_ERROR:
Examples:
ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
ALTER SESSION SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE SCOPE = SPFILE;
Considerations
Default Behavior: By default, this parameter is set to FALSE, meaning sessions will raise errors rather than exit.
Testing and Validation: Test this configuration in lower environments to ensure application compatibility.
Session Management: Monitor session disconnects to ensure that forced exits do not disrupt critical workflows.
Conclusion
SESSION_EXIT_ON_PACKAGE_STATE_ERROR is a powerful new feature in Oracle Database 23ai that enhances session management by enforcing session termination on package or module state invalidation. By using this parameter, Oracle environments can significantly reduce the risk of data corruption and streamline error handling processes across diverse applications. Whether managing PL/SQL packages or MLE modules, this parameter offers greater control and reliability for database administrators and developers both.
Migrations can be a horrifying experience: tricky, complex, time-intensive, and often riddled with unexpected challenges. This becomes even more evident when you’re migrating between older database versions, where architectural and component-level changes are significant. I remember one such encounter during a migration from Oracle 11g to 19c on new infrastructure. We used RMAN DUPLICATE with the NOOPEN clause to restore the source database backup on the target before running the manual upgrade procedures; the process seemed smooth initially but soon ran into a host of issues with key database components.
The Problem
During the upgrade process, several critical components failed, leaving the database in an inconsistent state. The errors revolved around the following components:
COMP_ID   COMP_NAME                      VERSION      STATUS
--------  -----------------------------  -----------  ---------
JAVAVM    JServer JAVA Virtual Machine   11.2.0.4.0   UPGRADING
XML       Oracle XDK                     19.0.0.0.0   INVALID
CATJAVA   Oracle Database Java Packages  19.0.0.0.0   INVALID
The errors observed in dbupgrade runtime logs included:
ORA-29554: unhandled Java out of memory condition
ORA-06512: at "SYS.INITJVMAUX", line 230
ORA-06512: at line 5
ORA-29548: Java system class reported: release of Java system classes in the database (11.2.0.4.190115) does not match that of the oracle executable (19.0.0.0.0 1.8)
These errors stemmed from a failure to allocate sufficient memory during the upgrade process. The Java Virtual Machine (JVM) ran out of memory, causing cascading errors that invalidated other components like Oracle XDK and Java Database Packages (CATJAVA). This wasn’t a mere inconvenience—it meant that critical database functionality was broken, making the system unusable for applications relying on these components.
Root Cause
Upon investigation, we found that the issue was caused by using a temporary RMAN parameter file during the restore process. This parameter file contained a minimal set of initialization parameters, which were insufficient to handle the resource-intensive operations required during the upgrade, particularly for recompiling and validating Java components.
Key memory areas like the SGA, shared pool, large pool, and Java pool were inadequately configured. These areas play a crucial role during the execution of upgrade scripts such as dbupgrade, catctl.pl, or catupgrd.sql. Without sufficient memory, the upgrade process for these components failed midway, leaving them in an invalid state.
The Fix
To resolve these issues and ensure the migration proceeded smoothly, the following steps were taken:
Step 1: Adjust Initialization Parameters
The first step was to allocate adequate memory for the Java components to prevent out-of-memory conditions. Critical parameters like the Java pool and other memory pools were adjusted to handle the load during the upgrade process:
ALTER SYSTEM SET java_jit_enabled = TRUE;
ALTER SYSTEM SET "_system_trig_enabled" = TRUE;
ALTER SYSTEM SET java_pool_size = 180M;  -- Ensure at least 150 MB is allocated
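Before restarting the upgrade, a quick look at the current pool allocations helps confirm the change took effect; this sketch uses the standard v$sga_dynamic_components view:

SELECT component, current_size/1024/1024 AS current_mb
FROM v$sga_dynamic_components
WHERE component IN ('java pool', 'shared pool', 'large pool');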
Step 2: Recreate the Java Component
The next step was to drop and recreate the Java component in the database. This ensured that any inconsistencies caused by the previous upgrade failure were cleaned up:
CREATE OR REPLACE JAVA SYSTEM;
Step 3: Restart the Upgrade Scripts
After fixing the memory settings and recreating the Java component, the upgrade process was restarted using Oracle’s upgrade utilities:
dbupgrade: The recommended tool for 19c migrations.
catctl.pl: For manual control over the upgrade process.
catupgrd.sql: A fallback script for older methods.
Logs such as upg_summary.log were closely monitored during the process to catch any errors or exceptions in real-time.
Step 4: Verify the Upgrade
Once the upgrade process was completed, the status of all components was verified using the DBA_REGISTRY and DBA_REGISTRY_HISTORY views:
SELECT SUBSTR(comp_name, 1, 30) comp_name,
       SUBSTR(version, 1, 20) version,
       status
FROM dba_registry
ORDER BY comp_name;
Expected output:
COMP_NAME                      VERSION              STATUS
------------------------------ -------------------- ---------------
JServer JAVA Virtual Machine   19.0.0.0.0           UPGRADED
Key Takeaways
This experience highlighted several crucial lessons when handling database migrations, especially for major version upgrades like 11g to 19c:
1. Adequate Initialization Parameters Are Essential
The memory-related initialization parameters (java_pool_size, shared_pool_size, etc.) must be configured appropriately before starting the upgrade process. Using a minimal parameter file during RMAN DUPLICATE can lead to critical issues if not adjusted later.
2. Resource-Intensive Components Need Extra Attention
Components like JAVAVM, Oracle XDK, and CATJAVA are highly resource-intensive. Even slight memory misconfigurations can lead to cascading failures that disrupt the entire migration process.
3. Monitor Upgrade Logs Closely
Keeping an eye on upgrade runtime logs and the summary logs (upg_summary.log) is vital for catching errors early. This allows you to address issues promptly before they snowball into larger problems.
4. Understand Dependencies
Database components often have interdependencies. For instance, a failure in the Java Virtual Machine component affected both the Oracle XDK and CATJAVA packages. Understanding these dependencies is key to resolving issues effectively.
Conclusion
Database migrations are inherently challenging, especially when dealing with major version jumps. This particular experience from migrating Oracle 11g to 19c served as a valuable reminder of the importance of preparation, thorough testing, and paying close attention to resource configurations. With the right approach, even complex migrations can be navigated successfully, ensuring the database is ready for modern workloads and enhanced performance.
By addressing these pitfalls and being proactive, you can ensure a smoother upgrade process and avoid unnecessary downtime or functionality issues.
Let me know if this approach resonates with your migration experiences!
While addressing a recent materialized view refresh issue, I revisited a feature introduced in Oracle 12.1—out-of-place refresh. It’s a concept I’ve been familiar with for some time but hadn’t had the opportunity to explore practically until now. This innovation addresses key challenges faced during materialized view refreshes in data-heavy environments, offering a smarter and more efficient alternative to traditional methods.
Why Materialized Views Are Powerful (and Challenging) ?
Materialized views (MVs) simplify data retrieval by precomputing and storing query results as physical tables. They are especially useful for complex queries that join data from multiple tables. However, as materialized views grow in size, refreshing them becomes a major bottleneck.
The default refresh method, COMPLETE refresh, removes all existing data before repopulating the view. While effective for smaller datasets, this approach becomes painfully slow for large materialized views, as it relies on delete operations followed by insertions.
To mitigate this, some administrators opt for atomic_refresh = FALSE, which uses the faster TRUNCATE operation instead of delete statements. While truncate drastically improves performance, it comes with a trade-off: materialized views become inaccessible during refresh, impacting any dependent applications.
Out-of-Place Refresh to the Rescue
Oracle’s out-of-place refresh, introduced in 12.1, provides a great solution. Instead of working directly on the materialized view, the refresh process creates temporary outside tables. The necessary updates are performed on these temporary tables, which are then swapped with the materialized view once the refresh is complete. This approach minimizes downtime and ensures that materialized views remain accessible throughout the process.
Out-of-place refresh supports all major refresh methods, including FAST (‘F’), COMPLETE (‘C’), and PCT (‘P’). It is particularly effective in scenarios involving large data changes, where traditional methods struggle to scale. This feature also avoids issues like fragmentation or exposure of intermediate results during refresh.
Here’s how you can enable out-of-place refresh using the DBMS_MVIEW package:
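The calls below are a sketch matching the description that follows; the MV name is illustrative, while method, atomic_refresh, and out_of_place are documented DBMS_MVIEW.REFRESH parameters:

-- Example 1: attempt an out-of-place FAST refresh, falling back to PCT and then COMPLETE
EXEC DBMS_MVIEW.REFRESH('SALES_MV', method => '?', atomic_refresh => FALSE, out_of_place => TRUE);

-- Example 2: out-of-place COMPLETE refresh
EXEC DBMS_MVIEW.REFRESH('SALES_MV', method => 'C', atomic_refresh => FALSE, out_of_place => TRUE);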
In the first example, Oracle attempts an out-of-place FAST refresh; if that is not possible, it tries a PCT refresh and finally falls back to a complete refresh. In the second example, it performs an out-of-place complete refresh of the MV.
During the refresh you will see operations like the following happening in the background against the auxiliary/temp table:
/* MV_REFRESH (ITB) */INSERT /*+ APPEND */ INTO "FATDBA"."RV$B318VA" (SELECT /*+ PARALLEL ("xxx",10) */ xxxxx
-- This is where it starts copying rows into this temp table.
/* MV_REFRESH (CIDX) */ CREATE INDEX "FATDBA"."RV$KHA121" ON "FATDBA"."RV$B318VA" ("x", "y", "z") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE( INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "xxxx" PARALLEL 10
-- Once done with the data insertion in the temp table, it starts to create same indexes that are there on the existing MV base table.
Note :
The abbreviations ITB (Insert Temporary Table) and CIDX (Create Index) are typically used as internal identifiers in Oracle's materialized view refresh operations.
Points to Consider
While the out-of-place refresh is a powerful tool, there are a few considerations to keep in mind:
Temporary Storage Needs: Adequate tablespace is required to hold two copies of the materialized view during refresh.
Recycle Bin Impact: If enabled, the recycle bin retains older copies of the table. It’s recommended to disable the recycle bin to avoid unnecessary space consumption.
Statistics Management: While index statistics are automatically computed during refresh, table statistics are not. Manual intervention may be needed.
Parallelism on Indexes: During an out-of-place refresh, Oracle recreates the same indexes that exist on the existing MV base table, so it is a good idea to add some DOP to those indexes (and remove it once the refresh is done); otherwise index creation runs serially and the whole refresh can take a very long time for bulky MVs.
Errors during table swap: In a very busy environment, you might hit ‘ORA-08103: object no longer exists’ while the out-of-place method swaps the tables and their indexes.
Compatibility:
Unsupported for remote, cube, or object materialized views.
Not compatible with LOB columns.
Requires atomic_refresh = FALSE.
Final Thoughts
The out-of-place refresh feature in Oracle 12.1 is a significant improvement for managing materialized views in large databases. By executing refresh operations on temporary tables, it optimizes performance and ensures continuous availability, making it an invaluable tool for data-intensive environments.
If your materialized views are becoming cumbersome to manage, consider leveraging out-of-place refresh. It not only streamlines the refresh process but also enhances the overall efficiency and reliability of your database operations.