Tales From A Lazy Fat DBA

It's all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Resolving Performance Issues in Oracle EBS 12.1 Journal Imports Due to GL Code Combinations Exclusive Table Lock

Posted by FatDBA on February 18, 2025

A long-standing challenge in Oracle EBS 12.1 is journal imports taking an excessive amount of time because of exclusive locking on the GL.GL_CODE_COMBINATIONS table. The lock serializes the journal import process, ultimately degrading overall system performance.

The problem surfaces during journal import, when records are inserted into the GL_BALANCES table and new code combinations are validated. Because GL_CODE_COMBINATIONS is locked exclusively during this step, only one import process can proceed at a time, and subsequent imports queue behind it. The contention manifests as waits on the event ‘enq: TM – contention’, the signature of a table-level exclusive lock.
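
To confirm that GL_CODE_COMBINATIONS is indeed the object behind the waits, you can map the TM enqueue back to the locked object. A minimal diagnostic sketch, assuming standard dictionary views (for ‘enq: TM – contention’, the P2 value of the wait carries the object ID):

-- Sessions waiting on the TM enqueue, mapped to the locked object
SELECT s.sid,
       s.serial#,
       s.event,
       o.owner,
       o.object_name
FROM   v$session s
       JOIN dba_objects o ON o.object_id = s.p2   -- P2 = object ID for TM enqueues
WHERE  s.event = 'enq: TM - contention';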

Understanding Profile Options: Flexfields:Validate on Server and Flexfields:Shared Table Lock

  • Flexfields:Validate on Server
    • This profile decides whether flexfield validations run on the server or on the client.
    • Setting it to Yes or No only changes where validations run, not the validation rules themselves.
    • When set to Yes, validations occur on the server, reducing client-side workload.
  • Flexfields:Shared Table Lock
    • This setting determines the locking behavior during client-side validation.
    • When set to Yes, the flexfield table (GL_CODE_COMBINATIONS) is locked in shared mode.
    • When set to No, the table is locked in exclusive mode, leading to serialization and blocking other processes.
    • Note: If Flexfields:Validate on Server is set to No, the Flexfields:Shared Table Lock profile setting takes effect.

How These Profiles Impact Journal Import Performance: The Journal Import program is indirectly affected by these profile settings. The issue arises when the import process attempts to create new code combinations using the flexfield validation routine.

  • If Flexfields:Validate on Server is set to No, the system relies on client-side validation, which then enforces the Flexfields:Shared Table Lock setting.
  • If Flexfields:Shared Table Lock is set to No, the system applies exclusive locks on GL_CODE_COMBINATIONS, leading to serialization of imports.
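
To see how these profiles are currently set, you can query the profile tables directly. A minimal sketch, assuming an APPS connection and site-level values (LEVEL_ID 10001 is the site level):

-- Current site-level values of the two flexfield profiles
SELECT p.user_profile_option_name,
       v.profile_option_value
FROM   fnd_profile_options_vl p
       JOIN fnd_profile_option_values v
         ON v.profile_option_id = p.profile_option_id
WHERE  p.user_profile_option_name IN ('Flexfields:Validate on Server',
                                      'Flexfields:Shared Table Lock')
AND    v.level_id = 10001;  -- site level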

To improve journal import performance and minimize locking issues, consider the following:

  1. Set Flexfields:Validate on Server to Yes
    • This shifts validation to the server, reducing contention on GL_CODE_COMBINATIONS.
  2. Ensure Flexfields:Shared Table Lock is set to Yes (if server-side validation is disabled)
    • This prevents exclusive table-level locks and allows multiple imports to run concurrently.
  3. Optimize Code Combination Creation
    • Regularly clean up and archive unused or obsolete code combinations to reduce contention.
    • Implement proper indexing on frequently accessed flexfield tables to enhance performance.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

Optimizing MySQL Redo Log Performance in OCI

Posted by FatDBA on February 16, 2025

When deploying databases in the cloud, ensuring efficient redo log handling is crucial for maintaining performance and consistency. Many organizations struggle with suboptimal commit times due to increased latency introduced by virtualized cloud environments. This article explores common redo log bottlenecks and provides practical strategies to optimize transaction commit performance in cloud-based databases.

In a database, redo logs play a vital role in maintaining durability and crash recovery. Every commit operation involves writing changes to redo logs before confirming the transaction to the user. In an on-premise setup, disk latencies are relatively stable and predictable. However, in the cloud, factors like shared storage, networked disks, and hypervisor scheduling introduce additional delays that impact redo log efficiency.

To diagnose redo log-related latency, you can use Oracle wait events or MySQL’s performance schema. In Oracle, the event “log file sync” can indicate significant wait times on redo logs. Similarly, in MySQL, the following query provides insights into log-related latencies:

SELECT EVENT_NAME, SUM_TIMER_WAIT / 1000000000 AS wait_time_ms
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/io/file/innodb_log%';

High wait times suggest potential issues with redo log performance that need addressing.

Optimizing Redo Log Writes in the Cloud

Use Low-Latency Storage Options – OCI offers various storage tiers, and selecting the right one is crucial for redo log performance. Use Block Storage with Auto-Tuning Performance or BM.DenseIO shapes for optimized log writes.

Additionally, placing redo logs on separate disks ensures that redo writes don’t compete with regular data file operations.

Tune Log Buffer Size – In MySQL, adjusting innodb_log_buffer_size can reduce the frequency of disk writes. Note that SET GLOBAL takes the value in bytes; suffixes like 128M work only in option files:

SET GLOBAL innodb_log_buffer_size = 134217728;  -- 128 MB

Optimize Commit Strategies – For high-throughput applications, batching commits can reduce redo log contention. Instead of committing after each transaction, applications can group multiple changes into a single commit. Example in MySQL using an explicit transaction:

START TRANSACTION;
INSERT INTO orders VALUES (1, 'New Order');
INSERT INTO orders VALUES (2, 'Second Order');
COMMIT;
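
Commit behavior itself is also tunable. As a hedged example, relaxing innodb_flush_log_at_trx_commit trades up to about a second of committed transactions on a crash for noticeably lower commit latency, so it only suits workloads that can tolerate that risk:

-- Default (1) flushes redo to disk at every commit: safest, slowest on high-latency cloud storage
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write at commit, flush to disk ~once per second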

Increase Parallelism for Redo Processing – In MySQL 8.0.22 and later, dedicated redo log writer threads (innodb_log_writer_threads) improve log writing efficiency on multi-core hosts, and matching innodb_log_write_ahead_size to the storage block size avoids read-on-write overhead:

SET GLOBAL innodb_log_writer_threads = ON;
SET GLOBAL innodb_log_write_ahead_size = 8192;  -- match the storage block size

Leverage Cloud-Native Optimizations – OCI offers database-specific optimizations. For Oracle workloads, Exadata-based shapes offload scan I/O to the storage layer (via storage indexes, for example), leaving more I/O bandwidth available for redo writes.

Redo log performance is a key factor in ensuring efficient and reliable database transactions in cloud environments. By selecting low-latency storage, tuning log buffer settings, optimizing commit strategies, increasing parallelism, and leveraging cloud-specific optimizations, you can significantly reduce commit latency and improve overall database responsiveness.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Leave a Comment »

ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ

Posted by FatDBA on February 16, 2025

Recently, I had a conversation with an old friend who was struggling with Data Pump import errors. He encountered the following error message:

"ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ."

The message itself provided a clear hint—the issue stemmed from a time zone (TZ) version mismatch between the source and target databases. He was trying to export data from a 19.20.0 database (DST 42) and import it into an older 19.18.0 database (DST 32).

This situation left him quite perplexed, as his database contained a large amount of time zone-dependent data, raising concerns about potential data inconsistencies and conversion errors.

How It All Started – Initially, when he created the database using Oracle 19.18.0, it was configured with DST version 32 by default.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat              32          0

Later, he upgraded his Oracle home to 19.20.0 and patched his existing 19.18.0 database. However, the DST version remained at 32: patching alone does not update the DST version unless it is explicitly upgraded (for example via AutoUpgrade or the DBMS_DST package).

The real issue arose when he created a brand-new database using the 19.20.0 home. Upon checking its DST version, he discovered that it had been assigned a newer DST version (42), which led to the compatibility problem.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_42.dat              42          0

The Core Problem
The mismatch between DST 42 (new database) and DST 32 (old database) meant that Data Pump could not process time zone-based data properly, causing the import failure.
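
Before moving data between databases, it's worth comparing the active DST version on each side with the highest version available in each Oracle home. A quick check, assuming standard dictionary access:

-- Active DST version used by this database
SELECT version FROM v$timezone_file;

-- Highest DST version shipped in this Oracle home's zoneinfo directory
SELECT dbms_dst.get_latest_timezone_version FROM dual;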

Unfortunately, there is no direct solution for this issue in Oracle 19c. Starting with Oracle 21c, however, the problem can be bypassed by setting the following hidden parameter, which overrides the strict time zone check:

alter system set "_datapump_bypass_tstz_check"=TRUE scope=both;

Since my friend was working with Oracle 19c, we explored several potential workarounds:

Kamil’s Solution: One promising approach was suggested by Kamil Stawiarski, who detailed a method to resolve the issue in his blog post:
Solved: ORA-39405 – Oracle Data Pump does not support importing from a source database with TSTZ. However, this method requires OS-level access, which my friend did not have in his environment, making it infeasible.

Preemptively Removing Newer DST Files: Another workaround we considered involved automating a cleanup process that periodically removes newer, unwanted DST files from all $ORACLE_HOME/oracore/zoneinfo directories and their subdirectories. This could be implemented as a scheduled cron job that runs regularly (or at least after each patch application) to prevent new databases from being created with newer DST files.

The downside? This approach would block access to newer time zone definitions, which could become problematic if applications or users require those updates in the future.

The Agreed Solution – After considering various options, we decided on the most practical and controlled approach:

When creating a new database, explicitly set the desired DST version at the time of creation.
This can be achieved by using DBCA (Database Configuration Assistant) or a custom script while ensuring that the ORA_TZFILE environment variable is set beforehand.

For example, using DBCA in silent mode with ORA_TZFILE exported first, the database picks up the desired DST version at creation:

export ORA_TZFILE=timezlrg_32.dat    # desired DST file, version 32 in this case
dbca -silent -createDatabase ...

By defining the DST version at the time of database creation, we can prevent mismatches and avoid compatibility issues when exporting/importing data across databases with different DST versions.

While there’s no perfect solution in Oracle 19c, proper planning and proactive DST version control can mitigate potential errors and ensure smoother database operations.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

Materialized Views: Concurrent Refresh in Oracle 23c

Posted by FatDBA on December 28, 2024

Oracle Database 23c introduces a powerful new feature allowing concurrent refresh of materialized views. This enhancement improves performance by enabling multiple users to refresh the same materialized view at the same time.

Before Oracle 23c, refreshing a materialized view (to reflect changes in the base tables) could only happen sequentially. This meant:

  • If multiple users triggered a refresh at the same time, Oracle handled them one by one.
  • This could cause delays in environments where real-time data updates were needed.

For example, during month-end reporting, if multiple users tried to refresh a sales summary MV, they had to wait for each other, impacting productivity.

Oracle 23c allows concurrent refreshes, enabling multiple users to refresh the materialized view simultaneously. This improves efficiency, especially for fast-refresh MVs, which update incrementally. Let’s walk through a detailed example to demonstrate how to create, configure, and utilize concurrent refresh.

CREATE TABLE sales (
  sale_id NUMBER(10),
  sale_date DATE,
  amount NUMBER(10, 2),
  customer_id NUMBER(10),
  CONSTRAINT sales_pk PRIMARY KEY (sale_id)
);

CREATE TABLE customers (
  customer_id NUMBER(10),
  customer_name VARCHAR2(100),
  CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);


INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO customers VALUES (2, 'Bob');

INSERT INTO sales VALUES (101, DATE '2024-01-01', 500, 1);
INSERT INTO sales VALUES (102, DATE '2024-01-05', 300, 1);
INSERT INTO sales VALUES (103, DATE '2024-01-10', 450, 2);

Next, we create materialized view logs. These track changes to the base tables, which is what enables fast refresh.

CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE (sale_id, sale_date, amount, customer_id)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON customers
WITH ROWID, SEQUENCE (customer_id, customer_name)
INCLUDING NEW VALUES;


--- Create MView with concurrent refresh option.
CREATE MATERIALIZED VIEW sales_summary_mv
REFRESH FAST ON COMMIT
ENABLE CONCURRENT REFRESH
AS
SELECT c.customer_name,
       COUNT(*) AS cnt,  -- COUNT(*) is required for fast refresh of aggregate MVs
       COUNT(s.sale_id) AS total_sales,
       SUM(s.amount) AS total_amount
FROM sales s
JOIN customers c ON s.customer_id = c.customer_id
GROUP BY c.customer_name;

-- collect stats to reflect row changes.
exec dbms_stats.gather_table_stats(null, 'sales_summary_mv');

Now we can trigger the refresh.

EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F');  -- Fast refresh
EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'C');  -- Complete refresh
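
To actually exercise the concurrency, the refresh must be fired from more than one session at once. A minimal sketch using DBMS_SCHEDULER to launch two refreshes in parallel (the job names are arbitrary):

BEGIN
  -- Two one-off jobs refreshing the same MV at the same time
  FOR i IN 1 .. 2 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'MV_REFRESH_JOB_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN DBMS_MVIEW.REFRESH(''SALES_SUMMARY_MV'', ''F''); END;',
      enabled    => TRUE,       -- start immediately
      auto_drop  => TRUE);      -- drop the job once it completes
  END LOOP;
END;
/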


-- In our case concurrent_refresh_enabled returns Y, meaning concurrent refresh is enabled.
SELECT mview_name, concurrent_refresh_enabled
FROM user_mviews
WHERE mview_name = 'SALES_SUMMARY_MV';

MVIEW_NAME                     CONCURRENT_REFRESH_ENABLED
------------------------------ ------------------------------
SALES_SUMMARY_MV              Y

Oracle 23c’s concurrent refresh for materialized views is a game-changer for data-intensive applications. By allowing multiple users to refresh views in parallel, businesses can significantly boost performance and ensure faster reporting.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , | Leave a Comment »

Effortless LOB Segment Renaming in Oracle 23ai: A Game-Changer for Database Management

Posted by FatDBA on December 28, 2024

Oracle Database 23ai introduces a groundbreaking feature that simplifies the renaming of LOB segments. This enhancement offers database administrators a more efficient way to manage large data objects without resorting to complex workarounds.

LOBs play a critical role in storing massive data types such as images, documents, and videos. As databases grow and evolve, LOB segment names can become outdated or misaligned with business needs. Previously, renaming required duplicating LOBs, migrating data, and deleting old segments—an inefficient process. Before Oracle 23ai:

  • Create a New LOB Column: A new LOB column had to be created in the table with the desired segment name.
  • Migrate Data: Data from the old LOB segment had to be copied to the new LOB column.
  • Drop Old LOB: Once the migration was completed, the old LOB column and segment were dropped.
  • Rename Indirectly: This indirect process often required downtime and posed risks of data loss or corruption during migration.

Example (Pre-23ai Approach):

ALTER TABLE media ADD (file_data_temp BLOB);
UPDATE media SET file_data_temp = file_data;
ALTER TABLE media DROP COLUMN file_data;
ALTER TABLE media RENAME COLUMN file_data_temp TO file_data;

This method was cumbersome and inefficient for large databases … With Oracle 23ai, renaming LOB segments is now as simple as running a single command 🙂 🙂

Renaming LOB segments can be achieved with the following straightforward SQL command:

ALTER TABLE <table_name> MODIFY LOB (<lob_column>) (RENAME TO <new_segment_name>);

Suppose you manage a media table with a LOB column file_data whose segment is currently named FILE_DATA_LOB. To rename the segment to MEDIA_ARCHIVE_LOB:

ALTER TABLE media MODIFY LOB (file_data) (RENAME TO MEDIA_ARCHIVE_LOB);
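
To verify the change, the segment name is visible in the USER_LOBS dictionary view:

-- Confirm the new segment name after the rename
SELECT table_name, column_name, segment_name
FROM   user_lobs
WHERE  table_name = 'MEDIA';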

Practical Applications:

  • Data Reorganization: Update segment names to align with evolving data categories.
  • Simplifying Audits: Ensure LOB segment names clearly represent their purpose.
  • Consistency Enforcement: Standardize naming conventions across different LOB segments.

Important Considerations

  • Dependencies: Check for application-level dependencies that reference LOB segment names.
  • User Privileges: Ensure you have the required ALTER permissions.
  • Tablespace Integrity: Segment renaming does not affect the tablespace or physical location.

Automating LOB Segment Renaming

For larger environments, automating LOB segment renaming can save significant time. The following PL/SQL script demonstrates how to rename multiple segments programmatically:

BEGIN
  -- Rename every LOB segment owned by the current user, appending _UPDATEDLOB
  FOR r IN (SELECT table_name, column_name FROM user_lobs) LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE ' || r.table_name ||
                      ' MODIFY LOB (' || r.column_name ||
                      ') (RENAME TO ' || r.column_name || '_UPDATEDLOB)';
  END LOOP;
END;
/

This script appends _UPDATEDLOB to all LOB segments in user tables.

Final Thoughts

The ability to rename LOB segments directly in Oracle Database 23ai marks a significant step forward in database administration. By eliminating complex workarounds, this feature enhances efficiency and keeps database environments well-organized. Whether standardizing segment names or adapting to shifting business needs, it helps administrators maintain cleaner, more streamlined data structures.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »

Oracle 23ai Tip: Use SESSION_EXIT_ON_PACKAGE_STATE_ERROR to Prevent Silent Data Corruption

Posted by FatDBA on December 28, 2024

Oracle Database 23ai introduces a new parameter, SESSION_EXIT_ON_PACKAGE_STATE_ERROR, designed to enhance session management and prevent potential data corruption by enforcing a hard session exit when the session state becomes invalidated.

Why SESSION_EXIT_ON_PACKAGE_STATE_ERROR Matters

In typical Oracle database environments, stateful PL/SQL packages, MLE modules, or environments may be modified while sessions actively use them. This can lead to errors such as:

  • ORA-04068: Can occur when a PL/SQL package body is recompiled, invalidating the session state.
  • ORA-4106 / ORA-4107: Can be raised when an MLE module or environment is altered via DDL, invalidating the session.

By default, the session remains active and throws an error when the invalid package or module is called. However, many applications may not properly handle these errors, leading to silent data corruption or unexpected behavior.

The SESSION_EXIT_ON_PACKAGE_STATE_ERROR parameter mitigates this risk by forcing an immediate session exit instead of raising an error.
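
To see the invalidation in action, here is a minimal sketch (object names are illustrative): a package holding session state, which a recompile from another session then invalidates.

-- Session 1: create a package with state and initialize it
CREATE OR REPLACE PACKAGE pkg_state AS
  g_counter NUMBER := 0;   -- package-level variable = session state
  PROCEDURE bump;
END;
/
CREATE OR REPLACE PACKAGE BODY pkg_state AS
  PROCEDURE bump IS BEGIN g_counter := g_counter + 1; END;
END;
/
EXEC pkg_state.bump;  -- this session now holds package state

-- Session 2: recompile the body, invalidating session 1's package state
ALTER PACKAGE pkg_state COMPILE BODY;

-- Session 1: the next call raises ORA-04068 by default; with
-- SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE the session exits instead
EXEC pkg_state.bump;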

Some of the benefits of using this parameter:

  • Prevents Data Corruption: By terminating sessions with invalid state, the risk of silent data corruption is reduced.
  • Simplifies Error Handling: Many applications are better at handling session disconnects than catching specific errors like ORA-04068.
  • Consistency Across Sessions: Ensures that all sessions dealing with modified packages or MLE modules are treated consistently, minimizing inconsistencies.

How SESSION_EXIT_ON_PACKAGE_STATE_ERROR Works

When SESSION_EXIT_ON_PACKAGE_STATE_ERROR is set to TRUE, the following behavior is enforced:

  1. PL/SQL Package Modification:
    • If a stateful PL/SQL package is modified, any active session that tries to invoke the package receives ORA-04068. With this parameter set to TRUE, the session exits immediately instead of raising the error.
  2. MLE Module or Environment Modification:
    • If an MLE module or environment is modified via DDL, active sessions receive ORA-4106 or ORA-4107. With SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE, these sessions are forcibly disconnected.
  3. Application Handling:
    • Most applications are designed to capture session disconnects and reestablish connections, streamlining recovery from session invalidation.

Use Cases

  • High-Availability Environments: In systems where continuous uptime is critical, preventing data corruption is paramount.
  • Distributed Applications: Applications spread across multiple environments that frequently modify PL/SQL packages or MLE modules benefit from session termination to maintain data integrity.
  • Oracle RAC Deployments: Different instances in an Oracle RAC environment can independently configure this parameter, allowing fine-grained control based on workload requirements.

Configuring SESSION_EXIT_ON_PACKAGE_STATE_ERROR:

Examples:
ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;                 -- instance-wide, in memory
ALTER SESSION SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;                -- current session only
ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE SCOPE = SPFILE;  -- persists across restarts

Considerations

  • Default Behavior: By default, this parameter is set to FALSE, meaning sessions will raise errors rather than exit.
  • Testing and Validation: Test this configuration in lower environments to ensure application compatibility.
  • Session Management: Monitor session disconnects to ensure that forced exits do not disrupt critical workflows.

Conclusion

SESSION_EXIT_ON_PACKAGE_STATE_ERROR is a powerful new feature in Oracle Database 23ai that enhances session management by enforcing session termination on package or module state invalidation. By using this parameter, Oracle environments can significantly reduce the risk of data corruption and streamline error handling processes across diverse applications. Whether managing PL/SQL packages or MLE modules, this parameter offers greater control and reliability for database administrators and developers both.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , | Leave a Comment »

Database Migration Challenges : JServer JAVA Virtual Machine gets INVALID or UPGRADING during manual upgrade

Posted by FatDBA on December 14, 2024

Migrations can be a harrowing experience: tricky, complex, time-intensive, and often riddled with unexpected challenges. This is especially true when migrating between widely separated database versions, where architectural and component-level changes are significant. I remember one such encounter during a migration from Oracle 11g to 19c on new infrastructure. Using RMAN DUPLICATE with the NOOPEN clause to restore the source database backup on the target before running the manual upgrade procedures, the process seemed smooth at first but soon ran into a host of issues with key database components.

The Problem

During the upgrade process, several critical components failed, leaving the database in an inconsistent state. The errors revolved around the following components:

COMP_ID    COMP_NAME                        VERSION       STATUS
---------- -------------------------------- ------------- ----------
JAVAVM     JServer JAVA Virtual Machine     11.2.0.4.0    UPGRADING
XML        Oracle XDK                       19.0.0.0.0    INVALID
CATJAVA    Oracle Database Java Packages    19.0.0.0.0    INVALID

The errors observed in dbupgrade runtime logs included:

ORA-29554: unhandled Java out of memory condition
ORA-06512: at "SYS.INITJVMAUX", line 230
ORA-06512: at line 5

ORA-29548: Java system class reported: release of Java system classes in the database (11.2.0.4.190115) does not match that of the oracle executable (19.0.0.0.0 1.8)

These errors stemmed from a failure to allocate sufficient memory during the upgrade process. The Java Virtual Machine (JVM) ran out of memory, causing cascading errors that invalidated other components like Oracle XDK and Java Database Packages (CATJAVA). This wasn’t a mere inconvenience—it meant that critical database functionality was broken, making the system unusable for applications relying on these components.

Root Cause

Upon investigation, we found that the issue was caused by using a temporary RMAN parameter file during the restore process. This parameter file contained a minimal set of initialization parameters, which were insufficient to handle the resource-intensive operations required during the upgrade, particularly for recompiling and validating Java components.

Key memory areas like the SGA, shared pool, large pool, and Java pool were inadequately configured. These areas play a crucial role during the execution of upgrade scripts such as dbupgrade, catctl.pl, or catupgrd.sql. Without sufficient memory, the upgrade process for these components failed midway, leaving them in an invalid state.

The Fix

To resolve these issues and ensure the migration proceeded smoothly, the following steps were taken:

Step 1: Adjust Initialization Parameters

The first step was to allocate adequate memory for the Java components to prevent out-of-memory conditions. Critical parameters like the Java pool and other memory pools were adjusted to handle the load during the upgrade process:

ALTER SYSTEM SET java_jit_enabled = TRUE;
ALTER SYSTEM SET "_system_trig_enabled" = TRUE;
ALTER SYSTEM SET java_pool_size = 180M; -- Ensure at least 150 MB is allocated

Step 2: Recreate the Java Component

The next step was to drop and recreate the Java component in the database. This ensured that any inconsistencies caused by the previous upgrade failure were cleaned up:

CREATE OR REPLACE JAVA SYSTEM;

Step 3: Restart the Upgrade Scripts

After fixing the memory settings and recreating the Java component, the upgrade process was restarted using Oracle’s upgrade utilities:

  • dbupgrade: The recommended tool for 19c migrations.
  • catctl.pl: For manual control over the upgrade process.
  • catupgrd.sql: A fallback script for older methods.

Logs such as upg_summary.log were closely monitored during the process to catch any errors or exceptions in real-time.

Step 4: Verify the Upgrade

Once the upgrade process was completed, the status of all components was verified using the DBA_REGISTRY and DBA_REGISTRY_HISTORY views:

SELECT SUBSTR(comp_name, 1, 30) comp_name,
       SUBSTR(version, 1, 20) version,
       status
FROM   dba_registry
ORDER  BY comp_name;

Expected output:

COMP_NAME                      VERSION              STATUS
------------------------------ -------------------- ---------------
JServer JAVA Virtual Machine   19.0.0.0.0           UPGRADED
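
If any component still shows INVALID once the scripts complete, a recompilation pass usually clears it:

-- List components that are still unhealthy after the upgrade
SELECT comp_id, comp_name, status
FROM   dba_registry
WHERE  status NOT IN ('VALID', 'OPTION OFF');

-- Recompile invalid objects (run as SYSDBA from the 19c home)
@?/rdbms/admin/utlrp.sql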

Key Takeaways

This experience highlighted several crucial lessons when handling database migrations, especially for major version upgrades like 11g to 19c:

1. Adequate Initialization Parameters Are Essential

The memory-related initialization parameters (java_pool_size, shared_pool_size, etc.) must be configured appropriately before starting the upgrade process. Using a minimal parameter file during RMAN DUPLICATE can lead to critical issues if not adjusted later.

2. Resource-Intensive Components Need Extra Attention

Components like JAVAVM, Oracle XDK, and CATJAVA are highly resource-intensive. Even slight memory misconfigurations can lead to cascading failures that disrupt the entire migration process.

3. Monitor Upgrade Logs Closely

Keeping an eye on upgrade runtime logs and the summary logs (upg_summary.log) is vital for catching errors early. This allows you to address issues promptly before they snowball into larger problems.

4. Understand Dependencies

Database components often have interdependencies. For instance, a failure in the Java Virtual Machine component affected both the Oracle XDK and CATJAVA packages. Understanding these dependencies is key to resolving issues effectively.

Conclusion

Database migrations are inherently challenging, especially when dealing with major version jumps. This particular experience from migrating Oracle 11g to 19c served as a valuable reminder of the importance of preparation, thorough testing, and paying close attention to resource configurations. With the right approach, even complex migrations can be navigated successfully, ensuring the database is ready for modern workloads and enhanced performance.

By addressing these pitfalls and being proactive, you can ensure a smoother upgrade process and avoid unnecessary downtime or functionality issues.

Let me know if this approach resonates with your migration experiences!

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , | Leave a Comment »

Enhancing Materialized View Refreshes with Oracle 12.1’s Out-of-Place Refresh

Posted by FatDBA on November 23, 2024

While addressing a recent materialized view refresh issue, I revisited a feature introduced in Oracle 12.1—out-of-place refresh. It’s a concept I’ve been familiar with for some time but hadn’t had the opportunity to explore practically until now. This innovation addresses key challenges faced during materialized view refreshes in data-heavy environments, offering a smarter and more efficient alternative to traditional methods.

Why Materialized Views Are Powerful (and Challenging)?

Materialized views (MVs) simplify data retrieval by precomputing and storing query results as physical tables. They are especially useful for complex queries that join data from multiple tables. However, as materialized views grow in size, refreshing them becomes a major bottleneck.

The default refresh method, COMPLETE refresh, removes all existing data before repopulating the view. While effective for smaller datasets, this approach becomes painfully slow for large materialized views, as it relies on delete operations followed by insertions.

To mitigate this, some administrators opt for atomic_refresh = FALSE, which uses the faster TRUNCATE operation instead of delete statements. While truncate drastically improves performance, it comes with a trade-off: materialized views become inaccessible during refresh, impacting any dependent applications.

Out-of-Place Refresh to the Rescue

Oracle’s out-of-place refresh, introduced in 12.1, provides an elegant solution. Instead of working directly on the materialized view, the refresh process creates outside tables, performs the necessary updates on them, and then swaps them with the materialized view once the refresh is complete. This approach minimizes downtime and keeps materialized views accessible throughout the process.

Out-of-place refresh supports all major refresh methods, including FAST (‘F’), COMPLETE (‘C’), and PCT (‘P’). It is particularly effective in scenarios involving large data changes, where traditional methods struggle to scale. This feature also avoids issues like fragmentation or exposure of intermediate results during refresh.

Here’s how you can enable out-of-place refresh using the DBMS_MVIEW package:

EXEC DBMS_MVIEW.REFRESH('FATDBA.MY_TEST_MVIEW', method => '?', atomic_refresh => FALSE, out_of_place => TRUE);
-- or
EXEC DBMS_MVIEW.REFRESH('FATDBA.MY_TEST_MVIEW', method => 'C', atomic_refresh => FALSE, out_of_place => TRUE);

In the first example (method => '?'), Oracle attempts an out-of-place FAST refresh, falls back to a PCT refresh if that is not possible, and finally resorts to a COMPLETE refresh. The second example forces a COMPLETE out-of-place refresh straight away.

During the refresh, you will see operations like the following running in the background against the auxiliary (temp) table:

/* MV_REFRESH (ITB) */INSERT /*+ APPEND */ INTO "FATDBA"."RV$B318VA"   (SELECT /*+ PARALLEL ("xxx",10) */ xxxxx
-- This is where it starts copying rows into this temp table. 

/* MV_REFRESH (CIDX) */ CREATE INDEX "FATDBA"."RV$KHA121" ON "FATDBA"."RV$B318VA" ("x", "y", "z")	PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
STORAGE( INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645  PCTINCREASE 0  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "xxxx"  PARALLEL 10
-- Once done with the data insertion into the temp table, it creates the same indexes that exist on the current MV base table.

Note: The abbreviations ITB (Insert Temporary Table) and CIDX (Create Index) are typically used as internal identifiers in Oracle's materialized view refresh operations.

Points to Consider

While the out-of-place refresh is a powerful tool, there are a few considerations to keep in mind:

  • Temporary Storage Needs: Adequate tablespace is required to hold two copies of the materialized view during refresh.
  • Recycle Bin Impact: If enabled, the recycle bin retains older copies of the table. It’s recommended to disable the recycle bin to avoid unnecessary space consumption.
  • Statistics Management: While index statistics are automatically computed during refresh, table statistics are not, so manual intervention may be needed (see the sketch after this list).
  • Parallelism on Indexes: During an out-of-place refresh, Oracle recreates the same indexes that exist on the current MV base table. It's worth adding some DOP to those indexes beforehand (and removing it once the refresh is done); otherwise index creation runs serially and the whole refresh can take a long time for bulky MVs.
  • Errors During Table Swap: On a very busy system, you might hit ‘ORA-08103: object no longer exists’ while the out-of-place method swaps the tables and their indexes.
  • Compatibility:
    • Unsupported for remote, cube, or object materialized views.
    • Not compatible with LOB columns.
    • Requires atomic_refresh = FALSE.
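
Following up on the statistics point above, a minimal sketch for gathering table statistics once the refresh completes (owner and MV name are from the earlier example):

BEGIN
  -- Out-of-place refresh computes index statistics but not table statistics
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'FATDBA',
    tabname => 'MY_TEST_MVIEW',
    cascade => FALSE);  -- index stats were already computed during the refresh
END;
/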

Final Thoughts

The out-of-place refresh feature in Oracle 12.1 is a significant improvement for managing materialized views in large databases. By executing refresh operations on temporary tables, it optimizes performance and ensures continuous availability, making it an invaluable tool for data-intensive environments.

If your materialized views are becoming cumbersome to manage, consider leveraging out-of-place refresh. It not only streamlines the refresh process but also enhances the overall efficiency and reliability of your database operations.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »

How I see vector indexing and search in Oracle 23ai

Posted by FatDBA on November 1, 2024

Hi everyone!

It’s been a while since Oracle 23c (or 23c AI) was released, and there’s already a ton of great content out there on one of its coolest new features: Vector Indexes and Searches. This feature has opened up exciting possibilities for AI-driven applications, from advanced recommendations to similarity searches within databases. Now, it’s finally my turn to dive in and put it to the test!

Vector search in Oracle 23c AI is a feature designed for searching, comparing, and finding similarities between data represented as vectors (numerical arrays). This capability is particularly useful for AI and machine learning applications, where data like images, text, and products are often encoded as vector embeddings. Oracle's vector search enables the database to process these embeddings efficiently, bringing similarity-based search and recommendation capabilities directly into the database.

The base or the foundation of this feature is VECTOR data type, a powerful tool for storing and managing vector embeddings directly within the database. This capability allows Oracle to support advanced AI and machine learning applications, such as similarity searches and recommendation engines, by efficiently handling multi-dimensional arrays that represent data features, semantics, or other complex characteristics.

Key Advantages of the VECTOR Data Type

  1. Compact Multi-Dimensional Array Storage:
    • VECTOR allows fixed-length arrays of numbers (often floating-point) to be stored compactly in a single column. A 100-dimensional vector, for instance, might represent an embedding for text, an image, or a user profile, capturing nuanced data in a manageable form.
  2. Efficient Similarity Calculations:
    • With Oracle 23c AI, VECTOR data can be processed for similarity searches directly in the database. Functions like VECTOR_DISTANCE make it easy to calculate similarity scores or distances between vectors, enabling Oracle to quickly identify similar items without needing external tools or complex data transformations.
  3. Tailored for AI and Machine Learning:
    • VECTOR is ideal for applications that require semantic searches, such as finding similar products, media, or documents, and for AI-driven recommendations. This makes it well-suited for Natural Language Processing (NLP), recommendation engines, and image recognition tasks.
  4. Support for Various Distance Metrics:
    • Oracle's VECTOR data type works with common distance metrics, such as Euclidean distance and cosine similarity, which measure how close or far two vectors are from each other. This enables efficient and flexible similarity calculations, essential for applications relying on vector comparisons.
  5. Scalability with Vector Indexing:
    • Oracle 23c AI allows VECTOR data to be indexed with vector indexes, optimizing performance for vector-based queries. This makes real-time similarity searches on large datasets feasible, supporting applications that need quick response times, such as real-time AI insights and personalized recommendations.

The VECTOR data type in Oracle 23c AI is a robust addition for data scientists and developers, enabling high-performance similarity searches, efficient data storage, and a foundation for complex AI and machine learning workloads directly within the Oracle database.

Alright, now that the stage is set, let's dive in and explore how Vector Indexes and Searches bring a whole new dimension to Oracle 23c. Here's a detailed, step-by-step demo of vector search and indexes in Oracle Database 23c AI. The example simulates a product recommendation system using vector embeddings to represent product features.

1. Set Up the Environment

Ensure you’re using Oracle 23c AI, as vector indexes and the VECTOR data type are new features available in this release.

2. Create a Table with Vector Embeddings

In this example, each product is represented by a 10-dimensional vector embedding. This vector could represent a combination of product attributes, like color, category, brand, and other numerical encodings.

CREATE TABLE products (
    product_id NUMBER PRIMARY KEY,
    product_name VARCHAR2(100),
    product_description VARCHAR2(255),
    product_embedding VECTOR(10) -- Vector with 10 dimensions
);

3. Insert Sample Data

Populate the table with sample products and their vector embeddings. Here’s an example where each vector represents the product’s characteristics:

-- Vectors are supplied as TO_VECTOR over a bracketed array literal
INSERT INTO products (product_id, product_name, product_description, product_embedding) VALUES
    (1, 'Laptop A', 'High-performance laptop for professionals', TO_VECTOR('[0.5, 0.2, 0.7, 0.1, 0.4, 0.3, 0.8, 0.6, 0.9, 0.3]')),
    (2, 'Smartphone B', 'Latest smartphone with advanced features', TO_VECTOR('[0.6, 0.1, 0.8, 0.5, 0.2, 0.7, 0.4, 0.3, 0.6, 0.1]')),
    (3, 'Tablet C', 'Compact and powerful tablet for multimedia', TO_VECTOR('[0.4, 0.9, 0.5, 0.7, 0.3, 0.6, 0.2, 0.8, 0.1, 0.9]')),
    (4, 'Laptop D', 'Affordable laptop with reliable performance', TO_VECTOR('[0.7, 0.6, 0.4, 0.9, 0.8, 0.2, 0.5, 0.3, 0.1, 0.4]')),
    (5, 'Headphones E', 'Noise-canceling headphones with high fidelity', TO_VECTOR('[0.3, 0.8, 0.1, 0.9, 0.6, 0.4, 0.7, 0.2, 0.5, 0.6]'));

4. Create a Vector Index for Efficient Searches

Creating a vector index allows Oracle to search these embeddings efficiently based on similarity, which is ideal for recommendation systems, image retrieval, and text similarity searches.

CREATE VECTOR INDEX product_vector_idx ON products (product_embedding)
    ORGANIZATION INMEMORY NEIGHBOR GRAPH
    DISTANCE COSINE
    WITH TARGET ACCURACY 95;

5. Perform a Vector Similarity Search

With the vector index in place, you can now perform searches to find products that are similar to a given vector. For example, suppose a user has shown interest in products that correspond to a vector representation of [0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]. You can use this vector to find products with similar embeddings.

SELECT product_name, product_description, product_embedding
FROM products
ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
FETCH FIRST 5 ROWS ONLY;

In this query:

  • The VECTOR_DISTANCE function calculates the similarity (distance) between the target vector and each product’s embedding. Sorting by VECTOR_DISTANCE allows you to retrieve the closest matches.
  • This example fetches the top 5 most similar products.
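
With a vector index in place, the search can also be run as an approximate nearest-neighbor query against the index instead of an exact scan. A hedged variant of the query above, assuming the index created in step 4:

SELECT product_name
FROM products
ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
FETCH APPROX FIRST 5 ROWS ONLY;  -- approximate top-5 served by the vector index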

6. Use Case: Personalized Product Recommendation

This setup is ideal for applications such as a product recommendation engine. If a user profile is represented as a vector, you can use a vector search to find products similar to that profile, offering highly relevant recommendations.

7. Handling Updates to Vector Embeddings

If a product’s features change, you may need to update its embedding. Here’s how you could update the vector for a specific product:

UPDATE products
SET product_embedding = TO_VECTOR('[0.7, 0.3, 0.9, 0.5, 0.6, 0.4, 0.8, 0.2, 0.3, 0.5]')
WHERE product_id = 2;

8. Advanced Use Case: Nearest Neighbor Search with Filtering

Let’s extend the search by filtering for specific criteria. For example, if you only want to recommend products that contain “Laptop” in the name, you can add a WHERE clause to the query.

SELECT product_name, product_description, product_embedding
FROM products
WHERE product_name LIKE '%Laptop%'
ORDER BY VECTOR_DISTANCE(product_embedding, TO_VECTOR('[0.6, 0.2, 0.5, 0.4, 0.3, 0.5, 0.7, 0.8, 0.2, 0.6]')) ASC
FETCH FIRST 5 ROWS ONLY;

9. The Vector Distance Metric

It helps to be clear about the concept of vector similarity:

  • Vector Distance (Similarity): This metric measures how similar or dissimilar two vectors are. A smaller distance implies higher similarity. Oracle 23c AI's VECTOR_DISTANCE function calculates this distance to identify closely related items, making it ideal for applications in recommendations, content search, and clustering.

10. Additional Notes for Optimization and Use Cases

  • Index Maintenance: Like other indexes, a vector index should be maintained for performance. Monitor index usage and rebuild if necessary for large datasets.
  • Scalability: Oracle 23c AI’s vector search is optimized for large-scale deployments, making it suitable for extensive recommendation systems, content-based searches, and personalized user experiences.

This walkthrough covers Oracle 23c AI's new vector search and indexing features end to end, from table creation through similarity queries. Let me know in the comments if you'd like more detail on any of the steps!

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , , , , | Leave a Comment »

Adapting Data Pump for the Cloud: Trailer Blocks in 23ai

Posted by FatDBA on November 1, 2024

Oracle Database 23c AI introduces a new trailer block format for Data Pump dump files, moving away from the traditional header block approach used in earlier versions like 21c. This new structure is designed specifically to support cloud storage environments, particularly within Oracle Cloud Infrastructure (OCI), and makes it easier to manage and transfer Data Pump files to object stores.

What’s Different with Trailer Blocks?

Trailer blocks are written only once the export completes, in contrast with header blocks, which required metadata up front and continuous updates throughout the job. With this deferred approach, the necessary metadata accumulates during the export and is written once, in a cloud-compatible format, at the end of the file. Making trailer blocks the default enables seamless storage of dump files in cloud object stores and removes the limitations of local-only file systems.

Note: Trailer block files are compatible only with Oracle Database 23c AI and later. To ensure backward compatibility with previous Oracle versions, you can use the VERSION parameter to revert to the legacy header block format.

How to Use Trailer Blocks with Cloud Object Storage

When running Data Pump in Oracle 23c AI, trailer blocks allow straightforward use with object storage by configuring these essential parameters:

  1. Authenticate with Cloud Credentials: Ensure you have a credential object in place to establish the required connection with your object store (a sketch follows this list).
  2. Configure DUMPFILE and CREDENTIAL Parameters: List export files using DUMPFILE and specify the credential to direct these files to the cloud.
  3. Set Log File Storage: Use the DEFAULT_DIRECTORY parameter to control log file storage. Failing to set up a valid cloud credential for object store URIs will lead to errors in dump file writing.
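
For step 1, environments that ship the DBMS_CLOUD package (standard on Autonomous Database, installable on other recent releases) can create the credential in SQL. A hedged sketch; the credential name is arbitrary and the password is an OCI auth token:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',        -- referenced by the CREDENTIAL parameter
    username        => 'oci_user@example.com',  -- OCI user name
    password        => '<auth_token>');         -- OCI auth token, not the console password
END;
/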

Steps for Cloud Storage Access

To export files directly to a cloud object store, ensure you have cloud credentials stored in an Oracle Wallet, with the location specified in your sqlnet.ora configuration. You will also need to add a certificate for authentication using orapki:

orapki wallet add -wallet <path_to_wallet> -trusted_cert -cert <certificate_file> -pwd <wallet_password>

This setup allows Data Pump to use the CREDENTIAL parameter with the export command, establishing secure access to cloud storage.

Header Blocks vs. Trailer Blocks

In Oracle Database versions prior to 23c, Data Pump files used header blocks, where metadata was embedded at the start of each dump file. This format was limited to local storage and required metadata updates during export, which could be resource-intensive.

With Oracle Database 23c AI, trailer blocks are the default. This approach shifts metadata to the end of the file, streamlining the process for cloud storage. By using the VERSION parameter, you can control whether to retain the newer trailer block format or revert to header blocks based on compatibility needs.

Types of Trailer Blocks in Oracle 23c AI

Oracle 23c AI introduces two types of trailer blocks for handling Data Pump files:

  • Disk-Based Trailer Blocks: Directly written at the end of each primary table’s data in the dump file, providing a compact storage format.
  • Table-Based Trailer Blocks: Stored externally within the export’s primary table, conserving space by omitting individual trailer blocks for each data object. This approach efficiently manages storage while maintaining essential metadata for cloud-based and local export jobs.

Trailer blocks make Data Pump in Oracle Database 23c AI a flexible and cloud-ready solution, enabling efficient data movement across hybrid and cloud infrastructures. This updated approach supports larger cloud-based workflows by improving performance and storage efficiency for Data Pump exports.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »