Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Posts Tagged ‘oracle-database’

ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ

Posted by FatDBA on February 16, 2025

Recently, I had a conversation with an old friend who was struggling with Data Pump import errors. He encountered the following error message:

"ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ."

The message itself provided a clear hint—the issue stemmed from a time zone (TZ) version mismatch between the source and target databases. He was trying to export data from a 19.20.0 database (DST 42) and import it into an older 19.18.0 database (DST 32).

This situation left him quite perplexed, as his database contained a lot of time zone-dependent data, raising concerns about potential data inconsistencies and conversion errors.

How It All Started – Initially, when he created the database using Oracle 19.18.0, it was configured with DST version 32 by default.

SELECT * FROM v$timezone_file;
FILENAME             VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat              32          0

Later, he upgraded his Oracle home to 19.20.0 and applied the patches to his existing 19.18.0 database. However, the DST version remained at 32, because patching alone does not upgrade the database's DST version; it has to be upgraded explicitly (for example with AutoUpgrade or the DBMS_DST package).

The real issue arose when he created a brand-new database using the 19.20.0 home. Upon checking its DST version, he discovered that it had been assigned a newer DST version (42), which led to the compatibility problem.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_42.dat              42          0

The Core Problem
The mismatch between DST 42 (new database) and DST 32 (old database) meant that Data Pump could not process time zone-based data properly, causing the import failure.
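
If you want to confirm the mismatch yourself, a quick check is to compare the DST-related database properties on the source and the target (the same query works on both sides):

-- Run on both source and target; DST_PRIMARY_TT_VERSION shows the active DST version.
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%';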

Unfortunately, there is no direct solution available for this issue in Oracle 19c. However, starting from Oracle 21c and later, this problem could have been easily bypassed by setting the following hidden parameter, which overrides the strict time zone check:

alter system set "_datapump_bypass_tstz_check"=TRUE scope=both;

Since my friend was working with Oracle 19c, we explored several potential workarounds:

Kamil’s Solution: One promising approach was suggested by Kamil Stawiarski, who detailed a method to resolve the issue in his blog post:
Solved: ORA-39405 – Oracle Data Pump does not support importing from a source database with TSTZ. However, this method requires OS-level access, which my friend did not have in his environment, making it an unfeasible option.

Preemptively Removing Newer DST Files: Another workaround we considered involved automating a cleanup process that periodically removes newer, unwanted DST files from all $ORACLE_HOME/oracore/zoneinfo directories and their subdirectories. This could be implemented as a scheduled cron job that runs regularly (or at least after each patch application) to prevent new databases from being created with newer DST files.

The downside? This approach would block access to newer time zone definitions, which could become problematic if applications or users require those updates in the future.

The Agreed Solution – After considering various options, we decided on the most practical and controlled approach:

When creating a new database, explicitly set the desired DST version at the time of creation.
This can be achieved by using DBCA (Database Configuration Assistant) or a custom script while ensuring that the ORA_TZFILE environment variable is set beforehand.

For example, using DBCA in silent mode, the database can be created with a specific DST version:

export ORA_TZFILE=<desired TZ file, e.g. timezlrg_32.dat>   # must be set before DBCA is launched
dbca -silent -createDatabase <your usual DBCA options>

By defining the DST version at the time of database creation, we can prevent mismatches and avoid compatibility issues when exporting/importing data across databases with different DST versions.

While there’s no perfect solution in Oracle 19c, proper planning and proactive DST version control can mitigate potential errors and ensure smoother database operations.

Hope It Helped!
Prashant Dixit

Materialized Views: Concurrent Refresh in Oracle 23c

Posted by FatDBA on December 28, 2024

Oracle Database 23c introduces a powerful new feature allowing concurrent refresh of materialized views. This enhancement improves performance by enabling multiple users to refresh the same materialized view at the same time.

Before Oracle 23c, refreshing a materialized view (to reflect changes in the base tables) could only happen sequentially. This meant:

  • If multiple users triggered a refresh at the same time, Oracle handled them one by one.
  • This could cause delays in environments where real-time data updates were needed.

For example, during month-end reporting, if multiple users tried to refresh a sales summary MV, they had to wait for each other, impacting productivity.

Oracle 23c allows concurrent refreshes, enabling multiple users to refresh the materialized view simultaneously. This improves efficiency, especially for fast-refresh MVs, which update incrementally. Let’s walk through a detailed example to demonstrate how to create, configure, and utilize concurrent refresh.

CREATE TABLE sales (
  sale_id NUMBER(10),
  sale_date DATE,
  amount NUMBER(10, 2),
  customer_id NUMBER(10),
  CONSTRAINT sales_pk PRIMARY KEY (sale_id)
);

CREATE TABLE customers (
  customer_id NUMBER(10),
  customer_name VARCHAR2(100),
  CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);


INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO customers VALUES (2, 'Bob');

INSERT INTO sales VALUES (101, TO_DATE('01-JAN-2024','DD-MON-YYYY'), 500, 1);
INSERT INTO sales VALUES (102, TO_DATE('05-JAN-2024','DD-MON-YYYY'), 300, 1);
INSERT INTO sales VALUES (103, TO_DATE('10-JAN-2024','DD-MON-YYYY'), 450, 2);
COMMIT;

Next, we create materialized view logs; these track changes to the base tables and enable fast refresh.

CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE (sale_id, sale_date, amount, customer_id)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON customers
WITH ROWID, SEQUENCE (customer_id, customer_name)
INCLUDING NEW VALUES;


--- Create MView with concurrent refresh option.
CREATE MATERIALIZED VIEW sales_summary_mv
REFRESH FAST ON COMMIT
ENABLE CONCURRENT REFRESH
AS
SELECT c.customer_name,
       COUNT(*) AS cnt,                  -- COUNT(*) is required for fast refresh of aggregate MVs
       COUNT(s.amount) AS amount_cnt,    -- COUNT(expr) is required for each SUM(expr)
       COUNT(s.sale_id) AS total_sales,
       SUM(s.amount) AS total_amount
FROM sales s
JOIN customers c ON s.customer_id = c.customer_id
GROUP BY c.customer_name;

-- collect stats to reflect row changes.
exec dbms_stats.gather_table_stats(null, 'sales_summary_mv');

Now we can trigger concurrent refresh.

EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F');  -- Fast refresh
EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'C');  -- Complete refresh
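
To actually exercise the concurrency rather than issue a single call, you could launch two refreshes of the same MV from separate sessions at the same time. Here is a rough sketch using scheduler jobs (the job names are made up for illustration):

-- Two background jobs refreshing the same MV at the same time.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'REFRESH_SALES_MV_JOB1',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[BEGIN DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F'); END;]',
    enabled    => TRUE);

  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'REFRESH_SALES_MV_JOB2',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[BEGIN DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F'); END;]',
    enabled    => TRUE);
END;
/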


-- In our case CONCURRENT_REFRESH_ENABLED returns Y, which means concurrent refresh is enabled.
SELECT mview_name, concurrent_refresh_enabled
FROM user_mviews
WHERE mview_name = 'SALES_SUMMARY_MV';

MVIEW_NAME                     CONCURRENT_REFRESH_ENABLED
------------------------------ ------------------------------
SALES_SUMMARY_MV              Y

Oracle 23c’s concurrent refresh for materialized views is a game-changer for data-intensive applications. By allowing multiple users to refresh views in parallel, businesses can significantly boost performance and ensure faster reporting.

Hope It Helped!
Prashant Dixit

Oracle 23ai Tip: Use SESSION_EXIT_ON_PACKAGE_STATE_ERROR to Prevent Silent Data Corruption

Posted by FatDBA on December 28, 2024

Oracle Database 23ai introduces a new parameter, SESSION_EXIT_ON_PACKAGE_STATE_ERROR, designed to enhance session management and prevent potential data corruption by enforcing a hard session exit when the session state becomes invalidated.

Why SESSION_EXIT_ON_PACKAGE_STATE_ERROR Matters?

In typical Oracle database environments, stateful PL/SQL packages, MLE modules, or environments may be modified while sessions actively use them. This can lead to errors such as:

  • ORA-04068: Can occur when a PL/SQL package body is recompiled, invalidating the session state.
  • ORA-4106 / ORA-4107: Can be raised when an MLE module or environment is altered via DDL, invalidating the session.

By default, the session remains active and throws an error when the invalid package or module is called. However, many applications may not properly handle these errors, leading to silent data corruption or unexpected behavior.

The SESSION_EXIT_ON_PACKAGE_STATE_ERROR parameter mitigates this risk by forcing an immediate session exit instead of raising an error.

Some of the benefits of using this parameter:

  • Prevents Data Corruption: By terminating sessions with invalid state, the risk of silent data corruption is reduced.
  • Simplifies Error Handling: Many applications are better at handling session disconnects than catching specific errors like ORA-04068.
  • Consistency Across Sessions: Ensures that all sessions dealing with modified packages or MLE modules are treated in the same way.

How SESSION_EXIT_ON_PACKAGE_STATE_ERROR Works

When SESSION_EXIT_ON_PACKAGE_STATE_ERROR is set to TRUE, the following behavior is enforced:

  1. PL/SQL Package Modification:
    • If a stateful PL/SQL package is modified, any active session that tries to invoke the package receives ORA-04068. With this parameter set to TRUE, the session exits immediately instead of raising the error.
  2. MLE Module or Environment Modification:
    • If an MLE module or environment is modified via DDL, active sessions receive ORA-4106 or ORA-4107. With SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE, these sessions are forcibly disconnected.
  3. Application Handling:
    • Most applications are designed to capture session disconnects and reestablish connections, streamlining recovery from session invalidation.

Use Cases

  • High-Availability Environments: In systems where continuous uptime is critical, preventing data corruption is paramount.
  • Distributed Applications: Applications spread across multiple environments that frequently modify PL/SQL packages or MLE modules benefit from session termination to maintain data integrity.
  • Oracle RAC Deployments: Different instances in an Oracle RAC environment can independently configure this parameter, allowing fine-grained control based on workload requirements.

Configuring SESSION_EXIT_ON_PACKAGE_STATE_ERROR:

Examples:
ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
ALTER SESSION SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE SCOPE = SPFILE;
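
To see the behaviour end to end, here is a small, hypothetical test you could run in two sessions (object names are made up): with the parameter at its default of FALSE the final call raises ORA-04068, whereas with it set to TRUE the session is simply disconnected.

-- Session 1: create a package that keeps session state and initialize that state.
CREATE OR REPLACE PACKAGE state_pkg AS
  g_counter NUMBER := 0;
  PROCEDURE bump;
END;
/
CREATE OR REPLACE PACKAGE BODY state_pkg AS
  PROCEDURE bump IS BEGIN g_counter := g_counter + 1; END;
END;
/
EXEC state_pkg.bump;

-- Session 2: recompile the body, invalidating session 1's package state.
ALTER PACKAGE state_pkg COMPILE BODY;

-- Session 1: the next call either raises ORA-04068 (parameter FALSE) or exits the session (parameter TRUE).
EXEC state_pkg.bump;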

Considerations

  • Default Behavior: By default, this parameter is set to FALSE, meaning sessions will raise errors rather than exit.
  • Testing and Validation: Test this configuration in lower environments to ensure application compatibility.
  • Session Management: Monitor session disconnects to ensure that forced exits do not disrupt critical workflows.

Conclusion

SESSION_EXIT_ON_PACKAGE_STATE_ERROR is a powerful new feature in Oracle Database 23ai that enhances session management by enforcing session termination on package or module state invalidation. By using this parameter, Oracle environments can significantly reduce the risk of data corruption and streamline error handling processes across diverse applications. Whether managing PL/SQL packages or MLE modules, this parameter offers greater control and reliability for both database administrators and developers.

Hope It Helped!
Prashant Dixit

Enhancing Materialized View Refreshes with Oracle 12.1’s Out-of-Place Refresh

Posted by FatDBA on November 23, 2024

While addressing a recent materialized view refresh issue, I revisited a feature introduced in Oracle 12.1—out-of-place refresh. It’s a concept I’ve been familiar with for some time but hadn’t had the opportunity to explore practically until now. This innovation addresses key challenges faced during materialized view refreshes in data-heavy environments, offering a smarter and more efficient alternative to traditional methods.

Why Materialized Views Are Powerful (and Challenging)?

Materialized views (MVs) simplify data retrieval by precomputing and storing query results as physical tables. They are especially useful for complex queries that join data from multiple tables. However, as materialized views grow in size, refreshing them becomes a major bottleneck.

The default refresh method, COMPLETE refresh, removes all existing data before repopulating the view. While effective for smaller datasets, this approach becomes painfully slow for large materialized views, as it relies on delete operations followed by insertions.

To mitigate this, some administrators opt for atomic_refresh = FALSE, which uses the faster TRUNCATE operation instead of delete statements. While truncate drastically improves performance, it comes with a trade-off: the materialized view appears empty to queries while the refresh is running, impacting any dependent applications.

Out-of-Place Refresh to the Rescue

Oracle’s out-of-place refresh, introduced in 12.1, provides a great solution. Instead of working directly on the materialized view, the refresh process creates temporary outside tables. The necessary updates are performed on these temporary tables, which are then swapped with the materialized view once the refresh is complete. This approach minimizes downtime and ensures that materialized views remain accessible throughout the process.

Out-of-place refresh supports all major refresh methods, including FAST (‘F’), COMPLETE (‘C’), and PCT (‘P’). It is particularly effective in scenarios involving large data changes, where traditional methods struggle to scale. This feature also avoids issues like fragmentation or exposure of intermediate results during refresh.

Here’s how you can enable out-of-place refresh using the DBMS_MVIEW package:

EXEC DBMS_MVIEW.REFRESH('FATDBA.MY_TEST_MVIEW', method => '?', atomic_refresh => FALSE, out_of_place => TRUE);
-- or
EXEC DBMS_MVIEW.REFRESH('FATDBA.MY_TEST_MVIEW', method => 'C', atomic_refresh => FALSE, out_of_place => TRUE);

In the first example (method => '?'), Oracle attempts an out-of-place FAST refresh; if that is not possible, it tries a PCT refresh and finally falls back to a COMPLETE refresh. In the second example, it performs a complete refresh of the MV directly.

During the refresh, you will see operations like the following running in the background against the auxiliary (outside) table:

/* MV_REFRESH (ITB) */INSERT /*+ APPEND */ INTO "FATDBA"."RV$B318VA"   (SELECT /*+ PARALLEL ("xxx",10) */ xxxxx
-- This is where it starts copying rows into this temp table. 

/* MV_REFRESH (CIDX) */ CREATE INDEX "FATDBA"."RV$KHA121" ON "FATDBA"."RV$B318VA" ("x", "y", "z")	PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
STORAGE( INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645  PCTINCREASE 0  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "xxxx"  PARALLEL 10
-- Once the data insertion into the outside table is done, it recreates the same indexes that exist on the current MV container table.



Note : 
The abbreviations ITB (Insert Temporary Table) and CIDX (Create Index) are typically used as internal identifiers in Oracle's materialized view refresh operations. 

Points to Consider

While the out-of-place refresh is a powerful tool, there are a few considerations to keep in mind:

  • Temporary Storage Needs: Adequate tablespace is required to hold two copies of the materialized view during refresh.
  • Recycle Bin Impact: If enabled, the recycle bin retains older copies of the table. It’s recommended to disable the recycle bin to avoid unnecessary space consumption.
  • Statistics Management: While index statistics are automatically computed during refresh, table statistics are not; manual intervention may be needed (see the example after this list).
  • Parallelism on Indexes: During an out-of-place refresh, Oracle recreates the same indexes that exist on the current MV container table. It is a good idea to add some DOP to those indexes beforehand (and remove it once the refresh is done); otherwise the index creation runs serially and the whole refresh can take a long time for bulky MVs.
  • Errors during table swap: In a very busy environment, you might hit the ‘ORA-08103: object no longer exists’ error when the out-of-place method swaps the tables and their indexes.
  • Compatibility:
    • Unsupported for remote, cube, or object materialized views.
    • Not compatible with LOB columns.
    • Requires atomic_refresh = FALSE.
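
As mentioned in the Statistics Management point above, only index statistics are computed during the refresh. A minimal follow-up step (owner and MV name taken from the earlier example) could look like this:

-- Gather table statistics on the MV container table after the out-of-place refresh completes.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'FATDBA', tabname => 'MY_TEST_MVIEW', cascade => FALSE);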

Final Thoughts

The out-of-place refresh feature in Oracle 12.1 is a significant improvement for managing materialized views in large databases. By executing refresh operations on temporary tables, it optimizes performance and ensures continuous availability, making it an invaluable tool for data-intensive environments.

If your materialized views are becoming cumbersome to manage, consider leveraging out-of-place refresh. It not only streamlines the refresh process but also enhances the overall efficiency and reliability of your database operations.

Hope It Helped!
Prashant Dixit

Adapting Data Pump for the Cloud: Trailer Blocks in 23ai

Posted by FatDBA on November 1, 2024

Oracle Database 23ai introduces a new trailer block format for Data Pump dump files, moving away from the traditional header block approach used in earlier versions like 21c. This new structure is designed specifically to support cloud storage environments, particularly within Oracle Cloud Infrastructure (OCI), and makes it easier to manage and transfer Data Pump files to object stores.

What’s Different with Trailer Blocks?

Trailer blocks are written only once the export process completes, contrasting with header blocks, which required upfront metadata and continuous updates throughout the job. This deferred approach means trailer blocks accumulate necessary metadata during the export process, allowing it to be stored in a more cloud-compatible format at the end. The switch to trailer blocks by default enables seamless storage of dump files in cloud object stores and reduces the limitations associated with local-only file systems.

Note: Trailer block files are compatible only with Oracle Database 23ai and later. To ensure backward compatibility with previous Oracle versions, you can use the VERSION parameter to revert to the legacy header block format.

How to Use Trailer Blocks with Cloud Object Storage

When running Data Pump in Oracle 23ai, trailer blocks allow straightforward use with object storage by configuring these essential parameters:

  1. Authenticate with Cloud Credentials: Ensure you have a credential object in place to establish the required connection with your object store.
  2. Configure DUMPFILE and CREDENTIAL Parameters: List export files using DUMPFILE and specify the credential to direct these files to the cloud.
  3. Set Log File Storage: Use the DEFAULT_DIRECTORY parameter to control log file storage. Failing to set up a valid cloud credential for object store URIs will lead to errors in dump file writing (see the sketch after this list).
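
As a rough illustration of how these parameters fit together (the credential name, bucket URI, and schema below are placeholders, not a verified command for your environment):

# Placeholders: <region>, <namespace>, <bucket>; add VERSION=<older release> if the dump
# must remain readable by a pre-23ai database (legacy header-block format).
expdp system@pdb1 schemas=fatdba \
      credential=MY_OCI_CRED \
      dumpfile=https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/exp_fatdba.dmp \
      default_directory=DATA_PUMP_DIR \
      logfile=exp_fatdba.log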

Steps for Cloud Storage Access

To export files directly to a cloud object store, ensure you have cloud credentials stored in an Oracle Wallet, with the location specified in your sqlnet.ora configuration. You will also need to add a certificate for authentication using orapki:

orapki wallet add -wallet <path_to_wallet> -trusted_cert -cert <certificate_file> -pwd <wallet_password>

This setup allows the Data Pump to use the CREDENTIAL parameter with the export command, establishing secure access to cloud storage.
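
If a credential object does not exist yet, one way to create it is through DBMS_CLOUD (assuming the package is available in your database; the names and token below are placeholders):

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'MY_OCI_CRED',
    username        => 'oci_user@example.com',  -- placeholder OCI user
    password        => '<auth_token>');         -- placeholder auth token
END;
/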

Header Blocks vs. Trailer Blocks

In Oracle Database versions prior to 23c, Data Pump files used header blocks, where metadata was embedded at the start of each dump file. This format was limited to local storage and required metadata updates during export, which could be resource-intensive.

With Oracle Database 23ai, trailer blocks are the default. This approach shifts metadata to the end of the file, streamlining the process for cloud storage. By using the VERSION parameter, you can control whether to retain the newer trailer block format or revert to header blocks based on compatibility needs.

Types of Trailer Blocks in Oracle 23ai

Oracle 23ai introduces two types of trailer blocks for handling Data Pump files:

  • Disk-Based Trailer Blocks: Directly written at the end of each primary table’s data in the dump file, providing a compact storage format.
  • Table-Based Trailer Blocks: Stored externally within the export’s primary table, conserving space by omitting individual trailer blocks for each data object. This approach efficiently manages storage while maintaining essential metadata for cloud-based and local export jobs.

Trailer blocks make Data Pump in Oracle Database 23ai a flexible and cloud-ready solution, enabling efficient data movement across hybrid and cloud infrastructures. This updated approach supports larger cloud-based workflows by improving performance and storage efficiency for Data Pump exports.

Hope It Helped!
Prashant Dixit

Data Pump Troubleshooting Tips – My favorite 6

Posted by FatDBA on October 26, 2024

There are numerous utilities, options, and methods available for migrating and moving data between Oracle databases, yet Oracle Data Pump remains one of the most widely used tools. A significant number of DBAs are very comfortable with Data Pump, as it has been a trusted utility for a long time (originally as exp and imp). Its stability, user-friendliness, and robust capabilities make it a top choice for handling large data migrations, backup, and restore operations.

However, one area where DBAs still often face challenges is troubleshooting when issues arise. When a Data Pump job fails, performs poorly, or behaves unexpectedly, it can be unclear where to start, what logs to review, or what checks to perform. Many find it difficult to pinpoint the source of the problem and make adjustments to optimize performance or resolve issues.

Today’s post focuses on troubleshooting Data Pump performance and functionality issues, sharing the steps I typically follow when diagnosing problems. We’ll cover key areas to investigate, like log file analysis, parameter tuning, network considerations, and common bottlenecks. These steps aim to provide a practical guide to understanding and resolving Data Pump issues and optimizing your data movement processes.

Option 1: Generate an AWR Report to Assess Database Performance

Start by generating an AWR (Automatic Workload Repository) report to gain insight into the database’s overall performance during the relevant period. Adjusting the AWR snapshot interval to 15 minutes is recommended for a more granular view. This approach reduces the chances of averaging out short performance spikes, allowing you to capture transient issues more effectively.

exec dbms_workload_repository.modify_snapshot_settings(null, 15);
exec dbms_workload_repository.create_snapshot;
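
Once snapshots covering the Data Pump window exist, the report itself can be generated the usual way from SQL*Plus:

-- Prompts for the snapshot range and report format (text/HTML).
@?/rdbms/admin/awrrpt.sql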

Option 2: Enable SQL Trace for Data Pump Processes or Specific SQL IDs
Optionally, you can enable SQL trace for the Data Pump processes (dm for the master process and dw for worker processes) or for specific SQL statements by SQL ID. This will help isolate SQL-level performance issues affecting the Data Pump job.

alter system set events 'sql_trace {process: pname = dw | process: pname = dm} level=8';
alter system set events 'sql_trace[SQL: 8krc88r46raff]';

Option 3: Run Data Pump Job with Detailed Trace Enabled
For enhanced tracing, run the Data Pump job with additional trace options, which provide more comprehensive output. Including metrics=yes, logtime=all, and trace=1FF0300 in the command enables detailed logging of both timing and activity metrics. Tracing is enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of Export Data Pump (expdp) or Import Data Pump (impdp). The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually 0300.

expdp ... metrics=yes logtime=all trace=1FF0300
impdp ... metrics=yes logtime=all trace=1FF0300

Data Pump tracing can also be started by setting event 39089 in the initialization parameter file. This method should only be used to trace Data Pump calls at a very early stage, e.g. if details are needed about the DBMS_DATAPUMP.OPEN API call. Trace level 0x300 traces all Data Pump client processes.

-- Enable event
ALTER SYSTEM SET EVENTS = '39089 trace name context forever, level 0x300' ;
-- Disable event
ALTER SYSTEM SET EVENTS = '39089 trace name context off' ;

Option 4: Review Data Pump Trace Files
Locate and analyze the Data Pump trace files stored in the Oracle trace directory. The master control process file names typically contain *dm*, while worker process files include *dw*. These files provide insights into the processes, job details, and potential error sources during execution.
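
If you are unsure where the trace files are written on a given instance, V$DIAG_INFO points you to the directory:

-- The 'Diag Trace' row is the directory holding the *dm* and *dw* trace files.
SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Default Trace File');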

Option 5: Activate SQL_TRACE on a specific Data Pump process with a higher trace level.
Let's assume the Data Pump Master process (DM00) has SID: 143 and serial#: 50, and the Data Pump Worker process (DW01) has SID: 150 and serial#: 17. These details can be used to activate SQL tracing in SQL*Plus with DBMS_SYSTEM.SET_EV, e.g.:
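
If you first need to look up those SID and serial# values, joining V$SESSION to V$PROCESS usually does the trick:

-- Identify Data Pump master (DMnn) and worker (DWnn) sessions.
SELECT s.sid, s.serial#, p.pname, s.program
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  p.pname LIKE 'DM0%' OR p.pname LIKE 'DW0%';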

-- In SQL*Plus, activate SQL tracing with DBMS_SYSTEM and SID/SERIAL#  
-- Syntax: DBMS_SYSTEM.SET_EV([SID],[SERIAL#],[EVENT],[LEVEL],'') 

-- Example to SQL_TRACE Worker process with level 4 (Bind values):   
execute sys.dbms_system.set_ev(150,17,10046,4,''); 

-- and stop tracing: 
execute sys.dbms_system.set_ev(150,17,10046,0,'');  


-- Example to SQL_TRACE Master Control process with level 8 (Waits):  
execute sys.dbms_system.set_ev(143,50,10046,8,'');  

-- and stop tracing:  
execute sys.dbms_system.set_ev(143,50,10046,0,'');

Option 6: Use the Data Pump Log Analyzer

I’ve personally used the Data Pump Log Analyzer for some time and have found it to be incredibly user-friendly, making it simple to understand the performance and runtime statistics of Data Pump jobs. This tool is highly effective in streamlining troubleshooting efforts, quickly identifying bottlenecks, and delivering clear insights into job performance. It’s a fantastic addition to a DBA’s toolkit and provides valuable capabilities that aren’t typically found in standard scripts.

The Data Pump Log Analyzer itself is a Python-based command-line utility designed for in-depth analysis of Oracle Data Pump log files. It goes beyond basic log review by offering detailed, structured insights into key performance metrics, errors, and process details, which makes it particularly useful for DBAs who need a quick and comprehensive view of Data Pump job behavior for issue diagnosis and performance optimization. It has been tested with Data Pump log files from various database versions, including logs generated by the Data Pump client (expdp/impdp), Zero Downtime Migration (ZDM), OCI Database Migration Service (DMS), and the Data Pump API (DBMS_DATAPUMP). The tool, along with a more detailed guide on its usage, is available to read and download online.

With the Data Pump Log Analyzer, you get:

  • Detailed Operations and Processing Metrics: Granular information on data operations for pinpoint analysis.
  • Error and ORA- Code Analysis: Summaries and explanations of encountered errors for easier troubleshooting.
  • Object-Type Breakdown and Processing Times: Insight into performance by object type, aiding in performance tuning.
  • Data Pump Worker Performance: Analyzes individual worker processes for any lagging tasks.
  • Summarized Schema, Table, Partition Details: Overview of data handled by each schema, table, or partition.
  • Instance-Based Data Analysis (for Oracle 21c and later): Statistics by instance for performance evaluation in multitenant setups.
  • Flexible Output Options: Filter, sort, and export analysis results to text or HTML for efficient sharing and record-keeping.

The example below uses the basic syntax to get the operation details.

$ python3 dpla.py import.log
========================
Data Pump Log Analyzer
========================
...
Operation Details
~~~~~~~~~~~~~~~~~
Operation: Import
Data Pump Version: 19.23.0.0.0
DB Info: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0
Job Name: FATDBAJOB1
Status: COMPLETED
 Processing: -
Errors: 1301
 ORA- Messages: 1267
Start Time: 2024-08-21 01:30:45
End Time: 2024-08-21 11:43:11
Runtime: 35:03:06
Data Processing
~~~~~~~~~~~~~~~
Parallel Workers: 104
Schemas: 47
Objects: 224718
Data Objects: 188131
Overall Size: 19.11 TB

Use flag ‘-e’ to view all ORA- messages encountered during the Data Pump operation; optionally, you can filter for specific errors as well, e.g. ‘-e ORA-39082 ORA-31684’.

python3 dpla.py import.log -e
========================
Data Pump Log Analyzer
========================
...
ORA- MESSAGES DETAILS
~~~~~~~~~~~~~~~~~~~~~
(sorted by count):
Message Count
--------------------------------------------------------------------------------------------------- ---------
ORA-39346: data loss in character set conversion for object COMMENT 919
ORA-39082: Object type PACKAGE BODY created with compilation warnings 136
ORA-39346: data loss in character set conversion for object PACKAGE_BODY 54
ORA-39082: Object type TRIGGER created with compilation warnings 36
ORA-39082: Object type PROCEDURE created with compilation warnings 29
ORA-31684: Object type USER already exists 27
ORA-39111: Dependent object type PASSWORD_HISTORY skipped, base object type USER already exists 27
ORA-39346: data loss in character set conversion for object PACKAGE 18
ORA-39082: Object type PACKAGE created with compilation warnings 10
ORA-39082: Object type VIEW created with compilation warnings 7
ORA-39346: data loss in character set conversion for object PROCEDURE 2
ORA-39082: Object type FUNCTION created with compilation warnings 2
--------------------------------------------------------------------------------------------------- ---------
Total 1267
--------------------------------------------------------------------------------------------------- ---------


Use flag ‘-o’ to see details about which types of database objects were involved in the Data Pump operation.

python3 dpla.py import.log -o
========================
Data Pump Log Analyzer
========================
...
Object                                  Count      Seconds      Workers     Duration
----------------------------------      ---------- -----------  ----------- ------------
SCHEMA_EXPORT/TABLE/TABLE_DATA             188296    6759219         128       6759219
CONSTRAINT                                    767      37253           1         37253
TABLE                                        2112       3225          51           156
COMMENT                                     26442        639         128            18
PACKAGE_BODY                                  197        125         128             5
OBJECT_GRANT                                 5279         25           1            25
TYPE                                          270          6           1             6
ALTER_PROCEDURE                               149          5           2             3
ALTER_PACKAGE_SPEC                            208          4           3             2
PACKAGE                                       208          3           3             1
PROCEDURE                                     149          2           2             1

...
---------------------------------- ---------- ----------- ----------- ------------
Total 224755 6800515 128 6796697
---------------------------------- ---------- ----------- ----------- ------------


Hope It Helped!
Prashant Dixit

Addressing Stuck Undo Segments : How to Safely Drop Problematic Undo Segments

Posted by FatDBA on October 14, 2024

Hi All,

This post discusses an intriguing issue we encountered recently on a 19.22 Oracle database following a CDB restart. After the restart, we observed a peculiar problem where all sessions performing DDL commands were getting locked and hung at the PDB level. This behavior was affecting the entire database, essentially halting all DDL operations.

During our analysis, we discovered that the SMON process was waiting on a latch, leading to high CPU resource consumption. Furthermore, we noticed that the MMON process was blocking SMON, causing additional delays. The alert log revealed multiple error messages, which further complicated the diagnosis.

This issue required a deep dive into Oracle’s background processes and system-level contention to resolve, as it was causing a significant disruption to database operations.

-- Fragments from alert log, smon/mmon process logs and standard diag traces.
kcbzib: encounter logical error ORA-1410, try re-reading from other mirror..
TRCMIR:kcf_reread     :start:3722012:0:+DATA/CDBMONKEY/AA82C21DD440449FE053B4146E0AA55B/DATAFILE/tablespace_test_dataaa.xxx.xxxx
TRCMIR:kcf_reread     :done :3722012:0:+DATA/CDBMONKEY/AA82C21DD440449FE053B4146E0AA55B/DATAFILE/tablespace_test_dataaa.xxx.xxxxx
kcbzibmlt: encounter logical error ORA-1410, try re-reading from other mirror..

---> SMON: Parallel transaction recovery tried
30317 error message received from server=1.70(P01Y) qref:0x8de103cf0 qrser:5121 qrseq:3 mh:0x97fdf9460
Parallel Transaction recovery caught exception 12801
Parallel Transaction recovery caught error 30317

*** 2024-08-19T20:38:23.297997-04:00 (PWS1E(3))
Parallel Transaction recovery caught exception 30319
Parallel Transaction recovery caught error 30319

*** 2024-08-19T20:38:50.613855-04:00 (PWS1E(3))
30317 error message received from server=1.57(P01L) qref:0x8de109fe8 qrser:11265 qrseq:3 mh:0x95fccd3c8
Parallel Transaction recovery caught exception 12801
Parallel Transaction recovery caught error 30317
Parallel Transaction recovery caught exception 30319

TEST1E(3):about to recover undo segment 98 status:6 inst:0
TEST1E(3):mark undo segment 98 as available status:6 ret:1

TEST1E(3):about to recover undo segment 46 status:6 inst:0
TEST1E(3):mark undo segment 46 as available status:6 ret:1

The logs and trace files also highlighted an issue with two specific undo segments, identified by segment numbers 98 and 46, from the UNDO tablespace. Upon further investigation, we found that both segments were in a ‘RECOVERING’ state. What was particularly concerning was that the recovery process for these segments was progressing extremely slowly, with the v$fast_start_transactions view showing an unusually high estimated recovery time.

In fact, based on the progress we monitored, it seemed like the recovery process wasn’t moving forward at all and appeared to be stuck in some kind of loop. This stagnation in recovery added to the overall system’s delay, compounding the performance issues we were already facing. It became clear that this problem was a significant bottleneck in restoring the database to normal operation.

SQL> select * from V$FAST_START_TRANSACTIONS;

USN SLT SEQ STATE UNDOBLOCKSDONE UNDOBLOCKSTOTAL PID CPUTIME PARENTUSN PARENTSLT PARENTSEQ XID PXID RCVSERVERS CON_ID
---------- ---------- ---------- ---------------- -------------- --------------- ---------- ---------- ---------- ---------- 
46 46 2313064 RECOVERING 505 24992423 77 5586 0 0 0 10001000684B2300 0000000000000000 1 0
98 25 1352150 RECOVERING 0 226231 78 5586 0 0 0 30001900D6A11400 0000000000000000 1 0
	

SQL> SELECT segment_name, tablespace_name  FROM dba_rollback_segs  WHERE segment_id IN (98, 46);

SEGMENT_NAME		       TABLESPACE_NAME
------------------------------ ------------------------------
_SYSSMU46_5249279471$	       UNDOTEST1
_SYSSMU98_5249279471$	       UNDOTEST1

We attempted to take the segments offline and ultimately drop them, as they were associated with a materialized view (MV) refresh and a bulk insert statement. These operations were part of an ad-hoc activity, so it was acceptable for them to be missed. However, despite our efforts, the segments remained in a ‘PARTLY AVAILABLE’ state, leaving us with no option to drop or take them offline. This left us in a situation where we were essentially stuck, unable to proceed with dropping the segments or the associated tablespace. The inability to release these segments further complicated our recovery efforts.

We even checked the status of those two undo segments in the base table x$ktuxe, and the KTUXECFL flag was showing ‘DEAD’, meaning the transaction had failed but was still holding resources. That gave us more confidence about what had happened under the hood.

SQL> select min(sample_time), max(sample_time), sql_id, xid, count(1) from dba_hist_active_sess_history 
where xid in ('10001000684B2300','30001900D6A11400') group by sql_id, xid;

MIN(SAMPLE_TIME) MAX(SAMPLE_TIME) SQL_ID XID COUNT(1)
--------------------------------------------------------------------------- -------------------
15-SEP-24 01.22.25.446 PM 15-SEP-24 05.51.22.340 PM 30001900D6A11400 3213
15-SEP-24 10.22.46.218 AM 15-SEP-24 01.22.15.440 PM ac5hhandj9fh1 30001980D6A11400 2158 --------------> 
13-SEP-24 08.31.54.374 PM 14-SEP-24 02.53.45.723 AM annqr822no0a1 10001090684B2300 4578 -------------->
14-SEP-24 02.53.55.731 AM 15-SEP-24 05.51.22.340 PM 10001000684B2300 27781

SQL> select sql_id, sql_text from dba_hist_sqltext where sql_id in ('annqr822no0a1','ac5hhandj9fh1o');

SQL_ID SQL_TEXT
------------- --------------------------------------------------------------------------------
annqr822no0a1 INSERT INTO monkey.ah_ah3_xaa_131C (
ac5hhandj9fh1o /* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "monkey"."test_


SQL> ALTER ROLLBACK SEGMENT "_SYSSMU46_5249279471$" offline;

Rollback segment altered.

SQL> ALTER ROLLBACK SEGMENT "_SYSSMU98_5249279471$" offline;

Rollback segment altered.


SQL> SELECT segment_name, status, tablespace_name
FROM dba_rollback_segs
WHERE segment_name IN ('_SYSSMU98_5249279471$', '_SYSSMU46_5249279471$');
  2    3
SEGMENT_NAME		       STATUS		TABLESPACE_NAME
------------------------------ ---------------- ------------------------------
_SYSSMU46_5249279471$	       PARTLY AVAILABLE UNDOTEST1
_SYSSMU98_5249279471$	       PARTLY AVAILABLE UNDOTEST1


SQL> SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */ KTUXESTA Status, KTUXECFL Flags FROM x$ktuxe 
WHERE ktuxesta!='INACTIVE' AND ktuxeusn=98;

KTUXEUSN KTUXESLT KTUXESQN STATUS FLAGS
---------- ---------- ---------- ---------------- ------------------------
98 25 1352150 ACTIVE DEAD

SQL> SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */ KTUXESTA Status, KTUXECFL Flags FROM x$ktuxe 
WHERE ktuxesta!='INACTIVE' AND ktuxeusn=46;

KTUXEUSN KTUXESLT KTUXESQN STATUS FLAGS
---------- ---------- ---------- ---------------- ------------------------
46 46 2313064 ACTIVE DEAD


Given that this is a critical production system, we couldn’t afford to wait for a complete recovery of the affected undo segments. To mitigate the issue, we created a new undo tablespace and designated it as the default for the database. This action enabled us to resume normal operations while the recovery of the problematic segments continued in the background.

However, the underlying mystery remains: why are we unable to drop these segments in the production environment? To investigate further, we cloned the production database and set up a test instance. To our surprise, we replicated the same situation, where both segments 46 and 98 appeared again in a ‘PARTLY AVAILABLE’ state, providing no options for us to drop them.

In our exploration, we first adjusted the FAST_START_PARALLEL_ROLLBACK parameter, which determines the number of processes that participate in parallel rollback during transaction recovery, typically following an instance failure or a large manual rollback. We set this parameter to HIGH, as it significantly accelerates the rollback of large transactions, particularly in scenarios involving instance failures or extensive operations requiring manual rollback.
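
The change itself is a one-liner (LOW and FALSE are the other accepted values):

-- HIGH allows up to 4 x CPU_COUNT parallel recovery slaves for dead transaction rollback.
ALTER SYSTEM SET FAST_START_PARALLEL_ROLLBACK = HIGH;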

Additionally, we experimented with the undocumented parameter _OFFLINE_ROLLBACK_SEGMENTS, which tells the instance to treat the listed rollback (undo) segments as offline at startup.
Note: When dealing with hidden or undocumented parameters, it’s crucial to consult with Oracle support or rely on prior experience, as these settings can lead to unforeseen consequences in production environments.

We ran the query below to dynamically generate the ALTER statements for the segments we needed to set offline.

SQL> select 'ALTER SYSTEM SET "_OFFLINE_ROLLBACK_SEGMENTS"='||listagg(''''||segment_name||'''',',') WITHIN GROUP (ORDER BY segment_name)||' scope=spfile;' from dba_rollback_segs
where tablespace_name='UNDOTEST1' and status ='NEEDS RECOVERY';

ALTER SYSTEM SET "_OFFLINE_ROLLBACK_SEGMENTS"='_SYSSMU46_5249279471$','_SYSSMU98_5249279471$' scope=spfile;

Shut down the database and start it up normally after setting the above parameter.

shutdown immediate;
startup;

And finally, generate the drop statements:
SQL> select 'drop rollback segment "'||segment_name||'";' from dba_rollback_segs
where tablespace_name='UNDOTEST1' and status ='NEEDS RECOVERY';

drop rollback segment "_SYSSMU98_5249279471$";
drop rollback segment "_SYSSMU46_5249279471$";
 

Issue the two DROP ROLLBACK SEGMENT statements above, bounce the database again, and finally drop the problematic undo tablespace. Do not forget to reset the ‘_OFFLINE_ROLLBACK_SEGMENTS’ parameter afterwards, followed by one more bounce.

SQL>  shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>  startup;


SQL>  drop tablespace UNDOTEST1;
Tablespace dropped.


SQL>  Alter System reset "_OFFLINE_ROLLBACK_SEGMENTS";
System altered.

SQL>  shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>  startup;

Although it was a lengthy and demanding process involving numerous experiments, the results were ultimately positive. We encountered no errors and successfully dropped the problematic segments, freeing the database from the issues that had plagued it. This experience not only resolved our immediate concerns but also provided valuable insights into managing similar challenges in the future.

Hope It Helped!
Prashant Dixit
