Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

Posts Tagged ‘oracle’

Demo on how to use Oracle XStream Replication Setup Using Java Client

Posted by FatDBA on June 15, 2025

Oracle XStream is a feature of Oracle Database that allows for real-time data replication between databases. Unlike Oracle GoldenGate, which is more GUI-driven and feature-rich, XStream is more code-centric. This makes it highly customizable, especially suitable when you want a Java or C-based client to control how data flows.

XStream vs GoldenGate

  • XStream is older and less frequently updated compared to Oracle GoldenGate.
  • XStream is limited to Oracle-to-Oracle replication and does not support heterogeneous environments.
  • GoldenGate supports a wide range of sources and targets, is easier to manage with its GUI-based tools, and has better support for DDL replication, integrated capture, and coordinated replication.

When and Why to Use XStream

Use XStream when:

  • You need fine-grained control using custom applications written in Java or C.
  • Your environment is Oracle-to-Oracle only.
  • You want a lightweight, programmatic replication tool that can be embedded in your application logic.
  • Licensing or infrastructure limitations do not permit GoldenGate, or its licensing fees are a concern.

Do not use XStream if:

  • You require GUI-driven replication monitoring and setup.
  • You need heterogeneous replication (e.g., Oracle to PostgreSQL, MySQL, etc.).
  • Your use case demands continuous support and new feature releases.

Let's do a quick demo on how to use it. I have set up a lab environment for the test.

  • Source Database: dixitdb on 192.168.68.79:1521
  • Target Database: targetdb on 192.168.68.85:1521

Java Application (xio.java) will:

  • Connect to XStream Outbound on the source DB
  • Connect to XStream Inbound on the target DB
  • Transfer changes using Logical Change Records (LCRs)

Source Database Setup (dixitdb)

Step 1: Enable Replication Features

ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE=BOTH;

Step 2: Ensure Archive Logging is Enabled

archive log list;

Must show: Database log mode              Archive Mode
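
If the database is still in NOARCHIVELOG mode, enabling archiving needs a short outage. A minimal sketch using the standard commands (run as SYSDBA):

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;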

Step 3: Create and Grant User for XStream

CREATE USER xstream_admin IDENTIFIED BY Welcome123;
GRANT CONNECT, RESOURCE, DBA TO xstream_admin;

Step 4: Enable Supplemental Logging

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Step 5: Create Demo Table

CREATE TABLE xstream_admin.demo_table (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  value NUMBER
);

Step 6: Grant XStream Admin Privilege

BEGIN
  DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'XSTREAM_ADMIN',
    privilege_type => 'CAPTURE',
    grant_select_privileges => TRUE
  );
END;
/

Step 7: Create Outbound Server

BEGIN
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name => 'XOUT_SRV',
    connect_user => 'XSTREAM_ADMIN'
  );
END;
/

Check status:

SELECT capture_name, status FROM dba_capture;
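
To verify the outbound server definition itself, you can also query the DBA_XSTREAM_OUTBOUND dictionary view:

SELECT server_name, capture_name, queue_name FROM dba_xstream_outbound;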


Target Database Setup (targetdb)

Step 1: Create the Same User

CREATE USER xstream_admin IDENTIFIED BY Welcome123;
GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO xstream_admin;

Step 2: Setup Queue

BEGIN
  DBMS_XSTREAM_ADM.SET_UP_QUEUE(
    queue_table => 'xstream_admin.XIN_SRV',
    queue_name  => 'xstream_admin.XIN_SRV'
  );
END;
/

Step 3: Create Inbound Server

BEGIN
  DBMS_XSTREAM_ADM.CREATE_INBOUND(
    server_name =>'XIN_SRV',
    queue_name  =>'xstream_admin.XIN_SRV',
    apply_user  =>'xstream_admin',
    comment     =>'xstream_admin in'
  );
END;
/

Step 4: Create Target Table

CREATE TABLE xstream_admin.demo_table (
  id    NUMBER PRIMARY KEY,
  name  VARCHAR2(100),
  value NUMBER
);

Step 5: Add Table Rules

VAR dml_rule VARCHAR2(30);
VAR ddl_rule VARCHAR2(30);
BEGIN
  DBMS_XSTREAM_ADM.ADD_TABLE_RULES(
    table_name => 'xstream_admin.demo_table',
    streams_type => 'APPLY',
    streams_name => 'XIN_SRV',
    queue_name => 'xstream_admin.XIN_SRV',
    source_database => 'dixitdb',
    dml_rule_name => :dml_rule,
    ddl_rule_name => :ddl_rule
  );
END;
/

Step 6: Start Apply Process

BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'XIN_SRV');
END;
/

Check status:
SELECT apply_name, status FROM dba_apply;
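
The inbound server definition can be cross-checked the same way via DBA_XSTREAM_INBOUND:

SELECT server_name, queue_name, apply_user FROM dba_xstream_inbound;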

Java Application: xio.java

The xio.java program acts as a custom replication engine. It:

  • Connects to XStream Inbound and Outbound
  • Receives LCRs (Logical Change Records) from the source
  • Sends them to the target

Pre-Requisites:

  • ojdbc8-19.15.0.0.jar (Oracle JDBC driver)
  • xstreams.jar (from $ORACLE_HOME/rdbms/jlib/)

With both jars in place, the full xio.java source is below:

import oracle.streams.*;
import oracle.jdbc.internal.OracleConnection;
import oracle.jdbc.*;
import oracle.sql.*;
import java.sql.*;
import java.util.*;

public class xio
{
public static String xsinusername = null;
public static String xsinpasswd = null;
public static String xsinName = null;
public static String xsoutusername = null;
public static String xsoutpasswd = null;
public static String xsoutName = null;
public static String in_url = null;
public static String out_url = null;
public static Connection in_conn = null;
public static Connection out_conn = null;
public static XStreamIn xsIn = null;
public static XStreamOut xsOut = null;
public static byte[] lastPosition = null;
public static byte[] processedLowPosition = null;

public static void main(String args[])
{
// get connection url to inbound and outbound server
in_url = parseXSInArguments(args);
out_url = parseXSOutArguments(args);

// create connection to inbound and outbound server
in_conn = createConnection(in_url, xsinusername, xsinpasswd);
out_conn = createConnection(out_url, xsoutusername, xsoutpasswd);

// attach to inbound and outbound server
xsIn = attachInbound(in_conn);
xsOut = attachOutbound(out_conn);

// main loop to get lcrs
get_lcrs(xsIn, xsOut);

// detach from inbound and outbound server
detachInbound(xsIn);
detachOutbound(xsOut);
}

// parse the arguments to get the connection url to inbound db
public static String parseXSInArguments(String args[])
{
String trace, pref;
String orasid, host, port;

if (args.length != 12)
{
printUsage();
System.exit(0);
}

orasid = args[0];
host = args[1];
port = args[2];
xsinusername = args[3];
xsinpasswd = args[4];
xsinName = args[5];

System.out.println("xsin_host = "+host);
System.out.println("xsin_port = "+port);
System.out.println("xsin_ora_sid = "+orasid);

String in_url = "jdbc:oracle:oci:@"+host+":"+port+":"+orasid;
System.out.println("xsin connection url: "+ in_url);

return in_url;
}

// parse the arguments to get the connection url to outbound db
public static String parseXSOutArguments(String args[])
{
String trace, pref;
String orasid, host, port;

if (args.length != 12)
{
printUsage();
System.exit(0);
}

orasid = args[6];
host = args[7];
port = args[8];
xsoutusername = args[9];
xsoutpasswd = args[10];
xsoutName = args[11];


System.out.println("xsout_host = "+host);
System.out.println("xsout_port = "+port);
System.out.println("xsout_ora_sid = "+orasid);

String out_url = "jdbc:oracle:oci:@"+host+":"+port+":"+orasid;
System.out.println("xsout connection url: "+ out_url);

return out_url;
}

// print out sample program usage message
public static void printUsage()
{
System.out.println("");
System.out.println("Usage: java xio "+"<xsin_oraclesid> " + "<xsin_host> "
+ "<xsin_port> ");
System.out.println(" "+"<xsin_username> " + "<xsin_passwd> "
+ "<xsin_servername> ");
System.out.println(" "+"<xsout_oraclesid> " + "<xsout_host> "
+ "<xsout_port> ");
System.out.println(" "+"<xsout_username> " + "<xsout_passwd> "
+ "<xsout_servername> ");
}

// create a connection to an Oracle Database
public static Connection createConnection(String url,
String username,
String passwd)
{
try
{
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
return DriverManager.getConnection(url, username, passwd);
}
catch(Exception e)
{
System.out.println("fail to establish DB connection to: " +url);
e.printStackTrace();
return null;
}
}

// attach to the XStream Inbound Server
public static XStreamIn attachInbound(Connection in_conn)
{
XStreamIn xsIn = null;
try
{
xsIn = XStreamIn.attach((OracleConnection)in_conn, xsinName,
"XSDEMOINCLIENT" , XStreamIn.DEFAULT_MODE);

// use last position to decide where should we start sending LCRs
lastPosition = xsIn.getLastPosition();
System.out.println("Attached to inbound server:"+xsinName);
System.out.print("Inbound Server Last Position is: ");
if (null == lastPosition)
{
System.out.println("null");
}
else
{
printHex(lastPosition);
}
return xsIn;
}
catch(Exception e)
{
System.out.println("cannot attach to inbound server: "+xsinName);
System.out.println(e.getMessage());
e.printStackTrace();
return null;
}
}

// attach to the XStream Outbound Server
public static XStreamOut attachOutbound(Connection out_conn)
{
XStreamOut xsOut = null;

try
{
// when attach to an outbound server, client needs to tell outbound
// server the last position.
xsOut = XStreamOut.attach((OracleConnection)out_conn, xsoutName,
lastPosition, XStreamOut.DEFAULT_MODE);
System.out.println("Attached to outbound server:"+xsoutName);
System.out.print("Last Position is: ");
if (lastPosition != null)
{
printHex(lastPosition);
}
else
{
System.out.println("NULL");
}
return xsOut;
}
catch(Exception e)
{
System.out.println("cannot attach to outbound server: "+xsoutName);
System.out.println(e.getMessage());
e.printStackTrace();
return null;
}
}

// detach from the XStream Inbound Server
public static void detachInbound(XStreamIn xsIn)
{
byte[] processedLowPosition = null;
try
{
processedLowPosition = xsIn.detach(XStreamIn.DEFAULT_MODE);
System.out.print("Inbound server processed low Position is: ");
if (processedLowPosition != null)
{
printHex(processedLowPosition);
}
else
{
System.out.println("NULL");
}
}
catch(Exception e)
{
System.out.println("cannot detach from the inbound server: "+xsinName);
System.out.println(e.getMessage());
e.printStackTrace();
}
}

// detach from the XStream Outbound Server
public static void detachOutbound(XStreamOut xsOut)
{
try
{
xsOut.detach(XStreamOut.DEFAULT_MODE);
}
catch(Exception e)
{
System.out.println("cannot detach from the outbound server: "+xsoutName);
System.out.println(e.getMessage());
e.printStackTrace();
}
}

public static void get_lcrs(XStreamIn xsIn, XStreamOut xsOut)
{
if (null == xsIn)
{
System.out.println("xstreamIn is null");
System.exit(0);
}

if (null == xsOut)
{
System.out.println("xstreamOut is null");
System.exit(0);
}

try
{
while(true)
{
// receive an LCR from outbound server
LCR alcr = xsOut.receiveLCR(XStreamOut.DEFAULT_MODE);

if (xsOut.getBatchStatus() == XStreamOut.EXECUTING) // batch is active
{
assert alcr != null;
// send the LCR to the inbound server
xsIn.sendLCR(alcr, XStreamIn.DEFAULT_MODE);

// also get chunk data for this LCR if any
if (alcr instanceof RowLCR)
{
// receive chunk from outbound then send to inbound
if (((RowLCR)alcr).hasChunkData())
{
ChunkColumnValue chunk = null;
do
{
chunk = xsOut.receiveChunk(XStreamOut.DEFAULT_MODE);
xsIn.sendChunk(chunk, XStreamIn.DEFAULT_MODE);
} while (!chunk.isEndOfRow());
}
}
processedLowPosition = alcr.getPosition();
}
else // batch is end
{
assert alcr == null;
// flush the network
xsIn.flush(XStreamIn.DEFAULT_MODE);
// get the processed_low_position from inbound server
processedLowPosition =
xsIn.getProcessedLowWatermark();
// update the processed_low_position at outbound server
if (null != processedLowPosition)
xsOut.setProcessedLowWatermark(processedLowPosition,
XStreamOut.DEFAULT_MODE);
}
}
}
catch(Exception e)
{
System.out.println("exception when processing LCRs");
System.out.println(e.getMessage());
e.printStackTrace();
}
}

public static void printHex(byte[] b)
{
for (int i = 0; i < b.length; ++i)
{
System.out.print(
Integer.toHexString((b[i]&0xFF) | 0x100).substring(1,3));
}
System.out.println("");
}
}


Next, prepare the client environment and compile the code:

  • Download a compatible version of the ojdbc driver (ojdbc8-19.15.0.0.jar) from Oracle's website.
  • Copy xstreams.jar from $ORACLE_HOME/rdbms/jlib/.
  • Compile the Java code to create the classes:

javac -cp "ojdbc8-19.15.0.0.jar:xstreams.jar:." xio.java

Once compiled, run the Java replication app. It connects to the outbound server, reads LCRs (Logical Change Records), sends them to the inbound server, and confirms position/flush status.

[oracle@oracleontario lib]$ java -cp "ojdbc8-19.15.0.0.jar:xstreams.jar:." xio targetdb
192.168.68.85 1521 XSTREAM_ADMIN Welcome123 XIN_SRV dixitdb 192.168.68.79 1521
XSTREAM_ADMIN Welcome123 XOUT_SRV
xsin_host = 192.168.68.85
xsin_port = 1521
xsin_ora_sid = targetdb
xsin connection url: jdbc:oracle:oci:@192.168.68.85:1521:targetdb
xsout_host = 192.168.68.79
xsout_port = 1521
xsout_ora_sid = dixitdb
xsout connection url: jdbc:oracle:oci:@192.168.68.79:1521:dixitdb
Attached to inbound server:XIN_SRV
Inbound Server Last Position is: null
Attached to outbound server:XOUT_SRV
Last Position is: NULL

Now that the code is running and both source and target are ready, let's do an insert and see if it shows up on the target.

-- on source database
SQL> select * from xstream_admin.demo_table;

        ID NAME            VALUE
---------- ---------- ----------
        10 Maine             800

SQL> insert into xstream_admin.demo_table (ID, NAME, VALUE) values (101, 'Calgary', 100);

1 row created.

SQL> commit;

Commit complete.

SQL> /

        ID NAME            VALUE
---------- ---------- ----------
        10 Maine             800
       101 Calgary           100


--- on target database
SQL> select * from xstream_admin.demo_table;

        ID NAME            VALUE
---------- ---------- ----------
        10 Maine             800
       101 Calgary           100

Hope It Helped!
Prashant Dixit


Oracle Support New AI-Powered Search

Posted by FatDBA on April 8, 2025

In a major upgrade to the user experience, Oracle Support has introduced a new AI-powered search feature on its support websites … This enhancement aims to make information discovery faster, smarter, and more relevant, solving one of the most common frustrations: finding the right document among millions.

Oracle Support’s new search interface is now powered by AI, providing:

  • Faster Search Results: As you start typing your query, the AI intelligently predicts what you’re looking for.
  • Relevant Suggestions: It offers instant access to related Knowledge Base articles, Community discussions, and Official Documentation.
  • Context-Aware Results: Instead of just matching keywords, the search engine understands the intent behind your query.
  • Cleaner Interface: A streamlined design shows results by category: Knowledge Base | Community | Documentation.

For example, when you search for "How to Find the Oracle BI Publisher version," the AI doesn't just return a random list … it shows an exact-match article explaining where to find the version info, depending on whether you use a standalone install or an embedded one.

Hope It Helped!
Prashant Dixit


All new Oracle 23ai new views for enhanced Data Pump troubleshooting

Posted by FatDBA on April 2, 2025

Oracle 23ai (also known as Oracle 23c with AI features) introduces several powerful enhancements aimed at improving observability, diagnostics, and performance debugging. Among these enhancements are three new dynamic performance views designed specifically to help DBAs and developers troubleshoot Data Pump performance issues more efficiently and in real-time.

When dealing with large data exports or imports using Oracle Data Pump, performance bottlenecks or unexplained hangs can be frustrating and time-consuming to investigate. Until now, DBAs had to rely heavily on Data Pump log files, trace files, and session-level v$ views to diagnose problems. With the introduction of the following views, Oracle has taken a major step toward simplifying that process:

The three new views are: GV$DATAPUMP_PROCESS_INFO, GV$DATAPUMP_PROCESSWAIT_INFO and GV$DATAPUMP_SESSIONWAIT_INFO

These views provide real-time information about the state of Data Pump processes, their wait events, and any session-level contentions. GV$ views return cluster-wide information in a RAC environment, while V$ views return information specific to the current instance.

  • GV$DATAPUMP_PROCESS_INFO – This view shows the current Data Pump processes, including both master and worker processes. It provides basic information like the program name, session ID, username, job name, status, and system process ID.
SELECT * FROM V$DATAPUMP_PROCESS_INFO;

CUR_DATE             PROGRAM               SESSIONID  STATUS  USERNAME  JOBNAME             SPID
-------------------  --------------------  ---------  ------  --------  ------------------  -------
2023-01-09 13:56:07  ude@orcl (TNS V1-V3)  42         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891480
2023-01-09 13:56:07  oracle@orcl (DW00)    48         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  3891500

  • GV$DATAPUMP_PROCESSWAIT_INFO – This view helps detect contention between Data Pump processes. It shows which sessions are waiting, what events they are waiting for, and which other sessions may be blocking them.

SELECT * FROM GV$DATAPUMP_PROCESSWAIT_INFO;

WAITING_SESSION  HOLDING_SESSION  EVENT                 PROGRAM_WAITSESSION  PROGRAM_HOLDINGSESSION
---------------  ---------------  --------------------  -------------------  ----------------------
174              57               enq: TM - contention  oracle@orcl (DM00)   oracle@orcl (DW00)

  • GV$DATAPUMP_SESSIONWAIT_INFO – Provides deep insights into session-level waits during Data Pump operations, including how long the session has been in the wait state and what it's waiting on.

SELECT * FROM GV$DATAPUMP_SESSIONWAIT_INFO;

WAITING_SESSION  EVENT                 DP_SECONDS_IN_WAIT  DP_STATE_IN_WAIT  DP_P1TEXT  DP_P1
---------------  --------------------  ------------------  ----------------  ---------  -----
46               enq: TM - contention  8086                WAITING           name       mode
    
    

    Before Oracle 23ai, debugging Data Pump jobs required checking logs, trace files, and manual session analysis. These new views provide real-time visibility into what each Data Pump process is doing, what it is waiting on, and where it might be blocked.

    Use Cases:

    1. If a Data Pump job appears hung, GV$DATAPUMP_PROCESSWAIT_INFO can help identify which process is waiting and what it is waiting on.
    2. If sessions are slow or idle, GV$DATAPUMP_SESSIONWAIT_INFO provides detailed timing and wait reasons.
    3. If parallel execution is used, GV$DATAPUMP_PROCESS_INFO shows how many worker processes are active and whether they are all functioning as expected.
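
    These views can also be joined for quick triage. A minimal sketch (assuming the views and column names shown above) that lists each Data Pump process alongside anything it is currently waiting on:

    SELECT p.jobname, p.sessionid, p.status,
           w.event, w.holding_session
    FROM   gv$datapump_process_info p
    LEFT JOIN gv$datapump_processwait_info w
           ON p.sessionid = w.waiting_session
    ORDER BY p.jobname, p.sessionid;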

    The new Data Pump views in Oracle 23ai are a significant step forward for real-time performance diagnostics. These views provide valuable insights that were previously hard to obtain, allowing DBAs to troubleshoot and optimize Data Pump operations with much more confidence and clarity.

    Hope It Helped!
    Prashant Dixit


    Leveraging SQLT to transfer execution plans between SQL IDs using coe_load_sql_profile.sql

    Posted by FatDBA on March 11, 2025

    Hi All,

    Have you used coe_load_sql_profile.sql before? A lot of people use coe_xfr_sql_profile.sql from SQLT, and both scripts deal with SQL profiles in Oracle, but their purposes and use cases differ. coe_xfr_sql_profile.sql is used to export and migrate an existing SQL Profile from one system to another, ensuring performance stability across environments. coe_load_sql_profile.sql is used to create a new SQL Profile by capturing the execution plan from a modified SQL query and applying it to the original query, forcing it to use the optimized plan.

    Let me first explain a bit more about the toolkit. Oracle SQLT (SQLTXPLAIN) is a powerful tool designed to help DBAs analyze and troubleshoot SQL performance issues, and all of the above-mentioned scripts are part of the kit, provided by Oracle and written by none other than Carlos Sierra.

    A common question DBAs encounter is: Can we plug the execution plan of one SQL ID into another SQL ID? …. The answer is YES! This can be accomplished using the SQLT script coe_load_sql_profile.sql. In this blog, we will explore how to use this script to achieve plan stability by enforcing a preferred execution plan across different SQL IDs. The script examines both memory and AWR for the text of the SQL IDs you pass, then queries GV$SQL_PLAN and DBA_HIST_SQL_PLAN to extract the plan hash value of the modified SQL's execution plan. Once that information is collected, it loops through the modified SQL's execution plan to extract its optimizer hints. Finally, it creates a SQL Profile using DBMS_SQLTUNE.IMPORT_SQL_PROFILE.

    Let’s give a quick demo … assume we have two SQL statements:

    SQL ID 1: 78a1nbdabcba (Original SQL) …. SQL ID 2: 9na182nn2bnn (Modified SQL)
    Both queries are logically similar but produce different execution plans.
    Our goal is to take the execution plan from SQL ID 2 (the modified SQL) and apply it to SQL ID 1 (the original SQL).

    connect system/monkey123
    SQL> @coe_load_sql_profile.sql 
    or 
    SQL> START coe_load_sql_profile.sql <ORIGINAL_SQL_ID> <MODIFIED_SQL_ID>
    
    
    Parameter 1:
    ORIGINAL_SQL_ID (required)
    
    Enter value for 1: 78a1nbdabcba
    
    Parameter 2:
    MODIFIED_SQL_ID (required)
    
    Enter value for 2: 9na182nn2bnn
    
    
         PLAN_HASH_VALUE          AVG_ET_SECS
    -------------------- --------------------
              1181381381                 .003
    
    Parameter 3:
    PLAN_HASH_VALUE (required)
    
    Enter value for 3: 1181381381
    
    Values passed to coe_load_sql_profile:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ORIGINAL_SQL_ID: "78a1nbdabcba"
    MODIFIED_SQL_ID: "9na182nn2bnn"
    PLAN_HASH_VALUE: "1181381381"
    
    .
    .
    .
    
    ORIGINAL:78a1nbdabcba MODIFIED:9na182nn2bnn PHV:1181381381 SIGNATURE:16731003137917309319 CREATED BY COE_LOAD_SQL_PROFILE.SQL
    SQL>SET ECHO OFF;
    
    ****************************************************************************
    * Enter password to export staging table STGTAB_SQLPROF_78a1nbdabcba
    ****************************************************************************
    
    Export: Release 19.0.0- Production on Sun Mar 08 14:45:47 2012
    
    Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.
    
    Password:
    .
    .
    .
    
    coe_load_sql_profile completed.
    
    
    
    Now run the original query:
    SQL> select ename from DIXIT where ename='Name';
    
    Plan hash value: 1181381381
    
    ---------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| 
    ---------------------------------------------------------------
    |   0 | SELECT STATEMENT  |        |     1 |     6 |     3   (0)|
    |*  1 |  TABLE ACCESS FULL| DIXIT  |     1 |     6 |     3   (0)|
    ---------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("ENAME"='Name')
    
    Note
    -----
       - SQL profile "78a1nbdabcba_1181381381" used for this statement
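
    To confirm the profile from the dictionary, DBA_SQL_PROFILES can be queried (standard view):

    SELECT name, status, created
    FROM   dba_sql_profiles
    ORDER BY created DESC;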
       
    


    What are your experiences with enforcing execution plans in Oracle?
    Let me know in the comments!

    Hope It Helped!
    Prashant Dixit


    Resolving an interesting timezone mismatch in database sessions after migration

    Posted by FatDBA on March 8, 2025

    This happened a while ago, but it was an interesting scenario during a database migration. We encountered a unique issue where the system timezone varied depending on how the database connection was established. Specifically, when connecting to the database using a service name, the session timezone was set to EST, whereas a direct connection (without using a service name) correctly showed the timezone as UTC. This discrepancy had the potential to cause severe inconsistencies in time-sensitive transactions and logging mechanisms, prompting an urgent need for resolution.

    [oracle@xxxxxx ~]$ sqlplus monkey/xxxxxxx@MONKEYD
    SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
    
    NOW
    -------------------
    02-08-2025 13:20:07  -- Displaying EST timezone
    
    SQL> !date
    Sat Feb 8 18:20:10 UTC 2025  -- Server's system timezone is UTC
    
    
    
    When connecting without specifying a service name:
    [oracle@xxxxx 11.2.0.4]$ sqlplus monkey/xxxxxxx
    SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;
    
    NOW
    -------------------
    02-08-2025 18:15:40  -- Correctly showing UTC timezone
    
    SQL> !date
    Sat Feb 8 18:15:42 UTC 2025  -- Server's system timezone remains UTC
    

    Upon detailed investigation, we determined that the issue stemmed from the Oracle Clusterware (HAS) environment settings. Specifically:

    Listener Start Method: When the listener was started via srvctl, the database sessions picked up the incorrect timezone (EST). When the listener was started manually, the correct timezone (UTC) was applied.

    We reviewed the file /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt, which is responsible for defining environmental variables for Oracle Clusterware. The file contained the incorrect timezone setting: TZ=America/New_York # Incorrect setting enforcing EST … This setting was overriding the expected system timezone when the listener was started via srvctl.

    The previous production environment running Oracle 11g with Oracle Restart had the setting —> TZ=GMT-00:00 # Which correctly maintained UTC behavior
    The new setup on Oracle 19c had TZ=America/New_York, leading to the observed inconsistency.

    To resolve the issue, we first took a backup of the existing file before updating the TZ parameter (the backup shown here with a .bkp suffix):

    cp /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt \
       /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt.bkp

    We then modified TZ=America/New_York in the environment file /app/oracle/testdb/mygrid/19.0/crs/install/s_crsconfig__env.txt and updated it to TZ=UTC.

    Since Oracle Restart enforces the timezone setting at the environment level, a restart of services was required. This step was planned within a scheduled outage window to avoid disruption.

    srvctl stop listener
    srvctl stop database -d MONKEYDB
    crsctl stop has
    crsctl start has
    srvctl start database -d MONKEYDB
    srvctl start listener
    

    After applying the changes and restarting the necessary services, we verified that the database sessions were now consistently reporting the correct timezone (UTC):

    sqlplus monkey/xxxxxxx@MONKEYD
    SQL> SELECT TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS') AS "NOW" FROM DUAL;

    The output now correctly showed UTC time, confirming that the issue was successfully resolved.
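
    As an additional sanity check, the session and database time zone settings can be compared directly (standard functions):

    SELECT SYSTIMESTAMP, SESSIONTIMEZONE, DBTIMEZONE FROM DUAL;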

    This issue highlighted a subtle but critical misconfiguration that affected database session timezones in a clustered environment. The root cause was traced back to the Oracle Restart configuration file, which was enforcing the wrong timezone (America/New_York). By updating this setting to UTC and restarting the HAS services, we successfully restored consistency across all database sessions.

    Moving forward, these insights will help ensure that similar issues are proactively identified and mitigated during future database migrations. Proper validation of environment settings, particularly in Oracle Clusterware and HA setups, is crucial to maintaining operational integrity and preventing timezone-related anomalies in mission-critical applications.

    Hope It Helped!
    Prashant Dixit


    Resolving Performance Issues in Oracle EBS 12.1 Journal Imports Due to GL Code Combinations Exclusive Table Lock

    Posted by FatDBA on February 18, 2025

    A long-standing challenge in Oracle EBS 12.1 involves journal imports taking an excessive amount of time due to exclusive locking on the GL.GL_CODE_COMBINATIONS table. This issue significantly slows down the entire journal import process by causing serialization, ultimately impacting system performance.

    The problem arises during the journal import process when records are inserted into the GL_BALANCES table. Because of this, only one import process can proceed at a time, forcing subsequent imports to wait due to contention on the GL_CODE_COMBINATIONS table. The contention manifests as waits on the event ‘enq: TM – contention’, resulting from an exclusive table-level lock.
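
    When this happens, the blocked sessions are easy to spot. A minimal triage sketch (for the TM enqueue, wait parameter P2 carries the object ID; standard views):

    SELECT s.sid, s.serial#, s.event, o.owner, o.object_name
    FROM   v$session s
    JOIN   dba_objects o ON o.object_id = s.p2
    WHERE  s.event = 'enq: TM - contention';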

    Understanding Profile Options: Flexfields:Validate on Server and Flexfields:Shared Table Lock

    • Flexfields: Validate on Server
      • This profile decides if flexfield validations take place on the server or client.
      • Turning this profile on (Yes) or off (No) only changes where validations run, not their validity.
      • If set to Yes, validations occur on the server, decreasing client-side workload.
    • Flexfields:Shared Table Lock
      • This setting determines the locking behavior during client-side validation.
      • When set to Yes, the flexfield table (GL_CODE_COMBINATIONS) is locked in shared mode.
      • When set to No, the table is locked in exclusive mode, leading to serialization and blocking other processes.
      • Note: If Flexfields:Validate on Server is set to No, the Flexfields:Shared Table Lock profile setting takes effect.

    How These Profiles Impact Journal Import Performance : The Journal Import program is indirectly affected by these profile settings. The issue arises when the import process attempts to create new code combinations using the flexfield validation routine.

    • If Flexfields:Validate on Server is set to No, the system relies on client-side validation, which then enforces the Flexfields:Shared Table Lock setting.
    • If Flexfields:Shared Table Lock is set to No, the system applies exclusive locks on GL_CODE_COMBINATIONS, leading to serialization of imports.

    To improve journal import performance and minimize locking issues, consider the following:

    1. Set Flexfields:Validate on Server to Yes
      • This shifts validation to the server, reducing contention on GL_CODE_COMBINATIONS.
    2. Ensure Flexfields:Shared Table Lock is set to Yes (if server-side validation is disabled)
      • This prevents exclusive table-level locks and allows multiple imports to run concurrently.
    3. Optimize Code Combination Creation
      • Regularly clean up and archive unused or obsolete code combinations to reduce contention.
      • Implement proper indexing on frequently accessed flexfield tables to enhance performance.

    Hope It Helped!
    Prashant Dixit


    Materialized Views: Concurrent Refresh in Oracle 23c

    Posted by FatDBA on December 28, 2024

    Oracle Database 23c introduces a powerful new feature allowing concurrent refresh of materialized views. This enhancement improves performance by enabling multiple users to refresh the same materialized view at the same time.

    Before Oracle 23c, refreshing a materialized view (to reflect changes in the base tables) could only happen sequentially. This meant:

    • If multiple users triggered a refresh at the same time, Oracle handled them one by one.
    • This could cause delays in environments where real-time data updates were needed.

    For example, during month-end reporting, if multiple users tried to refresh a sales summary MV, they had to wait for each other, impacting productivity.

    Oracle 23c allows concurrent refreshes, enabling multiple users to refresh the materialized view simultaneously. This improves efficiency, especially for fast-refresh MVs, which update incrementally. Let’s walk through a detailed example to demonstrate how to create, configure, and utilize concurrent refresh.

    CREATE TABLE sales (
      sale_id NUMBER(10),
      sale_date DATE,
      amount NUMBER(10, 2),
      customer_id NUMBER(10),
      CONSTRAINT sales_pk PRIMARY KEY (sale_id)
    );
    
    CREATE TABLE customers (
      customer_id NUMBER(10),
      customer_name VARCHAR2(100),
      CONSTRAINT customers_pk PRIMARY KEY (customer_id)
    );
    
    
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO customers VALUES (2, 'Bob');

    -- use TO_DATE so the inserts do not depend on the session NLS_DATE_FORMAT
    INSERT INTO sales VALUES (101, TO_DATE('01-JAN-2024', 'DD-MON-YYYY'), 500, 1);
    INSERT INTO sales VALUES (102, TO_DATE('05-JAN-2024', 'DD-MON-YYYY'), 300, 1);
    INSERT INTO sales VALUES (103, TO_DATE('10-JAN-2024', 'DD-MON-YYYY'), 450, 2);

    COMMIT;
    
    

    Next will create MV logs as Materialized view logs track changes to base tables, enabling fast refresh.

    CREATE MATERIALIZED VIEW LOG ON sales
    WITH ROWID, SEQUENCE (sale_id, sale_date, amount, customer_id)
    INCLUDING NEW VALUES;
    
    CREATE MATERIALIZED VIEW LOG ON customers
    WITH ROWID, SEQUENCE (customer_id, customer_name)
    INCLUDING NEW VALUES;
    
    
    --- Create MView with concurrent refresh option.
    CREATE MATERIALIZED VIEW sales_summary_mv
    REFRESH FAST ON COMMIT
    ENABLE CONCURRENT REFRESH
    AS
    SELECT c.customer_name,
           COUNT(s.sale_id) AS total_sales,
           SUM(s.amount) AS total_amount
    FROM sales s
    JOIN customers c ON s.customer_id = c.customer_id
    GROUP BY c.customer_name;
    
    -- collect stats to reflect row changes.
    exec dbms_stats.gather_table_stats(null, 'sales_summary_mv');
    
    

    Now we can trigger concurrent refresh.

    EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F');  -- Fast refresh
    EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'C');  -- Complete refresh
    
    
    -- In our case the concurrent_refresh_enabled returns Y, means concurrent refresh is enabled.
    SELECT mview_name, concurrent_refresh_enabled
    FROM user_mviews
    WHERE mview_name = 'SALES_SUMMARY_MV';
    
    MVIEW_NAME                     CONCURRENT_REFRESH_ENABLED
    ------------------------------ ------------------------------
    SALES_SUMMARY_MV              Y
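
    Since the view is defined with REFRESH FAST ON COMMIT, a quick way to watch the refresh happen is to commit one more sale and re-query the view (a sketch reusing the demo tables above):

    INSERT INTO sales VALUES (104, TO_DATE('15-JAN-2024', 'DD-MON-YYYY'), 250, 2);
    COMMIT;

    SELECT * FROM sales_summary_mv ORDER BY customer_name;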
    

    Oracle 23c’s concurrent refresh for materialized views is a game-changer for data-intensive applications. By allowing multiple users to refresh views in parallel, businesses can significantly boost performance and ensure faster reporting.

    Hope It Helped!
    Prashant Dixit


    Effortless LOB Segment Renaming in Oracle 23ai: A Game-Changer for Database Management

    Posted by FatDBA on December 28, 2024

    Oracle Database 23ai introduces a groundbreaking feature that simplifies the renaming of LOB segments. This enhancement offers database administrators a more efficient way to manage large data objects without resorting to complex workarounds.

    LOBs play a critical role in storing massive data types such as images, documents, and videos. As databases grow and evolve, LOB segment names can become outdated or misaligned with business needs. Previously, renaming required duplicating LOBs, migrating data, and deleting old segments—an inefficient process. Before Oracle 23ai:

    • Create a New LOB Column: A new LOB column had to be created in the table with the desired segment name.
    • Migrate Data: Data from the old LOB segment had to be copied to the new LOB column.
    • Drop Old LOB: Once the migration was completed, the old LOB column and segment were dropped.
    • Rename Indirectly: This indirect process often required downtime and posed risks of data loss or corruption during migration.

    Example (Pre-23ai Approach):

    ALTER TABLE media ADD (file_data_temp BLOB);
    UPDATE media SET file_data_temp = file_data;
    ALTER TABLE media DROP COLUMN file_data;
    ALTER TABLE media RENAME COLUMN file_data_temp TO file_data;
    

    This method was cumbersome and inefficient for large databases … With Oracle 23ai, renaming LOB segments is now as simple as running a single command 🙂 🙂

    Renaming LOB segments can be achieved with the following straightforward SQL command:

    ALTER TABLE <table_name> MODIFY LOB (<lob_column>) (RENAME TO <new_segment_name>);

    Suppose you manage a media table with a LOB column file_data whose segment is currently named FILE_DATA_LOB. To rename it to MEDIA_ARCHIVE_LOB:

    ALTER TABLE media MODIFY LOB (file_data) (RENAME TO MEDIA_ARCHIVE_LOB);
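
    The result can be verified from USER_LOBS (standard dictionary view):

    SELECT table_name, column_name, segment_name
    FROM   user_lobs
    WHERE  table_name = 'MEDIA';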

    Practical Applications:

    • Data Reorganization: Update segment names to align with evolving data categories.
    • Simplifying Audits: Ensure LOB segment names clearly represent their purpose.
    • Consistency Enforcement: Standardize naming conventions across different LOB segments.

    Important Considerations

    • Dependencies: Check for application-level dependencies that reference LOB segment names.
    • User Privileges: Ensure you have the required ALTER permissions.
    • Tablespace Integrity: Segment renaming does not affect the tablespace or physical location.

    Automating LOB Segment Renaming

    For larger environments, automating LOB segment renaming can save significant time. The following PL/SQL script demonstrates how to rename multiple segments programmatically:

    BEGIN
      FOR r IN (SELECT table_name, column_name FROM user_lobs) LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || r.table_name || 
                         ' MODIFY LOB (' || r.column_name || 
                         ') (RENAME TO ' || r.column_name || '_UPDATEDLOB)';
      END LOOP;
    END;
    /
    

    This script appends _UPDATEDLOB to all LOB segments in user tables.

    Final Thoughts

    The ability to rename LOB segments directly in Oracle Database 23ai marks a significant step forward in database administration. By eliminating complex workarounds, this feature enhances efficiency and ensures database environments remain well-organized. Whether standardizing segment names or adapting to shifting business needs, this capability helps administrators to maintain cleaner, more streamlined data structures easily.

    Hope It Helped!
    Prashant Dixit


    Oracle 23ai Tip: Use SESSION_EXIT_ON_PACKAGE_STATE_ERROR to Prevent Silent Data Corruption

    Posted by FatDBA on December 28, 2024

    Oracle Database 23ai introduces a new parameter, SESSION_EXIT_ON_PACKAGE_STATE_ERROR, designed to enhance session management and prevent potential data corruption by enforcing a hard session exit when the session state becomes invalidated.

    Why SESSION_EXIT_ON_PACKAGE_STATE_ERROR Matters ?

    In typical Oracle database environments, stateful PL/SQL packages, MLE modules, or environments may be modified while sessions actively use them. This can lead to errors such as:

    • ORA-04068: Can occur when a PL/SQL package body is recompiled, invalidating the session state.
    • ORA-4106 / ORA-4107: Can be raised when an MLE module or environment is altered via DDL, invalidating the session.

    By default, the session remains active and throws an error when the invalid package or module is called. However, many applications may not properly handle these errors, leading to silent data corruption or unexpected behavior.

    The SESSION_EXIT_ON_PACKAGE_STATE_ERROR parameter mitigates this risk by forcing an immediate session exit instead of raising an error.

    Some of the benefits of using the parameter.

    • Prevents Data Corruption: By terminating sessions with invalid state, the risk of silent data corruption is reduced.
    • Simplifies Error Handling: Many applications are better at handling session disconnects than catching specific errors like ORA-04068.
    • Consistency Across Sessions: Ensures that all sessions dealing with modified packages or MLE modules are treated consistently, minimizing inconsistencies.

    How SESSION_EXIT_ON_PACKAGE_STATE_ERROR Works

    When SESSION_EXIT_ON_PACKAGE_STATE_ERROR is set to TRUE, the following behavior is enforced:

    1. PL/SQL Package Modification:
      • If a stateful PL/SQL package is modified, any active session that tries to invoke the package receives ORA-04068. With this parameter set to TRUE, the session exits immediately instead of raising the error.
    2. MLE Module or Environment Modification:
      • If an MLE module or environment is modified via DDL, active sessions receive ORA-4106 or ORA-4107. With SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE, these sessions are forcibly disconnected.
    3. Application Handling:
      • Most applications are designed to capture session disconnects and reestablish connections, streamlining recovery from session invalidation.

    Use Cases

    • High-Availability Environments: In systems where continuous uptime is critical, preventing data corruption is paramount.
    • Distributed Applications: Applications spread across multiple environments that frequently modify PL/SQL packages or MLE modules benefit from session termination to maintain data integrity.
    • Oracle RAC Deployments: Different instances in an Oracle RAC environment can independently configure this parameter, allowing fine-grained control based on workload requirements.

    Configuring SESSION_EXIT_ON_PACKAGE_STATE_ERROR:

    Examples:
    ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
    ALTER SESSION SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
    ALTER SYSTEM SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE SCOPE = SPFILE;
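
    A quick way to see the behavior (a sketch; MY_PKG stands in for any hypothetical stateful package):

    ALTER SESSION SET SESSION_EXIT_ON_PACKAGE_STATE_ERROR = TRUE;
    -- session 1: call MY_PKG so it caches package state
    -- session 2: ALTER PACKAGE my_pkg COMPILE BODY;
    -- session 1: the next call to MY_PKG now exits the session
    --            instead of raising ORA-04068 to the application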

    Considerations

    • Default Behavior: By default, this parameter is set to FALSE, meaning sessions will raise errors rather than exit.
    • Testing and Validation: Test this configuration in lower environments to ensure application compatibility.
    • Session Management: Monitor session disconnects to ensure that forced exits do not disrupt critical workflows.

    Conclusion

    SESSION_EXIT_ON_PACKAGE_STATE_ERROR is a powerful new feature in Oracle Database 23ai that enhances session management by enforcing session termination on package or module state invalidation. By using this parameter, Oracle environments can significantly reduce the risk of data corruption and streamline error handling processes across diverse applications. Whether managing PL/SQL packages or MLE modules, this parameter offers greater control and reliability for database administrators and developers both.

    Hope It Helped!
    Prashant Dixit


    Database Migration Challenges : JServer JAVA Virtual Machine gets INVALID or UPGRADING during manual upgrade

    Posted by FatDBA on December 14, 2024

    Migrations can be a horrifying experience: tricky, complex, time-intensive, and often riddled with unexpected challenges. This becomes even more evident when you're migrating between older database versions, where architectural and component-level changes are significant. I remember one such encounter during a migration from Oracle 11g to 19c on new infrastructure. Using RMAN DUPLICATE with the NOOPEN clause to restore the source database backup on the target before calling the manual upgrade procedures, the process seemed smooth initially but soon ran into a host of issues with key database components.

    The Problem

    During the upgrade process, several critical components failed, leaving the database in an inconsistent state. The errors revolved around the following components:

    COMP_ID   COMP_NAME                      VERSION      STATUS
    --------  -----------------------------  -----------  ---------
    JAVAVM    JServer JAVA Virtual Machine   11.2.0.4.0   UPGRADING
    XML       Oracle XDK                     19.0.0.0.0   INVALID
    CATJAVA   Oracle Database Java Packages  19.0.0.0.0   INVALID

    The errors observed in dbupgrade runtime logs included:

    ORA-29554: unhandled Java out of memory condition
    ORA-06512: at "SYS.INITJVMAUX", line 230
    ORA-06512: at line 5

    ORA-29548: Java system class reported: release of Java system classes in the database
    (11.2.0.4.190115) does not match that of the oracle executable (19.0.0.0.0 1.8)

    These errors stemmed from a failure to allocate sufficient memory during the upgrade process. The Java Virtual Machine (JVM) ran out of memory, causing cascading errors that invalidated other components like Oracle XDK and Java Database Packages (CATJAVA). This wasn’t a mere inconvenience—it meant that critical database functionality was broken, making the system unusable for applications relying on these components.

    Root Cause

    Upon investigation, we found that the issue was caused by using a temporary RMAN parameter file during the restore process. This parameter file contained a minimal set of initialization parameters, which were insufficient to handle the resource-intensive operations required during the upgrade, particularly for recompiling and validating Java components.

    Key memory areas like the SGA, shared pool, large pool, and Java pool were inadequately configured. These areas play a crucial role during the execution of upgrade scripts such as dbupgrade, catctl.pl, or catupgrd.sql. Without sufficient memory, the upgrade process for these components failed midway, leaving them in an invalid state.

    The Fix

    To resolve these issues and ensure the migration proceeded smoothly, the following steps were taken:

    Step 1: Adjust Initialization Parameters

    The first step was to allocate adequate memory for the Java components to prevent out-of-memory conditions. Critical parameters like the Java pool and other memory pools were adjusted to handle the load during the upgrade process:

    ALTER SYSTEM SET java_jit_enabled = TRUE;
    ALTER SYSTEM SET "_system_trig_enabled" = TRUE;
    ALTER SYSTEM SET java_pool_size = 180M; -- Ensure at least 150 MB is allocated
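
    Before restarting the upgrade, the effective Java pool allocation can be double-checked (standard view):

    SELECT component, current_size/1024/1024 AS size_mb
    FROM   v$sga_dynamic_components
    WHERE  component = 'java pool';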

    Step 2: Recreate the Java Component

    The next step was to drop and recreate the Java component in the database. This ensured that any inconsistencies caused by the previous upgrade failure were cleaned up:

    CREATE OR REPLACE JAVA SYSTEM;

    Step 3: Restart the Upgrade Scripts

    After fixing the memory settings and recreating the Java component, the upgrade process was restarted using Oracle’s upgrade utilities:

    • dbupgrade: The recommended tool for 19c migrations.
    • catctl.pl: For manual control over the upgrade process.
    • catupgrd.sql: A fallback script for older methods.

    Logs such as upg_summary.log were closely monitored during the process to catch any errors or exceptions in real-time.

    Step 4: Verify the Upgrade

    Once the upgrade process was completed, the status of all components was verified using the DBA_REGISTRY and DBA_REGISTRY_HISTORY views:

    SELECT SUBSTR(comp_name, 1, 30) comp_name,
           SUBSTR(version, 1, 20) version,
           status
    FROM   dba_registry
    ORDER BY comp_name;

    Expected output:

    COMP_NAME                      VERSION              STATUS
    ------------------------------ -------------------- ---------------
    JServer JAVA Virtual Machine   19.0.0.0.0           UPGRADED

    Key Takeaways

    This experience highlighted several crucial lessons when handling database migrations, especially for major version upgrades like 11g to 19c:

    1. Adequate Initialization Parameters Are Essential

    The memory-related initialization parameters (java_pool_size, shared_pool_size, etc.) must be configured appropriately before starting the upgrade process. Using a minimal parameter file during RMAN DUPLICATE can lead to critical issues if not adjusted later.

    2. Resource-Intensive Components Need Extra Attention

    Components like JAVAVM, Oracle XDK, and CATJAVA are highly resource-intensive. Even slight memory misconfigurations can lead to cascading failures that disrupt the entire migration process.

    3. Monitor Upgrade Logs Closely

    Keeping an eye on upgrade runtime logs and the summary logs (upg_summary.log) is vital for catching errors early. This allows you to address issues promptly before they snowball into larger problems.

    4. Understand Dependencies

    Database components often have interdependencies. For instance, a failure in the Java Virtual Machine component affected both the Oracle XDK and CATJAVA packages. Understanding these dependencies is key to resolving issues effectively.

    Conclusion

    Database migrations are inherently challenging, especially when dealing with major version jumps. This particular experience from migrating Oracle 11g to 19c served as a valuable reminder of the importance of preparation, thorough testing, and paying close attention to resource configurations. With the right approach, even complex migrations can be navigated successfully, ensuring the database is ready for modern workloads and enhanced performance.

    By addressing these pitfalls and being proactive, you can ensure a smoother upgrade process and avoid unnecessary downtime or functionality issues.

    Let me know if this approach resonates with your migration experiences!

    Hope It Helped!
    Prashant Dixit
