Tales From A Lazy Fat DBA

Loves all databases! – It's all about performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Archive for August, 2021

Oracle zero downtime migration 21.2

Posted by FatDBA on August 31, 2021

I am happy and excited, and I guess this time not much writing or explanation is needed, as this single image says it all 🙂
Pic courtesy: Oracle Corp

Hope It Helped!
Prashant Dixit


When and why did the optimizer switch from CBO to RBO mode … and why can't I see that in the 10053 trace ??

Posted by FatDBA on August 28, 2021

Hi Guys,

Recently I was working on a performance issue where a critical SQL statement had started consuming more time. After a quick check I saw that the optimizer had switched from CBO to RBO mode, but I wasn't sure when and why the switch happened. The expected answer to my quest was to generate the debug 10053 trace file, to get some insight into the cost-based optimizer's internal calculations, check the cardinality and selectivity, and draw a good parallel with the way the cost of a table access, index access or sort may be calculated.

Usually the best way to work out what's going on in this situation is to look at the optimizer debug trace, event 10053. I always prefer to generate optimizer traces in situations where the mighty optimizer has messed things up. As a performance consultant, it has saved me many times in the past; always the best bet for me.
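For readers who have never generated one, here is a minimal sketch of enabling the 10053 trace for a single session; the tracefile identifier and the EXPLAIN PLAN statement are just placeholders for your own problem SQL.

-- Tag the trace file so it is easy to spot in the diag trace directory
ALTER SESSION SET tracefile_identifier = 'cbo_10053_demo';

-- Switch the CBO debug trace on for this session
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Hard-parse the problem statement (a hypothetical query is shown);
-- the trace is only written when the statement is actually optimized
EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;

-- Switch the trace back off
ALTER SESSION SET EVENTS '10053 trace name context off';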

But this time things looked a little different: I couldn't see the details about why the optimizer switched modes in the 'Query' section of the trace. I was totally perplexed; I mean, this was not the first time I was looking for that information in a trace file. Why wasn't it there, what happened .. 😦

This was an Oracle 19.3.0.0.0 database running on RHEL. I tried Metalink and found one document specific to this issue, and luckily it was all happening due to a known bug, 31130156. The problem was later solved after we applied the bug-fix patch and interpreted the reason for the mode switch (I will write another post about the core problem) …

Note: The 10053 optimizer trace can be very difficult to interpret if you don't have any prior experience with it. I recommend readers check the great paper written by Wolfgang Breitling titled 'A Look Under The Hood Of CBO'.

Hope It Helped!
Prashant Dixit


OGG-01201 Error reported by MGR Access denied

Posted by FatDBA on August 22, 2021

Hi Guys,

Last week I encountered a problem with an old GG setup running on v12.2, where the Extract was failing with errors OGG-01201/OGG-01668 while doing the initial load.

ERROR   OGG-01201  Oracle GoldenGate Capture for Oracle, exld1.prm:  Error reported by MGR : Access denied
ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, exld1.prm:  PROCESS ABENDING

This 'access denied' error appeared even though the login information was correct for both the source and target systems. I was thoroughly confused and wasn't sure what was causing the issue!

What I came to know after reading a particular piece of documentation is that in version 12.2 of GG, the default behavior is that the MANAGER and the EXTRACT/REPLICAT processes under it cannot be started or stopped remotely, because by default there is only a deny rule. I was doing the initial load on the source server, which attempts to start the Replicat on the target server, and that is where I hit the error. This is a security feature to prevent unauthorized access to the Oracle GoldenGate Manager process and the processes under its control.

The solution to the problem is to add an ACCESSRULE entry such as "ACCESSRULE, PROG *, IPADDR *, ALLOW" to the Manager parameter file on the target system, something like below. The ACCESSRULE parameter controls access to Oracle GoldenGate from remote systems.

-- GoldenGate Manager Parameter File (mgr.prm) on Target system
--
userid xxxxxxx, password xxxxxxx
PORT 7810
ACCESSRULE, PROG REPLICAT, IPADDR 10.11.01.15, ALLOW
PURGEOLDEXTRACTS ./dirdat/rp*, USECHECKPOINTS, MINKEEPHOURS 4

Here you can also set a rule priority using PRI (0-99). PROG specifies the program and can be GGSCI, GUI, MGR/MANAGER, REPLICAT, COLLECTOR|SERVER, or * for all programs (the default). IPADDR specifies which IP addresses may access the specified program, LOGIN specifies a login ID used together with the RMTHOST configuration, and ALLOW | DENY grants or denies the access. A slightly broader sketch follows.
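As a hypothetical illustration (the host IP, priorities and rule ordering are mine, so verify against your GG release), rules can be stacked, with the lower PRI value taking precedence when several rules match:

-- Allow GGSCI administration from one trusted host only, deny all else
ACCESSRULE, PROG GGSCI, IPADDR 10.11.01.20, PRI 1, ALLOW
ACCESSRULE, PROG *, IPADDR *, PRI 99, DENY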

Hope It Helped!
Prashant Dixit


New JSON features – The old lovely database is new again with 21c

Posted by FatDBA on August 21, 2021

Oracle Database supports relational, graph, spatial, text, OLAP, XML, and JSON data – yes, all at once in one database.
Oracle Database 21c provides a native JSON data type in binary format. This data type can be used in tables, uses less space, and is faster. Its uniquely designed Oracle binary JSON format (OSON) is able to speed up both OLAP and OLTP workloads over JSON documents.

I recently did some tests and found that the JSON datatype is now fully integrated into all components of the 21c database, with a few new things added to improve its performance. This post is all about the JSON datatype in the Oracle 21c database ecosystem: new features, improvements etc.

So, before I move ahead, I would like to first build a foundation for the readers. Let's create a table with JSON data and do some examples.

-- Create a table with a JSON datatype column
CREATE TABLE testOrder
 (did NUMBER PRIMARY KEY, jdoc JSON);


-- Let's insert some data into the table
INSERT INTO testOrder
VALUES (1, '{"testOrder": {
  "podate": "2015-06-03",
  "shippingAddress": {"street": "3467 35th Ave",
      "city": "Clara", "state": "CA", "zip": 94612},
  "comments": "Discounted sales Foundation Day",
  "sparse_id": "PFHA35",
  "items": [
    {"name": "TV", "price": 341.55, "quantity": 2,
     "parts": [
       {"partName": "remoteCon", "partQuantity": 1},
       {"partName": "antenna", "partQuantity": 2}]},
    {"name": "PC", "price": 441.78, "quantity": 10,
     "parts": [
       {"partName": "mousepad", "partQuantity": 2},
       {"partName": "keyboard", "partQuantity": 1}]}
]}}');



-- Do some SELECT ops
SELECT did,
       po.jdoc.testOrder.podate.date(),
       po.jdoc.testOrder.shippingAddress,
       po.jdoc.testOrder.items[*].count(),
       po.jdoc.testOrder.items[1]
FROM testOrder po
WHERE po.jdoc.testOrder.podate.date() =
        TO_DATE('2015-06-03','YYYY-MM-DD')
  AND po.jdoc.testOrder.shippingAddress.zip.number()
        BETWEEN 94610 AND 94620;


-- Generate JSON from relational data with the JSON {} constructor
-- (lineItems_rel is a relational table with itemName, price, quantity)
SELECT JSON {
         'name'  : li.itemName,
         'sales' : li.price * li.quantity
       }
FROM lineItems_rel li;




-- That's how to UPDATE, using JSON_TRANSFORM
UPDATE testOrder po
SET jdoc = JSON_TRANSFORM(jdoc,
      REPLACE '$.testOrder.shippingAddress.city' = 'Oakland',
      REPLACE '$.testOrder.shippingAddress.zip' = 94607,
      SET '$.testOrder.contactPhone' =
          JSON('["(415)-867-8560","(500)312-8198"]'),
      REMOVE '$.testOrder.sparse_id',
      APPEND '$.testOrder.items' =
          JSON('{"name": "iphone", "price": 635.54, "quantity": 2}'))
WHERE po.jdoc.testOrder.podate.date() =
      TO_DATE('2015-06-03','YYYY-MM-DD');

So, that's how you can create, query and update JSON data in a table. Pretty cool, right? 🙂
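To verify the stored document after such changes, one small sketch (JSON_SERIALIZE has been available since 19c) is to read the binary JSON back as pretty-printed text:

-- Read the stored binary JSON (OSON) back as formatted text
SELECT JSON_SERIALIZE(po.jdoc PRETTY)
FROM testOrder po
WHERE po.did = 1;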

Okay, coming back to the purpose of the post – what is new in Oracle 21c in terms of JSON support?

  • The native JSON data type first appeared in Oracle 20c, but since 20c was a preview-only release, the type is generally available starting with version 21c.
  • Prior to 21c, users could only use a single-value functional index to accelerate JSON_VALUE() predicate evaluation; such an index was bounded to indexing at most one value per row, meaning a field value could have at most one occurrence. In 21c, a user can create a multivalue functional index on a JSON datatype column to index elements within a JSON array. This speeds up the evaluation of JSON_EXISTS(), the operator that allows equivalence predicates over arrays in the SQL/JSON path language (see the sketch after this list).
  • Oracle 21c includes several other enhancements to the JSON functionality in the database, such as the JSON_SCALAR function, which creates an instance of the JSON type from a SQL scalar value.
  • The JSON_TRANSFORM function was introduced in Oracle Database 21c to make JSON data alterations simpler, as in the UPDATE example above.
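As a minimal sketch against the testOrder table from above (the index name is invented), a multivalue index over the item prices and a JSON_EXISTS() query that can benefit from it might look like this:

-- 21c multivalue functional index over elements of the items array
CREATE MULTIVALUE INDEX testorder_items_price_idx
  ON testOrder t (t.jdoc.testOrder.items[*].price.number());

-- JSON_EXISTS() with a range predicate on the indexed array elements
SELECT did
FROM testOrder po
WHERE JSON_EXISTS(po.jdoc, '$.testOrder.items[*]?(@.price > 400)');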

Hope It Helped!
Prashant Dixit


What's new in GoldenGate version 21c?

Posted by FatDBA on August 20, 2021

Hi Guys,

Oracle recently released GoldenGate version 21.1, immediately after releasing database version 21c (21.3) for on-prem. Today's post is all about the new features and changes that arrived with this new GG version.

  • Oracle GoldenGate is available with Microservices Architecture only : starting with this release, Oracle GoldenGate ships with the Microservices Architecture only.
  • Automatic Extract of tables with supplemental logging is supported : Oracle GoldenGate provides a new auto_capture mode to capture changes for all the tables that are enabled for logical replication. You can list the tables enabled for auto-capture using the LIST TABLES AUTO_CAPTURE command option. Use the TRANLOGOPTIONS INTEGRATEDPARAMS auto_capture option to set up automatic capture.
  • Oracle native JSON datatype is supported : Oracle GoldenGate capture and apply processes now support the new native JSON datatype, which is supported by Oracle Database 21c and higher.
  • Enhanced Automatic Conflict Detection and Resolution for Oracle Database 21c
  • Autonomous Database Extract is supported : Oracle GoldenGate can now capture from the Autonomous Databases in OCI.
  • Large DDL (greater than 4 MB) replication is supported : DDL statements larger than 4 MB in size are now supported for replication.
  • DB_UNIQUE_NAME with heartbeat table : DB_UNIQUE_NAME is available with the heartbeat table to allow users to uniquely identify the source of the heartbeat.
  • Oracle GoldenGate binaries are no longer installed on a shared drive : Oracle always recommended installing the Oracle GoldenGate binaries (OGG_HOME) on a local file system as a best practice. From this release onward, it is a requirement. The binaries must be installed on local drives.
  • Partition Filtering
  • A new Extract needs to be created when the DB timezone is changed : you need to create a new Extract if the DB timezone changes, especially in Oracle Cloud deployments.
  • DB_UNIQUE_NAME with trail file header : DB_UNIQUE_NAME is added in the trail file header along with DB_NAME, which helps in troubleshooting replication in active-active environments, where replicas often share the same DB_NAME but each replica site can be identified uniquely by its DB_UNIQUE_NAME.
  • Per PDB Capture
  • Parallel Replicat Core Infrastructure Support for Heterogeneous Databases : Parallel Replicat is supported with SQL Server, DB2 z/OS, and MySQL.

Release announcement link : https://blogs.oracle.com/dataintegration/oracle-goldengate-21c-release-announcement

Hope It Helped
Prashant Dixit


Kafka Producer: Error while fetching metadata with correlation id INVALID_REPLICATION_FACTOR

Posted by FatDBA on August 14, 2021

Hi Guys,

Recently I was working on a replication project where we used Kafka to move data from source to target. I tried to create a test topic using the Kafka producer console and was immediately kicked out with an error that says "INVALID_REPLICATION_FACTOR". We were doing this on a test VM with a single CPU and limited system resources.

[root@cantowintert bin]#
[root@cantowintert bin]# kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic first_topic
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
>hello prashant
[2021-07-26 08:18:30,051] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 40 : {first_topic=INVALID_REPLICATION_FACTOR} (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:18:30,154] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 41 : {first_topic=INVALID_REPLICATION_FACTOR} (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:18:30,260] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 42 : {first_topic=INVALID_REPLICATION_FACTOR} (org.apache.kafka.clients.NetworkClient)
... the same WARN repeats for every metadata retry (correlation ids 43 through 51) ...
^Corg.apache.kafka.common.KafkaException: Producer closed while send in progress
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:909)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
        at kafka.tools.ConsoleProducer$.send(ConsoleProducer.scala:71)
        at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:53)
        at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: Requested metadata update after close
        at org.apache.kafka.clients.producer.internals.ProducerMetadata.awaitUpdate(ProducerMetadata.java:126)
        at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:1047)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:906)
        ... 4 more
[root@cantowintert bin]#

Let's check what was captured in the Kafka server startup logs; there we found the hint: the requested replication factor (RF) is greater than the number of available brokers.

org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
[2021-07-26 08:24:45,723] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (cantowintert.bcdomain/192.168.20.129:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:25:06,830] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (cantowintert.bcdomain/192.168.20.129:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:25:27,950] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (cantowintert.bcdomain/192.168.20.129:9092) could not be established. Broker may not be available. 

The solution to the problem is to uncomment the listeners line in the broker's server.properties, bind it to an explicit address, restart the broker, and try the topic all over again. The default entry

listeners=PLAINTEXT://:9092

was changed to:

listeners=PLAINTEXT://127.0.0.1:9092
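After restarting the broker, the topic can also be recreated explicitly with a replication factor that fits a single-broker sandbox (a hypothetical sketch; kafka-topics.sh accepts --bootstrap-server from Kafka 2.2 onward):

# Recreate the test topic with replication factor 1 on the lone broker
kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --create \
  --topic first_topic --partitions 1 --replication-factor 1

# Then retry the console producer
kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic first_topic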

Hope It helped
Prashant Dixit


What is that strange looking TRANLOGOPTIONS EXCLUDETAG in parameter file ?

Posted by FatDBA on August 11, 2021

Hi Guys,

Recently someone asked why we have this strange-looking entry in a GoldenGate Extract parameter file, which reads 'TRANLOGOPTIONS EXCLUDETAG 00'. Why is it there, and what are those numbers? I was able to explain the purpose to him, and would like to write a short post about it.

OGG v12.1.2 introduced a new EXTRACT parameter, TRANLOGOPTIONS EXCLUDETAG. It is typically used to exclude the REPLICAT user's transactions in bi-directional configurations. When Extract is in classic or integrated capture mode, use the TRANLOGOPTIONS parameter with the EXCLUDETAG option. This directs the Extract process to ignore transactions that are tagged in redo with the specified tag. For example:

extract exttestpd
useridalias clouduser
EXTTRAIL ./dirdat/rp, format release 12.1
ddl include all
ddloptions addtrandata, report
tranlogoption excludetag 00 
TABLE dixituser.*;

Changes made by an integrated Replicat are tagged in redo as 00 by default, so adding the Extract parameter TRANLOGOPTIONS EXCLUDETAG 00 would exclude those operations. The tag can also be set explicitly in Replicat using:

DBOPTIONS SETTAG 0885

Then in EXTRACT param:

TRANLOGOPTIONS EXCLUDETAG 0885

TRANLOGOPTIONS EXCLUDETAG 00 therefore prevents the GoldenGate Extract from capturing transactions applied by the replication itself, which are tagged with "00" by default. EXCLUDETAG ensures that we don't run into problems with ping-pong (loop-back) updates.
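To make the pairing concrete, here is a hypothetical sketch of the two parameter files on one site of a bi-directional setup (process names, alias and schema are invented):

-- Replicat on site A, applying changes arriving from site B;
-- everything it applies is tagged 0885 in the local redo
replicat repb2a
useridalias clouduser
DBOPTIONS SETTAG 0885
MAP dixituser.*, TARGET dixituser.*;

-- Extract on site A, capturing local changes for site B;
-- redo tagged 0885 is skipped, so applied changes never loop back
extract exta2b
useridalias clouduser
EXTTRAIL ./dirdat/ab
TRANLOGOPTIONS EXCLUDETAG 0885
TABLE dixituser.*;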

Some other possible examples of using this parameter are below (EXCLUDETAG + excludes all tagged transactions, regardless of the tag value) …

TRANLOGOPTIONS EXCLUDETAG 00
TRANLOGOPTIONS EXCLUDETAG +
TRANLOGOPTIONS EXCLUDETAG 0991
TRANLOGOPTIONS EXCLUDETAG 2222 4444

Hope It Helped
Prashant Dixit


 