Tales From A Lazy Fat DBA

Fan of Oracle DB & Performance, PostgreSQL & Cassandra … \,,/


Connecting Oracle database from PostgreSQL using ORACLE_FDW (Foreign Data Wrappers)

Posted by FatDBA on August 27, 2020

Hi All,

Today I received a call from a teammate who was facing issues in PostgreSQL while trying to access a remote table that exists on an Oracle database. I instantly recalled something similar I had tried a few years back using Oracle Foreign Data Wrappers (oracle_fdw). So, I put together a quick demo showing the steps to do that using the FDW.

Source: EnterpriseDB 10.12.20 on x86_64-pc-linux-gnu
Target: Oracle Enterprise 12c Release 2 (12.2.0.1.0)

Software Used:
Oracle Instant Clients (Need both BASIC and DEVEL packages)
Link: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

Oracle FDW Software: Version 2.2.1
Link: https://pgxn.org/dist/oracle_fdw/2.2.1/

Below are the files that I have downloaded.


-rw-rw-r--.  1 enterprisedb enterprisedb 143K Aug 27 17:12 oracle_fdw-2.2.1.zip
-rw-rw-r--.  1 enterprisedb enterprisedb  51M Aug 27 17:26 oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
-rw-rw-r--.  1 enterprisedb enterprisedb 593K Aug 27 17:27 oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
drwxrwxr-x.  5 enterprisedb enterprisedb 4.0K Aug 27 22:43 oracle_fdw-2.2.1

[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
 


Let’s install both of the instant client packages on the server.


[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
[sudo] password for enterprisedb:
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ sudo -Uvh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
sudo: unknown user: vh
[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
 


Now let’s quickly check if all the required libraries are in place.


[oracle@fatdba client64]$ cd lib/
[oracle@fatdba lib]$ pwd
/usr/lib/oracle/12.2/client64/lib
[oracle@fatdba lib]$
[oracle@fatdba lib]$
[oracle@fatdba lib]$ ls -ll
total 216568
lrwxrwxrwx. 1 root root        21 Aug 27 22:59 libclntshcore.so -> libclntshcore.so.12.1
-rw-rwxr--. 1 root root   8033199 Jan 26  2017 libclntshcore.so.12.1
lrwxrwxrwx. 1 root root        17 Aug 27 22:59 libclntsh.so -> libclntsh.so.12.1
-rw-rwxr--. 1 root root  71638263 Jan 26  2017 libclntsh.so.12.1
-rw-rwxr--. 1 root root   2981501 Jan 26  2017 libipc1.so
-rw-rwxr--. 1 root root    539065 Jan 26  2017 libmql1.so
-rw-rwxr--. 1 root root   6568149 Jan 26  2017 libnnz12.so
lrwxrwxrwx. 1 root root        15 Aug 27 22:59 libocci.so -> libocci.so.12.1
-rw-rwxr--. 1 root root   2218687 Jan 26  2017 libocci.so.12.1
-rw-rwxr--. 1 root root 124771800 Jan 26  2017 libociei.so
-rw-rwxr--. 1 root root    158543 Jan 26  2017 libocijdbc12.so
-rw-rwxr--. 1 root root    380996 Jan 26  2017 libons.so
-rw-rwxr--. 1 root root    116563 Jan 26  2017 liboramysql12.so
-rw-r--r--. 1 root root   3984814 Jan 26  2017 ojdbc8.jar
-rw-rwxr--. 1 root root    312974 Jan 26  2017 ottclasses.zip
-rw-r--r--. 1 root root     37494 Jan 26  2017 xstreams.jar
[oracle@fatdba lib]$
 


Yup, they are all there, perfect!
Now let’s set the client library path using the LD_LIBRARY_PATH variable, as shown below.
This is to avoid errors such as:

ERROR:  could not load library "/opt/edb/as10/lib/oracle_fdw.so": libclntsh.so.12.1: cannot open shared object file: No such file or directory 


[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$ export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
[enterprisedb@fatdba oracle_fdw-2.2.1]$
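One thing the transcript doesn’t show is compiling and installing oracle_fdw itself from the unzipped sources. A minimal sketch using PGXS; the /opt/edb/as10/bin path for pg_config is an assumption, so adjust it to your EDB installation:

[enterprisedb@fatdba oracle_fdw-2.2.1]$ export PATH=/opt/edb/as10/bin:$PATH    # so make can find pg_config (path assumed)
[enterprisedb@fatdba oracle_fdw-2.2.1]$ export ORACLE_HOME=/usr/lib/oracle/12.2/client64
[enterprisedb@fatdba oracle_fdw-2.2.1]$ make
[enterprisedb@fatdba oracle_fdw-2.2.1]$ make install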
 


Now let’s do a quick restart of the postgresql instance.


[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$ pg_ctl -D /opt/edb/as10/data restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2020-08-27 23:06:33 CEST [3641]: [1-1] user=,db=,remote= LOG:  listening on IPv4 address "0.0.0.0", port 5444
2020-08-27 23:06:33 CEST [3641]: [2-1] user=,db=,remote= LOG:  listening on IPv6 address "::", port 5444
2020-08-27 23:06:33 CEST [3641]: [3-1] user=,db=,remote= LOG:  listening on Unix socket "/tmp/.s.PGSQL.5444"
2020-08-27 23:06:33 CEST [3641]: [4-1] user=,db=,remote= LOG:  redirecting log output to logging collector process
2020-08-27 23:06:33 CEST [3641]: [5-1] user=,db=,remote= HINT:  Future log output will appear in directory "custompg_logdir".
 done
server started
[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$ psql -d enterprisedb -U enterprisedb
psql.bin (10.12.20)
Type "help" for help.
 


All good, let’s create the EXTENSION in PostgreSQL.


enterprisedb=#
enterprisedb=#
enterprisedb=# create extension oracle_fdw;
CREATE EXTENSION
enterprisedb=#
 


Next we will create the foreign server for the remote database I need to connect to, Oracle 12.2 in my case.
Note: Here 10.0.0.130 is the IP address of the machine where Oracle is running, and ‘fatdb’ is the instance name.


enterprisedb=#
enterprisedb=#
enterprisedb=#
enterprisedb=# create server oracle foreign data wrapper oracle_fdw options (dbserver '//10.0.0.130/fatdb' );
CREATE SERVER
enterprisedb=#
 


Next is to create the USER MAPPING for the remote user I would like to connect as, the ‘migr’ user in my case, along with that user’s password.


enterprisedb=# create user mapping for enterprisedb server oracle options (user 'migr', password 'oracle90');
CREATE USER MAPPING
enterprisedb=#
enterprisedb=#
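One step the transcript doesn’t show: before the remote table can be queried, it must be exposed locally as a foreign table (or imported via IMPORT FOREIGN SCHEMA). A minimal sketch; the column types are inferred from the query output further below, so adjust them to the real Oracle definition:

enterprisedb=# create schema migr;
enterprisedb=# create foreign table migr.bigtab1 (
                 id           integer,
                 created_date timestamp,
                 lookup_id    integer,
                 data         varchar(100)
               ) server oracle options (schema 'MIGR', table 'BIGTAB1');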
 


All set, you can now try to query the remote table.


enterprisedb=#
enterprisedb=# select * from migr.bigtab1;
  id  |    created_date    | lookup_id |            data
------+--------------------+-----------+----------------------------
  320 | 19-MAY-19 02:10:38 |         1 | This is some data for 320
  321 | 19-MAY-18 02:10:38 |         2 | This is some data for 321
  322 | 19-MAY-19 02:10:38 |         1 | This is some data for 322
  323 | 19-MAY-20 02:10:38 |         3 | This is some data for 323
  324 | 19-MAY-18 02:10:38 |         2 | This is some data for 324
  325 | 19-MAY-20 02:10:38 |         3 | This is some data for 325
  326 | 19-MAY-19 02:10:38 |         1 | This is some data for 326
  327 | 19-MAY-18 02:10:38 |         2 | This is some data for 327
  328 | 19-MAY-19 02:10:38 |         1 | This is some data for 328
  329 | 19-MAY-20 02:10:38 |         3 | This is some data for 329
  330 | 19-MAY-18 02:10:38 |         2 | This is some data for 330
  331 | 19-MAY-20 02:10:38 |         3 | This is some data for 331
  332 | 19-MAY-19 02:10:38 |         1 | This is some data for 332
 


Let’s have some more fun and try an insert.


enterprisedb=#
enterprisedb=# select count(*) from migr.bigtab1;
 count
-------
  1000
(1 row)

enterprisedb=# insert into migr.bigtab1 values(1001, '19-MAY-19 02:10:38', 4, 'This is some data for prashant');
INSERT 0 1
enterprisedb=# select count(*) from migr.bigtab1;
 count
-------
  1001
(1 row)

enterprisedb=# select * from migr.bigtab1 where id=1001;
  id  |    created_date    | lookup_id |              data
------+--------------------+-----------+--------------------------------
 1001 | 19-MAY-19 02:10:38 |         4 | This is some data for prashant
(1 row)

enterprisedb=#
enterprisedb=#
 

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: | 1 Comment »

Datastax Certified Cassandra Administrator, some tips & more

Posted by FatDBA on August 21, 2020

Hi Guys,

With the sharp rise in NoSQL databases, many organizations are making the transition from traditional databases to distributed, high-performance databases like Cassandra. Cassandra has become one of Apache’s most popular projects. Though there are multiple NoSQL databases available in the market, none offers the same combination of features: peer-to-peer architecture, high availability and fault tolerance, column-based storage, high performance, schemaless design, tunable consistency, great analytical possibilities, easy scale-up and scale-down, a distributed design, and the list goes on and on.

Cassandra has already proved its mettle and is magical for IoT, sensor data, event-based and time-series data, voucher-generation systems, and other data models. DataStax provides best-in-class database management software and a wide range of services with 24×7 support to get more from your Cassandra. Alongside come some really cool features and tools, e.g. OpsCenter (GUI), NodeSync (for anti-entropy repairs), great Solr integration, dsetool (similar to nodetool but with more capabilities), sstableloader, a pre-flight check tool, yaml file compare tools, stress tools, extra commands such as dsefs, and many more.

DataStax is a pioneer and has its own Cassandra certification path/track to prove you have valid credentials to work with the Cassandra database, either as a developer or as an administrator. Now the question is: where to start? In fact, many of you have asked me about my latest credential, ‘Datastax Apache Cassandra 3.x Administrator Associate‘, with questions like how to prepare, how to book the exam, and so on. So, this post will cover how to prepare and book the exam, along with a few tips.

I would always prefer to go point wise to make things more ordered and easy to digest.

1. Create your account on Datastax Academy.
Link: https://auth.cloud.datastax.com/auth/realms/CloudUsers/login-actions/registration?client_id=absorb&tab_id=lv4-57nRbu4

2. Go to the ‘Catalog’ option to browse the available courses.
You have to choose between the Administrator (3-course curriculum) and Developer (3-course curriculum) tracks. I completed the ADMIN path, which has three courses: DS101 (Introduction), DS201 (Foundations) and DS210 (Operations with Apache Cassandra). All of the courses are beautifully designed and contain a large number of demos, presentations, guides, quizzes, and a pre-built Ubuntu VM where you can do all the exercises.

Though the presentations and program cover every major topic and parameter, if you still want to read in depth, they have their own documentation collection, accessible at https://docs.datastax.com/en/landing_page/doc/landing_page/current.html or https://cassandra.apache.org/doc/latest/

Note: There are a few other specialized courses available within the catalog too, e.g. Kafka connectors, DSE Graph, DSE Analytics, DSE Search etc.

3. Other learning platforms
Github: https://github.com/datastax
Can be very useful, especially if you are preparing for the developer track.
YouTube: Full of great presentations, videos, workshops and demos.
https://www.youtube.com/user/DataStaxMedia
Twitter: For news (about webinars etc.), press releases and other exciting information.
https://twitter.com/DataStax (@DataStax)

4. All set!
Once you are done with all three courses under the ADMIN track, you are ready for the certification. Go to the ‘Datastax Certification’ widget within the catalog and book your exam by creating a profile on their certification website.
https://certification.mettl.com/datastax/applicant/signup

Currently they are giving out free exam vouchers, issued at the end of the series to workshop participants.

5. Once registered, you have to choose your exam type – Admin or Developer.
Both exams have 60 questions to complete within 90 minutes; the exam fee (right now) is $145.
Note: It’s good to check your system compatibility before the exam; for more details follow their official guidelines.

So don’t wait: go enroll in the courses, grab the chance to take the certification for free, and, more importantly, stand out from the crowd. These widely accepted and recognized credentials will help your continued professional development, are an ideal way to gain a greater understanding of your industry, and enhance your knowledge and skills. They also offer excellent chances to network with Cassandra geeks.

Hope It Helps!
Prashant Dixit

Posted in Basics, Uncategorized | Tagged: | Leave a Comment »

Passed ‘Datastax Apache Cassandra 3.x Administrator Associate’ Certification

Posted by FatDBA on August 21, 2020

Hi Guys,

News to share: today I cleared the ‘Datastax Apache Cassandra 3.x Administrator Associate‘ certification. I have been working with Cassandra for the past few months, and it’s always good to get certified by the market leader. A good way to stand out from the crowd!

Will soon write a blog about the preparation and the process to get certified from Datastax, stay tuned!

Hope It Helps!
Prashant D

Posted in Basics | Tagged: | Leave a Comment »

Cassandra node refused to start – MismatchedInputException: No content to map due to end-of-input

Posted by FatDBA on August 5, 2020

Hi All,

This one will be a quick one, in an error-and-solution format.
This time it’s about a Cassandra startup error I faced while doing some testing on a POC box, and how I resolved it.

In this case I was using DataStax Cassandra 6.8.1 on RHEL 6. I tried starting the instance, but it failed with an error saying “MismatchedInputException: No content to map due to end-of-input”.
I attempted to start it a few more times but ended up with the same error message. Below are the full error details.


WARN  [main] 2020-08-05 13:43:46,277  DatabaseDescriptor.java:1517 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO  [main] 2020-08-05 13:43:46,281  DseDelegateSnitch.java:39 - Setting my workloads to [Cassandra]
INFO  [main] 2020-08-05 13:43:46,358  YamlConfigurationLoader.java:77 - Configuration location: file:/home/cassandra/dse-6.8.1_node1/resources/cassandra/conf/cassandra.yaml
INFO  [main] 2020-08-05 13:43:46,360  DseDelegateSnitch.java:41 - Initialized DseDelegateSnitch with workloads [Cassandra], delegating to com.datastax.bdp.snitch.DseSimpleSnitch
INFO  [main] 2020-08-05 13:43:46,424  YamlConfigurationLoader.java:77 - Configuration location: file:/home/cassandra/dse-6.8.1_node1/resources/cassandra/conf/cassandra.yaml
ERROR [main] 2020-08-05 13:43:47,904  DseModule.java:126 - Unable to start server. Exiting...
org.apache.cassandra.io.FSReadError: com.fasterxml.jackson.databind.exc.MismatchedInputException: No content to map due to end-of-input
 at [Source: (BufferedInputStream); line: -1, column: 0]
        at com.datastax.bdp.db.nodes.Nodes.transactionalRead(Nodes.java:195)
        at com.datastax.bdp.db.nodes.Nodes.access$100(Nodes.java:76)
        at com.datastax.bdp.db.nodes.Nodes$Local.<init>(Nodes.java:477)
        at com.datastax.bdp.db.nodes.Nodes.<init>(Nodes.java:97)
        at com.datastax.bdp.db.nodes.Nodes$Instance.setup(Nodes.java:911)
        at org.apache.cassandra.config.DatabaseDescriptor.applyMetadata(DatabaseDescriptor.java:587)
        at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:234)
        at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:249)
        at com.datastax.bdp.DseCoreModule.<init>(DseCoreModule.java:84)
        at com.datastax.bdp.DseModule.getRequiredModules(DseModule.java:163)
        at com.datastax.bdp.server.AbstractDseModule.configure(AbstractDseModule.java:27)
        at com.datastax.bdp.DseModule.configure(DseModule.java:89)
        at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
        at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
        at com.google.inject.spi.Elements.getElements(Elements.java:110)
        at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
        at com.google.inject.Guice.createInjector(Guice.java:96)
        at com.google.inject.Guice.createInjector(Guice.java:73)
        at com.google.inject.Guice.createInjector(Guice.java:62)
        at com.datastax.bdp.ioc.DseInjector.get(DseInjector.java:36)
        at com.datastax.bdp.DseModule.main(DseModule.java:103)
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: No content to map due to end-of-input
 at [Source: (BufferedInputStream); line: -1, column: 0]
 


Now, let’s talk about the fix I applied to get rid of the problem. Before that, I recalled that the database had been shut down abruptly the last time, due to a server/host crash. The culprit is the ‘local’ file under the nodes metadata directory. It contains local startup information like data center, release info, schema version, native transport address, port info (JMX, storage etc.) and broadcast address, and it was apparently left empty by the crash, hence the “No content to map due to end-of-input” error.


[cassandra@fatdba-doccass bin]$ cd /var/lib/cassandra/metadata/nodes/
[cassandra@fatdba-doccass nodes]$ ls
local  peers  snapshots
[cassandra@fatdba-doccass nodes]$ rm -rf local
[cassandra@fatdba-doccass nodes]$
[cassandra@fatdba-doccass nodes]$
[cassandra@fatdba-doccass nodes]$ ls -ltrh
total 4.0K
drwxrwxr-x. 2 cassandra cassandra 6 Jul 27 16:07 snapshots
-rw-rw-r--. 1 cassandra cassandra 1 Aug  4 18:29 peers
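In hindsight, a slightly safer variant would have been to move the file aside rather than delete it outright, in case it needed inspection later; a hedged sketch:

[cassandra@fatdba-doccass nodes]$ mv local local.bak    # preserve the corrupt file for later inspection instead of rm -rf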
 

Now let’s try to start the node once again.


[cassandra@fatdba-doccass bin]$
[cassandra@fatdba-doccass bin]$ ./dse cassandra
 


And it started successfully after clearing the stale local metadata left over from the crash.

Hope It Helps
Prashant Dixit

Posted in Advanced, troubleshooting | Tagged: , | Leave a Comment »

High Latch Free waits due to Result Cache: RC Latch contention

Posted by FatDBA on June 16, 2020

Hi Everyone,

This is one of the latest issues I’ve handled: on a 12c (12.1.0.2.0) non-RAC production box we suddenly started seeing a huge spike in database workload (AAS). As usual I started with some live monitoring using scripts, the OEM Performance page and other tools, and saw huge numbers of sessions waiting on ‘latch free’ (an ‘Other’-class event). The event was contributing more than 87% of total DB Time, with exceptionally high average latch wait times of 54 ms (yes, that’s far too much for latch gets). What else caught my eye was ‘enq: RC – Result Cache: Contention’ with average wait times of 19.31 ms.

But let’s not jump directly to conclusions. Next I checked SQLs with excessive buffer scans (‘SQL ordered by Gets’) and found the statement that caused the mess, with an exceptionally high number of buffer gets; as you may be aware, every scan of the in-memory buffers (db buffer cache) requires acquiring a latch. This one SQL was responsible for around 85% of total buffer gets, and interestingly, within the three hours covered by the AWR report not even a single execution had completed.

Alright, the next task was to identify which latch was causing the ‘latch free’ waits, i.e. the latch shortage in the database. For that I checked the ‘Latch Statistics’ section, ‘Latch Sleep Breakdown’ subsection, to see which latch was most frequently missed and had the most sleeps, and there was the culprit: ‘Result Cache: RC Latch’ showed a huge miss and sleep ratio. The latch was sleeping mainly on ‘Result Cache: Serialization12’ and a little on ‘Result Cache: Serialization01’.

Next, just to be doubly sure, I checked the P2 value of the ‘latch free’ event, which was coming up as 559. Then I queried V$LATCHNAME to see whether it matched what we had identified so far. And yes, it was the same RC latch!
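If you want to reproduce that check, the latch number being waited on is exposed as the P2 parameter of the ‘latch free’ wait event; a minimal sketch:

SELECT p2 AS latch#, COUNT(*) AS sessions
FROM   gv$session
WHERE  event = 'latch free'
GROUP  BY p2
ORDER  BY sessions DESC;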


> SELECT latch#,name FROM gv$latchname WHERE latch#=559;

    LATCH# NAME
---------- ----------------------------------------------------------------
       559 Result Cache: RC Latch
 

You can do deeper analysis and troubleshooting on such issues; Tanel Poder has written a great presentation for anyone who wants to dig deep and pinpoint the addresses, code paths etc.
Download a copy of the document here: Oracle_Latch_And_Mutex_Contention_Troubleshooting

Alright, now that the latch name and SQL were identified, let’s look closer at the result cache: result_cache_mode was set to MANUAL, which made the contention strange. We then saw that the SQLs using the result cache contained DS_SVC hints, i.e. they were part of dynamic statistics activity.
I then looked for official documentation on the subject, and luckily found Oracle Doc ID 2002089.1, which matched my problem almost exactly.

And as per the document, the workaround we can apply is to disable the Adaptive Dynamic Statistics mechanism.


alter system set "_optimizer_ads_use_result_cache" = FALSE;
 

Alternatively, one-off patch 26436717 can be requested and applied for a permanent fix.
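To confirm the workaround has taken effect, the miss and sleep counters of the latch can be re-checked over time; a minimal sketch:

SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  name = 'Result Cache: RC Latch';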

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

Golden Gate 19.1.0 – VIEW REPORT & VIEW PARAMS command failed

Posted by FatDBA on March 24, 2020

Hi Folks,

I would like to share one of the latest problems I’ve faced with GoldenGate release 19.1.0, where two of the most frequently used commands, VIEW REPORT and VIEW PARAMS, simply fail in the administration client console. This is a Windows Server environment where I was doing some testing with the new release.
The commands produce no additional logs or information; the console simply tries to wake up and crashes within the next few seconds.

So, with no idea about the problem, and this being a new release of Oracle GoldenGate (19.1) with no MetaLink note or public documentation available, I reached out to Oracle Support. They gave a very simple but logical solution: set the default text viewer program for viewing parameter and report files once again, by issuing the following command. It worked.


SET PAGER notepad 


They later declared this issue a bug, with bug number 30427030, though the details are yet to be published on the MetaLink website.


Stay inside, learn something new during this self isolation & stop the spread of Covid-19.
Stay Healthy and Safe Everyone!

Prashant Dixit
 

Posted in troubleshooting | Tagged: , | Leave a Comment »

How to enable RAT feature and some Troubleshooting

Posted by FatDBA on February 28, 2020

Hi Everyone,

This post is all about preparing your database before you start using RAT on it, as it’s a little tricky. You have to enable the feature at the binary level before you start using RAT replay, and you also have to patch your database with a few mandatory RAT-related patches.

Specifically, this post covers enabling the feature at the binary level, and one issue I faced immediately after that.

My database, 11.2.0.4.0 (2-node RAC), has the RAT feature set to FALSE, which means the option is not yet enabled.


SQL> select * from v$option where parameter like 'Real Application Testing';

PARAMETER                                                        VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
Real Application Testing                                         FALSE
 

Alright, the next step is to enable the feature at the binary level. To do that you have to run the ‘make’ command and link the rat_on option into your Oracle Home. I have a 2-node RAC here; I am starting with node 1, and once done I will repeat the steps on node 2.


[oracle@Testdb lib]$ make -f ins_rdbms.mk rat_on ioracle
/usr/bin/ar d /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/libknlopt.a kecnr.o
/usr/bin/ar cr /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/kecwr.o
chmod 755 /u01/app/oracle/product/11.2.0/dbhome_1/bin

 - Linking Oracle
rm -f /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/oracle
gcc  -o /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/oracle -m64 -z noexecstack -L/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/11.2.0/dbhome_1/lib/ 
-L/u01/app/oracle/product/11.2.0/dbhome_1/lib/stubs/   -Wl,-E /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/ttcsoi.o  -Wl,--whole-archive -lperfsrv11 -Wl,--no-whole-archive /u01/app/oracle/product/11.2.0/dbhome_1/lib/nautab.o /u01/app/oracle/product/11.2.0/dbhome_1/lib/naeet.o /u01/app/oracle/product/11.2.0/dbhome_1/lib/naect.o /u01/app/oracle/product/11.2.0/dbhome_1/lib/naedhs.o /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/config.o  -lserver11 -lodm11 -lcell11 -lnnet11 -lskgxp11 -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11  -lvsn11 -lcommon11 -lgeneric11 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap11" ; fi` -lslax11 -lpls11  -lrt -lplp11 -lserver11 -lclient11  -lvsn11 -lcommon11 -lgeneric11 `if [ -f /u01/app/oracle/product/11.2.0/dbhome_1/lib/libavserver11.a ] ; then echo "-lavserver11" ; else echo "-lavstub11"; fi` `if [ -f /u01/app/oracle/product/11.2.0/dbhome_1/lib/libavclient11.a ] ; then echo "-lavclient11" ; fi` -lknlopt -lslax11 -lpls11  -lrt -lplp11 -ljavavm11 -lserver11  -lwwg  `cat /u01/app/oracle/product/11.2.0/dbhome_1/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/oracle/product/11.2.0/dbhome_1/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lmm -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lztkg11 `cat /u01/app/oracle/product/11.2.0/dbhome_1/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/oracle/product/11.2.0/dbhome_1/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11   -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `if /usr/bin/ar tv /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo11"; fi` -L/u01/app/oracle/product/11.2.0/dbhome_1/ctx/lib/ -lctxc11 -lctx11 -lzx11 -lgx11 -lctx11 -lzx11 -lgx11 -lordimt11 -lclsra11 -ldbcfg11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -locr11 -locrb11 -locrutl11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11  -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lsnls11 -lunls11  -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lasmclnt11 -lcommon11 -lcore11 -laio    `cat /u01/app/oracle/product/11.2.0/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/11.2.0/dbhome_1/lib -lm    `cat /u01/app/oracle/product/11.2.0/dbhome_1/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/11.2.0/dbhome_1/lib

test ! -f /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle ||\
           mv -f /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracleO
mv /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
[oracle@Testdb lib]$
 

Alright, so it’s done. No errors during the enablement process. Let’s connect to the database and check the status of the feature.


[oracle@Testdb ~]$ !sql
sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Sep 16 22:14:59 2019

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected.
SQL> alter system switch logfile;

ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0
 

Oops, why is that? I have set the environment variables and everything is in place, so why is it not letting me connect to the database, claiming I am not logged on? This is strange!

Let’s check if anything was captured within the alert log and other traces.


Mon Jan 16 22:19:48 2019
WARNING: Oracle executable binary mismatch detected.
 Binary of new process does not match binary which started instance
issue alter system set "_disable_image_check" = true to disable these messages
Process J000 died, see its trace file
kkjcre1p: unable to spawn jobq slave process
Errors in file /u01/app/oracle/diag/rdbms/Testdb/Testdb1/trace/Testdb1_cjq0_176885.trc:
Mon Jan 16 22:19:49 2019
WARNING: Oracle executable binary mismatch detected.
 Binary of new process does not match binary which started instance
issue alter system set "_disable_image_check" = true to disable these messages
Mon Jan 16 22:19:49 2019
Errors in file /u01/app/oracle/diag/rdbms/Testdb/Testdb1/trace/Testdb1_j000_1389.trc:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54322 (dba), current egid = 54321 (oinstall)
Mon Jan 16 22:19:49 2019
WARNING: Oracle executable binary mismatch detected.
 Binary of new process does not match binary which started instance
issue alter system set "_disable_image_check" = true to disable these messages
 

The binary-mismatch warning itself is expected: we relinked the oracle binary while the instance was running, so newly spawned processes no longer match the binary that started the instance, and it can be suppressed by setting “_disable_image_check” to true. The real hint, though, is in the ORA-27303 line: the startup egid was 54322 (dba) while the current egid is 54321 (oinstall), pointing to a group mismatch on the relinked executable.

So next I checked the ownership and permissions of the ‘oracle’ executable under the ORACLE_HOME/bin folder.


[oracle@Testdb bin]$ ls -ll oracle
-rwsr-s--x 1 oracle oinstall 228395182 Sep 16 23:02 oracle
 


The group has changed from ‘dba’ to ‘oinstall’, exactly as the ORA-27303 message suggested.
Let me revert the group and re-assign the original one.


[oracle@Testdb bin]$ chown oracle:dba oracle
 


Second, I noticed a change in the file permissions: the ‘setuid’ flag was missing, while it is present on node 2 where we didn’t make any change. I reverted the permissions as well!


[oracle@Testdb bin]$ ls -ll oracle
-rwxr-x--x 1 oracle dba 228395182 Sep 16 23:02 oracle

[oracle@Testdb bin]$ chmod 6751 oracle

[oracle@Testdb bin]$ ls -ll oracle
-rwsr-s--x 1 oracle dba 228395182 Sep 16 23:02 oracle
 

And it worked: no errors this time, and I was able to connect to the database using SQL*Plus.


SQL> select * from v$option where parameter like 'Real Application Testing';

PARAMETER                                                        VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
Real Application Testing                                         TRUE

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
Testdb  READ WRITE

SQL> select name, open_mode from gv$database;

NAME      OPEN_MODE
--------- --------------------
Testdb  READ WRITE
Testdb  READ WRITE
 

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

RAT (Real Application Testing) Replay using Filters

Posted by FatDBA on February 18, 2020

Hi Everyone,

This is my fourth post in a row about one of Oracle’s most underutilized tools, Real Application Testing (RAT), which is no doubt an extremely cost-effective and easy-to-use proactive performance management solution that enables businesses to fully assess the outcome of a system change in test or production.

Alright, today’s post is all about RAT replay in circumstances where you want to filter something out of your captured workload. For example, you have captured 1 hour of production workload that you want to play on your target database, but not all of it: you want to exclude a few of the schemas.

Well, this can easily be done at load capture time as well, but in case you missed that, it can also be done at replay time. So, today’s post is all about that. Once again I will explain the method using a step-wise approach.

Step 1:
Suppose you want to exclude the user named DIXIT.
Create an exclusion filter for username DIXIT.


BEGIN
DBMS_WORKLOAD_REPLAY.ADD_FILTER (fname => 'ignore_dixit30', fattribute => 'USER', fvalue => 'DIXIT');
END;
/
 

Step 2:
Next you need to create the filter set which will be used by the replay during execution. This is where you pass the replay directory details, the name of the filter set you want, and the default action, binding it to the replay.


exec DBMS_WORKLOAD_REPLAY.CREATE_FILTER_SET (replay_dir => 'RATOCT10', filter_set => 'MyReplayFilterdixit30', default_action => 'INCLUDE');
 

Step 3:
Next you need to initialize the replay.


execute DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY('REPLAY_30OCT19', 'RATOCT10');
 

Step 4:
The next step is to use the filter set which we created in step 2 above.
Note: The replay MUST be initialized before calling USE_FILTER_SET.


exec DBMS_WORKLOAD_REPLAY.USE_FILTER_SET (filter_set => 'MyReplayFilterdixit30');
 

Step 5:
Next you can verify whether the filter set and filters were created, using the replay-related dictionary views below.


select CAPTURE_ID, SET_NAME, FILTER_NAME, VALUE from DBA_WORKLOAD_REPLAY_FILTER_SET;
select * from DBA_WORKLOAD_FILTERS where value='DIXIT';
 

Step 6:
Next, proceed with preparing the replay.

execute DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(synchronization => 'SCN')
 

Step 7:
Start the recommended number of workload clients on the host(s).
Note: It’s not necessary to use the debug and workdir parameters; we only used them to generate extra diagnostic logs/traces in case the replay fails. If they’re not needed, only the mode and replaydir parameters are required.


RAC Node 1:
nohup wrc system/xxxxxx mode=replay replaydir=/exp_pd/ratreplaydiroct/ratoctober10th_main workdir=/exp/ratreplaydiroct/traces_forreplay_date10012019 debug=ON > wrc301.log 2>&1&

nohup wrc system/xxxxxx mode=replay replaydir=/exp_pd/ratreplaydiroct/ratoctober10th_main workdir=/exp/ratreplaydiroct/traces_forreplay_date10012019 debug=ON > wrc302.log 2>&1&


RAC Node 2:
nohup wrc system/xxxxxx mode=replay replaydir=/exp_pd/ratreplaydiroct/ratoctober10th_main workdir=/exp/ratreplaydiroct/traces_forreplay_date10012019 debug=ON > wrc303.log 2>&1&

nohup wrc system/xxxxxx mode=replay replaydir=/exp_pd/ratreplaydiroct/ratoctober10th_main workdir=/exp/ratreplaydiroct/traces_forreplay_date10012019 debug=ON > wrc304.log 2>&1&
 

Step 8:
With all the above steps completed and no errors reported, we can start the replay, which will use the exclusion filters and play the rest of the load on the target database.


execute DBMS_WORKLOAD_REPLAY.START_REPLAY();
 

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: | Leave a Comment »

Oracle Real Application Testing (RAT) – Part 3 : What is Replay and how to do it ?

Posted by FatDBA on February 14, 2020

Hi Guys,

Continuing from where I left off last time …
This is Part 3, the final part of this three-post series on Oracle Real Application Testing (RAT). This part mainly deals with the replay side, the target end where you finally replay, or trigger, the captured workload.

Alright, so let’s get started!
I am starting immediately after step 6 (which we covered in Part 2).

Step 7:
Next, move all files from the capture directory to the target system (a 12c POC VM box in this case). I created a directory for REPLAY purposes on the target to receive all these files and play them later.

Next, transfer the files from the source and match the file count on the target directory; it should be the same.
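The transfer itself can be done with any copy tool; a hedged sketch using scp (the directory paths are illustrative, not from the original setup):

$ scp /u01/rat/capture_dir/* oracle@target-host:/u01/rat/replay_dir/    # copy capture files to the target
$ ls /u01/rat/capture_dir | wc -l                                       # file count on the source ...
$ ls /u01/rat/replay_dir | wc -l                                        # ... should match the count on the target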

RAT Replay (General) Steps:

Step 1:
The first step in preparation is to process the CAPTURE in the replay directory.
Note: This is a one-time activity, only needed the first time. If the load was already processed and you have replayed it earlier, there is no need to repeat this step.


BEGIN
DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir=>'RAT_TRIALRUN');
END;
/
 

Step 2:
RESTORE DB: At this point we need to restore the database to a point prior to when the capture was started. Note: No need to perform this step if it was already done using import or GRPs (guaranteed restore points).

Step 3:
Next is to put the DB in a ready state: we have to initialize the replay.

execute DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY('REPLAY_TRIALRUN1', 'RAT_TRIALRUN'); 

Step 4:
Next we will PREPARE the replay. There are many values you can pass to the ‘synchronization’ parameter, and many other parameters to tailor the replay. We settled on SCN-based synchronization for the PREPARE_REPLAY step.


execute DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(synchronization => 'SCN'); 

Step 5:
Next, determine the required number of workload clients to run and replay the load on the target database.
For that we have to run the wrc utility in calibrate mode.


wrc system/xxxx mode=calibrate replaydir=/nfs_exp/prodrattest/ratdir_02092019
 

Step 6:
Next, fire up the recommended number of wrc (workload replay) clients on the target database to replay the captured workload.
For example, if 8 is the recommended number of replay clients, we can start them like this:

nohup wrc system/oracle@ mode=replay replaydir=/home/replay/rat & 

On RAC Node 1:
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019

On RAC Node 2:
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
wrc system/XXXX mode=replay replaydir=/nfs_exp/prodrattest/ratdir_02092019
 

Step 7:
When all the above steps are completed, we can start replaying the workload.

execute DBMS_WORKLOAD_REPLAY.START_REPLAY(); 

Step 8:
Monitoring the Replay
There are many ways to monitor the replay process; I will share a few scripts and discuss a few methods that can be used to check/verify replay progress.

Way 1:
Monitor the progress of the replay using the dynamic view DBA_WORKLOAD_REPLAYS.


alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
SELECT id, name, dbname,
       parallel, status, to_char(start_time,'dd/mm/yyyy hh24:mi') start_time,
       to_char(end_time,'dd/mm/yyyy hh24:mi') end_time, duration_secs/60 duration_mins,
       num_clients, user_calls, dbtime, rac_mode,
       awr_begin_snap, awr_end_snap, awr_exported
FROM   dba_workload_replays
WHERE  id = 2;
 


Way 2:
To check everything happening in the database, including the load pushed by the wrc clients.
This is a join of gv$sqlarea and gv$session to get session and SQL details.


select x.inst_id,x.sid ,x.serial#,x.username,sqlarea.parsing_schema_name,x.sql_id
 ,sqlarea.plan_hash_value
 ,sqlarea.LAST_ACTIVE_TIME
 ,sqlarea.DISK_READS
 ,sqlarea.BUFFER_GETS
 ,sqlarea.ROWS_PROCESSED
 ,sqlarea.CPU_TIME
 ,sqlarea.LAST_LOAD_TIME
 ,x.event,x.osuser,x.status
 ,x.process,x.machine,x.program,x.module,x.action
 ,TO_CHAR(x.LOGON_TIME, 'MM-DD-YYYY HH24:MI:SS') logontime
 ,x.LAST_CALL_ET
 ,x.SECONDS_IN_WAIT,x.state,sql_text,
 ltrim(to_char(floor(x.LAST_CALL_ET/3600), '09')) || ':'
  || ltrim(to_char(floor(mod(x.LAST_CALL_ET, 3600)/60), '09')) || ':'
  || ltrim(to_char(mod(x.LAST_CALL_ET, 60), '09'))    RUNNING_SINCE
 from   gv$sqlarea sqlarea,gv$session x
 where  x.sql_hash_value = sqlarea.hash_value
 and    x.sql_address    = sqlarea.address
 and    sql_text not like '%select x.inst_id,x.sid ,x.serial# ,x.username ,x.sql_id ,x.event%'
 and    sql_text not like '%select x.inst_id,x.sid ,x.serial#,x.username,sqlarea.parsing_schema_name,x.sql_id ,sqlarea.plan_hash_value%'
 and    x.status='ACTIVE'
 and x.USERNAME is not null
 and x.SQL_ADDRESS    = sqlarea.ADDRESS
 and x.SQL_HASH_VALUE = sqlarea.HASH_VALUE
 order by RUNNING_SINCE desc;
 

Way 3:
Below is a way to estimate replay progress as a percentage completed. The original post attaches the code as replay_progress_percentage.
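Since the attachment isn’t reproduced here, a minimal sketch of one common approach, comparing user calls replayed so far against the capture’s total:

SELECT r.id, r.name, r.status,
       ROUND(r.user_calls / c.user_calls * 100, 2) AS pct_complete
FROM   dba_workload_replays r
       JOIN dba_workload_captures c ON c.id = r.capture_id
WHERE  r.status = 'IN PROGRESS';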

Step 9:
Reporting
The next step is to generate some RAT-specific reports, helpful for reviewing replay performance and other statistics.


DECLARE
  rep_rpt CLOB;
BEGIN
  rep_rpt := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => 1, format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
  dbms_output.put_line(DBMS_LOB.SUBSTR(rep_rpt, 32767, 1));
END;
/
 

Below is the code to generate Capture Vs Replay reports.


var report_bind clob
begin
DBMS_WORKLOAD_REPLAY.COMPARE_PERIOD_REPORT (replay_id1 => 1, replay_id2 => null, format => 'HTML', result => :report_bind);
end;
/
print report_bind
 

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: | Leave a Comment »

Oracle Real Application Testing (RAT) – Part 2: What is Capture & how to do it ?

Posted by FatDBA on February 10, 2020

Hi Folks,

Continuing the subject I started in my last post, Real Application Testing (RAT). This post is all about the ‘Capture‘ part, which happens on the source database, where we capture the workload that will later be replayed on the target database.

I have broken it into a few easy-to-follow steps.

Step 1:
First we need to verify that the RAT option is installed and working fine. You only need to verify this up to 10gR2, as all later versions come with the feature enabled by default during installation, unless you de-selected it during a customized install.
In the case of 10g you need to check the v$option dynamic view for the RAT parameter, and the parameter ‘pre_11g_enable_capture’ needs to be enabled.

The verification steps are shown below.
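The original screenshots aren’t reproduced here; a minimal sketch of the equivalent checks (10gR2 syntax, hedged):

SQL> select * from v$option where parameter = 'Real Application Testing';

-- 10gR2 only: workload capture must also be switched on explicitly (a restart may be required)
SQL> alter system set pre_11g_enable_capture = TRUE scope=spfile;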

Step 2:
Creating exclusion FILTERS (if required) for the capture.
This is the step where we create filters to exclude system usernames and other schemas like SYSMAN and SYS, wherever you don’t want to capture load.
The original post shows this as a screenshot; a sketch follows below.
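A minimal sketch using DBMS_WORKLOAD_CAPTURE.ADD_FILTER (the filter name and value are illustrative):

BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER (
    fname      => 'exclude_sysman',   -- illustrative filter name
    fattribute => 'USER',
    fvalue     => 'SYSMAN');
END;
/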

Step 3:
Creation of a RAT-specific OS directory.
This is where all CAPTURE files will be saved; it should be created as shown below.
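A minimal sketch (the OS path and directory object name are illustrative):

$ mkdir -p /u01/rat_capture

SQL> create or replace directory RAT_CAPTURE_DIR as '/u01/rat_capture';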

Step 4:
Next we can start the CAPTURE process. This is done using the main RAT package DBMS_WORKLOAD_CAPTURE and its START_CAPTURE procedure. The main parameters to pass are name (the name you want to give the capture) and dir (the directory that will hold all workload files, the one created above).
There are a few parameters I have intentionally not used, e.g. DURATION, as there is a bug in 10gR2 which causes the capture not to stop even after the specified time, forcing a manual stop. So in the example below I capture the load and stop it explicitly.

This being a staging setup, I am taking around 30 minutes of workload, but in real life this could be 10 or 15 minutes of peak hours.
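A minimal sketch of the call, deliberately without the DURATION parameter for the reason above (the capture and directory names are illustrative):

BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE (
    name => 'CAPTURE_STG_30MIN',     -- illustrative capture name
    dir  => 'RAT_CAPTURE_DIR');      -- directory object from step 3
END;
/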

You can monitor the progress using DBA_WORKLOAD_CAPTURES view. See below.
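For example, a quick sketch:

SELECT id, name, status, start_time, duration_secs
FROM   dba_workload_captures
ORDER  BY id;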

You can also pull more details about the ongoing capture activity from the same view.

Step 5:
When we are done capturing load for the desired time, we can stop the capture. In my case, on the RS staging box, I left it running and capturing workload for ~32 minutes.
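Stopping the capture is a single call; a minimal sketch:

execute DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();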

Next, you can verify the contents of the RAT directory. There you will see .rec (recording) files, .wmd files, and a special report (TEXT/HTML format) generated specifically by the CAPTURE process.

Now that we have the BEGIN and END snapshot IDs (the capture’s runtime window), we can generate an AWR report as well, in HTML or text format.

Step 6:
Next we will export the AWR data. This will later be used to generate the comparison report on the REPLAY side. The export creates two more files under the capture directory: wcr_ca.log and wcr_ca.dmp.
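A minimal sketch; the capture_id comes from DBA_WORKLOAD_CAPTURES (the value 1 is illustrative):

execute DBMS_WORKLOAD_CAPTURE.EXPORT_AWR (capture_id => 1);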

The next steps start at the target end and deserve a separate post.
I will soon write about the replay process in my next post. Till then, keep learning!

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: , | Leave a Comment »

 