Tales From A Lazy Fat DBA

Love all databases! – It's all about performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Archive for the ‘Basics’ Category


Let's spread some Chaos – Gremlin Chaos Engineering Practitioner Certification

Posted by FatDBA on June 17, 2021

Chaos Engineering is a disciplined approach to identifying potential failures before they become outages. Its practices focus on deliberately injecting failures into applications, microservices and infrastructure (including databases, storage and networking), and are commonly applied to Kubernetes environments.

Gremlin provides a 'failure-as-a-service' testing platform, a toolset built to make systems more reliable. It turns failure into resilience by offering engineers a fully hosted solution to safely experiment on complex systems and identify weaknesses before they impact customers and cause revenue loss. It can be run against any infrastructure component to expose single points of failure and make the system more highly available and failsafe.

So, this is something you can recommend to any of your customers before they go live: test the infrastructure by generating intentional chaos against the respective zones, services, software components, storage (disk space etc.), databases (clusters, replicas, standbys), applications (Kubernetes etc.) and networks.
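
Gremlin's hosted agents orchestrate these attacks for you, but the shape of a basic CPU-exhaustion experiment can be sketched with plain Linux tooling; a minimal stand-in (not Gremlin's CLI), assuming stress-ng is installed and you watch the target service from another terminal:

# Hypothetical stand-in for a CPU attack: burn 2 cores for 60 seconds and
# print a short metrics summary, then observe how the monitored database
# or service behaved under the induced pressure.
$ stress-ng --cpu 2 --timeout 60s --metrics-brief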

About the exam – this certification tests your knowledge of Chaos Engineering concepts like the Gremlin platform, GameDays, MoD (Master of Disaster) and other similar experiments and techniques.

The exam is free of cost, and if you have prior knowledge you can go take it directly, but it's good to attend their free prep session. There is no time limit, and you have to answer 20 questions in total. The passing score required is 80%, and you have 2 attempts in total to achieve it.

Direct link to certification: https://www.gremlin.com/blog/announcing-the-gremlin-chaos-engineering-practitioner-certificate-program/

Link for free prep session registration: https://www.gremlin.com/webinars/gremlin-certificate-prep-session/

Hope It Helped!
Prashant Dixit

Posted in Basics | 2 Comments »

My favorite 5 SQLcl Features ….

Posted by FatDBA on June 13, 2021

Hi Guys,

Today's post is about a tool I have been using for the last few years now, one I really like when it comes to working on Oracle databases. It is called SQLcl, or SQL Developer Command Line; you can think of it as an advanced version of SQL*Plus, as it comes with a lot of cool and handy features/commands that are not available in the default SQL*Plus command line interface. So, this post is about the 'TOP 5 features of SQLcl which I like the most' ….

For those of you who don't know what SQLcl is: it is a tool developed by Oracle's SQL Developer team, led by Jeff Smith (thanks Jeff for that!), and it makes DBAs' and developers' jobs easier with a range of commands and features that make it very powerful and give it the upper hand over SQL*Plus.

So let's get started with the top 5 features of SQLcl that I like the most.

Note: All of the tests below were performed on the following SQLcl build:

[oracle@canttowin bin]$ ./sql

SQLcl: Release 21.1 Production on Sat Jun 12 23:29:31 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Username? (''?) system
Password? (**********?) ********
Last Successful login time: Sat Jun 12 2021 23:29:39 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release - Production

SQL> version
Oracle SQLDeveloper Command-Line (SQLcl) version: build:

Feature 1: The 'REPEAT' command.
This is a pretty useful command if you want to execute a specific query a number of times and watch the results. It is just like what 'watch' does on a Linux OS.
It simply executes the SQL command and prints its output the requested number of times, at the given interval.

For example, below I have executed a SQL statement 5 times with a gap of 5 seconds.

SQL> select count(*) from v$session where status='ACTIVE';


SQL> repeat 5 5
Running 1 of 5  @ 11:38:25.243 with a delay of 5s

Running 2 of 5  @ 11:38:30.251 with a delay of 5s

Running 3 of 5  @ 11:38:35.254 with a delay of 5s

Running 4 of 5  @ 11:38:40.258 with a delay of 5s

Running 5 of 5  @ 11:38:45.263 with a delay of 5s
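
For comparison, the closest classic equivalent would be wrapping SQL*Plus in the Linux 'watch' command; a rough sketch, assuming a hypothetical script active_sessions.sql holding the query above and OS authentication:

# Re-run the query every 5 seconds, much like 'repeat 5 5' above
# (except watch loops until you interrupt it with Ctrl-C).
$ watch -n 5 'echo @active_sessions.sql | sqlplus -s / as sysdba'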



Feature 2: Quick DDL generation.
With SQLcl you no longer have to run DBMS_METADATA.GET_DDL to get the definition of your table, index or any other object. You can simply use the DDL command, with the syntax DDL <object_name> [<object_type>], and you will have your complete object DDL.

SQL> ddl system.bigtab table

  CREATE TABLE "SYSTEM"."BIGTAB"
   (    "ID" NUMBER,
        "WEIGHT" NUMBER,
        "ADATE" DATE

Feature 3: Collect object information.
There is no longer any need to query dynamic views and run DESC commands to get your table statistics (rows, last-analyzed date, in-memory status, comments and sample size) and table description. You can get all of this with a single command: INFORMATION, or INFO for short.

If you want more detail, such as the histograms on your table's columns, the INFO+ command presents that as well.

SQL> information system.bigtab
         LAST ANALYZED:2021-06-12 23:45:33.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         COMMENTS     :This is a table for testing purposes

 COLUMN_NAME  DATA_TYPE   NULLABLE
 ID           NUMBER      Yes
 WEIGHT       NUMBER      Yes
 ADATE        DATE        Yes


SQL> info+ system.bigtab
         LAST ANALYZED:2021-06-20 13:41:19.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         COMMENTS     :This is a table for testing purposes

 COLUMN_NAME  DATA_TYPE   NULLABLE   LOW_VALUE       HIGH_VALUE      NUM_DISTINCT   HISTOGRAM
 ID           NUMBER      Yes        22              198             1000           FREQUENCY
 WEIGHT       NUMBER      Yes        -2147337077     2147453933      67310          NONE
 ADATE        DATE        Yes        2018.           2021.           66224          NONE
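
Under the hood, this is optimizer metadata you would otherwise have to pull from the data dictionary yourself, for example:

-- Long-hand equivalent of the histogram columns shown by INFO+
SQL> select column_name, num_distinct, histogram
       from dba_tab_col_statistics
      where owner = 'SYSTEM' and table_name = 'BIGTAB';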

Feature 4: CTAS made easy and quick.
CTAS (CREATE TABLE ... AS SELECT) is a very useful and frequently used operation because of its simplicity; with SQLcl you no longer have to type the complete statement to create a new table from an existing one – the CTAS command generates it for you.

SQL> ctas bigtab bigtable666

  CREATE TABLE "BIGTABLE666"
   (    "ID",
        "WEIGHT",
        "ADATE"
   )
  as
select * from BIGTAB
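
Which is, of course, just shorthand for typing the full statement yourself:

-- What the CTAS command generates and runs for you
SQL> create table bigtable666 as select * from bigtab;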

Feature 5: Extended and more descriptive AUTOTRACE.
This is by far one of the best features of SQLcl in my opinion, especially for someone who has to tune and tweak databases every now and then. It provides more advanced details and statistics (37 different stats overall) for any SQL statement you run with AUTOTRACE on. I am a big fan of this feature!

SQL> set autotrace on
Autotrace Enabled
Shows the execution plan as well as statistics of the statement.

SQL> select * from system.bigtab where ID =588;

67 rows selected.

Explain Plan
Plan hash value: 441133017

| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |        |    76 |  2660 |    63   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BIGTAB |    76 |  2660 |    63   (0)| 00:00:01 |

Predicate Information (identified by operation id):

   1 - filter("ID"=588)

   - dynamic statistics used: dynamic sampling (level=2)

               1  CPU used by this session
               1  CPU used when call started
               3  DB time
              47  Requests to/from client
              46  SQL*Net roundtrips to/from client
               4  buffer is not pinned count
            1329  bytes received via SQL*Net from client
           91534  bytes sent via SQL*Net to client
               5  calls to get snapshot scn: kcmgss
              11  calls to kcmgcs
             302  consistent gets
             302  consistent gets from cache
             302  consistent gets pin
             302  consistent gets pin (fastpath)
               2  enqueue releases
               2  enqueue requests
               3  execute count
         2473984  logical read bytes from cache
             293  no work - consistent read gets
              49  non-idle wait count
               3  opened cursors cumulative
               1  opened cursors current
               2  parse count (hard)
               3  parse count (total)
               1  parse time cpu
               2  parse time elapsed
              12  process last non-idle time
               5  recursive calls
               1  recursive cpu usage
             302  session logical reads
               1  sorts (memory)
            2010  sorts (rows)
             293  table scan blocks gotten
           86905  table scan disk non-IMC rows gotten
           86905  table scan rows gotten
               2  table scans (short tables)
              47  user calls
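
These counters come from the session-level statistics Oracle already tracks; SQLcl is essentially formatting data you could query yourself, roughly like this:

-- Approximately the raw data behind the autotrace statistics section
SQL> select sn.name, ms.value
       from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#
      where ms.value > 0
      order by sn.name;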

Hope It Helped!
Prashant Dixit

Posted in Advanced, Basics | 1 Comment »

Oracle Classic EXP/IMP and Data Pump possible with Oracle Instant Clients on Linux, starting from …

Posted by FatDBA on November 25, 2020

Hi Guys,

I have just noticed that a few DBAs and most developers don't know that, starting from recent Oracle Instant Client releases, you can use several of the classic utilities – EXPORT, IMPORT, Data Pump, SQL*Loader, the workload replay clients for Oracle RAT, etc. You only need to download the Instant Client for Linux x86_64 'Tools' package (instantclient-tools) alongside the basic package.
This was previously not possible on systems without a proper/complete Oracle database or client installation (the utilities shipped with full client installations but not with Instant Client). It is very useful for developers who want to take table-level backups using the traditional export/import utilities or the newer Data Pump.

It's quite easy to install too: you just unzip the software, set a few environment variables, and you are all set.
Let's assume you have downloaded and unzipped the package; let's set the environment variables next.

export ORACLE_HOME=/home/oracle/app/oracle/product/12.1.0/client_1/
export PATH=$ORACLE_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
# For Instant Client unzips, the libraries may also need to be on the loader path:
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

Okay, we are all set, let’s try to call the classic export (EXP) utility and see how it goes.

[oracle@orainst2-test-monkey01 ~]$ exp

Export: Release - Production on Wed Nov 25 12:44:24 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.


Great, it worked. Now, let’s try to take a backup.

TABLES=TEST_TABLE1, TEST_TABLE2 FILE=/u01/app/testdb/dbc/backups/testdb_pdtest.dmp

Export: Release - Production on Wed Nov 25 12:44:51 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 250.8 MB
. . exported "TESTUSER"."TEST_TABLE1"                              190.9 MB       4819123 rows
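
If you prefer Data Pump over classic exp, the same tools package ships expdp too; a minimal sketch (the directory object, credentials and connect string here are hypothetical):

-- One-time setup on the database side: a directory object for the dump files
SQL> create directory dump_dir as '/u01/app/testdb/dbc/backups';
SQL> grant read, write on directory dump_dir to testuser;

# Table-level Data Pump export launched from the Instant Client box
$ expdp testuser/password@testdb DIRECTORY=dump_dir DUMPFILE=testdb_pdtest.dmp \
    TABLES=TEST_TABLE1,TEST_TABLE2 LOGFILE=testdb_pdtest.log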

Hope It Helped!
Prashant Dixit

Posted in Basics | Leave a Comment »

Connecting Oracle database from PostgreSQL using Public DB Links.

Posted by FatDBA on August 31, 2020

Hi Guys,

This post is a continuation of my last post, where I showed how to connect to an Oracle database from a PostgreSQL instance using foreign data wrappers. This post is about accessing/querying an Oracle database from PostgreSQL using DB links.

Software used:
Oracle Instant Client (you need both the BASIC and DEVEL packages)
Link: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

For this method too, you have to install the Oracle Instant Client basic and devel packages.

[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-basic-
[sudo] password for enterprisedb:
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-devel-
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$

[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ ls -ll /usr/lib/oracle/12.2/client64/lib
total 216568
lrwxrwxrwx. 1 root root        21 Aug 27 22:59 libclntshcore.so -> libclntshcore.so.12.1
-rw-rwxr--. 1 root root   8033199 Jan 26  2017 libclntshcore.so.12.1
lrwxrwxrwx. 1 root root        17 Aug 27 22:59 libclntsh.so -> libclntsh.so.12.1
-rw-rwxr--. 1 root root  71638263 Jan 26  2017 libclntsh.so.12.1
-rw-rwxr--. 1 root root   2981501 Jan 26  2017 libipc1.so
-rw-rwxr--. 1 root root    539065 Jan 26  2017 libmql1.so
-rw-rwxr--. 1 root root   6568149 Jan 26  2017 libnnz12.so
lrwxrwxrwx. 1 root root        15 Aug 27 22:59 libocci.so -> libocci.so.12.1
-rw-rwxr--. 1 root root   2218687 Jan 26  2017 libocci.so.12.1
-rw-rwxr--. 1 root root 124771800 Jan 26  2017 libociei.so
-rw-rwxr--. 1 root root    158543 Jan 26  2017 libocijdbc12.so
-rw-rwxr--. 1 root root    380996 Jan 26  2017 libons.so
-rw-rwxr--. 1 root root    116563 Jan 26  2017 liboramysql12.so
-rw-r--r--. 1 root root   3984814 Jan 26  2017 ojdbc8.jar
-rw-rwxr--. 1 root root    312974 Jan 26  2017 ottclasses.zip
-rw-r--r--. 1 root root     37494 Jan 26  2017 xstreams.jar
[enterprisedb@fatdba ~]$

Cool, let's connect to the PostgreSQL instance and create the DB link. To create it you need the Oracle username and password you want to connect as, plus the host IP address along with the SID or service name, and you are done. And yes, don't forget to set LD_LIBRARY_PATH to the location of your Oracle Instant Client libraries.

[enterprisedb@fatdba ~]$ export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ psql -d enterprisedb -U enterprisedb
psql.bin (10.12.20)
Type "help" for help.
enterprisedb=# select version();
 EnterpriseDB 10.12.20 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit
(1 row)

enterprisedb=# CREATE DATABASE LINK dixdroid CONNECT TO migr IDENTIFIED BY 'oracle90' USING oci '//';
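
For clarity, the connect string after USING oci normally carries the host, port and service name of the Oracle instance; a hypothetical example:

-- Hypothetical host/port/service; substitute your Oracle instance's details
CREATE DATABASE LINK dixdroid CONNECT TO migr IDENTIFIED BY 'oracle90'
    USING oci '//192.168.1.20:1521/ORCL';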

Great, now time to test. Let's query an Oracle table from the PostgreSQL instance using the DB link named 'dixdroid' that we created earlier.

enterprisedb=# select * from migr.bigtab1@dixdroid;
  id  |    created_date    | lookup_id |            data
  320 | 19-MAY-19 02:10:38 |         1 | This is some data for 320
  321 | 19-MAY-18 02:10:38 |         2 | This is some data for 321
  322 | 19-MAY-19 02:10:38 |         1 | This is some data for 322
  323 | 19-MAY-20 02:10:38 |         3 | This is some data for 323
  324 | 19-MAY-18 02:10:38 |         2 | This is some data for 324
  325 | 19-MAY-20 02:10:38 |         3 | This is some data for 325
  326 | 19-MAY-19 02:10:38 |         1 | This is some data for 326
  327 | 19-MAY-18 02:10:38 |         2 | This is some data for 327
  328 | 19-MAY-19 02:10:38 |         1 | This is some data for 328
  329 | 19-MAY-20 02:10:38 |         3 | This is some data for 329
  330 | 19-MAY-18 02:10:38 |         2 | This is some data for 330
  331 | 19-MAY-20 02:10:38 |         3 | This is some data for 331

Great, it worked like a charm!

Hope It Helps
Prashant Dixit

Posted in Basics | 1 Comment »

Datastax Certified Cassandra Administrator, some tips & more

Posted by FatDBA on August 21, 2020

Hi Guys,

With the sharp rise of NoSQL databases, many organizations are making the transition from traditional databases to distributed, high-performance databases like Cassandra, which has become one of Apache's most popular projects. Though there are multiple NoSQL databases available in the market, few have the same combination of features: peer-to-peer architecture, high availability and fault tolerance, a column-based model, high performance, schema-less design, tunable consistency, great analytical possibilities, easy scale-up and scale-down, full distribution – and the list goes on and on.

Cassandra has already proved its mettle and is magical for IoT, sensor data, event-based and time-series data, voucher generation systems and other data models. DataStax provides best-in-class database management software and a wide range of services with 24×7 support to get more from your Cassandra, along with some really cool features and tools, i.e. OpsCenter (GUI), NodeSync (for anti-entropy repairs), great Solr integration, dsetool (similar to nodetool with more capabilities), sstableloader, the pre-flight check tool, yaml file comparison tools, stress tools, extra commands i.e. dsefs, and many more.

DataStax is a pioneer, and they have their own Cassandra certification track to prove you have valid credentials to work with the Cassandra database, either as a developer or an administrator. Now the question is where to start? In fact, many of you have asked me about my latest credential, 'Datastax Apache Cassandra 3.x Administrator Associate'; I was getting questions like how to prepare, how to book the exam and other related questions. So, this post will cover how to prepare and book the exam, along with a few tips.

As always, I prefer to go point by point to keep things ordered and easy to digest.

1. Create your account on Datastax Academy.
Link: https://auth.cloud.datastax.com/auth/realms/CloudUsers/login-actions/registration?client_id=absorb&tab_id=lv4-57nRbu4

2. Go to the option ‘Catalog’ to lookout for courses available.
You have to choose between the Administrator (3-course curriculum) or Developer (3-course curriculum) track. I completed the ADMIN path, which has three courses: DS101 (Introduction), DS201 (Foundations) and DS210 (Operations with Apache Cassandra). All of the courses are beautifully designed and contain a large number of demos, presentations, guides, quizzes and a pre-built Ubuntu VM where you can do all the exercises.

Though the presentations and program cover every major topic and parameter, if you want to read in greater depth they have their own documentation collection, accessible through https://docs.datastax.com/en/landing_page/doc/landing_page/current.html or https://cassandra.apache.org/doc/latest/

Note: There are few other specialized courses available too within the catalog i.e. Kafka connectors, DSE Graph, DSE Analytics, DSE Search etc.

3. Other learning platforms
Github: https://github.com/datastax
Can be very useful, especially if you are preparing for the developer track.
YouTube: full of great presentations, videos and some valuable workshops and demos.
Twitter: for news (about webinars etc.), press releases and other exciting information.
https://twitter.com/DataStax (@DataStax)

4. All set!
Once you are done with all three courses under the ADMIN track, you are ready for the certification. Go to the 'Datastax Certification' widget within the catalog and book your exam by creating your profile on their certification website.

Currently they are giving out one free exam voucher each, issued at the end of the series to participants of the workshop.

5. Once registered, you have to choose your exam type – Admin or Developer.
Both exams have 60 questions that you have to complete within 90 minutes; the exam fee (right now) is $145.
Note: It's good to check your system compatibility before the exam; for more details follow their official guidelines.

So don't wait: go enroll for the course and grab the chance of a free certification, and more importantly stand out from the crowd. These widely accepted and recognized credentials will help your continued professional development, are an ideal way to gain a greater understanding of your industry, and enhance your knowledge and skills. They also offer excellent chances to network among Cassandra geeks.

Hope It Helps!
Prashant Dixit

Posted in Basics, Uncategorized | Leave a Comment »

Passed ‘Datastax Apache Cassandra 3.x Administrator Associate’ Certification

Posted by FatDBA on August 21, 2020

Hi Guys,

Some news to share: today I cleared the 'Datastax Apache Cassandra 3.x Administrator Associate' certification. I had been working with Cassandra for the past few months, and it's always good to get certified by the market leaders. A good way to stand out from the crowd!

I will soon write a blog post about the preparation and the process of getting certified by Datastax, so stay tuned!

Hope It Helps!
Prashant D

Posted in Basics | Leave a Comment »

Postgres CREATE TABLESPACE failed with Permission denied

Posted by FatDBA on September 2, 2018

Hi Everyone,

This week I was busy preparing a Postgres database for migration purposes and faced a few basic problems while doing some rudimentary operations. Yesterday I was trying to create a tablespace and kept getting 'Permission denied' for the defined path/location.
I tried to create the tablespace in different locations but failed every time with the same permissions error – and interestingly, the permissions were okay (directory owned by postgres with rwx permissions). This left me confused about where exactly the problem was.

This is what I was trying, and the error I was getting:

elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql/tbs';
ERROR:  could not set permissions on directory "/var/lib/pgsql/tbs": Permission denied
elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql';
ERROR:  could not set permissions on directory "/var/lib/pgsql": Permission denied
elephant=# CREATE TABLESPACE aleph location '/var';
ERROR:  could not set permissions on directory "/var": Permission denied

Then all of a sudden I remembered something similar I had encountered in the past while working on Oracle databases, where an SELinux policy prevented SQL*Plus from connecting to the database. So, I decided to turn off SELinux enforcement, and to do it temporarily I ran setenforce 0 (or you can use setenforce Permissive).
* The above command switches off SELinux enforcement only until the machine is rebooted. If you would like to make the change permanent, edit /etc/sysconfig/selinux and set SELINUX=permissive:
# vi /etc/sysconfig/selinux

Here is the temporary change and the resulting SELinux status:

[root@fatdba ~]# setenforce 0
[root@fatdba ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          enforcing
Policy version:                 26
Policy from config file:        targeted
[root@fatdba ~]#
[root@fatdba ~]#

Now, after changing the SELinux setting, I tried the same step once again and BOOM, it worked!

elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql/tbs';
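
As a side note, rather than leaving SELinux permissive for good, a cleaner long-term fix is usually to label just the tablespace directory with the right context; a sketch, assuming the standard postgresql_db_t type and that the semanage/restorecon tools are installed:

# Label the custom tablespace directory so postgres may manage it,
# re-apply the file contexts, then turn enforcement back on.
semanage fcontext -a -t postgresql_db_t "/var/lib/pgsql/tbs(/.*)?"
restorecon -Rv /var/lib/pgsql/tbs
setenforce 1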

Hope It Helps
Prashant Dixit

Posted in Basics, troubleshooting | Leave a Comment »

Postgresql Error – could not connect to server, Is the server accepting connections on Unix domain socket “/tmp/.s.PGSQL.5432”?

Posted by FatDBA on August 31, 2018

Hey Mates,

I got this new test box for some POCs related to data migration between Oracle and PostgreSQL, and the very first time I tried to connect with psql on this server I got an error. It was a newly provisioned server with PostgreSQL 8.4 installed.

The error is related to the connection settings and is pasted below.

[root@fatdba ~]# psql

psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

I initialized the server and started the service on the machine, and it started smoothly with no errors.

[root@fatdba ~]# service postgresql initdb
Initializing database:
                                                           [  OK  ]
[root@fatdba ~]# service postgresql start
Starting postgresql service:

I tried to connect to the database once again and this time got a different error, now related to authentication, which was ident-based. This gave me the idea that the authentication method was what was bothering the connection. To verify, I checked the pg_hba.conf file, changed the method to 'trust' for these host connection types, and restarted the postgresql service. And that worked!

[root@fatdba data]# psql -U postgres -W
Password for user postgres:
psql: FATAL:  Ident authentication failed for user "postgres"

[root@fatdba data]# vi pg_hba.conf
[root@fatdba data]#
# IPv4 local connections:
host    all         all         127.0.0.1/32        trust
# IPv6 local connections:
host    all         all         ::1/128             trust

[root@fatdba data]# service postgresql restart
Stopping postgresql service:                               [  OK  ]
Starting postgresql service:                               [  OK  ]
[root@fatdba data]#
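
One caveat: 'trust' performs no password check at all, so it only belongs on a throwaway test box like this one. For anything shared, a safer minimal entry would use password authentication instead, e.g.:

# pg_hba.conf - ask for an (md5-hashed) password instead of trusting everyone
host    all         all         127.0.0.1/32        md5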

Posted in Basics | Leave a Comment »

No active extraction maps – Golden Gate STATS command, what’s that ?

Posted by FatDBA on April 8, 2018

Hey Mates,

Not sure if you have ever encountered a situation where you get the message "No active extraction maps" back from the GoldenGate STATS command against one of your GoldenGate processes.
For GG newcomers, the STATS command displays statistics for one or more Extract, Pump or Replicat groups; the output covers the DML and DDL operations that are included in the Oracle GoldenGate configuration.

Now, coming back to the scenario. Let's see where we got that message and what it means.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 20> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
EXTRACT     RUNNING     GGTUNEX1    00:00:03      00:00:08
EXTRACT     RUNNING     GGTUNPU1    00:00:00      00:00:01
REPLICAT    RUNNING     GGTUNRP1    00:00:00      00:00:03

Now when I tried to check statistics for my processes, I got a message saying there are "No active extraction maps".

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 18> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.


So what does that mean?
The message indicates that nothing has been processed yet by the Extract or Replicat. Once data for the tables mapped in the Extract and Replicat has been processed (captured from, or applied to, the database), the STATS command produces processing statistics.

Let me make a change to the source table that is part of the replication and see if it brings something into the STATS results.

SQL> update emp set ENAME='KARTIKEY' where EMPNO=8090;
1 row updated.

SQL> commit;
Commit complete.

Let’s try now.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 43> stats GGTUNEX1

Sending STATS request to EXTRACT GGTUNEX1 ...

Start of Statistics at 2018-04-05 04:03:08.

Output to /acfsmounts/acfsggv1/ggdir/dirdat/s1:
Extracting from SCOTT.EMP to SCOTT.EMP:

*** Total statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Latest statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.

Yup, we've got stats for the process now, after the change.
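
As a side note, STATS also accepts options to slice this output; as I recall from the reference guide (worth verifying against your GoldenGate version's docs), TOTALSONLY aggregates the counters per wildcard spec and REPORTRATE changes the units, along these lines:

GGSCI> stats GGTUNEX1, totalsonly *.*, reportrate hr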

Hope It Helps

Posted in Basics | 3 Comments »

Some new features of Oracle Database 18c

Posted by FatDBA on February 28, 2018

Hey Everyone,

Today's post discusses a few of the new features (small but nice) of the all-new Oracle 18c that I have tested – not in depth, just a quick tour.

0. Oracle Database 18c is the first version of the product to follow a yearly release pattern. Yup, that’s correct!

1. Read-Only Oracle Home (ROOH)
Yes, we finally have read-only Oracle Homes.

2. Columnar Format Exadata Flash Cache Enhancements
With Oracle Database 18c, non-HCC objects in the Exadata Flash Cache can now be accessed in full Database In-Memory columnar format. In 12.2 this was restricted to HCC objects only.

3. Scalable Sequences
A scalable sequence is particularly efficient when used to generate unordered primary or unique keys for data-ingestion workloads with a high level of concurrency. Scalable sequences significantly reduce sequence and index block contention (see the sketch at the end of this post).

4. Converting Normal or High Redundancy Disk Groups to Flex Disk Groups without Restricted Mount
You can convert a conventional disk group (one created before Oracle Database 18c) to an Oracle ASM flex disk group without using the restrictive mount (MOUNTED RESTRICTED) option.

5. Another way to kill/cancel a SQL statement in a session.
The statement takes the SID, serial number, an optional @instance id and the SQL ID of the statement to cancel, e.g.:

ALTER SYSTEM CANCEL SQL '448, 98175, @1, 761hchah78addfj';

6. The default value of PARALLEL_THREADS_PER_CPU is finally set to 1!
PARALLEL_THREADS_PER_CPU describes the number of parallel execution processes or threads that a CPU can handle during parallel execution. Good move! 🙂

7. For users of Exadata and Real Application Clusters (RAC), Oracle Database 18c brings changes that enable a significant reduction in the amount of undo that needs to be transferred across the interconnect. It achieves this by using RDMA, over the InfiniBand connection, to access undo blocks in the remote instance. This feature, combined with a local commit cache, significantly improves the throughput of some OLTP workloads running on RAC.

8. 18c can authenticate and authorize Oracle users directly against Microsoft Active Directory.

9. New Oracle Spatial features in Oracle Database 18c include support for distributed transactions and sharding, plus an easier-to-use web services admin console.
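
To close, here is what feature 3 (scalable sequences) looks like in practice; a minimal sketch with made-up table and sequence names:

-- SCALE prefixes each value with an instance/session-derived offset, so
-- concurrent inserts stop competing for the same index leaf block;
-- EXTEND lets the prefixed values grow beyond MAXVALUE's digit count.
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 SCALE EXTEND;

-- Hypothetical use as an unordered surrogate key
INSERT INTO orders (order_id, order_date) VALUES (order_seq.NEXTVAL, SYSDATE);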


Hope It Helps
Prashant Dixit

Posted in Basics | 2 Comments »
