Tales From A Lazy Fat DBA

Fan of Oracle DB & Performance, PostgreSQL & Cassandra … \,,/


Archive for the ‘Basics’ Category


Connecting Oracle database from PostgreSQL using Public DB Links.

Posted by FatDBA on August 31, 2020

Hi Guys,

This post is a continuation of my last post, where I showed how to connect to an Oracle database from a PostgreSQL instance using foreign data wrappers. This one is about accessing/querying an Oracle database from PostgreSQL using DB Links.

Software used:
Oracle Instant Client (both the BASIC and DEVEL packages are needed)
Link: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

For this method too, you have to install the Oracle Instant Client basic and devel RPMs.


[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
[sudo] password for enterprisedb:
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$

[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ ls -ll /usr/lib/oracle/12.2/client64/lib
total 216568
lrwxrwxrwx. 1 root root        21 Aug 27 22:59 libclntshcore.so -> libclntshcore.so.12.1
-rw-rwxr--. 1 root root   8033199 Jan 26  2017 libclntshcore.so.12.1
lrwxrwxrwx. 1 root root        17 Aug 27 22:59 libclntsh.so -> libclntsh.so.12.1
-rw-rwxr--. 1 root root  71638263 Jan 26  2017 libclntsh.so.12.1
-rw-rwxr--. 1 root root   2981501 Jan 26  2017 libipc1.so
-rw-rwxr--. 1 root root    539065 Jan 26  2017 libmql1.so
-rw-rwxr--. 1 root root   6568149 Jan 26  2017 libnnz12.so
lrwxrwxrwx. 1 root root        15 Aug 27 22:59 libocci.so -> libocci.so.12.1
-rw-rwxr--. 1 root root   2218687 Jan 26  2017 libocci.so.12.1
-rw-rwxr--. 1 root root 124771800 Jan 26  2017 libociei.so
-rw-rwxr--. 1 root root    158543 Jan 26  2017 libocijdbc12.so
-rw-rwxr--. 1 root root    380996 Jan 26  2017 libons.so
-rw-rwxr--. 1 root root    116563 Jan 26  2017 liboramysql12.so
-rw-r--r--. 1 root root   3984814 Jan 26  2017 ojdbc8.jar
-rw-rwxr--. 1 root root    312974 Jan 26  2017 ottclasses.zip
-rw-r--r--. 1 root root     37494 Jan 26  2017 xstreams.jar
[enterprisedb@fatdba ~]$
 


Cool, let’s connect to the PostgreSQL instance and create the DB Link. To create it you need the Oracle username and password you want to connect as, plus the host IP address along with the SID or service name, and you are done. And yes, don’t forget to set LD_LIBRARY_PATH to the location of your Oracle Instant Client libraries.


[enterprisedb@fatdba ~]$ export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ psql -d enterprisedb -U enterprisedb
psql.bin (10.12.20)
Type "help" for help.
enterprisedb=#
enterprisedb=#
enterprisedb=# select version();
                                                    version
---------------------------------------------------------------------------------------------------------------
 EnterpriseDB 10.12.20 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit
(1 row)

enterprisedb=#
enterprisedb=# CREATE DATABASE LINK dixdroid CONNECT TO migr IDENTIFIED BY 'oracle90' USING oci '//10.0.0.130/fatdb';
CREATE DATABASE LINK
enterprisedb=#
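
The link above belongs to the user who created it. Since the title of this post talks about public DB links, note that EDB Advanced Server also accepts the PUBLIC keyword, so that every user of the database can reference the same link. A hedged sketch reusing the connection details from above (the link name dixdroid_pub is just an example):

enterprisedb=# CREATE PUBLIC DATABASE LINK dixdroid_pub CONNECT TO migr IDENTIFIED BY 'oracle90' USING oci '//10.0.0.130/fatdb';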
 


Great, now it’s time to test. Let’s query an Oracle table from the PostgreSQL instance using the DB Link named ‘dixdroid’ that we created earlier.



enterprisedb=# select * from migr.bigtab1@dixdroid;
  id  |    created_date    | lookup_id |            data
------+--------------------+-----------+----------------------------
  320 | 19-MAY-19 02:10:38 |         1 | This is some data for 320
  321 | 19-MAY-18 02:10:38 |         2 | This is some data for 321
  322 | 19-MAY-19 02:10:38 |         1 | This is some data for 322
  323 | 19-MAY-20 02:10:38 |         3 | This is some data for 323
  324 | 19-MAY-18 02:10:38 |         2 | This is some data for 324
  325 | 19-MAY-20 02:10:38 |         3 | This is some data for 325
  326 | 19-MAY-19 02:10:38 |         1 | This is some data for 326
  327 | 19-MAY-18 02:10:38 |         2 | This is some data for 327
  328 | 19-MAY-19 02:10:38 |         1 | This is some data for 328
  329 | 19-MAY-20 02:10:38 |         3 | This is some data for 329
  330 | 19-MAY-18 02:10:38 |         2 | This is some data for 330
  331 | 19-MAY-20 02:10:38 |         3 | This is some data for 331
 

Great, it worked like a charm!

Hope It Helps
Prashant Dixit

Posted in Basics | Tagged: , | Leave a Comment »

Datastax Certified Cassandra Administrator, some tips & more

Posted by FatDBA on August 21, 2020

Hi Guys,

With the sharp rise of NoSQL, many organizations are transitioning from traditional databases to distributed, high-performance databases like Cassandra, which has become one of Apache’s most popular projects. There are multiple NoSQL databases in the market, but few offer the same combination of features: peer-to-peer architecture, high availability and fault tolerance, a column-oriented model, high performance, schema flexibility, tunable consistency, great analytical possibilities, easy scale-up and scale-down, full distribution, and the list goes on.

Cassandra has already proved its mettle and is a great fit for IoT, sensor data, event-based and time-series data, voucher-generation systems and other data models. DataStax provides best-in-class database management software and a wide range of services with 24×7 support to get more from your Cassandra deployment. Alongside come some really cool features and tools, e.g. OpsCenter (GUI), NodeSync (for anti-entropy repairs), great Solr integration, dsetool (similar to nodetool but with more capabilities), sstableloader, the pre-flight check tool, YAML file compare tools, stress tools, extra commands such as dsefs, and many more.

DataStax is a pioneer in this space and has its own certification path/track to prove you have valid credentials to work with the Cassandra database, either as a developer or as an administrator. Now the question comes: where to start? Many of you have asked about my latest credential, ‘Datastax Apache Cassandra 3.x Administrator Associate‘ — how to prepare, how to book the exam and other related questions. So this post covers how to prepare and book the exam, along with a few tips.

I always prefer to go point by point to keep things ordered and easy to digest.

1. Create your account on Datastax Academy.
Link: https://auth.cloud.datastax.com/auth/realms/CloudUsers/login-actions/registration?client_id=absorb&tab_id=lv4-57nRbu4

2. Go to the ‘Catalog’ option to look through the available courses.
You have to choose between the Administrator track and the Developer track (each is a three-course curriculum). I completed the ADMIN path, which has three courses: DS101 (Introduction), DS201 (Foundations) and DS210 (Operations with Apache Cassandra). All of the courses are beautifully designed and contain a large number of demos, presentations, guides, quizzes and a pre-built Ubuntu VM where you can do all the exercises.

The presentations and program cover every major topic and parameter, but if you want to read in more depth, DataStax has its own documentation collection at https://docs.datastax.com/en/landing_page/doc/landing_page/current.html, and the Apache documentation is at https://cassandra.apache.org/doc/latest/

Note: There are a few other specialized courses available within the catalog too, e.g. Kafka connectors, DSE Graph, DSE Analytics, DSE Search, etc.

3. Other learning platforms
Github: https://github.com/datastax
Can be very useful, especially if you are preparing for the developer track.
YouTube: full of great presentations, videos and some valuable workshops and demos.
https://www.youtube.com/user/DataStaxMedia
Twitter: For news (about webinars etc.), press releases and other exciting information.
https://twitter.com/DataStax (@DataStax)

4. All set!
Once you have finished all three courses under the ADMIN track, you are ready for the certification. Go to the ‘Datastax Certification’ widget within the catalog and book your exam by creating a profile on their certification website.
https://certification.mettl.com/datastax/applicant/signup

Currently they are giving out one free exam voucher, issued at the end of the series to workshop participants.

5. Once registered, you have to choose your exam type – Admin or Developer.
Both exams have 60 questions to be completed within 90 minutes; the exam fee (right now) is $145.
Note: It’s a good idea to check your system compatibility before the exam; for more details follow their official guidelines.

So don’t wait: enroll for the courses, grab the chance to take the certification for free and, more importantly, stand out from the crowd. These widely accepted and recognized credentials will help your continued professional development, are an ideal way to gain a greater understanding of your industry, and enhance your knowledge and skills. They also offer excellent chances to network with other Cassandra geeks.

Hope It Helps!
Prashant Dixit

Posted in Basics, Uncategorized | Tagged: | Leave a Comment »

Passed ‘Datastax Apache Cassandra 3.x Administrator Associate’ Certification

Posted by FatDBA on August 21, 2020

Hi Guys,

News to share: today I cleared the ‘Datastax Apache Cassandra 3.x Administrator Associate‘ certification. I have been working with Cassandra for the past few months, and it’s always good to get certified by the market leaders. A good way to stand out from the crowd!

I will soon write a blog post about the preparation and the process of getting certified by Datastax, so stay tuned!

Hope It Helps!
Prashant D

Posted in Basics | Tagged: | Leave a Comment »

Postgres CREATE TABLESPACE failed with Permission denied

Posted by FatDBA on September 2, 2018

Hi Everyone,

This week I was busy preparing a Postgres database for migration purposes and faced a few basic problems while doing some rudimentary operations. Yesterday I was trying to create a tablespace and kept getting ‘Permission denied’ for the defined path/location.
I tried to create the tablespace in different locations but failed every time with the same permissions error, and interestingly the filesystem permissions were fine (the directory is owned by postgres with rwx permissions). This left me confused about where exactly the problem was.

This is what I was trying, and the error I was getting.

elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql/tbs';
ERROR:  could not set permissions on directory "/var/lib/pgsql/tbs": Permission denied
elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql';
ERROR:  could not set permissions on directory "/var/lib/pgsql": Permission denied
elephant=# CREATE TABLESPACE aleph location '/var';
ERROR:  could not set permissions on directory "/var": Permission denied

Then all of a sudden I remembered something similar I had encountered in the past while working on Oracle databases, where an SELinux policy prevented SQL*Plus from connecting to the database. So I decided to turn off SELinux enforcement, and to do that I ran setenforce 0 (or you can use setenforce Permissive).
* The above command switches off SELinux enforcement temporarily, until the machine is rebooted. If you would like to make the change permanent, edit /etc/sysconfig/selinux:
# vi /etc/sysconfig/selinux

And set / update it as follows:
SELINUX=disabled

[root@fatdba ~]# setenforce 0
[root@fatdba ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          enforcing
Policy version:                 26
Policy from config file:        targeted
[root@fatdba ~]#
[root@fatdba ~]#

Now, after changing the SELinux setting, I tried the same step once again and BOOM, it worked!

elephant=# CREATE TABLESPACE aleph location '/var/lib/pgsql/tbs';
CREATE TABLESPACE
elephant=#
elephant=#
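
A less drastic alternative, which I did not test here so treat it as a hedged sketch, is to leave SELinux enforcing and simply relabel the tablespace directory with the PostgreSQL file context:

[root@fatdba ~]# yum install -y policycoreutils-python        # provides the semanage tool
[root@fatdba ~]# semanage fcontext -a -t postgresql_db_t "/var/lib/pgsql/tbs(/.*)?"
[root@fatdba ~]# restorecon -Rv /var/lib/pgsql/tbs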

Hope It Helps
Prashant Dixit

Posted in Basics, troubleshooting | Tagged: | Leave a Comment »

Postgresql Error – could not connect to server, Is the server accepting connections on Unix domain socket “/tmp/.s.PGSQL.5432”?

Posted by FatDBA on August 31, 2018

Hey Mates,

I got this new test box for some POCs related to data migration between Oracle and PostgreSQL, and the very first time I tried to connect with psql on this server I got an error. This was a newly provisioned server with PostgreSQL 8.4 installed.

The error is related to the connection settings and is pasted below.

[root@fatdba ~]# psql

psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

I initialized the cluster and started the service on the machine, and it started smoothly with no errors.

[root@fatdba ~]# service postgresql initdb
Initializing database:
                                                           [  OK  ]
[root@fatdba ~]# service postgresql start
Starting postgresql service:

I tried to connect to the database once again and this time got a different error, now related to ident-based authentication. This gave me the idea that the authentication method was the real problem, so I checked the pg_hba.conf configuration file, changed the method to ‘trust’ for these host connection types and restarted the postgresql service. And that worked!

[root@fatdba data]# psql -U postgres -W
Password for user postgres:
psql: FATAL:  Ident authentication failed for user "postgres"

[root@fatdba data]# vi pg_hba.conf
[root@fatdba data]#
# TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
# IPv4 local connections:
host    all         all         127.0.0.1/32          trust
host    all         all         192.168.145.129/32    trust


[root@fatdba data]# service postgresql restart
Stopping postgresql service:                               [  OK  ]
Starting postgresql service:                               [  OK  ]
[root@fatdba data]#
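
Trust authentication is fine on a throwaway test box, but on anything shared you would normally use md5 (password) authentication instead; a hedged sketch of the same pg_hba.conf entries, which only need a reload rather than a full restart to take effect:

# TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
host    all         all         127.0.0.1/32          md5
host    all         all         192.168.145.129/32    md5

[root@fatdba data]# service postgresql reload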

Posted in Basics | Tagged: | Leave a Comment »

No active extraction maps – Golden Gate STATS command, what’s that ?

Posted by FatDBA on April 8, 2018

Hey Mates,

Not sure if you guys have ever encountered a situation where you get the message “No active extraction maps” from the GoldenGate STATS command against one of your GoldenGate processes.
For GG newcomers, the STATS command displays statistics for one or more Extract, Pump or Replicat groups, and the output covers the DML and DDL operations included in the Oracle GoldenGate configuration.

Now, coming back to the error scenario: let’s see where we got that message and what it means.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 20> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     GGTUNEX1    00:00:03      00:00:08
EXTRACT     RUNNING     GGTUNPU1    00:00:00      00:00:01
REPLICAT    RUNNING     GGTUNRP1    00:00:00      00:00:03

Now, when I tried to check statistics for my processes, I got a message saying there are “No active extraction maps”.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 18> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 22> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

So what does that mean?
The message indicates that nothing has been processed yet by the Extract or Replicat. Once data for the tables mapped in the Extract and Replicat has been processed (captured from or applied to the database), the STATS command produces processing statistics.

Let me make a change to a source table that is part of the replication and see if that brings something into the STATS results.

SQL> update emp set ENAME='KARTIKEY' where EMPNO=8090;
1 row updated.

SQL> commit;
Commit complete.

Let’s try now.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 43> stats GGTUNEX1

Sending STATS request to EXTRACT GGTUNEX1 ...

Start of Statistics at 2018-04-05 04:03:08.

Output to /acfsmounts/acfsggv1/ggdir/dirdat/s1:
Extracting from SCOTT.EMP to SCOTT.EMP:

*** Total statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Latest statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.

Yup, we’ve now got stats for the process after the change.
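
As a side note, STATS accepts a few handy GGSCI options that were not used above; shown here purely as a hedged sketch of the syntax:

GGSCI> stats GGTUNEX1, totalsonly *
GGSCI> stats GGTUNEX1, latest
GGSCI> stats GGTUNEX1, reportrate min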

Hope It Helps
Prashant

Posted in Basics | Tagged: | Leave a Comment »

Some new features of Oracle Database 18c

Posted by FatDBA on February 28, 2018

Hey Everyone,

Today’s post discusses a few of the new features (small but nice, and not in depth) of the all-new Oracle 18c that I have tested.

0. Oracle Database 18c is the first version of the product to follow a yearly release pattern. Yup, that’s correct!

1. Read Only Oracle Home (ROOH)
Yes, finally we have the RO Oracle Homes.

2. Columnar Format Exadata Flash Cache Enhancements
With Oracle Database 18c, non-HCC objects can now be accessed in the Exadata Flash Cache in full Database In-Memory columnar format; in 12.2 this was restricted to HCC objects only.

3. Scalable Sequences
A scalable sequence is particularly efficient when used to generate unordered primary or unique keys for data-ingestion workloads with a high level of concurrency. Scalable sequences significantly reduce sequence and index block contention (a small sketch of the syntax appears after this list).

4. Converting Normal or High Redundancy Disk Groups to Flex Disk Groups without Restricted Mount
You can convert a conventional disk group (a disk group created before Oracle Database 18c) to an Oracle ASM flex disk group without using the restrictive mount (MOUNTED RESTRICTED) option.

5. ALTER SYSTEM CANCEL SQL
Another way to cancel a running SQL statement in a session.
The syntax looks like this:

ALTER SYSTEM CANCEL SQL 'SID, SERIAL[, @INST_ID][, SQL_ID]';
e.g. ALTER SYSTEM CANCEL SQL '448, 98175, @1, 761hchah78addfj';

6. The default value of PARALLEL_THREADS_PER_CPU is finally 1!
PARALLEL_THREADS_PER_CPU describes the number of parallel execution processes or threads that a CPU can handle during parallel execution. Good move! 🙂

7. For users of Exadata and Real Application Clusters (RAC), Oracle Database 18c brings changes that will enable a significant reduction in the amount of undo that needs to be transferred across the interconnect. It achieves this by using RDMA, over the Infiniband connection, to access the undo blocks in the remote instance. This feature combined with a local commit cache significantly improves the throughput of some OLTP workloads when running on top of RAC.

8. Authenticate and authorize Oracle users directly against Microsoft Active Directory.

9. New Oracle Spatial features in Oracle Database 18c include support for distributed transactions, sharding, and an easier-to-use web services admin console.
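
Here is the small sketch promised under point 3. The SCALE clause is the 18c addition; the sequence name and options are only illustrative:

SQL> CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 SCALE EXTEND;
-- SCALE prefixes each generated value with an instance/session-derived offset, spreading
-- inserts across index leaf blocks; EXTEND widens the value instead of consuming MAXVALUE digits.
SQL> SELECT order_seq.NEXTVAL FROM dual;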

 

Hope It Helps
Prashant Dixit

Posted in Basics | Tagged: , | 2 Comments »

All about Oracle 12c Database In-Memory!

Posted by FatDBA on December 27, 2017

Hi Mates,

Today I will discuss the Database In-Memory option of Oracle 12c. I know I am a little late writing about this feature, but I recently implemented it for one of my customers in the Latin America region, which gave me more exposure to understand it better.

So, let’s start with what is most probably my last post of the year!

Recently I got a chance to test and benchmark the performance gains in one of our pre-production environments. The feature applies to tablespaces, tables, materialized views and (sub)partitions (except objects owned by SYS, SYSTEM and SYSAUX). To understand it, I would first like to shed some light on the two ways Oracle can store table data, both on disk and in memory: the conventional row format and the new In-Memory column format.

Row arrangement: the same old traditional method of storing data in row format. This is best for OLTP systems, as queries that need all the columns of a record fetch them quickly.
Column-based arrangement: records are stored in a separate column store. This works well for OLAP systems, where a large set of rows is scanned but only a few columns are needed.

Based on the two arrangements above, it is clear that the row-based method is best for DML while the column-based arrangement is good when selecting a large portion of the data, so each method has its own pros and cons. Starting with Oracle 12.1.0.2 we have the new ‘Database In-Memory’ feature, which uses the best of both approaches: it keeps data in both row and column arrangements in memory, and the optimizer automatically routes each query to the appropriate format depending on the workload (OLTP or analytical processing).

The In-Memory feature uses the IM column store, which is a new occupant of the SGA (the In-Memory Area). The size of this In-Memory Area is controlled by the INMEMORY_SIZE parameter.

The IM column store is populated by background worker processes (e.g. w001, w002, etc.); each worker process fills the In-Memory Compression Units, or IMCUs.

So, enough of the background and internals; let’s jump to the real work and check how this thing works in practice.
First check the minimum compatibility of the database:

COMPATIBLE = 12.1.0.0.0

Let’s first enable the In-Memory column store. The related parameters are given below.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_adg_enabled                 boolean     TRUE
inmemory_clause_default              string
inmemory_expressions_usage           string      ENABLE
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
inmemory_virtual_columns             string      MANUAL
optimizer_inmemory_aware             boolean     TRUE

Before I show the steps to enable it, I would first like to show the database startup details with In-Memory disabled.

SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             771753984 bytes
Database Buffers          402653184 bytes
Redo Buffers                8155136 bytes
Database mounted.
Database opened.
SQL>

Okay, now let’s enable it!
A question might be asked about the criteria for adding objects to the pool. In my opinion the best candidates are very hot data and large segments (not less than 1 MB) that are scanned by analytical queries; you can identify them from sources like Oracle segment statistics, column scan counts, AWR reports, etc.

SQL> alter system set inmemory_size=250m scope=spfile;

System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             822085632 bytes
Database Buffers           83886080 bytes
Redo Buffers                8155136 bytes
In-Memory Area            268435456 bytes
Database mounted.
Database opened.

Now I will try to load something into memory.

SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate 
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                DISABLED

00:14:42 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE                         >>>>>>> 1MB pool is used to store the column formatted data.
64KB POOL                     50331648          0 DONE                         >>>>>>> 64 KB pool is used to store the metadata about its residents.

Now I am moving the table into the In-Memory store.

exec DBMS_INMEMORY.POPULATE(schema_name => 'DIXIT', table_name => 'TEST1');
or using:

00:15:01 SQL> alter table test1 inmemory;

Table altered.


00:17:59 SQL> SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                ENABLED  NONE     AUTO            FOR QUERY LOW     NO DUPLICATE




00:20:17 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE
64KB POOL                     50331648          0 DONE


00:18:25 SQL> select owner, segment_name, populate_status from v$im_segments;

no rows selected

The results above show that no segment has been added to the pool yet. Now let’s query the table and check the results again.

00:20:26 SQL> select count(*) from test1;

  COUNT(*)
----------
   1000000



00:20:48 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 POPULATING
64KB POOL                     50331648          0 POPULATING


Okay, now it's doing something, as the status has changed from DONE to POPULATING. Let's repeat the same command again.

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016   49283072 DONE
64KB POOL                     50331648     458752 DONE

Okay, it's there in the pool now! Let's check what we have inside the In-Memory area.

00:23:56 SQL> select owner, segment_name, populate_status from v$im_segments;

OWNER      SEGMENT_NAME         POPULATE_STAT
---------- -------------------- -------------
DIXIT      TEST1                COMPLETED

To confirm that the table is fully populated, we have to look at v$im_segments_detail, which compares the number of blocks in the In-Memory store with the number of blocks in the TEST1 table.

SELECT m.inst_id, 
       m.blocksinmem, 
       m.datablocks 
FROM   v$im_segments_detail m, 
       user_objects o 
WHERE  m.dataobj = o.object_id 
AND    o.object_name = 'TEST1';

   INST_ID BLOCKSINMEM DATABLOCKS
---------- ----------- ----------
         1        6922       6922

Let's see if there are any changes captured in the execution plan of the query.

SQL> explain plan for select count(*) from test1;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3896847026

-----------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    71   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE             |       |     1 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| TEST1 |  1000K|    71   (2)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

Now we have a new operation in the plan, ‘TABLE ACCESS INMEMORY FULL’, which tells us the object was accessed directly from memory.

You can compare the response times with/without the In-Memory Store by simply disabling In-Memory functionality in your session:

ALTER SESSION SET inmemory_query = ENABLE;
ALTER SESSION SET inmemory_query = DISABLE;

You can remove objects from the pool at any time with the NO INMEMORY clause; a quick sketch follows, and after it a few other options you can use with In-Memory.
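
A hedged sketch of removing the same table from the column store (the reverse of what we did above):

SQL> ALTER TABLE TEST1 NO INMEMORY;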

SQL> ALTER TABLESPACE example INMEMORY;

Enabling the In-Memory attribute on the TEST1 table but excluding the “dest_id” column

SQL> ALTER TABLE TEST1 INMEMORY NO INMEMORY(dest_id);

Enabling the In-Memory option for table TEST1 and setting the priority to CRITICAL (other options are HIGH, MEDIUM and LOW). With this set on an object, it is populated immediately after the database is opened.

SQL> ALTER TABLE TEST1 INMEMORY PRIORITY CRITICAL;

There are a few other options, like compression (objects are compressed during population), joins and scans, which will be covered in a separate post; you can also read the official In-Memory documentation to understand them.

But I would like to discuss a little about how the In-Memory option works in a RAC environment.

When a user queries the In-Memory store in RAC, a serial query can only access the fraction of the data populated on its own node; IMCUs (In-Memory Compression Units) do not travel across the interconnect or through cache fusion, so the remainder of the data is fetched from disk.
Parallel execution helps, as it starts multiple processes and ensures that at least one parallel server process is allocated on each RAC instance. For that we have to set parallel_degree_policy (Auto DOP) to AUTO, which makes the query coordinator IMCU-aware so that it automatically starts parallel server processes on the correct servers.
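
A minimal sketch of that setting (the parameter is standard; the scope and sid are your call):

SQL> ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE=BOTH SID='*';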

Now, after all this discussion, it’s time to share my final words on the subject.
Caching and pooling have been part of Oracle (and of most other databases) for a very long time, so why In-Memory now?
We already have areas like the KEEP/RECYCLE pools, the result cache, etc. to cache data or results.
My answer is that the Oracle In-Memory column store loads objects into memory in a compressed columnar format, which makes scans perform better than on-disk reads and leads to performance boosts. Use it when there is a SQL-related performance issue, you have tried all other tuning methods and each of them has failed to fix the problem, and the In-Memory performance benefit can outweigh the additional Oracle license costs.

Hope It Helps
Prashant Dixit

Posted in Advanced, Basics | Tagged: , | Leave a Comment »

MySQL – How to use LOAD DATA INFILE and INTO OUTFILE

Posted by FatDBA on December 20, 2017

Today I will discuss a useful, SQL-based data export/import method in MySQL: SELECT ... INTO OUTFILE and LOAD DATA INFILE.

Let's first create an export file for the table using SELECT … INTO OUTFILE; here you can specify the location of the export file.

mysql> select * from country into outfile 'countrycreate.sql';
Query OK, 109 rows affected (0.00 sec)

-rw-rw-rw-. 1 mysql mysql 3.6K Dec 20 01:07 countrycreate.sql
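
By default INTO OUTFILE writes tab-separated values, and a relative filename like the one above ends up under the current database's directory (subject to secure_file_priv on newer versions). If you prefer CSV-style output you can spell out the delimiters; a small sketch, with the path being only an example:

mysql> select * from country
    -> into outfile '/var/lib/mysql/dixit/country.csv'
    -> fields terminated by ',' enclosed by '"'
    -> lines terminated by '\n';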

Since SELECT ... INTO OUTFILE does not capture the table definition, you should always make sure you have a copy of the table definition so the file can be restored.

bash-4.1$ mysqldump -u root -p --no-data dixit country > /var/lib/mysql/dixit/countryschemadef.sql
Enter password:

-rw-rw-rw-. 1 mysql mysql 3.6K Dec 20 01:07 countrycreate.sql
-rw-r--r--. 1 mysql mysql 1.6K Dec 20 01:10 countryschemadef.sql

Let's see the contents of this newly created file.

bash-4.1$ more countryschemadef.sql
-- MySQL dump 10.13  Distrib 5.7.20, for Linux (x86_64)
--
-- Host: localhost    Database: dixit
-- ------------------------------------------------------
-- Server version       5.7.20

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Table structure for table `country`
--

DROP TABLE IF EXISTS `country`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `country` (
  `country_id` int(11) DEFAULT NULL,
  `country` text,
  `last_update` text
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2017-12-20  1:10:20

Let's create a new database, recreate the table definition in it, and then load the table data.


bash-4.1$ mysqladmin -u root -p create dixit2
Enter password:

bash-4.1$ mysql -u root -p dixit2 < /var/lib/mysql/dixit/countryschemadef.sql
Enter password:

bash-4.1$ mysql -u root -p dixit2
mysql> load data infile '/var/lib/mysql/dixit/countrycreate.sql' into table country;
Query OK, 109 rows affected (0.01 sec)
Records: 109  Deleted: 0  Skipped: 0  Warnings: 0

mysql>
mysql>
mysql> select count(*) from country;
+----------+
| count(*) |
+----------+
|      109 |
+----------+
1 row in set (0.00 sec)

All set!

Hope It Helps!
Prashant Dixit

Posted in Basics | Tagged: , | Leave a Comment »

MySQL ERROR 1054 (42S22): Unknown column ‘Password’ in ‘field list’ – Version 5.7

Posted by FatDBA on November 27, 2017

mysql> update mysql.user set Password = PASSWORD('mysql') where user ='root';
ERROR 1054 (42S22): Unknown column 'Password' in 'field list'

WHY??
This was working fine in other MySQL instances where I had earlier versions installed, so why not this one? Puzzled and perplexed!
Let me check the version information of this instance.

mysql> show variables like "%version%";
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| innodb_version          | 5.7.20                       |
| protocol_version        | 10                           |
| slave_type_conversions  |                              |
| tls_version             | TLSv1,TLSv1.1                |
| version                 | 5.7.20                       |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
8 rows in set (0.01 sec)

Well, starting with MySQL 5.7 the Password column of the mysql.user table has been removed and replaced with ‘authentication_string’.
So the new syntax for this password reset looks like this:

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql>

mysql> update user set authentication_string=password('mysql') where user='root';
Query OK, 2 rows affected, 1 warning (0.00 sec)
Rows matched: 3 Changed: 2 Warnings: 1
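
Two hedged footnotes to the above: a direct UPDATE against mysql.user only takes effect after a FLUSH PRIVILEGES, and on 5.7 the documented (and cleaner) route is ALTER USER:

mysql> flush privileges;
mysql> alter user 'root'@'localhost' identified by 'mysql';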


Hope That Helps
Prashant Dixit

Posted in Basics | Tagged: | 2 Comments »

 