Tales From A Lazy Fat DBA

Fan of Oracle DB & Performance, PostgreSQL, Cassandra & much more … \,,/


PgBackRest: A reliable backup and recovery solution to your PostgreSQL clusters …

Posted by FatDBA on April 8, 2021

Hi Everyone,

Recently, while working for one of my customers, I was asked to propose a reliable backup and recovery solution for their database clusters. The customer was using both EDB and open-source PostgreSQL. The ask was to support all major types of backups, i.e. full, incremental, and differential; the last two were to cover their need to reduce the time and disk space consumed by taking only full backups. After considering all their prerequisites and necessities, I came up with the idea of using pgBackRest, which I had also tested in some of my previous assignments. pgBackRest is an open-source backup tool that creates physical backups with some improvements over the classic pg_basebackup tool.

It comes with a lot of cool features that otherwise aren't possible with pg_basebackup, and a few of them aren't available even in other backup tools: parallel backups, encryption, differential and incremental backups, backup integrity checks, archive expiration policies, local and remote operations, backup resume, etc.

This post is all about this popular backup tool pgBackRest: how to configure it and how to perform backup and restore operations using it. I will be running a few test cases in my personal lab, where I have RHEL and PostgreSQL 12 installed.

I have already installed the tool using the PostgreSQL YUM repository. It's pretty straightforward: do 'yum install pgbackrest' and that's it!

-bash-4.2$ which pgbackrest
/usr/bin/pgbackrest

Let's check the version.

-bash-4.2$ pgbackrest version
pgBackRest 2.32

Now that the tool is installed and working fine, it's time to configure its core config file (pgbackrest.conf). I will first create a new directory to house this core configuration file for the tool.

[root@canttowin ~]# mkdir /etc/pgbackrest
[root@canttowin ~]# vi /etc/pgbackrest/pgbackrest.conf

Let's add global and local database details to the configuration file. Here I am setting a full backup retention of 2 days (repo1-retention-full=2). I am passing only the required set of parameters; there is a huge list of others that you can use and define in the config file.

[root@canttowin ~]# more /etc/pgbackrest/pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2

[openpgsql]
pg1-path=/var/lib/pgsql/12/data/
pg1-port=5432

If you have noticed, I performed all of the operations above with the root account/user, and this should not be the case; ownership should be handed over to the PostgreSQL database owner, which is the 'postgres' user in my case. So, let's fix permissions first before we take our first backup.

[root@canttowin ~]# chmod 0750 /var/lib/pgbackrest
[root@canttowin ~]# chown -R postgres:postgres /var/lib/pgbackrest
[root@canttowin ~]# ls -ll /var/log/pgbackrest/
total 8
-rw-r-----. 1 root root 0 Apr 4 04:23 all-start.log
-rw-r----- 1 root root 185 Mar 27 05:37 all-start.log.1.gz
-rw-r----- 1 postgres postgres 450 Apr 6 00:54 openpgsql-stanza-create.log
[root@canttowin ~]#
[root@canttowin ~]# chown -R postgres:postgres /var/log/pgbackrest
[root@canttowin ~]#
[root@canttowin ~]# ls -ll /var/log/pgbackrest/
total 8
-rw-r-----. 1 postgres postgres 0 Apr 4 04:23 all-start.log
-rw-r----- 1 postgres postgres 185 Mar 27 05:37 all-start.log.1.gz
-rw-r----- 1 postgres postgres 450 Apr 6 00:54 openpgsql-stanza-create.log

All set with the permissions. Next is to set a few parameters in the postgresql.conf file so that pgBackRest handles the WAL segments, pushing them to the archive immediately.

[postgres@canttowin data]$
[postgres@canttowin data]$ more /var/lib/pgsql/12/data/postgresql.conf |grep archive
archive_mode = on
archive_command = 'pgbackrest --stanza=openpgsql archive-push %p'

The above changes to the database configuration require a restart of the database, so let's do it.

[postgres@canttowin bin]$ ./pg_ctl -D /var/lib/pgsql/12/data stop
waiting for server to shut down…. done
server stopped
[postgres@canttowin bin]$ ./pg_ctl -D /var/lib/pgsql/12/data start
waiting for server to start….2021-04-06 01:03:45.837 EDT [28770] LOG: starting PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
2021-04-06 01:03:45.838 EDT [28770] LOG: listening on IPv6 address "::1", port 5432
2021-04-06 01:03:45.838 EDT [28770] LOG: listening on IPv4 address "127.0.0.1", port 5432
2021-04-06 01:03:45.861 EDT [28770] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-04-06 01:03:45.911 EDT [28770] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2021-04-06 01:03:45.983 EDT [28770] LOG: redirecting log output to logging collector process
2021-04-06 01:03:45.983 EDT [28770] HINT: Future log output will appear in directory "log".
done
server started

Next is to create the 'STANZA'. A stanza defines the backup configuration for a specific PostgreSQL database cluster.
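Since a stanza maps one-to-one to a cluster, backing up a second cluster from the same host would just mean adding another stanza section to pgbackrest.conf. The second section below is purely hypothetical, for illustration only:

```
[openpgsql]
pg1-path=/var/lib/pgsql/12/data/
pg1-port=5432

# hypothetical second cluster on the same host
[reporting]
pg1-path=/var/lib/pgsql/12/data_reporting/
pg1-port=5433
```

Each stanza name is then passed via --stanza to the backup, restore, and info commands.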

[postgres@canttowin ~]$ pgbackrest stanza-create --stanza=openpgsql --log-level-console=info
2021-04-06 00:54:31.731 P00 INFO: stanza-create command begin 2.32: --exec-id=24839-da2916aa --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=openpgsql
2021-04-06 00:54:32.361 P00 INFO: stanza-create for stanza 'openpgsql' on repo1
2021-04-06 00:54:32.400 P00 INFO: stanza-create command end: completed successfully (672ms)

Next, we will verify that everything is okay. The 'check' command inspects the cluster and validates archive_command and other related settings; if there is no error, it's all good.

[postgres@canttowin bin]$ pgbackrest --stanza=openpgsql check --log-level-console=info
2021-04-06 01:07:18.941 P00 INFO: check command begin 2.32: --exec-id=30501-dbf76c75 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=openpgsql
2021-04-06 01:07:19.553 P00 INFO: check repo1 configuration (primary)
2021-04-06 01:07:19.778 P00 INFO: check repo1 archive for WAL (primary)
2021-04-06 01:07:20.196 P00 INFO: WAL segment 000000010000000000000057 successfully archived to '/var/lib/pgbackrest/archive/openpgsql/12-1/0000000100000000/000000010000000000000057-dd44b724c7a9e257512f5c9d3ecf5d87f7ae9f67.gz' on repo1
2021-04-06 01:07:20.197 P00 INFO: check command end: completed successfully (1258ms)

All good. Time to take our first backup. We use the 'type' argument with values full, incr, or diff for the three types of backups. Since this is our first backup, if you try a diff or incr backup, it will still fall back to a full backup, as both require a base backup to work from.
Below are the runtime logs. I have used the option 'log-level-console=info', which prints informational messages, warnings, and errors; other possible values for this parameter are off, error, warn, detail, debug, and trace.

[postgres@canttowin bin]$ pgbackrest --stanza=openpgsql --type=full backup --log-level-console=info
2021-04-06 01:07:49.917 P00 INFO: backup command begin 2.32: --exec-id=30602-14142f51 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql --type=full
2021-04-06 01:07:50.646 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the next regular checkpoint completes
2021-04-06 01:07:51.362 P00 INFO: backup start archive = 000000010000000000000059, lsn = 0/59000060
2021-04-06 01:07:53.028 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/16415 (13.5MB, 22%) checksum d8deb3703748d22554be2fb29c0ed105bab9658c
2021-04-06 01:07:53.782 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/16426 (5MB, 30%) checksum 29a07de6e53a110380ef984d3effca334a07d6e6
2021-04-06 01:07:54.176 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/16423 (2.2MB, 33%) checksum 5184ac361b2bef0df25a34e91636a085fc526930
2021-04-06 01:07:54.222 P01 INFO: backup file /var/lib/pgsql/12/data/base/16385/1255 (632KB, 34%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 01:07:54.334 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/1255 (632KB, 35%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 01:07:54.434 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/1255 (632KB, 36%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
….
……
2021-04-06 01:08:05.364 P01 INFO: backup file /var/lib/pgsql/12/data/PG_VERSION (3B, 100%) checksum ad552e6dc057d1d825bf49df79d6b98eba846ebe
2021-04-06 01:08:05.369 P01 INFO: backup file /var/lib/pgsql/12/data/global/6100 (0B, 100%)
2021-04-06 01:08:05.372 P01 INFO: backup file /var/lib/pgsql/12/data/global/6000 (0B, 100%)
2021-04-06 01:08:05.376 P01 INFO: backup file /var/lib/pgsql/12/data/global/4185 (0B, 100%)
2021-04-06 01:08:05.379 P01 INFO: backup file /var/lib/pgsql/12/data/global/4183 (0B, 100%)
2021-04-06 01:08:05.390 P01 INFO: backup file /var/lib/pgsql/12/data/global/4181 (0B, 100%)
….
…..
2021-04-06 01:08:06.735 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14040 (0B, 100%)
2021-04-06 01:08:06.738 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14035 (0B, 100%)
2021-04-06 01:08:06.743 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14030 (0B, 100%)
2021-04-06 01:08:06.847 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14025 (0B, 100%)
2021-04-06 01:08:06.848 P00 INFO: full backup size = 61MB
2021-04-06 01:08:06.848 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 01:08:07.068 P00 INFO: backup stop archive = 000000010000000000000059, lsn = 0/59000170
2021-04-06 01:08:07.107 P00 INFO: check archive for segment(s) 000000010000000000000059:000000010000000000000059
2021-04-06 01:08:07.354 P00 INFO: new backup label = 20210406-010750F
2021-04-06 01:08:07.489 P00 INFO: backup command end: completed successfully (17575ms)
2021-04-06 01:08:07.489 P00 INFO: expire command begin 2.32: --exec-id=30602-14142f51 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql
2021-04-06 01:08:07.500 P00 INFO: expire command end: completed successfully (11ms)
[postgres@canttowin bin]$

So, our first backup is done. Now let's check its details (size, timings, WAL details, etc.).

[postgres@canttowin bin]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 000000010000000000000056/000000010000000000000059

        full backup: 20210406-010650F
            timestamp start/stop: 2021-04-06 01:06:50 / 2021-04-06 01:07:12
            wal start/stop: 000000010000000000000056 / 000000010000000000000056
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

Now that we have our first full backup ready, let's take a differential backup.

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --type=diff --log-level-console=info backup
2021-04-06 14:40:34.145 P00 INFO: backup command begin 2.32: --exec-id=54680-0dd25993 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql --type=diff
2021-04-06 14:40:34.892 P00 INFO: last backup label = 20210406-143757F, version = 2.32
2021-04-06 14:40:34.892 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the next regular checkpoint completes
2021-04-06 14:40:35.405 P00 INFO: backup start archive = 00000001000000000000005F, lsn = 0/5F000028
2021-04-06 14:40:36.252 P01 INFO: backup file /var/lib/pgsql/12/data/global/pg_control (8KB, 99%) checksum 962d11b5c25154c5c8141095be417a7f5d699419
2021-04-06 14:40:36.354 P01 INFO: backup file /var/lib/pgsql/12/data/pg_logical/replorigin_checkpoint (8B, 100%) checksum 347fc8f2df71bd4436e38bd1516ccd7ea0d46532
2021-04-06 14:40:36.355 P00 INFO: diff backup size = 8KB
2021-04-06 14:40:36.355 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 14:40:36.568 P00 INFO: backup stop archive = 00000001000000000000005F, lsn = 0/5F000100
2021-04-06 14:40:36.573 P00 INFO: check archive for segment(s) 00000001000000000000005F:00000001000000000000005F
2021-04-06 14:40:36.615 P00 INFO: new backup label = 20210406-143757F_20210406-144034D
2021-04-06 14:40:36.672 P00 INFO: backup command end: completed successfully (2528ms)
2021-04-06 14:40:36.672 P00 INFO: expire command begin 2.32: --exec-id=54680-0dd25993 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql
2021-04-06 14:40:36.678 P00 INFO: expire command end: completed successfully (6ms)

[postgres@canttowin ~]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 00000001000000000000005B/00000001000000000000005F

        full backup: 20210406-143652F
            timestamp start/stop: 2021-04-06 14:36:52 / 2021-04-06 14:37:10
            wal start/stop: 00000001000000000000005B / 00000001000000000000005B
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

        diff backup: 20210406-143757F_20210406-144034D
            timestamp start/stop: 2021-04-06 14:40:34 / 2021-04-06 14:40:36
            wal start/stop: 00000001000000000000005F / 00000001000000000000005F
            database size: 61MB, database backup size: 8.3KB
            repo1: backup set size: 8.0MB, backup size: 431B
            backup reference list: 20210406-143757F

The 'info' command output can also be printed in JSON format, as shown below.

[postgres@canttowin ~]$ pgbackrest info --output=json
[{"archive":[{"database":{"id":1,"repo-key":1},"id":"12-1","max":"00000001000000000000005F","min":"00000001000000000000005B"}],"backup":[{"archive":{"start":"00000001000000000000005B","stop":"00000001000000000000005B"},"backrest":{"format":5,"version":"2.32"},"database":{"id":1,"repo-key":1},"info":{"delta":64047301,"repository":{"delta":8380156,"size":8380156},"size":64047301},"label":"20210406-143652F","prior":null,"reference":null,"timestamp":{"start":1617734212,"stop":1617734230},"type":"full"},{"archive":{"start":"00000001000000000000005D","stop":"00000001000000000000005D"},"backrest":{"format":5,"version":"2.32"},"database":{"id":1,"repo-key":1},"info":{"delta":64047301,"repository":{"delta":8380155,"size":8380155},"size":64047301},"label":"20210406-143757F","prior":null,"reference":null,"timestamp":{"start":1617734277,"stop":1617734285},"type":"full"},{"archive":{"start":"00000001000000000000005F","stop":"00000001000000000000005F"},"backrest":{"format":5,"version":"2.32"},"database":{"id":1,"repo-key":1},"info":{"delta":8459,"repository":{"delta":431,"size":8380156},"size":64047301},"label":"20210406-143757F_20210406-144034D","prior":"20210406-143757F","reference":["20210406-143757F"],"timestamp":{"start":1617734434,"stop":1617734436},"type":"diff"}],"cipher":"none","db":[{"id":1,"repo-key":1,"system-id":6941966298907810297,"version":"12"}],"name":"openpgsql","repo":[{"cipher":"none","key":1,"status":{"code":0,"message":"ok"}}],"status":{"code":0,"lock":{"backup":{"held":false}},"message":"ok"}}][postgres@canttowin ~]$
[postgres@canttowin ~]$
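The JSON output is handy for scripting around pgBackRest. As a small illustration (not from the original session, and assuming python3 is available on the host), the backup labels and types can be pulled out of it like this:

```shell
# Extract each backup's label and type from 'pgbackrest info --output=json'.
# The JSON is an array of stanzas, each carrying a "backup" array.
pgbackrest info --output=json |
python3 -c '
import json, sys
for stanza in json.load(sys.stdin):
    for b in stanza["backup"]:
        print(b["label"], b["type"])
'
```

The same idea works for pulling repository sizes or WAL ranges into monitoring scripts.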

Next comes the incremental backup, let's do it!

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --type=incr --log-level-console=info backup
2021-04-06 14:43:26.193 P00 INFO: backup command begin 2.32: --exec-id=55204-d310aa59 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql --type=incr
2021-04-06 14:43:26.976 P00 INFO: last backup label = 20210406-143757F_20210406-144034D, version = 2.32
2021-04-06 14:43:26.976 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the next regular checkpoint completes
2021-04-06 14:43:27.495 P00 INFO: backup start archive = 000000010000000000000061, lsn = 0/61000028
2021-04-06 14:43:28.266 P01 INFO: backup file /var/lib/pgsql/12/data/global/pg_control (8KB, 99%) checksum 92143d43c90ed770f99f722d734bec62d9413d2a
2021-04-06 14:43:28.369 P01 INFO: backup file /var/lib/pgsql/12/data/pg_logical/replorigin_checkpoint (8B, 100%) checksum 347fc8f2df71bd4436e38bd1516ccd7ea0d46532
2021-04-06 14:43:28.369 P00 INFO: incr backup size = 8KB
2021-04-06 14:43:28.369 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 14:43:28.872 P00 INFO: backup stop archive = 000000010000000000000061, lsn = 0/61000100
2021-04-06 14:43:28.874 P00 INFO: check archive for segment(s) 000000010000000000000061:000000010000000000000061
2021-04-06 14:43:28.915 P00 INFO: new backup label = 20210406-143757F_20210406-144326I
2021-04-06 14:43:28.977 P00 INFO: backup command end: completed successfully (2785ms)
2021-04-06 14:43:28.977 P00 INFO: expire command begin 2.32: --exec-id=55204-d310aa59 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql
2021-04-06 14:43:28.981 P00 INFO: expire command end: completed successfully (4ms)
[postgres@canttowin ~]$

[postgres@canttowin ~]$

[postgres@canttowin ~]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 00000001000000000000005B/000000010000000000000061

        full backup: 20210406-143652F
            timestamp start/stop: 2021-04-06 14:36:52 / 2021-04-06 14:37:10
            wal start/stop: 00000001000000000000005B / 00000001000000000000005B
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

        diff backup: 20210406-143757F_20210406-144034D
            timestamp start/stop: 2021-04-06 14:40:34 / 2021-04-06 14:40:36
            wal start/stop: 00000001000000000000005F / 00000001000000000000005F
            database size: 61MB, database backup size: 8.3KB
            repo1: backup set size: 8.0MB, backup size: 431B
            backup reference list: 20210406-143757F

        incr backup: 20210406-143757F_20210406-144326I
            timestamp start/stop: 2021-04-06 14:43:26 / 2021-04-06 14:43:28
            wal start/stop: 000000010000000000000061 / 000000010000000000000061
            database size: 61MB, database backup size: 8.3KB
            repo1: backup set size: 8.0MB, backup size: 430B
            backup reference list: 20210406-143757F

So, that's how you can take all three types of backups using the tool. If you want to schedule them, you can use cron and add entries like the ones below.

[postgres@canttowin ~]$ crontab -l
#m h   dom mon dow   command
30 06  *   *   0     pgbackrest --type=full --stanza=openpgsql backup
30 06  *   *   1-6   pgbackrest --type=diff --stanza=openpgsql backup

There are some other cool options that you can use with your backup command directly, or even mention in the configuration file. There is a long list of parameters available; refer to the pgBackRest documentation to know more about them. A few of the most useful ones are discussed below.

start-fast: Force a checkpoint so the backup starts quickly.
compress: Use file compression.
compress-level: Set the compression level.
repo1-retention-diff: Set the differential backup retention.
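For example, rather than passing these on every command line, they can be defined once in the [global] section of pgbackrest.conf (the values below are only illustrative, not from my lab config):

```
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
repo1-retention-diff=2
start-fast=y
compress-level=6
```

Command-line options override whatever is in the config file, so one-off runs can still deviate from these defaults.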

Now, let's create a recovery scenario. I am going to delete the entire DATA directory from the PG home and restore it using the backups that we have. As this is a brand-new cluster, let me first create some sample data.

dixit=#
dixit=# CREATE TABLE COMPANY(
dixit(# ID INT PRIMARY KEY NOT NULL,
dixit(# NAME TEXT NOT NULL,
dixit(# AGE INT NOT NULL,
dixit(# ADDRESS CHAR(50),
dixit(# SALARY REAL,
dixit(# JOIN_DATE DATE
dixit(# );
CREATE TABLE
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,JOIN_DATE) VALUES (1, 'Paul', 32, 'California', 20000.00,'2001-07-13');
INSERT 0 1
dixit=#
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,JOIN_DATE) VALUES (2, 'Allen', 25, 'Texas', '2007-12-13');
INSERT 0 1
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,JOIN_DATE) VALUES (3, 'Teddy', 23, 'Norway', 20000.00, DEFAULT );
INSERT 0 1
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,JOIN_DATE) VALUES (4, 'Mark', 25, 'Rich-Mond ', 65000.00, '2007-12-13' ), (5, 'David', 27, 'Texas', 85000.00, '2007-12-13');
INSERT 0 2
dixit=#
dixit=# select * from COMPANY;
 id | name  | age |                      address                       | salary | join_date
----+-------+-----+----------------------------------------------------+--------+------------
  1 | Paul  |  32 | California                                         |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                                              |        | 2007-12-13
  3 | Teddy |  23 | Norway                                             |  20000 |
  4 | Mark  |  25 | Rich-Mond                                          |  65000 | 2007-12-13
  5 | David |  27 | Texas                                              |  85000 | 2007-12-13
(5 rows)

Now that we have made the above changes, and our full database backup was taken before them, we need to take an incremental backup to capture the new changes.

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --type=incr backup --log-level-console=info
2021-04-06 23:12:18.008 P00 INFO: backup command begin 2.32: --exec-id=80088-57a7eed8 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-diff=2 --repo1-retention-full=1 --stanza=openpgsql --start-fast --type=incr
2021-04-06 23:12:18.744 P00 INFO: last backup label = 20210406-225743F_20210406-231110I, version = 2.32
2021-04-06 23:12:18.744 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the requested immediate checkpoint completes
2021-04-06 23:12:19.256 P00 INFO: backup start archive = 00000001000000000000006C, lsn = 0/6C000028
2021-04-06 23:12:20.245 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/1249 (440KB, 90%) checksum b85efa460cab148bf9d7db5a3e78dba71cc5b0b2
2021-04-06 23:12:20.247 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/2610 (32KB, 96%) checksum c6331e9df78c639a6b04aed46ecc96bd09f170f6
2021-04-06 23:12:20.250 P01 INFO: backup file /var/lib/pgsql/12/data/global/pg_control (8KB, 98%) checksum e75e69d389d82b2bc9bee88aea6353d3d889c28e
2021-04-06 23:12:20.252 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/2606 (8KB, 99%) checksum 59284824f0a0cd49006d5c220941248b13c2b286
2021-04-06 23:12:20.355 P01 INFO: backup file /var/lib/pgsql/12/data/pg_logical/replorigin_checkpoint (8B, 100%) checksum 347fc8f2df71bd4436e38bd1516ccd7ea0d46532
2021-04-06 23:12:20.355 P00 INFO: incr backup size = 488KB
2021-04-06 23:12:20.355 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 23:12:20.558 P00 INFO: backup stop archive = 00000001000000000000006C, lsn = 0/6C000138
2021-04-06 23:12:20.561 P00 INFO: check archive for segment(s) 00000001000000000000006C:00000001000000000000006C
2021-04-06 23:12:20.591 P00 INFO: new backup label = 20210406-225743F_20210406-231218I
2021-04-06 23:12:20.643 P00 INFO: backup command end: completed successfully (2636ms)
2021-04-06 23:12:20.643 P00 INFO: expire command begin 2.32: --exec-id=80088-57a7eed8 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-diff=2 --repo1-retention-full=1 --stanza=openpgsql
2021-04-06 23:12:20.649 P00 INFO: expire command end: completed successfully (6ms)
[postgres@canttowin ~]$

[postgres@canttowin ~]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 000000010000000000000068/00000001000000000000006C

        full backup: 20210406-225743F
            timestamp start/stop: 2021-04-06 22:57:43 / 2021-04-06 22:58:01
            wal start/stop: 000000010000000000000068 / 000000010000000000000068
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

        incr backup: 20210406-225743F_20210406-231110I
            timestamp start/stop: 2021-04-06 23:11:10 / 2021-04-06 23:11:12
            wal start/stop: 00000001000000000000006A / 00000001000000000000006A
            database size: 61.2MB, database backup size: 2.4MB
            repo1: backup set size: 8.0MB, backup size: 239.5KB
            backup reference list: 20210406-225743F

To know more about any particular backup, we can use the 'set' option, passing the backup name to the 'info' command, just like below.

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --set=20210406-225743F_20210406-231110I info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 000000010000000000000068/00000001000000000000006C

        incr backup: 20210406-225743F_20210406-231110I
            timestamp start/stop: 2021-04-06 23:11:10 / 2021-04-06 23:11:12
            wal start/stop: 00000001000000000000006A / 00000001000000000000006A
            database size: 61.2MB, database backup size: 2.4MB
            repo1: backup set size: 8.0MB, backup size: 239.5KB
            backup reference list: 20210406-225743F
            database list: dixit (16384), kartikey (16385), postgres (14188)

I have removed (did rm -rf *) all files that exist in the PG_HOME/base directory; let's restore.

[postgres@canttowin data]$ pgbackrest --stanza=openpgsql --db-include=dixit --type=immediate --target-action=promote restore --log-level-console=detail
2021-04-06 23:19:12.641 P00 INFO: restore command begin 2.32: --db-include=dixit --exec-id=82229-9187cb59 --log-level-console=detail --pg1-path=/var/lib/pgsql/12/data --repo1-path=/var/lib/pgbackrest --stanza=openpgsql --target-action=promote --type=immediate
2021-04-06 23:19:12.676 P00 INFO: restore backup set 20210406-225743F_20210406-231218I
2021-04-06 23:19:12.677 P00 DETAIL: databases found for selective restore (1, 14187, 14188, 16384, 16385)
2021-04-06 23:19:12.677 P00 DETAIL: check '/var/lib/pgsql/12/data' exists
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base'
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/1'
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/14187'
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/14188'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/16384'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/16385'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/global'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/log'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_commit_ts'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_dynshmem'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_logical'
2021-04-06 23:19:12.683 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_logical/mappings'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_logical/snapshots'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_multixact'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_multixact/members'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_multixact/offsets'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_notify'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_replslot'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_serial'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_snapshots'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_stat'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_stat_tmp'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_subtrans'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_tblspc'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_twophase'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_wal'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_wal/archive_status'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_xact'
2021-04-06 23:19:12.879 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/16415 (13.5MB, 22%) checksum d8deb3703748d22554be2fb29c0ed105bab9658c
2021-04-06 23:19:12.957 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/16426 (5MB, 30%) checksum 29a07de6e53a110380ef984d3effca334a07d6e6
2021-04-06 23:19:12.999 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/16423 (2.2MB, 33%) checksum 5184ac361b2bef0df25a34e91636a085fc526930
2021-04-06 23:19:13.000 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/1255 (632KB, 34%)
2021-04-06 23:19:13.057 P01 INFO: restore file /var/lib/pgsql/12/data/base/16384/1255 (632KB, 35%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 23:19:13.065 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/1255 (632KB, 36%) checksum fc3c70ab83b8c87e056594f20b2186689d3c4678
2021-04-06 23:19:13.101 P01 INFO: restore file /var/lib/pgsql/12/data/base/14187/1255 (632KB, 37%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 23:19:13.118 P01 INFO: restore file /var/lib/pgsql/12/data/base/1/1255 (632KB, 38%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 23:19:13.119 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/2838 (456KB, 39%)
2021-04-06 23:19:13.127 P01 INFO: restore file /var/lib/pgsql/12/data/base/16384/2838 (456KB, 40%) checksum c41dbf11801f153c9bd0493eb6deadd1a3f22333
2021-04-06 23:19:13.133 P01 INFO: restore file /var/lib/pgsql/12/data/base/16384/2608 (456KB, 41%) checksum 9de1966f80ac1c0bfa530fa3379e55bfea5936e0

…..
2021-04-06 23:19:14.941 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14043_fsm (24KB, 78%)
2021-04-06 23:19:14.942 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14038_fsm (24KB, 78%)
2021-04-06 23:19:14.943 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14033_fsm (24KB, 78%)
2021-04-06 23:19:14.943 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14028_fsm (24KB, 78%)
2021-04-06 23:19:14.944 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14023_fsm (24KB, 78%)
….
……
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_stat_tmp'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_subtrans'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_tblspc'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_twophase'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_wal'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_wal/archive_status'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_xact'
2021-04-06 23:19:17.879 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/global'
2021-04-06 23:19:17.883 P00 INFO: restore command end: completed successfully (5243ms)
[postgres@canttowin data]$

Perfect, the restore is completed. Let’s start the database cluster.

[postgres@canttowin data]$ cd /usr/pgsql-12/bin/
[postgres@canttowin bin]$
[postgres@canttowin bin]$ ./pg_ctl -D /var/lib/pgsql/12/data start
waiting for server to start….2021-04-06 23:19:55.212 EDT [82343] LOG: starting PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
2021-04-06 23:19:55.212 EDT [82343] LOG: listening on IPv6 address "::1", port 5432
2021-04-06 23:19:55.212 EDT [82343] LOG: listening on IPv4 address "127.0.0.1", port 5432
2021-04-06 23:19:55.213 EDT [82343] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-04-06 23:19:55.216 EDT [82343] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2021-04-06 23:19:55.226 EDT [82343] LOG: redirecting log output to logging collector process
2021-04-06 23:19:55.226 EDT [82343] HINT: Future log output will appear in directory "log".
done
server started

Now let’s connect to the database and check whether the records we inserted earlier are still there.

[postgres@canttowin ~]$ psql -p 5432
psql (12.6.7)
Type "help" for help.

postgres=# SELECT datname FROM pg_database WHERE datistemplate = false;
 datname
----------
 postgres
 dixit
 kartikey
(3 rows)

postgres=# \c dixit
You are now connected to database "dixit" as user "postgres".

dixit=# \d
           List of relations
 Schema |    Name    | Type  |  Owner
--------+------------+-------+----------
 public | company    | table | postgres
 public | department | table | postgres
(2 rows)

dixit=# select * from company;
 id | name  | age |                      address                       | salary | join_date
----+-------+-----+----------------------------------------------------+--------+------------
  1 | Paul  |  32 | California                                         |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                                              |        | 2007-12-13
  3 | Teddy |  23 | Norway                                             |  20000 |
  4 | Mark  |  25 | Rich-Mond                                          |  65000 | 2007-12-13
  5 | David |  27 | Texas                                              |  85000 | 2007-12-13
(5 rows)

dixit=#
dixit=#

Perfect! They are all there.

Similarly, you can perform point-in-time recovery (PITR) restores, and even back up and restore a specific database.
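For reference, a point-in-time restore with pgBackRest typically looks like the sketch below. The stanza name 'demo' and the target timestamp are hypothetical; adjust them to your own setup.

```shell
# Stop the cluster before restoring (paths and stanza name are illustrative assumptions)
/usr/pgsql-12/bin/pg_ctl -D /var/lib/pgsql/12/data stop

# Restore up to a specific point in time; --type=time together with --target
# tells pgBackRest where recovery should stop
pgbackrest --stanza=demo --delta --type=time --target="2021-04-06 22:00:00" restore

# A single database can also be picked out of a backup with --db-include
pgbackrest --stanza=demo --delta --db-include=dixit restore
```

The --delta option reuses files already present in the data directory, which usually makes repeated restores much faster.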

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

How to register remote PEM agents to the PEM Server ?

Posted by FatDBA on April 3, 2021

Hi Guys,

During that quiet period when I was away from blogging, I worked on a lot of stuff, hence lots of content to share 🙂 …. So here goes another post. This one is about registering PEM agents with the PEM server, since each PEM agent must be registered with the PEM server.

I already have a PEM server configured (steps for configuring the PEM server), and I have a new EDB AS 12 standby server which I would like to add to the PEM monitoring console. Let’s get started!

192.168.20.128: PEM Server Host (canttowin.ontadomain)
192.168.20.129: Standby host (canttowinsec.quebecdomain)

I have already installed the PEM agent (edb-pem-agent-8.0.1-1.rhel7.x86_64) on this remote standby host; let me show you that.

[root@canttowinsec ~]# yum install edb-pem-agent
Loaded plugins: langpacks, ulninfo
epel/x86_64/metalink | 7.0 kB 00:00:00
local | 2.9 kB 00:00:00
ol7_UEKR6 | 2.5 kB 00:00:00
ol7_latest | 2.7 kB 00:00:00
percona-release-noarch | 2.9 kB 00:00:00
percona-release-x86_64 | 2.9 kB 00:00:00
prel-release-noarch | 2.9 kB 00:00:00
Package edb-pem-agent-8.0.1-1.rhel7.x86_64 already installed and latest version
Nothing to do

Let’s go to the agent home directory and call the configuration utility called ‘pemworker’.

[root@canttowinsec bin]# pwd
/usr/edb/pem/agent/bin

[root@canttowinsec bin]# ls
pemagent pemworker pkgLauncher

Here we have to use a few of the configuration options with their preferred values:
--pem-server : IP address of the PEM backend database server.
--pem-port : Port of the PEM backend database server; the default is 5432, but check which port you have used.
--pem-user : Name of the database user (having superuser privileges) on the PEM backend database server. This is a mandatory option.
--allow_server_restart : Allows PEM to restart the monitored server. TRUE is the default.
--allow-batch-probes : Allows PEM to run batch probes on this agent. FALSE is the default.
--batch-script-user : Operating system user that should be used for executing batch/shell scripts. NONE is the default.

[root@canttowinsec bin]# ./pemworker --register-agent --pem-server 192.168.20.128 --pem-port 5444 --pem-user enterprisedb --allow_server_restart true --allow-batch-probes true --batch-script-user enterprisedb
Postgres Enterprise Manager Agent registered successfully!

Okay, so the agent is successfully registered with the PEM server. Next, we need to add a couple of settings to the agent.cfg file.

[root@canttowinsec etc]# pwd
/usr/edb/pem/agent/etc
[root@canttowinsec etc]# ls
agent.cfg

I am setting allow_streaming_replication to TRUE, as this allows users to configure streaming replication; the next parameter provides the path to the CA certificate bundle.
[root@canttowinsec etc]# echo "allow_streaming_replication=true" >> /usr/edb/pem/agent/etc/agent.cfg
[root@canttowinsec etc]# echo "ca_file=/usr/libexec/libcurl-pem/share/certs/ca-bundle.crt" >> /usr/edb/pem/agent/etc/agent.cfg

So, now my agent configuration file will look like below.

[root@canttowinsec etc]# more agent.cfg
[PEM/agent]
pem_host=192.168.20.128
pem_port=5444

agent_id=2
agent_ssl_key=/root/.pem/agent2.key
agent_ssl_crt=/root/.pem/agent2.crt
log_level=warning
log_location=/var/log/pem/worker.log
agent_log_location=/var/log/pem/agent.log
long_wait=30
short_wait=10
alert_threads=0
enable_smtp=false
enable_snmp=false
enable_webhook=false
max_webhook_retries=3
allow_server_restart=true
max_connections=0
connect_timeout=10
connection_lifetime=0
allow_batch_probes=true
heartbeat_connection=false
enable_nagios=false
batch_script_user=enterprisedb
allow_streaming_replication=true
ca_file=/usr/libexec/libcurl-pem/share/certs/ca-bundle.crt
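Since the agent reads agent.cfg at startup, the pemagent service presumably needs a restart for the newly appended settings to take effect; something along these lines:

```shell
# Restart the agent so the edited agent.cfg is re-read, then confirm it is running
systemctl restart pemagent
systemctl status pemagent
```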

Now you will see your PEM agent added to the PEM agents list in the PEM console.

Next, you can add your standby database to the list of managed servers. You need to follow the same steps I discussed in my last post about PEM configuration; please click here to go directly to that post. The only difference is that you need to select the bound agent from the drop-down list (you will see your new agent appear there); the rest is all the same!

Once it’s added successfully, you will see the new server under the list, here I have named the connection ‘EDBAS12_Sby‘.

Here is how the main landing page now looks, with the new agent and database and their status.

Hope It Helped!
Prashant Dixit


How to install EDB-AS without its software repositories?

Posted by FatDBA on April 3, 2021

Hi Everyone,

Many of you might be wondering, after reading the title, why write about such a simple and rudimentary task; what is so tricky about installing the EDB PostgreSQL software? I know it’s quite easy and straightforward, but only if you are able to add or register the EDB repository on your server. If you fail to do so, installing all of the software and its long list of dependencies becomes a tedious and time-consuming activity. This post is about how to deal with that situation: how to download the RPMs and install them on a server where you are unable to add the EDB repository.

The first step is to download the complete EDB tarball. I am downloading the full tarball here because I don’t want to miss any dependent packages needed by the core components. The tarball is close to 1.8 GB in size; you can download it using the wget command below, with your EDB credentials.

wget https://prashant.dixit:password@yum.enterprisedb.com/edb/redhat/edb_redhat_rhel-7-x86_64.tar.gz

Once the tarball is downloaded, we can create a local YUM repository. Creating the repository is optional, since you can also install the RPMs directly, but it makes your work a lot easier; otherwise you have to resolve dependencies manually. So, I have decided to create a local repository here.

Once the file is downloaded, extract it. You will see the list of all core and dependent RPM packages, like below.

….
….
edb-pgpool40-4.0.8-1.rhel7.x86_64.rpm sslutils_96-1.3-2.rhel7.x86_64.rpm
edb-pgpool40-4.0.9-1.rhel7.x86_64.rpm wxjson-1.2.1-1.rhel7.x86_64.rpm
edb-pgpool40-devel-4.0.6-1.rhel7.x86_64.rpm wxjson-1.2.1-2.rhel7.x86_64.rpm
edb-pgpool40-devel-4.0.8-1.rhel7.x86_64.rpm wxjson-devel-1.2.1-1.rhel7.x86_64.rpm
edb-pgpool40-devel-4.0.9-1.rhel7.x86_64.rpm wxjson-devel-1.2.1-2.rhel7.x86_64.rpm

Next I will create a directory which will be used as a repository container.
[root@canttowin edb]# mkdir -p /home/user/repo

Move all extracted RPMs to this new directory.
[root@canttowin edb]# mv * /home/user/repo

Change ownership and permissions of the directory.
[root@canttowin edb]# chown -R root.root /home/user/repo
[root@canttowin edb]# chmod -R o-w+r /home/user/repo

Now we can create the repository using the ‘createrepo‘ command.
[root@canttowin edb]# createrepo /home/user/repo
Spawning worker 0 with 1151 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Now let’s create the YUM repository entry under /etc/yum.repos.d
[root@canttowin edb]# more /etc/yum.repos.d/myrepo.repo
[local]
name=Prashant Local EDB Repo
baseurl=file:///home/user/repo
enabled=1
gpgcheck=0
[root@canttowin edb]#
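Before searching for packages, it’s worth confirming that YUM can actually see the new repository; a quick sanity check could look like this (the repo name matches the .repo file above):

```shell
# Rebuild the metadata cache and confirm the local repo is listed
yum clean all
yum repolist | grep -i "Prashant Local EDB Repo"
```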

All set! Let’s search for an EDB package using this new local repository.

[root@canttowin ~]# yum search edb-as12-server
Loaded plugins: langpacks, ulninfo
=============================================================== N/S matched: edb-as12-server ================================================================
edb-as12-server.x86_64 : EnterpriseDB Advanced Server Client and Server Components
edb-as12-server-client.x86_64 : The client software required to access EDBAS server.
edb-as12-server-cloneschema.x86_64 : cloneschema is a module for EnterpriseDB Advanced Server
edb-as12-server-contrib.x86_64 : Contributed source and binaries distributed with EDBAS
edb-as12-server-core.x86_64 : The core programs needed to create and run a EnterpriseDB Advanced Server
edb-as12-server-devel.x86_64 : EDBAS development header files and libraries
edb-as12-server-docs.x86_64 : Extra documentation for EDBAS
edb-as12-server-edb-modules.x86_64 : EDB-Modules for EnterpriseDB Advanced Server
edb-as12-server-indexadvisor.x86_64 : Index Advisor for EnterpriseDB Advanced Server
edb-as12-server-libs.x86_64 : The shared libraries required for any EDBAS clients
edb-as12-server-llvmjit.x86_64 : Just-In-Time compilation support for EDBAS
edb-as12-server-parallel-clone.x86_64 : parallel_clone is a module for EnterpriseDB Advanced Server
edb-as12-server-pldebugger.x86_64 : PL/pgSQL debugger server-side code
edb-as12-server-plperl.x86_64 : The Perl procedural language for EDBAS
edb-as12-server-plpython.x86_64 : The Python procedural language for EDBAS
edb-as12-server-plpython3.x86_64 : The Python3 procedural language for EDBAS
edb-as12-server-pltcl.x86_64 : The Tcl procedural language for EDBAS
edb-as12-server-sqlprofiler.x86_64 : SQL profiler for EnterpriseDB Advanced Server
edb-as12-server-sqlprotect.x86_64 : SQL Protect for EnterpriseDB Advanced Server

Great, we are now able to search for and install all our EDB packages through YUM, which is a lot easier than manually resolving dependencies and installing the core packages.

Hope It Helped!
Prashant Dixit


How to monitor your PostgreSQL databases using EDB PEM – Setup, Config, benchmarking and much more …

Posted by FatDBA on March 26, 2021

Hi Everyone,

Today’s post is all about monitoring your PostgreSQL database clusters using EDB Postgres Enterprise Manager (PEM). Postgres Enterprise Manager is a comprehensive, customizable solution providing an interface to control and optimize your PostgreSQL deployment.

I will cover the installation, configuration, and adding servers to the console, and will perform live monitoring of the database while generating some synthetic load on the host. I am doing this on a standalone RHEL 7 64-bit server, which I will use both as the PEM server and as a local instance. Alright, without further ado, let’s start. First, you need to download EDB’s official repository and install the following package.

Below is the complete list of packages matching ‘edb-pem’; you need to install edb-pem-8.0.1-1.rhel7.x86_64.

[root@canttowin repo]# yum search edb-pem
Loaded plugins: langpacks, ulninfo

=================================================================== N/S matched: edb-pem ====================================================================
edb-pem-debuginfo.x86_64 : Debug information for package edb-pem
edb-pem.x86_64 : PostgreSQL Enterprise Manager
edb-pem-agent.x86_64 : Postgres Enterprise Manager Agent
edb-pem-docs.x86_64 : Documentation for Postgres Enterprise Manager
edb-pem-server.x86_64 : PEM Server Components

Once the installation is completed, go to the default installation directory (/usr/edb in my case), and then to the pem/bin folder.

[root@canttowin ~]# cd /usr/edb/
[root@canttowin edb]# ls
as12 bart efm-4.1 jdbc migrationtoolkit pem pgbouncer1.15 pgpool4.2
[root@canttowin ~]# cd /usr/edb/pem/bin/
[root@canttowin bin]# ls
configure-pem-server.sh configure-selinux.sh

We can see two configuration shell scripts are present; we will use configure-pem-server.sh.
Here I will choose option 1, which installs the web services and database on one host. Next, you need to input the installation path (/usr/edb/as12 in my case), followed by the superuser name, port number, and IP address of the server.

Before I call the config script, let me quickly reset the default superuser’s password.

postgres=# alter user postgres with password 'dixit';
ALTER ROLE

Now, let’s call the configuration script and pass all the values discussed above.

[root@canttowin bin]# ./configure-pem-server.sh

 -----------------------------------------------------
 EDB Postgres Enterprise Manager
 -----------------------------------------------------
Install type: 1:Web Services and Database, 2:Web Services 3: Database [ ] :1
Enter local database server installation path (i.e. /usr/edb/as12 , or /usr/pgsql-12, etc.) [ ] :/usr/edb/as12
Enter database super user name [ ] :enterprisedb
Enter database server port number [ ] :5444
Enter database super user password [ ] :
Please enter CIDR formatted network address range that agents will connect to the server from, to be added to the server's pg_hba.conf file. For example, 192.168.1.0/24 [ 0.0.0.0/0 ] :10.0.0.153/32
Enter database systemd unit file or init script name (i.e. edb-as-12 or postgresql-12, etc.) [ ] :edb-as-12
Please specify agent certificate path (Script will attempt to create this directory, if it does not exists) [ ~/.pem/ ] :
CREATE EXTENSION
[Info] Configuring database server.
CREATE DATABASE
CREATE ROLE
CREATE ...
..
..
..
CREATE EXTENSION
-->  [Info] -->  [Info] Configuring database server.
-->  [Info] -->  [Info] creating role pem
-->  [Info] -->  [Info] Generating certificates
-->  [Info] -->  [Info] Executing systemctl stop edb-as-12
-->  [Info] -->  [Info] Skipping - configurations for /var/lib/edb/as12/data/pg_hba.conf and /var/lib/edb/as12/data/postgresql.conf file
-->  [Info] -->  [Info] Executing systemctl start edb-as-12
-->  [Info] -->  [Info] Enable pemagent service.
-->  [Info] -->  [Info] Executing systemctl enable pemagent
-->  [Info] -->  [Info] Stop pemagent service
-->  [Info] -->  [Info] Executing systemctl stop pemagent
-->  [Info] -->  [Info] Start pemagent service.
-->  [Info] -->  [Info] Executing systemctl start pemagent
-->  [Info] -->  [Info] Configuring httpd server
-->  [Info] -->  [Info] Executing systemctl stop httpd
-->  [Info] -->  [Info] Taking backup of /usr/edb/pem/web/pem.wsgi
-->  [Info] -->  [Info] Creating /usr/edb/pem/web/pem.wsgi
-->  [Info] -->  [Info] Taking backup of /usr/edb/pem/web/config_local.py.
-->  [Info] -->  [Info] Generating PEM Cookie Name.
-->  [Info] -->  [Info] Creating /usr/edb/pem/web/config_local.py
-->  [Info] -->  [Info] Taking backup of /etc/httpd/conf.d/edb-pem.conf
-->  [Info] -->  [Info] Creating /etc/httpd/conf.d/edb-pem.conf
-->  [Info] -->  [Info] Configuring httpd server sslconf
-->  [Info] -->  [Info] Taking backup of /etc/httpd/conf.d/edb-ssl-pem.conf
-->  [Info] -->  [Info] Taking backup of /etc/httpd/conf.d/edb-ssl-pem.conf
-->  [Info] -->  [Info] Executing /usr/edb/pem/web/setup.py
Postgres Enterprise Manager - Application Initialisation
========================================================
-->  [Info] -->  [Info] Check and Configure SELinux security policy for PEM
 getenforce found, now executing 'getenforce' command
 Configure the httpd to work with the SELinux
 Allow the httpd to connect the database (httpd_can_network_connect_db = on)
 Allow the httpd to connect the network (httpd_can_network_connect = on)
 Allow the httpd to work with cgi (httpd_enable_cgi = on)
 Allow to read & write permission on the 'pem' user home directory
 SELinux policy is configured for PEM
-->  [Info] -->  [Info] Executing systemctl start httpd
-->  [Info] -->  [Info] Configured the webservice for EDB Postgres Enterprise Manager (PEM) Server on port '8443'.
-->  [Info] -->  [Info] PEM server can be accessed at https://127.0.0.1:8443/pem at your browser

It’s completed, and at the very end it has provided the URL to access the PEM GUI.

The next step is to install the PEM agent; you need to install it on every server you want to monitor. I am skipping the agent configuration done in the agent.cfg file here.

[root@canttowin bin]# yum install edb-pem-agent

Let’s check the PEM GUI now.

Here on the left panel you will notice there is already one database present under the ‘PEM Server Directory’ folder; this is the same database we configured as the PEM server backend, hence it was added to the server list automatically. We will manually add one more database cluster to show how to do it explicitly.

Let’s check the dashboard of the same (PEM server) database for session, TPS, and I/O related details.

Now, let’s add another database to the monitoring console. I will add a community PostgreSQL 12 database. Go to the ‘PEM Server Directory’ folder, right-click on it, and choose Create -> Server.

Next, fill in the connection wizard with all the details, i.e. username, password, IP, port, and security-related settings for the new database, and click Save at the end.

And you are done!

Now, let’s see the default landing page of PEM GUI and here you see details of all added hosts and agents with their status.

Next I will create some new databases to see how they are reflected in the PEM GUI.
postgres=#
postgres=# create database dixit;
CREATE DATABASE
postgres=# create database kartikey;
CREATE DATABASE

postgres=# \l
                                 List of databases
   Name   |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges |  Size
----------+----------+----------+-------------+-------------+-------------------+---------
 dixit    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 8049 kB
 kartikey | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 8049 kB
 postgres | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 8193 kB
(3 rows)

All good! Now let’s do a performance test to see how useful PEM can be in case of performance issues. To simulate the situation, I will generate some synthetic load using PostgreSQL’s bundled benchmarking utility, pgbench.

Reference:
-c : number of clients
-j : number of threads
-t : number of transactions per client

Here each of the 10 clients runs 10,000 transactions, so: 10 x 10000 = 100,000 transactions in total.
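Note that pgbench needs its benchmark tables (pgbench_accounts and friends) created once before a run; on a fresh database, an initialization along these lines would be needed first (the scale factor here is an assumption):

```shell
# One-time initialization of the pgbench schema; -s sets the scale factor
./pgbench -i -s 1 -U postgres -p 5432 postgres
```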

[postgres@canttowin bin]$ ./pgbench -U postgres -p 5432 -c 10 -j 2 -t 10000 postgres
starting vacuum…end.

Let’s see how the changes are captured and presented in PEM.

Okay, we can see the peaks are recorded and presented.

The load is still running and we can clearly see that from the below graph.

[postgres@canttowin bin]$ ./pgbench -U postgres -p 5432 -c 10 -j 2 -t 10000 postgres
starting vacuum…end.
transaction type:
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average = 18.217 ms
tps = 548.940142 (including connections establishing)
tps = 548.970173 (excluding connections establishing)

Alright, the load run has ended; let’s see how the graph looks now.

To conclude, PEM is a great tool that can fulfil all your monitoring needs; it also has some cool features, e.g. performance dashboards, tuning wizards, advisories, and other graphs.

Hope It Helped
Prashant Dixit


How to monitor your PostgreSQL database using Grafana, Prometheus & postgres_exporter.

Posted by FatDBA on March 24, 2021

Hi Everyone,

I am back with another post, this time on monitoring PostgreSQL databases using the popular interactive visualization platform Grafana. I recently implemented Grafana + Prometheus and created a few really cool performance charts and database metric dashboards using the popular PostgreSQL metric exporter ‘postgres_exporter‘ to monitor an EDB 12 PostgreSQL database cluster.

I have divided everything into three parts: Grafana installation and configuration, Prometheus installation and configuration, and finally postgres_exporter installation and configuration.

Let’s first start with Grafana installation and configuration on a Linux server (EL 7 here).

Grafana Installation:

1. Disable SELinux

Change SELINUX=enforcing to SELINUX=disabled and Reboot.

vi /etc/sysconfig/selinux

2. Now we need to create Grafana YUM repository

vi /etc/yum.repos.d/grafana.repo

and add following lines to it

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

3. Install Grafana now using YUM

yum install grafana

This installs the grafana-server binary to /usr/sbin/grafana-server, copies an init.d script to /etc/init.d/grafana-server, and sets the default log location to /var/log/grafana/grafana.log.

4. Install additional font packages

yum install fontconfig
yum install freetype*
yum install urw-fonts

5. Now, enable grafana service

Enable Grafana service on system boot
systemctl enable grafana-server.service

Start it using the following command:
systemctl start grafana-server

6. Check Grafana web interface

http://IPADDRESSorHOSTNAME:3000/

Note: Default username and password is ‘admin‘, and once you login, it will prompt you to set a new password.

Add PostgreSQL as Data Source:

Next, we are going to add PostgreSQL as a data source in Grafana.

1. Login to Grafana and go to ‘configuration‘ tab on the left of the console.

2. Add PostgreSQL as a data source: populate all the details, i.e. hostname, port number, database name, DB user and password, and PostgreSQL version, plus some optional entries like connection limits and SSL details. Once done, click the ‘Save & Test’ button to verify that all the details are fine.

3. You will now see PostgreSQL listed under the data sources tab.

Prometheus Installation & Config

All good; next is to install and configure ‘Prometheus’, a time-series database optimized for storing and serving time series as associated pairs of timestamps and values. You can also use InfluxDB or Graphite as the time-series database for Grafana.

1. Download and untar the Prometheus file for your respective OS.
Example:
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.22.0/prometheus-2.22.0.linux-amd64.tar.gz
tar -xvf prometheus-2.22.0.linux-amd64.tar.gz
mv prometheus-2.22.0.linux-amd64 prometheus-files

2. Create a prometheus user and the required directories, and make that account the owner of those files and folders.

sudo useradd --no-create-home --shell /bin/false prometheus
sudo mkdir /etc/prometheus
sudo mkdir /var/lib/prometheus
sudo chown prometheus:prometheus /etc/prometheus
sudo chown prometheus:prometheus /var/lib/prometheus

3. Copy prometheus and promtool binaries from prometheus-files to /usr/local/bin and change ownership

sudo cp prometheus-files/prometheus /usr/local/bin/
sudo cp prometheus-files/promtool /usr/local/bin/
sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool

4. Copy the consoles and console_libraries directories from prometheus-files to the /etc/prometheus folder and change ownership

sudo cp -r prometheus-files/consoles /etc/prometheus
sudo cp -r prometheus-files/console_libraries /etc/prometheus
sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries

5. Now time to configure. Create prometheus.yml file

vi /etc/prometheus/prometheus.yml

copy below entries to the YAML file

global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['10.0.0.153:9090']

6. Change ownership of this file

sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
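Note that the configuration above only scrapes Prometheus itself. For Prometheus to actually collect database metrics, a scrape job pointing at postgres_exporter (configured later in this post, listening on its default port 9187) would typically be added too; a sketch, assuming the exporter runs on the same host:

```yaml
scrape_configs:
  - job_name: 'postgres_exporter'
    scrape_interval: 10s
    static_configs:
      - targets: ['10.0.0.153:9187']   # host:port where postgres_exporter listens
```

After editing prometheus.yml, Prometheus needs a restart (or a reload) to pick up the new job.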

7. Create prometheus service file

vi /etc/systemd/system/prometheus.service

copy below entries to the file.

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

8. Reload systemd to register the prometheus service, then enable and start it.

sudo systemctl daemon-reload
sudo systemctl enable prometheus
sudo systemctl start prometheus

and check the status of the service

sudo systemctl status prometheus

[root@canttowin edb]# systemctl status prometheus
● prometheus.service - Prometheus
   Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-24 05:55:53 EDT; 4s ago
 Main PID: 17641 (prometheus)
    Tasks: 7
   CGroup: /system.slice/prometheus.service
           └─17641 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates...

Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.172Z caller=head.go:714 component=tsdb msg="WAL segment ...gment=10
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.358Z caller=head.go:714 component=tsdb msg="WAL segment ...gment=10
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.455Z caller=head.go:714 component=tsdb msg="WAL segment ...gment=10
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.456Z caller=head.go:714 component=tsdb msg="WAL segment ...gment=10
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.456Z caller=head.go:719 component=tsdb msg="WAL replay c...173255ms
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.478Z caller=main.go:732 fs_type=XFS_SUPER_MAGIC
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.478Z caller=main.go:735 msg="TSDB started"
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.478Z caller=main.go:861 msg="Loading configuration file"...heus.yml
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.481Z caller=main.go:892 msg="Completed loading of configuration …µs
Mar 24 05:55:54 canttowin.ontadomain prometheus[17641]: level=info ts=2021-03-24T09:55:54.481Z caller=main.go:684 msg="Server is ready to receive ...quests."
Hint: Some lines were ellipsized, use -l to show in full.
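Apart from systemctl, Prometheus exposes simple health endpoints that can be probed from the shell; assuming the default port 9090:

```shell
# Both endpoints return an HTTP 200 response when the server is up and ready
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/-/ready
```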

9. If there are no issues so far, you are all good; time to check the Prometheus web GUI, which you can access using the URL below.

http://10.0.0.153:9090/graph

10. You can check other stats and other available metrics via GUI console.

11. You will now see ‘Prometheus’ in the list of data sources, alongside PostgreSQL, which we added in the first step.

Postgres_Exporter Installation and configuration

Now that Grafana and Prometheus are all set, it’s time to install and configure postgres_exporter, a popular PostgreSQL metrics exporter for Prometheus.

1. Let’s create the required directories first.

mkdir /opt/postgres_exporter
cd /opt/postgres_exporter

2. Download and untar the file.

wget https://github.com/wrouesnel/postgres_exporter/releases/download/v0.5.1/postgres_exporter_v0.5.1_linux-amd64.tar.gz
tar -xzvf postgres_exporter_v0.5.1_linux-amd64.tar.gz
cd postgres_exporter_v0.5.1_linux-amd64

3. Copy core file ‘postgres_exporter’ file to /usr/local/bin directory

cp postgres_exporter /usr/local/bin

4. Next, let’s create the configuration file for postgres_exporter

cd /opt/postgres_exporter
sudo vi postgres_exporter.env

I want to visualize my EDB 12 PostgreSQL stats, so I will use the data source details below. This connection string covers all databases; you can also monitor a specific database.

[root@canttowin edb]# more /opt/postgres_exporter/postgres_exporter.env
DATA_SOURCE_NAME="postgresql://enterprisedb:oracle@10.0.0.153:5444/?sslmode=disable"

5. Next, create the service for postgres_exporter

vi /etc/systemd/system/postgres_exporter.service

Put the lines below into the service file.

[Unit]
Description=Prometheus exporter for Postgresql
Wants=network-online.target
After=network-online.target
[Service]
User=postgres
Group=postgres
WorkingDirectory=/opt/postgres_exporter
EnvironmentFile=/opt/postgres_exporter/postgres_exporter.env
ExecStart=/usr/local/bin/postgres_exporter
Restart=always
[Install]
WantedBy=multi-user.target

6. Next, start and enable the service, and check its status

sudo systemctl daemon-reload
sudo systemctl start postgres_exporter
sudo systemctl enable postgres_exporter

[root@canttowin edb]# systemctl status postgres_exporter
● postgres_exporter.service - Prometheus exporter for Postgresql
   Loaded: loaded (/etc/systemd/system/postgres_exporter.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-24 05:52:17 EDT; 2s ago
 Main PID: 16984 (postgres_export)
    Tasks: 3
   CGroup: /system.slice/postgres_exporter.service
           └─16984 /usr/local/bin/postgres_exporter --web.listen-address=:9187 --web.telemetry-path=/metrics

Mar 24 05:52:17 canttowin.ontadomain systemd[1]: Started Prometheus exporter for Postgresql.
Mar 24 05:52:17 canttowin.ontadomain postgres_exporter[16984]: time="2021-03-24T05:52:17-04:00" level=info msg="Established new database connection...go:777"
Mar 24 05:52:17 canttowin.ontadomain postgres_exporter[16984]: time="2021-03-24T05:52:17-04:00" level=info msg="Semantic Version Changed on \"10.0....o:1229"
Mar 24 05:52:18 canttowin.ontadomain postgres_exporter[16984]: time="2021-03-24T05:52:18-04:00" level=info msg="Starting Server: :9187" source="pos...o:1437"
Hint: Some lines were ellipsized, use -l to show in full.

7. Now we can check the postgres_exporter target's state and other details through the Prometheus web GUI.
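For the exporter to show up as a target, Prometheus needs a scrape job pointing at the exporter's port (9187 here). A minimal sketch of the prometheus.yml fragment, assuming the exporter runs on this lab's host 10.0.0.153:

```yaml
scrape_configs:
  - job_name: 'postgres_exporter'
    static_configs:
      - targets: ['10.0.0.153:9187']
```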

As we have configured postgres_exporter and mapped it in Prometheus, we can also query the many metrics it exposes. These are accessible on the 'Graph' page of the Prometheus GUI console.

You can view the results either as a graph or in the console itself.

Alright, we have installed and configured Grafana, added PostgreSQL and Prometheus as data sources, and configured the postgres_exporter metric collector too. Time to generate some graphs. You can do this by writing manual queries in new dashboards, or you can use any of the prebuilt Grafana dashboard templates for PostgreSQL, e.g. 6742 or 9628 (there are many more in the Grafana Labs repository).

1. Let's import the dashboard with ID 6742. Click the + sign on the left panel and choose the last option, 'Import'.

2. On the next screen, enter dashboard ID 6742 and press the 'Load' button. You can also paste the dashboard's JSON instead of the ID; I will use the ID in this sample configuration. Then select 'Prometheus' from the drop-down list, click 'Import', and it's all set!

3. This brings up the final dashboard showing lots of PostgreSQL metrics and their current values.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »

DDL generation in PostgreSQL & a TRICK …

Posted by FatDBA on December 3, 2020

Hi Guys,

A friend of mine recently asked if there is any way to get the DDL of all objects inside a schema in PostgreSQL without using any third-party tools, something similar to dbms_metadata in Oracle. Though it sounds trivial, it's not, and it is indeed an interesting question. If you need the DDL of an index, view, constraint, trigger or function, you have predefined/in-built PostgreSQL functions (pg_get_*def) and it is easy to get the structure, but what if you want the DDL of a table, sequence or type (there are a few more)?

You might be thinking of doing concatenations or writing some code, and yes, that can be done, but it would be tricky and complicated. There isn't any direct way of generating definitions for those objects, but here the PG_DUMP utility comes to the rescue: you can dump the entire schema, redirect the output into a SQL file, and read all your DDLs from the dump file.

Let's do some hands-on; I am on EDB PostgreSQL version 10.12.20.


[enterprisedb@fatdba ~]$ psql
psql.bin (10.12.20)
Type "help" for help.

enterprisedb=# set search_path=migr;
SET
enterprisedb=# \dtsvi+
                                                               List of relations
 Schema |            Name             |   Type   |    Owner     |         Table          |    Size    |              Description
--------+-----------------------------+----------+--------------+------------------------+------------+----------------------------------------
 migr   | audits                      | table    | enterprisedb |                        | 0 bytes    |
 migr   | audits_audit_id_seq         | sequence | enterprisedb |                        | 8192 bytes |
 migr   | audits_pkey                 | index    | enterprisedb | audits                 | 8192 bytes |
 migr   | bar                         | table    | enterprisedb |                        | 8192 bytes |
 migr   | bar_pk                      | index    | enterprisedb | bar                    | 8192 bytes |
 migr   | bigtab                      | table    | enterprisedb |                        | 944 kB     |
 migr   | bigtab1                     | table    | enterprisedb |                        | 112 kB     |
 migr   | bigtab1_vw                  | view     | enterprisedb |                        | 0 bytes    |
 migr   | foo                         | table    | enterprisedb |                        | 8192 bytes |
 migr   | foo_bar                     | table    | enterprisedb |                        | 8192 bytes |
 migr   | foo_pk                      | index    | enterprisedb | foo                    | 8192 bytes |
 migr   | i_mlog$_bar                 | index    | enterprisedb | mlog$_bar              | 8192 bytes |
 migr   | i_mlog$_foo                 | index    | enterprisedb | mlog$_foo              | 8192 bytes |
 migr   | mlog$_bar                   | table    | enterprisedb |                        | 8192 bytes | snapshot log for master table MIGR.BAR
 migr   | mlog$_foo                   | table    | enterprisedb |                        | 8192 bytes | snapshot log for master table MIGR.FOO
 migr   | person_address_details      | table    | enterprisedb |                        | 40 kB      |
 migr   | person_address_details_pkey | index    | enterprisedb | person_address_details | 16 kB      |
 migr   | person_info                 | table    | enterprisedb |                        | 40 kB      |
 migr   | person_info_pkey            | index    | enterprisedb | person_info            | 16 kB      |
 migr   | trigger_test                | table    | enterprisedb |                        | 8192 bytes |
(20 rows)
 

This is how you can generate DDLs for views, indexes and the other objects covered by the built-in functions.


enterprisedb=# select pg_get_viewdef('bigtab1_vw'::regclass, true);
           pg_get_viewdef
------------------------------------
  SELECT bigtab1.id,               +
     bigtab1.created_date,         +
     bigtab1.lookup_id,            +
     bigtab1.data                  +
    FROM bigtab1                   +
   WHERE bigtab1.id > 950::numeric;
(1 row)
                           
enterprisedb=#  select pg_get_indexdef('person_address_details_pkey'::regclass);
                                                pg_get_indexdef
----------------------------------------------------------------------------------------------------------------
 CREATE UNIQUE INDEX person_address_details_pkey ON migr.person_address_details USING btree (person_address_id)
(1 row)
 

Let's take a pg_dump backup of this schema and look for the table and other object definitions. (If you only need definitions without data, pg_dump also accepts the --schema-only flag.)


[enterprisedb@fatdba dumpddl]$ pg_dump --schema migr > migr_ddl.sql
[enterprisedb@fatdba dumpddl]$
[enterprisedb@fatdba dumpddl]$ ls -ltrh
total 728K
-rw-rw-r--. 1 enterprisedb enterprisedb 728K Nov 10 11:34 migr_ddl.sql
 

[enterprisedb@fatdba dumpddl]$ more migr_ddl.sql |grep -A 5 --color 'CREATE TABLE'
CREATE TABLE migr.audits (
    audit_id bigint NOT NULL,
    table_name character varying(255),
    transaction_name character varying(10),
    by_user character varying(30),
    transaction_date timestamp without time zone
--
CREATE TABLE migr.bar (
    foo numeric NOT NULL,
    bar numeric NOT NULL
);


--
CREATE TABLE migr.bigtab (
    id numeric(12,6),
    v1 character varying(10),
    padding character varying(50)
);

--
CREATE TABLE migr.bigtab1 (
    id numeric(10,0),
    created_date timestamp without time zone,
    lookup_id numeric(10,0),
    data character varying(50)
);
--
CREATE TABLE migr.foo (
    foo numeric NOT NULL
);


ALTER TABLE migr.foo OWNER TO enterprisedb;
--
CREATE TABLE migr.foo_bar (
    foo numeric,
    bar numeric,
    foo_rowid rowid,
    bar_rowid rowid
);
--
CREATE TABLE migr."mlog$_bar" (
    "m_row$$" character varying(255),
    "snaptime$$" timestamp without time zone,
    "dmltype$$" character varying(1),
    "old_new$$" character varying(1),
    "change_vector$$" bytea(255),
--
CREATE TABLE migr."mlog$_foo" (
    "m_row$$" character varying(255),
    "snaptime$$" timestamp without time zone,
    "dmltype$$" character varying(1),
    "old_new$$" character varying(1),
    "change_vector$$" bytea(255),
--
CREATE TABLE migr.person_address_details (
    person_address_id numeric(5,0) NOT NULL,
    person_id numeric(5,0),
    city character varying(15),
    state character varying(15),
    country character varying(20),
--
CREATE TABLE migr.person_info (
    person_id numeric(5,0) NOT NULL,
    first_name character varying(20),
    last_name character varying(20)
);

--
CREATE TABLE migr.trigger_test (
    id numeric NOT NULL,
    description character varying(50) NOT NULL
);

 

So, we finally got the DDL of the tables and all other objects inside the schema.
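One caveat with the 'grep -A 5' approach above: tables with more than five columns get truncated. An awk range pattern prints each complete CREATE TABLE statement instead; a small sketch, demonstrated on an inline sample dump so it runs anywhere (point awk at your real migr_ddl.sql):

```shell
# Extract complete CREATE TABLE statements (from "CREATE TABLE" up to the
# closing ");") out of a schema dump. A tiny sample dump is inlined here
# so the sketch is self-contained.
cat > /tmp/sample_ddl.sql <<'EOF'
CREATE TABLE migr.foo (
    foo numeric NOT NULL
);
ALTER TABLE migr.foo OWNER TO enterprisedb;
CREATE TABLE migr.bar (
    foo numeric NOT NULL,
    bar numeric NOT NULL
);
EOF
# Print every line from a CREATE TABLE down to its closing ");",
# skipping everything in between (ALTER TABLE, comments, etc.).
awk '/^CREATE TABLE/,/^\);/' /tmp/sample_ddl.sql
```

Unlike grep -A N, the range pattern follows each statement to its real end, however many columns it has.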

Hope It helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , | Leave a Comment »

Dockers – OCI runtime create failed: container_linux.go:349: starting container process caused – process_linux.go:449: container init caused \ – write /proc/self/attr/keycreate: permission denied\’

Posted by FatDBA on September 29, 2020

Hi Guys,

Today I will discuss a problem that I encountered while starting PostgreSQL in a Docker container. This was the very first time we had run any container on this machine. The error says something like 'OCI runtime create failed: container_linux.go:349', followed by "process_linux.go:449: container init caused: permission denied". I was totally dumbstruck, as the error doesn't give any clue about where and what is failing.

The exact error is given below; the case is simulated on my personal sandbox, but with the exact same error and issue.


[root@fatdba-doccass ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
postgres            latest              817f2d3d51ec        4 days ago          314MB
[root@fatdba-doccass ~]# docker run --name postgres -e POSTGRES_PASSWORD=postgres -d postgres
c90d92ea603044d72ffed2449e550bfd39d328beacb6a55e17c4515861f86140
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused 
\"write /proc/self/attr/keycreate: permission denied\"": unknown.
 


I remembered we had fixed something similar (not exactly the same) on another Docker setup by disabling SELinux, and that had worked for me. So we planned to give it a try; this being a test setup, we didn't hesitate to try the said option. SELinux was set to 'enforcing', so we had to set it to 'disabled' and reboot the machine.
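The config edit itself can be scripted with sed; a minimal sketch, demonstrated here on a throwaway copy of the file so it runs anywhere. On the real host you would target /etc/selinux/config as root and then reboot (note that 'setenforce 0' only switches to permissive mode until the next boot, it does not persist):

```shell
# Flip SELINUX=enforcing to SELINUX=disabled. Shown on a throwaway copy;
# on the real system the file is /etc/selinux/config and the change
# takes effect only after a reboot.
cp_file=/tmp/selinux_config_demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cp_file"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cp_file"
grep '^SELINUX=' "$cp_file"
```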


[root@fatdba-doccass ~]# more /etc/selinux/config |grep "SELINUX="
SELINUX=disabled
[root@fatdba-doccass ~]# reboot
 


Now that the system is back, we are all set to run the PostgreSQL image.


[root@fatdba-doccass ~]# docker run --name postgres -e POSTGRES_PASSWORD=postgres -d postgres
6aebd2ea4304202980daeff761857f5aa53deaf51cf7d13b1d00974219b6f80c
[root@fatdba-doccass ~]#
[root@fatdba-doccass ~]#
 


Awesome, it worked. Let's check the status of the container.


[root@fatdba-doccass ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
6aebd2ea4304        postgres            "docker-entrypoint.s…"   8 seconds ago       Up 4 seconds        5432/tcp            postgres
[root@fatdba-doccass ~]#
[root@fatdba-doccass ~]#
 


Next, we tried to get a shell inside the container and open a psql terminal, and that worked too!


[root@fatdba-doccass ~]# docker exec -it fatdba_psql bash
Error: No such container: fatdba_psql
[root@fatdba-doccass ~]# docker exec -it postgres bash
root@6aebd2ea4304:/#
root@6aebd2ea4304:/# psql -U postgres postgres
psql (13.0 (Debian 13.0-1.pgdg100+1))
Type "help" for help.

postgres=#
postgres=# select version();
                                                     version
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
(1 row)

postgres=#
postgres=#
postgres-# \dt+
                             List of relations
 Schema | Name  | Type  |  Owner   | Persistence |    Size    | Description
--------+-------+-------+----------+-------------+------------+-------------
 public | dixit | table | postgres | permanent   | 8192 bytes |

 

Hope That Helped!
Prashant Dixit

Posted in Advanced | Tagged: , , , | Leave a Comment »

Connecting Oracle database from PostgreSQL using Public DB Links.

Posted by FatDBA on August 31, 2020

Hi Guys,

This post is a continuation of my last post, where I showed how to connect to an Oracle database from a PostgreSQL instance. That post was all about accessing Oracle using foreign data wrappers; this one is about accessing/querying an Oracle database from PostgreSQL using DB links.

Software Used:
Oracle Instant Clients (Need both BASIC and DEVEL packages)
Link: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

For this method too, you have to install the Oracle Instant Client basic and devel packages.


[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
[sudo] password for enterprisedb:
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$

[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ ls -ll /usr/lib/oracle/12.2/client64/lib
total 216568
lrwxrwxrwx. 1 root root        21 Aug 27 22:59 libclntshcore.so -> libclntshcore.so.12.1
-rw-rwxr--. 1 root root   8033199 Jan 26  2017 libclntshcore.so.12.1
lrwxrwxrwx. 1 root root        17 Aug 27 22:59 libclntsh.so -> libclntsh.so.12.1
-rw-rwxr--. 1 root root  71638263 Jan 26  2017 libclntsh.so.12.1
-rw-rwxr--. 1 root root   2981501 Jan 26  2017 libipc1.so
-rw-rwxr--. 1 root root    539065 Jan 26  2017 libmql1.so
-rw-rwxr--. 1 root root   6568149 Jan 26  2017 libnnz12.so
lrwxrwxrwx. 1 root root        15 Aug 27 22:59 libocci.so -> libocci.so.12.1
-rw-rwxr--. 1 root root   2218687 Jan 26  2017 libocci.so.12.1
-rw-rwxr--. 1 root root 124771800 Jan 26  2017 libociei.so
-rw-rwxr--. 1 root root    158543 Jan 26  2017 libocijdbc12.so
-rw-rwxr--. 1 root root    380996 Jan 26  2017 libons.so
-rw-rwxr--. 1 root root    116563 Jan 26  2017 liboramysql12.so
-rw-r--r--. 1 root root   3984814 Jan 26  2017 ojdbc8.jar
-rw-rwxr--. 1 root root    312974 Jan 26  2017 ottclasses.zip
-rw-r--r--. 1 root root     37494 Jan 26  2017 xstreams.jar
[enterprisedb@fatdba ~]$
 


Cool, let's connect to the PostgreSQL instance and create the DB link. To create it you need the Oracle username and password you want to connect as, and the server's IP address along with the SID or service name, and you are done. (Note: CREATE DATABASE LINK is an Oracle-compatibility feature of EDB Advanced Server; community PostgreSQL offers the dblink extension instead.) And yes, don't forget to set LD_LIBRARY_PATH to the location of your Oracle Instant Client.


[enterprisedb@fatdba ~]$ export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ psql -d enterprisedb -U enterprisedb
psql.bin (10.12.20)
Type "help" for help.
enterprisedb=#
enterprisedb=#
enterprisedb=# select version();
                                                    version
---------------------------------------------------------------------------------------------------------------
 EnterpriseDB 10.12.20 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit
(1 row)

enterprisedb=#
enterprisedb=# CREATE DATABASE LINK dixdroid CONNECT TO migr IDENTIFIED BY 'oracle90' USING oci '//10.0.0.130/fatdb';
CREATE DATABASE LINK
enterprisedb=#
 


Great, now it's time to test. Let's query an Oracle table from the PostgreSQL instance using the DB link named 'dixdroid' that we created earlier.



enterprisedb=# select * from migr.bigtab1@dixdroid;
  id  |    created_date    | lookup_id |            data
------+--------------------+-----------+----------------------------
  320 | 19-MAY-19 02:10:38 |         1 | This is some data for 320
  321 | 19-MAY-18 02:10:38 |         2 | This is some data for 321
  322 | 19-MAY-19 02:10:38 |         1 | This is some data for 322
  323 | 19-MAY-20 02:10:38 |         3 | This is some data for 323
  324 | 19-MAY-18 02:10:38 |         2 | This is some data for 324
  325 | 19-MAY-20 02:10:38 |         3 | This is some data for 325
  326 | 19-MAY-19 02:10:38 |         1 | This is some data for 326
  327 | 19-MAY-18 02:10:38 |         2 | This is some data for 327
  328 | 19-MAY-19 02:10:38 |         1 | This is some data for 328
  329 | 19-MAY-20 02:10:38 |         3 | This is some data for 329
  330 | 19-MAY-18 02:10:38 |         2 | This is some data for 330
  331 | 19-MAY-20 02:10:38 |         3 | This is some data for 331
 

Great, it worked like a charm!
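Since the link behaves like any other relation source, you can also mix it into local queries. A hypothetical sketch, where local_orders and its columns are assumed local objects for illustration only:

```sql
-- Join a local table with the remote Oracle table over the dixdroid link.
-- local_orders, order_id and item_id are hypothetical names.
SELECT l.order_id, r.data
FROM   local_orders l
JOIN   migr.bigtab1@dixdroid r ON r.id = l.item_id;
```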

Hope It Helps
Prashant Dixit

Posted in Basics | Tagged: , | 1 Comment »

Connecting Oracle database from PostgreSQL using ORACLE_FDW (Foreign Data Wrappers)

Posted by FatDBA on August 27, 2020

Hi All,

Today I received a call from a teammate who was facing issues in PostgreSQL when trying to access a remote table that exists in an Oracle database. I instantly recalled something similar I had tried a few years back using Oracle foreign data wrappers (oracle_fdw), so I put together a quick demo showing how to do it with the FDW.

Source: EnterpriseDB 10.12.20 on x86_64-pc-linux-gnu
Target: Oracle Enterprise 12c Release 2 (12.2.0.1.0)

Software Used:
Oracle Instant Clients (Need both BASIC and DEVEL packages)
Link: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

Oracle FDW Software: Version 2.2.1
Link: https://pgxn.org/dist/oracle_fdw/2.2.1/

Below are the files that I have downloaded.


-rw-rw-r--.  1 enterprisedb enterprisedb 143K Aug 27 17:12 oracle_fdw-2.2.1.zip
-rw-rw-r--.  1 enterprisedb enterprisedb  51M Aug 27 17:26 oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
-rw-rw-r--.  1 enterprisedb enterprisedb 593K Aug 27 17:27 oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
drwxrwxr-x.  5 enterprisedb enterprisedb 4.0K Aug 27 22:43 oracle_fdw-2.2.1

[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
 


Let's install both of the Instant Client packages on the server.


[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
[sudo] password for enterprisedb:
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ sudo -Uvh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
sudo: unknown user: vh
[enterprisedb@fatdba ~]$ sudo rpm -Uvh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracle-instantclient12.########################################### [100%]
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$
 


Now let's quickly check whether all the required libraries are in place.


[oracle@fatdba client64]$ cd lib/
[oracle@fatdba lib]$ pwd
/usr/lib/oracle/12.2/client64/lib
[oracle@fatdba lib]$
[oracle@fatdba lib]$
[oracle@fatdba lib]$ ls -ll
total 216568
lrwxrwxrwx. 1 root root        21 Aug 27 22:59 libclntshcore.so -> libclntshcore.so.12.1
-rw-rwxr--. 1 root root   8033199 Jan 26  2017 libclntshcore.so.12.1
lrwxrwxrwx. 1 root root        17 Aug 27 22:59 libclntsh.so -> libclntsh.so.12.1
-rw-rwxr--. 1 root root  71638263 Jan 26  2017 libclntsh.so.12.1
-rw-rwxr--. 1 root root   2981501 Jan 26  2017 libipc1.so
-rw-rwxr--. 1 root root    539065 Jan 26  2017 libmql1.so
-rw-rwxr--. 1 root root   6568149 Jan 26  2017 libnnz12.so
lrwxrwxrwx. 1 root root        15 Aug 27 22:59 libocci.so -> libocci.so.12.1
-rw-rwxr--. 1 root root   2218687 Jan 26  2017 libocci.so.12.1
-rw-rwxr--. 1 root root 124771800 Jan 26  2017 libociei.so
-rw-rwxr--. 1 root root    158543 Jan 26  2017 libocijdbc12.so
-rw-rwxr--. 1 root root    380996 Jan 26  2017 libons.so
-rw-rwxr--. 1 root root    116563 Jan 26  2017 liboramysql12.so
-rw-r--r--. 1 root root   3984814 Jan 26  2017 ojdbc8.jar
-rw-rwxr--. 1 root root    312974 Jan 26  2017 ottclasses.zip
-rw-r--r--. 1 root root     37494 Jan 26  2017 xstreams.jar
[oracle@fatdba lib]$
 


Yup, they are all there, perfect!
Now let's set the library path using the LD_LIBRARY_PATH variable, as shown below.
This avoids errors such as:

ERROR:  could not load library "/opt/edb/as10/lib/oracle_fdw.so": libclntsh.so.12.1: cannot open shared object file: No such file or directory 


[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$ export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
[enterprisedb@fatdba oracle_fdw-2.2.1]$
 


Now let's do a quick restart of the PostgreSQL instance.


[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$ pg_ctl -D /opt/edb/as10/data restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2020-08-27 23:06:33 CEST [3641]: [1-1] user=,db=,remote= LOG:  listening on IPv4 address "0.0.0.0", port 5444
2020-08-27 23:06:33 CEST [3641]: [2-1] user=,db=,remote= LOG:  listening on IPv6 address "::", port 5444
2020-08-27 23:06:33 CEST [3641]: [3-1] user=,db=,remote= LOG:  listening on Unix socket "/tmp/.s.PGSQL.5444"
2020-08-27 23:06:33 CEST [3641]: [4-1] user=,db=,remote= LOG:  redirecting log output to logging collector process
2020-08-27 23:06:33 CEST [3641]: [5-1] user=,db=,remote= HINT:  Future log output will appear in directory "custompg_logdir".
 done
server started
[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$
[enterprisedb@fatdba oracle_fdw-2.2.1]$ psql -d enterprisedb -U enterprisedb
psql.bin (10.12.20)
Type "help" for help.
 


All good, let's create the extension in PostgreSQL.


enterprisedb=#
enterprisedb=#
enterprisedb=# create extension oracle_fdw;
CREATE EXTENSION
enterprisedb=#
 


Next, let's create the foreign server for the remote database I need to connect to, Oracle 12.2 in my case.
Note: here 10.0.0.130 is the IP address of the machine where Oracle is running, and 'fatdb' is the instance name.


enterprisedb=#
enterprisedb=#
enterprisedb=#
enterprisedb=# create server oracle foreign data wrapper oracle_fdw options (dbserver '//10.0.0.130/fatdb' );
CREATE SERVER
enterprisedb=#
 


Next is to create the USER MAPPING for the specific user I would like to connect as, in my case the 'migr' user, and finally pass the user's password.


enterprisedb=# create user mapping for enterprisedb server oracle options (user 'migr', password 'oracle90');
CREATE USER MAPPING
enterprisedb=#
enterprisedb=#
 
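Note that oracle_fdw also needs a foreign-table definition on the PostgreSQL side before the remote table can be queried, a step that is easy to miss. A hedged sketch using the names from this post; the column types here are assumptions based on the data shown:

```sql
-- Local schema to hold the mapped table (name matches the remote owner).
CREATE SCHEMA migr;

-- Map the remote Oracle table MIGR.BIGTAB1 through the 'oracle' server
-- created above. Column types are guesses from the sample data.
CREATE FOREIGN TABLE migr.bigtab1 (
    id           numeric,
    created_date timestamp,
    lookup_id    numeric,
    data         varchar(50)
) SERVER oracle OPTIONS (schema 'MIGR', table 'BIGTAB1');
```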


All set, you can now try to query the remote table.


enterprisedb=#
enterprisedb=# select * from migr.bigtab1;
  id  |    created_date    | lookup_id |            data
------+--------------------+-----------+----------------------------
  320 | 19-MAY-19 02:10:38 |         1 | This is some data for 320
  321 | 19-MAY-18 02:10:38 |         2 | This is some data for 321
  322 | 19-MAY-19 02:10:38 |         1 | This is some data for 322
  323 | 19-MAY-20 02:10:38 |         3 | This is some data for 323
  324 | 19-MAY-18 02:10:38 |         2 | This is some data for 324
  325 | 19-MAY-20 02:10:38 |         3 | This is some data for 325
  326 | 19-MAY-19 02:10:38 |         1 | This is some data for 326
  327 | 19-MAY-18 02:10:38 |         2 | This is some data for 327
  328 | 19-MAY-19 02:10:38 |         1 | This is some data for 328
  329 | 19-MAY-20 02:10:38 |         3 | This is some data for 329
  330 | 19-MAY-18 02:10:38 |         2 | This is some data for 330
  331 | 19-MAY-20 02:10:38 |         3 | This is some data for 331
  332 | 19-MAY-19 02:10:38 |         1 | This is some data for 332
 


Let's have some more fun and try an insert.


enterprisedb=#
enterprisedb=# select count(*) from migr.bigtab1;
 count
-------
  1000
(1 row)

enterprisedb=# insert into migr.bigtab1 values(1001, '19-MAY-19 02:10:38', 4, 'This is some data for prashant');
INSERT 0 1
enterprisedb=# select count(*) from migr.bigtab1;
 count
-------
  1001
(1 row)

enterprisedb=# select * from migr.bigtab1 where id=1001;
  id  |    created_date    | lookup_id |              data
------+--------------------+-----------+--------------------------------
 1001 | 19-MAY-19 02:10:38 |         4 | This is some data for prashant
(1 row)

enterprisedb=#
enterprisedb=#
 

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: | 1 Comment »

Migrating from Oracle to PostgreSQL using ora2pg

Posted by FatDBA on October 26, 2018

Hey Everyone,

Nowadays, lots of organizations have started looking to migrate their databases from Oracle to open-source databases, and when they look for such replacements they generally want cost efficiency, high performance, good data integrity and easy integration with cloud providers such as Amazon. PostgreSQL is the answer for most of them; it's not only the cost, with PostgreSQL you are not compromising on good features like replication, clustering, NoSQL support and more.

PostgreSQL has been a popular database for about a decade now, and is currently the second most loved DB.
It is gradually taking over from many databases as a truly open-source, flexible, standards-compliant and highly extensible RDBMS. Recently it has gotten significantly better with full-text search, logical replication, JSON support and lots of other cool features.

* Of course I love Oracle and it will always remain my first love; it is just that I am a fan of PostgreSQL too! 🙂 🙂

Okay, coming back to the purpose of this post: how do you migrate from your existing Oracle database to PostgreSQL using the popular open-source tool ora2pg?

In this post I will discuss one migration that I did using the tool.
I won't go into the in-depth checks and factors you would consider while choosing the right approach, tool, methodology or strategy; I plan to cover those in future posts.

Before starting, I would like to give a short introduction to the tool and the approach. ora2pg is one of the most widely used open-source tools for migrating Oracle databases to PostgreSQL.
Most of the schema migration can be done automatically using ora2pg. Oracle database objects not supported by PostgreSQL must be identified and migrated manually. Ora2pg only partially migrates PL/SQL objects. For example, PostgreSQL does not support packages: a SCHEMA can serve as an alternative for the package definition, and the package body must be converted to FUNCTION(s).

A few of the features offered by its latest version (19.1):
– Export full database schema (tables, views, sequences, indexes), with unique, primary, foreign key and check constraints.
– Export grants/privileges for users and groups.
– Export range/list partitions and sub partitions.
– Export a table selection (by specifying the table names).
– Export Oracle schema to a PostgreSQL 8.4+ schema.
– Export predefined functions, triggers, procedures, packages and package bodies.
– Export full data or following a WHERE clause.
– Full support of Oracle BLOB object as PG BYTEA.
– Export Oracle views as PG tables.
– Provide some basic automatic conversion of PLSQL code to PLPGSQL.
– Export Oracle tables as foreign data wrapper tables.
– Export materialized view.
– Show a detailed report of an Oracle database content.
– Migration cost assessment of an Oracle database.
– Migration difficulty level assessment of an Oracle database.
– Migration cost assessment of PL/SQL code from a file.
– Migration cost assessment of Oracle SQL queries stored in a file.
– Export Oracle locator and spatial geometries into PostGis.
– Export DBLINK as Oracle FDW.
– Export SYNONYMS as views.
– Export DIRECTORY as external table or directory for external_file extension.

There are a few other objects, like materialized views, public synonyms and IOT tables, that need alternatives in PostgreSQL.

Okay, now I will jump to the actual execution.

Step 1: Installation
You need to install the Oracle and Postgres database drivers and the Perl DBI module, which are required by the ora2pg tool.

I will be using ora2pg version 19.1, the PG driver (DBD-Pg-3.7.4), the Oracle driver (DBD-Oracle-1.75_2) and the Perl DBI module (DBI-1.641.tar.gz).
All packages can be downloaded from https://metacpan.org/ and the tool itself from https://sourceforge.net/projects/ora2pg/
First I installed the Perl modules; the usual steps to install each of the three required packages are given below. Unzip the packages and run the commands below in the same sequence.


perl Makefile.PL
make
make test
make install

Step 2: Next, install the ora2pg tool itself.
Like any other Perl module, Ora2Pg can be installed with the following commands.



tar xjf ora2pg-x.x.tar.bz2
cd ora2pg-x.x/
perl Makefile.PL
make && make install


Step 3: Configuring the tool as per your needs.
Ora2Pg consists of a Perl script (ora2pg) and a Perl module (Ora2Pg.pm). The only thing you have to modify is the configuration file ora2pg.conf, by setting the DSN of the Oracle database and optionally the name of a schema. Once that's done, you just have to set the type of export you want: TABLE with constraints, VIEW, MVIEW, TABLESPACE, SEQUENCE, INDEXES, TRIGGER, GRANT, FUNCTION, PROCEDURE, PACKAGE, PARTITION, TYPE, INSERT or COPY, FDW, QUERY, SYNONYM.



Installation creates the configuration file under /etc directory i.e.
[root@gunna ora2pg]# pwd
/etc/ora2pg


Next, you have to set a few required parameters in the configuration file, for example:



# Set Oracle database connection (datasource, user, password)
ORACLE_DSN      dbi:Oracle:host=gunna.localdomain;sid=gunnadb;port=1539
ORACLE_USER     system
ORACLE_PWD      oracle90


# Oracle schema/owner to use
SCHEMA          soe


# Type of export. Values can be the following keyword:
# Here I will export TABLEs, PROCEDUREs and FUNCTIONs, with table data exported as INSERT statements (or you can choose COPY as the format).
TYPE            TABLE INSERT PROCEDURE FUNCTION

# Output file name
OUTPUT          oracletopgmigrationsoeschema.sql


Step 4: With the configuration set, we are all good to call the tool and take the export dump of Oracle's SOE schema.

The SOE schema in Oracle contains only 2 tables – ADDRESSES (1506152 Rows) and CARD_DETAILS (1505972 rows).
Let’s quickly verify it …




SQL> select count(*) from addresses;

  COUNT(*)
----------
   1506152

Next, you need to set the Oracle library path.
[root@gunna ora2pg]#  export LD_LIBRARY_PATH=/home/oracle/app/oracle/product/12.2.0/dbhome_1/lib



Now, call the tool



[root@gunna ora2pg]# ora2pg
[========================>] 2/2 tables (100.0%) end of scanning.
[========================>]  0/2 tables (0.0%) end of scanning.
[========================>] 2/2 tables (100.0%) end of table export.
[========================>] 1506152/1506152 rows (100.0%) Table ADDRESSES (9538 recs/sec)
[========================>]  3012124/3012124 total rows (100.0%) - (159 sec., avg: 9538 recs/sec).
[========================>] 1505972/1505972 rows (100.0%) Table CARD_DETAILS (16666 recs/sec)
[========================>] 1/1 Indexes(100.0%) end of output.
[========================>] 2/2 Tables(100.0%) end of output.
[root@gunna ora2pg]#
[root@gunna ora2pg]#


This creates the dump in the directory from which you invoked the tool.



[root@gunna ora2pg]# ls -ltrh
-rw-r--r--. 1 root root  47K Sep 24 07:18 ora2pg.conf.dist_main
-rw-r--r--. 1 root root  47K Oct  8 05:04 ora2pg.conf
-rw-r--r--. 1 root root 668M Oct 10 03:49 oracletopgmigrationsoeschema.sql


Step 5: Let’s see what’s inside the dump.
You will notice that the tool itself has handled the data type conversions and generated the INSERT statements,
for example: INTEGER to bigint, DATE to timestamp, VARCHAR2 to varchar, and so on.
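If the default mappings don’t suit your target schema, ora2pg lets you override them through the DATA_TYPE directive in ora2pg.conf. A hypothetical example (not something the SOE schema needed):

```
# Override default Oracle-to-Postgres type mapping, as Oracle:Postgres pairs
DATA_TYPE       DATE:timestamptz,CLOB:text
```

Each pair replaces the built-in mapping for that Oracle type across the whole export.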

Below are the contents copied from the dump.



CREATE TABLE addresses (
        address_id bigint NOT NULL,
        customer_id bigint NOT NULL,
        date_created timestamp NOT NULL,
        house_no_or_name varchar(60),
        street_name varchar(60),
        town varchar(60),
        county varchar(60),
        country varchar(60),
        post_code varchar(12),
        zip_code varchar(12)
) ;
ALTER TABLE addresses ADD PRIMARY KEY (address_id);


CREATE TABLE card_details (
        card_id bigint NOT NULL,
        customer_id bigint NOT NULL,
        card_type varchar(30) NOT NULL,
        card_number bigint NOT NULL,
        expiry_date timestamp NOT NULL,
        is_valid varchar(1) NOT NULL,
        security_code integer
) ;
CREATE INDEX carddetails_cust_ix ON card_details (customer_id);
ALTER TABLE card_details ADD PRIMARY KEY (card_id);


BEGIN;
ALTER TABLE addresses DROP CONSTRAINT IF EXISTS add_cust_fk;

INSERT INTO addresses (address_id,customer_id,date_created,house_no_or_name,street_name,town,county,country,post_code,zip_code) VALUES (5876,984495,'2008-12-13 08:00:00',E'8',E'incompetent gardens',E'Armadale',E'West Lothian',E'Norway',E'4N2W7M',E'406013');
INSERT INTO addresses (address_id,customer_id,date_created,house_no_or_name,street_name,town,county,country,post_code,zip_code) VALUES (5877,166622,'2005-05-21 23:00:00',E'35',E'nasty road',E'Millport',E'Glasgow',E'Austria',NULL,NULL);
INSERT INTO addresses (address_id,customer_id,date_created,house_no_or_name,street_name,town,county,country,post_code,zip_code) VALUES (5878,221212,'2009-03-21 14:00:00',E'80',E'mushy road',E'Innerleithen',E'Flintshire',E'Germany',E'RIUMCV',E'813939');
INSERT INTO addresses (address_id,customer_id,date_created,house_no_or_name,street_name,town,county,country,post_code,zip_code) VALUES (5879,961529,'2004-01-02 08:00:00',E'73',E'obedient road',E'Milton',E'South Gloucestershire',E'Massachusetts',NULL,NULL);
INSERT INTO addresses (address_id,customer_id,date_created,house_no_or_name,street_name,town,county,country,post_code,zip_code) VALUES (5880,361999,'2000-04-16 22:00:00',E'56',E'chilly road',E'Cupar',E'Dorset',E'Philippines',NULL,NULL);

and so on ....
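Before loading a dump this size, a quick sanity check is to count the INSERT statements per table and compare them against the Oracle row counts. A minimal sketch using a tiny stand-in file (in practice you would point grep at oracletopgmigrationsoeschema.sql):

```shell
# Create a tiny stand-in dump so the example is self-contained;
# replace /tmp/sample_dump.sql with the real ora2pg output file.
cat > /tmp/sample_dump.sql <<'EOF'
INSERT INTO addresses (address_id) VALUES (1);
INSERT INTO addresses (address_id) VALUES (2);
INSERT INTO card_details (card_id) VALUES (1);
EOF

# Count the rows destined for each table
grep -c '^INSERT INTO addresses' /tmp/sample_dump.sql      # prints 2
grep -c '^INSERT INTO card_details' /tmp/sample_dump.sql   # prints 1
```

The counts should line up with what SQL*Plus reported on the source side before you start the import.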


Step 6: Next, it’s time to import the Oracle data into the Postgres database.
Before the import, let’s quickly create the sample database and schema.



postgres=# CREATE DATABASE migra;
CREATE DATABASE
postgres=#

migra=# create schema soe;
CREATE SCHEMA


postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |  Collation  |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 migra     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |



Now I will start the import process.



migra=# \i oracletopgmigrationsoeschema.sql
SET
CREATE TABLE
CREATE TABLE
CREATE INDEX
INSERT 0 1
.......
.........



migra=# \dt+
                            List of relations
 Schema |         Name         | Type  |  Owner   |  Size   | Description
--------+----------------------+-------+----------+---------+-------------
 public | addresses            | table | postgres | 40 MB   |
 public | card_details         | table | postgres | 10 MB   |


migra=# select count(*) from addresses;
  count
---------
 1506152
(1 row)
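As a final validation step, I like to compare the row counts captured on both sides. A trivial shell sketch; the counts here are hand-copied from the SQL*Plus and psql sessions above, but in practice you would capture them programmatically:

```shell
# Row counts noted from the source (SQL*Plus) and target (psql)
oracle_addresses=1506152
pg_addresses=1506152

if [ "$oracle_addresses" -eq "$pg_addresses" ]; then
  echo "addresses: row counts match"        # prints: addresses: row counts match
else
  echo "addresses: row count MISMATCH" >&2
fi
```

Repeating this per table gives a cheap completeness check before handing the database over.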



There are a whole lot of areas I could cover, but to keep the post simple and easy to digest for readers, I have focused on the basic flow; the issues I faced and the manual effort needed during the migration, along with other areas, I will cover separately.

Hope It Helps
Prashant Dixit

Posted in Advanced | Tagged: | 1 Comment »

 