Tales From A Lazy Fat DBA

Love all databases! – Its all about performance, troubleshooting & much more …. ¯\_(ツ)_/¯


When and why did the optimizer switch from CBO to RBO mode … and why can't I see that in the 10053 trace?

Posted by FatDBA on August 28, 2021

Hi Guys,

Recently I was working on a performance issue where a critical SQL statement had started consuming more time. After a quick check I saw that the optimizer had switched from CBO to RBO mode, but I wasn't sure when and why the switch happened. The expected way to answer that question is to generate the debug 10053 trace file, to get some insight into the cost-based optimizer's internal calculations, check the cardinality and selectivity, and draw a good parallel with the way the cost of a table access, index access, sort etc. may be calculated.

Usually the best way to work out what's going on in this situation is to look at the optimizer debug trace event, the 10053 trace file. I always prefer to generate optimizer traces in situations where the mighty optimizer has messed things up. As a performance consultant, it has saved me many times in the past and is always my best bet.

But this time it looked a little different: I couldn't see any details about why the optimizer switched modes in the ‘Query‘ section of the trace. I was totally perplexed; this was not the first time I had looked for that information in a trace file. Why wasn't it there, what happened .. 😦

This was an Oracle 19.3.0.0.0 database running on RHEL. I searched metalink and found one document specific to this issue; luckily, it was all happening due to a known bug, 31130156. The problem was later solved after we applied the bug-fix patch and could finally interpret the reason for the mode switch (I will write another post about the core problem) …

Note: The 10053 optimizer trace can be very difficult to interpret if you don't have any prior experience with it. I recommend readers check the great paper written by Wolfgang Breitling titled ‘A Look Under The Hood Of CBO’.
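For readers who have never generated one, here is a minimal sketch of raising the 10053 event for the current session. Remember the trace is only written when the statement is hard parsed, and the tracefile identifier and sample statement below are purely illustrative:

```sql
-- Tag the trace file so it is easy to find in the Diag trace directory
ALTER SESSION SET tracefile_identifier = 'fatdba_10053';

-- Switch the optimizer debug trace on
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Run the problem SQL; force a hard parse, e.g. by changing a comment/hint
SELECT /* force_hard_parse */ COUNT(*) FROM dual;

-- Switch the trace off again
ALTER SESSION SET EVENTS '10053 trace name context off';

-- Locate the generated trace file
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
```

For a statement that is already in the shared pool, DBMS_SQLDIAG.DUMP_TRACE can also dump a 10053-style trace by SQL_ID without re-executing the statement.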

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: | Leave a Comment »

OGG-01201 Error reported by MGR Access denied

Posted by FatDBA on August 22, 2021

Hi Guys,

Last week I encountered a problem with an old GoldenGate setup running on v12.2, where the extract was failing with errors OGG-01201/OGG-01668 during an initial load.

ERROR   OGG-01201  Oracle GoldenGate Capture for Oracle, exld1.prm:  Error reported by MGR : Access denied
ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, exld1.prm:  PROCESS ABENDING

This ‘access denied’ error appeared even though the login information was correct on both the source and target systems. I was thoroughly confused and wasn't sure what was causing the issue!

What I came to know after reading a particular piece of documentation is that in GoldenGate 12.2, the default behavior is that MANAGER and the related EXTRACT/REPLICAT processes cannot be started or stopped remotely, because by default there is only a deny rule. While I was doing the initial load on the source server, my attempt to start the replicat on the target server hit this error. This is a security feature, intended to prevent unauthorized access to Oracle GoldenGate manager processes and the processes under its control.

The solution to the problem is to add “ACCESSRULE, PROG *, IPADDR *, ALLOW” to the manager parameter file on the target system, something like below. The ACCESSRULE parameter controls access to Oracle GoldenGate from remote systems.

-- GoldenGate Manager Parameter File (mgr.prm) on Target system
--
userid xxxxxxx, password xxxxxxx
PORT 7810
ACCESSRULE, PROG REPLICAT, IPADDR 10.11.01.15, ALLOW
PURGEOLDEXTRACTS ./dirdat/rp*, USECHECKPOINTS, MINKEEPHOURS 4

Here you can also set a priority using PRI (0-99). PROG specifies the program the rule applies to and can be GGSCI, GUI, MGR/MANAGER, REPLICAT, COLLECTOR|SERVER, or * for all programs (the default). IPADDR specifies which IP addresses may access the specified program, LOGIN_ID is used with an RMTHOST configuration, and ALLOW | DENY specifies whether to allow or deny the access.
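For example, a slightly more elaborate (hypothetical) rule set could allow GGSCI from one admin host and Replicat from a subnet, and deny everything else, with PRI deciding which rule wins when more than one matches (the addresses are illustrative; verify the exact PRI placement against your GoldenGate reference):

```text
-- mgr.prm on the target system (illustrative addresses)
PORT 7810
ACCESSRULE, PROG GGSCI, IPADDR 10.11.01.20, PRI 1, ALLOW
ACCESSRULE, PROG REPLICAT, IPADDR 10.11.01.*, PRI 2, ALLOW
ACCESSRULE, PROG *, IPADDR *, PRI 99, DENY
```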

Hope It Helped!
Prashant Dixit


New JSON features – The old lovely database is new again with 21c

Posted by FatDBA on August 21, 2021

Oracle Database supports relational, graph, spatial, text, OLAP, XML, and JSON data – yes, all at once in one database.
Oracle Database 21c provides a native JSON data type in binary format. This data type can be used in tables, uses less space, and is faster. Its uniquely designed Oracle Binary JSON format (OSON) can speed up both OLAP and OLTP workloads over JSON documents.

I recently did some tests and found that the JSON datatype is now fully integrated into all components of the 21c database, with a few new things added to improve its performance. This post is all about the JSON datatype in the Oracle 21c database ecosystem: new features, improvements etc.

So, before I move ahead, I would like to first build a foundation for the readers; let's create a table with JSON data and work through some examples.

-- Create a table with JSON datatype.
CREATE TABLE testOrder
 (did NUMBER PRIMARY KEY, jdoc JSON);


-- Let's insert some data to the table
INSERT INTO testOrder
VALUES (1, '{"testOrder": {
"podate": "2015-06-03",
"shippingAddress": {"street": "3467 35th Ave",
 "city" : "Clara", "state": "CA", "zip":
94612},
"comments" : "Discounted sales Foundation Day",
"sparse_id" :"PFHA35",
"items": [
 {"name" : "TV", "price": 341.55, "quantity": 2,
 "parts": [
 {"partName": "remoteCon", "partQuantity": 1},
 {"partName": "antenna", "partQuantity": 2}]},
 {"name": "PC", "price": 441.78, "quantity": 10,
 "parts": [
 {"partName": "mousepad", "partQuantity": 2},
 {"partName": "keyboard", "partQuantity": 1}]}
]}}');



-- Do some SELECT ops
SELECT did,
 po.jdoc.testOrder.podate.date(),
 po.jdoc.testOrder.shippingAddress,
 po.jdoc.testOrder.items[*].count(),
 po.jdoc.testOrder.items[1]
FROM testOrder po
WHERE po.jdoc.testOrder.podate.date() =
TO_DATE('2015-06-03','YYYY-MM-DD') AND
po.jdoc.testOrder.shippingAddress.zip.number()
BETWEEN 84610 AND 84620;


SELECT JSON {
 'name' : li.itemName,
 'sales' : li.price * li.quantity
}
FROM lineItems_rel li;




-- That's how to UPDATE
UPDATE testOrder po
SET jdoc = JSON_TRANSFORM(jdoc,
 REPLACE '$.testOrder.shippingAddress.city'
 = 'Oakland',
 REPLACE '$.testOrder.shippingAddress.zip'
 = 94607,
 SET '$.testOrder.contactPhone' =
 JSON('["(415)-867-8560","(500)312-8198"]'),
 REMOVE '$.testOrder.sparse_id',
 APPEND '$.testOrder.items' =
 JSON('{"items" :[{"name":"iphone",
 "price" : 635.54, "quantity" :2}]}'))
WHERE po.jdoc.testOrder.podate.date() =
 TO_DATE('2019-07-01');

So, that's how you can create, query, and update JSON data in a table, pretty cool right 🙂

Okay, coming back to the purpose of the post – What is new in Oracle 21c in terms of JSON support ?

  • The JSON data type was added in Oracle 20c to provide native support, but it is generally available only from version 21c.
  • Prior to 21c, users could only use a single-value functional index to accelerate JSON_VALUE() predicate evaluation: a functional index was bound to at most one indexed value per row, which for JSON meant a field value with at most one occurrence. In 21c, a user can create a multi-value functional index on a JSON datatype column to index elements within a JSON array. This speeds up the evaluation of JSON_EXISTS(), an operator that allows the use of an array of equality predicates in the SQL/JSON path language.
  • Oracle 21c includes several other enhancements to the JSON functionality in the database, such as the JSON_SCALAR function, which creates an instance of the JSON type from a SQL scalar value.
  • The JSON_TRANSFORM function was introduced in Oracle Database 21c to make modifying JSON data simpler.
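To tie the multi-value index point back to the testOrder table created earlier, a sketch could look like the following (the index name is illustrative):

```sql
-- Index every price inside the items array (multiple values per row)
CREATE MULTIVALUE INDEX testorder_items_price_idx ON testOrder t
 (t.jdoc.testOrder.items[*].price.number());

-- JSON_EXISTS() predicates over the array elements can now use the index
SELECT did
FROM testOrder t
WHERE JSON_EXISTS(t.jdoc, '$.testOrder.items[*]?(@.price > 400)');
```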

Hope It Helped!
Prashant Dixit


What’s new in Golden Gate version 21c ?

Posted by FatDBA on August 20, 2021

Hi Guys,

Oracle has recently released GoldenGate version 21.1, immediately after the database version 21c (21.3) was released for on-prem. Today's post is all about the new features and changes that come with this new GG version.

  • Oracle GoldenGate is available with Microservices Architecture : This release of Oracle GoldenGate is available with Microservices Architecture only.
  • Automatic Extract of tables with supplemental logging is supported : Oracle GoldenGate provides a new auto_capture mode to capture changes for all the tables that are enabled for logical replication. You can list the tables enabled for auto-capture using the LIST TABLES AUTO_CAPTURE command option. Use the TRANLOGOPTIONS INTEGRATEDPARAMS auto_capture option to set up automatic capture.
  • Oracle native JSON datatype is supported : Oracle GoldenGate capture and apply processes now support the new native JSON datatype, which is supported by Oracle Database 21c and higher.
  • Enhanced Automatic Conflict Detection and Resolution for Oracle Database 21c
  • Autonomous Database Extract is supported : Oracle GoldenGate can now capture from the Autonomous Databases in OCI.
  • Large DDL (greater than 4 MB) replication is supported : DDLs that are greater than 4 MB in size are now supported for replication.
  • DB_UNIQUE_NAME with heartbeat table : DB_UNIQUE_NAME is available with the heartbeat table to allow users to uniquely identify the source of the heartbeat.
  • Oracle GoldenGate binaries are no longer installed on a shared drive : Oracle always recommended installing the Oracle GoldenGate binaries (OGG_HOME) on a local file system as a best practice. From this release onward, it is a requirement. The binaries must be installed on local drives.
  • Partition Filtering
  • A new Extract needs to be created when the DB timezone is changed : You need to create a new Extract if the DB timezone is changed, especially in the case of Oracle Cloud deployments.
  • DB_UNIQUE_NAME with trail file header : DB_UNIQUE_NAME is added to the trail file header along with DB_NAME. This helps in troubleshooting replication in active-active environments, where all replicas typically share the same DB_NAME but each replica site can be identified uniquely by its DB_UNIQUE_NAME.
  • Per PDB Capture
  • Parallel Replicat Core Infrastructure Support for Heterogeneous Databases : Parallel Replicat is supported with SQL Server, DB2 z/OS, and MySQL.

Release announcement link : https://blogs.oracle.com/dataintegration/oracle-goldengate-21c-release-announcement

Hope It Helped
Prashant Dixit


Kafka Producer: Error while fetching metadata with correlation id INVALID_REPLICATION_FACTOR

Posted by FatDBA on August 14, 2021

Hi Guys,

Recently I was working on a replication project where we used Kafka to move data from source to target. I tried to create a test topic using the Kafka producer console and was immediately kicked out with an error that said “INVALID_REPLICATION_FACTOR”. We were doing this on a test VM with a single CPU and limited system resources.

[root@cantowintert bin]#
[root@cantowintert bin]# kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic first_topic
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
>hello prashant
[2021-07-26 08:18:30,051] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 40 : {first_topic=INVALID_REPLICATION_FACTOR} (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:18:30,154] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 41 : {first_topic=INVALID_REPLICATION_FACTOR} (org.apache.kafka.clients.NetworkClient)
….
[2021-07-26 08:18:31,221] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 51 : {first_topic=INVALID_REPLICATION_FACTOR} (org.apache.kafka.clients.NetworkClient)
^Corg.apache.kafka.common.KafkaException: Producer closed while send in progress
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:909)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
        at kafka.tools.ConsoleProducer$.send(ConsoleProducer.scala:71)
        at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:53)
        at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: Requested metadata update after close
        at org.apache.kafka.clients.producer.internals.ProducerMetadata.awaitUpdate(ProducerMetadata.java:126)
        at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:1047)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:906)
        ... 4 more
[root@cantowintert bin]#

Let's check what was captured in the Kafka server startup logs; there we found the hint that the replication factor was greater than the number of available brokers.

org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
[2021-07-26 08:24:45,723] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (cantowintert.bcdomain/192.168.20.129:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:25:06,830] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (cantowintert.bcdomain/192.168.20.129:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-07-26 08:25:27,950] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (cantowintert.bcdomain/192.168.20.129:9092) could not be established. Broker may not be available. 

The solution to the problem was to fix the listeners line in the broker configuration file (config/server.properties), restart the broker, and then try to create the topic all over again. We had this line:

listeners=PLAINTEXT://:9092

and we changed it to:

listeners=PLAINTEXT://127.0.0.1:9092
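Putting it together, the relevant fragment of our broker configuration (config/server.properties) ended up looking like this; the broker.id and zookeeper.connect lines are shown only as the usual neighbouring settings and may differ in your setup:

```text
# config/server.properties
broker.id=0
# bind the broker explicitly to the loopback interface
listeners=PLAINTEXT://127.0.0.1:9092
zookeeper.connect=localhost:2181
```

After restarting the broker, the same kafka-console-producer.sh command should connect and auto-create the topic without the INVALID_REPLICATION_FACTOR warnings.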

Hope It helped
Prashant Dixit


What is that strange looking TRANLOGOPTIONS EXCLUDETAG in parameter file ?

Posted by FatDBA on August 11, 2021

Hi Guys,

Recently someone asked why we have this strange looking entry, ‘TRANLOGOPTIONS EXCLUDETAG 00‘, in a GoldenGate extract parameter file. Why is it there, and what are those numbers? I was able to explain the purpose to him, and would like to write a short post about it.

OGG v12.1.2 has a new EXTRACT parameter TRANLOGOPTIONS EXCLUDETAG. This is typically used to exclude the REPLICAT user in bi-directional configurations. When Extract is in classic or integrated capture mode, use the TRANLOGOPTIONS parameter with the EXCLUDETAG tag option. This parameter directs the Extract process to ignore transactions that are tagged with the specified redo tag. For example:

extract exttestpd
useridalias clouduser
EXTTRAIL ./dirdat/rp, format release 12.1
ddl include all
ddloptions addtrandata, report
tranlogoption excludetag 00 
TABLE dixituser.*;

Changes made by an integrated Replicat are tagged in the redo as 00 by default, so adding the Extract parameter TRANLOGOPTIONS EXCLUDETAG 00 would exclude those operations. The tag can also be set explicitly in the Replicat using:

DBOPTIONS SETTAG 0885

Then in EXTRACT param:

TRANLOGOPTIONS EXCLUDETAG 0885

TRANLOGOPTIONS EXCLUDETAG 00 prevents the GoldenGate Extract from capturing transactions applied by the replication itself, which are tagged with “00” by default. The EXCLUDETAG ensures that we don't run into problems with ping-pong (looping) updates.

Some other possible examples of using this parameter are …

TRANLOGOPTIONS EXCLUDETAG 00
TRANLOGOPTIONS EXCLUDETAG +
TRANLOGOPTIONS EXCLUDETAG 0991
TRANLOGOPTIONS EXCLUDETAG 2222 4444
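To see how SETTAG and EXCLUDETAG work as a pair in a bi-directional setup, here is a hypothetical sketch for one site (process names, alias and trail are illustrative):

```text
-- Replicat on site A: tag everything it applies with 0885
replicat reptesta
useridalias clouduser
DBOPTIONS SETTAG 0885
MAP dixituser.*, TARGET dixituser.*;

-- Extract on site A: skip redo tagged 0885, so changes applied by
-- the local Replicat are never shipped back to the remote site
extract exttesta
useridalias clouduser
EXTTRAIL ./dirdat/ra
TRANLOGOPTIONS EXCLUDETAG 0885
TABLE dixituser.*;
```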

Hope It Helped
Prashant Dixit


PgBackRest: A reliable backup and recovery solution to your PostgreSQL clusters …

Posted by FatDBA on April 8, 2021

Hi Everyone,

Recently, while working for one of my customers, I was asked to propose a reliable backup and recovery solution for their database clusters. The customer was using both EDB and open-source PostgreSQL. The ask was to support all major types of backups, i.e. full, incremental and differential; the latter two were to cover their wish for ways to decrease the time and disk space needed to take a full backup. After considering all their prerequisites and necessities, I came up with the idea of using pgBackRest, which I had tested in some of my previous assignments too. pgBackRest is an open-source backup tool that creates physical backups with some improvements compared to the classic pg_basebackup tool.

It comes with a lot of cool features that otherwise aren't possible with pg_basebackup, and a few that aren't available even in other backup tools: parallel backups, encryption, differential and incremental backups, backup integrity checks, archive expiration policies, local and remote operations, backup resume etc.

This post is all about this popular backup tool, pgBackRest: how to configure it and how to perform backup and restore operations with it. I will run a few test cases in my personal lab, where I have RHEL with PostgreSQL 12 installed.

I have already installed the tool using the PostgreSQL YUM repository. It's pretty straightforward: do ‘yum install pgbackrest‘ and that's it!

-bash-4.2$ which pgbackrest
/usr/bin/pgbackrest

Let's check the version.

-bash-4.2$ pgbackrest version
pgBackRest 2.32

Now that the tool is installed and working fine, it's time to configure its core config file (pgbackrest.conf). I will first create a new directory to house this core configuration file for the tool.

[root@canttowin ~]# mkdir /etc/pgbackrest
[root@canttowin ~]# vi /etc/pgbackrest/pgbackrest.conf

Let's add the global and local database details to the configuration file. Here I am setting a full backup retention of 2 days (repo1-retention-full=2). I am only passing the required set of parameters; beyond these there is a huge list of options you can define in the config file.

[root@canttowin ~]# more /etc/pgbackrest/pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2

[openpgsql]
pg1-path=/var/lib/pgsql/12/data/
pg1-port=5432

If you noticed, I performed all the operations above as the root user, and this should not be the case; ownership should be passed to the PostgreSQL database owner, which is the ‘postgres’ user in my case. So, let's fix the permissions before we take our first backup.

[root@canttowin ~]# chmod 0750 /var/lib/pgbackrest
[root@canttowin ~]# chown -R postgres:postgres /var/lib/pgbackrest
[root@canttowin ~]# ls -ll /var/log/pgbackrest/
total 8
-rw-r-----. 1 root root 0 Apr 4 04:23 all-start.log
-rw-r----- 1 root root 185 Mar 27 05:37 all-start.log.1.gz
-rw-r----- 1 postgres postgres 450 Apr 6 00:54 openpgsql-stanza-create.log
[root@canttowin ~]#
[root@canttowin ~]# chown -R postgres:postgres /var/log/pgbackrest
[root@canttowin ~]#
[root@canttowin ~]# ls -ll /var/log/pgbackrest/
total 8
-rw-r-----. 1 postgres postgres 0 Apr 4 04:23 all-start.log
-rw-r----- 1 postgres postgres 185 Mar 27 05:37 all-start.log.1.gz
-rw-r----- 1 postgres postgres 450 Apr 6 00:54 openpgsql-stanza-create.log

All set with the permissions. The next step is to set a few parameters in the postgresql.conf file so that pgBackRest handles the WAL segments, pushing them to the archive as soon as they are complete.

[postgres@canttowin data]$
[postgres@canttowin data]$ more /var/lib/pgsql/12/data/postgresql.conf |grep archive
archive_mode = on
archive_command = 'pgbackrest --stanza=openpgsql archive-push %p'

The above changes to the database configuration require a restart of the database, so let's do it.

[postgres@canttowin bin]$ ./pg_ctl -D /var/lib/pgsql/12/data stop
waiting for server to shut down…. done
server stopped
[postgres@canttowin bin]$ ./pg_ctl -D /var/lib/pgsql/12/data start
waiting for server to start….2021-04-06 01:03:45.837 EDT [28770] LOG: starting PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
2021-04-06 01:03:45.838 EDT [28770] LOG: listening on IPv6 address "::1", port 5432
2021-04-06 01:03:45.838 EDT [28770] LOG: listening on IPv4 address "127.0.0.1", port 5432
2021-04-06 01:03:45.861 EDT [28770] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-04-06 01:03:45.911 EDT [28770] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2021-04-06 01:03:45.983 EDT [28770] LOG: redirecting log output to logging collector process
2021-04-06 01:03:45.983 EDT [28770] HINT: Future log output will appear in directory "log".
done
server started

Next is to create the ‘STANZA‘. A stanza defines the backup configuration for a specific PostgreSQL database cluster.

[postgres@canttowin ~]$ pgbackrest stanza-create --stanza=openpgsql --log-level-console=info
2021-04-06 00:54:31.731 P00 INFO: stanza-create command begin 2.32: --exec-id=24839-da2916aa --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=openpgsql
2021-04-06 00:54:32.361 P00 INFO: stanza-create for stanza 'openpgsql' on repo1
2021-04-06 00:54:32.400 P00 INFO: stanza-create command end: completed successfully (672ms)

Next, we will verify that everything is okay. The ‘check‘ command checks the cluster and validates archive_command and other related settings; if there is no error, it's all good.

[postgres@canttowin bin]$ pgbackrest --stanza=openpgsql check --log-level-console=info
2021-04-06 01:07:18.941 P00 INFO: check command begin 2.32: --exec-id=30501-dbf76c75 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=openpgsql
2021-04-06 01:07:19.553 P00 INFO: check repo1 configuration (primary)
2021-04-06 01:07:19.778 P00 INFO: check repo1 archive for WAL (primary)
2021-04-06 01:07:20.196 P00 INFO: WAL segment 000000010000000000000057 successfully archived to '/var/lib/pgbackrest/archive/openpgsql/12-1/0000000100000000/000000010000000000000057-dd44b724c7a9e257512f5c9d3ecf5d87f7ae9f67.gz' on repo1
2021-04-06 01:07:20.197 P00 INFO: check command end: completed successfully (1258ms)

All good, time to take our first backup. We have to use the ‘type‘ argument with value full, incr, or diff for the three types of backups. As this is our first backup, even if you try a diff or incr backup it will still take a full backup, because both require a base backup to work from.
Below are the runtime logs. I have used the option ‘log-level-console=info‘, which prints log information, warnings and errors; other possible values for this parameter are off, error, warn, detail, debug and trace.

[postgres@canttowin bin]$ pgbackrest --stanza=openpgsql --type=full backup --log-level-console=info
2021-04-06 01:07:49.917 P00 INFO: backup command begin 2.32: --exec-id=30602-14142f51 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql --type=full
2021-04-06 01:07:50.646 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the next regular checkpoint completes
2021-04-06 01:07:51.362 P00 INFO: backup start archive = 000000010000000000000059, lsn = 0/59000060
2021-04-06 01:07:53.028 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/16415 (13.5MB, 22%) checksum d8deb3703748d22554be2fb29c0ed105bab9658c
2021-04-06 01:07:53.782 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/16426 (5MB, 30%) checksum 29a07de6e53a110380ef984d3effca334a07d6e6
2021-04-06 01:07:54.176 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/16423 (2.2MB, 33%) checksum 5184ac361b2bef0df25a34e91636a085fc526930
2021-04-06 01:07:54.222 P01 INFO: backup file /var/lib/pgsql/12/data/base/16385/1255 (632KB, 34%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 01:07:54.334 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/1255 (632KB, 35%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 01:07:54.434 P01 INFO: backup file /var/lib/pgsql/12/data/base/14188/1255 (632KB, 36%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
….
……
2021-04-06 01:08:05.364 P01 INFO: backup file /var/lib/pgsql/12/data/PG_VERSION (3B, 100%) checksum ad552e6dc057d1d825bf49df79d6b98eba846ebe
2021-04-06 01:08:05.369 P01 INFO: backup file /var/lib/pgsql/12/data/global/6100 (0B, 100%)
2021-04-06 01:08:05.372 P01 INFO: backup file /var/lib/pgsql/12/data/global/6000 (0B, 100%)
2021-04-06 01:08:05.376 P01 INFO: backup file /var/lib/pgsql/12/data/global/4185 (0B, 100%)
2021-04-06 01:08:05.379 P01 INFO: backup file /var/lib/pgsql/12/data/global/4183 (0B, 100%)
2021-04-06 01:08:05.390 P01 INFO: backup file /var/lib/pgsql/12/data/global/4181 (0B, 100%)
….
…..
2021-04-06 01:08:06.735 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14040 (0B, 100%)
2021-04-06 01:08:06.738 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14035 (0B, 100%)
2021-04-06 01:08:06.743 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14030 (0B, 100%)
2021-04-06 01:08:06.847 P01 INFO: backup file /var/lib/pgsql/12/data/base/1/14025 (0B, 100%)
2021-04-06 01:08:06.848 P00 INFO: full backup size = 61MB
2021-04-06 01:08:06.848 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 01:08:07.068 P00 INFO: backup stop archive = 000000010000000000000059, lsn = 0/59000170
2021-04-06 01:08:07.107 P00 INFO: check archive for segment(s) 000000010000000000000059:000000010000000000000059
2021-04-06 01:08:07.354 P00 INFO: new backup label = 20210406-010750F
2021-04-06 01:08:07.489 P00 INFO: backup command end: completed successfully (17575ms)
2021-04-06 01:08:07.489 P00 INFO: expire command begin 2.32: --exec-id=30602-14142f51 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql
2021-04-06 01:08:07.500 P00 INFO: expire command end: completed successfully (11ms)
[postgres@canttowin bin]$

So, our first backup is done. Now, let's check its details (size, timings, WAL details etc.).

[postgres@canttowin bin]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 000000010000000000000056/000000010000000000000059

        full backup: 20210406-010650F
            timestamp start/stop: 2021-04-06 01:06:50 / 2021-04-06 01:07:12
            wal start/stop: 000000010000000000000056 / 000000010000000000000056
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

Now that we have our first full backup ready, let's take a differential backup.

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --type=diff --log-level-console=info backup
2021-04-06 14:40:34.145 P00 INFO: backup command begin 2.32: --exec-id=54680-0dd25993 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql --type=diff
2021-04-06 14:40:34.892 P00 INFO: last backup label = 20210406-143757F, version = 2.32
2021-04-06 14:40:34.892 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the next regular checkpoint completes
2021-04-06 14:40:35.405 P00 INFO: backup start archive = 00000001000000000000005F, lsn = 0/5F000028
2021-04-06 14:40:36.252 P01 INFO: backup file /var/lib/pgsql/12/data/global/pg_control (8KB, 99%) checksum 962d11b5c25154c5c8141095be417a7f5d699419
2021-04-06 14:40:36.354 P01 INFO: backup file /var/lib/pgsql/12/data/pg_logical/replorigin_checkpoint (8B, 100%) checksum 347fc8f2df71bd4436e38bd1516ccd7ea0d46532
2021-04-06 14:40:36.355 P00 INFO: diff backup size = 8KB
2021-04-06 14:40:36.355 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 14:40:36.568 P00 INFO: backup stop archive = 00000001000000000000005F, lsn = 0/5F000100
2021-04-06 14:40:36.573 P00 INFO: check archive for segment(s) 00000001000000000000005F:00000001000000000000005F
2021-04-06 14:40:36.615 P00 INFO: new backup label = 20210406-143757F_20210406-144034D
2021-04-06 14:40:36.672 P00 INFO: backup command end: completed successfully (2528ms)
2021-04-06 14:40:36.672 P00 INFO: expire command begin 2.32: --exec-id=54680-0dd25993 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql
2021-04-06 14:40:36.678 P00 INFO: expire command end: completed successfully (6ms)

[postgres@canttowin ~]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 00000001000000000000005B/00000001000000000000005F

        full backup: 20210406-143652F
            timestamp start/stop: 2021-04-06 14:36:52 / 2021-04-06 14:37:10
            wal start/stop: 00000001000000000000005B / 00000001000000000000005B
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

        diff backup: 20210406-143757F_20210406-144034D
            timestamp start/stop: 2021-04-06 14:40:34 / 2021-04-06 14:40:36
            wal start/stop: 00000001000000000000005F / 00000001000000000000005F
            database size: 61MB, database backup size: 8.3KB
            repo1: backup set size: 8.0MB, backup size: 431B
            backup reference list: 20210406-143757F

The ‘info’ command output can also be printed in JSON format, like below.

[postgres@canttowin ~]$ pgbackrest info --output=json
[{"archive":[{"database":{"id":1,"repo-key":1},"id":"12-1","max":"00000001000000000000005F","min":"00000001000000000000005B"}],"backup":[{"archive":{"start":"00000001000000000000005B","stop":"00000001000000000000005B"},"backrest":{"format":5,"version":"2.32"},"database":{"id":1,"repo-key":1},"info":{"delta":64047301,"repository":{"delta":8380156,"size":8380156},"size":64047301},"label":"20210406-143652F","prior":null,"reference":null,"timestamp":{"start":1617734212,"stop":1617734230},"type":"full"},{"archive":{"start":"00000001000000000000005D","stop":"00000001000000000000005D"},"backrest":{"format":5,"version":"2.32"},"database":{"id":1,"repo-key":1},"info":{"delta":64047301,"repository":{"delta":8380155,"size":8380155},"size":64047301},"label":"20210406-143757F","prior":null,"reference":null,"timestamp":{"start":1617734277,"stop":1617734285},"type":"full"},{"archive":{"start":"00000001000000000000005F","stop":"00000001000000000000005F"},"backrest":{"format":5,"version":"2.32"},"database":{"id":1,"repo-key":1},"info":{"delta":8459,"repository":{"delta":431,"size":8380156},"size":64047301},"label":"20210406-143757F_20210406-144034D","prior":"20210406-143757F","reference":["20210406-143757F"],"timestamp":{"start":1617734434,"stop":1617734436},"type":"diff"}],"cipher":"none","db":[{"id":1,"repo-key":1,"system-id":6941966298907810297,"version":"12"}],"name":"openpgsql","repo":[{"cipher":"none","key":1,"status":{"code":0,"message":"ok"}}],"status":{"code":0,"lock":{"backup":{"held":false}},"message":"ok"}}][postgres@canttowin ~]$
[postgres@canttowin ~]$
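The JSON form is handy for scripting. Here is a small sketch (assuming python3 is available on the host) that pulls each backup's type and label out of a trimmed sample of the output above:

```shell
# Trimmed sample of 'pgbackrest info --output=json' output, taken from the run above.
info_json='[{"backup":[{"label":"20210406-143652F","type":"full"},{"label":"20210406-143757F_20210406-144034D","type":"diff"}]}]'

# Extract "type label" pairs, one per line, using python3's json module.
labels=$(printf '%s' "$info_json" | python3 -c '
import json, sys
for stanza in json.load(sys.stdin):
    for b in stanza["backup"]:
        print(b["type"], b["label"])
')
echo "$labels"
```

The same approach works on the live output by piping `pgbackrest info --output=json` straight into the python snippet.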

Next comes the incremental backup, let’s do it!

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --type=incr --log-level-console=info backup
2021-04-06 14:43:26.193 P00 INFO: backup command begin 2.32: --exec-id=55204-d310aa59 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql --type=incr
2021-04-06 14:43:26.976 P00 INFO: last backup label = 20210406-143757F_20210406-144034D, version = 2.32
2021-04-06 14:43:26.976 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the next regular checkpoint completes
2021-04-06 14:43:27.495 P00 INFO: backup start archive = 000000010000000000000061, lsn = 0/61000028
2021-04-06 14:43:28.266 P01 INFO: backup file /var/lib/pgsql/12/data/global/pg_control (8KB, 99%) checksum 92143d43c90ed770f99f722d734bec62d9413d2a
2021-04-06 14:43:28.369 P01 INFO: backup file /var/lib/pgsql/12/data/pg_logical/replorigin_checkpoint (8B, 100%) checksum 347fc8f2df71bd4436e38bd1516ccd7ea0d46532
2021-04-06 14:43:28.369 P00 INFO: incr backup size = 8KB
2021-04-06 14:43:28.369 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 14:43:28.872 P00 INFO: backup stop archive = 000000010000000000000061, lsn = 0/61000100
2021-04-06 14:43:28.874 P00 INFO: check archive for segment(s) 000000010000000000000061:000000010000000000000061
2021-04-06 14:43:28.915 P00 INFO: new backup label = 20210406-143757F_20210406-144326I
2021-04-06 14:43:28.977 P00 INFO: backup command end: completed successfully (2785ms)
2021-04-06 14:43:28.977 P00 INFO: expire command begin 2.32: --exec-id=55204-d310aa59 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=openpgsql
2021-04-06 14:43:28.981 P00 INFO: expire command end: completed successfully (4ms)
[postgres@canttowin ~]$

[postgres@canttowin ~]$

[postgres@canttowin ~]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 00000001000000000000005B/000000010000000000000061

        full backup: 20210406-143652F
            timestamp start/stop: 2021-04-06 14:36:52 / 2021-04-06 14:37:10
            wal start/stop: 00000001000000000000005B / 00000001000000000000005B
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

        diff backup: 20210406-143757F_20210406-144034D
            timestamp start/stop: 2021-04-06 14:40:34 / 2021-04-06 14:40:36
            wal start/stop: 00000001000000000000005F / 00000001000000000000005F
            database size: 61MB, database backup size: 8.3KB
            repo1: backup set size: 8.0MB, backup size: 431B
            backup reference list: 20210406-143757F

        incr backup: 20210406-143757F_20210406-144326I
            timestamp start/stop: 2021-04-06 14:43:26 / 2021-04-06 14:43:28
            wal start/stop: 000000010000000000000061 / 000000010000000000000061
            database size: 61MB, database backup size: 8.3KB
            repo1: backup set size: 8.0MB, backup size: 430B
            backup reference list: 20210406-143757F

So, that’s how you can take all three types of backup using the tool. If you want to schedule them, you can use cron and add entries like the ones below.

[postgres@canttowin ~]$ crontab -l
#m h   dom mon dow   command
30 06  *   *   0     pgbackrest --type=full --stanza=openpgsql backup
30 06  *   *   1-6   pgbackrest --type=diff --stanza=openpgsql backup
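The two crontab entries above can also be expressed as a single wrapper script that picks the backup type from the day of the week; a hypothetical sketch (the pgbackrest call is only printed here, not executed):

```shell
#!/bin/sh
# Full backup on Sunday (ISO day 7), differential on every other day,
# mirroring the two crontab entries above in one script.
STANZA=openpgsql
if [ "$(date +%u)" -eq 7 ]; then
    TYPE=full
else
    TYPE=diff
fi
CMD="pgbackrest --type=$TYPE --stanza=$STANZA backup"
echo "would run: $CMD"   # replace this echo with the real call when used from cron
```

A single `30 06 * * * /path/to/wrapper.sh` entry then covers the whole week.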

There are some other useful options you can pass to the backup command directly, or set in the configuration file. The full list of parameters is long, click here to read about them. A few of the most useful ones are listed below.

start-fast: force a checkpoint so the backup starts quickly.
compress: enable file compression.
compress-level: set the compression level.
repo1-retention-diff: set the differential backup retention.
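These options can also be set permanently in pgbackrest.conf instead of being repeated on the command line. A minimal sketch, where the paths, stanza name, and retention values are just examples matching the ones used elsewhere in this post:

```ini
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
repo1-retention-diff=2
start-fast=y
compress-level=6

[openpgsql]
pg1-path=/var/lib/pgsql/12/data
pg1-port=5432
```

With this in place, a plain `pgbackrest --stanza=openpgsql --type=diff backup` picks these settings up automatically.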

Now, let’s create a recovery scenario. I am going to delete the entire DATA directory from PG HOME and restore it using the backups we have. Since this is a brand new cluster, let me first create some sample data.

dixit=#
dixit=# CREATE TABLE COMPANY(
dixit(# ID INT PRIMARY KEY NOT NULL,
dixit(# NAME TEXT NOT NULL,
dixit(# AGE INT NOT NULL,
dixit(# ADDRESS CHAR(50),
dixit(# SALARY REAL,
dixit(# JOIN_DATE DATE
dixit(# );
CREATE TABLE
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,JOIN_DATE) VALUES (1, 'Paul', 32, 'California', 20000.00,'2001-07-13');
INSERT 0 1
dixit=#
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,JOIN_DATE) VALUES (2, 'Allen', 25, 'Texas', '2007-12-13');
INSERT 0 1
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,JOIN_DATE) VALUES (3, 'Teddy', 23, 'Norway', 20000.00, DEFAULT );
INSERT 0 1
dixit=# INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,JOIN_DATE) VALUES (4, 'Mark', 25, 'Rich-Mond ', 65000.00, '2007-12-13' ), (5, 'David', 27, 'Texas', 85000.00, '2007-12-13');
INSERT 0 2
dixit=#
dixit=# select * from COMPANY;
 id | name  | age |                      address                       | salary | join_date
----+-------+-----+----------------------------------------------------+--------+------------
  1 | Paul  |  32 | California                                         |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                                              |        | 2007-12-13
  3 | Teddy |  23 | Norway                                             |  20000 |
  4 | Mark  |  25 | Rich-Mond                                          |  65000 | 2007-12-13
  5 | David |  27 | Texas                                              |  85000 | 2007-12-13
(5 rows)

Now that we have made the above changes, and our full database backup was taken before them, we need to take an incremental backup to capture the new changes.

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --type=incr backup --log-level-console=info
2021-04-06 23:12:18.008 P00 INFO: backup command begin 2.32: --exec-id=80088-57a7eed8 --log-level-console=info --pg1-path=/var/lib/pgsql/12/data --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --repo1-retention-diff=2 --repo1-retention-full=1 --stanza=openpgsql --start-fast --type=incr
2021-04-06 23:12:18.744 P00 INFO: last backup label = 20210406-225743F_20210406-231110I, version = 2.32
2021-04-06 23:12:18.744 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the requested immediate checkpoint completes
2021-04-06 23:12:19.256 P00 INFO: backup start archive = 00000001000000000000006C, lsn = 0/6C000028
2021-04-06 23:12:20.245 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/1249 (440KB, 90%) checksum b85efa460cab148bf9d7db5a3e78dba71cc5b0b2
2021-04-06 23:12:20.247 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/2610 (32KB, 96%) checksum c6331e9df78c639a6b04aed46ecc96bd09f170f6
2021-04-06 23:12:20.250 P01 INFO: backup file /var/lib/pgsql/12/data/global/pg_control (8KB, 98%) checksum e75e69d389d82b2bc9bee88aea6353d3d889c28e
2021-04-06 23:12:20.252 P01 INFO: backup file /var/lib/pgsql/12/data/base/16384/2606 (8KB, 99%) checksum 59284824f0a0cd49006d5c220941248b13c2b286
2021-04-06 23:12:20.355 P01 INFO: backup file /var/lib/pgsql/12/data/pg_logical/replorigin_checkpoint (8B, 100%) checksum 347fc8f2df71bd4436e38bd1516ccd7ea0d46532
2021-04-06 23:12:20.355 P00 INFO: incr backup size = 488KB
2021-04-06 23:12:20.355 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-04-06 23:12:20.558 P00 INFO: backup stop archive = 00000001000000000000006C, lsn = 0/6C000138
2021-04-06 23:12:20.561 P00 INFO: check archive for segment(s) 00000001000000000000006C:00000001000000000000006C
2021-04-06 23:12:20.591 P00 INFO: new backup label = 20210406-225743F_20210406-231218I
2021-04-06 23:12:20.643 P00 INFO: backup command end: completed successfully (2636ms)
2021-04-06 23:12:20.643 P00 INFO: expire command begin 2.32: --exec-id=80088-57a7eed8 --log-level-console=info --repo1-path=/var/lib/pgbackrest --repo1-retention-diff=2 --repo1-retention-full=1 --stanza=openpgsql
2021-04-06 23:12:20.649 P00 INFO: expire command end: completed successfully (6ms)
[postgres@canttowin ~]$

[postgres@canttowin ~]$ pgbackrest info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 000000010000000000000068/00000001000000000000006C

        full backup: 20210406-225743F
            timestamp start/stop: 2021-04-06 22:57:43 / 2021-04-06 22:58:01
            wal start/stop: 000000010000000000000068 / 000000010000000000000068
            database size: 61MB, database backup size: 61MB
            repo1: backup set size: 8.0MB, backup size: 8.0MB

        incr backup: 20210406-225743F_20210406-231110I
            timestamp start/stop: 2021-04-06 23:11:10 / 2021-04-06 23:11:12
            wal start/stop: 00000001000000000000006A / 00000001000000000000006A
            database size: 61.2MB, database backup size: 2.4MB
            repo1: backup set size: 8.0MB, backup size: 239.5KB
            backup reference list: 20210406-225743F

To see more details about any individual backup, we can use the ‘set‘ option, passing the backup label to the ‘info‘ command, just like below.

[postgres@canttowin ~]$ pgbackrest --stanza=openpgsql --set=20210406-225743F_20210406-231110I info
stanza: openpgsql
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12): 000000010000000000000068/00000001000000000000006C

        incr backup: 20210406-225743F_20210406-231110I
            timestamp start/stop: 2021-04-06 23:11:10 / 2021-04-06 23:11:12
            wal start/stop: 00000001000000000000006A / 00000001000000000000006A
            database size: 61.2MB, database backup size: 2.4MB
            repo1: backup set size: 8.0MB, backup size: 239.5KB
            backup reference list: 20210406-225743F
            database list: dixit (16384), kartikey (16385), postgres (14188)

I have removed (using rm -rf *) all files that exist in the PG_HOME/base directory, let’s restore.

[postgres@canttowin data]$ pgbackrest --stanza=openpgsql --db-include=dixit --type=immediate --target-action=promote restore --log-level-console=detail
2021-04-06 23:19:12.641 P00 INFO: restore command begin 2.32: --db-include=dixit --exec-id=82229-9187cb59 --log-level-console=detail --pg1-path=/var/lib/pgsql/12/data --repo1-path=/var/lib/pgbackrest --stanza=openpgsql --target-action=promote --type=immediate
2021-04-06 23:19:12.676 P00 INFO: restore backup set 20210406-225743F_20210406-231218I
2021-04-06 23:19:12.677 P00 DETAIL: databases found for selective restore (1, 14187, 14188, 16384, 16385)
2021-04-06 23:19:12.677 P00 DETAIL: check '/var/lib/pgsql/12/data' exists
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base'
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/1'
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/14187'
2021-04-06 23:19:12.678 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/14188'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/16384'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/base/16385'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/global'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/log'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_commit_ts'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_dynshmem'
2021-04-06 23:19:12.679 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_logical'
2021-04-06 23:19:12.683 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_logical/mappings'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_logical/snapshots'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_multixact'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_multixact/members'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_multixact/offsets'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_notify'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_replslot'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_serial'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_snapshots'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_stat'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_stat_tmp'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_subtrans'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_tblspc'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_twophase'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_wal'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_wal/archive_status'
2021-04-06 23:19:12.684 P00 DETAIL: create path '/var/lib/pgsql/12/data/pg_xact'
2021-04-06 23:19:12.879 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/16415 (13.5MB, 22%) checksum d8deb3703748d22554be2fb29c0ed105bab9658c
2021-04-06 23:19:12.957 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/16426 (5MB, 30%) checksum 29a07de6e53a110380ef984d3effca334a07d6e6
2021-04-06 23:19:12.999 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/16423 (2.2MB, 33%) checksum 5184ac361b2bef0df25a34e91636a085fc526930
2021-04-06 23:19:13.000 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/1255 (632KB, 34%)
2021-04-06 23:19:13.057 P01 INFO: restore file /var/lib/pgsql/12/data/base/16384/1255 (632KB, 35%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 23:19:13.065 P01 INFO: restore file /var/lib/pgsql/12/data/base/14188/1255 (632KB, 36%) checksum fc3c70ab83b8c87e056594f20b2186689d3c4678
2021-04-06 23:19:13.101 P01 INFO: restore file /var/lib/pgsql/12/data/base/14187/1255 (632KB, 37%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 23:19:13.118 P01 INFO: restore file /var/lib/pgsql/12/data/base/1/1255 (632KB, 38%) checksum edd483d42330ae26a455b3ee40e5c2b41cb298d5
2021-04-06 23:19:13.119 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/2838 (456KB, 39%)
2021-04-06 23:19:13.127 P01 INFO: restore file /var/lib/pgsql/12/data/base/16384/2838 (456KB, 40%) checksum c41dbf11801f153c9bd0493eb6deadd1a3f22333
2021-04-06 23:19:13.133 P01 INFO: restore file /var/lib/pgsql/12/data/base/16384/2608 (456KB, 41%) checksum 9de1966f80ac1c0bfa530fa3379e55bfea5936e0

…..
2021-04-06 23:19:14.941 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14043_fsm (24KB, 78%)
2021-04-06 23:19:14.942 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14038_fsm (24KB, 78%)
2021-04-06 23:19:14.943 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14033_fsm (24KB, 78%)
2021-04-06 23:19:14.943 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14028_fsm (24KB, 78%)
2021-04-06 23:19:14.944 P01 DETAIL: restore zeroed file /var/lib/pgsql/12/data/base/16385/14023_fsm (24KB, 78%)
….
……
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_stat_tmp'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_subtrans'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_tblspc'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_twophase'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_wal'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_wal/archive_status'
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/pg_xact'
2021-04-06 23:19:17.879 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2021-04-06 23:19:17.879 P00 DETAIL: sync path '/var/lib/pgsql/12/data/global'
2021-04-06 23:19:17.883 P00 INFO: restore command end: completed successfully (5243ms)
[postgres@canttowin data]$

Perfect, the restore is completed. Let’s start the database cluster.

[postgres@canttowin data]$ cd /usr/pgsql-12/bin/
[postgres@canttowin bin]$
[postgres@canttowin bin]$
[postgres@canttowin bin]$ ./pg_ctl -D /var/lib/pgsql/12/data start
waiting for server to start….2021-04-06 23:19:55.212 EDT [82343] LOG: starting PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
2021-04-06 23:19:55.212 EDT [82343] LOG: listening on IPv6 address "::1", port 5432
2021-04-06 23:19:55.212 EDT [82343] LOG: listening on IPv4 address "127.0.0.1", port 5432
2021-04-06 23:19:55.213 EDT [82343] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-04-06 23:19:55.216 EDT [82343] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2021-04-06 23:19:55.226 EDT [82343] LOG: redirecting log output to logging collector process
2021-04-06 23:19:55.226 EDT [82343] HINT: Future log output will appear in directory "log".
done
server started

Now let’s connect to the database and see if the records we inserted are still there.

[postgres@canttowin ~]$ psql -p 5432
psql (12.6.7)
Type "help" for help.

postgres=# SELECT datname FROM pg_database WHERE datistemplate = false;
 datname
----------
 postgres
 dixit
 kartikey
(3 rows)

postgres=# \c dixit
You are now connected to database "dixit" as user "postgres".

dixit=# \d
           List of relations
 Schema |    Name    | Type  |  Owner
--------+------------+-------+----------
 public | company    | table | postgres
 public | department | table | postgres
(2 rows)

dixit=# select * from company;
 id | name  | age |                      address                       | salary | join_date
----+-------+-----+----------------------------------------------------+--------+------------
  1 | Paul  |  32 | California                                         |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                                              |        | 2007-12-13
  3 | Teddy |  23 | Norway                                             |  20000 |
  4 | Mark  |  25 | Rich-Mond                                          |  65000 | 2007-12-13
  5 | David |  27 | Texas                                              |  85000 | 2007-12-13
(5 rows)

dixit=#
dixit=#

Perfect! They are all there.

Similarly, you can do point-in-time recovery (PITR) restores, and even back up and restore any specific database.
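A hypothetical sketch of such a point-in-time restore: the `--type=time` and `--target` restore options are real pgBackRest options, but the timestamp and paths below are only examples, and the commands are printed rather than executed here:

```shell
# Print the sequence of commands for a PITR restore to a target timestamp.
# With --delta, only files that differ from the backup are rewritten.
STANZA=openpgsql
PGDATA=/var/lib/pgsql/12/data
TARGET='2021-04-06 23:00:00'
restore_cmd="pgbackrest --stanza=$STANZA --delta --type=time --target='$TARGET' --target-action=promote restore"
cat <<EOF
/usr/pgsql-12/bin/pg_ctl -D $PGDATA stop
$restore_cmd
/usr/pgsql-12/bin/pg_ctl -D $PGDATA start
EOF
```

On startup, PostgreSQL replays WAL up to the target time and then promotes, per `--target-action=promote`.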

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

How to register remote PEM agents to the PEM Server ?

Posted by FatDBA on April 3, 2021

Hi Guys,

During the quiet period when I was away from blogging, I worked on a lot of stuff, hence there is a lot of content to share 🙂 …. So here goes another post. This one is about registering PEM agents with the PEM server, since each PEM agent must be registered with the PEM server.

I already have a PEM server configured (steps for configuring the PEM server) and a new EDB AS 12 standby server that I would like to add to the PEM monitoring console. Let’s get started!

192.168.20.128: PEM Server Host (canttowin.ontadomain)
192.168.20.129: Standby host (canttowinsec.quebecdomain)

I have already installed PEM agent (edb-pem-agent-8.0.1-1.rhel7.x86_64) on this remote standby host, let me show you that.

[root@canttowinsec ~]# yum install edb-pem-agent
Loaded plugins: langpacks, ulninfo
epel/x86_64/metalink | 7.0 kB 00:00:00
local | 2.9 kB 00:00:00
ol7_UEKR6 | 2.5 kB 00:00:00
ol7_latest | 2.7 kB 00:00:00
percona-release-noarch | 2.9 kB 00:00:00
percona-release-x86_64 | 2.9 kB 00:00:00
prel-release-noarch | 2.9 kB 00:00:00
Package edb-pem-agent-8.0.1-1.rhel7.x86_64 already installed and latest version
Nothing to do

Let’s go to the agent home directory and call the configuration utility called ‘pemworker’.

[root@canttowinsec bin]# pwd
/usr/edb/pem/agent/bin

[root@canttowinsec bin]# ls
pemagent pemworker pkgLauncher

Here we have to use a few configuration options with their preferred values.
--pem-server: IP address of the PEM backend database server.
--pem-port: port of the PEM backend database server; the default is 5432, but check which port you actually used.
--pem-user: name of the database user (with superuser privileges) on the PEM backend database server. This is a mandatory option.
--allow_server_restart: allow PEM to restart the monitored server. TRUE is the default.
--allow-batch-probes: allow PEM to run batch probes on this agent. FALSE is the default.
--batch-script-user: operating system user used to execute batch/shell scripts. NONE is the default.

[root@canttowinsec bin]# ./pemworker --register-agent --pem-server 192.168.20.128 --pem-port 5444 --pem-user enterprisedb --allow_server_restart true --allow-batch-probes true --batch-script-user enterprisedb
Postgres Enterprise Manager Agent registered successfully!

Okay, so the agent is successfully registered with the PEM server. Next, we need to add some configuration to the agent.cfg file.

[root@canttowinsec etc]# pwd
/usr/edb/pem/agent/etc
[root@canttowinsec etc]# ls
agent.cfg

I am setting allow_streaming_replication to TRUE, as this allows users to configure streaming replication; the next parameter provides the path to the CA certificate bundle.
[root@canttowinsec etc]# echo "allow_streaming_replication=true" >> /usr/edb/pem/agent/etc/agent.cfg
[root@canttowinsec etc]# echo "ca_file=/usr/libexec/libcurl-pem/share/certs/ca-bundle.crt" >> /usr/edb/pem/agent/etc/agent.cfg
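A quick sanity-check sketch to confirm both keys landed in the file. It runs against a temporary copy here so it is safe anywhere; on a real host, point CFG at /usr/edb/pem/agent/etc/agent.cfg instead:

```shell
# Build a temp file with the two appended keys, then verify each is present.
CFG=$(mktemp)
printf 'allow_streaming_replication=true\nca_file=/usr/libexec/libcurl-pem/share/certs/ca-bundle.crt\n' >> "$CFG"
result=$(for key in allow_streaming_replication ca_file; do
    grep -q "^${key}=" "$CFG" && echo "$key OK"
done)
echo "$result"
rm -f "$CFG"
```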

So, now my agent configuration file will look like below.

[root@canttowinsec etc]# more agent.cfg
[PEM/agent]
pem_host=192.168.20.128
pem_port=5444

agent_id=2
agent_ssl_key=/root/.pem/agent2.key
agent_ssl_crt=/root/.pem/agent2.crt
log_level=warning
log_location=/var/log/pem/worker.log
agent_log_location=/var/log/pem/agent.log
long_wait=30
short_wait=10
alert_threads=0
enable_smtp=false
enable_snmp=false
enable_webhook=false
max_webhook_retries=3
allow_server_restart=true
max_connections=0
connect_timeout=10
connection_lifetime=0
allow_batch_probes=true
heartbeat_connection=false
enable_nagios=false
batch_script_user=enterprisedb
allow_streaming_replication=true
ca_file=/usr/libexec/libcurl-pem/share/certs/ca-bundle.crt

Now you will see your PEM agent added to the list of PEM agents in the PEM console.

Next, you can add your standby database to the list of managed servers. Follow the same steps I discussed in my last post about PEM configuration; please click here to go directly to that post. The only difference is that you need to select the bound agent from the drop-down list, where you will now see your new agent; everything else is the same!

Once it’s added successfully, you will see the new server in the list; here I have named the connection ‘EDBAS12_Sby‘.

And here is how the main landing page looks, with the new agent and database and their status.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

How to install EDB-AS without its software repositories?

Posted by FatDBA on April 3, 2021

Hi Everyone,

Many of you might be wondering, after reading the title: why write about such a simple, rudimentary task? What is so tricky about installing the EDB PostgreSQL software? I know it’s quite easy and straightforward, but only if you are able to add or register the EDB repository on your server. If you can’t, installing all of the software and its long list of dependencies becomes a tedious and time-consuming activity. This post is all about how to deal with that situation: how to download the RPMs and install them on a server where you are not able to add the EDB repository.

The first step is to download the complete EDB tarball. I am downloading the full tarball here, as I don’t want to miss any dependent packages needed by the core components. The tarball is close to 1.8 GB in size; you can download it using the wget command below, using your EDB credentials.

wget https://prashant.dixit:password@yum.enterprisedb.com/edb/redhat/edb_redhat_rhel-7-x86_64.tar.gz

Now, once the tarball is downloaded, we can go and create a local YUM repository. Creating the repository is optional, as you can also install the RPMs directly, but it makes your work a lot easier; otherwise you have to resolve dependencies manually. So, I have decided to create a local repository here.

Once the above file is downloaded, unzip it. You will see the list of all core and dependent packages/RPMs, just like below.

….
….
edb-pgpool40-4.0.8-1.rhel7.x86_64.rpm sslutils_96-1.3-2.rhel7.x86_64.rpm
edb-pgpool40-4.0.9-1.rhel7.x86_64.rpm wxjson-1.2.1-1.rhel7.x86_64.rpm
edb-pgpool40-devel-4.0.6-1.rhel7.x86_64.rpm wxjson-1.2.1-2.rhel7.x86_64.rpm
edb-pgpool40-devel-4.0.8-1.rhel7.x86_64.rpm wxjson-devel-1.2.1-1.rhel7.x86_64.rpm
edb-pgpool40-devel-4.0.9-1.rhel7.x86_64.rpm wxjson-devel-1.2.1-2.rhel7.x86_64.rpm

Next, I will create a directory that will be used as the repository container.
[root@canttowin edb]# mkdir -p /home/user/repo

Move all unzipped files/RPMs to this new directory.
[root@canttowin edb]# mv * /home/user/repo

Change the permissions on the directory.
[root@canttowin edb]# chown -R root.root /home/user/repo
[root@canttowin edb]# chmod -R o-w+r /home/user/repo

Now we can create the repository; for that we will use the ‘createrepo‘ command.
[root@canttowin edb]# createrepo /home/user/repo
Spawning worker 0 with 1151 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Now let’s create the YUM repository entry under /etc/yum.repos.d
[root@canttowin edb]# more /etc/yum.repos.d/myrepo.repo
[local]
name=Prashant Local EDB Repo
baseurl=file:///home/user/repo
enabled=1
gpgcheck=0
[root@canttowin edb]#
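The same repo definition can be generated non-interactively; a sketch that writes to /tmp so it is safe to run anywhere (the real target is /etc/yum.repos.d/myrepo.repo):

```shell
# Write the local repo definition shown above, with the path parameterized.
REPO_DIR=/home/user/repo
REPO_FILE=/tmp/myrepo.repo
cat > "$REPO_FILE" <<EOF
[local]
name=Prashant Local EDB Repo
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF
cat "$REPO_FILE"
```

Run `yum clean all` afterwards so yum picks up the new repository metadata.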

All set! Let’s try searching for an EDB package using this new local repository.

[root@canttowin ~]# yum search edb-as12-server
Loaded plugins: langpacks, ulninfo
=============================================================== N/S matched: edb-as12-server ================================================================
edb-as12-server.x86_64 : EnterpriseDB Advanced Server Client and Server Components
edb-as12-server-client.x86_64 : The client software required to access EDBAS server.
edb-as12-server-cloneschema.x86_64 : cloneschema is a module for EnterpriseDB Advanced Server
edb-as12-server-contrib.x86_64 : Contributed source and binaries distributed with EDBAS
edb-as12-server-core.x86_64 : The core programs needed to create and run a EnterpriseDB Advanced Server
edb-as12-server-devel.x86_64 : EDBAS development header files and libraries
edb-as12-server-docs.x86_64 : Extra documentation for EDBAS
edb-as12-server-edb-modules.x86_64 : EDB-Modules for EnterpriseDB Advanced Server
edb-as12-server-indexadvisor.x86_64 : Index Advisor for EnterpriseDB Advanced Server
edb-as12-server-libs.x86_64 : The shared libraries required for any EDBAS clients
edb-as12-server-llvmjit.x86_64 : Just-In-Time compilation support for EDBAS
edb-as12-server-parallel-clone.x86_64 : parallel_clone is a module for EnterpriseDB Advanced Server
edb-as12-server-pldebugger.x86_64 : PL/pgSQL debugger server-side code
edb-as12-server-plperl.x86_64 : The Perl procedural language for EDBAS
edb-as12-server-plpython.x86_64 : The Python procedural language for EDBAS
edb-as12-server-plpython3.x86_64 : The Python3 procedural language for EDBAS
edb-as12-server-pltcl.x86_64 : The Tcl procedural language for EDBAS
edb-as12-server-sqlprofiler.x86_64 : SQL profiler for EnterpriseDB Advanced Server
edb-as12-server-sqlprotect.x86_64 : SQL Protect for EnterpriseDB Advanced Server

Great, so we are now able to search for and install all our EDB packages through YUM; it’s a lot easier than manually resolving dependencies and installing the core packages.

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , | Leave a Comment »

How to monitor your PostgreSQL databases using EDB PEM – Setup, Config, benchmarking and much more …

Posted by FatDBA on March 26, 2021

Hi Everyone,

Today’s post will be all about monitoring your PostgreSQL database clusters using EDB Postgres Enterprise Manager (PEM). Postgres Enterprise Manager is a comprehensive, customizable solution providing an interface to control and optimize your PostgreSQL deployment.

I will do the installation and configuration, add servers to the console, and perform live monitoring of the database while generating some synthetic load on the database host. I am doing this on a standalone RHEL 7 64-bit server, which I will use both as the PEM server and as the local instance. Alright, without further ado, let’s start. First, you need to download EDB’s official repository and install the following package.

Below is the complete list of packages available with the name ‘edb-pem’; you need to install this version: edb-pem-8.0.1-1.rhel7.x86_64

[root@canttowin repo]# yum search edb-pem
Loaded plugins: langpacks, ulninfo

=================================================================== N/S matched: edb-pem ====================================================================
edb-pem-debuginfo.x86_64 : Debug information for package edb-pem
edb-pem.x86_64 : PostgreSQL Enterprise Manager
edb-pem-agent.x86_64 : Postgres Enterprise Manager Agent
edb-pem-docs.x86_64 : Documentation for Postgres Enterprise Manager
edb-pem-server.x86_64 : PEM Server Components

Once the installation is complete, go to the default installation directory (/usr/edb in my case), and then into the pem/bin folder.

[root@canttowin ~]# cd /usr/edb/
[root@canttowin edb]# ls
as12 bart efm-4.1 jdbc migrationtoolkit pem pgbouncer1.15 pgpool4.2
[root@canttowin ~]# cd /usr/edb/pem/bin/
[root@canttowin bin]# ls
configure-pem-server.sh configure-selinux.sh

We see two configuration shell scripts are present; we will use configure-pem-server.sh.
Here I will choose option 1, which means installing the web services and the database on one host. Next you need to input the installation path (/usr/edb/as12 in my case), followed by the superuser name, port number, and the IP address of the server.

Before I call the config script, let me quickly reset the default superuser’s password.

postgres=# alter user postgres with password 'dixit';
ALTER ROLE

Now, let’s call the configuration script and pass all the values discussed.

[root@canttowin bin]# ./configure-pem-server.sh

 -----------------------------------------------------
 EDB Postgres Enterprise Manager
 -----------------------------------------------------
Install type: 1:Web Services and Database, 2:Web Services 3: Database [ ] :1
Enter local database server installation path (i.e. /usr/edb/as12 , or /usr/pgsql-12, etc.) [ ] :/usr/edb/as12
Enter database super user name [ ] :enterprisedb
Enter database server port number [ ] :5444
Enter database super user password [ ] :
Please enter CIDR formatted network address range that agents will connect to the server from, to be added to the server's pg_hba.conf file. For example, 192.168.1.0/24 [ 0.0.0.0/0 ] :10.0.0.153/32
Enter database systemd unit file or init script name (i.e. edb-as-12 or postgresql-12, etc.) [ ] :edb-as-12
Please specify agent certificate path (Script will attempt to create this directory, if it does not exists) [ ~/.pem/ ] :
CREATE EXTENSION
[Info] Configuring database server.
CREATE DATABASE
CREATE ROLE
CREATE ...
..
..
..
CREATE EXTENSION
-->  [Info] -->  [Info] Configuring database server.
-->  [Info] -->  [Info] creating role pem
-->  [Info] -->  [Info] Generating certificates
-->  [Info] -->  [Info] Executing systemctl stop edb-as-12
-->  [Info] -->  [Info] Skipping - configurations for /var/lib/edb/as12/data/pg_hba.conf and /var/lib/edb/as12/data/postgresql.conf file
-->  [Info] -->  [Info] Executing systemctl start edb-as-12
-->  [Info] -->  [Info] Enable pemagent service.
-->  [Info] -->  [Info] Executing systemctl enable pemagent
-->  [Info] -->  [Info] Stop pemagent service
-->  [Info] -->  [Info] Executing systemctl stop pemagent
-->  [Info] -->  [Info] Start pemagent service.
-->  [Info] -->  [Info] Executing systemctl start pemagent
-->  [Info] -->  [Info] Configuring httpd server
-->  [Info] -->  [Info] Executing systemctl stop httpd
-->  [Info] -->  [Info] Taking backup of /usr/edb/pem/web/pem.wsgi
-->  [Info] -->  [Info] Creating /usr/edb/pem/web/pem.wsgi
-->  [Info] -->  [Info] Taking backup of /usr/edb/pem/web/config_local.py.
-->  [Info] -->  [Info] Generating PEM Cookie Name.
-->  [Info] -->  [Info] Creating /usr/edb/pem/web/config_local.py
-->  [Info] -->  [Info] Taking backup of /etc/httpd/conf.d/edb-pem.conf
-->  [Info] -->  [Info] Creating /etc/httpd/conf.d/edb-pem.conf
-->  [Info] -->  [Info] Configuring httpd server sslconf
-->  [Info] -->  [Info] Taking backup of /etc/httpd/conf.d/edb-ssl-pem.conf
-->  [Info] -->  [Info] Taking backup of /etc/httpd/conf.d/edb-ssl-pem.conf
-->  [Info] -->  [Info] Executing /usr/edb/pem/web/setup.py
Postgres Enterprise Manager - Application Initialisation
========================================================
-->  [Info] -->  [Info] Check and Configure SELinux security policy for PEM
 getenforce found, now executing 'getenforce' command
 Configure the httpd to work with the SELinux
 Allow the httpd to connect the database (httpd_can_network_connect_db = on)
 Allow the httpd to connect the network (httpd_can_network_connect = on)
 Allow the httpd to work with cgi (httpd_enable_cgi = on)
 Allow to read & write permission on the 'pem' user home directory
 SELinux policy is configured for PEM
-->  [Info] -->  [Info] Executing systemctl start httpd
-->  [Info] -->  [Info] Configured the webservice for EDB Postgres Enterprise Manager (PEM) Server on port '8443'.
-->  [Info] -->  [Info] PEM server can be accessed at https://127.0.0.1:8443/pem at your browser

It’s completed, and at the very end it has provided the URL to access the PEM GUI.

Now the next step is to install the PEM agent; you need to install it on every server you want to monitor. I am leaving out the PEM agent configuration, which is done in the agent.cfg file.

[root@canttowin bin]# yum install edb-pem-agent
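After the package is installed, each agent still has to be registered with the PEM server before it shows up in the console. This is normally done with the pemworker binary that ships with the agent package. The sketch below only prints the registration command rather than executing it, so it can be reviewed first; the host, port, and user are the values from my setup, so adjust them for yours:

```shell
# Values from this walkthrough -- replace with your PEM server's details.
PEM_HOST=10.0.0.153
PEM_PORT=5444
PEM_USER=enterprisedb

# pemworker ships with the edb-pem-agent package; registration is usually
# run as root with PEM_SERVER_PASSWORD exported in the environment.
# Printing the command instead of executing it keeps this sketch harmless.
echo "/usr/edb/pem/agent/bin/pemworker --register-agent" \
     "--pem-server $PEM_HOST --pem-port $PEM_PORT --pem-user $PEM_USER"
```

After registration, enable and start the pemagent service with systemctl, just as the configuration script did on the PEM host.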

Let’s check the PEM GUI now.

Here on the left panel you will notice there is already one database present under the ‘PEM Server Directory’ folder. This is the same database we used to configure the PEM server, hence it was added to the server list automatically. We will manually add one more database cluster to show how to do it explicitly.

Let’s check the dashboard for the same (PEM server) database for session, TPS, and I/O related details.

Now, let’s add another database to the monitoring console. I will be adding a community PostgreSQL 12 database. Go to the ‘PEM Server Directory’ folder, right-click on it, and choose Create -> Server.

Next, fill in the connection wizard with all the details, i.e. username, password, IP, port, and security-related settings for the new database, and click Save at the end.

And you are done!

Now, let’s look at the default landing page of the PEM GUI, where you can see details of all the added hosts and agents along with their status.

Next, I will create some new databases to see how that data is reflected in the PEM GUI.
postgres=#
postgres=# create database dixit;
CREATE DATABASE
postgres=# create database kartikey;
CREATE DATABASE

postgres=# \l
                                  List of databases
   Name   |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges |  Size
----------+----------+----------+-------------+-------------+-------------------+---------
 dixit    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 8049 kB
 kartikey | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 8049 kB
 postgres | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 8193 kB
(3 rows)

All good! Now let’s do a performance test to see how useful PEM can be during performance issues. To simulate load, I will be generating some synthetic traffic using PostgreSQL’s bundled utility, pgbench.

Reference:
-c number of client connections
-j number of worker threads (2 in our case)
-t number of transactions each client runs

With 10 clients running 10,000 transactions each, we get 10 x 10000 = 100,000 transactions in total.
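The arithmetic above can be sanity-checked in the shell; the variable names here are just for illustration:

```shell
# Total transactions = clients (-c) * transactions per client (-t)
clients=10
txns_per_client=10000
total=$((clients * txns_per_client))
echo "total transactions: $total"   # prints: total transactions: 100000
```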

[postgres@canttowin bin]$ ./pgbench -U postgres -p 5432 -c 10 -j 2 -t 10000 postgres
starting vacuum…end.
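One detail the transcript skips: pgbench can only run against a database where its schema (pgbench_accounts and friends) has already been created with a one-time `pgbench -i` initialization. Each scale-factor unit loads 100,000 rows into pgbench_accounts, so the table size for a given -s is easy to estimate (the init command shown in the comment is a sketch, assuming the same connection settings as the run above):

```shell
# pgbench -i -s <scale> loads 100000 * scale rows into pgbench_accounts,
# e.g. a one-time initialization such as:
#   ./pgbench -i -s 1 -U postgres -p 5432 postgres
scale=1
rows=$((scale * 100000))
echo "pgbench_accounts rows at scale $scale: $rows"
```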

Let’s see how the changes are captured and presented in PEM.

Okay, we can see the peaks are recorded and presented.

The load is still running and we can clearly see that from the below graph.

[postgres@canttowin bin]$ ./pgbench -U postgres -p 5432 -c 10 -j 2 -t 10000 postgres
starting vacuum…end.
transaction type:
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average = 18.217 ms
tps = 548.940142 (including connections establishing)
tps = 548.970173 (excluding connections establishing)
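The reported numbers are internally consistent: with 10 concurrent clients and an average latency of 18.217 ms per transaction, the expected throughput is roughly clients divided by latency. A quick check using the figures from the run above:

```shell
# Throughput sanity check: tps ~= clients / average latency (in seconds).
clients=10
latency_ms=18.217
awk -v c="$clients" -v l="$latency_ms" \
    'BEGIN { printf "expected tps ~ %.1f\n", c / (l / 1000) }'
# prints: expected tps ~ 548.9
```

This lines up nicely with the ~548.9 tps that pgbench itself reports.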

Alright, the load run has ended; let’s see how the graph looks now.

To conclude, PEM is a great tool that can cover all your monitoring needs, and it comes with some nice extras too, i.e. performance dashboards, tuning wizards, advisories, and other graphs.

Hope It Helped
Prashant Dixit

Posted in Uncategorized | 1 Comment »

 