Tales From A Lazy Fat DBA

Loves all databases! – It's all about performance, troubleshooting & much more …. ¯\_(ツ)_/¯


Archive for December, 2020

repmgr failover done, but where is my Standby ?

Posted by FatDBA on December 15, 2020

Hey Guys,

Last weekend I got a call from one of my friends asking about a weird issue in his master-slave (primary-standby) setup, where he's using repmgr for automatic failover and switchover operations. He said that after a failover (the server crashed), the role reversal worked and the old standby was promoted to the new primary (expected, all good till here), but as soon as the old primary came back online and rejoined the configuration, it started as a primary too. You can imagine something like below: both hosts are now primaries, and there is no sign of a standby.


[postgresdb@fatdba ~]$ repmgr -f /etc/repmgr.conf cluster show

 ID | Name  | Role    | Status    | Upstream | Location | Connection string
----+-------+---------+-----------+----------+----------+-----------------------------------------
 1  | node1 | primary | * running |          | default  | host=db_node1 dbname=repmgr user=repmgr
 2  | node2 | primary |   running |          | default  | host=db_node2 dbname=repmgr user=repmgr
 

He checked the repmgr logs on the standby node and everything looked okay; the standby promotion was successful. It seemed something was preventing repmgr from updating the information and bringing the crashed node (the previous primary) back as a standby. So the final situation is that there are two primary nodes at this point.
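Before jumping to the automated fix, it is worth mentioning that a recent repmgr can rejoin the old primary by hand. Run on the crashed node, with its PostgreSQL instance stopped, something like the following should work (a sketch, assuming repmgr 4.1+ so that node rejoin and --force-rewind are available; the connection string points at the current primary):


[postgresdb@fatdba ~]$ repmgr -f /etc/repmgr.conf node rejoin -d 'host=db_node2 dbname=repmgr user=repmgr' --force-rewind


But my friend wanted this to happen automatically, without anyone logging in at 3 AM, so let's automate it.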

So, how and where do we start?
A quick solution to this problem is to create a trigger on the internal events table of the repmgr database (repmgr creates its own database, named repmgr by default). The idea is to react to the "standby_promote" event: when repmgr records that event, the trigger fires a script that brings the old primary back as the new standby.

Next we need a script that automates turning the crashed node back into a standby. For that purpose you can use this script (link) or write something similar. And since we have to call a bash script, i.e. write a stored procedure in shell, we need a shell language handler in the database. Let's install the most popular extension for that, petere/plsh (link).
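For reference, here is a minimal sketch of what such a script could look like. Everything in it — the host names, the ssh setup and the paths — is an assumption to illustrate the idea; adapt it to your own environment. In practice you would also want to run the slow part in the background so the trigger does not block repmgr's own INSERT into the events table.


#!/bin/bash
# failover_promote.sh -- illustrative sketch only
# plsh passes the inserted row's column values as arguments,
# so $1 = node_id and $2 = event
if [ "$2" = "standby_promote" ]; then
    # wait until the crashed old primary is reachable again
    until ssh -o ConnectTimeout=5 postgres@db_node1 true; do
        sleep 10
    done
    # re-clone it from the new primary, start it, register it as a standby
    ssh postgres@db_node1 "repmgr -f /etc/repmgr.conf standby clone -h db_node2 -U repmgr -d repmgr --force"
    ssh postgres@db_node1 "pg_ctl -D \$PGDATA start"
    ssh postgres@db_node1 "repmgr -f /etc/repmgr.conf standby register --force"
fi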


[postgresdb@fatdba ~]$ sudo apt-get install make gcc postgresql-server-dev-12
[postgresdb@fatdba ~]$ cd /usr/share/postgresql/12/extension/
[postgresdb@fatdba extension]$ sudo git clone https://github.com/petere/plsh.git
[postgresdb@fatdba extension]$ cd plsh
[postgresdb@fatdba plsh]$ sudo make PG_CONFIG=/usr/bin/pg_config
[postgresdb@fatdba plsh]$ sudo make install PG_CONFIG=/usr/bin/pg_config
 
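Before creating the extension, you can double-check that the server can actually see the freshly installed control file by querying the pg_available_extensions catalog view:


[postgresdb@fatdba ~]$ psql -U repmgr
repmgr=# SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'plsh';
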

Next, create the extension in the database.


[postgresdb@fatdba ~]$ psql -U repmgr
 

This needs to be executed only on the primary side.


repmgr=# create extension plsh;
 

Now, let's create the function that will invoke the script failover_promote.sh. Note that the function and trigger must live in the repmgr database, where the events table is. With plsh, a trigger function receives the inserted row's column values as positional arguments, so $1 and $2 below map to the node_id and event columns of the events row.


[postgresdb@fatdba ~]$ psql -U repmgr
repmgr=# CREATE FUNCTION failover_promote() RETURNS trigger AS $$
#!/bin/sh
/bin/bash /tmp/psqlscripts/failover_promote.sh $1 $2
$$
LANGUAGE plsh;
 

And here is the AFTER INSERT trigger on the events table that calls the function created above (repmgr records every event as a new row, so an insert is what we need to react to).


repmgr=#  create trigger failover
after insert
on events
for each row
execute procedure failover_promote();
 

Yes, that's it. All done!
Now you can mimic the failover scenario, e.g. by stopping the primary database, and check the repmgr logs on the standby server.
You will see the standby get promoted to the new primary, and as soon as the promotion finishes, the script starts pinging the crashed node; the moment it is back, it is rejoined as the new standby.
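Once the dust settles, cluster show should again report one primary and one standby, along the lines of the output below (which node ends up in which role depends on which one crashed):


[postgresdb@fatdba ~]$ repmgr -f /etc/repmgr.conf cluster show

 ID | Name  | Role    | Status    | Upstream | Location | Connection string
----+-------+---------+-----------+----------+----------+-----------------------------------------
 1  | node1 | standby |   running | node2    | default  | host=db_node1 dbname=repmgr user=repmgr
 2  | node2 | primary | * running |          | default  | host=db_node2 dbname=repmgr user=repmgr
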

Hope It helped!
Prashant Dixit


DDL generation in PostgreSQL & a TRICK …

Posted by FatDBA on December 3, 2020

Hi Guys,

One of my friends recently asked if there is any way to get the DDL of all objects that exist inside a schema in PostgreSQL without using any third-party tools, something similar to the dbms_metadata package we have in Oracle. Though it sounds trivial, it's not, and it is indeed an interesting question. If you have to collect the DDL for an index, view, constraint, trigger or function, there are built-in PostgreSQL functions (pg_get_*def) and it is easy to get the structure, but what if you want the DDL of a table, sequence or type (there are a few more)?

You might be thinking of stitching the definitions together from the catalogs with some custom queries, and yes, that can be done, but it is tricky and complicated. So there isn't any direct way of generating the definitions of these objects, but here the pg_dump utility comes to the rescue: you can dump the entire schema, redirect its output into a SQL file, and pick all your DDLs out of that file.

Let's do some hands-on; I am on EDB PostgreSQL version 10.12.20.


[enterprisedb@fatdba ~]$ psql
psql.bin (10.12.20)
Type "help" for help.

enterprisedb=# set search_path=migr;
SET
enterprisedb=# \dtsvi+
                                                               List of relations
 Schema |            Name             |   Type   |    Owner     |         Table          |    Size    |              Description
--------+-----------------------------+----------+--------------+------------------------+------------+----------------------------------------
 migr   | audits                      | table    | enterprisedb |                        | 0 bytes    |
 migr   | audits_audit_id_seq         | sequence | enterprisedb |                        | 8192 bytes |
 migr   | audits_pkey                 | index    | enterprisedb | audits                 | 8192 bytes |
 migr   | bar                         | table    | enterprisedb |                        | 8192 bytes |
 migr   | bar_pk                      | index    | enterprisedb | bar                    | 8192 bytes |
 migr   | bigtab                      | table    | enterprisedb |                        | 944 kB     |
 migr   | bigtab1                     | table    | enterprisedb |                        | 112 kB     |
 migr   | bigtab1_vw                  | view     | enterprisedb |                        | 0 bytes    |
 migr   | foo                         | table    | enterprisedb |                        | 8192 bytes |
 migr   | foo_bar                     | table    | enterprisedb |                        | 8192 bytes |
 migr   | foo_pk                      | index    | enterprisedb | foo                    | 8192 bytes |
 migr   | i_mlog$_bar                 | index    | enterprisedb | mlog$_bar              | 8192 bytes |
 migr   | i_mlog$_foo                 | index    | enterprisedb | mlog$_foo              | 8192 bytes |
 migr   | mlog$_bar                   | table    | enterprisedb |                        | 8192 bytes | snapshot log for master table MIGR.BAR
 migr   | mlog$_foo                   | table    | enterprisedb |                        | 8192 bytes | snapshot log for master table MIGR.FOO
 migr   | person_address_details      | table    | enterprisedb |                        | 40 kB      |
 migr   | person_address_details_pkey | index    | enterprisedb | person_address_details | 16 kB      |
 migr   | person_info                 | table    | enterprisedb |                        | 40 kB      |
 migr   | person_info_pkey            | index    | enterprisedb | person_info            | 16 kB      |
 migr   | trigger_test                | table    | enterprisedb |                        | 8192 bytes |
(20 rows)
 

This is how you can generate the DDL for your views, indexes and the other objects covered by the built-in functions.


enterprisedb=# select pg_get_viewdef('bigtab1_vw'::regclass, true);
           pg_get_viewdef
------------------------------------
  SELECT bigtab1.id,               +
     bigtab1.created_date,         +
     bigtab1.lookup_id,            +
     bigtab1.data                  +
    FROM bigtab1                   +
   WHERE bigtab1.id > 950::numeric;
(1 row)
                           
enterprisedb=#  select pg_get_indexdef('person_address_details_pkey'::regclass);
                                                pg_get_indexdef
----------------------------------------------------------------------------------------------------------------
 CREATE UNIQUE INDEX person_address_details_pkey ON migr.person_address_details USING btree (person_address_id)
(1 row)
 
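The same pg_get_*def family also covers constraints, triggers and functions (pg_get_constraintdef, pg_get_triggerdef, pg_get_functiondef). For example, to pull the constraint definitions of one of the tables above (just an illustration; any table with constraints will do):


enterprisedb=# select conname, pg_get_constraintdef(oid)
               from pg_constraint
               where conrelid = 'person_info'::regclass;
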

Now let's take a pg_dump backup of this schema and look for the table and other object definitions.


[enterprisedb@fatdba dumpddl]$ pg_dump --schema migr > migr_ddl.sql
[enterprisedb@fatdba dumpddl]$
[enterprisedb@fatdba dumpddl]$ ls -ltrh
total 728K
-rw-rw-r--. 1 enterprisedb enterprisedb 728K Nov 10 11:34 migr_ddl.sql
 
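A quick note here: a plain pg_dump like the one above includes the table data as well (that's part of why the file is 728K). If all you want are the definitions, the --schema-only (or -s) switch limits the dump to DDL:


[enterprisedb@fatdba dumpddl]$ pg_dump --schema migr --schema-only > migr_ddl.sql
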

[enterprisedb@fatdba dumpddl]$ more migr_ddl.sql |grep -A 5 --color 'CREATE TABLE'
CREATE TABLE migr.audits (
    audit_id bigint NOT NULL,
    table_name character varying(255),
    transaction_name character varying(10),
    by_user character varying(30),
    transaction_date timestamp without time zone
--
CREATE TABLE migr.bar (
    foo numeric NOT NULL,
    bar numeric NOT NULL
);


--
CREATE TABLE migr.bigtab (
    id numeric(12,6),
    v1 character varying(10),
    padding character varying(50)
);

--
CREATE TABLE migr.bigtab1 (
    id numeric(10,0),
    created_date timestamp without time zone,
    lookup_id numeric(10,0),
    data character varying(50)
);
--
CREATE TABLE migr.foo (
    foo numeric NOT NULL
);


ALTER TABLE migr.foo OWNER TO enterprisedb;
--
CREATE TABLE migr.foo_bar (
    foo numeric,
    bar numeric,
    foo_rowid rowid,
    bar_rowid rowid
);
--
CREATE TABLE migr."mlog$_bar" (
    "m_row$$" character varying(255),
    "snaptime$$" timestamp without time zone,
    "dmltype$$" character varying(1),
    "old_new$$" character varying(1),
    "change_vector$$" bytea(255),
--
CREATE TABLE migr."mlog$_foo" (
    "m_row$$" character varying(255),
    "snaptime$$" timestamp without time zone,
    "dmltype$$" character varying(1),
    "old_new$$" character varying(1),
    "change_vector$$" bytea(255),
--
CREATE TABLE migr.person_address_details (
    person_address_id numeric(5,0) NOT NULL,
    person_id numeric(5,0),
    city character varying(15),
    state character varying(15),
    country character varying(20),
--
CREATE TABLE migr.person_info (
    person_id numeric(5,0) NOT NULL,
    first_name character varying(20),
    last_name character varying(20)
);

--
CREATE TABLE migr.trigger_test (
    id numeric NOT NULL,
    description character varying(50) NOT NULL
);

 

So, we finally got the DDL of the tables and all the other objects inside the schema.
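And if you need the DDL of just one object rather than the whole schema, pg_dump can be narrowed down with the -t (table pattern) switch; for example:


[enterprisedb@fatdba dumpddl]$ pg_dump --schema-only -t migr.person_info > person_info_ddl.sql
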

Hope It helped!
Prashant Dixit


Thanks Percona for this great product called PMM (Percona Monitoring & Management Server)

Posted by FatDBA on December 2, 2020

Hi Guys,

For one of our customers we recently moved from on-prem to the AWS cloud, taking our entire infrastructure along with our MySQL & PostgreSQL clusters. The very next question that started bothering us was finding a solid, stable and reliable monitoring and performance-management tool, and that is when we learned about Percona Monitoring & Management (PMM).

After trying the setup we are pretty satisfied with its performance, as it fulfils all our expectations of a tool for monitoring our applications and databases. We found the product the best in terms of cost (it comes at an hourly price of only $0.10/hr), highly secure (with SSL encryption), and it offers some really cool and smart features.
It offers many features, but being a long-time performance consultant (DBA) I personally found the 'Query Analytics' section the most useful and impressive. It has everything you need to know about your real-time workload: it provides an in-depth analysis of your queries, i.e. execution plans, latency information, query fingerprints etc. This can prove very useful when you have a problem in hand and want to dive deep to see what is going on in your system at the statement level.

It integrates nicely with the Grafana platform too, offering great data visualizations in the form of dashboards.

I would recommend this tool to anyone looking for a good, stable monitoring platform combined with a performance-administration tool. It's available not only for PostgreSQL & MySQL but also for other platforms, like Amazon RDS MySQL, Amazon Aurora MySQL, MongoDB, Percona XtraDB Cluster and ProxySQL.

It's available on both the AWS and Azure marketplaces.

Link to AWS Marketplace : https://aws.amazon.com/marketplace/pp/B077J7FYGX?qid=1605533229523&sr=0-1&ref_=srh_res_product_title

Link to Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/percona.pmm_2?tab=Overview

 

Hope It Helped!

Prashant Dixit
