
pgCenter, a TOP like utility to monitor PostgreSQL databases

Posted by FatDBA on July 12, 2021

Hi All,

I was recently asked by one of my teammates to recommend a performance monitoring tool that is free, text based, lightweight and quick to run, to check what is going on inside a PostgreSQL database. He wanted to use it for a performance drive he was leading on a production database setup. He was coming from an Oracle background and was looking for something like the Linux top utility, or oratop (top for Oracle databases) …

After carefully understanding his requirements, I recommended pgCenter. This is an open source project by Lesovsky Alexey (the man behind pgstats.dev and Weaponry) and is a command-line admin tool for observing and troubleshooting Postgres databases. I would have recommended pg_top, but pgCenter is a step ahead and better, as it provides more details than the latter.

pgCenter’s main goal is to help Postgres DBAs work with statistics and provide a convenient way to observe Postgres at runtime. The tool can provide replication, table-level, index, activity, table size, database-wide, and function statistics. It also shows details on VACUUM, NIC and disk stats, along with OS-level stats such as CPU, memory, load averages, connections and server details.

It’s pretty easy to install and run; refer to the project GitHub link below to download it and follow the install steps. https://github.com/lesovsky/pgcenter

So, let’s see what it monitors and captures.

[enterprisedb@fatdba ~]$
[enterprisedb@fatdba ~]$ pgcenter top -d perf -h 10.0.0.130 -U enterprisedb  -p 5444

A GIF (from the project page on GitHub) shows in real time what you can do with the tool.

Stats it supports …

PostgreSQL stats it supports (a couple of raw-SQL peeks at these sources follow the list):

  • summary activity – a compilation/selection of metrics from different sources – postgres uptime, version, recovery status, number of clients grouped by their states, number of (auto)vacuums, statements per second, age of the longest transaction and the longest vacuum;
  • pg_stat_activity – activity of connected clients and background processes;
  • pg_stat_database – database-wide and session statistics, such as the number of commits/rollbacks, processed tuples, deadlocks, temporary files, etc.;
  • pg_stat_replication – replication statistics, like connected standbys, their activity and replication lag;
  • pg_stat_user_tables, pg_statio_user_tables – statistics on accesses (including IO) to tables;
  • pg_stat_user_indexes, pg_statio_user_indexes – statistics on accesses (including IO) to indexes;
  • pg_stat_user_functions – statistics on execution of functions;
  • pg_stat_wal – WAL usage statistics;
  • pg_stat_statements – statistics on executed SQL statements, including time and resource usage;
  • statistics on table sizes, based on the pg_relation_size() and pg_total_relation_size() functions;
  • pg_stat_progress_vacuum – progress of (auto)vacuum operations;
  • pg_stat_progress_cluster – progress of CLUSTER and VACUUM FULL operations;
  • pg_stat_progress_create_index – progress of CREATE INDEX and REINDEX operations;
  • pg_stat_progress_analyze – progress of ANALYZE operations;
  • pg_stat_progress_basebackup – progress of basebackup operations;
  • pg_stat_progress_copy – progress of COPY operations.
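
pgCenter builds these screens on top of PostgreSQL’s standard statistics views, so you can always cross-check a number it shows straight from psql. Below is a rough, illustrative sketch of a few of the raw sources; these queries are my own simplifications, not the exact ones pgCenter runs.

-- Clients grouped by state (part of what the summary activity line shows)
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

-- Age of the longest running transaction
SELECT max(clock_timestamp() - xact_start) AS longest_xact
FROM pg_stat_activity
WHERE xact_start IS NOT NULL;

-- Database-wide counters (pg_stat_database)
SELECT datname, xact_commit, xact_rollback, temp_files, deadlocks
FROM pg_stat_database
WHERE datname = current_database();

-- Table sizes, as used for the size statistics
SELECT relname,
       pg_total_relation_size(oid) AS total_bytes,
       pg_relation_size(oid)       AS heap_bytes
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;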

System statistics it supports:

pgCenter top also provides system usage information based on statistics from the procfs filesystem:

  • load average and CPU usage time (user, system, nice, idle, iowait, software and hardware interrupts, steal);
  • memory and swap usage, amount of cached and dirty memory, writeback activity;
  • storage devices statistics: IOPS, throughput, latencies, average queue and request sizes, device utilization;
  • network interfaces statistics: throughput in bytes and packets, different kinds of errors, saturation and utilization;
  • mounted filesystems’ usage statistics: total size, amount of free/used/reserved space and inodes.

Hope It Helps!
Prashant Dixit


My favorite 5 SQLcl Features ….

Posted by FatDBA on June 13, 2021

Hi Guys,

Today’s post is about a tool that I have been using for the last few years and which I really like when it comes to working on Oracle databases. It is called SQLcl, or SQL Developer Command Line; you can think of it as an advanced version of SQL*Plus, as it comes with a lot of cool and handy features/commands that are not available in the default SQL*Plus command-line interface. So, this post is about the ‘TOP 5 features of SQLcl which I like the most’ ….

For those of you who don’t know what SQLcl is: it is a tool developed by Oracle’s SQL Developer team, led by Jeff Smith (thanks Jeff for that!), and it makes DBAs’ and developers’ jobs easier with a range of commands and features that make it very powerful and give it the upper hand over SQL*Plus.

So let’s get started with the top 5 features of SQLcl that I like the most.

Note: All of the below tests were performed on SQLcl version 21.1.1.0 build: 21.1.1.113.1704

[oracle@canttowin bin]$ ./sql

SQLcl: Release 21.1 Production on Sat Jun 12 23:29:31 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Username? (''?) system
Password? (**********?) ********
Last Successful login time: Sat Jun 12 2021 23:29:39 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> version
Oracle SQLDeveloper Command-Line (SQLcl) version: 21.1.1.0 build: 21.1.1.113.1704

Feature 1: ‘REPEAT’ command.
This is a pretty useful command when you want to execute a specific query a number of times and watch the results. It’s just like what ‘watch’ does on Linux.
It simply executes the SQL command and prints its output the requested number of times, at a given interval.

For example, below I executed a SQL statement 5 times with a gap of 5 seconds.

SQL> select count(*) from v$session where status='ACTIVE';

   COUNT(*)
___________
         149

SQL> repeat 5 5
Running 1 of 5  @ 11:38:25.243 with a delay of 5s

   COUNT(*)
___________
          89
Running 2 of 5  @ 11:38:30.251 with a delay of 5s

   COUNT(*)
___________
         109
Running 3 of 5  @ 11:38:35.254 with a delay of 5s

   COUNT(*)
___________
         199
Running 4 of 5  @ 11:38:40.258 with a delay of 5s

   COUNT(*)
___________
         230
Running 5 of 5  @ 11:38:45.263 with a delay of 5s

   COUNT(*)
___________
         409

SQL>

Feature 2: ‘Quick DDL Generation’
With SQLcl you no longer have to run DBMS_METADATA.GET_DDL to get the definition of your table, index or any other object. You can simply use the DDL command, with the syntax DDL object_name [object_type], and you will have the complete object DDL. For comparison, the traditional DBMS_METADATA route is sketched after the example below.

SQL>
SQL>
SQL> ddl system.bigtab table

  CREATE TABLE "SYSTEM"."BIGTAB"
   (    "ID" NUMBER,
        "WEIGHT" NUMBER,
        "ADATE" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ;
SQL>
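
For comparison, here is roughly what you would have typed the traditional way; a minimal sketch, assuming you are connected as a user allowed to see the object:

-- Old-school equivalent of "ddl system.bigtab table"
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'BIGTAB', 'SYSTEM') FROM dual;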

Feature 3: ‘Collect Object Information’
No more querying dynamic views and running DESC commands to get your table stats (rows, last analyzed date, in-memory status, comments and sample size) and table description. You can get all of this with a single command, INFORMATION or INFO.

If you want more details, such as the histograms on your table’s columns, the INFO+ command presents those as well; the equivalent dictionary query is sketched after the example below.

SQL> information system.bigtab
TABLE: BIGTAB
         LAST ANALYZED:2021-06-12 23:45:33.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         INMEMORY     :DISABLED
         COMMENTS     :This is a table for testing purposes

Columns
NAME         DATA TYPE   NULL  DEFAULT    COMMENTS
 ID          NUMBER      Yes
 WEIGHT      NUMBER      Yes
 ADATE       DATE        Yes

SQL>


SQL> info+ system.bigtab
TABLE: BIGTAB
         LAST ANALYZED:2021-06-20 13:41:19.0
         ROWS         :67310
         SAMPLE SIZE  :67310
         INMEMORY     :DISABLED
         COMMENTS     :This is a table for testing purposes

Columns
NAME         DATA TYPE   NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM
 ID          NUMBER      Yes                  22                     198                   1000          FREQUENCY
 WEIGHT      NUMBER      Yes                  -2147337077           2147453933            67310          NONE
 ADATE       DATE        Yes                  2018.09.16.23.34.01   2021.06.12.23.24.14   66224          NONE
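
For reference, INFO+ is essentially surfacing the optimizer’s column statistics from the data dictionary. A rough hand-written equivalent is below (LOW_VALUE/HIGH_VALUE are stored as RAW in the dictionary, so SQLcl’s decoded display is friendlier):

-- Column-level statistics straight from the dictionary
SELECT column_name, num_distinct, histogram, last_analyzed
FROM   all_tab_col_statistics
WHERE  owner = 'SYSTEM'
AND    table_name = 'BIGTAB';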

Feature 4: ‘CTAS easy and quick’
CTAS is a very useful command and is quite frequently used because of its simplicity; with SQLcl you don’t have to type out the complete CREATE TABLE ... AS SELECT syntax to create a new table from an existing one, the CTAS command generates it for you. The plain-SQL equivalent is shown after the example below.

SQL> ctas bigtab bigtable666

  CREATE TABLE "SYSTEM"."BIGTABLE666"
   (    "ID",
        "WEIGHT",
        "ADATE",
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM"
 as
select * from BIGTAB
SQL>
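
The generated statement above is just the fully spelled-out version of the plain CTAS you would otherwise type by hand, something like:

-- Hand-written CTAS the SQLcl command expands for you
CREATE TABLE bigtable666 AS SELECT * FROM bigtab;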

Feature 5: ‘Extended and more descriptive AUTOTRACE’.
This is by far one of the best features of SQLcl in my opinion, especially for someone who has to tune and tweak databases every now and then. It provides much more detailed statistics (37 different stats overall) for any SQL statement once you turn the AUTOTRACE feature on. I am a big fan of this feature!

SQL>
SQL> set autotrace on
Autotrace Enabled
Shows the execution plan as well as statistics of the statement.
SQL>

SQL> select * from system.bigtab where ID =588;
...
.....

67 rows selected.

Explain Plan
-----------------------------------------------------------
                                                              PLAN_TABLE_OUTPUT
_______________________________________________________________________________
Plan hash value: 441133017

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |    76 |  2660 |    63   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BIGTAB |    76 |  2660 |    63   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=588)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
-----------------------------------------------------------
               1  CPU used by this session
               1  CPU used when call started
               3  DB time
              47  Requests to/from client
              46  SQL*Net roundtrips to/from client
               4  buffer is not pinned count
            1329  bytes received via SQL*Net from client
           91534  bytes sent via SQL*Net to client
               5  calls to get snapshot scn: kcmgss
              11  calls to kcmgcs
             302  consistent gets
             302  consistent gets from cache
             302  consistent gets pin
             302  consistent gets pin (fastpath)
               2  enqueue releases
               2  enqueue requests
               3  execute count
         2473984  logical read bytes from cache
             293  no work - consistent read gets
              49  non-idle wait count
               3  opened cursors cumulative
               1  opened cursors current
               2  parse count (hard)
               3  parse count (total)
               1  parse time cpu
               2  parse time elapsed
              12  process last non-idle time
               5  recursive calls
               1  recursive cpu usage
             302  session logical reads
               1  sorts (memory)
            2010  sorts (rows)
             293  table scan blocks gotten
           86905  table scan disk non-IMC rows gotten
           86905  table scan rows gotten
               2  table scans (short tables)
              47  user calls

Hope It Helped!
Prashant Dixit

