Tales From A Lazy Fat DBA

It's all about Databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯


When Linux Swaps Away My Sleep – MySQL, RHEL8, and the Curious Case of High Swap Usage

Posted by FatDBA on December 12, 2025

I remember an old case where I got an alert that one of our production MySQL servers had suddenly gone sluggish after being moved from RHEL 7 to RHEL 8. On checking, I found something odd … the system was consuming swap heavily, even though there was plenty of physical memory free.

Whoever did the initial deployment years before had left THP enabled and swappiness at its default. That setup had worked perfectly for years on RHEL 7, but after the upgrade to RHEL 8.10, the behavior was completely different.

This post is about how that small OS level change turned into a real performance headache, and what we found after some deep digging.

The server in question was a MySQL 8.0.43 instance running on a VMware VM with 16 CPUs and 64 GB RAM. When the issue began, users complained that the database was freezing randomly, and monitoring tools were throwing high load average and slow query alerts.

Let’s take a quick look at the environment … It was a pretty decent VM, nothing undersized.

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.10 (Ootpa)

$ uname -r
4.18.0-553.82.1.el8_10.x86_64

$ uptime
11:20:24 up 3 days, 10:57,  2 users,  load average: 4.34, 3.15, 3.63

$ grep ^CPU\(s\) sos_commands/processor/lscpu
CPU(s): 16

When I pulled the SAR data for that morning, the pattern was clear .. there were long stretches where %iowait spiked above 20-25%, and the load average crossed 400+ during peak time! The 09:50 slot looked particularly suspicious .. the load average jumped to 464 and remained high for several minutes.

09:00:01 %usr=26.08  %iowait=22.78  %idle=46.67
09:40:01 %usr=29.04  %iowait=24.43  %idle=40.11
09:50:01 %usr=7.55   %iowait=10.07  %idle=80.26
10:00:01 %usr=38.53  %iowait=19.54  %idle=35.32
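
For reference, the CPU history above came straight out of the sysstat archives. A minimal sketch of how to pull it (the sa12 file name is just my assumption for that day's archive, adjust to your sysstat retention setup):

$ sar -u -f /var/log/sa/sa12     # CPU breakdown: %usr, %iowait, %idle per interval
$ sar -q -f /var/log/sa/sa12     # run-queue length and load averages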

Here’s what the memory and swap stats looked like:

# Memory Utilization
%memused ≈ 99.3%
Free memory ≈ 400 MB (on a 64 GB box)
Swap usage ≈ 85% average, hit 100% at 09:50 AM
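
The memory and swap figures came from the same archives, plus a live look while the issue was ongoing. Again just a sketch, assuming the same archive file:

$ sar -r -f /var/log/sa/sa12     # memory utilization: %memused, kbmemfree
$ sar -S -f /var/log/sa/sa12     # swap utilization: %swpused
$ vmstat 5 5                     # live si/so columns show pages swapping in and out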

That was confusing.. MySQL was not leaking memory, and there was still >10 GB available for cache and buffers. The system was clearly pushing pages to swap even though it didn’t need to. That was the turning point in the investigation.
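
A quick sanity check to rule out MySQL itself looks roughly like this; treat it as a sketch, it assumes the mysql client can connect locally and that a single process named mysqld is running:

$ mysql -e "SELECT @@innodb_buffer_pool_size/1024/1024/1024 AS buffer_pool_gb;"
$ ps -C mysqld -o pid,rss,vsz,cmd            # mysqld resident memory (RSS, in KB)
$ grep VmSwap /proc/$(pidof mysqld)/status   # how much of mysqld is sitting in swap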

At the same time, the reporting agent started logging MySQL timeouts:

 09:44:09 [mysql] read tcp xxx.xx.xx.xx:xxx->xxx.xxx.xx.xx:xxxx: i/o timeout
 09:44:14 [mysql] read tcp xx.xx.xx.xxxx:xxx->xx.xx.xx.xx.xx:xxx: i/o timeout

And the kernel logs showed the horror lines familiar to every DBA .. MySQL threads were being stalled by the OS. This aligned perfectly with the time when swap usage peaked.

 09:45:34 kernel: INFO: task mysqld:5352 blocked for more than 120 seconds.
 09:45:34 kernel: INFO: task ib_pg_flush_co:9435 blocked for more than 120 seconds.
 09:45:34 kernel: INFO: task connection:10137 blocked for more than 120 seconds.
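
If you want to hunt for these stalls yourself, the hung-task messages land in the kernel ring buffer and the journal. A sketch (the time window is just an example):

$ dmesg -T | grep -i "blocked for more than"
$ journalctl -k --since "09:30" --until "10:00" | grep -i "blocked for more than"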

I double-checked the swappiness configuration:

$ cat /proc/sys/vm/swappiness
1

So theoretically, swap usage should have been minimal, yet the system was still paging aggressively. Then I checked the cgroup configuration (a trick I learned from a Red Hat note) .. and there it was: more than 115 cgroups still using the default swappiness of 60!

$ find /sys/fs/cgroup/memory/ -name '*swappiness' -exec cat {} \; | sort | uniq -c
      1 1
    115 60

In RHEL 8, memory management moved more toward cgroup v2, which isolates memory parameters by control group. So even if /proc/sys/vm/swappiness is set to 1, processes inside those cgroups can still follow their own default value (60). This explained why the system was behaving like swappiness=60 even though the global value was 1.
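
If you want to see the exact value MySQL's own cgroup is using, something along these lines works on a systemd-managed install (the mysqld.service path is an assumption, it depends on how MySQL is packaged and started):

$ systemctl status mysqld | grep -i cgroup     # confirm which cgroup mysqld runs under
$ cat /sys/fs/cgroup/memory/system.slice/mysqld.service/memory.swappiness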

Once the root cause was identified, the fix was straightforward: enforce the global swappiness across all cgroups.

Add this to /etc/sysctl.conf:

vm.force_cgroup_v2_swappiness = 1

Then reload:
sysctl -p

This forces the kernel to apply the global swappiness value to all cgroups, ensuring consistent behavior. Next, we handled THP, which is known to cause intermittent fragmentation and stalls in memory-intensive workloads like MySQL, Oracle, and PostgreSQL, and even in non-RDBMS systems like Cassandra: we disabled transparent huge pages and rebooted the host.
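
For completeness, this is roughly how THP gets checked and disabled on RHEL 8; the echo is a runtime change only, and the grubby kernel argument (or a tuned profile) is what makes it survive the reboot:

$ cat /sys/kernel/mm/transparent_hugepage/enabled            # shows [always] madvise never
$ echo never > /sys/kernel/mm/transparent_hugepage/enabled   # runtime only (needs root), lost on reboot
$ grubby --update-kernel=ALL --args="transparent_hugepage=never"   # persistent across reboots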

In short, here's what happened and what the root cause was:

  • RHEL8 introduced a change in how swappiness interacts with cgroups.
  • The old /proc/sys/vm/swappiness setting no longer applies universally.
  • Unless explicitly forced, MySQL’s cgroup keeps the default swappiness (60).
  • Combined with THP and background I/O, this created severe page cache churn.

So the OS upgrade, not MySQL, was the real root cause.

Note: https://access.redhat.com/solutions/6785021

Hope It Helped!
Prashant Dixit
