Warning: I wrote this post a while ago. There are now better ways to limit a process's memory (e.g. create a new cgroup by mkdir-ing inside your cgroupfs, then writing your PID to its cgroup.procs file).
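For reference, that newer approach looks roughly like this. A minimal sketch, assuming root privileges, a cgroup2 mount at /sys/fs/cgroup, and the memory controller enabled in the parent group; "demo" is a hypothetical group name:

```shell
#!/bin/sh
# Sketch of the direct cgroup v2 approach: make a group, set its memory
# limit, move the current shell (and future children) into it.
cg=/sys/fs/cgroup/demo
if mkdir "$cg" 2>/dev/null; then
  echo 100M > "$cg/memory.max" 2>/dev/null   # hard limit; exceeding it invokes the OOM killer
  echo $$ > "$cg/cgroup.procs" 2>/dev/null   # move this shell into the group
  status=limited
else
  status=skipped                             # not root, or no writable cgroup2 mount
fi
echo "$status"
```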
Some of the Twitter bots I have created run on my tiny DigitalOcean droplet and need to call external tools like youtube-dl, ffmpeg, and tesseract, which can consume a large and unpredictable amount of memory. On a droplet with 1 GB of memory and as much swap, this often means the tool, or even the bot process that invoked it, gets OOM-killed, which is suboptimal. I need a simple way to restrict the maximum memory usage of a process.
On Linux, this can be done with systemd, but first you must enable cgroups v2 support in your kernel. (I won’t go into the details of how cgroups work or what they do; I’m only interested in getting this to work.) You will need root access to set up cgroups and will have to reboot your machine, but once it’s set up, any user will be able to set process memory limits without root.
This tutorial was written for Debian, but it applies to any modern Linux distribution that supports the cgroups v2 unified hierarchy, has a modern Linux kernel (>= 4.6 or so; check with uname -a and update your system if needed) and a modern systemd.
Enable cgroups v2 “unified” support in your kernel by adding the following kernel flags to the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=1"
Update your grub configuration by running:
sudo update-grub
Update systemd to a modern version (>= 235 or so; check with
systemd --version). On Debian, I added the stretch-backports repository, then ran
sudo apt-get update and
sudo apt-get -t stretch-backports install systemd.
Reboot your system to apply the flags you added previously. It may be possible to do this without actually rebooting, but I’m not aware of how.
After rebooting, check that the cgroups v2 unified hierarchy is enabled by running (courtesy of Piotr Dobrogost):
[ $(stat -fc %T /sys/fs/cgroup/) = "cgroup2fs" ] && echo "unified" || ( [ -e /sys/fs/cgroup/unified/ ] && echo "hybrid" || echo "legacy")
The command must print unified.
You can now run any process with a max memory limit without root access.
To run e.g.
echo hi without any limits, run:
systemd-run --scope --user --quiet echo hi
See man 1 systemd-run for more information on the command. TL;DR: this creates a transient .scope unit in the user's service manager, without printing any additional messages.
stdin, stdout, stderr, and the process exit code all work as expected.
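For instance, the exit code of the wrapped command is propagated through the --scope invocation. A guarded sketch (it degrades gracefully on machines without a working user systemd session):

```shell
#!/bin/sh
# Probe whether systemd-run works here at all, then check that a
# child's exit code passes straight through the transient scope.
if systemd-run --scope --user --quiet true 2>/dev/null; then
  systemd-run --scope --user --quiet sh -c 'exit 3'
  code=$?             # expected to be 3, the child's own exit code
else
  code=unavailable    # no user systemd session (e.g. inside a container)
fi
echo "$code"
```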
You can specify various properties and limits, which are all documented in
man 5 systemd.resource-control. I use two properties that set memory limits:
MemoryMax: specifies an absolute limit on memory usage, excluding swap; if the process reaches this limit, it gets oom-killed;
MemorySwapMax: specifies an absolute limit on the swap memory usage (must be used with MemoryAccounting=True).
To run e.g.
echo hi with a 1MB limit on main memory and 1MB limit on swap memory, run:
systemd-run --scope --user --quiet -p MemoryMax=1M -p MemoryAccounting=True -p MemorySwapMax=1M echo hi
To check that this worked and that the process was indeed killed when it reached the memory limit, I used a simple bash one-liner that consumes an approximate amount of memory (courtesy of Luc):
yes | tr \\n x | head -c $BYTES | grep n
So, to check if the memory limits worked, I ran:
systemd-run --scope --user --quiet -p MemoryMax=1M -p MemoryAccounting=True -p MemorySwapMax=0M bash -c 'yes | tr \\n x | head -c 1000000 | grep n'
The process does get killed, and is no longer killed if I increase the MemoryMax limit to 4M or so (the memory allocations of the bash command are quite imprecise).
As an example, here’s a full command that is run by one of my bots:
systemd-run --scope --user --quiet -p MemoryMax=350M -p MemoryAccounting=True -p MemorySwapMax=250M ffmpeg -hide_banner -loglevel fatal -nostdin -y -i wcgw.mp4 -filter_complex '[0:v]trim=start=0:duration=7.517,setpts=PTS-STARTPTS[a];[0:a]atrim=start=0:duration=7.517[i];[a]split[b][c];[i]asplit[j][k];[c]reverse,fifo[d];[k]areverse[l];[b][j][d][l]concat=n=2:v=1:a=1[e][m]' -map '[e]' -map '[m]' wcgw_output.mp4
ffmpeg often needs more memory than is available and, when it gets oom-killed, aborts with exit code 137. Checking your program's exit code for this value is a useful way to detect that it was (oom-)killed.
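That exit code is 128 + 9, i.e. death by SIGKILL, which is the signal the kernel's OOM killer sends. A small self-contained sketch of the check, with a SIGKILL-ed subshell standing in for a real systemd-run invocation:

```shell
#!/bin/sh
# Exit code 137 = 128 + 9 (SIGKILL). Simulate an OOM kill by
# SIGKILL-ing a subshell, then inspect the reported exit status.
sh -c 'kill -9 $$'
status=$?
if [ "$status" -eq 137 ]; then
  echo "process was killed, likely by the OOM killer"
fi
```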