Resource Limiting

One of the ways of securing a server is to impose resource limits on users. This can prevent people from making the server unusable either by accident or on purpose.

The three most important "resources" are:

  1. The amount of the CPU's time dedicated to a single user's processes
  2. The amount of memory a single user can take up
  3. The amount of storage a single user can fill up

There are other things to pay attention to, such as controlling whether (and how many) ports a user's processes can bind. You may also want to apply strict limits to accounts controlled by actual people, but not to accounts used by the system for running services. For this purpose, the examples on this page refer to a group called users, whose members are all human user accounts.

Warning

If you apply any of these limits to the account you normally use, be aware that they might cause problems during system updates or other tasks that need a lot of resources. If something like a major distribution version upgrade fails, the resource limits on your user might be the cause.

Single User

Sometimes you may want a single process to run with limited resources. There are some simple ways to do this:

  1. Set the process's scheduler priority with nice -n NUMBER COMMAND, where NUMBER is the amount by which COMMAND's priority will be incremented (a negative number increases priority while a positive number decreases it). The range of allowed priorities is -20 to 19 (inclusive).
  2. Adjust the process's ulimits with the shell built-in ulimit command. You cannot raise any limit beyond its enforced hard limit, but you can lower limits for the current shell and the processes it starts. To see available limits, run ulimit -a; the flag required to set a given limit is shown in parentheses. For example, to set the limit on the number of open files to 200, run ulimit -n 200. A combined example follows this list.
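
For example, a minimal sketch combining both approaches (the build command is just a stand-in):

# Cap open files and address space for this shell and its children,
# then start a build at reduced CPU priority.
ulimit -n 256        # at most 256 open files
ulimit -v 2000000    # at most ~2 GB of address space (value in KiB)
nice -n 10 make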

To apply the above automatically to a given program in a graphical session, copy the program's .desktop file from /usr/share/applications/ into ~/.local/share/applications/ and modify its Exec value so that the desired settings are applied whenever the program is started.
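
For instance, a hypothetical editor's entry (the original Exec line here is an assumption) could be changed like so:

# ~/.local/share/applications/myeditor.desktop (excerpt)
Exec=nice -n 10 myeditor %F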

limits.conf

You can configure resource limits in /etc/security/limits.conf and in any file in the /etc/security/limits.d/ directory. In these files, you define limits which apply either to specific users or to groups of users. Limits set in these files are effective for restricting CPU and memory usage, but not storage usage. Rules are qualified as follows:

  • Rules qualified by * apply to every user
  • Rules qualified by just a name apply to the user with that name
  • Rules qualified by a name prefixed by @ apply to the members of the group with that name

Here are some example configurations:

# /etc/security/limits.conf

# This configuration makes sure no core files are produced
# and also reserves the highest process priority for the
# root user so that an unresponsive system can be more easily
# recovered.

*               hard    core            0
*               hard    nice            -19
root            hard    nice            -20

# /etc/security/limits.d/users.conf

# This configuration caps the nice priority of processes belonging
# to members of the users group at 10, ensuring the system never
# prioritizes a user's process over those belonging to services
# (the default priority is 0). It also limits each user's address
# space to roughly 1 gigabyte.

# Nice limit (negative = higher priority, positive = lower priority)
@users          hard    nice            10
# Default priority
@users          hard    priority        10

# Memory limit (in kilobytes)
@users          hard    as              1000000

NOTE: For systems using PAM (most systems today), you might have to explicitly configure the reloading of your limits configuration when switching users with su. To do so, add the following line to /etc/pam.d/su (some distributions ship this already):

session         required        pam_limits.so set_all

The set_all option is significant: without it, only limits with explicit values for the current user will be set, and default limits will be ignored. Some distributions enable this option by default, but it's better to be safe than sorry.
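
To verify that the limits are being applied, you can switch to a member of the users group (alice here is a placeholder) and inspect its limits:

su - alice -c 'ulimit -a'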

Command Line Utilities

Here are some tools which can be used as a quick-and-dirty way to tame processes that are consuming too many resources (a combined example follows the list):

  • cpulimit: Limit CPU usage with cpulimit -i -l PERCENTAGE COMMAND (the -i flag makes the limit apply recursively to sub-processes)
  • nice: Adjust CPU priority with nice -n ADJUSTMENT COMMAND
  • ionice: Adjust I/O priority with ionice -n ADJUSTMENT COMMAND
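
For instance, to keep a long-running compression job (the paths here are placeholders) from starving other processes, either of the following works:

# Hard-cap the job and its children at 50% of one CPU
cpulimit -i -l 50 tar czf /tmp/backup.tar.gz /home/alice

# Or lower its CPU and I/O priority instead of capping it
nice -n 19 ionice -c 3 tar czf /tmp/backup.tar.gz /home/alice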

Quotas

Disk quotas allow you to put limits on the amount of storage space users can take up. They can be applied per-user or per-group. Be aware that a group quota does not apply to files owned by the individual members of that group; group quotas apply strictly to files owned by the group itself. To make use of quotas, enable them on each partition where users can write files by adding the following mount options to its entry in /etc/fstab (an example entry follows the lists below):

If your kernel supports journaled quotas (it probably does):

  • usrjquota=aquota.user (for user quotas)
  • grpjquota=aquota.group (for group quotas)
  • jqfmt=vfsv1

If your kernel does not support journaled quotas:

  • usrquota (for user quotas)
  • grpquota (for group quotas)
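
For example, an /etc/fstab entry for a hypothetical /home partition (the device and filesystem type are placeholders) with journaled user and group quotas might look like this:

/dev/sda3  /home  ext4  defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv1  0  2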

After modifying the mount options, remount each partition to which you added quotas and make sure there are no errors before proceeding.
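
Assuming the /home example above:

mount -o remount /home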

Before you can enable quotas, you must create the files in which they will be stored. You can do this by running quotacheck -cugma, which creates aquota.user and aquota.group files at the root of each quota-enabled filesystem.

You should now be able to enable quotas with quotaon -a.

With quotas enabled, you can start editing the quotas for users and groups with edquota, either as edquota user or edquota -g group. The blocks and inodes columns indicate current usage and are not editable; to define limits, edit the respective soft and hard columns. Each "block" is 1 KiB.
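
For example, edquota alice (the user and values here are illustrative) might open an editor with contents like this:

Disk quotas for user alice (uid 1000):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/sda3                    102400     900000    1000000       1200        0        0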

You can inspect quotas using repquota -a.

Unlike limits.conf, there isn't a simple or nice way to define quotas that apply individually to every member of a given group. Instead, you can define a user quota on one member of the group, then copy it to every other member. Here is a script which does just that:

#!/bin/bash

PROTOTYPE_USER=p_user   # user whose quota will be duplicated
GROUP=group             # group whose members should receive it

# getent prints the group's members as a comma-separated fourth field
IFS=',' read -r -a members <<< "$(getent group "$GROUP" | cut -d : -f 4)"
for user in "${members[@]}"; do
    edquota -p "$PROTOTYPE_USER" "$user"
done

Replace p_user with the user whose quota you want to duplicate, and group with the group whose members should receive it.

If you put this script at /usr/local/sbin/copy_quotas.sh, then the following systemd unit should work nicely:

[Unit]
Description=Copy disk quotas to members of users group

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/copy_quotas.sh

[Install]
WantedBy=multi-user.target

You can call it something like copy_quotas.service and either run it manually, or use it with an accompanying timer like so:

[Unit]
Description=Copy disk quotas to members of users group

[Timer]
OnBootSec=1d
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target
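
Name the timer copy_quotas.timer so that it matches the service, then enable and start it:

systemctl daemon-reload
systemctl enable --now copy_quotas.timer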

cgroups

For systems running systemd, it is recommended to configure cgroups via systemd units (see the ArchWiki article on cgroups).

Note: On Void Linux, you must add CGROUP_MODE=unified to /etc/rc.conf in order to use cgroups v2 at the moment. (On recent releases of distributions which use systemd, cgroups v2 will most likely be the default.)

Some good documentation for cgroups v2 is available in the Linux kernel's cgroup-v2 admin guide.

cgroups are broken up into subsystems (also called controllers), each of which governs a different system resource. You can see the currently enabled v1 subsystems in the /proc/cgroups file and the currently enabled v2 subsystems in the /sys/fs/cgroup/cgroup.controllers file.
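
For example, on a cgroups v2 system:

cat /sys/fs/cgroup/cgroup.controllers
# typical output (varies by system): cpuset cpu io memory pids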

cgred

cgred is a daemon from the libcgroup suite of utilities; it assigns processes to cgroups according to a set of rules.

/etc/cgconfig.conf

This configuration file describes the control group hierarchy which will be loaded by the cgred daemon on startup. (The configuration can also be loaded independently with the cgconfigparser utility, as shown below.)
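
For example, to load the configuration by hand:

cgconfigparser -l /etc/cgconfig.conf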

Here is a sample cgconfig file:

# /etc/cgconfig.conf
# The format of this file is described in cgconfig.conf(5) manual page.

group users {
    memory {
        memory.high = 100M;
    }
}

group users/alice {
    cpu {
        cpu.shares = 100;
    }
}

group users/bob {
    cpu {
        cpu.shares = 50;
    }
}

/etc/cgrules.conf

This configuration file describes the rules which determine which processes are assigned to which control groups. Here is a sample cgrules config:

# /etc/cgrules.conf
# The format of this file is described in cgrules.conf(5) manual page.

# Example:
# <user>         <controllers>   <destination>
@student       cpu,memory      usergroup/student/
peter          cpu             test1/
%              memory          test2/
# End of file

Quirks

  • Anecdotally, I've not been able to use the wildcard (*) syntax for controllers; I have to list out each controller I want to use. It also seems that the "ditto" (%) syntax doesn't work like you might expect: you cannot copy just the user or group from the previous rule, i.e. %:firefox will ignore the :firefox part and simply copy the whole destination from the previous line.
  • Also, the first rule to match is the one which gets applied, so if you have a general rule which you would like a more specific rule to override, make sure the more specific rule appears closer to the top of the file than the general one!
  • Be careful about the permissions of the cgroups you create. Which user owns the cgroup under which a process runs can have unexpected consequences. For example, in order for polkit to work properly, processes should be in cgroups associated with the proper user; if you move processes into a cgroup associated only with root, features such as authentication agents will stop working properly.