Limit memory usage per process in Linux

How to limit memory usage for a single Linux process without killing the process


I know ulimit can limit memory usage, but if the process exceeds the limit, it will be killed.

Is there any other command or shell built-in that can limit memory usage without killing the process?

2 Answers

Another way besides setrlimit, which can be set using the ulimit utility:

$ ulimit -Sv 500000 # Set

is to use Linux’s control groups, which limit a process’s (or group of processes’) allocation of physical memory distinctly from virtual memory. For example:

$ cgcreate -g memory:/myGroup
$ echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
$ echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes

will create a control group named «myGroup», cap the set of processes run under myGroup to 500 MB of physical memory and to 5000 MB of physical memory plus swap combined. To run a process under the control group:
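The invocation that belongs here is cgexec from cgroup-tools; the program name below is a placeholder, and the command needs root (or delegated cgroup permissions):

```shell
# Run a command with its memory governed by the myGroup cgroup
$ cgexec -g memory:myGroup yourApp
```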

Note: from what I understand, setrlimit limits virtual memory, while with cgroups you can limit physical memory.

I believe you are wrong in thinking that a limit set with setrlimit(2) will always kill the process.

Indeed, if the stack space is exceeded (RLIMIT_STACK), the process would be killed (with SIGSEGV).

But if it is heap memory (RLIMIT_DATA or RLIMIT_AS), mmap(2) would fail; if it was called from malloc(3) or friends, that malloc would fail.
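This behaviour can be demonstrated directly. The sketch below (the function name and the restore-the-limit hygiene are mine, not from the answer) lowers the soft RLIMIT_AS and shows that a too-large malloc simply returns NULL instead of the process being killed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

/* Lower the soft RLIMIT_AS to `cap` bytes, try to malloc `want` bytes,
 * then restore the old limit. Returns 1 if the allocation failed
 * gracefully (malloc returned NULL, no signal), 0 if it succeeded. */
int malloc_fails_under_cap(rlim_t cap, size_t want) {
    struct rlimit old, lim;
    if (getrlimit(RLIMIT_AS, &old) != 0) {
        perror("getrlimit");
        return 0;
    }
    lim = old;
    lim.rlim_cur = cap;            /* soft limit only; hard limit untouched */
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 0;
    }
    void *p = malloc(want);        /* mmap(2) inside malloc fails => NULL */
    int failed = (p == NULL);
    free(p);                       /* free(NULL) is a no-op */
    setrlimit(RLIMIT_AS, &old);    /* restore the previous soft limit */
    return failed;
}
```

With a 256 MB cap, a 512 MB request fails cleanly; the process keeps running.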

Some Linux systems are configured with memory overcommit. Whether to disable it is a sysadmin decision: echo 0 > /proc/sys/vm/overcommit_memory

The moral of the story is that you should always check the result of malloc, at least like this:
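A minimal version of this pattern, using the conventional wrapper name xmalloc (my naming, not the answer's):

```c
#include <stdio.h>
#include <stdlib.h>

/* A minimal checked-malloc wrapper: either returns valid memory
 * or exits with a diagnostic, so callers never see NULL. */
void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}
```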

Of course, sophisticated applications could handle «out-of-memory» conditions more wisely, but it is difficult to get right.

Some people incorrectly assume that malloc never fails. That is their mistake: they then dereference a NULL pointer and get the SIGSEGV they deserve.

You could consider catching SIGSEGV with some processor-specific code; see this answer. Unless you are a guru, don’t do that.


Limiting process memory/CPU usage on Linux

I know we can adjust scheduling priority by using the nice command.

However, the man page doesn’t say whether it will limit both CPU and memory or only CPU (and in any case, it can’t be used to specify absolute limits).

Is there a way to run a process and limit its memory usage to, say, X MB and its CPU usage to, say, Y MHz in Linux?

3 Answers

You might want to investigate cgroups as well as the older (obsolete) ulimit.

In theory, you should be able to use ulimit for this purpose. However, I’ve personally never gotten it to work.

Linux-specific answer:

On historical systems, running ulimit -m $LIMIT_IN_KB would have been the correct answer. Nowadays you have to use cgroups, via cgexec or systemd-run.

However, for systems that are still transitioning to systemd there does not seem to be any solution that does not require setting up a pre-made configuration for each limit you wish to use. This is because such systems (e.g. Debian/Ubuntu) still use «hybrid hierarchy cgroups», and systemd supports setting memory limits only with the newer «unified hierarchy cgroups». If your Linux distribution is already running systemd with unified hierarchy cgroups, then running a given user-mode process with specific limits should work like this:
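A sketch of such an invocation; yourApp is a placeholder, and the property names come from man systemd.resource-control:

```shell
# Run a program in a transient scope with a 500 MB memory cap and a
# low CPU weight (requires a systemd user session with cgroup v2)
systemd-run --user --scope -p MemoryMax=500M -p CPUWeight=10 yourApp
```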

For possible parameters, see man systemd.resource-control .

If I’ve understood correctly, CPUWeight tells the kernel how much CPU to give the process when the CPU is fully tasked. It defaults to 100; lower values mean less CPU time when multiple processes compete for it. It will not limit CPU usage when the CPU is less than 100% utilized, which is usually a good thing. If you truly want to force the process to use less than a single core even when the machine is idle, you can set e.g. CPUQuota=10% to cap the process at 10% of a single core. CPUQuota=200% means the process can use up to 2 cores on average (though without CPU binding it may spread that time across more CPUs).
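For example (yourApp is a placeholder; both commands need a working systemd session):

```shell
# Cap the process at 10% of one CPU even on an idle machine
systemd-run --scope -p CPUQuota=10% yourApp

# Allow up to two full cores on average
systemd-run --scope -p CPUQuota=200% yourApp
```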

Update (2021): a plain systemd-run --user invocation with memory limits should work if you’re running a version of systemd that includes a fix for bug https://github.com/systemd/systemd/issues/9512 – in practice, you need the fixes listed here: https://github.com/systemd/systemd/pull/10894


If your system is lacking those fixes, such a command appears to work, but the memory limits are not actually enforced.

In reality, it seems that Ubuntu 20.04 LTS does not contain the required fixes. The following should fail:
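The failing command was presumably along these lines (reconstructed from the surrounding description; the stress flags and exact limit are my guess):

```shell
# Ask for a 200 MB allocation under a 100 MB cap; with working cgroup
# support this must be OOM-killed instead of completing
systemd-run --user --scope -p MemoryMax=100M -p MemorySwapMax=0 \
    stress --vm 1 --vm-bytes 200M --timeout 5s
```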

because the stress command is expected to require slightly more than 200 MB of RAM while the memory limit is set lower. According to bug https://github.com/systemd/systemd/issues/10581, Poettering says that this should work if the distro is using cgroupsv2, whatever that means in practice.

I don’t know a distro that implements user mode cgroup limits correctly. As a result, you need root to configure the limits.


Setting limit to total physical memory available in Linux

I know that I am supposed to set mem=MEMORY_LIMIT, but I do not know where to set it, at runtime or at boot time, in order to limit the total physical memory that the OS has control of.

I am running I/O benchmarks, and I would like to limit the amount of overall physical memory that is available.

5 Answers

I found the answer I was looking for. The parameter that sets the total available physical memory is mem=MEMORY_LIMIT, and it is a kernel boot parameter. For example, add mem=1G to the kernel boot parameters for a maximum of 1GB of available physical memory. For more info on how to add kernel boot parameters, see https://wiki.ubuntu.com/Kernel/KernelBootParameters

Edit your kernel boot parameters in lilo.conf , grub.conf , grub.cfg , or menu.lst (which one depends on your particular distro and bootloader; check your distro’s documentation for more detail) to include the parameter mem=512M (or whatever size you want to emulate) on the line specifying your kernel parameters.

For instance, in Grub, there should be a line that says something like kernel /boot/vmlinuz param1=val1 param2=val2 . Add the mem=512M to that list of parameters. You can create separate entries for your boot menu by copying these entire definitions, renaming them, and configuring each with a different amount of memory, so you can quickly boot with different settings.

To add to Brian Campbell’s list, for the U-Boot bootloader on the BeagleBone / Black devices, edit the kernel parameters in /boot/uboot/uEnv.txt and add or modify the line mmcargs=setenv bootargs mem=512M [tested with Debian].

Run free before and after the reboot to confirm the modification.

A1: Yes, you have to reboot.

A2: The kernel is rather unforgiving with respect to typos; there are no error messages. Could that be your problem? Examples: «mem=512M» and «mem=2G». Don’t forget the space following the previous parameter; mem is in lower case, and K, M, and G are upper case.

I’ve followed the instructions on the page that KZcoding mentioned (the part «Permanently Add a Kernel Boot Parameter»)

My Linux is (vm in virtualbox):

Just changed this line in /etc/default/grub
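The changed line itself isn’t shown; presumably it was the kernel command-line default, something like:

```shell
# /etc/default/grub (run update-grub and reboot afterwards);
# the values besides mem=512M are illustrative
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=512M"
```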


Limit memory usage for a single Linux process

I’m running pdftoppm to convert a user-provided PDF into a 300DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300DPI image of that size in memory, which for a 100 inch square page is 100*300 * 100*300 * 4 bytes per pixel = 3.5GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems.

So what I’d like to do is put some kind of hard limit on memory usage for a child process I’m about to run—just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible?

I don’t think ulimit can be used for this, but is there a one-process equivalent?

8 Answers

Another way to limit this is to use Linux’s control groups. This is especially useful if you want to limit a process’s (or group of processes’) allocation of physical memory distinctly from virtual memory. For example:
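The setup commands are the same as in the first thread above:

```shell
# Create the group, cap physical memory at 500 MB and
# physical memory plus swap at 5000 MB (requires root)
$ cgcreate -g memory:/myGroup
$ echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
$ echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes
```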

will create a control group named myGroup, cap the set of processes run under myGroup at 500 MB of physical memory via memory.limit_in_bytes, and at 5000 MB of physical memory plus swap combined via memory.memsw.limit_in_bytes. More info about these options can be found here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-memory

To run a process under the control group:
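With the cgroup-bin/cgroup-tools package installed, a typical invocation would be (the pdftoppm arguments are placeholders):

```shell
# Launch the process inside the myGroup memory cgroup (requires root)
$ cgexec -g memory:myGroup pdftoppm -r 300 input.pdf output
```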

Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:
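The value referred to is presumably the commonly cited one that enables the memory controller and swap accounting:

```shell
# /etc/default/grub on Ubuntu
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
```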

and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.

If your process doesn’t spawn children that consume the most memory, you may use the setrlimit function. The more common user interface for that is the shell’s ulimit command:
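For example, set the limit in a subshell so the parent shell is unaffected; exec then replaces the subshell with the target program (here just echo, as a stand-in):

```shell
# 524288 KB = 512 MB virtual-memory soft limit
(ulimit -Sv 524288; exec echo "running with a 512 MB virtual-memory cap")

# The limit exists only inside the parentheses:
(ulimit -Sv 524288; ulimit -Sv)   # prints 524288
```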

This will only limit the «virtual» memory of your process, taking into account (and limiting) the memory the process shares with other processes, as well as memory that is mapped but not reserved (for instance, Java’s large heap). Still, virtual memory is the closest approximation for processes that grow really large, making these errors insignificant.


If your program spawns children, and it’s they that allocate memory, it becomes more complex: you should write auxiliary scripts to run the processes under your control. I wrote in my blog why and how.

There are some problems with ulimit. Here’s a useful read on the topic: Limiting time and memory consumption of a program in Linux, which led to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.

The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:

After that, you can ‘cage’ your process by memory consumption as in your question like so:
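Presumably along these lines (to my understanding the tool’s -m flag takes kilobytes; the pdftoppm arguments are placeholders):

```shell
# Kill the process (and its forks) if its memory use exceeds ~500 MB
timeout -m 512000 pdftoppm -r 300 input.pdf output
```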

Alternatively you could use -t and -x to respectively limit the process by time or CPU constraints.

The way this tool works is by checking multiple times per second whether the spawned process has oversubscribed its set boundaries. This means there is a small window in which the process can oversubscribe before timeout notices and kills it.

A more correct approach would hence likely involve cgroups, but those are much more involved to set up, even if you use Docker or runC, which, among other things, offer a more user-friendly abstraction around cgroups.

On any systemd-based distro you can also use cgroups indirectly through systemd-run. E.g. for your case of limiting pdftoppm to 500M of RAM, use:
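A sketch of the invocation; MemoryMax is the current property name (older systemd versions used MemoryLimit), and the pdftoppm arguments are placeholders:

```shell
# Transient scope with a hard 500 MB memory cap
systemd-run --scope -p MemoryMax=500M pdftoppm -r 300 input.pdf output
```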

Note: this will ask you for a password, but the app is launched as your user. Do not let this delude you into thinking that the command needs sudo, because that would cause the command to run under root, which was hardly your intention.

If you don’t want to enter the password (indeed, why would you need a password to limit memory you already own), you could use the --user option; however, for this to work you will need cgroupsv2 support enabled, which right now requires booting with the systemd.unified_cgroup_hierarchy kernel parameter.

I’m using the below script, which works great. It uses cgroups through cgmanager. Update: it now uses the commands from cgroup-tools. Name this script limitmem, put it in your $PATH, and you can use it like limitmem 100M bash . This will limit both memory and swap usage. To limit just memory, remove the line with memory.memsw.limit_in_bytes .

Edit: On default Linux installations this only limits memory usage, not swap usage. To enable swap-usage limiting, you need to enable swap accounting on your Linux system. Do that by setting/adding swapaccount=1 in /etc/default/grub so it looks something like
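The resulting line would look something like this (any values besides swapaccount=1 are illustrative):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX="swapaccount=1"
```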

Then run sudo update-grub and reboot.

Disclaimer: I wouldn’t be surprised if cgroup-tools also breaks in the future. The correct solution would be to use the systemd APIs for cgroup management, but there are no command-line tools for that at the moment.

Edit (2021): Until now this script still works, but it goes against Linux’s recommendation to have a single program manage your cgroups; nowadays that program is usually systemd. Unfortunately, systemd has a number of limitations that make it difficult to replace this script with systemd invocations. The systemd-run --user command should allow a user to run a program with resource limits, but that isn’t supported on cgroups v1. (Everyone uses cgroups v1 because Docker doesn’t work on cgroups v2 yet, except for the very latest versions.) With root access (which this script also requires) it should be possible to use systemd-run to create the correct systemd-supported cgroups and then manually set the memory and swap properties in the right cgroup, but that is still to be implemented. See also this bug comment for context, and here and here for relevant documentation.

According to @Mikko’s comment, using a script like this with systemd runs the risk of systemd losing track of processes in a session. I haven’t noticed such problems, but I use this script mostly on a single-user machine.


How to limit memory usage of a process under Linux

In various cases a process can easily eat up the memory of a server. This can happen quickly or slowly (over weeks). This article will show you how to find such a process and how to limit its memory usage. Linux itself does not limit the physical memory usage of a process, whether it runs with root privileges or not.


Physical memory handling in Linux

The physical memory (RAM) is divided into equally sized parts called memory pages. The size of a page depends on the architecture of the server and is set up by the OS; the most common page size under Linux is 4096 bytes.
At system boot, the vmlinuz image is loaded at the very beginning of the RAM. Loading the kernel and its modules requires some more memory, so the kernel allocates slab caches. When an allocated slab is no longer needed, the kernel may free it. Pages used by the kernel are never swapped out.
Besides the kernel, processes also require physical pages for their code, static data, and stack. The physical memory consumed by a specific process is called its Resident Set Size (RSS).
The remaining RAM is used for page caching. It has two parts: one caches the metadata of the filesystem (superblocks, inodes, etc.), the other caches the data blocks stored on disk. Monitoring applications like top, free, etc. show two separate values (cached and buffers); the sum of these values is the page cache. The size of the page cache varies depending on how much memory is used by the kernel and the processes.
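You can check the page size of your own system from the shell (from C, sysconf(_SC_PAGESIZE) returns the same value):

```shell
# Print the memory page size in bytes
getconf PAGE_SIZE
```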

In the rest of the memory the kernel keeps a page pool so it can fulfill requests for a new page. When this pool drops below a certain limit, the kernel reclaims pages from the page cache (flushing) or from processes (swapping out). In this case the RSS of a process decreases. In some cases the kernel can also reclaim pages from the slab caches.

This is an output from atop: “tot” shows the total physical RAM, and “cache” + “buff” together are the page cache.

When watching the memory details with atop, the RSS of a process can be seen in the RSIZE column; the RGROW column shows how the RSS has changed during the update interval.

Virtual memory handling in Linux

When a process starts in Linux, the kernel creates a virtual address space for it; this space describes all the memory the process can use. The size of the virtual space is determined by the text and data pages of the process plus some stack space. At this point no physical memory is used yet!

After the address space has been built, the kernel fills the PC register with the address of the first instruction. The CPU tries to fetch the instruction and notices that it is not in memory, so it generates a trap, which is handled by the kernel; the kernel routine loads the missing data into RAM, and then the process is restarted from the same point. Pages that are never referenced won’t be loaded into memory at all. The RSS can therefore never be larger than the virtual size: for example, the kernel may create an 80KB virtual space for a process while only 20KB of RSS (5 pages) is used.
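You can compare the two sizes for any process with ps; here we inspect the current shell itself:

```shell
# Print virtual size (VSZ) and resident set size (RSS) of the current
# shell, in kilobytes; RSS can never exceed VSZ
ps -o vsz=,rss= -p $$
```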

Test case

Let’s create a small C program that allocates memory using malloc().

# cat leaker.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main() {
    int j = 0;
    for (j = 0; j < 10; j++) {
        /* allocate and touch 128MB per iteration */
        char *p = malloc(128 * 1024 * 1024);
        if (p == NULL) {
            perror("malloc");
            return 1;
        }
        memset(p, 'x', 128 * 1024 * 1024);
        sleep(30);
    }
    return 0;
}

Create a memory control group named “leaker” and limit it to 384MB of physical RAM:

# echo 384M > /cgroup/mem/leaker/memory.limit_in_bytes

There is another file named “tasks”. PIDs (children and threads too) listed in this file belong to this cgroup. Let’s put the PID of the leaker into this control group.

# echo 4173 > /cgroup/mem/leaker/tasks

Now let’s run the test again. The leaker process can use only 384MB of physical RAM. At the beginning there is plenty of RAM; the system has just rebooted, so the page cache is almost empty. The leaker has started and allocated 128MB of RAM (virtual and RSS).

After 4 more iterations (+512MB) the leaker reaches the limit we specified (640MB total vs. 384MB). You can see that the virtual size is 640MB (total allocated pages) but the RSS is 319MB, due to the limit. The difference between RSS and virtual size has been swapped out to disk. The kernel has not stolen any pages from the other processes!

Allocating 512MB more RAM made the situation critical in the first case, but now the system still has free RAM and page cache! The memory allocated by the leaker simply landed in swap. No other processes are affected!

As you can see, cgroups can prevent processes from stealing resources (not just memory, but CPU time and more) from other processes. Multiple processes can be put in the same group.

The memory leak itself hasn’t been fixed but, at least, it’s under control.

