Why ulimit is not a cure-all for stack problems

I mentioned the Bash built-in ulimit in a previous post. It is a nice way to inspect the resource limits that apply to your session, and to modify them. For example, running ulimit -a lists the current (soft) limits:

core file size         (blocks, -c) 0
data seg size          (kbytes, -d) unlimited
file size              (blocks, -f) unlimited
max locked memory      (kbytes, -l) unlimited
max memory size        (kbytes, -m) unlimited
open files                     (-n) 7168
pipe size           (512 bytes, -p) 1
stack size             (kbytes, -s) 8192
cpu time              (seconds, -t) unlimited
max user processes             (-u) 709
virtual memory         (kbytes, -v) unlimited

This tells me that I can’t create core files (probably not a bad thing), because their size is limited to zero. Of course I could change this with ulimit -c 1000, allowing core files of up to 1000 blocks. One thing people like to modify is the stack size. Invoking ulimit -H -a shows the hard limits; for the stack size, this is the result on my system:

stack size             (kbytes, -s) 65532

That’s roughly 64 MB of stack space, which is not an insignificant amount. Some people like to use:

ulimit -s unlimited

This does not always work, instead returning the error “-bash: ulimit: stack size: cannot modify limit: Operation not permitted”. This means the kernel enforces a hard limit on the stack size, and an unprivileged user cannot raise the soft limit above it. The hard limit can of course be raised by the root user. But first ask yourself why you need that much stack space. Maybe the algorithm you are using isn’t that efficient? I mean, 64 MB is already a *lot*. Often the stack is increased because of recursion, which may not always be the most efficient way of writing an algorithm.
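To make the recursion point concrete, here is a hypothetical sketch in Python: the same sum computed recursively (one stack frame per element) and iteratively (constant stack). The function names and input size are my own illustration, not from any particular codebase:

```python
def sum_recursive(n):
    # One stack frame per call: depth grows linearly with n, and
    # quickly exceeds Python's default recursion limit (~1000).
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # Constant stack usage: just a loop and an accumulator.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_iterative(100_000))    # works fine in constant stack space
try:
    sum_recursive(100_000)       # needs 100,000 stack frames
except RecursionError:
    print("recursion limit exceeded")
```

No amount of fiddling with ulimit makes the recursive version scale better; the iterative rewrite removes the stack pressure entirely.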

ulimit is also used in association with programming languages such as Python and Julia, where it is invoked to extend the stack available to a program. The problem here is that playing around with the stack size implies that you know exactly how much memory your algorithm will need, and that’s not always the case, especially with recursion. There is a reason system resources are limited: they are finite. Setting a lot of resources to unlimited is a bad idea; too many unbounded resources means memory can be eaten up to the point where the system becomes unresponsive.
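In Python, for instance, this sort of tuning is done with the standard resource module (Unix only) together with sys.setrecursionlimit. A minimal sketch, assuming an unprivileged process; the recursion-limit value is just an illustrative guess:

```python
import resource
import sys

# Read the current soft and hard limits on stack size (in bytes).
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("soft:", soft, "hard:", hard)

# Raise the soft limit up to the hard limit. Trying to go beyond the
# hard limit raises ValueError for an unprivileged process -- the same
# restriction that "ulimit -s unlimited" runs into at the shell.
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))

# The interpreter's own recursion guard is separate from the OS stack
# limit, and must be raised as well before deep recursion will run.
sys.setrecursionlimit(100_000)
```

Note that both knobs only move the ceiling; they do nothing about how fast the algorithm climbs toward it.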

The use of ulimit is not a panacea for fixing algorithm problems.

