Opened 3 years ago

Closed 2 years ago

#20879 closed enhancement (fixed)

Set rlimits in the containers.

Reported by: yawning
Owned by: yawning
Priority: Medium
Milestone:
Component: Archived/Tor Browser Sandbox
Version:
Severity: Normal
Keywords: sandbox-security
Cc:
Actual Points:
Parent ID:
Points:
Reviewer:
Sponsor:

Description

The containers should have rlimits set to prevent runaway resource use, though some of these (e.g. address space) are tricky and require thought.

After discussion on IRC, sensible defaults that could be applied to everything as a first pass would be something like the following (a sketch of applying them follows the list):

RLIMIT_STACK: 8192
RLIMIT_RSS: 0 (No effect as of Linux 2.6.x)
RLIMIT_CORE: 0
RLIMIT_NPROC: 512
RLIMIT_NOFILE: 1024 (512?, lower?)
RLIMIT_MEMLOCK: 64 (KiB)
RLIMIT_LOCKS: (check how much firefox/tor uses flock, set to something low)
RLIMIT_SIGPENDING: 64
RLIMIT_MSGQUEUE: 0 (assuming nothing uses this)
RLIMIT_NICE: 0
RLIMIT_RTPRIO: 0
RLIMIT_RTTIME: 0
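
A minimal sketch of applying defaults like these, assuming Go (the language of sandboxed-tor-browser) and the golang.org/x/sys/unix package; setDefaultRlimits is an illustrative helper, not the sandbox's actual code:

  package main

  import "golang.org/x/sys/unix"

  // setDefaultRlimits applies identical soft and hard limits to the current
  // process.  Container setup would call this just before exec'ing the
  // sandboxed binary, since rlimits are inherited across execve().
  func setDefaultRlimits() error {
      limits := map[int]uint64{
          unix.RLIMIT_STACK:      8192 * 1024, // 8192 KiB, expressed in bytes.
          unix.RLIMIT_CORE:       0,
          unix.RLIMIT_NPROC:      512,
          unix.RLIMIT_NOFILE:     1024,
          unix.RLIMIT_MEMLOCK:    64 * 1024, // 64 KiB.
          unix.RLIMIT_SIGPENDING: 64,
          unix.RLIMIT_MSGQUEUE:   0,
          unix.RLIMIT_NICE:       0,
          unix.RLIMIT_RTPRIO:     0,
          unix.RLIMIT_RTTIME:     0,
          // RLIMIT_RSS (no effect on modern kernels) and RLIMIT_LOCKS
          // (needs measurement first) are omitted here.
      }
      for resource, value := range limits {
          lim := unix.Rlimit{Cur: value, Max: value}
          if err := unix.Setrlimit(resource, &lim); err != nil {
              return err
          }
      }
      return nil
  }

  func main() {
      if err := setDefaultRlimits(); err != nil {
          panic(err)
      }
      // ... exec the contained process here ...
  }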

Child Tickets

Change History (7)

comment:1 Changed 3 years ago by yawning

Clarification: RLIMIT_STACK is 8192 KiB (when setting it, the unit used is bytes).
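
For reference, 8192 KiB is 8192 × 1024 = 8388608 bytes, which matches the STACK value shown in the prlimit output below.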

comment:2 Changed 3 years ago by cypherpunks

It doesn't look like Firefox is locking any memory, so RLIMIT_MEMLOCK can be set to 0.

$ pidof -s firefox
9688

$ prlimit -p 9688 -l
RESOURCE DESCRIPTION                         SOFT  HARD UNITS
MEMLOCK  max locked-in-memory address space 65536 65536 bytes

$ grep -E 'Vm(Size|Lck)' /proc/9688/status
VmSize:  1069636 kB
VmLck:         0 kB
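
The same VmLck check can be done programmatically; here is a small sketch assuming Go, where hasLockedMemory is an illustrative helper rather than an existing API:

  package main

  import (
      "fmt"
      "os"
      "strings"
  )

  // hasLockedMemory reports whether /proc/<pid>/status shows a non-zero
  // VmLck value, i.e. whether the process currently holds locked memory.
  func hasLockedMemory(pid int) (bool, error) {
      data, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
      if err != nil {
          return false, err
      }
      for _, line := range strings.Split(string(data), "\n") {
          fields := strings.Fields(line)
          if len(fields) >= 2 && fields[0] == "VmLck:" {
              return fields[1] != "0", nil
          }
      }
      return false, fmt.Errorf("VmLck not found in /proc/%d/status", pid)
  }

  func main() {
      locked, err := hasLockedMemory(os.Getpid())
      if err != nil {
          panic(err)
      }
      fmt.Println("holds locked memory:", locked)
  }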

Regarding RLIMIT_STACK, 8 MiB is probably overkill. It's safe, but I'd try 512 KiB.

$ prlimit -p 9688 -s
RESOURCE DESCRIPTION       SOFT    HARD UNITS
STACK    max stack size 8388608 8388608 bytes

$ grep -E 'Vm(Size|Stk)' /proc/9688/status
VmSize:  1069640 kB
VmStk:       132 kB

Be careful about reducing RLIMIT_NOFILE too far; going much lower than 512 might be risky.

$ prlimit -p 9688 -n
RESOURCE DESCRIPTION              SOFT HARD UNITS
NOFILE   max number of open files 4096 4096

$ ls /proc/9688/fd | sort -n | tail -n 1
71

$ ls /proc/9688/fd | sort -n | wc -l
52

You can also consider configuring only soft limits for things like RLIMIT_AS, RLIMIT_DATA, and RLIMIT_FSIZE. When a soft limit is exceeded, the kernel typically sends the process a signal (SIGXFSZ in the case of the file size limit), which can be trapped and interpreted. Exceeding the address space or data segment limits makes the relevant syscalls return ENOMEM (or delivers SIGSEGV if automatic stack expansion is attempted and there is no alternate stack available). If these are soft limits, then they can all be caught safely.

This can still limit the damage caused by, e.g., some types of integer overflows that require overflowing an unsigned bufsz = sizeof(buf), where the buffer contains attacker-controlled data. Such bugs are not at all uncommon in C code that parses large amounts of binary data, such as video and image libraries. A friend of mine last year burnt a 0day in an image library in Firefox (imlib2) that could have been prevented, like most integer overflows of its kind, by limiting the address space to 4 GiB.
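
A minimal sketch of that soft-limit-only approach, assuming Go and golang.org/x/sys/unix; the 1 GiB and 4 GiB values are illustrative placeholders, not recommendations from this ticket:

  package main

  import (
      "fmt"
      "os"
      "os/signal"

      "golang.org/x/sys/unix"
  )

  func main() {
      // Soft limit of 1 GiB on file size; the hard limit stays unlimited so
      // a trusted helper could raise the soft limit later.
      fsize := unix.Rlimit{Cur: 1 << 30, Max: unix.RLIM_INFINITY}
      if err := unix.Setrlimit(unix.RLIMIT_FSIZE, &fsize); err != nil {
          panic(err)
      }

      // Soft limit of 4 GiB on address space; allocations beyond it fail
      // with ENOMEM rather than raising a signal.
      as := unix.Rlimit{Cur: 4 << 30, Max: unix.RLIM_INFINITY}
      if err := unix.Setrlimit(unix.RLIMIT_AS, &as); err != nil {
          panic(err)
      }

      // Trap SIGXFSZ so an over-large write is reported instead of fatal;
      // the offending write(2) then fails with EFBIG.
      sig := make(chan os.Signal, 1)
      signal.Notify(sig, unix.SIGXFSZ)
      go func() {
          for range sig {
              fmt.Fprintln(os.Stderr, "file size soft limit exceeded")
          }
      }()

      // ... rest of the program ...
  }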

Normally, a process can raise its own soft limit up to the hard limit, but that requires access to the setrlimit() or prlimit64() syscalls. If those are denied via seccomp, then soft limits can still be used for security.

A possible UX implementation would be a pop-up saying that the browser is using more than 4 GiB of memory, or trying to create a very large file. This would be detected by a signal handler, which would have the browser activate a separate helper process and then send the un-catchable SIGSTOP to itself. The user would be given the option to let the browser continue the operation or to terminate it. If they terminate the browser, it is killed by force. If they let it continue, the helper process (the one raising the dialog window) raises the soft limit via prlimit64() and restores the browser via SIGCONT.

A user who has just opened a large number of tabs would know that is probably why they are getting a memory warning, and a user who has just started downloading a huge file would know that is probably why they are getting a file size warning. But a user with only 6 tabs open who is told the browser has exceeded 4 GiB would know something is up. A wise or paranoid user would choose the dialog option not to resume the browser and have it close, possibly with the option to coredump (if safe).
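
A rough sketch of the helper-process side of that flow, again assuming Go and golang.org/x/sys/unix; allowMoreMemory is an illustrative name, and RLIMIT_AS is used as the example resource:

  package main

  import "golang.org/x/sys/unix"

  // allowMoreMemory bumps pid's soft address-space limit by extra bytes,
  // leaving the hard limit untouched, then resumes the stopped process.
  // This only stays meaningful if the browser itself is denied
  // setrlimit/prlimit64 via seccomp.
  func allowMoreMemory(pid int, extra uint64) error {
      var cur unix.Rlimit
      if err := unix.Prlimit(pid, unix.RLIMIT_AS, nil, &cur); err != nil {
          return err
      }
      newLimit := unix.Rlimit{Cur: cur.Cur + extra, Max: cur.Max}
      if err := unix.Prlimit(pid, unix.RLIMIT_AS, &newLimit, nil); err != nil {
          return err
      }
      return unix.Kill(pid, unix.SIGCONT)
  }

  func main() {
      // Illustrative only: raise pid 9688's soft RLIMIT_AS by 1 GiB and
      // resume it after the user picks "continue" in the dialog.
      if err := allowMoreMemory(9688, 1<<30); err != nil {
          panic(err)
      }
  }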

Last edited 3 years ago by cypherpunks

comment:3 Changed 3 years ago by yawning

First pass: https://gitweb.torproject.org/tor-browser/sandboxed-tor-browser.git/commit/?id=82fcc3247c878cff63bbf34fe0c397638a232bde

I lowered the soft/hard limits to:

  RLIMIT_STACK = 512 * 1024
  RLIMIT_RSS = 0
  RLIMIT_NPROC = 512
  RLIMIT_NOFILE = 1024
  RLIMIT_MEMLOCK = 0  // Now proscribed via seccomp() as well.
  RLIMIT_LOCKS = 32
  RLIMIT_SIGPENDING = 64
  RLIMIT_MSGQUEUE = 0
  RLIMIT_NICE = 0
  RLIMIT_RTPRIO = 0
  RLIMIT_RTTIME = 0

I can probably go lower with NPROC/NOFILE, but I erred on the side of setting the limits somewhat conservatively.

As far as AS, DATA, and FSIZE go, I agree that they should be set *somehow*, and I like your idea of applying soft limits with UI integration. In general the sandbox needs more UI feedback (#20844), but I really need to think about all of this, so the initial release probably won't ship with them set, sorry.

At least things can only improve from here...

comment:4 Changed 3 years ago by yawning

Keywords: sandbox-security added

comment:5 Changed 3 years ago by yawning

Just as a note, I changed these to work around:

  • #20970 (RLIMIT_STACK is set to 8 MiB)
  • #20979 (RLIMIT_NPROC is left untouched)

Once I switch to setting the rlimits on a per-container basis, these can be re-added.

comment:6 in reply to: 5 Changed 3 years ago by cypherpunks

Replying to yawning:

Just as a note, I changed these to work around:

  • #20970 (RLIMIT_STACK is set to 8 MiB)
  • #20979 (RLIMIT_NPROC is left untouched)

Once I switch to setting the rlimits on a per-container basis, these can be re-added.

I think there are websites for browser benchmarking. You could probably test for acceptable limits by running those benchmarks with different resource limits set, to get at least an idea of the upper bound past which raising a limit is useless.

comment:7 Changed 2 years ago by yawning

Resolution: fixed
Status: new → closed

I'm calling this fixed because rlimits are set. At some point in the future, they could be improved.
