Be careful with reducing RLIMIT_NOFILE too low. Much lower than 512 might be risky.
$ prlimit -p 9688 -n
RESOURCE DESCRIPTION              SOFT HARD UNITS
NOFILE   max number of open files 4096 4096
$ ls /proc/9688/fd | sort -n | tail -n 1
71
$ ls /proc/9688/fd | sort -n | wc -l
52
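For context, here is a minimal sketch of how a sandbox launcher might lower RLIMIT_NOFILE before exec'ing the browser. The 512 value is just the floor suggested above, not a value taken from the actual sandbox configuration, and the launcher itself is hypothetical:

```c
/* Hypothetical launcher sketch: lower RLIMIT_NOFILE, then exec the browser. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }

    /* Lowering the hard limit too means an unprivileged child can never
     * raise it back above 512. */
    struct rlimit rl = { .rlim_cur = 512, .rlim_max = 512 };
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit(RLIMIT_NOFILE)");
        return 1;
    }

    /* rlimits are inherited across exec, so the browser starts with them. */
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}
```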
You can also consider configuring only soft limits for things like RLIMIT_AS, RLIMIT_DATA, and RLIMIT_FSIZE. With a soft limit, the kernel typically sends the process a signal, such as SIGXFSZ when the file size limit is exceeded, which can be trapped and interpreted. Exceeding the address space or data segment limits results in the relevant syscalls returning ENOMEM (or in the process receiving SIGSEGV if automatic stack expansion is attempted and no alternate signal stack is available). If these are soft limits, then they can all be caught safely. This can still limit the damage caused by, e.g., some types of integer overflows that require overflowing an unsigned bufsz = sizeof(buf), where the buffer contains attacker-controlled data. These are not at all uncommon in C code that parses large amounts of binary data, such as video and image libraries. Last year a friend of mine burned a 0day in an image library in Firefox (imlib2) which, like most integer overflows of its kind, could have been prevented by limiting the address space to 4 GiB.
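To illustrate the soft-limit behaviour described above, here is a minimal, self-contained sketch (not code from the sandbox) that installs a soft RLIMIT_FSIZE and a SIGXFSZ handler; once the limit is exceeded, write() fails with EFBIG instead of the default action killing the process. The 1 MiB limit and the /tmp path are arbitrary values chosen for the demo:

```c
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static volatile sig_atomic_t hit_fsize_limit = 0;

static void on_sigxfsz(int sig)
{
    (void)sig;
    hit_fsize_limit = 1;            /* async-signal-safe: just set a flag */
}

int main(void)
{
    /* Soft limit of 1 MiB; the hard limit is left higher so it could be
     * raised again later (e.g. by a helper process). */
    struct rlimit rl = { .rlim_cur = 1 << 20, .rlim_max = 1 << 24 };
    if (setrlimit(RLIMIT_FSIZE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    struct sigaction sa = { .sa_handler = on_sigxfsz };
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGXFSZ, &sa, NULL) != 0) {
        perror("sigaction");
        return 1;
    }

    int fd = open("/tmp/fsize-demo", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[4096];
    memset(buf, 'A', sizeof(buf));
    size_t written = 0;
    while (written < (2u << 20)) {
        /* Once the write would exceed the soft limit, the kernel delivers
         * SIGXFSZ; because it is caught, write() then fails with EFBIG. */
        ssize_t n = write(fd, buf, sizeof(buf));
        if (n < 0 || hit_fsize_limit) {
            fprintf(stderr, "hit RLIMIT_FSIZE after %zu bytes\n", written);
            break;
        }
        written += (size_t)n;
    }
    close(fd);
    return 0;
}
```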
Normally, a process can raise its own soft limit up to the hard limit, but that requires access to the setrlimit() or prlimit64() syscall. If those are denied via seccomp, then soft limits can still be used for security. A possible UX implementation would be a pop-up saying that the browser is using more than 4 GiB of memory or is trying to create a very large file. This would be detected by a signal handler, which results in the browser activating a separate helper process and sending the un-catchable SIGSTOP to itself. The user would be given the option to let the browser continue the operation, or to terminate it. If they terminate the browser, it's killed by force. If they let it continue, the soft limit is raised by the helper process (the one raising the dialog window) via prlimit64(), and the browser is resumed via SIGCONT. A user with a large number of tabs open would know that the memory warning is probably caused by that. A user who just started downloading a huge file would know the file size warning is probably caused by that. But a user with only 6 tabs open who is told they've exceeded 4 GiB would know something is up. A wise or paranoid user would select the dialog option not to resume execution of the browser and have it close, possibly with the option to coredump (if safe).
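A hedged sketch of the helper-process side of that flow, assuming the browser has already stopped itself with SIGSTOP and the helper runs outside the browser's seccomp filter. ask_user() here is a stand-in for the real dialog, and prlimit() is the glibc wrapper around the prlimit64 syscall:

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>

/* Stand-in for the pop-up dialog described above: read y/n from the terminal. */
static int ask_user(const char *question)
{
    char answer[8] = "";
    fprintf(stderr, "%s [y/N] ", question);
    if (!fgets(answer, sizeof(answer), stdin))
        return 0;
    return answer[0] == 'y' || answer[0] == 'Y';
}

static int handle_limit_hit(pid_t browser_pid)
{
    struct rlimit old;
    if (prlimit(browser_pid, RLIMIT_AS, NULL, &old) != 0) {
        perror("prlimit(get)");
        return -1;
    }

    if (ask_user("The browser is using more than 4 GiB of memory. Let it continue?")) {
        /* Raise only the soft limit; the hard limit stays as the ceiling. */
        struct rlimit raised = old;
        raised.rlim_cur += (rlim_t)1 << 30;            /* +1 GiB, for example */
        if (raised.rlim_max != RLIM_INFINITY && raised.rlim_cur > raised.rlim_max)
            raised.rlim_cur = raised.rlim_max;
        if (prlimit(browser_pid, RLIMIT_AS, &raised, NULL) != 0) {
            perror("prlimit(set)");
            return -1;
        }
        return kill(browser_pid, SIGCONT);             /* resume the browser */
    }

    /* The user chose to terminate: SIGKILL cannot be caught or blocked. */
    return kill(browser_pid, SIGKILL);
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <browser-pid>\n", argv[0]);
        return 1;
    }
    return handle_limit_hit((pid_t)atol(argv[1])) == 0 ? 0 : 1;
}
```

Note that changing another process's limits via prlimit() is subject to the usual permission rules: the helper needs CAP_SYS_RESOURCE or its real/effective/saved UID and GID must match the browser's.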
I can probably go lower with NPROC/NOFILE, but erred on the side of setting the limits somewhat conservatively.
As far as AS, DATA, and FSIZE go, I agree that they should be set somehow and I like your idea of applying soft limits, with UI integration. In general the sandbox needs more UI feedback (#20844 (closed)), but I really need to think about all of this, so the initial release probably won't ship with them set, sorry.
Once I switch to setting the rlimits on a per-container basis, these can be re-added.
I think there are websites for browser benchmarking. You could probably test acceptable limits by running those benchmarks with different resource limits set, to get at least an idea of the upper limit past which raising them further is useless.