comments([{"author":"Fred","body":"About socket tuning, modifying net.core.rmem_max and net.core.wmem_max couldn't be enough, except if your application(s) update their own socket buffer (with setsockopt(), SO_RCVBUF).\r\nIf you want these settings available for any socket, you should also update net.core.rmem_default and net.core.wmem_default, with attention using double of target value...","ip":"193.251.14.247","site":"comments.tweaked.io","time":"2018-02-17 13:38:08 +0000","body":"

About socket tuning: modifying net.core.rmem_max and net.core.wmem_max may not be enough, unless your application(s) set their own socket buffer sizes (with setsockopt() and SO_RCVBUF).
If you want these settings to apply to any socket, you should also update net.core.rmem_default and net.core.wmem_default, taking care to use double the target value...
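
For what it's worth, a rough, untested sketch of the setsockopt() case mentioned above, where the application sizes its own receive buffer and therefore only the *_max sysctls apply (the 1 MB target is just an arbitrary example value):

    /* Untested sketch: an application requesting its own receive buffer,
     * rather than relying on net.core.rmem_default. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int target = 1 << 20;            /* ask for a 1 MB receive buffer */
        socklen_t len = sizeof(target);

        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &target, sizeof(target));

        /* Read back what was granted: Linux caps the request at
         * net.core.rmem_max and then doubles it for bookkeeping overhead. */
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &target, &len);
        printf("effective SO_RCVBUF: %d bytes\n", target);

        close(fd);
        return 0;
    }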

\n","ago":"53 weeks ago","id":3},{"author":"Noah Spurrier","body":"Regarding \"File Handle Limits\". Shouldn't the limit be tuned to peak demand? Check `dmesg` for errors that look like this:\r\n\r\n [510822.012643] VFS: file-max limit 400 reached\r\n\r\n(I set a limit of 400 to force this message for this example.)\r\n\r\nThere is also a per-process limit, \"/proc/sys/fs/nr_open\", which I believe is not the same as a possible `ulimit` that may also be applied per process (or cgroup, as well, perhaps?).\r\n\r\nOn a system where you find yourself bumping into the \"file-max\" limit it is likely not due to a single service because a single service should hit the \"nr-open\" limit before the system hits the \"file-max\" limit. I'm speculating here. Perhaps applications such as Apache2 where they may prefork many processes may circumvent the \"nr-open\" limit and get the entire system up to the \"file-max\" limit. I'd be interested to hear from someone who has actually had to increase the file descriptor limits.","ip":"64.56.206.254","site":"comments.tweaked.io","time":"2018-04-09 23:15:55 +0100","body":"

Regarding "File Handle Limits". Shouldn't the limit be tuned to peak demand? Check dmesg for errors that look like this:

\n\n
[510822.012643] VFS: file-max limit 400 reached\n
\n\n

(I set a limit of 400 to force this message for this example.)
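
If it helps, here is a rough, untested sketch of gauging demand against the system-wide limit by parsing /proc/sys/fs/file-nr, whose three fields are allocated handles, unused handles, and the fs.file-max ceiling:

    /* Untested sketch: current file-handle allocation vs. fs.file-max. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long allocated, unused, max;
        FILE *f = fopen("/proc/sys/fs/file-nr", "r");

        if (!f || fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) != 3) {
            perror("file-nr");
            return 1;
        }
        fclose(f);

        printf("file handles: %lu allocated of %lu (%.1f%% used)\n",
               allocated, max, 100.0 * allocated / max);
        return 0;
    }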

There is also a per-process limit, "/proc/sys/fs/nr_open", which I believe is not the same as a possible `ulimit` that may also be applied per process (or cgroup, as well, perhaps?).
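
A rough, untested sketch of printing the two per-process ceilings side by side, assuming Linux; as far as I know, fs.nr_open is the hard ceiling that RLIMIT_NOFILE (the `ulimit -n` value) can never be raised above:

    /* Untested sketch: the ulimit (RLIMIT_NOFILE) vs. the fs.nr_open ceiling. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        unsigned long nr_open = 0;
        FILE *f = fopen("/proc/sys/fs/nr_open", "r");

        if (f) {
            if (fscanf(f, "%lu", &nr_open) != 1)
                nr_open = 0;
            fclose(f);
        }
        getrlimit(RLIMIT_NOFILE, &rl);

        printf("RLIMIT_NOFILE: soft=%lu hard=%lu, fs.nr_open=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max, nr_open);
        return 0;
    }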

On a system where you find yourself bumping into the "file-max" limit, it is likely not due to a single service, because a single service should hit the "nr_open" limit before the system hits the "file-max" limit. I'm speculating here. Perhaps applications such as Apache2, which may prefork many processes, can circumvent the "nr_open" limit and push the entire system up to the "file-max" limit. I'd be interested to hear from someone who has actually had to increase the file descriptor limits.

\n","gravitar":"http://www.gravatar.com/avatar/a70b738309454fc003206983e505794b","ago":"45 weeks ago","id":2},{"author":"Shane Grant ","body":"Under filesystem tuning you have a typo, one place you list notime, and another you list noatime. ","ip":"74.128.97.197","site":"comments.tweaked.io","time":"2018-08-20 06:12:24 +0100","body":"

Under filesystem tuning you have a typo: in one place you list notime, and in another you list noatime.

\n","gravitar":"http://www.gravatar.com/avatar/56d95e93cd76a146af993757e2bef783","ago":"26 weeks ago","id":1}])