[Wolves] Limiting resources on php-cli children
Chris Ellis
chris at intrbiz.com
Tue Apr 14 13:30:12 UTC 2015
On Tue, Apr 14, 2015 at 1:42 PM, Simon Burke <simonb at fatsportsman.eu> wrote:
> So I have a shared hosting platform that uses suphp to ensure scripts are
> run as the respective user. However, some sites are taking the mick 'a bit'
> with CPU time. The web content isn't mine to make changes to, so I would
> like to limit the amount of CPU they get, or be able to nice these processes.
>
> The problem is that so far I have attempted a few things but none appear to
> be doing what I'm trying to achieve. So far I have tried using RLimitCPU
> in the Apache config, and setting limits for users in
> /etc/security/limits.conf. Neither of these appears to be limiting the
> php-cgi processes suphp spawns (as the respective user).
>
> Can anyone suggest a better method? Or possibly tell me why I'm going about
> it the wrong way and what a better method would be?
You want to look at using cgroups (control groups); they were designed for this
kind of problem. cgroups are also a building block of LXC and systemd. You're
going to want a recent kernel, though.
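For example, with the cgroup filesystem you can cap a user's CPU share by hand
(a minimal sketch, assuming cgroup v1 with the cpu controller mounted at
/sys/fs/cgroup/cpu; the group name and PID here are made up):

```shell
# Create a cgroup for one customer (requires root):
mkdir -p /sys/fs/cgroup/cpu/customer1

# cpu.shares is a relative weight: 256 vs the default 1024 means
# roughly a quarter of the CPU when there is contention.
echo 256 > /sys/fs/cgroup/cpu/customer1/tasks_dir_placeholder 2>/dev/null
echo 256 > /sys/fs/cgroup/cpu/customer1/cpu.shares

# Move an existing php-cgi process (PID 1234, hypothetical) into the group:
echo 1234 > /sys/fs/cgroup/cpu/customer1/tasks
```

Note that cpu.shares only throttles under contention; if you need a hard cap,
the cfs quota knobs (cpu.cfs_quota_us / cpu.cfs_period_us) do that instead.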
Not sure how you would get Apache to fork into a specific cgroup. This might be
easy if you are using the PHP FastCGI setup, whereby you can specify a wrapper
script.
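Such a wrapper could be as simple as this sketch (the cgroup path is an
assumption, and the group must already exist; adjust for your mount point and
php-cgi location):

```shell
#!/bin/sh
# Hypothetical FastCGI wrapper: place this shell (and therefore the
# php-cgi it exec's, which inherits the cgroup) into a pre-created
# cgroup before starting PHP.
echo $$ > /sys/fs/cgroup/cpu/customer1/tasks

# exec replaces the shell with php-cgi so no extra process hangs around:
exec /usr/bin/php-cgi "$@"
```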
An alternative approach would be to run multiple web servers in different
cgroups and then use HTTP routing to map your customer virtual hosts to them.
That might even be a stepping stone to fully containerising your customers.
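The multi-server idea might look something like this (a sketch only: it assumes
libcgroup's cgroup-tools are installed, and the group names, weights, and
Apache config paths are all invented for illustration):

```shell
# Create two CPU cgroups with different relative weights (requires root):
cgcreate -g cpu:/web-light
cgcreate -g cpu:/web-heavy
echo 1024 > /sys/fs/cgroup/cpu/web-light/cpu.shares
echo 256  > /sys/fs/cgroup/cpu/web-heavy/cpu.shares

# Launch a backend Apache instance inside each group; children spawned
# by each instance (e.g. php-cgi via suphp) stay in the same cgroup:
cgexec -g cpu:/web-light /usr/sbin/apache2 -f /etc/apache2/light.conf
cgexec -g cpu:/web-heavy /usr/sbin/apache2 -f /etc/apache2/heavy.conf
```

A front-end proxy (Apache mod_proxy, nginx, etc.) would then route each
customer's virtual host to the appropriate backend.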
Chris