[Gllug] Restricting the number of child procs
Kostas Georgiou
k.georgiou at imperial.ac.uk
Mon Jul 7 12:08:00 UTC 2008
On Fri, Jul 04, 2008 at 05:07:07PM +0100, John Hearns wrote:
> On Fri, 2008-07-04 at 15:27 +0100, Frazer Irving wrote:
> > Hi everybody,
> >
> > I'm currently having trouble with a Perl script running on a Solaris
> > server. The script forks quite a bit and it seems that when the server
> > is under any sort of load the fork calls fail,
>
> Talking about fork failures, I had a case yesterday on a cluster. The
> system's head node locked up: no more logins, and no further processes
> could be forked because the maximum number of open files had been
> reached. It wasn't possible to log into the console either.
> Luckily, we had an ssh session already open on it. Most commands don't
> work in that state, of course, and we were reluctant to pull the power
> on it as it is a live system.
>
> echo 2000000 > /proc/sys/fs/file-max revives things for about 30 seconds
> (now there's a bit of systems-fu for you)
>
> echo 20000000 > /proc/sys/fs/file-max keeps it up long enough for us to
> find out the rogue job and delete it.
>
> Sadly, commercial etiquette prevents me from carrying a large LART in
> the boot of the car.
Just out of curiosity, how did the job manage to get more than 1024
fds? At least on Red Hat the default resource limit is 1024 open files
per process, so was the default changed, or was the job forking madly
as well as opening files?
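On Linux something like this shows where a given process stands (a
sketch; 12345 is a made-up pid, and /proc/<pid>/limits needs a
reasonably recent kernel):

    # soft limit on open files for new processes in this shell
    ulimit -n

    # the limit a running process was actually started with
    grep 'Max open files' /proc/12345/limits

    # how many fds it really has open right now
    ls /proc/12345/fd | wc -l

Raising the default for a user normally goes through
/etc/security/limits.conf (pam_limits), e.g. a "someuser soft nofile
4096" line, rather than everyone bumping ulimit by hand.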
Cheers,
Kostas