[Gllug] Disk wait processes, load averages, and {send,fetch,proc}mail
Nix
nix at esperi.demon.co.uk
Sat Sep 28 00:26:52 UTC 2002
On Thu, 26 Sep 2002, Tethys moaned:
> I have a problem. Our main file and print server has a number of process
> hung in a disk wait (uninterruptible sleep) state:
Ooops, bad sign.
> root 22716 0.0 0.0 0 0 ? DW Sep17 0:00 [lockd]
> root 23069 0.0 0.0 1968 720 ? D Sep17 0:00 mount accuhost01:
> root 23499 0.0 0.0 1972 724 ? D Sep17 0:00 mount /accucard/d
> root 24426 0.0 0.0 1972 724 ? D Sep17 0:00 mount /accucard/d
> root 25402 0.0 0.0 0 0 ? DW Sep17 0:00 [lockd]
> root 25433 0.0 0.0 1972 724 ? D Sep17 0:00 mount /accucard/d
> root 25647 0.0 0.0 0 0 ? DW Sep17 0:00 [lockd]
> root 26055 0.0 0.0 1972 724 ? D Sep17 0:00 mount /accucard/d
> root 26612 0.0 0.0 1968 720 ? D Sep17 0:00 mount accuhost01:
> root 32215 0.0 0.0 0 0 ? DW Sep17 0:00 [lockd]
> root 13787 0.0 0.0 1972 724 ? D Sep18 0:00 mount /accucard/d
> root 18714 0.0 0.0 0 0 ? DW Sep25 0:00 [lockd]
> root 18911 0.0 0.0 0 0 ? DW Sep25 0:00 [lockd]
> root 19022 0.0 0.0 1972 908 ? D Sep25 0:00 mount -t nfs -a
> root 19282 0.0 0.0 0 0 ? DW Sep25 0:00 [lockd]
>
> No, I don't know why they're hanging. But once they're in that state,
> I know of nothing short of a reboot that can clear them. Because it's
Nothing short of a reboot --- or unblocking whatever they're blocked on
in the kernel --- can.
What does `ps -o pid,args,stat,wchan' reveal for these processes? (It's
the `wchan' that's really important, because that's where they're
stuck.) If you look at the address itself (rather than the System.map-
looked-up form) you can track it down to the blocked instruction.
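Something along these lines will pull out just the stuck processes (a sketch using procps column names; the `:32' merely widens the wchan field so the symbol name isn't truncated):

```shell
# List every process in uninterruptible sleep (STAT begins with D),
# together with the kernel wait channel (wchan) it is blocked in.
# NR==1 keeps the header line; $2 is the STAT column.
ps -eo pid,stat,wchan:32,args | awk 'NR==1 || $2 ~ /^D/'
```

The wchan column is the symbol (or address) the process is sleeping in, which is exactly the hint you want.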
Has the kernel oopsed on you recently? Because one cause of this is
something keeling over inside the kernel and leaving an fs-level or
device-level lock held.
> our main file and print server, rebooting is a politically non-viable
> solution at the moment.
Oops.
> So until we get a suitably quiet time, we're stuck with them.
> Processes in disk wait state aren't in themselves a problem.
> However, because they're in the run queue, they count towards
> the load average.
I've often considered this to be a misfeature at best... IMNSHO only
processes actually waiting on a request queue or actively running
should count, but that might be quite expensive to compute :( (maybe
the new block I/O stuff will help here...)
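(For the record, Linux folds both runnable and uninterruptible-sleep processes into the load average; a rough way to see that for yourself --- /proc/loadavg is Linux-specific, hence the guard:)

```shell
# Count processes in R (running) or D (uninterruptible sleep) state ---
# the set Linux feeds into the load average --- then show the kernel's
# own figures for comparison.
ps -eo stat= | awk '/^[RD]/ {n++} END {print n+0, "processes in R/D state"}'
[ -r /proc/loadavg ] && cat /proc/loadavg
```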
> Unfortunately, this adversely affects sendmail[1], which stops
> accepting connections when the load average reaches a certain
> threshold:
Yep.
> 1247 ? S 0:00 sendmail: rejecting connections on daemon MTA: load average: 15
>
> Now I've tried to configure this with the RefuseLA option. But for some
> reason, it isn't working.
What version of sendmail? It certainly works for me (I had to knock them
*down* or my little 8Mb 486 firewall got snowed under when large email
batches came in).
> Furthermore, because sendmail isn't accepting connections, I get:
>
> fetchmail: SMTP connect to localhost failed
> fetchmail: can't raise the listener; falling back to /usr/bin/procmail -d %T
>
> This works fine with the caveat that each message is being converted to
> have DOS-style CR/LF line endings, rather than just the traditional Unix CR.
Stick the messages through
:0 f
| /usr/bin/tr -d '\015'
or something similar?
(sorry, that is too ugly to live, I know... ;} )
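(Ugly or not, the filter does do the right thing to a wire-format message --- a quick demonstration of what tr -d '\015' leaves behind:)

```shell
# A CRLF-terminated message piped through tr -d '\015' comes out with
# plain LF (Unix) line endings; od -c shows the bytes to prove it.
printf 'Subject: test\r\n\r\nbody\r\n' | tr -d '\015' | od -c
```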
> So I guess I have several questions:
>
> 1. Should disk wait processes contributing to the load average be
> considered a bug?
Disk wait, no --- but these are probably not waiting for the *disk* per
se. (The wchan field will give a bigger hint about what they're waiting
for.)
> 2. Is there anything I can do about them short of a reboot?
Probably not; if you can work out what resource they're waiting on and
release it things will be happy --- but I'll admit to only having done
that once (an oops back in the 2.2 days led to the superblock lock on my
mp3 disk being held. Obviously losing access to my mp3s was critical so
I whipped up a tiny kernel module to flip the flag back. A bit ugly,
particularly since IIRC it's not an exported symbol so I had to dig
around in /dev/kmem and shove an absolute address in the module...)
> 3. Is there any debugging option to sendmail that will show the
> current RefuseLA threshold value?
No, although there is some #if 0'd-out code in conf.c:shouldqueue(),
displayed under -d3.30, that reports whenever CurrentLA >= RefuseLA.
It'd not be hard at all to add a -d switch that prints RefuseLA. I'd
roll it for you if I knew what sendmail version you had :)
> 4. Is there any way I can get sendmail to accept connections when
> the load average is greater than 12 (either via RefuseLA or some
> other method)?
RefuseLA is the one; increase it.
(If it's not working, something odd is going on. If you're using M4,
check the resulting .cf file to be sure that the flag's only being set
once...)
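(A sanity-check sketch: in the generated .cf, RefuseLA appears as an `O RefuseLA=<n>' line and should appear exactly once. The fragment below greps a tiny sample .cf; point it at your real /etc/mail/sendmail.cf --- or /etc/sendmail.cf on older installs --- instead:)

```shell
# Build a minimal sample .cf fragment, then check how many times
# RefuseLA is set.  Substitute your real sendmail.cf path for "$CF".
CF=$(mktemp)
cat > "$CF" <<'EOF'
O QueueLA=8
O RefuseLA=12
EOF
grep -n '^O RefuseLA' "$CF"
echo "RefuseLA set $(grep -c '^O RefuseLA' "$CF") time(s)"
rm -f "$CF"
```

If that count comes out as anything other than 1, the M4 build has set the option more than once and the last setting wins.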
> This works fine with the caveat that each message is being converted to
> have DOS-style CR/LF line endings, rather than just the traditional Unix CR.
CR/LF is Internet wire format (but you knew that); LF is traditional
Unix (but you knew that) ;)
> 5. Does anyone know why fetchmail/procmail is adding extra LF characters,
> and what I can do to change it?
This is strange.
I'd have thought that sendmail was simply passing the raw wire-format
message to the delivery agent, but I'm using procmail as my delivery
agent and I don't see this, and deliver.c:putbody() in sendmail takes
considerable care to remove the CRs.
I'd suggest using the kludge procmail filter :))))
--- what version of procmail is this? All versions above 3.15.2 have
known bugs, sometimes serious (yielding message corruption).
> [1] Despite what the masses claim, the more I play with the newer
> versions of sendmail, the more I like it -- you'd have to
> have a *very* convincing argument to convince me to switch
> to exim/qmail/postfix/whatever
It's wonderfully powerful, yes, but *so* *damn* *cryptic*. I'll admit
that I prefer to configure sendmail by hacking the source than by
editing the .cf file, which says something uncomplimentary about said
.cf file...
--
`Let's have a round of applause for those daring young men
and their flying spellcheckers.' --- Meg Worley
--
Gllug mailing list - Gllug at linux.co.uk
http://list.ftech.net/mailman/listinfo/gllug