[Gllug] dd and iotop weirdness
general_email at technicalbloke.com
Fri Dec 3 01:08:45 UTC 2010
On 30/11/10 09:51, Chris Bell wrote:
> On Mon 29 Nov, general_email at technicalbloke.com wrote:
>
>> What on earth is going on?
>>
>> Yours confused,
>>
>> Roger.
>
> Was the bootable live CD trying to access swap space on the first drive?
No, the drives were GPT/UFS-formatted RAID 1 volumes from a BSD box;
there was no swap space on them. Also, the wipes were a fair way in and
one drive was already partially wiped (and the boot sector would have
been the first thing to go) :/ I did remove the drive that stopped and
scanned it for errors; it found none, although the other, slower one did
turn up several bad sectors. Much as I would have liked to get to the
bottom of it, I was pushed for time, so I put the two drives in two
separate machines and all was fine from there on in. I've returned the
slower, faulty one to the retailer now.
Still, I'm puzzled as to how one SATA drive, even if it has faults,
could knock the other offline. I would chalk it up to coincidence if it
weren't instantaneous (the LED in the caddy went out as soon as I
started ddrescue). I was wondering if it might have anything to do with
"watch". I was surprised to see the output from both watches appear in
the same console even though I started them in separate terminals; that
suggests to me some kind of shared state / class attributes that might
allow the processes to affect each other. Having said that, I thought
watch worked by sending "signals" via the operating system rather than
by any shared-memory shenanigans. TBH I don't know nearly enough about
either watch or signals to make any informed comments, so I'll stop
rambling now!
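For what it's worth, my mental model of watch (an assumption on my part,
I haven't read the procps source) is that there's no shared state or
signalling between instances at all: each watch just re-runs its command
on a timer and redraws the screen. Something like this sketch, with
"date" standing in for the watched command and the loop capped at three
iterations instead of running forever:

```shell
# Rough sketch of what I imagine watch(1) does: run the command,
# print its output with a header, sleep, repeat. No shared memory,
# no signals between separate watch processes.
cmd="date"          # stand-in for the watched command
interval=1          # seconds between runs, like watch -n 1
for i in 1 2 3; do  # real watch loops forever; capped here
    output=$("$cmd" 2>&1)
    printf 'Every %ss: %s\n\n%s\n' "$interval" "$cmd" "$output"
    sleep "$interval"
done
```

If that model is right, two watches in two terminals shouldn't be able
to interfere with each other, which makes what I saw even odder.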
Thanks to everyone else who replied BTW :)
Oh, and also: I have experienced a similar situation in the past trying
to run several large rsync jobs simultaneously on one box, to the
extent that I'm a little cagey about doing it now. That said, I haven't
run into it for a while, so I'm a bit sketchy on the details, but I seem
to remember it happening when starting a local rsync while I had an SSH
one already running, or maybe it was the other way round. Anyway, I'm
worried I might have developed a superstition there: is multiple
simultaneous rsyncing of large volumes (>100GB) something people do
without problems on a regular basis?
Roger
--
Gllug mailing list - Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug