[Gllug] JFS and bad blocks
Jack Bertram
jack at jbertram.net
Thu Oct 6 11:21:56 UTC 2005
* Peter Grandi <pg_gllug at gllug.to.sabi.co.UK> [051006 10:25]:
> Some meta points:
>
> I hate questions phrased like that; for example ostensibly you
> are only interested in knowing whether there is anyone who
> both uses JFS _and_ can give advice, but not in the advice;
> and you are ostensibly not interested in advice, even correct
> advice, from someone who is not _currently using_ JFS, even if
> they used it for twenty years :-) or know it inside out anyhow.
>
> This ain't pedantic, because in technical settings one should
> be informative and precise as to what one wants to know. Folksy
> talk like this can be very, very misleading.
No - this is pedantic, as it's very clear that I was asking for
help from experts. In fact, the sentence was a casual way of saying
"Can anyone advise me?"
> Also, questions phrased like that are often silly because they
> are about a detail when for example it is the _purpose_ that
> matters; in this case _why_ are you trying to do that? My
> advice on asking questions, for IRC, but generally applicable:
> http://tinyurl.com/bz8v4
This, on the other hand, is very fair. I should have been more precise
about *why* I was asking. Sorry.
> Now, let's pretend you asked:
>
> ''How does JFS handle bad blocks? What can I do about a
> JFS filesystem that's got them?''
Spot on. Thanks.
> The answers, inferring some likely context, are:
>
> * 'jfs_mkfs' has a '-c' option. This will create a map of
> blocks to avoid allocating, after testing with 'badblocks'.
Ok, so I have an existing filesystem so this doesn't work.
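Still, for reference (and for the replacement disk), I gather the
invocation would be something like this -- the device name is just my
guess for illustration:

    # build a new JFS filesystem, first scanning the device with
    # badblocks and recording the bad blocks so they are never
    # allocated; this destroys existing data, so fresh disks only
    jfs_mkfs -c /dev/hdb1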
> * 'jfs_fsck' has no '-c' option because, once allocated, a
> block cannot be remapped easily; what if the block is in
> the middle of a large extent? It could be done, but it is
> probably not worth doing.
Ok.
> * When a hard disc has so many bad blocks that they become
> visible to the operating system, that usually means that it
> is either failing, or that it should be low-level ''formatted'',
> which usually means running the manufacturer-supplied ''bulk
> erase'' utility. This will often rebuild the sparing tables,
> recover marginally bad sectors by rewriting them, or both.
So I need to get all the data off and replace the disk.
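Before I do, I might run a read-only badblocks scan just to see how far
gone it is; something like this, with the device name assumed:

    # read-only scan; -s shows progress, -v is verbose; this does not
    # touch the data, it only reports unreadable blocks
    badblocks -sv /dev/hdb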
> * 'fsck.* -c' is most useful for recovery; a (usually slow)
> alternative for recovering stuff on bad media is to use
>
> dd bs=512 conv=sync,noerror
>
> which will create an image copy of the partition with all
> bad sectors replaced by zeroed sectors, one that can often
> be successfully repaired with 'fsck.* -f'.
I'll try this.
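Presumably something along these lines -- the device and image paths
are made up for illustration:

    # sector-by-sector copy of the damaged partition; 'noerror' keeps
    # going past read errors and 'sync' pads unreadable sectors with
    # zeros so the image stays the right size
    dd if=/dev/hdb1 of=/mnt/spare/hdb1.img bs=512 conv=sync,noerror

    # check/repair the image rather than the failing disk
    jfs_fsck -f /mnt/spare/hdb1.img

    # then mount the repaired image read-only via loopback to copy
    # the files off
    mount -o loop,ro /mnt/spare/hdb1.img /mnt/recovered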
> * Alternatively, or as well as that, you can try a ''best
> effort'' recovery with GNU tar and its '--ignore-failed-read'
> option.
Or this. Thanks.
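Again just a sketch, with the destination path invented:

    # best-effort archive straight off the failing filesystem;
    # unreadable files are skipped instead of aborting the whole run
    tar --ignore-failed-read -czvf /mnt/spare/home-backup.tar.gz /home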
> As to the general issue of Linux file systems, bad block
> handling, and other often unnoticed features, I have recently
> been doing some moderately in-depth analysis of file systems,
> with some detailed information here:
>
> http://WWW.sabi.co.UK/Notes/anno05-3rd.html#050909
This is very interesting, thanks. I think I should be using ext3 when I
replace the disk and restore the data.
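If I am reading the mke2fs man page right, the ext3 equivalent of the
bad-block-aware creation above would be something like this (device
name invented again):

    # -j adds the journal (i.e. ext3); -c runs a read-only badblocks
    # scan first so any bad sectors found are never allocated
    mke2fs -j -c /dev/hdc1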
Many thanks for your helpful response.
Jack