[Gllug] Large files
Alain Williams
addw at phcomp.co.uk
Fri Sep 21 15:25:42 UTC 2001
On Fri, Sep 21, 2001 at 03:31:58PM +0100, Simon Stewart wrote:
> So, here I sit with a handy server designed for backups. Last night, I
> ran the backups across the network for every type of server that we
> have (to check that the thing worked as expected) And this is where
> the trouble begins.
>
> You see, the backups (which are simply compressed dumps of the
> filesystems) are impossible to play with:
>
> lewd:/backups1# ls -l *.bz2
> ls: rude-2001-09-20-usr.bz2: Value too large for defined data type
> ls: rude-2001-09-20-usrlocal.bz2: Value too large for defined data type
> -rw-r--r-- 1 root root 34331715 Sep 20 21:10 lateral-2001-09-20-.bz2
> -rw-r--r-- 1 root root 11432062 Sep 20 21:05 rude-2001-09-20-.bz2
>
> I can't even bzcat them to make sure that they can be restored
> fine. Anyone got any bright ideas about how to side-step this issue?
> Another fs? Can't split the files, either:
ls is comparing the file size against MAX_NON_LFS, which the kernel defines as:
((1UL<<31) - 1)
i.e. just under 2 GiB. You can switch to the 64 bit system calls by defining, before any system header is included:
#define _FILE_OFFSET_BITS 64
(At least that is what I deduce by reading bits of the kernel & some header files)
So all that you need to do is recompile ls, cat, ... with that defined,
and you should not have any problems.
Please let us know how you get on :-)
--
Alain Williams
--
Gllug mailing list - Gllug at linux.co.uk
http://list.ftech.net/mailman/listinfo/gllug