[Sussex] AMD Sempron Chips and BIG backups

Ronan Chilvers ronan at thelittledot.com
Mon Dec 12 12:40:56 UTC 2005


Hi Alan

On Mon, 12 Dec 2005 11:01:00 +0000
Alan Pope <alan.pope at gmail.com> wrote:

> On 12/12/05, Ronan Chilvers <ronan at thelittledot.com> wrote:
> > I'd be using Debian Sarge and the machine would be working as a file
> > server with a few big drives in a software raid array.  The design
> 
> Heheh at big drives. One man's "big" is another man's "tiny".

Oh, really!?  Outside now for a who's got the hoofingest, hugest hard
drive fight!!!

> 
> > department here is managing to generate increasingly frightening
> > file sizes which is using up disk space at an alarming rate.  I was
> > thinking of shelling out on some big SATA (or even IDE) drives and
> > chaining them into a raid5 or maybe raid10 (don't know if software
> > raid supports raid10 but I think it does).
> >
> 
> Yeah, I've just bought 8 x 200GB SATA disks for home backups myself.
> I'll RAID them as mirrors rather than RAID 5 though.

Where did you get them?  What make?  How come not RAID 5?  I like
Seagate, personally... I've had some issues with Maxtor and haven't
used Western Digital enough to comment.  Are the 8 drives going into
several machines?  Are you using any expansion cards for extra drive
channels, or have you got enough on-board channels?
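
For what it's worth, on the software RAID question from my original
mail, I think mdadm can do raid10 natively on a recent enough 2.6
kernel.  Something like this is the rough shape of what I had in mind,
completely untested here and with placeholder device names:

  # create a 4-disk raid10 array (needs the md raid10 module in the kernel)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # watch the initial sync
  cat /proc/mdstat

  # record the array so it assembles at boot (Debian keeps the config here)
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf

  # then a filesystem on top, e.g.
  mkfs.xfs /dev/md0

If the kernel doesn't have the raid10 personality, the fallback would
presumably be a pair of raid1 mirrors with raid0 (or LVM) striped
across them.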

Have you experimented with hotplugging SATA drives?  I know in theory
they can be hotplugged, but does it actually work in practice?

> 
> > Secondly, is anyone routinely backing up large amounts of disk
> > space (as in 100GB+)?
> 
> At home, no. At work, yes. We back up lots of systems, many TB per day.

OK!  I think you might win the big hard drive fight!  I'm thinking of a
couple of hundred GBs, but with Shreddies and milk every morning I'm
hoping to get bigger! :)  Even that, though, means a good few hundred
quid on a tape drive to avoid chopping the backups up too much.  The
design fellas are starting to produce some big files (14GB for a Quark
file and associated imagery), so disk usage is increasing at a rather
sickening pace!
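
If I do end up on tape, I'm assuming something like multi-volume tar
would cope with an archive that's bigger than one tape.  Just a rough
idea, with the tape device and backup path made up on my part:

  # rewind the tape first
  mt -f /dev/st0 rewind

  # -M makes a multi-volume archive; tar prompts for the next tape when one fills
  tar -cvMf /dev/st0 /home/design

  # check the archive can be read back
  mt -f /dev/st0 rewind
  tar -tvMf /dev/st0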

> 
> > How are you doing it?
> 
> Most systems at work use non-Linux-specific (AIX) IBM technology
> called "FlashCopy (tm)". We snapshot the filesystems and databases
> from their fast disks (FAStT or ESS) to another set of (sometimes
> cheaper/slower SATA) disks. We either do this hot (with the systems
> up) or cold (with them down) which takes a matter of a couple of
> minutes to start. The users don't notice that bit if they are done
> hot. Then for $(N) hours it sits in the background copying disk blocks
> from the online disks to the backup disks. Later, when that's
> finished, it gets backed up to tape.
> 
> This can be done under Linux with LVM, I believe. I have not tried it
> myself, but this link might be interesting to you.
> 
> http://www.tldp.org/HOWTO/LVM-HOWTO/snapshotintro.html

Thanks for that.  I'll have a read.  What filesystems are you using?
I've been reading about XFS and like the fact that it handles large
files well, but ext3 has a better reputation for data integrity as far
as I can see.  If it's AIX, are you using JFS?  Should I consider the
JFS port for Linux?

What I'm after is a server build that will give us room to grow: enough
disk space initially, plus the ability to add more in future.  LVM is
particularly interesting here, so I'll read up on the snapshot stuff...
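
In my head the LVM side would look roughly like the sketch below, both
for snapshot backups and for bolting on extra disk later.  Completely
untested by me, and all the volume group / LV names are made up:

  # snapshot backup: freeze a point-in-time copy, back it up, throw it away
  lvcreate --size 10G --snapshot --name design_snap /dev/vg0/design
  mount -o ro /dev/vg0/design_snap /mnt/snap   # add nouuid here if it's XFS
  tar -czf /backup/design-$(date +%Y%m%d).tar.gz /mnt/snap
  umount /mnt/snap
  lvremove -f /dev/vg0/design_snap

  # growing later: add a new disk to the volume group and extend the LV
  pvcreate /dev/sde1
  vgextend vg0 /dev/sde1
  lvextend --size +200G /dev/vg0/design
  # then grow the filesystem to match (xfs_growfs for XFS, resize2fs for ext3)

One thing I have read is that XFS can be grown but not shrunk, which
probably doesn't matter for what we're doing.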

Cheers

-- 
Ronan
e: ronan at thelittledot.com
t: 01903 739 997