[Gllug] Linux SANs (was: Maintaining music libraries in various formats)
Walter Stanish
walter.stanish at saffrondigital.com
Thu Aug 20 17:10:41 UTC 2009
> Can I ask why you are looking to build a SAN? Why not just buy one
> from one of the many good vendors out there which will probably be
> better and more reliable/scalable eg you can buy them with dual
> controllers for redundancy etc.
We've approached a fair number of vendors. Their offerings vary
significantly, from 'totally transparent' (they handle hardware
failures, provide an interface for management-level operations
such as online growth / filesystem snapshots / multi-site
replication management, guarantee smooth operation and growth
within certain parameters - always filesystem-specific) to the
other extreme of 'lump it on you' (they sell you hardware you
can use at the 'block IO' level, talk big about their
capabilities, but in essence leave it completely up to you to
do the rather daunting research on filesystems / architecture /
access methodologies / management options).
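For what it's worth, the 'management-level' operations in that
first category map fairly directly onto the ZFS command line
we've been evaluating. A rough sketch - pool, dataset, device
and host names below are all made up:

  # snapshot a filesystem (instantaneous, copy-on-write)
  zfs snapshot tank/media@2009-08-20

  # grow a pool online by adding another mirrored pair
  zpool add tank mirror c2t0d0 c2t1d0

  # replicate to a second site over ssh: a full stream first,
  # then incrementals between successive snapshots
  zfs send tank/media@2009-08-20 | \
      ssh backup-host zfs receive backuppool/media
  zfs send -i tank/media@2009-08-20 tank/media@2009-08-21 | \
      ssh backup-host zfs receive backuppool/media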
> How is it going to be a "SAN" if it's got ZFS on it? Don't you
> mean a "Nas" then?
Some of the storage architectures have many tiers. I'm not
100% clear on the terminology breakdown, if indeed there is an
accepted standard. Basically you can have an architecture as
complex as something like:
- multiple dedicated storage gateways, providing network
storage services across multiple protocols
- multiple dedicated storage machines, each with multiple
  JBODs or iSCSI targets (optionally in a mesh configuration!)
- multiple cache nodes with dedicated hardware (SSD, etc.)
- dedicated management system
- multi-site replication
.. or as simple as:
- a higher-spec box with a bunch of disks and a network
filesystem
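To make the 'storage machine' tier concrete, here's roughly
what one node looks like on Nexenta / OpenSolaris: a raidz2
pool over JBOD disks, an SSD as read cache (L2ARC), and a zvol
exported as an iSCSI target via COMSTAR. Device, pool and size
names are placeholders and the COMSTAR steps are from memory,
so treat this as a sketch rather than a recipe:

  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool add tank cache c2t0d0          # SSD read cache (L2ARC)

  zfs create -V 500G tank/lun0         # block-level volume
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0
  stmfadm add-view 600144f0...         # GUID printed by sbdadm
  itadm create-target                  # auto-generated IQN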
Obviously we're looking at something closer to the former,
since once reliability becomes a serious requirement, the
nightmare scenario of a single point of failure actually
realising its painful potential is enough to stimulate
additional expenditure.
So *insert required terminology* where appropriate.
> You could buy a server with a couple of disk shelves on
> external scsi connectors if you really wanted... not sure
> how scalable this solution would be from a hardware point
> of view compared to a vendor supplied one though.
That's the problem we're facing... it's amazingly difficult
to get clear stats / experience in this area once you pass
a certain scale. It seems nobody talks openly about their
particular large-scale setup, which leads people with urgent
requirements to simply fall back on a smart-talking vendor -
and vendors who regularly cite previous installs for
"impressive-sounding organisation here" never seem able to
talk to you properly about architecture.
This becomes more of a problem when you need to support
clients across multiple OSes - in our case 'doze, Linux and
FreeBSD.
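On the ZFS side at least, serving that mix from a single
dataset is mostly two share properties - dataset and host
names below are made up, and the in-kernel CIFS server needs
enabling separately:

  # export the same dataset over NFS (Linux / FreeBSD clients)
  # and CIFS ('doze clients)
  zfs set sharenfs=on tank/media
  zfs set sharesmb=name=media tank/media

  # Linux / FreeBSD client side:
  mount -t nfs storage-host:/tank/media /mnt/media

The share properties are the easy bit; identity and permission
mapping across the three is where it gets interesting.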
(Hint for Unix people with skills, time and some cash
to invest in hardware - you could do worse than to get
yourself into this area; it's definitely going to grow.)
> Nexenta Core isn't bad, but you need to know the solaris
> and zfs management stuff on the command line as there
> are no other management tools with it, they want you to
> buy the pro version for that.
Yes, we'd be buying the 'pro' version.
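For anyone else reading along, the command-line management in
question is along these lines (pool / dataset names made up):

  zpool status tank        # pool health, resilver progress
  zpool scrub tank         # background integrity check
  zfs list -r tank         # datasets, usage, mountpoints
  zfs get all tank/media   # per-dataset properties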
- Walter
--
Gllug mailing list - Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug