[Gllug] Linux SANs

Hari Sekhon hpsekhon at googlemail.com
Thu Aug 20 17:39:36 UTC 2009


Walter Stanish wrote:
>> Can I ask why you are looking to build a SAN? Why not just buy one
>> from one of the many good vendors out there which will probably be
>> better and more reliable/scalable eg you can buy them with dual
>> controllers for redundancy etc.
>>     
>
> We've approached a fair number of vendors.  Their offerings vary
> significantly, from 'totally transparent' (they handle hardware 
> failures, provide an interface for management-level operations
> such as online growth / filesystem snapshots / multi-site 
> replication management, guarantee smooth operation and growth 
> within certain parameters - always filesystem-specific) to the
> other extreme of 'lump it on you' (they sell you hardware you
> can use at the 'block IO' level, talk big about their capabilities,
> but in essence leave it completely up to you to do the (rather
> daunting) research on filesystems / architecture / access 
> methodologies / management options).
>   

Some vendors will sell you the consultancy to set it up for you, so 
you don't need to know too much about it; just pay them a few £K for a 
few days and they'll do it for you.

>> How is it going to be a "SAN" if it's got ZFS on it? Don't you
>> mean a "NAS" then?
>>     
>
> Some of the storage architectures have many tiers; I'm not 100%
> clear on the terminology breakdown, if indeed there is an 
> accepted standard.  Basically you can have an architecture as
> complex as something like:
>  - multiple dedicated storage gateways, providing network
>    storage services across multiple protocols
>  - multiple dedicated storage machines, each with multiple
>    JBODs or iSCSI targets (optionally in a mesh configuration!)
>  - multiple cache nodes with dedicated hardware (SSD, etc.)
>  - dedicated management system
>  - multi-site replication
>   
£££. I specced out a nice SAN only to have the budget withheld and the 
whole thing stalled... I've been trying to get one in for two years, so 
it's not a recession problem! (We're actually still doing just as well 
as before - probably because we're so extremely lean.) We badly need 
one, but budget is always the problem; I think the cost of doing 
business hasn't sunk in yet. Now if I could find an employer with a 
proper budget...

> .. or as simple as:
>  - a higher-spec box with a bunch of disks and a network
>    filesystem
>   
That's actually not a SAN, that's a NAS: file-level access over the 
network rather than block-level storage.

> Obviously we're looking at something closer to the former,
> since once reliability becomes a serious requirement the 
> nightmare scenario of a single point of failure actually
> realising its painful potential is enough to stimulate
> additional expenditure.
>   
A good situation to be in, with regard to my previous budgetary point!

>> You could buy a server with a couple of disk shelves on
>> external scsi connectors if you really wanted... not sure 
>> how scalable this solution would be from a hardware point
>> of view compared to a vendor supplied one though.
>>     
>
> That's the problem we're facing... it's amazingly difficult
> to get clear stats / experience in this area once you pass
> a certain scale; it seems nobody's talking clearly about
> their particular large-scale setup, which leads people with
> urgent requirements to simply fall back on smart-talking
> vendors, who regularly cite previous installs for "impressive
> sounding organisation here" but never seem able to talk
> to you properly about architecture.
>   
Yes, information is not very forthcoming, probably because usually 
only the larger, richer environments have the big toys, and they tend 
to be secretive about how they do things, for obvious reasons.

> This becomes more of a problem where you need to have
> clients across multiple OS, in our case 'doze, Linux and
> FreeBSD.
>   
This bit won't be a problem for any half-decent SAN. The interfaces will 
work across different OSs; that's the point of standards. Linux and 
Windows both have iSCSI initiators and FC HBAs, and everything can 
access network filesystems should you choose that as an additional 
interface to the SAN, which most good SANs should have no problem with 
(the one I specced could do it). Snapshots and all the rest of the 
goodies are also usually a given with any enterprise-grade SAN.
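
As a quick sanity check of the cross-OS point, here's a minimal Python 
sketch (the portal address is made up) that tests whether an iSCSI 
portal is reachable; the same script runs unchanged on Windows, Linux 
and FreeBSD:

import socket

# 3260 is the standard iSCSI port; the portal IP below is hypothetical,
# substitute your own SAN's portal address.
PORTAL = ("192.0.2.10", 3260)

def portal_reachable(addr, timeout=5):
    """Return True if the iSCSI portal accepts TCP connections."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("iSCSI portal reachable:", portal_reachable(PORTAL))

It only proves the portal is listening, of course; discovery and login 
are then down to each OS's own initiator.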

> (Hint for Unix people with skills, time and some cash 
>  to invest in hardware - you could do worse than get
>  yourself into this area; it's definitely going to grow.)
>   
I actually tried that, but without a training budget it's not easy to 
get into; materials and information are not freely available, and it 
tends to be vendor training only from what I can see. If I could find 
an employer to do this with, I'd jump at the chance. Sounds like you 
might be at a better employer to try this with. I'm planning to revisit 
a way around this in September/October.

>> Nexenta Core isn't bad, but you need to know the Solaris
>> and ZFS management stuff on the command line, as there
>> are no other management tools with it, they want you to
>> buy the pro version for that.
>>     
>
> Yes, we'd be buying the 'pro' version.
>   
I'm not convinced Nexenta would be the right solution for your 
requirements: it's a product that runs on your own hardware, and how 
scalable is that hardware going to be? That's why you'd want 
vendor-supplied hardware that basically plugs together, with the 
software to manage and grow it included.
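
For what it's worth, the command-line management the free version 
leaves you with is at least scriptable. A rough sketch of snapshot 
automation (the filesystem and snapshot names are hypothetical):

import subprocess

# Pool/filesystem and snapshot names are made up for illustration.
FILESYSTEM = "tank/data"

def zfs_snapshot(fs, name):
    """Create a read-only, point-in-time snapshot of a ZFS filesystem."""
    subprocess.run(["zfs", "snapshot", f"{fs}@{name}"], check=True)

def list_snapshots():
    """Return the output of 'zfs list -t snapshot'."""
    result = subprocess.run(["zfs", "list", "-t", "snapshot"],
                            check=True, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    zfs_snapshot(FILESYSTEM, "nightly")
    print(list_snapshots())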

The enterprise-grade SAN solution sounds like the one to go for in this 
case. I'd be interested to know how you get on.

One final note on cross-site replication: we did some calculations, 
and real-time block-level replication of write changes can require 
serious bandwidth between the sites if you choose to do it.
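
To give a feel for the numbers, a back-of-envelope sketch; the write 
rate below is an assumed figure, so measure your own (e.g. with iostat) 
before sizing the inter-site link:

# Rough bandwidth estimate for real-time block-level replication.
avg_write_mb_per_s = 20   # sustained average write rate in MB/s (assumed)
peak_factor = 5           # how far bursts exceed the average (assumed)

avg_mbit = avg_write_mb_per_s * 8    # 1 MB/s of writes = 8 Mbit/s on the wire
peak_mbit = avg_mbit * peak_factor

print(f"average replication traffic: {avg_mbit} Mbit/s")
print(f"peak replication traffic: {peak_mbit} Mbit/s")
# Even 20 MB/s of writes is 160 Mbit/s sustained, before protocol
# overhead, which is already more than a 100 Mbit inter-site link carries.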

-h

-- 
Hari Sekhon
http://www.linkedin.com/in/harisekhon

-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



