[Gllug] best way to update a single production server?
Joel Bernstein
joel at fysh.org
Wed May 6 08:54:18 UTC 2009
On 6 May 2009, at 09:42, Simon Wilcox wrote:
> On 6/5/09 00:08, James Hawtin wrote:
>
>> One solution I like is Xen and a SAN; if you get a SAN with
>> redundancy, dual controllers, power supplies etc., that ticks the
>> failure-resistant box.
>
> You'd think, wouldn't you? As I spent the whole of the bank holiday
> weekend in the datacentre picking up the pieces after our
> super-duper-can't-stop-won't-stop SAN stopped, I'm a bit sore on this
> point :-(
>
> Sure, it's failure resistant but software bugs can still cause
> outages.
Out of interest, what happened? Would you have been able to avoid it
by running extra boxes?
> You'll need to factor in some downtime for firmware updates or make
> sure
> you provision N+1 storage so that you can migrate volumes onto a spare
> to take the main out for updates.
AIUI one benefit of a SAN in this environment is that by separating the
storage from the interface, the two can be scaled and made redundant
independently. Redundant SAN kit is expensive, but less so than
provisioning N+1 redundant, replicated storage for every single box!
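The firmware-update case is at least scriptable once that spare exists.
Roughly this sort of thing, assuming LVM sat on top of the SAN LUNs (the
device paths and volume group name below are made up for illustration,
not anything from Simon's setup):

# Rough sketch: drain a physical volume onto the N+1 spare before taking
# the primary array out for a firmware update.  Device paths and the
# volume group name are placeholders.
import subprocess

OLD_PV = "/dev/mapper/san-primary"   # LUN on the controller being updated
SPARE_PV = "/dev/mapper/san-spare"   # the provisioned spare LUN
VG = "vg_guests"                     # hypothetical volume group

# pvmove relocates every allocated extent off OLD_PV while the logical
# volumes stay online; vgreduce then detaches the emptied PV so the
# array can be pulled for maintenance.
subprocess.check_call(["pvmove", OLD_PV, SPARE_PV])
subprocess.check_call(["vgreduce", VG, OLD_PV])

Not rocket science, but it's only painless if the spare capacity was
actually provisioned in the first place.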
Running Xen or a SAN without suitable redundancy just spreads you
thinner than the number of servers you operate would suggest. Many
boxes dependent on a few spindles or controllers magnifies your risk
exposure. The same is true to some degree of server virtualisation, of
course, though those risks are generally accepted because providing
dedicated hardware for everything costs far more. It seems dangerously
revisionist, though, no matter how useful these technologies are, to
pretend that greater leverage on fewer physical hardware assets doesn't
imply the potential for much lower availability after just a few
failures, mitigated only by the software's ability to migrate
volumes/VMs automatically when something dies. And obviously that
software is written by people, so...
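For what it's worth, that automatic-migration safety net often boils
down to a loop not much smarter than this sketch (the host names, guest
list and health check are all invented; real tooling such as
heartbeat/pacemaker also worries about fencing and split-brain):

# Sketch only: notice a dead Xen host and restart its guests elsewhere.
# With shared SAN storage both hosts can see the guests' disks, and the
# source host is already gone, so "xm create" on the standby rather than
# a live migrate.
import subprocess

PRIMARY = "xen-host-a"        # hypothetical failed host
STANDBY = "xen-host-b"        # hypothetical surviving host
GUESTS = ["web01", "db01"]    # domains we believe ran on PRIMARY

def host_alive(host):
    # crude health check: a single ICMP ping with a 2 second timeout
    return subprocess.call(["ping", "-c", "1", "-W", "2", host]) == 0

if not host_alive(PRIMARY):
    for dom in GUESTS:
        # recreate each guest on the standby from its shared config
        subprocess.call(["ssh", STANDBY, "xm", "create",
                         "/etc/xen/%s.cfg" % dom])

Which is exactly the sort of "written by people" software I mean.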
/joel
--
Gllug mailing list - Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug