[Gllug] Virtual disk allocation advice requested
David L Neil Mailing list a/c
GLLUG at getaroundtoit.co.uk
Wed Jul 2 14:35:07 UTC 2008
Bruce Richardson wrote:
> On Mon, Jun 30, 2008 at 08:11:20PM +0100, Richard wrote:
>> On Mon, Jun 30, 2008 at 01:43:29PM +0100, Bruce Richardson wrote:
>>> For example,
>>> it is perfectly possible to ensure that one domU comes up before any
>>> others (you simply have to add to or modify the standard Xen start-up
>>> scripts) and run the NFS server from there.
>> It's quite hard to ensure this. On my server I start the guests in a
>> known order, with extra sleeps between the core guests (NFS, mail,
>> DNS) and the rest. But there could still be a case where a guest has
>> to do an fsck or whatever, and the services wouldn't be up by the
>> time the other guests start. The next step beyond that is to start
>> the NFS guest and test that the NFS service is responding before
>> starting the other guests. I don't know of anything you can download
>> which actually does this though.
> This is a problem faced by environments not based on Xen, though.
> There's no magic answer and you usually have to rely on effective
> monitoring to tell you when you do encounter such a failure.
> With your entire environment based in the one Xen box (something that
> should normally only be happening for a test environment) you do have
> some options, though; you could start the key infrastructure domains
> using 'xm create -c' and scrape the output, looking for a successful
> service startup, or I suppose you could divert the domU serial port to a
> pty and have it send a specific success signal over that.
> Where you have an environment that uses Xen extensively but it is not a
> test environment, the answer is simply not to put your eggs in the one
> hypervisor basket.
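For what it's worth, the "test that the NFS service is responding before
starting the other guests" step Richard mentions could be sketched roughly
like this. The xm and rpcinfo invocations, paths, and the IP address are
illustrative assumptions, not a tested setup:

```shell
#!/bin/sh
# Rough sketch: bring up the NFS domU first, then poll until NFS
# answers before launching the remaining guests.

# Poll a command until it succeeds, or give up after N tries.
wait_for_service() {
    cmd=$1
    tries=${2:-30}
    delay=${3:-2}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if $cmd >/dev/null 2>&1; then
            return 0        # service responded
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1                # gave up
}

# Illustrative usage in a Xen start-up script (names are made up):
# xm create /etc/xen/nfs-server.cfg
# wait_for_service "rpcinfo -t 192.168.0.10 nfs" 60 5 || exit 1
# for cfg in /etc/xen/auto-late/*.cfg; do xm create "$cfg"; done
```

The polling helper is generic, so the same pattern would work for
checking LDAP or SMTP guests with a different probe command.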
This is not what I thought I was asking, way back, but this twist to the
discussion has been interesting.
The interpretation that is emerging, in my mind at least, is that in a
corporate situation where /home is on an NFS server, it might be better
to have a dedicated server (one machine, one task) both for security and
for performance-dependence reasons, i.e. some things might be
'virtualisable', but others are best left alone.
If we were talking about a corporation or ISP with a rack of servers or a
whole server farm to manage (and planning to rationalise/virtualise),
would the following make sense?
1. Functions to keep on dedicated, non-virtualised Linux/BSD/etc. hosts:
   - NFS serving of /home dirs
   - ID and auth functions, e.g. LDAP, MySQL, Kerberos
2. Functions which could run in (para)virtualised domUs:
   - mail: SMTP, POP/IMAP
   - LDAP for non-ID directories
   - group/departmental/shared file servers: NFS, Samba, sFTP
   - background routine job processing
Can proxy web servers and load balancers be lumped in as "HTTP" or are
they worthy of special attention?
Gllug mailing list - Gllug at gllug.org.uk