[Gllug] Virtual disk allocation advice requested

David L Neil Mailing list a/c GLLUG at getaroundtoit.co.uk
Wed Jul 2 15:26:30 UTC 2008


Bruce,

Bruce Richardson wrote:
> On Mon, Jun 30, 2008 at 11:54:56AM +0100, David wrote:
>>> I would never run NFS or anything like that from a dom0; it's a waste of
>>> the resources used by dom0 and a huge security risk.  If dom0 is
>>> compromised then the attacker gains access to all the domUs.  Running
>>> network services from dom0 just makes this much more likely.
>> Accordingly I thought to use the NFS shared area almost only as a
>> transfer mechanism, and still use for example, sFTP etc to transfer
>> stuff between VMs/domains, ie web-dev to acc-test (sub)domains. Given
>> that the NFS storage will not be intrinsic to the operation of any VM
>> does it still provide such an attack vector?
> Yes.  If you are running any network service from dom0 then that service
> is a potential vulnerability; if the service is successfully compromised
> then the attacker has access to dom0 and dom0 has *full* access to all
> the domUs.  Don't do this if you can do it any other way.  For example,
> it is perfectly possible to ensure that one domU comes up before any
> others (you simply have to add to or modify the standard Xen start-up
> scripts) and run the NFS server from there.

The Xen docs don't seem to agree with this. My notes say they advise
against relying, at start-up, on events in one domain having completed
before another domain is brought up.

Elsewhere Rich has suggested adding deliberate delays. Do you have other
ideas?
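
For the record, the crude fallback I have in mind (along the lines of
Rich's deliberate delays) is simply to start the storage domU first,
sleep, and then launch the rest, rather than relying on one domain
signalling another. A rough sketch only: the config file names and the
60-second pause below are invented, not my actual set-up.

    #!/usr/bin/env python
    # Sketch: bring up the NFS domU first, wait a while, then start the
    # remaining guests.  File names below are placeholders.
    import subprocess, time

    XEN_DIR = "/etc/xen"
    FIRST   = "nfs-server.cfg"                 # domU exporting the shared area
    OTHERS  = ["web-dev.cfg", "acc-test.cfg"]  # the other guests

    subprocess.check_call(["xm", "create", "%s/%s" % (XEN_DIR, FIRST)])
    time.sleep(60)   # deliberate delay: let the guest boot and export NFS

    for cfg in OTHERS:
        subprocess.check_call(["xm", "create", "%s/%s" % (XEN_DIR, cfg)])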


>>> For security, I prefer to have the domUs bridging across one physical
>>> interface (or bonded pair) and dom0 accessible via a separate one (on a
>>> different subnet and network segment if at all possible.)
>> Have you now moved away from 'disk' to talking about virtual network
>> interfacing? Yes, I thought it might actually be easier to allocate each
>> DomU its own MAC and IPaddr. 
> 
> That's what I do.  But while each Xen domain has its own virtual
> interface, to communicate with the outside world the virtual interfaces
> have to be associated with a physical interface on the box.  For
> security, I have dom0 on one physical interface and all the domUs on
> another; if those physical interfaces are connected to different
> switches/networks, this means that there is no danger of traffic to the
> dom0 being sniffed/intercepted from one of the domUs.  It also has the
> extra benefit that admin access to dom0 is not affected by heavy network
> traffic on the connection used by the domUs.

Interesting and valuable.
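
For my own notes, the per-guest network configuration I had in mind
looks something like the fragment below (Xen domU config files are
Python syntax). It is a sketch only: the bridge name xenbr1, the MAC
and the memory figure are illustrative, and it assumes dom0's own
management interface sits on the other physical NIC, as you describe.

    # fragment of a hypothetical /etc/xen/web-dev.cfg
    name   = "web-dev"
    memory = 256
    # one virtual NIC with its own fixed MAC (00:16:3e is the Xen OUI),
    # attached to the guest-only bridge on the second physical interface;
    # dom0's management interface stays on the first NIC / separate subnet
    vif    = ["mac=00:16:3e:00:00:01, bridge=xenbr1"]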

My situation is a little old 'test server' in a SOHO environment. It is
one of fewer than half a dozen machines (at busiest/most populous) and
almost non-critical. That doesn't alter the security implications,
however! I will have to rely upon a mix of NAT, port forwarding, and
internal security precautions. At this time only the acceptance-testing
web server is externally accessible, but I did make my (separate)
desktop VPN-able at one time and would quite like to do something
similar again.
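
Strictly for my own notes, if that forwarding were done on the gateway
box itself rather than left to the SOHO router, the rule I mean is a
single DNAT from the external interface to the acceptance-test guest.
A sketch only: eth0 and the guest's internal address are invented.

    #!/usr/bin/env python
    # Sketch: forward inbound HTTP to the acc-test domU and masquerade
    # outbound guest traffic.  Interface and address are assumptions.
    import subprocess

    ACC_TEST_IP = "192.168.1.20"   # hypothetical address of the acc-test guest

    rules = [
        # send incoming port 80 on the external interface to the guest
        ["iptables", "-t", "nat", "-A", "PREROUTING", "-i", "eth0",
         "-p", "tcp", "--dport", "80",
         "-j", "DNAT", "--to-destination", ACC_TEST_IP + ":80"],
        # hide the guests behind the box's external address on the way out
        ["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", "eth0",
         "-j", "MASQUERADE"],
    ]

    for rule in rules:
        subprocess.check_call(rule)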

I'm enjoying learning about the nuts and bolts of virtualisation, and
having fun putting something together that will (hopefully, eventually)
be useful for both home and work; but I'm also starting to become quite
wary of being swept along by the hype-tide when it comes to what I'd be
prepared to put in a VM in a client situation (which is not all bad,
provided it is realistic!).

Many thanks,
=dn

-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



