[Gllug] Constrained virtual machines
Daniel P. Berrange
dan at berrange.com
Fri Dec 1 12:47:26 UTC 2006
On Fri, Dec 01, 2006 at 11:09:38AM +0000, Richard Jones wrote:
> On Fri, Dec 01, 2006 at 10:44:02AM +0000, Richard Huxton wrote:
> > Richard Jones wrote:
> > >Be aware that disk performance under any VM I've seen sucks compared
> > >to the real hardware ... So if you're testing a database, where disk
> > >I/O performance really matters, it's not clear the test will be fair.
> >
> > Ah, but I don't care if disk access is 10 times slower than for-real. So
> > long as I can then scale CPU and memory by the relevant amount to keep
> > things balanced.
>
> Actually I'm afraid that the effects might be more subtle than that.
> Remember that disk I/O isn't just "XX megabytes/sec". It's a complex
> mixture of types of operation, each having their own latency,
> throughput and other peculiarities. I would expect[1] that issuing
> commands directly over SATA to a directly connected disk would show
> an entirely different set of results from issuing commands through a
> hypervisor into a contended host domain and then down to disk. The
> latencies I would expect to be considerably greater, but effects like
> disk seek times would be lost in the noise, and throughput would be
> reduced by only a constant factor. Be interesting to see some real
> world tests though ...
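Richard's breakdown is easy to see for yourself. Here's a rough sketch (my own,
not from either post - it assumes Python on a Linux box and a writable scratch
path, /tmp/io-probe.dat here) that times sequential throughput separately from
random-access latency. Run it on bare metal and then inside a guest and you can
see which of the two the hypervisor is actually hurting:

import os, random, time

PATH = "/tmp/io-probe.dat"        # hypothetical scratch file - put it on the disk you care about
SIZE = 256 * 1024 * 1024          # 256MB test file
BLOCK = 64 * 1024
OPS = 1000                        # number of random reads to time

# Phase 1: sequential write throughput, forced out to disk with fsync
buf = b"\0" * BLOCK
start = time.time()
f = open(PATH, "wb")
for _ in range(SIZE // BLOCK):
    f.write(buf)
f.flush()
os.fsync(f.fileno())
f.close()
print("sequential write: %.1f MB/s" % (SIZE / 1048576.0 / (time.time() - start)))

# Phase 2: random-access latency; note the page cache will hide real seek
# costs unless you drop caches (or use O_DIRECT) between the two phases
fd = os.open(PATH, os.O_RDONLY)
start = time.time()
for _ in range(OPS):
    os.lseek(fd, random.randrange(0, SIZE - BLOCK), os.SEEK_SET)
    os.read(fd, 512)
os.close(fd)
print("random reads: %.2f ms/op" % ((time.time() - start) * 1000.0 / OPS))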
With Xen and a paravirtualized kernel you can get disk I/O to within
95% of bare-metal performance, or better. Network I/O can be even better
than that, because moving packets between the guest & the host NIC merely
requires a page flip & one copy. Since Xen 3.0.4 the guest will use huge
MTUs and let the host NIC do TCP segmentation as needed, so we get near
wire-speed network access for the guests. This is an order of magnitude better
than disk/network I/O in VMWare - you simply can't get this kind of
performance without para-virtualized disk/net drivers.
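If you want to check that a guest really is on those paravirtualized fast
paths, the quickest sanity test I know of is to look for the xvd block devices
and the NIC offload settings from inside the guest. A minimal sketch of that
(my own illustration, not part of Xen; "eth0" is an assumption - substitute
your guest's interface):

import subprocess

# blkfront devices show up as xvdX; emulated IDE/SCSI would be hdX/sdX
parts = open("/proc/partitions").read()
if "xvd" in parts:
    print("paravirt block devices (xvdX) present")
else:
    print("no xvd devices - probably using emulated IDE/SCSI instead")

# ethtool -k lists offload settings, including TCP segmentation offload
out = subprocess.Popen(["ethtool", "-k", "eth0"],
                       stdout=subprocess.PIPE).communicate()[0]
for line in out.decode().splitlines():
    if "segmentation" in line:
        print(line.strip())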
Of course things get more complicated as you scale up the number of guests
running concurrently, but the figures I've seen for scaling Xen up to 8 guests
are very good indeed in terms of overall throughput & fairness of I/O
balancing. What's much harder to measure is how things perform when
guests have radically different I/O loads from each other, simply because
there are so many tunables. You get really non-intuitive scenarios
when it comes to deciding how much memory to give to each guest,
depending on its access patterns. It may actually be beneficial to take
memory away from I/O-bound guests (without actually impacting their
throughput) & give it to others, increasing overall system utilization.
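To make that concrete: the knob involved is just the balloon target, which you
can move between running domains with xm mem-set. A hypothetical sketch of
that kind of rebalancing (the domain names and sizes are made up for the
example, and any real policy would obviously be driven by measurement rather
than hard-coded numbers):

import subprocess

def set_mem(domain, mb):
    # xm mem-set adjusts a running domain's balloon target, in MB
    subprocess.check_call(["xm", "mem-set", domain, str(mb)])

# Hypothetical policy: the I/O-bound guest keeps enough for its working
# set, and the memory-hungry guest gets whatever is freed up.
set_mem("db-guest", 512)      # down from 1024MB; throughput unchanged
set_mem("app-guest", 1536)    # up from 1024MB; benefits from the extra RAM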
Regards,
Dan.
--
|=- GPG key: http://www.berrange.com/~dan/gpgkey.txt -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- berrange at redhat.com - Daniel Berrange - dan at berrange.com -=|