[Nottingham] Opportunistically maximising resource utilisation

Martin martin at ml1.co.uk
Tue Sep 23 12:32:29 UTC 2008


Michael Erskine wrote:
> Un-ask this question: nice 19 is the correct answer to the question
> you need to ask :)

Sorry, no it isn't.

That considers a mere microcosm of the bigger picture. Even at "nice
19" you can easily bring a system to its knees.

There's some tweaking that can be done if you have full control of the
box and can choose which kernel you run and which scheduler it uses.
And even then, let an innocuous task eat past a critical minimum of
spare RAM, or past the sweet spot for disk or network IO (especially
with NFS!), and you're suffering...
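
For the avoidance of doubt: "nice" only talks to the CPU scheduler;
disk IO has its own priority knob, and spare RAM and NFS traffic have
none at all. A rough Python sketch of launching a background job
"politely" (the "./heavy-job" path is just a placeholder, and it
assumes util-linux's ionice with the CFQ idle class):

  import os, subprocess

  # Drop our own CPU priority as far as it will go.  This only affects
  # the CPU scheduler; it says nothing about RAM or disk/network IO.
  os.nice(19)

  # Disk IO priority is a separate knob.  Assuming util-linux's ionice
  # and the CFQ scheduler, class 3 ("idle") means the job only gets the
  # disk when nothing else wants it.  "./heavy-job" is a placeholder.
  subprocess.call(["ionice", "-c3", "nice", "-n", "19", "./heavy-job"])

  # Even then, nothing above stops the job eating all the spare RAM or
  # poisoning the page cache, which is exactly the problem.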


Two crazy but simple, plausible examples:

I wish to run a Primes Sieve using every spare bit available on the HDD
to quickly find as many primes as possible (low RAM but high IO and high
disk cache usage/poisoning);

Or

I wish to run a POVRay trace of a few million objects with full
radiosity (high RAM usage, CPU intensive).

And both are to run in the background without disturbing the user,
other than requiring that the PC be left on.


So how can the program check that it isn't unduly clobbering other,
higher-priority processes?
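
The crudest answer I can come up with is to make the job police
itself: poll the load average and free memory through /proc and back
off whenever the box looks busy. A sketch (Python, Linux-only; the
thresholds and do_one_chunk() are made up):

  import time

  MAX_LOAD = 1.0            # made-up threshold: 1-minute load average
  MIN_FREE_KB = 256 * 1024  # made-up threshold: keep ~256MB spare

  def machine_is_busy():
      # First field of /proc/loadavg is the 1-minute load average.
      load1 = float(open("/proc/loadavg").read().split()[0])
      # Count free RAM plus easily reclaimable buffers/cache.
      free_kb = 0
      for line in open("/proc/meminfo"):
          fields = line.split()
          if fields[0] in ("MemFree:", "Buffers:", "Cached:"):
              free_kb += int(fields[1])
      return load1 > MAX_LOAD or free_kb < MIN_FREE_KB

  def do_one_chunk():
      # Placeholder for one small, bounded unit of the sieve or render.
      pass

  for chunk in range(10000):      # however many work units there are
      while machine_is_busy():
          time.sleep(30)          # someone else needs the box; wait
      do_one_chunk()

The obvious snag is that it only reacts after the pain has started, and
picking sane thresholds is pure guesswork.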

Using ulimit was one idea, but that doesn't allow for a graceful
completion of the offending process...
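
For completeness, the same idea from inside the process is setrlimit;
in Python, something like the sketch below (the 512MB cap is
arbitrary). The snag is exactly that: when the limit bites, the
allocation simply fails, and unless every allocation in the program is
prepared to checkpoint and bail out cleanly, there is no graceful
completion.

  import resource

  # Arbitrary cap: limit the whole address space to 512MB.
  cap = 512 * 1024 * 1024
  resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

  try:
      hog = "x" * (1024 * 1024 * 1024)   # try to grab ~1GB; will fail
  except MemoryError:
      # The only "graceful" exit is one the program provides itself,
      # e.g. write a checkpoint here and stop; a sieve or a renderer
      # mid-run usually can't.
      print("hit the limit, checkpointing and stopping")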

You could limit the programs so that they never use too much resource
at any one instant, but then you're wasting time and capacity whenever
additional resource is sitting idle. Very slow and wasteful.
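
What I would rather do is size the greedy parts dynamically: ask the
kernel what is spare right now, take a slice of that, and re-check as
the job goes along, so the limit tracks reality rather than a
worst-case guess. Roughly (reusing the /proc/meminfo reading from the
sketch above; the 50% fraction is plucked out of the air):

  def spare_ram_kb():
      # Free RAM plus buffers/page cache the kernel could reclaim.
      kb = 0
      for line in open("/proc/meminfo"):
          fields = line.split()
          if fields[0] in ("MemFree:", "Buffers:", "Cached:"):
              kb += int(fields[1])
      return kb

  # Size this pass's working set (sieve segment, render bucket, ...) to
  # half of whatever is spare right now; recompute before each pass.
  segment_bytes = (spare_ram_kb() * 1024) // 2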

There's a lot more to resource utilisation than just CPU usage...


Further ideas?

Cheers,
Martin

-- 
----------------
Martin Lomas
martin at ml1.co.uk
----------------


