[Nottingham] Opportunistically maximising resource utilisation

Martin martin at ml1.co.uk
Tue Sep 23 12:54:22 UTC 2008


Martin wrote:
[---]
> That considers a mere microcosm of a more holistic view. Even at "nice
> 19" you can easily bring a system to its knees.
[---]
> You could limit programs so that they never use too much resource at
> any instant, but then you're wasting time and resource whenever more
> is available. Very slow and wasteful.
> 
> There's a lot more to resource utilisation than just CPU usage...
> 
> 
> Further ideas?

I guess most of the kernel scheduling effort is focused purely on
keeping the CPU 'busy'...

In the past, the CPU was often the most expensive part and also the
bottleneck for getting jobs through a computer. That cost balance meant
that you always provided enough surrounding resource (RAM, virtual
memory, IO) to keep the system balanced.

Now the balance has shifted: the CPU is fast and cheap, RAM and storage
are cheap, but IO is comparatively slow (or expensive) for PCs... And so
swapping RAM out to disk is now disproportionately expensive...


Have a "nice" for all resources?
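
Linux has actually made a start here: since about 2.6.13 there has been
an I/O "nice" alongside the CPU one (ionice(1) from the shell, or the
ioprio_set() syscall from code). A rough sketch of lowering both at
once, assuming a 2.6 kernel with the CFQ I/O scheduler; the IOPRIO_*
constants mirror linux/ioprio.h, which has no userspace header, so they
are spelt out by hand here:

/* Drop both CPU and I/O priority for a background bulk job. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_BE    2   /* "best effort" I/O class */
#define IOPRIO_WHO_PROCESS 1
#define IOPRIO_PRIO_VALUE(class, data) (((class) << 13) | (data))

int main(void)
{
    /* CPU: the familiar nice 19, lowest CPU priority */
    if (setpriority(PRIO_PROCESS, 0, 19) == -1)
        perror("setpriority");

    /* I/O: best-effort class, level 7, lowest I/O priority */
    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7)) == -1)
        perror("ioprio_set");

    /* ... now run the bulk work ... */
    return 0;
}

Though even that only covers two resources out of the many.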

Or make the application aware of what is available so that it can choose
between algorithms, or tune an algorithm, to make the best use of the
available resource?... (As is already done with testing for and using
CPU features such as SIMD etc.)
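
For instance, the application could ask the kernel how much RAM is
actually free and pick an in-core or an external algorithm accordingly,
much as it would test cpuid before using SSE. A sketch only; the policy
numbers are made up:

/* Choose an algorithm based on how much RAM is really available. */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }

    unsigned long long free_bytes =
        (unsigned long long)si.freeram * si.mem_unit;
    unsigned long long data_bytes = 512ULL << 20;  /* say, a 512 MiB job */

    /* Made-up policy: go in-core only if the data fits in half of
     * what is currently free, otherwise stream it from disk. */
    if (data_bytes < free_bytes / 2)
        printf("in-core algorithm (%llu MiB free)\n", free_bytes >> 20);
    else
        printf("external/streaming algorithm\n");
    return 0;
}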


The problem is that the present OS design deliberately tries to give the
impression that there is infinite resource available! Bit of a problem
if you know you will hit the limits and yet you wish to get the most out
of what is available without suffering sudden death...
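
You can at least opt out of the illusion per-process: cap the address
space with setrlimit() and malloc() then fails politely at the cap,
where you can catch it, rather than the OOM killer shooting you down
later. A sketch, where the 256 MiB cap is an arbitrary example:

/* Trade the "infinite memory" illusion for a hard but catchable cap. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 256UL << 20, 256UL << 20 };  /* 256 MiB */
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        perror("setrlimit");

    size_t total = 0;
    for (;;) {
        void *p = malloc(1 << 20);      /* 1 MiB at a time */
        if (p == NULL) {                /* graceful, not sudden, death */
            printf("hit the cap after %zu MiB\n", total >> 20);
            break;
        }
        memset(p, 1, 1 << 20);          /* touch it so it really counts */
        total += 1 << 20;
    }
    return 0;
}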

Or do we need a sort of XON/XOFF, but for general resource utilisation?
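
In userspace you can fake something like it today by polling: watch
MemFree in /proc/meminfo and stop producing work below a low watermark,
resuming above a high one. A crude sketch (the watermarks are arbitrary,
and a real design would want the kernel to signal pressure rather than
being polled):

/* Crude XON/XOFF against memory pressure via /proc/meminfo. */
#include <stdio.h>
#include <unistd.h>

static long mem_free_kb(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long kb = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "MemFree: %ld kB", &kb) == 1)
            break;
    fclose(f);
    return kb;
}

int main(void)
{
    const long xoff_kb = 64 * 1024;    /* XOFF below 64 MiB free */
    const long xon_kb  = 128 * 1024;   /* XON back above 128 MiB */
    int running = 1;

    for (;;) {
        long kb = mem_free_kb();

        if (running && kb >= 0 && kb < xoff_kb)
            running = 0;               /* XOFF: back off */
        else if (!running && kb > xon_kb)
            running = 1;               /* XON: carry on */

        if (!running) {
            sleep(1);                  /* throttled: wait for resource */
            continue;
        }
        /* ... do one unit of work here ... */
    }
}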


?

Cheers,
Martin

-- 
----------------
Martin Lomas
martin at ml1.co.uk
----------------


