[dundee] Opinions on the Sun Ultra 24 Box - Good bang-per-buck?

Andrew Clayton andrew at digital-domain.net
Fri Apr 25 13:56:23 BST 2008


On Fri, 25 Apr 2008 12:48:28 +0100, Rick Moynihan wrote:

> 2008/4/25 Andrew Clayton <andrew at digital-domain.net>:
> >
> > On Fri, 25 Apr 2008 11:38:19 +0100, Rick Moynihan wrote:
> >
> >  > 2008/4/25 Andrew Clayton <andrew at digital-domain.net>:
> >  > > On Thu, 24 Apr 2008 22:54:29 +0100 (BST), Lee Hughes wrote:
> >  > >
> >  > >  > yeah, but will the software (OS/apps) take advantage of
> >  > >  > all that threading capability?
> >  > >
> >  > >  I'll use the terms cores and CPUs interchangeably below.
> >  > >
> >  > >  Of course. Linux itself is quite capable of dealing with many
> >  > > CPUs, with support for 4096 CPUs currently being worked on
> >  > > for merging from the folks at SGI.
> >  > >
> >  > >  You want real-world examples:
> >  > >
> >  > >  Compiling is an obvious one, with make supporting parallel
> >  > > compiling natively, e.g. make -j <num CPUs + 1>
> >  > >
> >  > >  If you use a source based distribution, you'll like many
> >  > > cores...
> >  > >
> >  > >  Saw a recent reference to an IBM machine that does 3K/sec
> >  > > (that's 3 kernel builds a second)
> >  > >
> >  > >  Grip supports multiple CPUs for encoding audio.
> >  > >
> >  > >  I'm sure games will start supporting multiple cores; in fact,
> >  > > IIRC some version of Quake does/did.
> >  > >
> >  > >  Any Java app you use...
> >  > >
> >  > >  And there's just the extra capacity that comes from having
> >  > > multiple cores; video encoding not interfering with your
> >  > > compilation, for example.
> >  > >
> >  > >  That heavy JavaScript site causing Firefox to hammer one of
> >  > > your cores.
> >  > >
> >  > >  So yeah, Linux has been ready for this for a long time and as
> >  > > multiple cores become more prevalent, I'm sure more apps will
> >  > > be written specifically to take advantage.
> >  >
> >  > This is all true, but with the current state of hardware (at
> >  > least under x86) I suspect that simply adding more CPUs & cores
> >  > leads to diminishing returns, due to problems with shared memory;
> >  > i.e. cache and memory flushes under the hood. Sure, an 8/16-core
> >  > machine sounds great, but I'm not sure I could ever keep all
> >  > the cores busy during my normal workload.
> >
> >  Yeah, it can depend on what you're doing; long-running
> >  CPU-intensive jobs probably make the most efficient use of multiple
> >  cores/CPUs.
> >
> >  And I was just reading this thread,
> >  http://www.ussg.iu.edu/hypermail/linux/kernel/0804.2/3300.html
> >
> >  "I have 128 cpus, that's 128 grabs of that spinlock every quantum.
> > My next system I'm getting will have 256 cpus."
> 
> Am I right in reading that thread as illustrating the problems of
> shared memory?

Actually no. It was just discussing some scheduler bugs...

> The quote you pasted, taken out of the thread's surrounding context,
> seems to imply something like "I'm happy with my 128 CPUs blocking on
> memory writes, so I'm going to waste lots of money on 256 CPUs next
> time!!!", when I think it's really just saying, "this is really going
> to bite when we have 256 CPUs!"

Your reading is indeed what he was implying, although it seems the
lock isn't actually a problem in practice and doesn't show up in
profiles:

http://www.ussg.iu.edu/hypermail/linux/kernel/0804.2/3377.html
"Firstly, cpu_clock() is only used in debug code, not in a fastpath.
Using a global lock there was a conscious choice, it's been in -mm for
two months and in linux-next as well for some time. I booted it on a
64-way box with nohz enabled and it wasnt a visible problem in
benchmarks."
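
To make the contention concrete, the pattern being worried about looks
roughly like this (just a throwaway userspace toy with pthreads, not
the actual sched_clock()/cpu_clock() code): every "cpu" takes the same
global lock on every tick, so the more CPUs you add, the more they
queue up behind each other.

#include <pthread.h>
#include <stdio.h>

/*
 * Toy illustration only -- NOT the real kernel code.  Each thread
 * stands in for a cpu taking one global lock per "quantum".
 * Build with something like: gcc -O2 -o tick tick.c -lpthread
 */

#define NCPUS   4           /* pretend cpu count */
#define TICKS   1000000     /* pretend quanta per cpu */

static pthread_spinlock_t clock_lock;
static unsigned long long fake_clock;

static void *cpu(void *arg)
{
    long i;

    for (i = 0; i < TICKS; i++) {
        pthread_spin_lock(&clock_lock);   /* serialises all "cpus" */
        fake_clock++;                     /* stand-in for the clock update */
        pthread_spin_unlock(&clock_lock);
    }

    return NULL;
}

int main(void)
{
    pthread_t t[NCPUS];
    int i;

    pthread_spin_init(&clock_lock, PTHREAD_PROCESS_PRIVATE);

    for (i = 0; i < NCPUS; i++)
        pthread_create(&t[i], NULL, cpu, NULL);
    for (i = 0; i < NCPUS; i++)
        pthread_join(t[i], NULL);

    printf("fake_clock = %llu\n", fake_clock);

    return 0;
}

Push NCPUS up towards 128 on a big enough box and the cost per tick
climbs, which is why the point above, that this only lives in debug
code and not a fastpath, is what keeps it out of the profiles.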

BTW, I think 128 CPUs is a single Niagara 2 chip. The quoted thread
itself wasn't really the point; it's more that many cores is now
something we are seeing outwith one-off purpose-built systems.
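
And on Rick's earlier shared memory point: you don't even need a lock
to hit it. A contrived example (nothing to do with the Sun box, just to
show the effect): two threads bumping two different counters that
happen to sit in the same cache line will bounce that line between the
cores on every write.

#include <pthread.h>
#include <stdio.h>

/*
 * Contrived false sharing demo.  The two counters live in the same
 * cache line, so each increment forces the line to ping-pong between
 * the cores' caches even though the threads never touch the same
 * variable.  Uncomment the pad to put them on separate lines and
 * compare the run time under time(1).
 * Build with something like: gcc -O2 -o fs fs.c -lpthread
 */

#define LOOPS   100000000UL

static struct {
    volatile unsigned long a;
    /* char pad[64]; */     /* uncomment to stop the false sharing */
    volatile unsigned long b;
} c;

static void *bump_a(void *arg)
{
    unsigned long i;

    for (i = 0; i < LOOPS; i++)
        c.a++;

    return NULL;
}

static void *bump_b(void *arg)
{
    unsigned long i;

    for (i = 0; i < LOOPS; i++)
        c.b++;

    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("a = %lu, b = %lu\n", c.a, c.b);

    return 0;
}

Time it with and without the pad and the difference you see is the
"cache and memory flushes under the hood" Rick was talking about.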
 

Andrew


