[Gllug] Re: Help with Script.

John Hearns hearnsj at googlemail.com
Thu Jan 13 15:24:41 UTC 2011

On 12 January 2011 22:03, Richard W.M. Jones <rich at annexia.org> wrote:
> Why do you need to get them in sequence?  Surely just getting
> all of them is sufficient.

That's a damn good question.

If you assume a system with a rack-top switch in each rack then yes, I
suppose it doesn't matter in which order you rack up the servers.
Those systems with LCD displays on ILOM processors could be programmed
to show the hostname, and if you ever needed to figure out which one
to pull for maintenance you could use IPMI to illuminate a locate
light signalling which one to pull out and replace.
However, it is somewhat traditional to number the servers in some sort
of order from the bottom up. (*)
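As a sketch of the locate-light trick, here is how you might build the
ipmitool invocation from a script. The BMC address and credentials are
placeholders (assumptions, not from the original post); 'chassis identify'
is the standard ipmitool subcommand for the identify LED.

```python
from typing import List

def locate_led_cmd(bmc_host: str, user: str, password: str,
                   seconds: int = 60) -> List[str]:
    """Build the ipmitool argv to flash the chassis identify (locate) LED.

    'chassis identify N' lights the LED for N seconds; 0 turns it off.
    bmc_host, user and password are placeholders for your ILOM/BMC details.
    """
    return ["ipmitool", "-I", "lanplus", "-H", bmc_host,
            "-U", user, "-P", password,
            "chassis", "identify", str(seconds)]

# To actually run it (needs a reachable BMC, hence commented out here):
# import subprocess
# subprocess.run(locate_led_cmd("10.0.0.42", "admin", "secret"), check=True)
```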

Blade-type enclosures don't have this problem.
Indeed, SGI ICE clusters use the rack-unit processors to discover
blades when the cluster is being installed.
They have the property of re-discovering replaced blades - if you pull
a failed blade and insert a new one, the 'blademond' process notices
this, reconfigures the DHCP server etc., and reimages and powers up
the blade, all without you really noticing.
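The general pattern behind that kind of automation can be sketched as
follows. This is a hypothetical illustration of the idea, not SGI's actual
blademond code: identity follows the slot, so a replacement blade only
changes the MAC line in the generated DHCP configuration. The hostnames and
addresses are made up for the example.

```python
def dhcp_host_stanzas(slot_to_mac):
    """Map slot number -> MAC address into ISC dhcpd host entries.

    Hostname and IP are derived from the slot, so swapping the blade in
    slot 3 changes only the 'hardware ethernet' line; the replacement
    blade inherits the old blade's identity automatically.
    """
    stanzas = []
    for slot, mac in sorted(slot_to_mac.items()):
        stanzas.append(
            "host blade%02d {\n"
            "    hardware ethernet %s;\n"
            "    fixed-address 10.1.0.%d;\n"
            "}" % (slot, mac, 100 + slot)
        )
    return "\n".join(stanzas)

if __name__ == "__main__":
    print(dhcp_host_stanzas({1: "00:11:22:33:44:01", 3: "00:11:22:33:44:03"}))
```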

(*) It's also instructive to consider multi-rack Beowulf-style clusters.
High-performance interconnects differ, of course, and the topology of
the network connections is a topic of endless academic interest.
With gigabit Ethernet, whether you have the rack-top switches connected
via high-capacity stacking cables to form a virtual switch, or the rack
switches connected into a central 'star' switch, you will find that
HPC-type jobs run slightly faster if all the processors running a job
are within the same rack. Internode latency is lower because there are
fewer switch hops.
And before you argue about that one: I had a customer at a big UK
manufacturing firm who did the test runs and produced the graphs.
So I was called on to arrange the batch scheduler such that you could
run jobs using nodes on a particular rack - ie you group the nodes
into particular racks,
and submit the job with a flag which says 'run on one of these groups'.
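The scheduling idea above can be sketched in a few lines: group the free
nodes by rack and satisfy a job entirely from one group, so all of its
processes share a single rack-top switch. The hostname convention
('r02n05', rack prefix in the first three characters) is an assumption
for the example, not the customer's actual naming scheme.

```python
from collections import defaultdict

def group_by_rack(nodes):
    """Group hostnames like 'r02n05' by their rack prefix ('r02')."""
    racks = defaultdict(list)
    for node in nodes:
        racks[node[:3]].append(node)
    return dict(racks)

def pick_rack(free_nodes, needed):
    """Return `needed` nodes all drawn from one rack, or None if no
    single rack has enough free nodes for the job."""
    for rack, members in sorted(group_by_rack(free_nodes).items()):
        if len(members) >= needed:
            return sorted(members)[:needed]
    return None
```

In a real scheduler this corresponds to tagging nodes with a rack
property and submitting with a flag that restricts the job to one group.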
So my answer is yes, you do need to keep your servers in order, so that
you can wring the maximum performance out of a cluster in certain
configurations.
You might also find a cluster with a mixed architecture - some racks
of nodes might have a second network interface and switch dedicated to
parallel traffic; again, you set the batch scheduler up to run parallel
jobs on those racks.