[Nottingham] Yesterday's (not so) cancelled Navigation Talk (21st Aug)
martin at ml1.co.uk
Tue Aug 26 15:56:37 UTC 2008
> First off, many thanks again to Jason for coming to the rescue and putting
> together a presentation at short notice-- it certainly made for an
> educational and thought-provoking evening for all of us, I dare say.
Indeed so. Thanks also to yourself for doing the mini-writeup/summary.
> Jason's talk seemed to be largely borne out of his frustrations with newer
> equipment and multi-purpose OS combinations not working as easily or as
> smoothly as he remembers them in the good ol' days of computing.
That concurs with my experience and thoughts also.
> The use of threading was called into question, and part of Jason's idea
> for tackling the sorts of problems he routinely encounters today proposes
> the use of lots of small, simple processors in an excruciatingly local
> (possibly even wireless) network of sorts, each to be assigned its own
> individual thread, instead of queueing up for a large complex processor.
... FPGA(s) even?
Or does that just move the 'problem' into 'firmware'?
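For what it's worth, the contrast Jason drew -- lots of jobs queueing up
for one big processor versus a dedicated 'small processor' per job -- can
be sketched with ordinary Python threads (purely illustrative; no real
hardware here, and all the names are made up):

```python
import queue
import threading

def work(n):
    return sum(range(n))  # placeholder workload

# Model A: every job queues up for one big, shared processor.
jobs = queue.Queue()
for n in (10_000, 20_000, 30_000):
    jobs.put(n)

shared_results = []
def big_processor():
    # Single consumer draining the queue, one job at a time.
    while not jobs.empty():
        shared_results.append(work(jobs.get()))

big = threading.Thread(target=big_processor)
big.start()
big.join()

# Model B: each job gets its own small, dedicated 'processor' (thread).
dedicated_results = [None] * 3
def small_processor(i, n):
    dedicated_results[i] = work(n)

threads = [threading.Thread(target=small_processor, args=(i, n))
           for i, n in enumerate((10_000, 20_000, 30_000))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_results == dedicated_results)  # -> True: same answers either way
```

Same results both ways, of course -- the interesting differences (latency,
contention, scheduling overhead) only show up once real hardware and real
workloads get involved.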
> Is this part of where GNU Hurd took a wrong turn at Albuquerque, and will
> it eventually lead to its complete downfall?
Is Hurd strolling ever more slowly into an ever deepening quagmire? Or
is the monolithic kernel glitz so 'easy' in comparison? And still so for
SMP and NUMA systems?
Will the general case solution for SMP migrate over to something...?
> Trying to marry up a multi-purpose OS which has to try and keep up with
> the plethora of mix-and-match hardware available today as opposed to
> the very standard single-manufacturer built boxes of yesteryear, such as
> the ZX Spectrum (Ahh, nostalgia ain't what it used to be.) was put forth
> and discussed as a possible factor in some of Jason's recent frustrations,
> as well.
Standards...? What standards?! Do we need an API into firmware for the
kernel stuff to offer a common 'virtual hardware' (standard) interface?
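To make that 'virtual hardware' notion concrete, here's a toy sketch
(Python for brevity; every name here is hypothetical, not any real kernel
or firmware API): the OS side codes only against one common register
interface, and each vendor's firmware supplies a backend behind it.

```python
class VirtualHardware:
    """The common (standard) interface the OS would code against."""
    def read_reg(self, reg: int) -> int:
        raise NotImplementedError
    def write_reg(self, reg: int, value: int) -> None:
        raise NotImplementedError

class ToyBoard(VirtualHardware):
    """One hypothetical vendor backend implementing the interface."""
    def __init__(self):
        self._regs = [0] * 16
    def read_reg(self, reg):
        return self._regs[reg % 16]
    def write_reg(self, reg, value):
        self._regs[reg % 16] = value

hw = ToyBoard()        # selected at probe time; the OS never sees the details
hw.write_reg(3, 42)
print(hw.read_reg(3))  # -> 42
```

Swap in a different board's backend and the OS-side code above the
interface doesn't change -- which is the whole point of a standard.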
> Are large, cumbersome, multi-purpose OSes losing out on reliability due to
> the simplicity and purpose-built design of embedded systems? Have
As the number of system transistors increases into the billions, and the
number of lines of code coordinating everything runs into many millions,
are we pushing the entropy envelope too far to be practical and reliable?
I think understandability went out the window long ago!
> programmers for easily updateable (i.e. non-embedded) systems now become a
> bit too lazy, unleashing software that maybe isn't quite ready, knowing
> that it can "just be updated over the internet" at a later time?
I think that is very much the case for commercial products. Most new PC
motherboards require their BIOS reflashing with something more recent to
clear out various glaring bugs that have since been fixed during the one
month shipping time... Overlapping the debug time with delivery time
improves profits, even if that is at the additional expense and
inconvenience of the users... (Some companies even get the users to do
the debug in the first place and then also pay extra for the fixes!)
> 'till the brutal end, but, alas, some multi-purpose OSes running on
> mix-and-match hardware which they have been tasked with supporting had crashed
Why is that?
Computers should just work, simply. We've had over half a century to get
these electronic gizmos reliable...
Sounds like a good fun evening. Thanks Jason and Fluff, and all the
others. Shame I missed it!