[Gllug] Controversial Joel Spolsky article

Nix nix at esperi.org.uk
Sun Dec 21 09:45:57 UTC 2003


On Sun, 21 Dec 2003, Bruce Richardson uttered the following:
> On Sat, Dec 20, 2003 at 10:14:14PM +0000, Bernard wrote:
>> That 
>> was only possible because PCs didn't need a sysadmin. The story of the 
>> PC is a struggle to escape from control by central IT. Now of course 
>> Microsoft is trying to move everything back by running applications on 
>> huge central server farms.
> 
> That's paranoia.  And simply wrong.  The earliest PCs didn't need
> sysadmins because they were used for simple things.  PCs have become
> more complex and even before they did, businessmen who installed PCs on
> their desks soon found they needed support staff to keep them running.
> Those support staff became the new PC IT world.

Quite so.

When MS were running their `zero administration' ad campaign, some
companies believed it and tried to get by almost without sysadmins.

All that this meant was that the Unix sysadmins got called in to rescue
them when the escalating disasters stopped the PC users from getting any
work done.

(I hope this wasn't common; I've been somewhere where it did
happen, and it wasn't fun.)

> It's not some sinister conspiracy by Microsoft or baleful sysadmins
> trying to reclaim their territory.

[tries to look baleful]

[gives up]

(It's really annoying; even when I'm angry with someone the most they'll
do is laugh. Why can't I look like a terrible foreboding Power and
blight upon the land of lusers? hm, possibly because I'm not one.)

>> There are some exceptions but most Windows programs used editable text 
>> configuration files until relatively recently.
> 
> Like the windows registry?  Which "most" programs were these?  A few of
> them used .ini files but only for the most basic, windows-related launch
> parameters.

Actually, he's right here; in the Windows 3.x days and before, almost
everything used a mess of .ini files. The only programs that rolled
their own binary config files were, well, from MS, and even they'd
disguise them inside document template files or something. (The Curse of
NORMAL.DOT...)

In the DOS days, alas, binary config files were legion :( I thought they
weren't before I started writing this, but I've tried to find textual
config files among my old archives and found nary a one.

>> The decision to move away 
>> from text configuration files is a fairly recent innovation.
> 
> No, it isn't.  It predates Windows - you mentioned Windows, not me, at
> this point. I'd been talking about the microcomputer industry from DOS
> and before.  PC developers used binary configuration files for almost
> everything right from the start, for three main reasons.

Yes: the use of text for .INI files was unusual and actually disparaged
(!)

Without a rich text manipulation toolbox, all you could do with text
config files was run $EDITOR on them. Oh, and DOS `find'. What a choice.
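By way of contrast, a sketch of what even a minimal Unix toolbox lets you
do to a text config file non-interactively (the file name, keys and
values here are all invented):

```shell
# Fabricate a toy text config file (contents invented for illustration).
cat > app.conf <<'EOF'
log_level = debug
spool_dir = /var/spool/app
max_jobs  = 4
EOF

# Querying it is about all that DOS `find' could manage:
grep spool app.conf

# But *changing* a value unattended needs sed/awk/perl, which DOS lacked:
sed 's/^log_level = .*/log_level = warn/' app.conf > app.conf.new \
  && mv app.conf.new app.conf
grep '^log_level' app.conf
```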

> 	1.  It saved precious bytes on tiny storage systems.

Indeed: I was glad of that when I had 30Mb of disk space.

> 	2.  It obscured the contents and their intent.  This was a) the
> 	only protection against meddling on systems that had no
> 	filesystem security and b) commercial paranoia, given that
> 	microcomputers introduced a new

lack of text? :)

This `reason' annoyed me (and probably/hopefully everyone else) from the
start. It particularly annoyed me in the one place where even Unix boxes
use binary: basic filesystem layout. I had my file allocation tables
eaten by an electrical storm once: luckily that was before my ten-year
hiatus from taking backups of my home machines so I was able to get them
back again...

(thanks to the blessings of CD-RWs and dar(1) my first backup since 1993
is running now, driven by a pile of rather ugly perl and /bin/sh. I've
been relying on replication, and thank goodness had no disk failures
over that time. I've had enough of trusting to luck, though.

I'd not call this scripting `enterprise-grade'. Not yet. But then I only
wrote it yesterday evening, give it time... given that
`enterprise-grade' mostly seems to mean `dysfunctional and expensive'
it'll hopefully soon fail to be `enterprise-grade' on *both* counts
rather than just one.)

> 	3.  Lack of the right tools and ignorance of their utility.
> 
> I don't know which PCs you were using, but config.sys and autoexec.bat
> were almost the only text config files in the whole bloody PC world for
> years.

Alas, I fear you're right. .INI files were a temporary aberration,
devoid of the programmatic complexity you need for anything but simple
stuff.

I think the lack of a decent scripting language is what really killed
configuration stuff stone dead: anything complex is normally done by
getting scripting languages to write config files or to call things with
huge command-lines (also a no-go in the DOS world). Without a scripting
language worthy of the name, textual config files wouldn't have been
*that* useful, even had they existed.
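The `scripting language writes the config file' pattern mentioned above
might look like this in /bin/sh (the file and option names are invented):

```shell
# Generate a config file from a variable list rather than maintaining
# it by hand -- the usual Unix idiom described above.
# (peers.conf and its option names are invented for illustration.)
hosts="alpha beta gamma"

{
  echo "# generated file -- do not edit by hand"
  echo "retries = 3"
  for h in $hosts; do
    echo "peer = $h.example.org"
  done
} > peers.conf
```

Try expressing that loop by pointing at pretty pictures.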

>        Even where DOS wordprocessors were sophisticated enough to use
> printer description files, they were usually binary.

He must be thinking of Windows .INI files.

I was going to bring up things like Turbo Pascal, but I just checked my
old copy, and, guess what? binary config files.

>> It's a 
>> deliberate decision, not an accident. Given that most Windows users 
>> shouldn't try to manually edit configuration files I don't see much 
>> wrong with it.
> 
> Obviously, you're not an admin.  Most Windows users shouldn't but some
> should.  Filesystem security is how a mature OS protects applications
> from uninformed tampering.  Putting configuration parameters into binary
> files simply creates needless obscurity.

Quite so. It's annoying, makes it much harder to track down problems
(oops, grep won't find indications of problems anymore), and doesn't
actually stop editing in extremis (who here *hasn't* repaired the
partition table part of a DOS MBR by hand?).
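That in-extremis poking is still just bytes at known offsets; a sketch
with dd and od, run against a fake, zero-filled MBR image rather than a
real disk:

```shell
# Build a fake 512-byte MBR image (a zero-filled stand-in, not real data).
dd if=/dev/zero of=mbr.img bs=512 count=1 2>/dev/null
# Plant the boot signature 0x55 0xAA at offset 510, as a real MBR has:
printf '\125\252' | dd of=mbr.img bs=1 seek=510 conv=notrunc 2>/dev/null

# The partition table is four 16-byte entries starting at offset 446;
# dump it (and the signature) in hex:
od -A d -t x1 -j 446 mbr.img
```

Repairing one by hand is the same trick with the arrow pointing the
other way: write the corrected bytes back with dd.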

It just makes it so annoying that you rarely bother. That can't be a
good thing; computers are meant to be devices that *help* through
automation, and being unable to tell that automation what to do *even
though it is configurable* simply because the configuration file was
written with ease of programming and `speed' in mind is sheerest idiocy.

(Well, it can also be a sign of a sendmail-era program written in the
days when optimizing config file parsers for speed over readability
really was meritorious. But that's not been true for over a decade now.)

>                                           Looking through text
> configuration files and logs often provides essential information to the
> admin or clueful user trying to troubleshoot.  Without that you are
> dependent a) on the developer having remembered to include everything
> relevant in the GUI representation and b) on the GUI working.

a) is the worst, IMHO: configuring software by pointing at pretty
pictures works until you have complex requirements, when all of a sudden
you need a language to express yourself in.

Oops, no language. (Well, there's ActiveState's perl port, but that's a
leak-across from Unix. :) )

>> That's another aspect of what Joel was talking about. Windows systems 
>> are optimised to work without the need for a sysadmin.
> 
> I can tell you from painful experience that that is a fantasy.  I can't
> believe that you've ever worked in a real computing environment of any
> size if you can make that claim with a straight face.

Windows systems, as they come from the manufacturer, are optimized to
need a sysadmin physically hovering over them at all times. (What do you
mean, you want remote software installation capability? We give you
telnetd and most of the admin tools need GUIs. That should be good
enough for anyone.)

(This is a slight calumny: it's got a *bit* better than that in the last
five years or so. But not very much.)

[snip `removing useful capabilities is abstraction']
> The abstraction argument simply isn't relevant, but...

I think I know what he might mean.

It *is* information hiding: the capabilities of the system, and
information about how it does it, *is* indeed hidden.

I think we need to distinguish here between information hiding (hiding
*unnecessary* details of implementations from their interfaces: good)
from information *trapping* (hiding *necessary* details: bad).
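A small shell-flavoured sketch of the distinction (the function names
and data file are invented for illustration):

```shell
# Toy access list (invented for illustration).
printf 'alpha\nbeta\n' > allowed.txt

# Information *hiding*: the caller needn't know a flat file is involved,
# but the necessary detail -- allowed or not -- comes back as the exit
# status.
is_allowed() {
  grep -qx "$1" allowed.txt
}

# Information *trapping*: the same check, but the answer is thrown away,
# so the caller can no longer act on it.
is_allowed_trapping() {
  grep -qx "$1" allowed.txt
  return 0
}

is_allowed gamma || echo "gamma: denied"
is_allowed_trapping gamma && echo "trapping version: gamma looks allowed"
```

The first hides the unnecessary (the flat file); the second also hides
the necessary (the answer), which is the interface-design sin at issue.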

A huge number of interfaces trap information, violating the old
`necessary and sufficient' rule. It's almost invariably a sign of poor
interface design (the one example I can think of where it isn't is
crypto, where such trapping is the interface's raison d'être), but
because library design is almost entirely a self-taught skill,
and because so many of the examples out there are so poor (JDK, anyone?),
a lot of people never even realise that it is bad.

I can't really grasp how that could be, myself: obviously these are
people who've *accepted* that poor interfaces ban them from doing things
without good reason. Of course, `banning from doing things without good
reason' (because the designer didn't think of it) is the biggest flaw in
GUI systems. Nobody can think of *everything*, but the structure of
non-scriptable GUI systems requires authors to think of everything rather
than relying on a language to express the boundaries of the problem space
and let other developers fill in the rest.

(Apparently the only place in the Windows PC world which *does* roll a
lot of API and little languages is MS, and they don't provide them to
anyone outside of MS. Typical.)

[snip description of *useful* abstraction]

-- 
`...some suburbanite DSL customer who thinks kernel patches are some
 form of military insignia.' --- Bob Apthorpe
-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



