[Wylug-help] Re: Reproducible kernel build

Nigel Metheringham Nigel.Metheringham at dev.InTechnology.co.uk
02 Jun 2003 16:09:50 +0100


On Mon, 2003-06-02 at 16:01, Tom Ward wrote:
>  I'd been misled by some sources saying that you shouldn't use
> bzImages on particularly slow machines, and should use vmlinuzes
> instead because the decompression time is a problem - with the
> implication that bzImage means bzip2, which can be quite disturbingly
> slow on old machines. But I guess gzip -9 (from the Makefile) is
> going to be pretty slow itself...

gzip -9 is the compression command - the -9 costs time when
compressing, not when decompressing...

The gzip decompression code is pretty fast - to the extent that I
suspect you might find it faster to load the compressed kernel from
disk and decompress it than to load the larger uncompressed version
straight from disk. However, there are other things happening too:
the code has to be relocated so that there is somewhere to do the
decompression, and the kernel has to end up in the right place in
memory.
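
If you wanted to see the effect for yourself, something like the
rough wall-clock sketch below would do. The filenames are
placeholders, a fair test needs cold caches, and it only measures the
read-and-decompress side, not the relocation - so treat any numbers
as illustrative only.

    /* Compare reading a gzip'd image and decompressing it versus
     * reading the uncompressed image straight from disk.
     * Link with -lz.  Filenames are hypothetical.
     */
    #include <stdio.h>
    #include <sys/time.h>
    #include <zlib.h>

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        char buf[64 * 1024];
        double t;

        /* read + decompress the compressed image */
        t = now();
        gzFile gz = gzopen("vmlinux.gz", "rb");  /* placeholder name */
        if (!gz) return 1;
        while (gzread(gz, buf, sizeof buf) > 0)
            ;
        gzclose(gz);
        printf("compressed read+decompress: %.2fs\n", now() - t);

        /* read the uncompressed image */
        t = now();
        FILE *f = fopen("vmlinux", "rb");        /* placeholder name */
        if (!f) return 1;
        while (fread(buf, 1, sizeof buf, f) > 0)
            ;
        fclose(f);
        printf("uncompressed read:          %.2fs\n", now() - t);
        return 0;
    }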

Going back to the original query: the gcc build process used to have
(as in I haven't done a gcc build myself for a few years) a utility
to compare the output of the stage2 and stage3 builds - a self-check
on the compiler, which should be able to build itself sanely and
consistently. Some architectures, including the MIPS R3000 I
generally used, had an object format which included timestamps, so
the comparison program needed to skip the variant parts of those
files.

Could be worth having a look at this.
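
A minimal sketch of that kind of comparison - assuming, purely for
illustration, that the variant timestamp lives in a fixed-size region
at the start of each object file, which is roughly what the
skip-the-first-N-bytes comparisons did. The offset is made up; real
object formats differ.

    /* Compare two object files, ignoring the first SKIP bytes where
     * (on some formats) a build timestamp lives.
     */
    #include <stdio.h>

    #define SKIP 16  /* hypothetical size of the variant region */

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 2; }

        /* skip over the timestamped header region in both files */
        fseek(a, SKIP, SEEK_SET);
        fseek(b, SKIP, SEEK_SET);

        long off = SKIP;
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca != cb) {
                printf("files differ at byte %ld\n", off);
                return 1;
            }
            off++;
        } while (ca != EOF);

        puts("files identical (ignoring header)");
        return 0;
    }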

Personally I think I'd mod the errant macros to take their settings from
a file, and then distribute that file (which is basically your internal
version stamp) as part of the source package.
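
If the errant macros are things like __DATE__ and __TIME__ (a guess
on my part - they change on every rebuild and so break byte-for-byte
comparison), that would look something like this. File and macro
names here are made up:

    /* buildstamp.h - generated once at release time and shipped with
     * the source, so every rebuild of the same release embeds the
     * same stamp.
     */
    #define BUILD_STAMP_DATE    "2003-06-02"
    #define BUILD_STAMP_VERSION "1.4.2-release"

    /* banner.c - uses the shipped stamp instead of __DATE__/__TIME__ */
    #include <stdio.h>
    #include "buildstamp.h"

    int main(void)
    {
        printf("built %s (%s)\n", BUILD_STAMP_DATE, BUILD_STAMP_VERSION);
        return 0;
    }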

	Nigel.
--
[ Nigel Metheringham           Nigel.Metheringham@InTechnology.co.uk ]
[ - Comments in this message are my own and not ITO opinion/policy - ]