[TynesideLUG] Your Computer is not a Fast PDP-11

Jeff Joshua Rollin jeff at jeffjoshua.club
Mon Feb 8 20:45:46 UTC 2021


On 2/8/21 8:37 PM, Alex Kavanagh wrote:
>
> Re-instating on the mailing list:
>
> On Mon, 8 Feb 2021 at 14:46, Jeff Joshua Rollin <jeff at jeffjoshua.club> 
> wrote:
>
>
>     On 08/02/2021 10:01, Alex Kavanagh wrote:
>>
>>
>>     On Sat, 6 Feb 2021 at 20:01, Jeff Joshua Rollin
>>     <jeff at jeffjoshua.club> wrote:
>>
>>         Hi,
>>
>>         I mentioned during today's LUG having read a paper a few
>>         years ago which
>>         complained that modern C is still coded (and indeed the
>>         language was
>>         designed) to expose a virtual PDP-11 to the programmer. I
>>         didn't think I
>>         would find it again but, having Googled, I think I have. (If
>>         it's not
>>         the same one, it basically touches on the same points.)
>>
>>
>>     I don't think it's as clear-cut as saying "modern C is still
>>     coded ... to expose a virtual PDP-11".  It /was/ coded that way,
>>     and inertia tends to maintain the design pattern because,
>>     simultaneously, over the last 40-odd years processors have also
>>     been designed around a particular execution model to get the
>>     best performance out of the dominant systems language: C.  So
>>     the two have effectively locked each other into a vicious cycle
>>     of maintaining the status quo.
>>
>>     It would have been interesting if the Transputer
>>     <https://en.wikipedia.org/wiki/Transputer> had made more of an
>>     impact, as it begat the concurrent programming languages occam
>>     (and occam 2).  But some of its ideas did end up influencing how
>>     AMD and Intel interlink CPUs and other chips.
>
>
>     Yes, it's slightly more complicated than that. Admittedly some of
>     that article went a little bit over my head, but I'm learning.
>
>     It will be interesting to see what happens with any of the current
>     crop of operating systems being coded in Rust; I'm sure you're
>     aware of at least some of them - and of course there's software
>     other than OSes being coded in Rust too. If I understand it
>     correctly, the RISC-V ISA doesn't try to help programmers with
>     concurrency in any way, although to date I think its focus has
>     mostly been on embedded systems, where keeping every core busy
>     perhaps matters less than it does on multicore desktops and
>     servers. It does allow for extensions, however, so perhaps in
>     future they will end up having to add some. The fate of all
>     (successful) ISAs seems to be to grow larger and larger...
>
>     Had to look up the Transputer again. For some reason I thought it
>     was based on multiple MC68Ks; perhaps I'm thinking of something
>     else, or perhaps it was because Atari tried their hand at one (the
>     Atari ST and successors used MC68Ks, like pre-PowerPC Macs,
>     Commodore Amigas, and early Suns). Yet another example of
>     innovative technology being squashed by the market and economies
>     of scale, I guess.
>
>
> I don't think that Rust necessarily targets modern CPUs.  Obviously,
> it wants to run on x86, but that has the (somewhat) ridiculously huge
> CISC instruction set (tautology alert!).  So lots of Intel/AMD silicon
> goes to taking CISC and making it RISC.  But to do that efficiently,
> you need a really long pipeline (as there are lots of decoding steps
> to turn those tasty CISC instructions into lots of little, fast,
> edible RISC micro-ops).  So now, with your really long pipeline (tens
> of stages), you get branches, which stall the pipeline.  So you have
> to do branch prediction, which means running lots of speculative
> branches at the same time.  That needs even more silicon, what with
> the extra register sets (one per branch), ALUs, cache, etc.  And then
> when a branch finally comes good, you have to do the whole 'rename'
> thing so that you can carry on predicting.
>
> x86 processors do a huge amount of work in hardware, and compilers
> often have to work closely with them just to get serious performance.
> And strangely, doing it in silicon was the fastest way to do it, as
> compilers would have been too slow to do all that optimisation.  But
> now CPUs are fast enough that smart compilers can do the optimisation
> themselves, targeting RISC processors!  Hence ARM cores, simple little
> things, can fly, churning through heavily optimised code /faster/ than
> x86 CPUs can.  And the silicon is cheaper.  Funny old world.
>
> Anyway, back to Rust.  Rust doesn't target any particular processor.
> Although the compiler has several intermediate representations (IRs),
> one of them is LLVM IR, so that it can be compiled to ... x86 :)
> Seriously, though, that's just one of the IRs, and the others are
> various stages of getting closer to a machine-level representation of
> the code, rather than the abstract world of variables, iterators,
> functions, closures, traits, and types that inhabit the language.
> Rust seems to target a kind of idealised processor, but one where you
> really have to care about how memory is allocated and used; in that
> sense it is a strange 'high-level' language, as it cares about ONE
> aspect of the low level: what the memory is.  But it doesn't really
> care about the memory in the sense of /where it is/, just that /memory
> exists and you can't share it without being deliberate about it/.
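>
> To make that last bit concrete, a little sketch (again, just
> illustrative): a value moves when you assign it, and to share it
> between threads you have to say so in the types, with Arc for shared
> ownership and Mutex for exclusive access.
>
>     use std::sync::{Arc, Mutex};
>     use std::thread;
>
>     fn main() {
>         let data = vec![1, 2, 3];
>         let moved = data;           // ownership moves here...
>         // println!("{:?}", data);  // ...so this would not compile
>
>         // Sharing between threads has to be spelled out explicitly.
>         let shared = Arc::new(Mutex::new(moved));
>
>         let handles: Vec<_> = (0..4)
>             .map(|i| {
>                 let shared = Arc::clone(&shared);
>                 thread::spawn(move || {
>                     // Taking the lock is the deliberate act of
>                     // sharing the memory.
>                     shared.lock().unwrap().push(i);
>                 })
>             })
>             .collect();
>
>         for h in handles {
>             h.join().unwrap();
>         }
>
>         println!("{:?}", shared.lock().unwrap());
>     }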
>
> Anyway, I'll stop here, before this turns into some kind of weird 
> sub-child of an email and blog post!
>
> Cheers
> Alex.
>
> -- 
> Alex Kavanagh
> Home: http://alex.kavanagh.name
> @ajkavanagh


Ha, could be worse; the only reason I brought Rust up is that I forgot 
I'd deleted a whole lot of text about Rust from my last email, because 
I realized I was getting off the point.

What do you mean, "You do that a lot?" ;-p


Jeff.


