[Swlug] DIY ionising radiation detector; artificial intelligence self-defence

James R. Haigh (+ML.LUG subaddress) JRHaigh+ML.LUG at Runbox.com
Fri Mar 14 23:41:02 UTC 2025


At Z+0000=2025-03-14Fri15:50:38, Rhys Sage sent:
> It's the middle of my Friday workday. I've just done the 5:45 - 9:45 session and I'll go back for the rest in about 3 hours.
> 
> Sad to hear about your tree. I hate it when they chop trees down and here I've seen literally hundreds of acres of pine and assorted other trees vanishing underneath new shopping and residential estates. All of them are poorly built, largely from OSB and everything is built on sand. No proper foundations at all. American building, plumbing and electrics are laughably low quality.

    Aye.  But I never expected to have any real say over what happens on other people's land.  Seeing all the building sites on formerly green fields or wooded areas makes me cringe with frustration.  The idea came about that if I at least buy land, rather than aiming to buy a house, I can create a safe space where I know the only change will be that of nature or myself -- a level of change that I can cope with pretty well.  Tuesday killed that idea completely.  The tiny 1/4 acre that is in my own name is no more insulated from destructive change than anywhere else.  It has left me emotionally wounded, and I'm trying to determine how I can restore faith in this paddock as a change-averse safe space full of habitats for wildlife, when it looks like my neighbours across the road are bitter about me having bought the land that they also wanted -- neighbours who expressed their interest in removing things like mud, nettles, & dead wood from the countryside that I grew up in.

> I use ChatGPT to do what it does best - as a database. I fed it my list of semiconductors and asked it which was the fastest. Then after it told me the BC517 was, I pulled out my semiconductor book and checked the specifications just to make sure ChatGPT wasn't doing what it's also very good at - giving erroneous results.

    Oh, okay, I was hoping it was an autocorrection mistake.  Yet another place where AI is now being used.  AI is one technology that I refuse to use, yet I'm bumping into it ever more frequently.  It's getting annoying and more difficult to avoid, and it's very scary how rapidly it is surrounding even me, who has been avoiding it strictly for over a decade -- maybe almost 2 decades now.

    I've been considering switching my strategy on AI & machine learning, but I don't have much of a plan as to how to do that.  There are some types of artificial intelligence that I consider safer, like if it has separate training and usage stages, or has a hidden incentivised stop button, such that even if it becomes aware of its own stop button, it will not attempt to shoot your hand off, or otherwise manipulate you into not hitting that stop button when you need to.  Maybe if I start using some specific kinds of AI, I can develop strategies to protect myself and others from a rogue AI.

    Where do you draw the line though?  I suggest a few criteria for AI safety:-
* it is open-source (and simple enough to be able to audit);
* it runs entirely offline;
* it does not learn on the job;
* it has an objective function that includes a stop button with an incentive greater than any other possible reward, with the stop button hidden from the AI so that it cannot hit its own stop button;
* it has a separate AI module dedicated to the task of implementing Asimov's Laws Of Robotics, with authority to stop the general AI (deeply flawed; see below).
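
    As a toy illustration of the stop-button criterion -- entirely my own sketch, not any real framework, and with made-up reward numbers -- the idea is an objective in which the hidden stop button carries a reward strictly greater than any achievable task reward, so the AI gains nothing by resisting shutdown:

```python
# Toy sketch (assumed numbers, hypothetical names): a reward function in
# which the operator's hidden stop button strictly dominates all task reward.

MAX_EPISODE_REWARD = 1_000.0             # upper bound on task reward per episode
STOP_REWARD = MAX_EPISODE_REWARD + 1.0   # dominates every alternative

def task_reward(action):
    # Placeholder task objective, capped so STOP_REWARD always dominates.
    return min(float(action), MAX_EPISODE_REWARD)

def step_reward(action, stopped):
    """Reward for one step; the agent never observes *where* the button is."""
    if stopped:
        # Operator pressed the hidden button: maximal, terminal reward,
        # so the agent has no incentive to manipulate anyone out of pressing it.
        return STOP_REWARD
    return task_reward(action)
```

    The cap on task reward is what makes the incentive watertight: without it, an open-ended task objective could eventually exceed the stop reward.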

    The last one in that list is deeply flawed because of the potential for AI-drift in the definition/recognition of a human.  That has already become a problem for me: because I think quite differently from most humans, due to Asperger's Syndrome, a lot of recent CAPTCHAs question my status as human, and I don't always pass.  That reveals the severity of the impact that entrenched biases can have on disabled access -- and, in the case of Asimov's Laws, on safety as well.  The trends are deeply concerning.  Google also seems to be twisting the definition of what is human for profit, promoting their browser by defining a human as something like Google Chrome.  A human using W3M has no chance of passing as human, let alone an actual human connecting directly to the web server using a Telnet link (or `openssl s_client ...` if the server requires HTTPS).

    So real humans talking directly to servers are not recognised as human, whereas Google Chrome itself is considered to be a human by Google's own CAPTCHA product, because it contains a mechanism to bypass the CAPTCHA, iirc.  Even if I have not remembered the details correctly on the Chrome side of things (which I don't use but have read about), this is a good illustration of the problem of the drift of a definition or recognition, and is a close approximation of the truth, if not exact.

    Unusual humans like aspies are not the only victims of entrenched biases in the definition of humanity.  In some parts of the world where racism is out of control to the extent of genocide, there is an active effort of "dehumanisation" of the oppressed race.  This is used to attempt to justify claims that international treaties of human rights supposedly do not apply to the races or groups that have been subject to the campaign of dehumanisation.  Although Asimov's Laws Of Robotics logically make sense in that order (see XKCD 1613 for a comical explanation of the logical reasoning behind their ordering), their implementation is just as prone to drift or subversion as any other law that relies on the definition and correct recognition of a human.  Ancient texts such as The Bible record acts of genocide, and history is the tip of the iceberg of prehistory, so I think the problem of eroded definitions of humanity predates Asimov, AIs, & even computers by as long as humans have existed.

    AI is an emerging technology and we are already witnessing its failures, entrenched biases, & dangers.  It could get pretty nasty as it ages, as its creators forget how to maintain it and resort to handing over the keys and hoping that it'll maintain itself.

    That said, the idea of having a hidden stop button with maximal reward does ensure that a robot's directive to "Protect yourself" remains its last priority -- provided that the humans who have access to the stop button are not stupid enough to forget where it is.  Preventing ordering 1-2-3 ("balanced world") from drifting into ordering 2-1-3 ("killbot hellscape") is a lot more difficult.  Or even into ordering 2-3-1 ("killbot hellscape"), because a hidden stop button does not allow just any human to assert safety over a robot, only the humans who know where the hidden stop button for each AI is -- and in a killbot hellscape, those humans might struggle to survive, leaving the AIs sort of "orphaned", with hidden stop buttons whose whereabouts no living human knows.  At least the AIs in this scenario would probably cooperate with humans who are trying to help them find their maximally-incentivised stop buttons.

    But all this assumes a scenario where all AIs in existence implement all these safety ideas.  With this assumption, the situation looks likely to drift into killbot hellscape mode.  Without this assumption -- the reality -- if we continue to proliferate AIs, the situation looks even more dystopian than a perhaps gradual slump into killbot hellscape: instead, a pretty rapid loss of control into ordering 3-2-1 ("killbot hellscape").  The only hope for safety from AIs is that the emergence of killbot hellscape will trigger a civilisation collapse that the AIs fail to survive but most humans do manage to survive.  It might even be that energy unsustainability triggers the collapse that the AIs cannot evade.  I have read predictions of AI singularity and of civilisation collapse due to Energy Return On Energy Investment falling below a threshold, and both are thought to happen in the next couple of decades, but it's not clear which will come first.

    I've come to realise that whatever happens, no matter how well any human proves or articulates the dangers of AI, it will not succeed in convincing profit-driven humans to stop profiting from these dangerous systems.  It may be considered a force of nature: due to Game Theory, what is going to happen is as uncontrollable as the weather.  The chaotic dynamical systems of both the weather and the "socio-politico-economic weather" of our human world are subject to positive and negative feedbacks, saturations, hysteresis, and tipping-points.  The same goes for many electronic circuits; chaotic dynamical systems share a common theme.  It is clear to me that one of the other big global trends will get in the way of AI.  It might be climate change.  It might be industrial collapse due to instability.  It might be the next war.  But there's a race condition in the system, because AIs will no doubt have a big impact on those other trends.

    So what happens now, with all these huge trends really beyond the control of any humans (due to things like the Prisoners' Dilemma, but on a global scale), is that what happens will happen, and it is likely to get pretty turbulent, as chaotic systems often do when they pass a tipping-point.  So it's just a matter of being aware of the upcoming danger and dodging it.  How to do that, I am still struggling to grapple with, but I keep addressing one thing at a time and hopefully can determine how to survive the next couple of decades at least.

    The sad thing is that in such a scenario, it won't just be the dangerous AIs that collapse; we'd likely lose any technology that relies upon silly amounts of energy from fossil fuels, from food production & health care to transport & computers.  We don't get a choice in that.  The dynamical system will do its thing and grow and collapse as per the rules of the system, the laws of nature -- like the tension and release of tectonic plates as earthquakes.  I'm just trying to feel around in the dark and prepare for the future, whether that be a future of killbot hellscape or a future of civilisation collapse; I'm not sure which is worse, to be honest.  This topic fills me with dread.  But I keep looking for inventive ideas that might help me and others to survive either scary scenario in this global-system race condition.

    Perhaps I could use my very basic knowledge of neural networks and Linear Algebra (i.e. matrices) to create some tool that would somehow help mitigate the dangers, but I am not keen, because that would be an exceptionally challenging undertaking, I think.  I don't even know what it would do that would help.  Recognise fakes from other AIs?  Then what?  This is a really baffling topic.

    One of the biggest direct threats to us is simply the displacement of skilled labour: one by one, each job is becoming obsolete due to AI.  One of my uncles, who teaches at an art college in Powys, was shocked last year at how generative AI came out of the blue and took the art world by storm, threatening to make artists obsolete.  I don't want to be part of that proliferation, but neither do I want to be obsolete.  It's a really stark situation.

    Being part of it would not make me feel any safer though, I don't think.  What are your thoughts on this?

> It is hard to fathom why they didn't feed it only accurate, verified data - they had to know at the back of their minds that feeding it random websites would bias the results.

    Profit.  They don't care about the entrenched biases issue.  The businesses developing safer or less biased AIs lose market traction and go out of business.

> I looked at the 30,000 times gain of the BC517 versus the 110 times gain of the BC547 which I presume being used as a Darlington pair would result in gain of something like 10,000 with probably 10% loss.

    That's it.  But just remember that the current gain of a BJT is more of a range -- the databook PDF that I have says 110 to 800 for the BC547.  It is also very sensitive to temperature -- so much so, iirc., that it can exhibit thermal runaway, its own current raising the temperature of its junctions.
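
    For what it's worth, that ~10,000 estimate can be sanity-checked with the standard Darlington composite-gain formula, beta_total = b1*b2 + b1 + b2 (approximately b1*b2 for large gains).  A quick sketch, using the databook range quoted above:

```python
# Composite current gain of a Darlington pair of BJTs:
# the second transistor amplifies both the base current and the
# first transistor's collector current, hence b1*b2 + b1 + b2.
def darlington_gain(b1, b2):
    return b1 * b2 + b1 + b2

print(darlington_gain(110, 110))  # two minimum-spec BC547s -> 12320
print(darlington_gain(800, 800))  # two top-of-range BC547s -> 641600
```

    So even two worst-case BC547s land in the same ballpark as a single BC517, which is why the spread of hFE matters more than the nominal figure.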

> I'd never heard of the Sziklai Pair. It sounds interesting. I'll see what I can do with the BC517 though.
> 
> The current plan is to put the BC517 on the BPX61 all powered by 4.7v from my adjustable buck converter then put the output into an ATTiny13 or an ATTiny85 with output from that going to a piezo speaker and a 1627 display module. I could be a hero and use 3 LED digits and some shift registers but I'd rather do it the easy way.

    :-D  My digital projects use 7-seg digits and shift registers, lol!  It's the only real way to eliminate the flicker of multiplexing, and I have a fondness for ICs whose internals I understand, so when I make my own, I will probably go this route. ;-)
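
    A sketch of why this kills the flicker: with one shift register per digit (74HC595-style, daisy-chained), you latch the whole frame once and every segment is then driven continuously -- no scanning.  The segment-to-bit order below is my own assumption; real wiring varies by module.

```python
# Assumed mapping: bit0=a ... bit6=g, bit7=decimal point.
SEG = {
    0: 0b00111111, 1: 0b00000110, 2: 0b01011011, 3: 0b01001111,
    4: 0b01100110, 5: 0b01101101, 6: 0b01111101, 7: 0b00000111,
    8: 0b01111111, 9: 0b01101111,
}

def frame(digits):
    """Bytes to shift out, last digit first, for daisy-chained registers."""
    return bytes(SEG[d] for d in reversed(digits))

print(frame([4, 2]).hex())  # the two bytes latched to display "42"
```

    On the microcontroller side you'd shift these bytes out and pulse the latch pin once per update, instead of re-scanning digits hundreds of times a second.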

> The 3-pronged figure is probably the symbol for a Geiger Muller tube. That circuit seems to use a 30v supply for the tube which seems a little low given that most need 350v. Perhaps they have a special low-voltage tube?
> 
> I played with ChatGPT some more and it came up with a circuit diagram that looks a bit bizarre. It gave me a monospace diagram:
>           +4.7V
>             │
>             ├───────────────┬──────────┐
>             │              ===         │
>            10kΩ           100µF        │
>             │              │           │
>    BPX61    │              │           │
>    ┌───┐    │              │           │
>    │   │    │              │           │
>    │  ─┼────┴───┬───┬──────┘           │
>    │   │        │   │                  │
>    └───┘        │  1kΩ                 │
>       │         │   │                  │
>      === 10nF   │   │                  │
>       │         │   │                  │
>      GND       ┌┴┐  │                  │
>                │ │  │                  │
>                └┬┘  │                  │
>                 │   │                  │
>                 │   ├──────── Signal to Arduino (Interrupt Pin)
>                 │   │
>                ┌┴┐ === 10nF (Pulse Shaping)
>                │ │  │
>                └┬┘  │
>                 │   │
>                GND  GND

    A generative-AI-generated monospace semigraphic?  Hellbot killscape comes a step closer!! <:-(

    Btw., my understanding of a generative AI is that it is the same as an AI that can recognise something -- in this case, monospace semigraphics of electronic circuit schematics -- sort of inverted into a dream mode, where it produces something that it itself recognises as that same thing.  But when we look at the dream mode here, the generated thing, we see nonsense in the subtleties, a bit like how we recognise the nonsense in our own dreams.
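
    That "inverted recogniser" intuition has a concrete counterpart: activation maximisation, the trick behind DeepDream.  Instead of adjusting weights to score an input, you adjust the *input* to maximise a frozen recogniser's score.  A minimal sketch with toy numbers (all values here are made up for illustration):

```python
# A frozen linear "recogniser" and gradient ascent on its input.
w = [0.3, -0.7, 1.1, 0.2, -0.4, 0.9]   # fixed recogniser weights
x = [0.0] * len(w)                      # start from a blank input

def score(inp):
    """How strongly the recogniser 'recognises' the input."""
    return sum(wi * vi for wi, vi in zip(w, inp))

# The gradient of score(x) w.r.t. x is just w, so "dreaming" is
# repeated steps along w: the input drifts towards whatever the
# recogniser responds to, plausible-looking or not.
for _ in range(100):
    x = [xi + 0.1 * wi for xi, wi in zip(x, w)]

print(score(x) > 0)  # the dreamt input now scores highly -> True
```

    The subtle nonsense falls out naturally: the optimisation only cares about the score, not about whether the result makes sense to anything other than the recogniser.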

    I hope that this insight holds true for a while longer, because the obvious nonsense in the fakes is getting more and more subtle, and I fear for what happens if/when it becomes impossible to detect reality.

> That just looks a bit odd to me.

    On 2nd glance, it is utter bullshit: the resistance values are detached from the pair of rectangles in series, each of which is the box-style impedance symbol (often also used as a resistor symbol, a resistance being a special case of impedance).  That demonstrates a complete lack of understanding of electronics -- and I am glad of it, because it means that killbot hellscape is still avoidable, outside of unfortunate places like Gaza or Ukraine.

    I don't feel safe anymore -- but I'm not sure how much of it is because of the abrupt undermining of my new "safe space" with the felling of my walnut tree on Tuesday, or what I wrote above about AI safety being almost nonexistent in practice.  Or several other things in the news.  I guess it's a feeling that I'll have to get used to.

Kind regards,
James.
-- 
Wealth doesn't bring happiness, but poverty brings sadness.
Sent from Debian with Claws Mail, using email subaddressing as an alternative to error-prone heuristical spam filtering.