[GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

bap at shrdlu.com bap at shrdlu.com
Sun Jan 28 16:55:01 UTC 2024

On 2024-01-28 14:06, Carles Pina i Estany via GLLUG wrote:

> I am normally not active on this forum, but read most of the messages
> and do have some knowledge in Linux. For me to state that a policy from
> another site, just because it makes sense, should also
> apply here just because you think it makes sense, is wrong.
> If you think that policy should apply here, let's discuss and agree to it.
> Not just retrospectively apply it to a person, that for all intents and
> purposes is trying to help Hendrik, is not conducive to free and open
> discussions.
> For me the usage of LLMs, if done correctly, is a great tool. As any
> tool it has its limitations, the famous hallucinations to name but one.

I managed development of an AI project between 1995 and 2000. In my
role as "speaker to suits" I had to explain it to people with zero
understanding of IT. The more I came to understand it, the less I liked
it. For our project it was fine, because our requirement was for
credibility and not accuracy. We are currently at the "unrealistic
expectations" phase of the hype cycle, and blind acceptance is rife.

The problem is that it generates plausible answers, not necessarily
correct ones. In situations where correctness is important, the
error rate needs to be watched. If users are accustomed to blindly
accepting whatever the computer says, 99.99% right might be worse than
90%. My advice is that it's often a mistake to use an LLM for any job
that you can't do better yourself.

> Let's try to be nice to each other, especially when somebody is doing
> his/her/its best to help Jan

Indeed. Although we don't necessarily have to agree.

I would prefer that LLM-generated material always be flagged as such.

I'm unsure whether I should go further and consider discussions in
which LLM-generated arguments have been presented as 'tainted.'

Bernard Peek
bap at shrdlu.com
