[GLLUG] Using LLM for support answers - please don't (Was Re: British Gas DKIM failure?)

Andy Smith andy at bitfolk.com
Sun Jan 28 15:05:02 UTC 2024


On Sun, Jan 28, 2024 at 02:37:45PM +0000, Jan van Bergen via GLLUG wrote:
> How Carles used it, giving a reference stating that the lines in
> question came from an LLM while still making sure that the info is
> correct, is to me very much how you should use tools like this, and
> I had absolutely no issue with it.

My issues with it are pretty much the same as StackOverflow's issues
with it.

Any of us could have done the same, including Henrik themselves. Do
we want support venues that are just people pasting ChatGPT responses
to each other, and web searches pulling back hits that are just more
of that?

We have to spot that it's from an LLM and check the reference
ourselves. We don't know whether Carles did that for us. We can't
generally trust the LLM user to do that.

Carles could have asked the LLM the question, done the research
themselves to check that what the LLM came back with is correct, and
then written a response that they believe to be true and factual, in
which case that's fine. But we don't know that happened because it's
just a paste from ChatGPT.

> Maybe you're a language virtuoso and don't need tools to write,
> not everybody is like that.

Nice personal attack noted, but we aren't talking about writing prose.

> Let's try to be nice to each other, especially when somebody is
> doing his/her/its best to help

I think my request was politely phrased and backed up with good
reasoning, whether you agree with the reasoning or not. I don't
think that pasting ChatGPT responses is someone doing their best to
help people.


https://bitfolk.com/ -- No-nonsense VPS hosting
