Subject: Re: [xsl] ChatGPT results are "subject to review"
From: "Piez, Wendell A. (Fed) wendell.piez@xxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 7 Jul 2023 16:37:45 -0000
Norm, yes indeed, they are saying 'hallucinating', which is a terrible
euphemism, because it:

- Sounds fairly benign (b/c hallucinating people are not generally destructive
or antisocial)
- Implies there is someone 'there' to 'hallucinate', which has not been
demonstrated
- Implies corrigibility, b/c hallucinating people can often be restored; indeed,
hallucinations often go away on their own
- Implies there is some kind of 'normal' that characterizes the system when it
is not 'hallucinating', as opposed to the hallucinating itself being its normal
and 'correct' (just not adequate) operation

In other words, what they are calling 'hallucinating' is what the system does
correctly, when it turns out that is inconvenient (or 'misaligned' in some way
detectable by and meaningful to people).

All this is wrong, but then the purpose of euphemism ("calling it nice") is
*not* to be true or accurate, quite the contrary.

When not saying "lying" for effect (which also implies subjectivity), I prefer
the term "fabulating" to describe what it is that (I am led to believe) the
bots are doing.

Bringing it painfully back on topic, there is a world of difference between an
XSLT executed by a deterministic processor built over algorithms implementing
a testable specification, and a transformation executed by an LLM in any
scenario. To my knowledge they haven't hooked the two together but they will.
A robot equipped with a validating parser and a conformant XSLT engine could
presumably do an even better job 'faking it' than one without, and it might be
harder to detect. (Or easier, if it proves unable to find human-like solutions
to making things valid and executable, and depending on whether you look at the
code or the output.)

Which comes back to what I just said, namely the unit tests.
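(To make the contrast concrete, purely as a sketch of my own and not anything anyone has built: a conformant processor running a trivial stylesheet like

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- deterministic: the same input document always yields
       the same output, per the specification -->
  <xsl:template match="/greeting">
    <p><xsl:value-of select="."/></p>
  </xsl:template>
</xsl:stylesheet>
```

is guaranteed to map `<greeting>hi</greeting>` to `<p>hi</p>` every single run, so a unit test can pin the output byte-for-byte. No such guarantee exists when an LLM is asked to 'perform' the same transformation, which is exactly why the unit tests matter.)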

Cheers, Wendell


-----Original Message-----
From: Norm Tovey-Walsh ndw@xxxxxxxxxx
<xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Sent: Friday, July 7, 2023 11:42 AM
To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: [xsl] ChatGPT results are "subject to review"

> (Or it would be lying were it capable of lying. What it is doing is
> what it was programmed to do, namely chat with you about your topic of
> choice.)

The term of art for LLMs just making [expletive] up seems to be
"hallucinating".

                                        Be seeing you,
                                          norm

--
Norm Tovey-Walsh <ndw@xxxxxxxxxx>
https://norm.tovey-walsh.com/

> A man may by custom fortify himself against pain, shame, and suchlike
> accidents; but as to death, we can experience it but once, and are all
> apprentices when we come to it.--Montaigne
