Friday, September 12, 2025

Phillip Winn on LLMs — 'They never lie, even when they spout falsities'

This morning I emailed my longtime friend, and expert on all things computer-related, Phillip Winn, as follows:

"Every day I'm more impressed with Perplexity Pro, the level of detail and accuracy and speed are mind-blowing."

I was referring to its near-instantaneous and dead-accurate answers (above) to a question I'd asked about a small detail in the formatting of my new blog (this one) on Blogger.

He replied thus:

Just try to remember, despite being mind-blowing in many, many ways, LLMs are not technically answering the questions you ask. Whatever you ask, they are answering the meta-question: What could an answer to this question look like, statistically?

Sometimes the result is itself a good and correct answer. Sometimes the result is an answer that looks good superficially but is nonsense. But even hallucinatory nonsense satisfies the meta-question it's really answering, because made-up references and instructions to click on things that don't exist still answer the meta-question: 

What would a statistically plausible answer to this question look like? That doesn't make them any less useful or amazing! But they're not trustworthy, and in ways that break the instincts we've developed all our lives, because they never lie, even when they spout falsities.
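A toy sketch of the point Phillip is making, purely illustrative: real LLMs are neural networks over tokens, not lookup tables, but the shape of the process is the same. The generator below (all words and probabilities are made up for the example) picks whatever continuation is statistically likely, with no notion of truth at all. It can emit "france is paris" or "mars is olympus" with equal fluency; neither output is a "lie", because truth was never part of the question it answers.

```python
import random

# Hypothetical bigram table: for each word, the probability of the next word.
# A real model learns billions of such statistics; this is a cartoon.
NEXT_WORD = {
    "the": {"capital": 0.5, "answer": 0.5},
    "capital": {"of": 1.0},
    "of": {"france": 0.6, "mars": 0.4},  # "mars" is statistically present, factually absurd
    "france": {"is": 1.0},
    "mars": {"is": 1.0},
    "is": {"paris": 0.7, "olympus": 0.3},
}

def generate(start, steps=5, seed=0):
    """Extend `start` by sampling each next word from the table.

    The sampler optimizes one thing only: statistical plausibility.
    Whether the resulting sentence is true never enters the computation.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        choices = NEXT_WORD.get(out[-1])
        if not choices:
            break
        words, weights = zip(*choices.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Every run yields a fluent-looking string; which facts (or non-facts) it contains depends only on the sampled path through the table.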

1 comment:

  1. Oh hey, that's my email!

    I've set my phone to remind me of this weekly, every Monday morning: "LLMs answer every question with 'What would a statistically plausible answer to this question look like?'"

    It becomes clearer week by week, as even the old tricks I once used to get good results from the models are failing, and as I push deeper into the kinds of programming issues they answer very, very poorly.
