Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
I think we're probably on the same page, tbh. OTOH, I think the "fancy autocomplete" meme is a disingenuous thought-stopper, so I speak against it when I see it.
I like your cruise control+ analogy. It's not quite self-driving… but it's not quite just cruise control, either. Something halfway.
LLMs don’t have human understanding or metacognition, I’m almost certain.
But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That's weird to think about. It's something halfway.
With external scaffolding (memory, retrieval, provenance, and fail-closed policies), I think you can turn that into even more reliable behavior.
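To make "fail-closed" concrete, here's a minimal sketch of the idea, not anyone's actual implementation: the wrapper refuses to answer unless the model's output can be grounded in retrieved sources. `ask_llm` is a hypothetical stand-in for any real model API, and the provenance check is a toy keyword-overlap test.

```python
# Minimal fail-closed wrapper sketch. ask_llm() is a hypothetical
# stand-in for a real model call; grounded() is a toy provenance check.

STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def content_terms(text: str) -> set[str]:
    """Lowercased content words, punctuation stripped, stopwords removed."""
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def ask_llm(question: str, sources: list[str]) -> str:
    """Stand-in for a real model call; returns a canned answer here."""
    return "The capital of France is Paris."

def grounded(answer: str, sources: list[str]) -> bool:
    """Toy grounding check: does the answer share terms with a source?"""
    terms = content_terms(answer)
    return any(len(terms & content_terms(src)) >= 2 for src in sources)

def answer_fail_closed(question: str, sources: list[str]) -> str:
    """Fail closed: refuse rather than emit an ungrounded answer."""
    answer = ask_llm(question, sources)
    if not sources or not grounded(answer, sources):
        return "REFUSED: no grounded answer available."
    return answer
```

The point of the shape is the default: when provenance can't be established, the system returns a refusal instead of letting the raw model output through.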
And then… I don’t know what happens after that. There’s going to come a time where we cross that point and we just can’t tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.
I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.
These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.
I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.
I hear you. Agreed.
Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer abliteration methods seem to increase reasoning ability, because the LLM doesn't have one foot on the brake and the other on the accelerator.
I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An abliterated one will give you the workable answer and say "I know what you were after, but here's the best IRL approximation".
Bijan did a fun review of Qwen 3-8 Josefied that's entertaining and explains the basic idea:
https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0
Nah, I’ve only messed around with ChatGPT and Grok. My interest in AI originates from the philosophical side of it - mainly the dangers and implications of creating AGI. I’m not tech-savvy enough for anything deeper - I even needed ChatGPT to walk me through installing Linux.
It’s super simple (if you want it to be)
https://www.jan.ai/
https://www.jan.ai/docs/desktop/quickstart
PS: You might like the thing I’m building too. The TL;DR premise is: what if you could make an LLM either tell the truth or lie loudly?
https://codeberg.org/BobbyLLM/llama-conductor
“LLMs don’t have human understanding or metacognition”
Then what’s the (auto-completing) fucking problem? It’s just a series of steps on data. You could feed it white noise and it would vomit up more noise. And keep doing it as long as there’s power.
Intelligent?
If it were just autocomplete in the dismissive sense, white noise should derail it into more white noise. Instead it tries to make sense of it. Why? Because it learned strong language priors from us, and it leans on those when the prompt is meaningless.
“Not human understanding” ≠ “no reasoning-like computation.”
Those aren’t the same thing.
People doing the "fancy autocomplete" thing are making the laziest possible move: not human, therefore nothing interesting is happening. I disagree with that.
It doesn't "understand" like we do, and it's not infallible, but calling it "fancy autocomplete" is like calling a jet engine a "fancy candle."
Same category of thing, wildly different behavior.
No, it doesn’t. You’re in sci-fi land. There is no “it” “trying to make sense”. That cogitation is happening in YOU, not the motherboard.
“The cogitation is happening in YOU” is just the philosophical zombie argument dressed up as a gotcha. Sure, there’s no ghost in the machine - but that’s true of your neurons too. Your brain is also “just” electrochemical signals on wet hardware. Does that mean your understanding is happening somewhere else?
The point isn’t whether there’s a homunculus sitting inside the GPU having feelings. The point is that the functional operations happening - maintaining context, resolving ambiguity, applying something structurally similar to inference across novel inputs - are more than pattern-matching in the (dismissive sense) people mean when they say “autocomplete.”
Touché.
Intelligence doesn't require a "self," and we're living proof of that. The ways LLMs and humans operate have far more similarities than people like to admit. We're just applying higher standards to AI.