Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
I hear you. Agreed.
Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer abliteration methods seem to increase reasoning ability, because the LLM doesn’t have one foot on the brake and the other on the accelerator.
I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An abliterated one will give you the workable answer and say “I know what you were after, but here’s the best IRL approximation”.
Bijan did a fun review of Qwen 3-8 Josefied that’s entertaining and explains the basic idea:
https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0
Nah, I’ve only messed around with ChatGPT and Grok. My interest in AI originates from the philosophical side of it - mainly the dangers and implications of creating AGI. I’m not tech-savvy enough for anything deeper - I even needed ChatGPT to walk me through installing Linux.
It’s super simple (if you want it to be):
https://www.jan.ai/
https://www.jan.ai/docs/desktop/quickstart
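If you’d rather poke at it from a script than the GUI, here’s a minimal sketch, assuming the local server exposes an OpenAI-compatible chat endpoint. The port, path, and model id below are assumptions, not taken from the docs above; check your Jan server settings for the real values.

    # Minimal sketch: query a locally running model over an OpenAI-compatible API.
    # Port, path, and model id are placeholders -- adjust to your own setup.
    import requests

    resp = requests.post(
        "http://localhost:1337/v1/chat/completions",  # assumed local server address
        json={
            "model": "qwen3-4b",  # hypothetical id; use whatever model you downloaded
            "messages": [
                {"role": "user", "content": "Give me a workable answer, not a perfect one."}
            ],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])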
PS: You might like the thing I’m building too. The TL;DR premise is: what if you could make an LLM either tell the truth or lie loudly?
https://codeberg.org/BobbyLLM/llama-conductor