I’ve seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (the article was also hating on Ed Zitron, which makes sense given its angle).
Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted to learn to code my whole life but can’t grasp it very well). But yes, a lot of the time it’s wrong.
I’m not against AI use in software development… But you need to understand what the tools you use actually do.
An LLM is not a dev. It doesn’t have the capability to think through a problem and come up with a solution. If you use an LLM as a dev, you are an idiot pressing buttons on a black box you understand nothing about.
An LLM is a predictive tool. So use it as a predictive tool.
The one use of AI, at the moment, that I actually like and actually improves my workflow is JetBrains’ full line completion AI. It very often accurately predicts what I want to write when it’s boilerplate-ish, and shuts up when I write something original.
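To make “predictive tool” concrete, here’s a minimal sketch of the one thing an LLM fundamentally does: assign probabilities to the next token given the text so far. It assumes the Hugging Face transformers library and the small gpt2 model, chosen purely for illustration.

```python
# Minimal sketch: an LLM scores possible next tokens, nothing more.
# Assumes Hugging Face transformers and the small gpt2 model (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "for i in range(10):"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Probability distribution over the next token, given everything before it.
next_probs = torch.softmax(logits[0, -1], dim=-1)
top = next_probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}  p={float(p):.3f}")
```

Completion tools presumably wrap a loop like this with some confidence cutoff, which would explain why they shine on boilerplate and go quiet on original code.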
Yes, they do have the ability to think and reason just like you (generally much faster and slightly better).
https://medium.com/@leucopsis/how-gpt-5-compares-to-claude-opus-4-1-fd10af78ef90
96% on the AIME with zero tools: just reading the question and reasoning through to the answer.
https://www.datacamp.com/blog/gpt-5
Absolutely not. This comment shows you have absolutely zero idea how an LLM works.
This is not true. They do not think or reason. They produce output that looks like reasoning, but it definitely is not.
Once it gets off track, it doesn’t recognize that it is obviously wrong. A simple math problem can fail in a way that is immediately obvious to a human: ask for the product of two multi-digit numbers, for example, and you can get a confident, plausible-looking answer that is simply wrong.
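Purely hypothetical numbers below (invented for this sketch, not a logged model answer), just to show the shape of the failure: a confident product that one line of arithmetic disproves.

```python
# Hypothetical illustration: a plausible-looking but wrong product of the kind
# LLMs often emit for multi-digit arithmetic (numbers invented for this sketch).
claimed = 195314          # made-up "model answer" for 347 * 562
actual = 347 * 562        # = 195014
print(claimed == actual)  # False: trivial for a human with a calculator to catch
```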
No, they can’t think and reason. However, they can replicate and integrate the thinking and reasoning of many people who have written about similar problems. And yes, they can do it much faster than we could read a hundred search result pages. And yes, their output looks slightly better than what many of us would produce in many cases, because they are often dispensing best practices by duplicating the writings of experts. (In the best cases, that is.)
https://arxiv.org/pdf/2508.01191
https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/