I’ve seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (the article was also hating on Ed Zitron, which makes sense given the angle).
Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted to learn to code my whole life but can’t grasp it very well). But yes, a lot of the time it’s wrong.
This is not true. They do not think or reason. They have code that appears to reason, but it definitely is not reasoning.
Once it gets off track, it doesn’t recognize that it is obviously wrong.
It can fail at a simple math problem, for example, in a way that would be immediately obvious to a human.