These people were using ✨agentic✨ AI. They were using ✨thinking✨ models.
The A"I" industry is full of those misleading buzzwords, isn’t it?
“Agentic” would theoretically mean “able to make decisions by itself, like an agent”, but in practice the word is spammed so often in this context that it has lost its meaning.
“Thinking”? Look, you can push and pull definitions as much as you want, but those models don’t think. Nor do people who claim otherwise.
So are the AI-posters lying or what?
I think the AI-posters are a mix of the following, in order of least to most malevolent:
Outright Malice
Stop trying to gauge “intentions” (whatever this means). Focus on the fact that they’re vomiting certainty on something that is blatantly incorrect.
Eventually every vibe coder reaches the point where the returns start heavily diminishing.
From my experience as a translator, translating things is easier than proofreading them. I expect the same to apply to code: writing it should be easier than reviewing and maintaining it. (What programmers on Lemmy often say reinforces this for me.)
o rly.
The A"I" industry is full of those misleading buzzwords, isn’t it?
“Agentic” would be theoretically “able to take decisions by itself, like an agent”, but in practice the word is spammed so often in this context that it lost its meaning.
“Thinking”? Look, you can push and pull definitions as much as you want, but those models don’t think. Nor do people who claim otherwise.
Stop trying to gauge “intentions” (whatever this means). Focus on the fact that they’re vomiting certainty on something that is blatantly incorrect.
From my experience as translator, translating things is easier than proofreading. I expect the same to apply to code - writing code to be easier than reviewing and maintaining it. (What programmers in Lemmy often say reinforce this for me.)