No. You are the one who knows, without doubt, that they used ChatGPT and can’t be wrong. If you think saying “hey, there are other options, don’t jump to unproven conclusions” means I like to argue, then I’m not the one with a problem.
I’m open to being proven wrong, but you need a bit more than “trust me, I must know”.
The article effectively says they used ChatGPT or some similar LLM bot: it says they used a chatbot, and that’s what the word chatbot means by default these days. A skilled reporter would mention if it was something else.
The reporter asked a chatbot such as ChatGPT whether there was anything suspicious in the image; the chatbot happened to point out something in the photo that the reporter could then recognise as indeed AI-generated, and he got on with typing his article.
The only part of this that is not mentioned in the article is that the reporter confirmed the flagged spot in the image with his own eyes, but that is such an integral part of a reporter’s training that you need specific reasons to assume it wasn’t done.
No it doesn’t.
No it’s not
The article doesn’t say what kind of chatbot it was, nor does “chatbot” mean LLM or ChatGPT.
I’m not going to continue. It’s just going in circles.
Are you sure you’re not the LLM?
You can see my comment history to determine if I’m an LLM or not :)
In any case, have fun in your circles!