Save you a click:

I see that LLMs are doing their utmost to train fools not to use them. That might just be the only useful thing I’ve seen one of them do to date.
Keep up the good work, Google AG!
What the hell is Google Antigravity?
New drive deletion tool.
Awesome! Now I don’t have to type `rm -rf .`
Just like a few months ago, when another AI/LLM destroyed a project in a similar way, this happened due to a failure to classify AI/LLM risks properly:
- In both instances, the person treated the AI/LLM as a person capable of reason. It is not. Remember what Isaac Asimov wrote about the robots in his universe: they are capable of extreme logic but can’t deal with reason at all. The same applies here: the AI/LLM can absolutely be logical, but it can’t reason.
- The second failure was not treating the AI/LLM as a potentially bad actor. The person looked at the AI/LLM, probably ran a few simple tests, then granted it full access to highly sensitive files. Because the software is quite good at chatting with humans the way we chat with each other, the developer ends up trusting it as if it were a person. A basic precaution is sketched after this list.
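As a minimal sketch of the “treat it as a bad actor” point: let the tool loose on a disposable clone, never on the live tree. The `coding-agent` command and the paths below are placeholders, not a real CLI.

```sh
# Give the agent a throwaway copy; keep the original out of its reach.
git clone ~/projects/myrepo /tmp/agent-sandbox
cd /tmp/agent-sandbox
coding-agent .   # placeholder agent command; worst case it only trashes the clone
git diff         # review every change before merging anything back upstream
```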
The Reddit account was created 3 days ago for the explicit purpose of posting this. All of its replies are extremely affirmative and friendly, and one even admits to being AI, supposedly just for translation purposes.
Dead internet theory; it’s engagement bait.