- cross-posted to:
- hackernews
Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.
No, no, apparently not everyone, or this wouldn’t be a problem.
i don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles, but increasingly actual articles and other sources are completely AI generated too. so a reference to a source might be “real,” but the source itself is complete AI slop bullshit
the actual danger of it all should be apparent, especially in any field related to health science research
and of course these fake papers are then used to further train AI, causing factually wrong information to spread even more
the movie idiocracy was a prophecy that we were too arrogant to take seriously.
now go away, I’m baitin
When is that movie set again? I want to mark my calendar for the day the US finally gets a competent president.
Movie was set in 2505… We’re speed-running it. We should get our first pro-wrestler president in our lifetime.
It’s a shit ouroboros, Randy!
It’s new quantities, but an old mechanism. Humans have been making up shit for as long as they’ve been talking.
In olden days it was resolved by trust and closed communities (hence various mystery cults in Antiquity, or freemasons in relatively recent times, or academia when it was a bit more protected).
Still doable and not a loss - after all, you are ultimately only talking to people anyway. One can build all the same systems on an F2F basis.
The scale is a significant part of the problem though, which can’t just be hand waved away.
i’m not understanding what you’re saying. “Still doable and not a loss”??
sounds like something AI would say
There’s an old Monty Python sketch from 1967 that comes to mind when people ask a librarian for a book that doesn’t exist.
They predicted the future.
Thanks for this, I hadn’t seen this one!
Are you sure that’s not pre-Python? Maybe one of David Frost’s shows like At Last the 1948 Show or The Frost Report.
Marty Feldman (the customer) wasn’t one of the Pythons, and the comments on the video suggest that Graham Chapman took over the customer role when the Pythons performed it. (Which, if they did, suggests Cleese may have written it, since that would explain why he was allowed to take it with him.)
This and many other new problems are solved by applying reputation systems (like those banks use for your credit rating, or employers share with each other) in yet another direction. “This customer is an asshole, allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books”. Easy.
Then they’ll talk with their friends about how libraries are all possessed by a conspiracy, much like similarly intelligent people talk about a Jewish plot to take over the world, flat earth, and such.
It’s a fun problem trying to apply this to the whole internet. I’m slowly adding sites with obviously generated blogs to Kagi, but it’s getting worse.
Skill issue, just use the Library of Babel