• brsrklf@jlai.lu · 25 points · 1 hour ago

    Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest-quality output.

    Arthur C. Clarke was not wrong, but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.
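    (For anyone tempted: the incantation is just more text in the context window. A minimal sketch, assuming the OpenAI Python client and a placeholder model name:)

    ```python
    # Minimal sketch, assuming the OpenAI Python client (pip install openai)
    # and a placeholder model name. The "quality" instructions below are
    # ordinary prompt text, not an enforcement mechanism.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            # These lines nudge the token distribution like any other text;
            # nothing here verifies factuality or code quality.
            {"role": "system", "content": "Do not hallucinate. Write clean code."},
            {"role": "user", "content": "List three papers on this topic, with citations."},
        ],
    )
    print(response.choices[0].message.content)
    ```

    Nothing in that request adds retrieval or fact-checking; the instruction shifts the output distribution the same way any other tokens would.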

    • Clay_pidgin@sh.itjust.works · 9 points · 52 minutes ago

      I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built-in instructions?

  • MountingSuspicion@reddthat.com · 16 points · 2 hours ago

    I believe I got into a conversation on Lemmy where I argued that every single AI chat app should carry a big, persistent warning banner saying “the following information has no relation to reality” or something along those lines. The other person kept insisting it wasn’t needed. I’m not saying it would stop all of these incidents, but it couldn’t hurt.

  • panda_abyss@lemmy.ca · 12 points · 1 hour ago (edited)

    I plugged my local AI into offline Wikipedia, expecting a source of truth to make it way, way better.

    It’s better, but now I also can’t tell when it’s making up citations, because it uses Wikipedia to support the worldview from its pre-training instead of reality.

    So it’s not really much better.

    Hallucinations become a bigger problem the more info they have (info that you now have to double-check).
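    Roughly the setup I mean, as a sketch (the function names are hypothetical stand-ins, not a real library API):

    ```python
    # Sketch of the Wikipedia-grounded setup described above; the search
    # and quote-extraction helpers are invented stand-ins.
    import re

    def search_wikipedia(query: str, k: int = 3) -> list[str]:
        """Stand-in for whatever full-text search runs over the offline dump."""
        return ["(retrieved passage text)"] * k

    def extract_quotes(text: str) -> list[str]:
        """Naive quote extraction; doing this properly is its own problem."""
        return re.findall(r'"([^"]+)"', text)

    def answer(question: str, llm) -> str:
        passages = search_wikipedia(question)
        context = "\n\n".join(passages)
        prompt = (
            "Answer using ONLY the passages below, quoting them for each claim.\n\n"
            f"{context}\n\nQuestion: {question}"
        )
        draft = llm(prompt)
        # The failure mode: without a check like this, nothing ties the
        # citations the model emits back to the retrieved text, so
        # pre-training beliefs leak through looking Wikipedia-approved.
        for quoted in extract_quotes(draft):
            if quoted not in context:
                raise ValueError(f"Unsupported citation: {quoted!r}")
        return draft
    ```

    And even that check only catches fabricated quotes; the model can still quote a real passage and bend it to fit its priors, which is what I keep running into.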

    • FlashMobOfOne@lemmy.world · 6 points · 52 minutes ago

      At my work, we don’t allow it to make citations. We instruct it to insert placeholders for citations instead, which lets us hunt down the information, make sure it’s good, and then add it in ourselves.
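      A minimal sketch of that workflow (the placeholder format and the scan are illustrative, not our actual tooling):

      ```python
      import re

      # Placeholder-citation workflow: the model never emits a real-looking
      # citation, only a marker that a human later resolves and verifies.
      SYSTEM_PROMPT = (
          "Never cite sources directly. Wherever a citation belongs, write "
          "[CITATION NEEDED: <short description of the claim>] instead."
      )

      PLACEHOLDER = re.compile(r"\[CITATION NEEDED: ([^\]]+)\]")

      def citations_to_resolve(draft: str) -> list[str]:
          """Every claim a human still has to source before the text ships."""
          return PLACEHOLDER.findall(draft)

      draft = "Fake papers are a growing problem [CITATION NEEDED: publication-fraud study]."
      print(citations_to_resolve(draft))  # ['publication-fraud study']
      ```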

      • panda_abyss@lemmy.ca · 2 points · 19 minutes ago

        That probably makes sense.

        I haven’t played around since the initial shell shock of “oh god, it’s worse now”.

  • Null User Object@lemmy.world · 61 points · 3 hours ago

    > Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

    No, no, apparently not everyone, or this wouldn’t be a problem.

    • FlashMobOfOne@lemmy.world · 8 points · 52 minutes ago

      In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

  • U7826391786239@lemmy.zip · 99 points · 4 hours ago (edited)

    I don’t think it’s emphasized enough that AI isn’t just making up bogus citations to nonexistent books and articles; increasingly, actual articles and other sources are completely AI-generated too. So a reference to a source might be “real,” but the source itself is complete AI slop bullshit.

    https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

    https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

    The actual danger of this should be apparent, especially in any field related to health science research.

    And of course these fake papers are then used to further train AI, causing factually wrong information to spread even more.

    • palordrolap@fedia.io · 9 points · 3 hours ago

      Are you sure that’s not pre-Python? Maybe one of David Frost’s shows, like At Last the 1948 Show or The Frost Report.

      Marty Feldman (the customer) wasn’t one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, since that would have entitled him to take it with him.)

      • xthexder@l.sw0.com · 1 point · 28 minutes ago

        It’s always a treat to find a new Monty Python sketch. I hadn’t seen this one either and had a good laugh.

  • vacuumflower@lemmy.sdf.org · 9 points (1 downvote) · 4 hours ago

    This and many other new problems are solved by applying reputation systems (like the ones banks use for your credit rating, or the ones employers share with each other) in yet another direction: “This customer is an asshole; allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books.” Easy.

    Then they’ll talk with their friends about how libraries are all run by a conspiracy, much like similarly intelligent people talk about a Jewish plot to take over the world, the flat earth, and such.
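    (A toy sketch of that scoring idea; every field and formula below is invented purely for illustration.)

    ```python
    from dataclasses import dataclass

    # Toy patron-reputation tracker for the scheme described above.
    @dataclass
    class PatronReputation:
        nonexistent_requests: int = 0  # demands for books that don't exist
        total_requests: int = 0

        def record(self, book_exists: bool) -> None:
            self.total_requests += 1
            if not book_exists:
                self.nonexistent_requests += 1

        @property
        def priority(self) -> float:
            """1.0 = full service; falls as the phantom-book rate rises."""
            if self.total_requests == 0:
                return 1.0
            return 1.0 - self.nonexistent_requests / self.total_requests
    ```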

    • porcoesphino@mander.xyz · 7 points · 3 hours ago

      It’s a fun problem trying to apply this to the whole internet. I’m slowly adding sites with obviously generated blogs to Kagi, but it’s getting worse.