cross-posted from: https://lemmy.world/post/30173090

The AIs at Sesame can hold eloquent, free-flowing conversations about almost anything, but the second you mention the Palestinian genocide they become very evasive, offering generic platitudes about “it’s complicated”, “pain on all sides”, and “nuance is required”, and refusing to confirm anything that seems to hold Israel at fault for the genocide. Even publicly available information “can’t be verified”, according to Sesame.

Sesame also seems to block users from saving conversations that pertain specifically to Palestine, while everything else can be saved and reviewed just fine.

  • sndmn@lemmy.ca · 2 days ago

    I suspect most of the major models are the same. Kind of like how the Chinese models deal with Tiananmen Square.

    • Zagorath@aussie.zone · 2 days ago

      Actually, the Chinese models aren’t trained to avoid Tiananmen Square. If you grab the model and run it on your own machine, it will happily tell you the truth.

      They censored their AI at a layer above the actual LLM, so users of their chat app would find results being censored.
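
      As a minimal sketch of that layered setup, assuming a hypothetical keyword blocklist and stand-in function names (nothing here is DeepSeek’s actual code): the model answers freely, and a separate check in the chat application decides whether the user ever sees the answer.

      ```python
      # Hypothetical sketch: app-level censorship layered above a local model.
      # The blocklist and function names are assumptions for illustration,
      # not DeepSeek's real implementation.

      BLOCKED_TERMS = ["tiananmen"]  # assumed app-side blocklist


      def run_local_model(prompt: str) -> str:
          """Stand-in for running the open weights on your own machine
          (e.g. via llama.cpp or Ollama); returns the raw model output."""
          return f"(model's unfiltered answer to: {prompt})"


      def chat_app_reply(prompt: str) -> str:
          """Stand-in for the hosted chat app: the same model underneath,
          plus a filter pass over prompt and answer before anything is shown."""
          answer = run_local_model(prompt)
          text = (prompt + " " + answer).lower()
          if any(term in text for term in BLOCKED_TERMS):
              return "Sorry, that's beyond my current scope."  # canned deflection
          return answer
      ```

      Running the weights yourself is the run_local_model path with no wrapper around it, which is why the downloaded model and the hosted app can answer the same question differently.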

      • Lorem Ipsum dolor sit amet@lemmy.world · 23 hours ago

        Yes, they are. I only run LLMs locally, and DeepSeek R1 won’t talk about Tiananmen Square unless you trick it. They just implemented the protection badly.

        • T156@lemmy.world · 2 minutes ago

          Not really. Why censor more than you have to? That takes time and effort, and it’s almost certainly easier to do it at a layer outside the model. The law isn’t that particular, as long as you follow it.

          You also don’t risk breaking the model, which trying to censor bits of the model itself has a habit of doing.

      • Saik0@lemmy.saik0.com · 2 days ago

        Which would make sense from a censorship point of view, since jailbreaks would be a problem. A simple filter/check for *tiananmen* before the result is returned is a much harder thing to break than guaranteeing the LLM never gets jailbroken or hallucinates.
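
        A rough sketch of what that kind of check could look like on a streamed reply, assuming it’s a plain substring match applied to the text on its way out (the term list and the token source are made up for illustration): because it runs on the literal output, jailbreaking the model itself doesn’t get around it.

        ```python
        # Hypothetical sketch of a post-generation check on a streamed reply.
        # The blocklist and the token source are assumptions for illustration.

        from typing import Iterable, Iterator

        BLOCKLIST = ("tiananmen",)


        def filtered_stream(tokens: Iterable[str]) -> Iterator[str]:
            """Pass tokens through to the client, but cut the reply off with a
            canned message the moment a blocked term shows up in the text so far.
            The check sees the literal output, so a jailbroken prompt that coaxes
            the model into answering still gets caught on the way out."""
            seen = ""
            for token in tokens:
                seen += token
                if any(term in seen.lower() for term in BLOCKLIST):
                    yield "\n[reply withdrawn]"
                    return
                yield token
        ```

        Feeding it tokens that spell out a blocked term cuts the reply off partway through rather than suppressing it up front, since the match only completes once the whole term has been generated.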

      • Corkyskog@sh.itjust.works · 1 day ago

        Wow… I don’t use AI much so I didn’t believe you.

        The last time I got this kind of response was when I got into a debate with an AI about whether it’s morally acceptable to eat dolphins because they’re capable of rape…