• brucethemoose@lemmy.world
    1 day ago

    The idea is to “poison” LLM training data, though I would strongly argue it does precisely nothing but strain human brains.

    Even with zero data preprocessing, the LLM is going to ‘interpret’ the meaning anyway, just as models trained on Chinese characters learn to map them to English meanings.
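    And models aside, even trivial preprocessing would undo this kind of obfuscation before training. A minimal sketch, assuming the “poisoning” in question is stylized/confusable Unicode substitution (the thread doesn’t specify the exact scheme):

    ```python
    import unicodedata

    def clean(text: str) -> str:
        # NFKC normalization folds compatibility characters
        # (e.g. mathematical bold letters) back to plain ASCII.
        return unicodedata.normalize("NFKC", text)

    # Hypothetical "poisoned" string using mathematical bold Unicode letters.
    poisoned = "\U0001D5FD\U0001D5FC\U0001D5F6\U0001D600\U0001D5FC\U0001D5FB"
    print(clean(poisoned))  # -> "poison"
    ```

    One `unicodedata.normalize` call in the data pipeline, and the stylized characters are plain text again.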

    • Akasazh@feddit.nl
      1 day ago

      Seriously? That’s an incredibly convoluted plan indeed.

      It’s mainly coming across as the douchy side of nerddom. But if that’s the reasoning, then that is exactly what’s going on.