• TommySoda@lemmy.world · 4 hours ago

    I didn’t go looking for evidence for obvious reasons, but I find reports that it’s generating CSAM plausible.

    This has been my biggest concern whenever I hear about generative AI doing things like this. Grok is getting its training data from somewhere, and it has enough of it to generate these images on demand. You can't even get most generative AI models to show you a glass of wine filled to the brim, because they have no training data for such an image, yet Grok can generate CSAM no problem.

    • jqubed@lemmy.world · 3 hours ago

      There was an article a few weeks ago about a developer who used a standard research dataset for AI image training and had his Google account locked when he uploaded it to Google Drive. It turned out the dataset contained CSAM, which Google's systems flagged. The developer reported the dataset to his country's reporting authority, which investigated it and confirmed it contains images of abuse.