Wikipedia has a new initiative called WikiProject AI Cleanup. It is a task force of volunteers currently combing through Wikipedia articles, editing or removing false information that appears to have been posted by people using generative AI.

Ilyas Lebleu, a founding member of the cleanup crew, told 404 Media that the problem came to light when Wikipedia editors and readers started noticing passages that were unmistakably written by a chatbot of some kind.

  • Shdwdrgn@mander.xyz · 1 month ago

    Seems like that would be an easy problem to solve… require all edits to get a peer review from someone with a minimum level of credibility before they go live. I can understand that when Wikipedia was new, allowing anyone to post edits or new content helped it get going. But now? Why do they still allow any random person to post edits without a minimal amount of verification? Sure it self-corrects given enough time, but meanwhile what happens to all the people looking for factual information and finding trash?
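
    Something like this, roughly (a toy Python sketch, not anything MediaWiki actually does; the threshold and every name here are made up for illustration):

    ```python
    from dataclasses import dataclass

    MIN_CREDIBILITY = 50  # invented threshold, purely for the example

    @dataclass
    class Edit:
        author: str
        credibility: int  # e.g. earned through past accepted edits
        diff: str

    live: list[Edit] = []
    review_queue: list[Edit] = []

    def submit(edit: Edit) -> str:
        """Trusted editors publish instantly; everyone else waits for a reviewer."""
        if edit.credibility >= MIN_CREDIBILITY:
            live.append(edit)
            return "live"
        review_queue.append(edit)
        return "pending review"
    ```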

    • sugar_in_your_tea@sh.itjust.works · 1 month ago
      Or at least hold edits for a certain amount of time before they go live. If nobody comes around to review one within 24 hours, it goes live anyway.

      Usually bad edits are corrected within hours, if not minutes, so that should catch the lion’s share w/o bogging down the approval queue too much.
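
      Roughly, in the same toy Python terms as the sketch above (PENDING_WINDOW and friends are invented names, just to show the timeout idea):

      ```python
      import time
      from dataclasses import dataclass, field

      PENDING_WINDOW = 24 * 60 * 60  # the 24-hour grace period, in seconds

      @dataclass
      class PendingEdit:
          diff: str
          approved: bool = False
          submitted_at: float = field(default_factory=time.time)

      def flush(queue: list[PendingEdit]) -> list[PendingEdit]:
          """Publish approved edits plus anything unreviewed for a full day:
          reviewer silence means the edit goes live by default."""
          now = time.time()
          ready, keep = [], []
          for e in queue:
              if e.approved or now - e.submitted_at >= PENDING_WINDOW:
                  ready.append(e)
              else:
                  keep.append(e)
          queue[:] = keep
          return ready
      ```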

    • RubberDuck@lemmy.world · 1 month ago

      Crowdsourcing is the strength that built this vast resource, and it's also the weakness on display here. So there will probably be a need for some form of barrier. Hence my suggestion.