• Flatfire@lemmy.ca · 20 hours ago

    Not all AI use is bad, and it sounds to me like you didn’t read the article itself. They have no desire or intention to use AI in a way that directly affects the information on the site, how it’s presented to visitors, or how articles are edited.

    The only potential exception is translation, but translation is such a massive undertaking that by providing a means to discuss and interact across languages, the information becomes more broadly available and open to correction by native speakers.

    Also, Britannica does employ AI within its own system, even providing a chatbot for asking questions and searching for information. In this respect it goes further than Wikipedia intends to.

    • kazerniel@lemmy.world · 5 hours ago

      They have no desire or intention to use AI in a way that directly affects the information on the site, how it’s presented to visitors, or how articles are edited.

      To be fair, in June they tried to introduce AI-generated “simple summaries” to articles, but the editor community was so vehemently against it that in the end they shelved the idea.

      • Flatfire@lemmy.ca · 2 hours ago

        Fair enough. I guess I missed this push amidst every other AI-related enshittification tactic at the time. That said, this is how it should work: an organization proposes a change, and the change is withdrawn or halted after the userbase is able to weigh in. I’m pleased that they didn’t barrel ahead with it despite the outcry.

        I feel for the Wikimedia Foundation right now. They’re under mounting pressure to compete with corporations that hold a monopoly on how people access their sites and, subsequently, the information on them. The goal is to provide open information, but that information is equally open to the AI firms that aim to scrape, rehost, and reuse the work of individuals who have volunteered their time to it.

        I think it would have been easy for them to do what Reddit effectively did: lock down access to the site and its content in order to develop their own AI tools, trained exclusively on their dataset, to perform similar tasks. Instead, they’ve listened, and I hope they continue to listen to the dedicated members who believe in the foundation’s original goals.

        • kazerniel@lemmy.world · 52 minutes ago

          Agreed. Despite its faults, Wikipedia/Wikimedia is one of the most ethical organisations I know of, to a large degree because of how much average users can take part in its various decision-making processes. Most of its bureaucratic processes happen in the open; I sometimes enjoy reading through 15-year-old discussions about why or whether a certain page should be deleted or a certain user banned.