Around the beginning of last year, Matthew Prince started receiving worried calls from the bosses of big media companies. They told Mr Prince, whose firm, Cloudflare, provides security infrastructure to about a fifth of the web, that they faced a grave new online threat. “I said, ‘What, is it the North Koreans?’,” he recalls. “And they said, ‘No. It’s AI’.”

Those executives had spotted the early signs of a trend that has since become clear: artificial intelligence is transforming the way that people navigate the web. As users pose their queries to chatbots rather than conventional search engines, they are given answers, rather than links to follow. The result is that “content” publishers, from news providers and online forums to reference sites such as Wikipedia, are seeing alarming drops in their traffic.

As AI changes how people browse, it is altering the economic bargain at the heart of the internet. Human traffic has long been monetised using online advertising; now that traffic is drying up. Content producers are urgently trying to find new ways to make AI companies pay them for information. If they cannot, the open web may evolve into something very different.

Archive: https://archive.ph/nhrYS

  • nthavoc@lemmy.today

    You mean kind of like how the web was when it first started in the 90’s with curated websites and when Yahoo was a thing?

    • chromodynamic@piefed.social

      Kind of, but with automation. So if you trust site A 90%, and site A trusts site B 90%, then from your PoV, site B has 81% trust* (which you can choose to replace with your own trust rating, if you want).

      Could have applications in building a new kind of search engine even.

      • I’m just guessing how the maths would work; it probably needs a somewhat more sophisticated system than that, such as starting sites at 50% and only increasing or decreasing the rating based on sites you already trust. A rough sketch of the multiplication idea is below.
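      To make the arithmetic concrete, here is a minimal sketch of that propagation rule in Python. The `direct_trust` table, the multiply-along-the-path rule and the cycle handling are illustrative assumptions only, not a spec for any real system:

```python
# Minimal sketch of transitive trust, assuming ratings in the range 0.0 to 1.0.
# These example sites and the propagation rule are hypothetical.
direct_trust = {
    "you":    {"site_a": 0.9},   # you rate site A at 90%
    "site_a": {"site_b": 0.9},   # site A rates site B at 90%
}

def derived_trust(source: str, target: str, seen=None) -> float:
    """Best trust path from source to target, multiplying ratings along the way."""
    if seen is None:
        seen = {source}
    ratings = direct_trust.get(source, {})
    if target in ratings:            # your own direct rating always overrides
        return ratings[target]
    best = 0.0
    for neighbour, weight in ratings.items():
        if neighbour in seen:        # skip cycles
            continue
        best = max(best, weight * derived_trust(neighbour, target, seen | {neighbour}))
    return best

print(round(derived_trust("you", "site_b"), 2))  # 0.81, i.e. 90% of 90%, as in the comment above
```

      In practice you would persist the ratings and probably cap the path depth, but the 90% × 90% = 81% chain is the whole idea.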
      • nthavoc@lemmy.today

        You’ve got a good idea if you can mitigate the malicious automated processes that would fudge the numbers, like what’s going on with Google. If you can build genuine self-policing into it, the way most communities managed before the social-media slop era, you could improve on an old idea. There’s also a search engine that looks really promising in that it filters out corporate and AI-sponsored content. I was pleasantly surprised. Someone on Lemmy shared it with me: https://marginalia-search.com/

        Lemmy is kind of self-policing in the sense that people can “de-federate”. The problem is, not a lot of people want to de-federate; they treat it like Reddit. Back in the day, you literally got kicked off your ISP for doing something really stupid. Everyone knew who “that guy” was, and that person was pretty much left to the wasteland of the internet, if they could even get a connection back. I don’t know how to get that level of self-policing back, but it’s worth a shot.