This is about how people deal with advertisements while surfing in their browsers. I’ve just recently learned that Google is killing uBlock Origin on the Chrome browser, and on all other Chromium-based browsers along with it.

For years we’ve heard people complaining, bitching, and whining about how they keep seeing ads, and the people trying to help them keep wasting time pointing out that they’re surfing without any extensions, whether it’s Chrome, Firefox, or another browser.

By this point, I’ve long stopped being that helper, because if you cared at all about the advertisements you see, you would have long since gotten on the adblocker wagon. You bring this on yourself.

  • Nutteman@lemmy.world · 14 days ago

    That would require an actual AGI to emerge, which it has not and is not going to. LLMs are fancy text prediction tools and little more.
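    (To make the “fancy text prediction” framing concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library; gpt2 and the prompt are illustrative choices only, not anything referenced in this thread.)

    ```python
    # Minimal sketch of an LLM as a next-token predictor.
    # The model (gpt2) and the prompt are illustrative choices only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Ad blockers work by", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # The model's whole output: a probability distribution over the next token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(token_id)])!r}: {p:.3f}")
    ```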

    • capital@lemmy.world · 14 days ago

      Are you assuming LLMs are the only way humans could ever try making an AGI? If so, why do you assume that?

      • anothermember@lemmy.zip · 14 days ago

        I agree that AGI is dangerous, but I don’t see LLMs as evidence that we’re close to AGI; I think they should be treated as separate issues.

        • capital@lemmy.world · 14 days ago

          Given what I think I know about LLMs, I agree. I don’t think they’re the path to AGI.

          The person I replied to said AGI was never going to emerge.

      • Nutteman@lemmy.world · 14 days ago

        There’s more important shit to worry about than whether an unproven sci-fi concept will come into being any time soon.

        • capital@lemmy.world · 14 days ago

          Yeah, agreed. That’s not what I asked though.

          This response is a bit of a misdirection since we all discuss shit that isn’t the most important all the time.

      • Jack Riddle@sh.itjust.works · 13 days ago

        If people start developing a new, more promising kind of “ai”, we can talk about it then. For now, the thing we call “AI” sucks and just steals.

    • Ceedoestrees@lemmy.world · 14 days ago

      What we see of AI as average consumers is like an RC Hot Wheels car next to the state-of-the-art tank being used by the big corps.

      Just imagine: if an early LLM could fool an engineer into thinking it was sentient, what could a state-of-the-art system do, one designed to predict the market, run propaganda bots on social media, or straight up manufacture news stories with the footage to back them up?

      The AI being used by big corporations is so advanced, it’s one of the reasons countries have been trying to digitally isolate themselves. It’s really not an if, it’s a when.

        • Ceedoestrees@lemmy.world · 14 days ago

          I do. I did get a little lost in the weeds with my point though, as I was talking in a more general sense about how AI is already powerful and dangerous, because AI safety is a subject in this thread.

      • huginn@feddit.it · 14 days ago

        The “AI” being used by big corporations is still fundamentally an LLM and has all the flaws of an LLM. It’s not a Hot Wheels car vs. a tank; it’s a Hot Wheels car vs. a $2 billion RC car.

        • Ceedoestrees@lemmy.world · 13 days ago

          I’d like to get into how both OP and I are talking about how fast AI, not just LLMs, is scaling, and the potential it has across a variety of industries; most concerning to me is its use by investment firms. But I need to go to the barber, because I already have enough split hairs.

          • huginn@feddit.it · 13 days ago

            It is my understanding that the fundamental architecture (the general-purpose transformer) is identical between the “AI” used by BlackRock and the one used by OpenAI.

            If you have some evidence to the contrary I’d always appreciate the chance to learn.

            But the transformer-based architecture is fundamentally flawed: it will always hallucinate.
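
            (To make the “general-purpose transformer” point concrete: the shared building block across these models is scaled dot-product attention. A rough numpy sketch follows, with toy shapes and names that are purely illustrative and not taken from any system mentioned here.)

            ```python
            # Scaled dot-product attention, the core operation of the
            # transformer architecture. Shapes and values are toy examples.
            import numpy as np

            def attention(Q, K, V):
                """Q, K, V: (seq_len, d_k) arrays of queries, keys, and values."""
                d_k = Q.shape[-1]
                scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
                scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
                weights = np.exp(scores)
                weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
                return weights @ V                              # weighted mix of values

            # Toy example: 4 tokens, one 8-dimensional attention head.
            rng = np.random.default_rng(0)
            Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
            print(attention(Q, K, V).shape)  # (4, 8)
            ```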

    • Dr. Moose@lemmy.world · 11 days ago

      “which it has not and is not going to”

      So you’re confident that AGI is not fundamentally possible? That would contradict basically every single scientist in the world, and this is exactly why this issue is so difficult. Ironically, that proves my point about the OP’s question lol