Note: that’s the article’s actual headline, by the way. It is The Register.

  • melsaskca@lemmy.ca · 8 points · 13 hours ago

    We live in a warlike world run by Billionaires and Pedophiles. There is no forward progress except for those within a bubble. Where is the intelligence or super-intelligence? Words used to have meaning, dammit!

  • Basic Glitch@sh.itjust.works · 11 points · edited · 15 hours ago

    You think any of this ends with superintelligence for us all? Is that why you’re building an underground doomsday bunker and tunnel in Hawaii?

  • Auli@lemmy.ca · 6 points · 14 hours ago

They love AI because it’s a data vacuum: it sucks up everything anyone asks.

  • Perspectivist@feddit.uk · 7 points · 18 hours ago

    I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.

    And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.

    • Saledovil@sh.itjust.works · 6 points · 18 hours ago

      The whole exponential improvement hypothesis assumes that the marginal cost of each improvement stays the same. Which is a huge assumption.
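      That objection can be made concrete with a toy model (entirely hypothetical numbers, just to illustrate the shape of the argument): if each round of self-improvement costs more than the last, the feedback loop stalls after a few generations instead of running away.

      ```python
      # Toy model of recursive self-improvement under two cost regimes.
      # All numbers are made up for illustration; the only point is the
      # qualitative difference between flat and rising marginal costs.

      def run(generations, cost_growth):
          capability = 1.0
          budget_per_gen = 1.0   # fixed resources available per generation
          cost = 0.5             # marginal cost of the first improvement
          for _ in range(generations):
              if cost > budget_per_gen:
                  break          # next improvement is unaffordable: plateau
              capability *= 1.1  # each affordable step gives a 10% gain
              cost *= cost_growth  # how the next step's cost scales
          return capability

      flat = run(100, 1.0)    # constant marginal cost: growth compounds unchecked
      rising = run(100, 1.2)  # 20% cost growth per step: stalls after 4 steps
      ```

      With constant marginal cost the capability compounds for all 100 generations; with even modestly rising cost the loop halts almost immediately. The "intelligence explosion" scenario is the special case where cost_growth stays at or below 1.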

      • Perspectivist@feddit.uk · 2 points · 17 hours ago

        Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.

  • HakunaHafada@lemmy.dbzer0.com · 28 points · 1 day ago

    Before ChatGPT kicked off the AI boom in late 2022, you may recall Zuckerberg was convinced virtual reality would take over the world. As of Q1, the company’s Reality Labs team has burned some $60 billion trying to make the Metaverse a thing.

    Absolutely hilarious.

    • npdean@lemmy.today · 8 points · 23 hours ago

      They are not drunk. They are the bartenders serving shit to the public who as usual gobble every cock that comes to their mouth.

  • SoftestSapphic@lemmy.world · 15 up, 2 down · edited · 1 day ago

    AI hit a wall years ago

    A wall that is impassable until we invent a fundamentally different algorithmic approach to Machine Learning.

    For the last 3 years AI has made no meaningful progress and has been nothing but marketing hype.

    • Ŝan@piefed.zip · 5 up, 2 down · 1 day ago

      Absolutely yes!

      If you look at the history of AI development, it goes through bumps and plateaus, with years and sometimes decades between major innovations. Every bump is accompanied by a bunch of press, some small applications, and then a fizzle.

      The current plateau is because LLMs are only stochastic engines with no internal world or understanding of the gibberish they’re outputting, but also the massive energy debt they incur is a limiter. Unless AI chips advance enough to drop energy requirements by an order of magnitude; or we find a source of free limitless energy; or there’s another spectacular innovation that combines generative or fountain design with deep learning, or maybe an entirely new approach; we’re already on the next plateau, just as you say.

      I personally believe it’ll take a new innovation, not an iteration of deep learning, to make the next step. I wouldn’t be surprised if the next step is AGI, or close enough that we can’t tell the difference, but I think that’s a few years off.