Note: Article’s actual headline, by the way. It is The Register.

  • Perspectivist@feddit.uk
    2 days ago

    I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.

    And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
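The "builds a slightly better version, which builds an even better version" loop above is essentially a compounding recurrence. As a toy sketch only (the 10% per-generation gain and the "human level = 1.0" baseline are made-up assumptions, not claims about real systems):

```python
# Toy model of recursive self-improvement. Purely illustrative:
# the gain per generation and the capability scale are invented.
def self_improvement(capability=1.0, gain=0.1, steps=20):
    """Each generation builds a successor `gain` fraction more capable
    than itself; returns the capability trajectory."""
    history = [capability]
    for _ in range(steps):
        capability *= 1 + gain  # compounding: better builders build better successors
        history.append(capability)
    return history

trajectory = self_improvement()
# At 10% per generation, 20 generations compound to 1.1**20, roughly 6.7x
# the starting (human-level) capability.
```

The point of the sketch is just that *if* each gain is proportional to current capability, the curve is exponential and never plateaus at exactly 1.0, which is the commenter's "no plausible scenario where it stays at human level" intuition.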

    • Saledovil@sh.itjust.works
      2 days ago

      The whole exponential-improvement hypothesis assumes that the marginal cost of each improvement stays the same, which is a huge assumption.
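The objection can also be made concrete with a toy model (all numbers here are illustrative assumptions, not measurements): if each improvement step costs more than the last, a fixed research budget buys far fewer steps than a constant-cost model predicts.

```python
# Toy model: self-improvement where the marginal cost of each step rises
# instead of staying flat. Budget, step cost, and growth rate are invented.
def improvement_with_rising_costs(budget=1000.0, step_cost=1.0,
                                  cost_growth=2.0, max_steps=100):
    """Count how many improvement steps a fixed budget affords when each
    step costs `cost_growth` times more than the previous one."""
    completed = 0
    for _ in range(max_steps):
        if budget < step_cost:
            break                 # the next improvement is no longer affordable
        budget -= step_cost
        step_cost *= cost_growth  # marginal cost rises step over step
        completed += 1
    return completed

# With costs doubling per step, a 1000-unit budget buys only 9 steps;
# with constant costs (cost_growth=1.0) the same budget would buy 100+.
```

The contrast between the two settings is the whole argument: exponential takeoff follows from constant (or falling) marginal costs, and stalls if costs compound faster than capability does.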

      • Perspectivist@feddit.uk
        2 days ago

        Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.