Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • SuspciousCarrot78@lemmy.world
    4 hours ago

    “The cogitation is happening in YOU” is just the philosophical zombie argument dressed up as a gotcha. Sure, there’s no ghost in the machine - but that’s true of your neurons too. Your brain is also “just” electrochemical signals on wet hardware. Does that mean your understanding is happening somewhere else?

    The point isn’t whether there’s a homunculus sitting inside the GPU having feelings. The point is that the functional operations happening - maintaining context, resolving ambiguity, applying something structurally similar to inference across novel inputs - are more than pattern-matching in the (dismissive) sense people mean when they say “autocomplete.”

    • Iconoclast@feddit.uk
      3 hours ago

      Sure, there’s no ghost in the machine - but that’s true of your neurons too.

      Touché.

      Intelligence doesn’t require a “self,” and we’re living proof of that. The ways LLMs and humans operate have far more in common than people like to admit. We’re just holding AI to higher standards.