• Tattorack@lemmy.world · 19 hours ago

    It’s not a perspective. It just is.

    It’s not complicated at all. The AI hype is just surrounded by heaps of wishful thinking, like the paper you mentioned (side note: do you know how many papers on string theory there are? And how many of those papers are actually substantial? Yeah, exactly).

    A computer is incapable of becoming your new self-aware, evolved best friend simply because you turned Moby Dick into a bunch of numbers.

  • kromem@lemmy.world · 10 hours ago

      You do know how replication works?

      When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up replicating it and finding something new, and then later on another research team replicates it and finds even more new stuff, and then later on another researcher replicates it with a different board game and finds many of the same things the other papers found generalized beyond the original scope…

      That’s kinda the gold standard?

      The paper in question has been cited by 371 other papers.

      I’m pretty comfortable with it as a citation.

    • Tattorack@lemmy.world · 9 hours ago

        A citation count like that means it’s a hot topic; it doesn’t say anything about the quality of the research, and it certainly isn’t evidence of a lack of bias. And considering everyone wants their AI to be the first one to be aware to some degree, everyone making claims like yours is heavily biased.

      • kromem@lemmy.world · edited 18 minutes ago

          I’m sorry dude, but it’s been a long day.

          You clearly have no idea WTF you are talking about.

          The research other than the DeepMind researcher’s independent follow-up was all being done at academic institutions, so it wasn’t “showing off their model.”

          The research intentionally uses a toy model to demonstrate the concept in a cleanly interpretable way, showing that transformers can and do build tangential world models.
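          For what it’s worth, the core technique in those papers is just probing: freeze the model, grab its hidden activations, and train a small classifier to read a world-state feature (like a board cell) back out. Here’s a toy sketch of that idea in plain Python — synthetic “activations” and a hypothetical linearly-encoded “cell occupied” feature, not the papers’ actual code or models:

```python
import math
import random

random.seed(0)

# Hypothetical setup: each "activation" is an 8-dim vector, and an unknown
# direction w_true linearly encodes whether a board cell is occupied.
DIM = 8
w_true = [random.gauss(0, 1) for _ in range(DIM)]

def sample(n):
    """Generate synthetic activations with labels from the hidden direction."""
    xs, ys = [], []
    for _ in range(n):
        x = [random.gauss(0, 1) for _ in range(DIM)]
        score = sum(wi * xi for wi, xi in zip(w_true, x))
        xs.append(x)
        ys.append(1 if score > 0 else 0)
    return xs, ys

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_probe(xs, ys, lr=0.5, epochs=200):
    """Fit a linear probe (logistic regression) by batch gradient descent."""
    w = [0.0] * DIM
    for _ in range(epochs):
        grad = [0.0] * DIM
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for i in range(DIM):
                grad[i] += (p - y) * x[i]
        for i in range(DIM):
            w[i] -= lr * grad[i] / len(xs)
    return w

def accuracy(w, xs, ys):
    correct = sum(
        1 for x, y in zip(xs, ys)
        if (sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == (y == 1)
    )
    return correct / len(xs)

train_x, train_y = sample(400)
test_x, test_y = sample(200)
w = train_probe(train_x, train_y)
print(accuracy(w, test_x, test_y))  # high test accuracy means the feature is linearly decodable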

          The actual SotA AI models are orders of magnitude larger and fed much more data.

          I just don’t get why AI on Lemmy has turned into almost the exact same kind of conversations as explaining vaccine research to anti-vaxxers.

          It’s like people don’t actually care about knowing or learning things, just about validating their preexisting feelings about the thing.

          Huzzah, you managed to dodge learning anything today. Congratulations!