• Almacca@aussie.zone · 86 points · 4 hours ago

    “We need to get beyond the arguments of slop vs sophistication,” Nadella laments, emphasizing hopes that society will become more accepting of AI, or what Nadella describes as “cognitive amplifier tools.” “…and develop a new equilibrium in terms of our “theory of the mind” that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other.”

    There’s that word ‘hope’ again. It seems to be the main driving force behind a.i. implementation.

    Also, I think “cognitive real-time amplifier product” makes for a better acronym.

    • LordMayor@piefed.social · 15 points · 3 hours ago

      Talk about putting the cart before the fucking horse.

      Calling them “cognitive amplifier tools” doesn’t make them “cognitive amplifier tools”. They’ve yet to show that they do anything of the sort.

      Spitting out some basic code that probably has to be edited anyway is not a “cognitive amplifier tool”.

      Generating smooth skinned avatars with questionable anatomy is not a “cognitive amplifier tool”.

      Writing creepy, repetitive prose is not a “cognitive amplifier tool”.

      Hell, the term “cognitive amplifier tool” is just nonsense anyway. For fuck’s sake, I hate these people.

      • k0e3@lemmy.ca · 4 points · 2 hours ago

        That term really creeped me out. It’s cringey to the point that it scares me. Delusional asshole.

      • Almacca@aussie.zone · 4 points · 3 hours ago

        They hope that’s what they’ll become. Spoiler: they won’t. You amplify your cognition by using it, not by relegating it to a dumb computer.

        • elfin8er@lemmy.world · 4 points · 2 hours ago

          They don’t even need to do that. They just need the general population to believe that they’re “cognitive amplifier tools”. If you say something enough times, eventually people will start believing it even if it isn’t true.

    • drcobaltjedi@programming.dev · 21 points · 3 hours ago

      There’s that word ‘hope’ again. It seems to be the main driving force behind a.i. implementation.

      “Hope is the worst of evils, for it prolongs the torment of man.” - Friedrich Nietzsche

      Dude’s gonna be disappointed when it’s still called slop, because people recognize it as slop.

      • gwl@lemmy.blahaj.zone · 1 point · 3 hours ago

        Huh, for some reason “against.it” got rendered as a URL in your message - which, it turns out, is a valid URL… but it costs £4000/year to buy.

    • buddascrayon@lemmy.world · 2 points · 2 hours ago

      The “hope” is that they will make shitloads of money from gullible dipshits buying their bullshit.

      • Almacca@aussie.zone · 1 point · 2 hours ago

        That’s more of a bet, which is the other word I keep seeing used about a.i.'s financial prospects. More or less the same thing though, so yeah.

          • Almacca@aussie.zone · 1 point · 2 hours ago

            Refer to the second sentence above. But I guess, with a bet, there’s money involved. Hope doesn’t specifically require that.

  • TigerAce@lemmy.dbzer0.com · 38 points · 4 hours ago

    Fascists: “We don’t like to be called fascists!”

    AI companies: “We don’t like our slop, created from stolen content, to be called slop!”

  • BoycottTwitter@lemmy.zip · 27 points · 59 minutes ago

    It’s hard to avoid right now, particularly if you’re a user of Microsoft ecosystem products.

    Solution:

    Windows -> Linux
    Office 365 -> LibreOffice for most of the suite; to replace Outlook, consider Thunderbird or SeaMonkey
    Edge/Chrome -> Firefox or a fork of Firefox.

  • magnetosphere@fedia.io · 28 points · 4 hours ago

    It’s not the general public’s fault that AI was released before it was ready for prime time. It’s not like the word “slop” was picked out of a hat, either. AI earned the word with its laughably bad output.

    I’ll make the CEO a deal, though. If AIs stop producing slop, I won’t call it slop anymore.

    Microsoft Copilot is the tip of the spear for the firm, powered entirely by ChatGPT and Microsoft’s savvy early investments in OpenAI.

    “Savvy”? Really? I think it’s years too early to make that assessment.

    Indeed, in closing, Nadella seems to admit that AI doesn’t truly have “societal permission” right now, referencing widespread backlash and mockery that continues to dog the technology.

    What kind of “permission” is he looking for? Permission to steal and ignore copyright? Permission to build untold numbers of data centers and do vast environmental damage? Permission to lie and sell a substandard product? He probably wants all three, and more.

    • captainlezbian@lemmy.world · 5 points · 3 hours ago

      The permission he’s talking about is being seen as cool or normal. This whole thing is him whining that we think his loser product that causes more problems than it solves is a loser product that causes more problems than it solves.

    • Kay Ohtie@pawb.social · 5 points · 3 hours ago

      I think he’s pulling the bullshit move of creating a sense that people who use GPT systems are somehow an oppressed class. I don’t think the “cognitive amplifier” framing is meant literally, so much as to create the sense that if you don’t treat that output as equal, you’re being unfairly judgmental.

      Basically, he doesn’t want output from these things judged on a level playing field against stuff that wasn’t created that way, because he knows it fucking fails.