• UnderpantsWeevil@lemmy.world (+101/−2) · edited 1 day ago

    Twenty years ago, some savvy engineers created the Roomba. It’s a kind of annoying, overpriced vacuum cleaner, but you can at least see the edges of a useful appliance.

    Ten years ago, everyone was smelling their own farts about the advent of autonomous taxis. And, idk, we can at least pretend we’re in the ballpark right now.

    Today, it’s noticeable how AI feels like a surrender on actual autonomous machines. Like, the idea of doing actual robotics is out the window. We’re only going to spend our money on a solved problem - data processing - and see what we can wring out of it.

    I do wonder what this will mean in another ten or twenty years.

    • cabb@lemmy.dbzer0.com (+1) · 51 minutes ago

      It’s not hyped in the media, probably because it’s Chinese companies doing it, but there have been massive strides towards creating humanoid robots in the past couple of years. They aren’t anywhere near autonomous yet, but there’s a simple humanoid robot selling for $5,900 today.

    • jj4211@lemmy.world (+7) · 10 hours ago

      Yeah, that robotics stuff is happening still (e.g. little robot mowers using machine vision for easier guidance).

      But you are right that the prospect of advanced robotics gets a fraction of the attention that chatbots get. Trillions bet on datacenter-bound LLMs that can generate images, videos, and text, but a relative pittance for advancements that would translate to physical labor…

      I get that there’s value, but the value proposition seems way out of whack.

    • beejboytyson@lemmy.world (+11/−1) · 16 hours ago

      So this is scary. We already know that POTUS has no qualms about using AI to make shitty fake videos. I wouldn’t be surprised if his plan for an indefinite term is actually a ChatGPT plan.

      PS: ChatGPT DOES actively try to manipulate you.

      • realitista@lemmus.org (+11) · 13 hours ago

        PS: ChatGPT DOES actively try to manipulate you.

        I think mostly it just tells you what it thinks you want to hear. If you push it hard enough to tell you something, it will tell you that. Grok, OTOH, is explicitly trained to try to give you biased answers.

      • Klear@lemmy.world (+5/−1) · 14 hours ago

        PS: ChatGPT DOES actively try to manipulate you.

        No, it doesn’t “try” to do anything. Don’t anthropomorphise it.

        • blind3rdeye@aussie.zone (+3) · 10 hours ago

          It’s a shorthand way of communicating. Like saying that a good search engine tries to find the most relevant sites, or a streaming algorithm tries to recommend videos that you’ll watch. We’re not saying these things are conscious or whatever. We’re just describing what they do.
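
          In code terms, the “trying” is just an objective being maximized. A toy sketch (all names and scores hypothetical, just to illustrate the shorthand):

          ```python
          # A toy recommender: it "tries" to recommend the most relevant item
          # only in the sense that it picks whichever item maximizes a score.
          def recommend(items, score):
              """Return the item with the highest relevance score."""
              return max(items, key=score)

          videos = {"cats": 0.9, "news": 0.4, "vlog": 0.7}
          best = recommend(videos, lambda v: videos[v])
          # "cats" is chosen because its score is highest, not because
          # anything "wants" it to be chosen.
          ```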

    • shadowfax13@lemmy.ml (+18/−1) · 22 hours ago

      There is plenty of work going on on the autonomous front, but it’s solely focused on means to kill the peasants who try to inconvenience the oligarchs. Take a look at Palmer Luckey’s Anduril Industries. There are other such companies based in the Confederate states.

      • UnderpantsWeevil@lemmy.world (+2/−1) · 21 hours ago

        There’s a lot of salesmanship going on. But whether it really does anything, just outsources the work to a Mechanical Turk, or blows smoke up your ass is another question.

        Even Lavender AI seems more like an excuse for conventional manual massacre than a real Skynet for Palestinians.

        What does Anduril do that the NSA or the CIA wasn’t already doing, except maybe putting a thick coat of bullshit on the conclusion?

        • shadowfax13@lemmy.ml (+3) · 17 hours ago

          The NSA and CIA can’t whack large numbers of Americans without some risk of mutiny from the soldiers. AI-based large-scale precision strikes give them both the capability and the excuse to do so. What’s easier: asking 100k soldiers to kill 2–3 people each and feel no guilt, or asking a dozen narcissistic psychopaths to kill 10k people each and feel godlike? It’s not really rocket science, just track their cell and do a missile/drone strike at the location with basic face detection. They don’t care about a few false positives. It’s not like any of the targets will be living near the oligarch’s palace or mega-yacht.

          You’re right that Lavender is far from Skynet, but it lets the Israeli scum say that they are not indiscriminately killing innocents, just acting on petabytes of data analysed by Microsoft. And you can’t punish an AI for mistakes. Bombing kids collecting water or destroying a dialysis center? Oopsie, sorry, the AI hallucinated, we’ll ask Trump for a multi-billion-dollar refund to buy more bombs for proper testing.

          • UnderpantsWeevil@lemmy.world (+5) · 13 hours ago

            The NSA and CIA can’t whack large numbers of Americans without some risk of mutiny from the soldiers

            I haven’t seen any evidence of that. If anything, the number of ex-military in the new ICE would suggest the opposite.

            it lets the Israeli scum say that they are not indiscriminately killing innocents, just acting on petabytes of data analysed by Microsoft

            Sure. Drop a bomb, kill a dozen people, get a little print out that says “all of these people were terrorists”.

            It works well in Israel largely because the public (and particularly the military) are juiced on hate. They’ve already done 90% of the work of indoctrination up front.

            You don’t even talk about hallucinations. You just say “data indicated terrorists” and move on.

            But it’s trivial to tell a computer to tag a picture of a person as <img person="terrorist">. That’s not something Palmer Luckey needs a multi-billion-dollar payday to accomplish. Just like with Elon, a bunch of this tech is smoke and mirrors. It’s six guys dragging a CyberTaxi on stage so a few dorks can step out and say “The Future is Magic Cars!”