edited from talent to job

  • LouNeko@lemmy.world
    link
    fedilink
    arrow-up
    10
    ·
    1 day ago

    Anti-cheats. Train an AI on gameplay data (position, actions, round duration, K/D, etc.) of caught cheaters and use that to flag new ones. No more kernel-level garbage, just raw gameplay data.
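
    A minimal sketch of what that could look like, assuming you already have per-match stats labelled from past cheat bans (the feature names, file, and gradient-boosted model here are illustrative choices, not any particular anti-cheat’s actual pipeline):

    ```python
    # Sketch: flag likely cheaters from raw per-match gameplay stats.
    # Assumes a labelled dataset where is_cheater = 1 marks confirmed bans.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Illustrative features: gameplay data only, no kernel-level hooks.
    FEATURES = ["kd_ratio", "headshot_pct", "reaction_ms",
                "round_duration", "actions_per_min"]

    matches = pd.read_csv("match_stats.csv")  # hypothetical export of match data
    X, y = matches[FEATURES], matches["is_cheater"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))

    # Flag high-probability accounts for human review rather than auto-banning.
    suspects = matches[model.predict_proba(matches[FEATURES])[:, 1] > 0.95]
    print(suspects.head())
    ```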

    • jimmycrackcrack@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      21 hours ago

      It’s also good since it’s low stakes. I mean, I’d be furious if I were misidentified after I paid for the game, but at the end of the day it’s only a game.

  • Tattorack@lemmy.world
    link
    fedilink
    arrow-up
    14
    arrow-down
    1
    ·
    2 days ago

    Any body-breaking heavy labour. Emphasis on body-breaking; there’s nothing wrong with hard work, but there are certain people who believe hard work = leaving your body destroyed at 50.

  • steeznson@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    23 hours ago

    I think many things that solicitors do could be easily replaced with AI since it’s just parsing the contents of documents and then writing a few templated summaries.
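
    For the document-summary part at least, off-the-shelf tools can already do a rough first pass. A sketch using the OpenAI Python client, where the model choice, file name, and template headings are all made up for illustration (and a real firm would still want a solicitor to check the output):

    ```python
    # Sketch: produce a templated plain-English summary of a contract.
    # Illustrative only; prompt, headings, and model choice are assumptions.
    from pathlib import Path

    from openai import OpenAI

    TEMPLATE = """Summarise the contract below for a client.
    Use exactly these headings: Parties, Term, Fees, Termination, Key risks.

    {contract}"""

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    contract_text = Path("lease_agreement.txt").read_text()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": TEMPLATE.format(contract=contract_text)}],
    )
    print(response.choices[0].message.content)
    ```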

    • jimmycrackcrack@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      21 hours ago

      Yeh, I think people like this idea because of a kind of ironic poetic justice, since it’s those guys who wanted to replace everyone else except themselves with AI. But if you think about how much you hated those uncaring bastards operating like robots just to extract an ounce of profit at whatever the human cost, imagine them now actually being a robot.

      Also, if you ever had to deal with bullshit from those guys and resented having to grin and bear it even though you don’t think they’re particularly qualified and also know nothing about your job, imagine having to be “managed” by a fucking robot that says patronising, encouraging things because it’s learned the very best pattern of speech to get the behaviour it wants out of you.

      Admittedly at least some of the decision making might be a bit more rational, but then every now and then AI gets things totally out of whack in the strangest ways, and you’ll just have to take those decisions, from a damn machine.

    • Numuruzero@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      9
      ·
      2 days ago

      I get what you’re going for but I have a hard time imagining this as a good thing so long as companies are profit driven.

  • spicy pancake@lemmy.zip
    link
    fedilink
    English
    arrow-up
    30
    ·
    3 days ago

    Perhaps it’s not possible to fully replace all humans in the process, but harmful content filtering seems like something where taking the burden off humans could do more good than harm if implemented correctly (big caveat, I know.)

    Here’s an article detailing a few people’s experiences with the job and just how traumatic it was for them to be exposed to graphic and disturbing content on Facebook requiring moderator intervention.
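
    As a rough illustration of that triage idea (not Facebook’s actual pipeline), a model can score posts so that only the genuinely ambiguous ones reach a human. A sketch assuming the open-source Detoxify toxicity model, with made-up thresholds:

    ```python
    # Sketch: auto-triage reported posts so humans only review the grey area.
    # Uses the open-source Detoxify model; the thresholds are arbitrary.
    from detoxify import Detoxify

    model = Detoxify("original")

    def triage(post: str) -> str:
        score = model.predict(post)["toxicity"]
        if score > 0.95:
            return "auto-remove"    # clear-cut: no human has to look at it
        if score < 0.10:
            return "auto-approve"
        return "human review"       # only ambiguous content reaches a moderator

    for post in ["have a nice day", "some reported post text"]:
        print(post, "->", triage(post))
    ```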

    • BougieBirdie@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      40
      arrow-down
      2
      ·
      3 days ago

      CEO is usually my answer as well when people ask

      Like, honestly too. The humans running the show are outrageously expensive, cause huge ecological harm, make their decisions based on vibes with no understanding of their domain, and their purposes are inscrutable to the average worker. They’re honestly the perfect target for AI because they already behave like AI.

      I don’t think I actually want to live in a world where AI is running the show, but I’m not sure it’d be any worse than the current system of letting the most parasitic bloodsucking class of human being call the shots. Maybe we ought to try something else first.

      But make sure to tell the board of directors and shareholders how much more profitable they’d be if they didn’t have to buy golden parachutes

      • Norin@lemmy.world
        link
        fedilink
        arrow-up
        9
        ·
        3 days ago

        I’d say that you could replace quite a few high level academic administrators for these same reasons.

        They already behave like AI, but AI would be cheaper, more efficient, and wouldn’t change every 2 years.

        And I mean that as an insult to admin, not a compliment to AI.

        • Flying Squid@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          24 hours ago

          I was going to ask if you would want an AI to decide on tenure, then I thought about the people who usually decide on tenure and maybe AI is a better option.

  • EnderMB@lemmy.world
    link
    fedilink
    arrow-up
    35
    arrow-down
    2
    ·
    3 days ago

    Preface: I work in AI, and on LLMs and compositional models.

    None, frankly. Where AI will be helpful to the general public is in providing tooling to make annoying tasks (somewhat) easier. It’ll be an assisting technology, rather than one that can replace people. Sadly, many CEOs, including the one where I work, either outright lie or are misled into believing that AI is solving many real-world problems, when in reality there is very little or zero tangible involvement.

    There are two areas where (I think) AI will actually be really useful:

    • Healthcare, particularly in diagnostics. There is some cool research here, and while I am far removed from this, I’ve worked with some interns that moved on to do really cool stuff in this space. The benefit is that hallucinations can actually fill in gaps, or potentially push towards checking other symptoms in a conversational way.

    • Assisting those with additional needs. IMO, this is where LLMs could be really useful. They can summarize huge amounts of text into braille/speech, they can provide social cues for someone who struggles to focus/interact, and one surprising area where they’ve been considered great (in a sad but also happy way) is in making people who rely on voice assistants feel less lonely.

    In both of these areas you could argue that an LLM might replace a role, although maybe not a job. Sadly, the other side to this is the American executive mindset of “increasing productivity”. AI isn’t a push towards removing jobs entirely, but towards squeezing more productivity out of workers to enable the reduction of labor. It’s why many technological advancements are both praised and feared: we’ve long reached a point where productivity is as high as it has ever been, but jobs are getting harder, pay is getting worse and worse, and execs are becoming more and more powerful.

    • Scrubbles@poptalk.scrubbles.tech
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 days ago

      I was super nervous AI would replace me, a programmer. So I spent a long time learning, hosting, running, and coding with models, and man did I learn a lot, and you’re spot on. They’re really cool, but practical applications vs standard ML models are fairly limited. Even the investors are learning that right now: a lot of it was pure hype, and now we’re finding out which companies are actually using AI well.

      • jj4211@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        3 days ago

        There are a fair number of “developers” that I think will be displaced.

        There was a guy on my team from an offshoring site. He was utterly incompetent and never learned. He produced garbage code that didn’t work. However, he managed to stay for about 4 years, and even then he left on his own terms. In those 4 years, a grand total of 12 lines of his code made it into any codebase.

        Dealing with an LLM was awfully familiar. It reminded me of the constant frustration of management forcing me to try to work with him to make him productive. Except the LLM was at least quick in producing output, and unable to go to management and blame everyone else for its shortcomings.

        He’s an extreme case, but in large development organizations there’s a fair number of mostly useless developers that I think LLMs can rationalize away to a management team that otherwise thinks “more people is better and offshoring is good, so they must be good developers”.

        Also, enhanced code completion, where blatantly obvious input is made less tedious to type.

        • Scrubbles@poptalk.scrubbles.tech
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          I’ll give you that one. LLMs in their current state help me write code that otherwise I would be putting off or asking someone else to do. Not because it’s hard but because I’ve done it 1000 times and I find it tedious, and I’d expect an entry-level/junior dev to take it in stride. Even right now I’m using it to write some Python code that otherwise I just don’t want to write. So, I guess it’s time to uplevel engineers. The bar has been raised, and not for the first time in our careers.

  • rickdg@lemmy.world
    link
    fedilink
    arrow-up
    23
    arrow-down
    1
    ·
    3 days ago

    The kind of dangerous jobs where people still get paid to risk their life and health.

    • JackbyDev@programming.dev
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 day ago

      My greatest fear is that we’ll get the robots (like the general-purpose robots in Animatrix: Second Renaissance or I, Robot) before we have any sort of progressive change or revolution. That we’ll be one step from a truly carefree life.