• aesthelete@lemmy.world · 2 points · 1 hour ago (edited)

    It’s great at bullshitting that it did what you wanted, even if it obviously didn’t, which I guess is what counts for results at Microsoft.

    It would be much better if they treated it as the slightly better (yeah, I said it) autocomplete that it is instead of the beginning of fucking Skynet – which was supposed to be a bad thing anyway, remember?

    But that wouldn’t move the needle on all of the share prices, so instead we have to pretend it can do people’s jobs when it fucking obviously cannot.

    So, instead they keep pushing this AI (auto-complete insanity), and keep burning more and more cash. Imagine if we just put a portion of these billions (approaching trillions) into anything that could actually help anyone. Or don’t, because it’s pretty fucking depressing to think about.

  • Lyrac@programming.dev · 3 points · 3 hours ago

    Big over-promise. We’re heavily incentivized to use an AI coding agent at work. I try to be optimistic and treat it like a tool to help me do things I already know how to do, but a little bit faster. It takes multiple iterations of “no, this still isn’t working” to get something that I can touch up and push for review. The idea that I can prompt it, step away for ten minutes to make coffee, and return to a finished app is ludicrous.

    Maybe one day that will be possible. Then I’ll find a new job, I guess.

  • llama@lemmy.zip · 16 points · 7 hours ago

    Actually it won’t be finishing anything because code is disposable now and nobody cares what trivial app somebody can churn out

  • lightnegative@lemmy.world · 14 points · 10 hours ago

    Writing code is the reward for doing the thinking. If the LLM does it then software engineering is no fun.

    It’s like painting - once you’ve finally finished the prep, which is 90% of the effort, actually getting to paint is the reward

    • PolarKraken@lemmy.dbzer0.com · 3 points · 4 hours ago (edited)

      What a great way to frame it, I love this! I typically spend something like 60-80% of time available for a given task thinking through approaches and trade-offs, etc. Usually there comes a point when the way forward becomes clear, even obvious.

      After that? Bliss. I’m snapping together a LEGO set I designed, composed of pieces I picked (maybe made one or two new ones!), and luxuriating in how it all feels, when put together.

  • Prior_Industry@lemmy.world · 13 points · 16 hours ago

    I mean, it gets there in the end, but it’s often three or four prompts before it provides working code for a relatively simple PowerShell script. I can’t imagine that it scales to complex code that well at the moment, but then again I’m not a coder.

  • YesButActuallyMaybe@lemmy.ca · 15 points · 18 hours ago

    Ah, get outta here! Next time they’ll say that Copilot also chooses my furry porn and controls my buttplug while it codes for me.

  • ☂️-@lemmy.ml · 15 points · 18 hours ago

    where are my penguin boys at. 🐧

    seriously people. the majority of you don’t have to put up with this, you know that right?

  • Treczoks@lemmy.world · 40 points · 22 hours ago

    What they forget to mention is that you then spend the rest of the week fixing the bugs it introduced and explaining why your code deleted the production database…

  • melfie@lemy.lol · 8 points · 18 hours ago

    A more appropriate line would be that Copilot can shit out code faster than you can pinch off your own loaf.

    • Thorry@feddit.org · 71 points · 1 day ago

      Also just because the code works, doesn’t mean it’s good code.

      I had to review some code the other day that was clearly created by an LLM. Two classes needed to talk to each other in a somewhat complex way, so I would expect one class to create some kind of request data object, submit it to the other class, and get back some kind of response data object.

      What the LLM actually did was pretty shocking: it used reflection to reach from one class into the other class’s private properties, where the required data lived. It then just straight up stole the data and did the work itself (wrongly as well, I might add). I just about fell off my chair when I saw this.
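
      To make it concrete, here’s a minimal, self-contained Java sketch of that pattern. The class and field names are made up for illustration (this is not the actual code I reviewed), and I’ve collapsed the request/response objects into a single method call to keep it short. The first path goes through the other class’s public API the way you’d expect; the second cracks open a private field with reflection and redoes the work by hand:

      ```java
      import java.lang.reflect.Field;
      import java.util.List;

      // Hypothetical classes, purely to illustrate the anti-pattern described above.
      class Invoice {
          final double amount;
          Invoice(double amount) { this.amount = amount; }
      }

      class InvoiceStore {
          private final List<Invoice> invoices = List.of(new Invoice(10.0), new Invoice(5.5));

          // The intended collaboration: ask the owning class, get an answer back.
          double total() {
              return invoices.stream().mapToDouble(i -> i.amount).sum();
          }
      }

      public class ReflectionHackDemo {
          public static void main(String[] args) throws Exception {
              InvoiceStore store = new InvoiceStore();

              // Expected: go through the public API.
              System.out.println("via API: " + store.total());

              // What the generated code did instead: crack open the private field
              // with reflection, "steal" the raw data, and duplicate the logic.
              Field f = InvoiceStore.class.getDeclaredField("invoices");
              f.setAccessible(true);
              @SuppressWarnings("unchecked")
              List<Invoice> stolen = (List<Invoice>) f.get(store);
              double total = 0;
              for (Invoice i : stolen) total += i.amount;
              System.out.println("via reflection hack: " + total);
          }
      }
      ```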

      So I asked the dev. He said he didn’t fully understand what the LLM did; he wasn’t familiar with reflection. But since it seemed to work in the few tests he did, and the unit tests the LLM generated passed, he thought it would be fine.

      Also, the unit tests were wrong. I explained to the dev that even with humans it’s usually a bad idea to have the person who wrote the code also (exclusively) write the unit tests. Whenever possible have somebody else write the unit tests, so they don’t have the same assumptions and blind spots. With LLMs this is doubly true: an LLM will just straight up lie in the unit tests, if they aren’t complete nonsense to begin with.
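
      If it helps, here’s a tiny hypothetical example (JUnit 5 assumed, names invented) of what I mean by a test that lies. The first test just bakes the implementation’s own wrong assumption back in, so it passes while the requirement is still broken; the second is the test someone without the same blind spot would write, and it fails:

      ```java
      import static org.junit.jupiter.api.Assertions.assertEquals;

      import org.junit.jupiter.api.Test;

      // Hypothetical requirement: 10% discount only on orders over 100.
      class DiscountCalculator {
          double apply(double orderTotal) {
              return orderTotal * 0.9; // bug: discounts every order
          }
      }

      class DiscountCalculatorTest {
          @Test
          void appliesDiscount() {
              // "Lying" test: asserts whatever the buggy code already returns.
              assertEquals(45.0, new DiscountCalculator().apply(50.0), 1e-9);
          }

          @Test
          void noDiscountBelowThreshold() {
              // The test an independent author would more likely write. It fails.
              assertEquals(50.0, new DiscountCalculator().apply(50.0), 1e-9);
          }
      }
      ```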

      I swear to the gods, LLMs don’t save time or money; they just give the illusion that they do. A task of a few hours takes 20 minutes and everyone claps. But then another task takes twice as long, and we just don’t look at that. And the quality suffers a lot, without anyone really noticing.

      • airgapped@piefed.social · 14 points · 23 hours ago

        Great description of a problem I’ve noticed with most LLM-generated code of any decent complexity. It will look fantastic at first, but you will be truly up shit creek by the time you realise it didn’t generate a paddle.

      • Kissaki@feddit.org · 4 points · 18 hours ago

        So I asked the dev. He said he didn’t fully understand what the LLM did; he wasn’t familiar with reflection.

        Big baffling facepalm moment.

        If they would at least note that up front in the changeset description, it would be easier to interpret and assess.

      • criss_cross@lemmy.world · 1 point · 18 hours ago

        They’ve been great for me at optimizing bite-sized annoying tasks. They’re really bad at doing anything beyond that. Like astronomically bad.

        • Kissaki@feddit.org · 8 points · 18 hours ago (edited)

          They did say why they’re doing it:

          Whenever possible have somebody else write the unit tests, so they don’t have the same assumptions and blind spots.

          Did that not make sense to you?

          I usually wouldn’t do that, because it’s a bigger investment. But it certainly makes logical sense to me and is something teams can weigh and decide on.

        • WaitThisIsntReddit@lemmy.world · 7 points · 1 day ago

        A couple of agent iterations and it will compile. It definitely won’t do what you wanted, though, and if it does, it will do it in the dumbest way possible.

          • TORFdot0@lemmy.world · 8 points · 1 day ago (edited)

            Yeah, you can definitely bully AI into giving you something that will run if you yell at it for long enough. I don’t have that kind of patience.

          Edit: typically I see it just silently dump errors to /dev/null if you complain about it not working lol

            • Darkenfolk@sh.itjust.works · 1 point · 1 day ago

              And people say that AI isn’t humanlike. That’s peak human behavior right there, needing someone to bother you out of procrastination mode.

              The edit makes it even better. Sweeping things under the rug? Hell yeah!