I’ve seen a few articles saying that, instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (the article was also hating on Ed Zitron, which explains why it would say that).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted to learn to code my whole life but can’t grasp it very well). But yes, a lot of the time it’s wrong.

  • Sicklad@lemmy.world · 2 days ago

    From my experience it’s great at doing things that have been done 1000x before (which makes sense given the training data), but when it comes to building something novel it really struggles, especially if there are third-party libraries involved that aren’t commonly used. So you end up spending a lot of time and money hand-holding it through things that likely would have been quicker to do yourself.

    • kewjo@lemmy.world · 2 days ago

      The “1000x before” bit has quite a few side effects as well.

      • Lesser-used languages suffer because there’s not enough training data. This gets annoying quickly when the model overrides your static analysis tools and suggests nonsense.
      • Larger training sets contain more vulnerabilities, since most code out there is pretty terrible and may just be snippets someone used once and threw away. OWASP has a top 10 for a reason. Take input validation: if I’m parsing a string, there’s usually context, like whether the data is trusted or untrusted. If I don’t have that mental model where I’m thinking about the data, I might see generated code, think it looks correct, and in reality it’s doing something extremely nefarious (see the sketch below).
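
      For what it’s worth, a minimal sketch of the trusted/untrusted distinction I mean (a hypothetical example of mine, not anything generated):

      ```python
      import re

      # Allowlist: the only shape of input we accept from an untrusted source.
      USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

      def parse_username(raw: str) -> str:
          """Treat raw as untrusted: reject anything outside the allowlist."""
          if not USERNAME_RE.fullmatch(raw):
              raise ValueError("invalid username")
          return raw

      # Generated code often skips this check and feeds raw input straight
      # into a query or command string, which is exactly where OWASP-style
      # injection bugs come from.
      ```
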
      • mesa@piefed.social · 2 days ago

        It’s also trained on old stuff.

        And because it’s old, you get some very strange side effects and less maintainable code.
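
        For example (my sketch, not output from any particular model): assistants trained on old code will still suggest datetime.utcnow(), which is deprecated as of Python 3.12 and returns a naive datetime:

        ```python
        from datetime import datetime, timezone

        # The stale idiom old training data keeps producing:
        # deprecated since Python 3.12, and naive (no tzinfo attached).
        stale = datetime.utcnow()

        # The current idiom: an aware datetime in UTC.
        current = datetime.now(timezone.utc)
        ```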

      • MagicShel@lemmy.zip · edited 2 days ago

        It’s decent at reviewing its own code, especially if you give it different lenses to look through.

        “Analyze this code and look for security vulnerabilities.”
        “Analyze this code and look for ways to reduce complexity.”

        And then… think about the response like it’s a random dude online reviewing your code. Lots of times it raises good issues, but sometimes it tries too hard to find little shit that’s at best a sidegrade.
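
        A minimal sketch of that loop (ask_llm is a hypothetical stand-in for whatever chat API you use; the prompts are the point):

        ```python
        # Each "lens" is just a different review prompt over the same code.
        LENSES = [
            "Analyze this code and look for security vulnerabilities.",
            "Analyze this code and look for ways to reduce complexity.",
        ]

        def review(code: str, ask_llm) -> dict[str, str]:
            """Run the same code past each lens and collect the responses."""
            return {lens: ask_llm(f"{lens}\n\n{code}") for lens in LENSES}
        ```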

    • Eq0@literature.cafe · 2 days ago

      The PyCharm AI integration completes each line. That’s very useful when you’re repeating a well-known algorithm, and not distracting when you’re doing something unusual. So overall, for small things AI is a speed-up. I haven’t tried asking ChatGPT for bigger code chunks; I haven’t had the greatest experience with it so far, and I don’t want to spend more time debugging than I already do.

      • ripcord@lemmy.world · 2 days ago

        Oh man, the Codeium autocomplete in PyCharm has been just awful for me. It’s slow enough that it never shows up fast enough for me to expect it (and rarely appears when I pause to wait for it), then vanishes instantly when I invariably keep typing just as it comes up. It won’t come back if I backspace, erase the word, and start retyping it, etc. And it sometimes competes with the old-school PyCharm autocomplete, which adds another layer of fun.