I’ve seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (one of those articles was also hating on Ed Zitron, which makes sense given its angle).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn to code but can’t grasp it very well). But yes, a lot of the time it’s wrong.

  • Naich@lemmings.world · 2 days ago

    You can either spend your time generating prompts, tweaking them until you get what you want, and then using more prompts to refine the code until you end up with something that does what you want…

    or you can just fucking write it yourself. And there’s the bonus of understanding how it works.

    AI is probably fine for generating boilerplate code or repetitive simple stuff, but personally I wouldn’t trust it any further than that.

    • MagicShel@lemmy.zip · 2 days ago

      There is a middle ground. I have one prompt I use. I might tweak it a little for different technologies, languages, etc., only so I can fit more standards, documentation, and example code within the upload limit.

      And I ask it questions rather than asking it to write code. I have it review my code, suggest other ways of doing something, explain best practices, or evaluate maintainability and conformance to corporate standards.

      Sometimes it takes me down a rabbit hole when I’m outside my experience (so do Google and Stack Overflow, for what it’s worth), but if you’re executing a task you understand well on your own, it can help you do it faster and/or better.