I’ve seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (the article was also hating on Ed Zitron, which makes sense given its stance).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn to code but can’t grasp it very well). But yes, a lot of the time it’s wrong.

  • southernbrewer@lemmy.world

    I’m enjoying it, mostly. It’s definitely great at some tasks and terrible at others. You get a feel for which are which after a while:

    1. Throwaway projects - proofs of concept, one-off static websites, that kind of thing: absolutely ideal. Weeks of dev become hours, and you barely need to bother reviewing the code if it works.

    2. Research (find a tool for doing XYZ) where you barely know the right search terms: ideal. The research mode on claude.ai is especially amazing at this.

    3. Anything where the language is unfamiliar. AI bootstraps past most of the learning curve. Doesn’t help you learn much, but sometimes you don’t care about learning the codebase layout and you just need to fix something.

    4. Any medium-sized project with a detailed up-front description.

    What it’s not good for:

    1. Debugging in a complex system
    2. Tiny projects (a one-line change) - faster to do it yourself
    3. Large projects (a 500+ line change) - the diff becomes unreviewable fairly quickly and can’t be trusted (much worse than the same problem with a human, where you can at least trust the intent)