Curious where others might stand.

My day-to-day “coding” is reviewing, revising, and running plans against LLM/code-assistant tools. I juggle around 2–3 sessions of this at a time, across various features or tasks.

  • Flamekebab@piefed.social
    3 days ago

    Unless it’s the most rudimentary logic I tend to have to hold the LLM’s hand through the entire design. Sometimes that involves breaking the damn thing’s fingers.

    I mostly use them to write stuff that I can do but hate because the syntax is a faff (argparsers, docstrings, shite like that). There’s just too much contextual knowledge needed for them to be much use with the codebase I work with, and much of that knowledge lives not in the codebase itself but in the metatextual context around it. I write a lot of software for testing hardware, and there’s a needle that has to be threaded: the specifications are not bulletproof, the hardware under test failing may be the desired outcome (so code that passes on it is inherently wrong), and various limitations and priorities inform the test approach.
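For readers unfamiliar with the kind of boilerplate being offloaded here, a rough sketch of typical argparse scaffolding (flag names, defaults, and the board-ID argument are all invented for illustration, not from the commenter's actual codebase):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI for a hardware test runner; tedious to type, easy to review.
    parser = argparse.ArgumentParser(
        description="Run a hardware test sequence."
    )
    parser.add_argument("board_id",
                        help="serial number of the board under test")
    parser.add_argument("--retries", type=int, default=3,
                        help="attempts per test step (default: 3)")
    parser.add_argument("--verbose", action="store_true",
                        help="print per-step results")
    return parser

args = build_parser().parse_args(["XYZ-001", "--retries", "5"])
print(args.board_id, args.retries, args.verbose)
```

This is exactly the sort of code that is trivial to verify by eye but annoying to write, which is why it delegates well.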

    The actual code I write is fairly minimal but the knowledge needed to write it is extensive.

    Occasionally I’ll take on side projects at work to get my hands dirty on something more involved but simpler.

    • etchinghillside@reddthat.comOP
      2 days ago

      This is about the same behavior I’m accustomed to. I will say that my current work is more greenfield at the moment.

      First plan is about 60% there. And then we have a few iterations to get that in good shape.

      Once the plan is together I send that to a virtual machine to implement the code with low/no supervision.

      Then it goes into a draft PR that I review and provide further guidance on, with changes or updates. Those iterations happen on either my virtual or local machine, depending on how much work there is.

      I could imagine it would be pretty difficult with hardware, where you have to compile and potentially transfer to a chip to run or test further. But I will say, if you can give an LLM access to the whole loop of writing, compiling, testing, and reading error logs, the results can sometimes be impressive.
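A minimal sketch of what closing that loop could look like, assuming a hypothetical `ask_llm` callable that takes the current source plus the error log and returns a revised source (everything here is illustrative; the compile step is a stand-in for a real build/flash/test cycle):

```python
import pathlib
import subprocess
import sys
import tempfile

def compile_and_test(source: str) -> tuple[bool, str]:
    """Compile the candidate source; return (ok, combined log)."""
    workdir = pathlib.Path(tempfile.mkdtemp())
    target = workdir / "main.py"
    target.write_text(source)
    # Stand-in for a real cross-compile / flash-to-chip / run-tests step.
    result = subprocess.run(
        [sys.executable, "-m", "py_compile", str(target)],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr

def fix_loop(source: str, ask_llm, max_rounds: int = 5) -> str:
    """Feed compile/test logs back to the model until the build is clean."""
    for _ in range(max_rounds):
        ok, log = compile_and_test(source)
        if ok:
            return source
        source = ask_llm(source, log)  # hypothetical: returns revised source
    raise RuntimeError("no clean build within the round budget")
```

In a hardware setting, `compile_and_test` would instead invoke the cross-compiler and transfer the binary to the device, which is where the overhead the next reply mentions comes in.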

      • Flamekebab@piefed.social
        2 days ago

        I don’t think that’d be much good when the way the specs are written is not based purely on technical merits. We also aren’t paid to test hardware - we have hardware because we develop software to test hardware on. Knowing how to navigate those waters is the key skill. Once I’ve got a solution that’ll work, getting it implemented is relatively trivial, as our approach is extremely atomic.

        In general, what you describe sounds like tremendously more overhead than we currently have, for little to no gain. What I could do with is a few more engineers that I could train up to have sufficient contextual knowledge, not another junior to babysit. I trained one up and he’s tremendously useful - apart from when he leans too heavily on LLMs. That cost a side project two months of unnecessary faff and sapped team morale massively in the process. I ended up dragging the damn thing over the finish line after he refactored it into something that was exhausting to work with.

        • etchinghillside@reddthat.comOP
          2 days ago

          Gotcha – I have no doubts that LLMs can steer things to shit at breakneck speeds.

          I certainly wouldn’t mind some more (competent) employees.