Curious where others might stand.
My day-to-day “coding” is reviewing, revising, and running plans against LLM/code-assistant tools. I juggle around 2-3 of these sessions at a time, across various features or tasks.
I haven’t written a line of code in years.
Never mind that I am not a developer.
What kind of work do you do? Every time I read stuff like this, I find it hard to believe, but maybe the code/languages/frameworks I’m using just aren’t as easy for LLMs as what other people are using. The results I’ve had trying to get it to write C++ have been atrocious.
I’m not against AI code assistance, and I like keeping an open mind. For the moment though, the only success I’ve had is with using it to explain some feature or API faster than manually looking up the documentation. But actually reading the docs, I’ve found, helps me remember things better.
API/Data Engineering for SaaS products. No one will die if I do something dumb - it’ll just cost money and/or reputation if things go very badly.
How long till you lose the ability to manually write code, you reckon?
I certainly foresee this happening, and/or losing the ability to perform adequately in any future whiteboarding interviews.
You won’t be the only one; surely interviews are going to have to change once no one can do whiteboarding anymore!
A recent JetBrains survey I saw found that 85% of devs are using AI in some capacity.
Yup. I guess it’s also worth noting my past couple of jobs have come from my network - so the interviews have been more of an informal behavioral/culture fit. But I won’t pretend my network will always give me that flexibility.
This is a problem for a ton of developers today. In my last round of interviews I heard a lot of consternation from people who were perfectly capable during the code review about their newly discovered inability to write code without AI.
The code review has always been a bigger hurdle than the writing in the past.
I’m currently working on fixing stuff like this as an external consultant. Like OP, the people there confidently assumed the LLMs could handle writing the code.
Oh boy, were they wrong. As soon as the code was pushed to production, the entire stack collapsed.
Entirely possible I’ll eat crow and look back on this as a major mistake.
I think the issue would be with complacency. After reviewing so many changes, it could be easy to say “eh - it’s probably good” and merge it. I don’t have confidence in its output the first, second, or third time.
I think another issue would be if I were using it in a domain I wasn’t familiar with, like hardware programming. That’d be a bit like the blind leading the blind.
One of the main problems I found was that AI would sometimes write code that looked good, was well documented and even worked flawlessly. But it would take 15-20 complicated lines to perform a task that happened to be a language feature and could have been done with a single function call.
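A made-up Python illustration of the kind of gap I mean (not the actual code I was reviewing; the word-counting task is just a stand-in):
```python
# Roughly what the AI would produce: correct, documented, but hand-rolled.
def count_words(words):
    """Count how many times each word appears."""
    counts = {}
    for word in words:
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

# What the standard library already gives you in a single call:
from collections import Counter

counts = Counter(["red", "blue", "red"])  # Counter({'red': 2, 'blue': 1})
```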
Other times it would write code that appeared to work at first glance, and after a reasonable inspection also seemed good. Only after trying to re-write that task myself did I realize the AI had missed a critical but subtle edge case. I wouldn’t have even thought to test for that edge case if I hadn’t tried to design the function myself.
I’ve also heard someone else mention that AI will often rewrite code (often with subtle differences) instead of writing a function once and calling it several times. The AI code may look clean, but it’s not nearly as maintainable as well-written code by humans.
I do have to admit that it is significantly better than poorly written code by overworked and underpaid humans.
All of this is ignoring the many times the code just didn’t compile, or had basic logic errors that were easy to find, but very difficult to get the AI to fix. It was often quicker to write everything myself than try to fix AI code.
Sounds like you’re easily replaceable.
by who? by a person who is capable of reviewing quality code? that is who they are already.
If they actually haven’t written any code, then that means they didn’t have to correct anything, which means the LLM doesn’t actually need them much, if at all. I’m assuming the post title is simply not true and they did indeed make some corrections and adjustments.
I guess they get the corrections and adjustments done by telling the LLM what corrections and adjustments to make?
if you’ve ever vibecoded, you’d know that using an agent doesn’t mean you accept every suggestion. it also doesn’t mean you edit the suggestions. it means you keep guiding the agent until it comes up with a “perfect” solution that may even be accepted without any edits. so if you’re a good “prompter”, given a good model you can be very, very efficient without writing even a line of code. does this mean you can be replaced by said model? absolutely not.
lol if that were true then yes, they absolutely are replaceable by anything that can ensure they’re getting the right requirements. Of course it isn’t true, because LLMs are nowhere near the level of actual proper development standards, but here we are.
By ChatGPT8, when you no longer need to review it for the level he’s working at.
ChatGPT8? Why do we need 8 when 5 was supposed to be utterly revolutionary?
Isn’t everybody?
I’m on the fence here.
I’m still steering and guiding the design - with knowledge gained over many products, features and incidents - and am reviewing it.
To scale higher such that I could be replaced, I think the change sets would have to be smaller, and/or we’d have to perfect bug/incident detection and remediation to the point that we could bypass human review of the code.
Sounds like you’re a technical manager now – or, at least, most of the way there – just with LLMs as your reports instead of junior devs…
I still write my code by hand in xed. I’m not exactly anti-AI (my feelings are mixed); I’m just the kind of programmer who wasn’t using IDEs even before the LLM craze started…
Sucks to be you
Unless it’s the most rudimentary logic I tend to have to hold the LLM’s hand through the entire design. Sometimes that involves breaking the damn thing’s fingers.
I mostly use them to write stuff that I can do but hate because the syntax is a faff (argparsers, docstrings, shite like that). There’s just too much contextual knowledge needed for them to be much use with the codebase I work with, and much of that isn’t in the codebase itself so much as in the surrounding metatextual knowledge. I write a lot of software for testing hardware, and there’s a needle that has to be threaded - the specifications are not bulletproof, the hardware being tested failing may be the desired outcome (so code that passes on it is inherently wrong), and there are various limitations and priorities that inform the test approach.
The actual code I write is fairly minimal but the knowledge needed to write it is extensive.
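For the boilerplate end of it, the kind of thing I’m happy to hand off looks like this argparse stub (purely illustrative - not our real tooling, and all the argument names are made up):
```python
# Illustrative argparse boilerplate: the syntax faff I'd rather not type by hand.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Run a hardware test sequence.")
    parser.add_argument("config", help="Path to the test configuration file.")
    parser.add_argument("--retries", type=int, default=3, help="Retry count per test step.")
    parser.add_argument("--verbose", action="store_true", help="Print detailed progress.")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(args.config, args.retries, args.verbose)
```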
Occasionally I’ll take on side projects at work to get my hands dirty on something more involved but simpler.
This is about the same behavior I’m accustomed to. I will say that my current work is more greenfield at the moment.
The first plan is about 60% there, and then we have a few iterations to get it into good shape.
Once the plan is together, I send it to a virtual machine to implement the code with low/no supervision.
Then it goes into a draft PR that I review, providing further guidance on changes or updates. Those iterations happen on either my virtual or local machine, depending on how much work there is.
I could imagine it would be pretty difficult with hardware, where you have to compile and potentially transfer to a chip to run or test further. But I will say, if you can give an LLM access to the whole loop of writing, compiling, testing, and reading error logs, the results can sometimes be impressive.
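As a rough sketch of what I mean by closing that loop (the `make test` command and `ask_llm_for_patch` are placeholders for whatever build system and agent tooling you actually use, not a specific product’s API):
```python
# Rough sketch of the write/compile/test/read-the-errors loop.
# "make test" and ask_llm_for_patch are placeholders, not a specific tool's API.
import subprocess

def run_build_and_tests():
    result = subprocess.run(["make", "test"], capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

def ask_llm_for_patch(error_log):
    # Placeholder: hand the error log back to your agent and let it edit the code.
    raise NotImplementedError

for attempt in range(5):
    code, log = run_build_and_tests()
    if code == 0:
        print("Build and tests passed.")
        break
    ask_llm_for_patch(log)  # feed the compiler/test errors back in and retry
else:
    print("Still failing after 5 attempts - time for a human to look.")
```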
I don’t think that’d be much good when the way the specs are written isn’t based purely on technical merits. We also aren’t paid to test hardware - we have hardware because we develop software to test hardware on. Knowing how to navigate those waters is the key skill. Once I’ve got a solution that’ll work, getting it implemented is relatively trivial, as our approach is extremely atomic.
In general what you describe sounds like a tremendous amount more overhead than we currently have for little to no gain. What I could do with is a few more engineers that I could train up to have sufficient contextual knowledge, not another junior to babysit. I trained one up and he’s tremendously useful - apart from when he leans too heavily on LLMs. That cost a side project two months of unnecessary faff and sapped team morale massively in the process. I ended up dragging the damn thing over the finish line after he refactored it into something that was exhausting to work with.
Gotcha – I have no doubts that LLMs can steer things to shit at breakneck speeds.
I certainly wouldn’t mind some more (competent) employees.
My job is 50% coding and 50% delegating/meetings/etc…
We have a VERY proprietary language/system that is very difficult for LLMs to work with. And the new devs are getting frustrated because Copilot keeps trying to push an adjacent language lol. They are legitimately having issues coding without LLMs anymore.
I see this all the time with CAPL; Copilot gets close but messes so much up it’s not worth it. There’s not enough on Stack Overflow to train it, and until recently the docs weren’t even available online to crawl.
I often use LLMs to give me code snippets in a language I don’t know.
When I started programming (back in the dark days when StackOverflow was helpful), it took me months to learn a language well enough to do what I wanted, and I had several weeks where I would be frustrated that I just couldn’t find what I was doing wrong, or what was the name for what I wanted so I could search for it.
AI has allowed me to drastically speed up my learning time for new languages, at the expense of me not really understanding much. I’ll accept that compromise if I just want one script, but it’s a hard habit to drop when actual understanding is needed.
Aside from telling me what language features exist, or showing me the correct syntax (exactly what a language model is designed for), I have found AI is mostly just confidently wrong.
Dang, lucky. Besides the most boilerplate stuff, try as I might I can’t get this setup to work. It gives me a start that’s maybe 60% of the way there but also 10% wrong most of the time on a new project or feature. It’s helpful for debugging like 70% of the time. Otherwise I’m still monkeying around anytime I need to put anything together.
I’ve had poor luck with its debugging. Where I think it shines right now is when I need to code something that’s basically standalone, in a language I don’t really know.
Give me a git hook to do X, or write a bash script to do Y.
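For example, the sort of one-off ask I mean - a toy pre-commit hook, shown here in Python just for illustration since hooks can be any executable (not something I actually run):
```python
#!/usr/bin/env python3
# Toy .git/hooks/pre-commit script: block the commit if staged files still contain FIXME.
import subprocess
import sys

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True,
).stdout.split()

flagged = []
for path in staged:
    try:
        with open(path, encoding="utf-8", errors="ignore") as f:
            if "FIXME" in f.read():
                flagged.append(path)
    except OSError:
        continue  # deleted, binary, or unreadable files: skip

if flagged:
    print("Commit blocked, FIXME markers found in:", ", ".join(flagged))
    sys.exit(1)
```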
Are you forced to do that by your boss or deadlines?
I would say I’m the main motivator, but it’s partly motivated by deadlines too. It’s not something that I hide from my boss or coworkers.
I work in an industry where LLMs aren’t helpful. And I’ve already automated away most of what they might have helped with.
Personally I don’t like using LLMs for anything. Hell, I barely tolerate tools like lint. Gimme Notepad++ for my actual development work and I’m happy.
I regularly write code.
My customer gave the go-ahead to use an LLM in our project very recently. We’ll be trying it out. I’m especially interested in scoping out its uses and limitations. I’m skeptical it will increase efficiency for me overall: the project is too complex, my/our quality requirements are too high, and I’m thorough down to the last var name and code formatting choice for readability and obviousness. I’m not sure whether I could find it acceptable to compromise on those.
Between customer communication, planning, review prep, guiding and helping my team members, doing reviews, and other tasks within the company, the time for my own work can be reduced by a lot. Still, I have tasks I work on, and that includes coding.
I’m curious about your setup. We’re a small company trying to boost productivity until we can hire more people, and so far we’ve been trying some agents that are very good for some tasks, but we’re still writing a lot of code.