It is unsurprising that coding is the prime example of generative AI’s prowess. Unlike more subjective tasks like writing jokes, code either works or doesn’t. This black-and-white distinction makes it the perfect subject matter for training AI models.
It is unsurprising that non-technical people think this. It’s slightly surprising that VPE/CTO-types are letting CEOs think this.
Claiming coding is not “subjective” is almost laughable. One of the core premises of any engineering discipline is that the choice is often not between “right” and “wrong” but somewhere on a spectrum from “poor” to “best”.
One thing non-technical execs have a hard time wrapping their heads around is that there’s a Grand Canyon-sized gap between code that compiles and a platform that is performant, efficient, maintainable, and sustainable. Creating a pile of spaghetti that technically “works” is easy. Creating a first-class user experience that is fast and cost-effective is hard.
The other distinction is between the effort spent maintaining that pile of spaghetti versus adding to a well-built system.
Vibe coding would be OK for crapping out a prototype that mostly works and will be replaced, except those invariably become the core of production systems.
crapping out a prototype that mostly works and will be replaced, except those invariably become the core of production systems.
Yes, I completely agree. On the other hand, my experience suggests that’s been the status quo for longer than I’ve been in the industry, and long before LLMs were a thing.
While I fully anticipate the pretentious “you just worked at shitty places” from random internet strangers, trust me, I know. But it’s not just a phenomenon at the shitty places I’ve worked. The Windows 11 Start menu is literally built in React. Every other day I read stories about Facebook and Google, and what was that women’s site that just exposed a bunch of users’ driver’s licenses. This is endemic.
I used to think it was because clients have unrealistic expectations of what they can afford, leading them to make completely suboptimal decisions based solely on money, combined with management that won’t or can’t say no. But the truth is it’s quite common, and almost any working environment can be called shitty on a whim with little to no evidence.
That’s about where I started shouting at my phone.
I don’t think that’s true. Someone needs to fix all the bugs in the AI’s code, but the CEOs haven’t realized it yet.
And then they’ll have to deal with a demographic problem as the people who know what good code looks like, and how to fix it, retire.