Does vibe coding risk destroying the Open Source ecosystem? According to a pre-print paper by a number of high-profile researchers, this might indeed be the case based on observed patterns and some…
Can you cite some sources on the increased efficiency? Also, can you link to these lower-priced, efficient (implied consumer grade) GPUs and TPUs?

Oh, sorry, I didn’t mean to imply that consumer-grade hardware has gotten more efficient. I wouldn’t really know about that, but I assume most of the focus is on data centers.
Those were two separate thoughts:
Models are getting better, and the tooling built around them is getting better, so hopefully we can get to a point where small models (capable of running on consumer-grade hardware) become much more useful.
Some modern data center GPUs and TPUs deliver more computation per watt-hour than previous generations.
Can you provide evidence the “more efficient” models are actually more efficient for vibe coding? Results would be the best measure.
It also seems like costs for these models are increasing, and companies like Cursor had to stoop to offering people services below cost (before pulling the rug out from under them).
> Can you provide evidence the “more efficient” models are actually more efficient for vibe coding? Results would be the best measure.
Did I claim that? If so, then maybe I worded something poorly, because that’s wrong.
My hope is that as models, tooling, and practices evolve, small models will be (future tense) effective enough to use productively so we won’t need expensive commercial models.
To clarify some things:
I’m mostly not talking about vibe coding. Vibe coding might be okay for quickly exploring or (in)validating some concept/idea, but it tends to make things brittle and pile up a lot of tech debt if you let it.
I don’t think “more efficient” (in terms of energy and pricing) models are more efficient for work. I haven’t measured it, but the smaller/“dumber” models tend to require more cycles before they reach their goals, as they have to debug their code more along the way. However, with the right workflow (using subagents, etc.), you can often still reach the goals with smaller models; there’s a sketch of what I mean right after this list.
There’s a difference between efficiency and effectiveness. The hardware is becoming more efficient, while models and tooling are becoming more effective. The tooling/techniques to use LLMs more effectively also tend to burn a LOT of tokens.
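To make the “more cycles” point concrete, here’s a rough Python sketch of the kind of loop I mean (call_model is a hypothetical stub for whatever local runtime you’d use; nothing here is a real model API): the small model gets several cheap attempts, and the failing test output gets fed back in each time.

```python
import pathlib
import subprocess

def call_model(prompt: str) -> str:
    """Hypothetical stub for a small local model (llama.cpp, Ollama,
    etc.); the actual call depends on whatever runtime you use."""
    raise NotImplementedError("wire this up to your local model")

def run_tests() -> tuple[bool, str]:
    """Run the test suite and capture its output as feedback."""
    result = subprocess.run(["pytest", "-x", "-q"],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def solve(task: str, target: str = "solution.py", max_cycles: int = 5) -> bool:
    """Give a small model several cheap attempts at a task, feeding
    the failing test output back in on every cycle."""
    feedback = ""
    for _ in range(max_cycles):
        code = call_model(f"Task: {task}\nFailing test output:\n{feedback}")
        pathlib.Path(target).write_text(code)
        passed, feedback = run_tests()
        if passed:
            return True
    return False
```

The point is that extra cycles are cheap with a small model, so trading attempts for raw intelligence can still get you to the goal.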
TL;DR:
Hardware is getting more efficient.
Models, tools, and techniques are getting more effective.