- cross-posted to:
- lobsters
This isn’t a very good article IMHO. I think I agree (strongly) with what it’s trying to say, but as it’s written, it just doesn’t land.
Wrappers and VMs and bytecodes and runtimes are bad: they make life easier but they are less efficient and make issues harder to troubleshoot.
Runtimes/“VMs” like the JVM also allow nice things like stack traces. I don’t know about the author but I much prefer looking at a stack trace over “segmentation fault (core dumped)”. Having a runtime opens new possibilities for concurrency and parallelism too.
The COSMIC desktop looks like GNOME, works like GNOME Shell, but it’s smaller and faster and more customisable because it’s native Rust code.
This just doesn’t make any sense. COSMIC is more configurable because it wants to be; that has absolutely nothing to do with Rust vs JavaScript.
Dennis Ritchie and Ken Thompson knew this. That’s why Research Unix evolved into Plan 9, which puts way more stuff through the filesystem to remove whole types of API. Everything’s in a container all the time, the filesystem abstracts the network and the GUI and more.
And here the author just contradicts themselves. So wrappers, runtimes and VMs are bad, except when it’s Ken Thompson doing it in which case adding containers and a language runtime into a kernel is a great idea actually?
Lastly, I didn’t address the efficiency arguments in the quotes because they’re mostly just true… but I do think it requires more careful consideration than “JS bad, Rust good”. Consider this unscientific sample of different apps on my PC and how much of my (expensive!) RAM they use:
- Spotify (Electron): 1G
- Ghostty (Zig/GTK): 235M
- Decibels (Typescript/GTK): 140M
- Anyrun (Rust/GTK): 110M
Note that Electron, and only Electron, is a supermassive black hole of bloat. Whatever is going on here, it’s not Javascript.
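(Presumably those numbers are resident set sizes, since that’s what most system monitors report. If you want to check your own, here’s a rough sketch of the idea, assuming Linux; it reads the real VmRSS field out of /proc, and the little program itself is only an illustration.)

```rust
use std::env;
use std::fs;

// Rough illustration (Linux only): print a process's resident set size
// by reading /proc/<pid>/status. Usage: `rss <pid>`.
fn main() {
    let pid = env::args().nth(1).expect("usage: rss <pid>");
    let status = fs::read_to_string(format!("/proc/{pid}/status"))
        .expect("could not read /proc/<pid>/status");
    for line in status.lines() {
        // VmRSS is the memory actually resident in RAM right now.
        if line.starts_with("VmRSS:") {
            println!("{line}");
        }
    }
}
```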
Even Electron apps can be small: Stretchly, a break reminder, normally takes something like 20 MB of memory.
Yeah, modern computers often feel like a scam. Obviously some things are faster, and obviously we can compute more complex problems.
But so often, programs are only optimized until they reach a level of “acceptable” pain. And especially with monopolistic, commercial software, that level is close to infinity, because, well, it’s acceptable so long as customers don’t switch to competitors. Either way, the slowness that was acceptable twenty years ago is generally still acceptable today, so you get much of the same slowness despite being on a beefier PC.
What’s the point of being healthy when they have rascal scooters?
here I am, using my legs like a sucker.
Not to judge, but you’re supposed to walk on them, not suck them.
Don’t tell me how to live my life.
…and that’s why you need 16GB and a decent CPU to navigate the web
What’s utterly stupid is that, with modern compression and rendering techniques, if it weren’t for developers shipping a whole-ass library to prod for one function that saves 8 lines of code… 56k would still be usable for light browsing and access. It’d still be slow… But far from literally impossible, like it is now.
The sheer amount of “fat” on some (most) sites and applications is just depressing.
Back in the day, when we were all amazed at Yahoo!’s loading speed, I pulled the homepage HTML. 79K. Imagine that.
Good luck watching a video on 56k
That’s not what he is saying.
I mean, the text on a website isn’t what makes 56k unusable.
It’s only images and video that take up space, and the libraries used on websites are all cached at this point, so that’s hardly relevant to ongoing usage of a website.
Seems someone said it before me… But you missed the point.
I’ll respond to your statement generally though.
Basic survival on 56k was doable. Shoutcast or Pandora could even be streamed, with occasional buffering, while browsing lighter (or less heavy) sites. On the topic of video: low-quality 240p would be “manageable” again, thanks to modern compression.
Was it a good experience? Rarely. Was it passable? Certainly; and if a site optimised for load time and reduced bandwidth, it could even be near a broadband “experience” with some caching tricks.
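For scale, a quick back-of-the-envelope with my own ballpark figures (roughly 7 KB/s of effective 56k throughput, and a few MB for a typical modern page): the 79K Yahoo! homepage mentioned above would load in about 79 KB ÷ 7 KB/s ≈ 11 seconds, while a 3 MB page takes 3000 KB ÷ 7 KB/s ≈ 430 seconds, or about 7 minutes. Light pages are slow but survivable; typical modern pages simply aren’t.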
I’m not saying everyone needs to be a code god and build a 96k FPS… But optimizing comes from understanding what you are writing and how it works. All this bloat is the result of laziness and a looser grasp on the fundamentals. As for why we should take a harder look at optimization:
- Datacenter / cloud costs are rising… Smaller footprint, smaller bill.
- Worldwide hardware costs are rising… Fewer people will be building fire-breathing monsters. Better optimization means a better user experience and more users. Recent examples of poor optimization: Fallout and early 2077.
- The fastest code is the code you don’t run.
Not really. The code can be slow, even if I do not run it. Also, sometimes additional code can do optimization (like caching), which means more code = faster. Or additional libraries, complexity and code paths can, for example, add multicore execution, which can speed things up. So I do not buy the “less code is faster” logic.
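To sketch the caching point (a toy memoized Fibonacci, my own example, not from the post): the cache is extra code and extra memory, yet it makes the whole thing dramatically faster.

```rust
use std::collections::HashMap;

// More code, but much faster: memoization turns the naive exponential
// recursion into a linear number of calls.
fn fib(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if n < 2 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v; // cache hit: skip the recursion entirely
    }
    let v = fib(n - 1, cache) + fib(n - 2, cache);
    cache.insert(n, v);
    v
}

fn main() {
    let mut cache = HashMap::new();
    // Finishes instantly; the cache-free version would take ages.
    println!("{}", fib(90, &mut cache));
}
```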
I think the most obvious example is loop unrolling. An unrolled loop can be many times more code, but runs faster because you’re not updating a counter or doing conditional jumps.
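Roughly what that looks like by hand (a sketch, not taken from the post; in practice the compiler will often do this for you):

```rust
// Straightforward version: one add plus one loop check per element.
fn sum_plain(xs: &[u32]) -> u32 {
    let mut total = 0u32;
    for &x in xs {
        total = total.wrapping_add(x);
    }
    total
}

// Unrolled by a factor of four: more code, but the loop bookkeeping
// (counter update, conditional branch) is paid once per four elements.
fn sum_unrolled(xs: &[u32]) -> u32 {
    let mut total = 0u32;
    let chunks = xs.chunks_exact(4);
    let rest = chunks.remainder();
    for c in chunks {
        total = total
            .wrapping_add(c[0])
            .wrapping_add(c[1])
            .wrapping_add(c[2])
            .wrapping_add(c[3]);
    }
    // Handle the leftover elements that didn't fill a chunk of four.
    for &x in rest {
        total = total.wrapping_add(x);
    }
    total
}
```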
I think it really depends on what your code is doing. I do agree that less isn’t always faster or, as they mentioned in the post, safer. Taking raw input is fast, but not very safe for a variety of reasons. I personally make “simple to understand” the highest priority for my code.
As a slightly different example, the suckless project places a huge emphasis on lightweight code, which they call “suckless”. I don’t think faster is the goal in this case, but rather having less code that is as simple as possible (not even configuration files are allowed; you just recompile the program), with almost no documentation in the code either. But the idea is the same: having “lightweight” code.





