• 0 Posts
  • 909 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • I appreciate the concerns about online updates, kill switches, repairability, and lock-out, but these systems are surprisingly good for safety

    On an early outing with my kid driving, we were on a freeway next to a long line of cars waiting at an exit. Suddenly someone pulled right in front of us, in a way that I think I would have hit them even if it had happened to me, and the car certainly couldn’t brake in time. My kid swerved instead, a good call, but at the speed we were going, and with no experience with that maneuver, I’m sure it would have left us in the ditch. Instead it was like a professional driver took over, dramatically yanking the car around the sudden slow car and neatly back into the lane after avoiding it.

    I was shocked my kid pulled that off with only 10 hours of driving experience; turns out the car had an evasive steering assist. Saved our asses.

    There are tons of videos of the emergency braking tests that should easily convince anyone of these systems’ value to safety.



  • It’s pretty much a vibe coding issue. What you describe I can recall being advocated forever: the project manager’s dream that if you model and spec things out enough and perfectly capture the world in your test cases, then you are golden. Except the world has never been that convenient, and you bank on the programming being reasonably workable by people to compensate.

    The problem is people who think they can replace understanding with vibe coding. If you can only vibe code, you will end up with problems that you cannot fix and the LLM can’t either. If you can fix the problems, then you are not inclined to accept overly long chunks of LLM output anyway, because LLMs generate ugly, hard-to-maintain code that tends to violate all sorts of programming best practices.


  • This all presumes that OpenAI can get there, and further that it is exclusively in a position to get there.

    Most experts I’ve seen don’t see a logical connection between LLMs and AGI, and OpenAI has all their eggs in that basket.

    To the extent LLMs are useful, OpenAI arguably isn’t even the best at them. Anthropic tends to make them more useful than OpenAI does, and now Google’s models are outperforming OpenAI’s on the relatively pointless benchmarks that were OpenAI’s bragging point. They aren’t the best, the most useful, or the cheapest. They were first, but that first-mover advantage hardly matters once you get passed.

    Maybe if they were demonstrating advanced robotics control, but it’s mostly other companies showing that sort of thing while OpenAI remains “just a chatbot”. The more useful usage of their services goes through third parties that tend to be LLM-agnostic, and increasingly I see people select non-OpenAI models as their preference.




  • I think the wealth tax would be hard to get satisfactorily right. Either it’s too little to feel like ‘justice’, or it’s too much and you have people losing controlling interest in a company they never really wanted valued that highly and never wanted to sell.

    Also, I think if you are the head of a private company, you have a lot more ‘invisible wealth’ than the head of a public company, so there’s an opportunity for a tax dodge in taking your company private.

    I like the idea of counting it as income when you leverage assets to actually get something spendable.


  • Fun story: my car had a recall for the brake light coming on randomly. After they replaced the part, the brake light wouldn’t come on at all. Then they made it so the brake light would only sometimes come on. I said screw it and finally fixed it myself. The pedal pushes down on two different things: one that actually operates the brakes, and a separate little button for the electronic brake indication, which drives the lights and tells the cruise control to disengage (the cruise control was also staying active even when hitting the brake pedal).

    Anyway, they had screwed up seating the electronic button, and I had to position it correctly in its little bracket so that it gets pressed as soon as the brake pedal barely moves, even though it takes a smidge of actual travel before real braking starts.



  • Video encoding is generally not a likely workload in an HPC environment. Also, I’m not sure whether those results are really FreeBSD versus everyone else, or clang versus everyone else; I would have liked to see clang results on the Linux side. It’s possible that the BSD core libraries did better, but they probably weren’t doing that much, and odds are the compiler made all the difference. HPC shops are notorious for offering users every compiler they can get their hands on.

    The kernel specifically makes a difference in some of those tests (forking strongly favoring Linux, semaphores strongly favoring BSD). The vector math and particularly the AVX-512 results would be most applicable to HPC users, and there the Linux results are astoundingly better. This might be due to a linear algebra library that only bothered with Linux, which the test suite used when it was available. Alternatively, BSD may have lacked, or defaulted to, a CPU frequency management strategy that got in the way of vector math performance.
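    To see why the compiler rather than the OS may dominate vector-math results like these, consider a toy micro-kernel (my own sketch, not from the benchmark suite): the speed of this loop is decided almost entirely by how well the compiler's auto-vectorizer targets the CPU (e.g. `-O3 -march=native`), not by the kernel underneath.

    ```c
    #include <stdio.h>

    /* A trivially vectorizable loop: gcc and clang will typically turn
     * this into AVX/AVX-512 code at -O3 -march=native. Comparing the
     * same source built with different compilers (on the same OS) is
     * how you separate compiler effects from OS effects. */
    double dot(const double *a, const double *b, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }

    int main(void) {
        enum { N = 1 << 16 };
        static double a[N], b[N];
        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }
        /* 2.0 * 65536 = 131072.0 */
        printf("%.1f\n", dot(a, b, N));
        return 0;
    }
    ```

    Building this with `gcc -O3 -march=native` versus `clang -O3 -march=native` on the same machine would say more about those vector results than any Linux-versus-BSD comparison.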


  • Keep in mind that AVX-512 would be a key factor in HPC (in fact the factor for Top500 specifically), and there the BSDs lag hugely. Also, the memory copy for whatever reason favors Linux, and Stream is another common HPC benchmark.

    It’s unclear how much of the benefit, where it happened, was compiler versus OS. E.g. you can run clang on Linux, and HPC shops frequently have multiple compilers available.

    This is before keeping in mind that a lot of HPC participants only bother with Linux. The best linear algebra library, the best interconnect, the best MPI: your chances are much better under Linux just by popularity.
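    For context, the STREAM benchmark mentioned above boils down to timing simple array kernels. Here is a rough sketch of its "copy" kernel (my own illustration, not the official STREAM code; the function name is mine); the measured bandwidth is sensitive to things like page placement and compiler codegen, which is where OS-to-OS differences can creep in.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* STREAM-style "copy" kernel: bandwidth = bytes moved / elapsed time.
     * Each iteration reads one double and writes one double, so the
     * traffic is 2 * n * sizeof(double) bytes. */
    double copy_bandwidth(double *dst, const double *src, size_t n) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (double)(t1.tv_sec - t0.tv_sec)
                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        return (2.0 * (double)n * sizeof(double)) / secs / 1e9; /* GB/s */
    }

    int main(void) {
        size_t n = (size_t)1 << 24; /* ~134 MB per array */
        double *src = malloc(n * sizeof *src);
        double *dst = malloc(n * sizeof *dst);
        if (!src || !dst) return 1;
        for (size_t i = 0; i < n; i++) src[i] = (double)i;
        printf("copy: %.2f GB/s\n", copy_bandwidth(dst, src, n));
        free(src); free(dst);
        return 0;
    }
    ```

    The real benchmark repeats each kernel many times and reports the best run; this sketch just shows why the result measures the memory subsystem (and allocator/NUMA behavior) rather than raw CPU speed.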



  • FreeBSD is unlikely to squeeze more performance out of these. It’s particularly disadvantaged because the high-speed networking vendors favored in this space ignore FreeBSD (Windows is at best an afterthought); only Linux is thoroughly supported.

    Broadly speaking, FreeBSD was left behind in part because of copyleft and in part by doing too good a job of packaging.

    In the 90s, if a company made a go of a commercial operating system sourced from a community, they either went FreeBSD (effectively forking it, keeping their variant closed source, and contributing nothing upstream) or went Linux, where copyleft generally forced them to upstream their changes.

    Part of it may be due to the fact that a Linux installation is not from a single upstream, but assembled from various disparate projects by a ‘distribution’. There’s no canonical set of kernel+GUI+compilers+utilities for Linux, whereas FreeBSD runs a much more prescriptive project. I think that’s gotten a bit looser over time, but back in the 90s FreeBSD was a one-stop-shop, batteries-included project that maintained everything the OS needed under a single authority. Linux needed distributions, and that created room for entities like Red Hat and SUSE to make their mark.

    So ultimately, when those traditionally commercial Unix shops started seeing x86 hardware with a commercially supported Unix-alike, they could pull the trigger. FreeBSD was a tougher pitch, since it hadn’t attracted something like a Red Hat or SUSE that also opted into an open source model of business engagement.

    Looking at the performance of these applications on these systems, it’s hard to imagine an OS doing better. Moving data is generally as close to zero-copy as a use case can get, and these systems tend to run essentially a single application at a time, so CPU and I/O scheduling hardly matter. The community used to sweat ‘jitter’, but at this point those background tasks are such a rounding error in overall system performance that they aren’t worth thinking about anymore.





  • Surprisingly, there’s not a lot of ‘exciting tuning’; a lot of these systems are exceedingly conservative in that regard. From a software perspective, the most common “weird” thing in these systems is the affinity for diskless boot, and that mostly comes from a history of hard drives being a frequent failure causing downtime (yes, the stateless nature of diskless boot continues to be desired, but the community would likely never have bothered if not for OS HDD failures). They also sometimes like managing the OS kind of like a common chroot, to oversimplify, but that’s mostly about running hundreds of thousands of what should be the exact same thing over and over again, rather than any exotic nature of their workload.

    Linux is largely the choice by virtue of this market evolving from a largely Unix-based one where most of the applications used were open source, out of necessity: institutions wanted to be able to bid, say, Sun versus IBM versus SGI and keep working regardless of who was awarded the business. In that time frame, Windows NT wasn’t even an idea, and most of these institutions wouldn’t touch ‘freeware’ for such important tasks.

    In the 90s Linux happened, and critically for this market, Red Hat and SUSE happened. Now they could have a much more vibrant and fungible set of hardware vendors, with a credible commercial software vendor that could support all of them. Bonus: you could run the distributions, or clones, for free, which helped a lot of the smaller academic institutions get a reasonable shot without diverting money from hardware to software. Sure, some aggressively exotic things might have been possible under the prior norm of proprietary Unix, but mostly it was about the improved vendor-to-vendor consistency.

    Microsoft tried to get into this market in the late 2000s, but no one asked for them. They had poor compatibility with existing code, were more expensive, and were much worse at managing headless, multi-user compute nodes at scale.