This survey reminded me of a bunch of awesome features that I forgot were stabilized, many of which I can use in current projects lol. I know that wasn’t the intent, but it’s a nice side effect of filling it out I guess.


> monitoring how they are used is good to identify if people are actually more productive with it
Unfortunately, many jobs skipped this step. The marketing on AI tools should be illegal.
Far too many CEOs are promised that their employees will do more with less, so of course they give their employees more to do and make them use AI, then fire employees because the remaining ones are supposed to be more productive.
Some are. Many aren’t.
Like your comparison, the issue is that it’s not the right tool for every job, nor is it the right tool for everyone. (Whether it’s the right tool for anyone is another question of course, but some people feel more productive with it at times, so I’ll just leave it at that.)
Anyway, I’m fortunate enough to be in a position where AI is only strongly encouraged, but not forced. My friend was not though. Then he used it because he had to, despite it being useless to him. Then he, a chunk of his management chain, and half his department were fired. Nobody was hired to replace them.


There are plenty of companies that track metrics on AI usage. Big names like Amazon come up of course, but even some small companies require employees to use it.
So to answer your question: a lot of people, regardless of whether they want to.
And no, they can’t all quit and get a new job.
This is super cool! I love seeing these new implementations of JS. Boa is another JS runtime written in Rust as well.
I’m curious how easy it is to embed this. Can I use it from another Rust project? Can I customize module loading behavior, or set limits on the runtime to limit CPU usage or memory usage or intercept network calls? Can I use it from a non-Rust project? Or is this intended to be a standalone JS runtime called from the CLI? I’ve been looking at Boa as a JS engine for one of my projects, but I’m open to checking out brimstone too if it’ll work.


Another commenter already explained why this is unsound, so I’ll skip that, though static mut is almost impossible to use soundly anyway.
Note, of course, that main() won’t be called more than once, so if you can, I would honestly just make this a local variable containing a Box<[u8; 0x400]> instead. Alternatively, a Box<[u8]> can make it simpler to pass around, and a Vec<u8> pre-allocated with Vec::with_capacity also lets you track the current length along with the buffer (if it’s going to hold a variable amount of actually useful data).
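A minimal sketch of those alternatives (the variable names and the 0x400 size are just for illustration):

```rust
fn main() {
    // Fixed-size buffer on the heap; the whole array lives behind the Box.
    let mut fixed: Box<[u8; 0x400]> = Box::new([0u8; 0x400]);
    fixed[0] = 1;

    // Unsized slice form; simpler to pass around as &mut [u8].
    let mut slice: Box<[u8]> = vec![0u8; 0x400].into_boxed_slice();
    slice[1] = 2;

    // Pre-allocated Vec; len() tracks how much of the buffer is in use.
    let mut buf: Vec<u8> = Vec::with_capacity(0x400);
    buf.extend_from_slice(b"hello");
    assert_eq!(buf.len(), 5);
    assert!(buf.capacity() >= 0x400);
}
```

The Vec variant is the most flexible of the three if the amount of valid data changes over time.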
If you want to make it a static for some reason, I’d recommend making it just static and thread_local, then wrapping it in some kind of cell. Making it thread local will mean you don’t need to lock to access it safely.
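A sketch of that thread-local version (the buffer name and size are made up for illustration):

```rust
use std::cell::RefCell;

thread_local! {
    // A plain static (not static mut) wrapped in a RefCell; each thread
    // gets its own copy, so no locking is needed to access it safely.
    static BUF: RefCell<[u8; 0x400]> = RefCell::new([0u8; 0x400]);
}

fn main() {
    BUF.with(|buf| {
        let mut buf = buf.borrow_mut();
        buf[0] = 42;
    });

    BUF.with(|buf| {
        assert_eq!(buf.borrow()[0], 42);
    });
}
```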
I already do #1, and I push for #3 (specifically Python or TS) where I can at work, but there’s this weird obsession with bash among my coworkers, despite none of these scripts running natively on Windows (outside WSL). Currently I do #2, but I often end up stuck in bash the whole time because it’s needed for things as simple as building our code. I want to try out Fish as an alternative for those situations.
Yeah I normally use Nushell as well. It was the one cross-platform shell I really liked.
I’ll still use it. I just need to find something a bit closer to bash for when I need to run bash commands to do something, or when I’m working in an environment where others use bash. Nushell has some pretty major syntax differences, like && not being used to “chain” commands.
Not going to lie, at first I forgot that Fish was ported to Rust and was confused why this was posted here.
I need to give Fish another try now that I’m on Linux. It’s a great shell, but I couldn’t really use it on Windows.


Is this your first time here?
Your account is brand new and you’ve already made three posts about JPlus in this community in one day. Please tell me you’re joking with this one.
This post is a GitHub link to the project. Cool, I love seeing new projects, especially when the goal is to make it harder to write buggy code.
The other post is an article that immediately links to the GitHub. The GitHub README links at the top to, from what I can tell, the exact same article. Both the article and the README explain what JPlus is and how to use it.
Why is this two posts when they contain the same information and link to each other directly at the top?


This is a distinction without a difference. Both introduce and explain how to use the project.


How does this post differ from this one? Why make two posts for the same thing?


I bought a license many, many years ago and loved SmartGit. I just use the CLI now, but if you’re looking for a GUI, it’s a great choice.


The conclusion of this experiment is objectively wrong when generalized. At work, to my disappointment, we have been trying for years to make this work, and it has been failure after failure. I wish we’d just stop; eventually we moved on to more useful things, like building tools adjacent to the problem, which is honestly the only reason I stuck around.
There are a couple of reasons why this approach cannot succeed:
The list keeps going on. My suggestion? Just don’t. You’ll spend less time implementing the thing than trying to get an LLM to do it. You’ll save operating expenses. You’ll be less of an asshole.


I’m not aware of any custom derives needed to use nalgebra. I’ve never needed to write any. At most, I’ve written macro_rules! macros to help with trait impls, but not specifically for nalgebra.
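To be concrete about the kind of helper I mean — a hedged sketch with a made-up trait and types, not anything from nalgebra itself:

```rust
// A hypothetical trait we want to implement for several newtype wrappers.
trait Scaled {
    fn scaled(&self, factor: f64) -> f64;
}

struct Meters(f64);
struct Seconds(f64);

// One macro_rules! macro stamps out the identical impl for each newtype,
// instead of copy-pasting the trait impl by hand.
macro_rules! impl_scaled {
    ($($ty:ty),* $(,)?) => {
        $(
            impl Scaled for $ty {
                fn scaled(&self, factor: f64) -> f64 {
                    self.0 * factor
                }
            }
        )*
    };
}

impl_scaled!(Meters, Seconds);

fn main() {
    assert_eq!(Meters(2.0).scaled(3.0), 6.0);
    assert_eq!(Seconds(1.5).scaled(2.0), 3.0);
}
```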
I’m not too sure what the author is referring to there.


It sounded from the PSF article that the biggest reason they rejected it, aside from the ethical concerns, was that they could be asked to give the money back if they were found to be violating that term anywhere, at any time. Even if they didn’t care about the DEI term itself, the risk of being liable for $1.5m to the US government all because an orange blob didn’t like the logo is too great.


I mean, I find TypeScript fun to write. The only thing I really dislike about it is configuring the tools (tsc, eslint, etc.). It’s a great language once everything’s set up and you disallow all the ugly JSisms with your linter and tsc.


Used Claude 4 for something at work (not much of a choice here and that team said they generate all their code). It’s sycophantic af. Between “you’re absolutely right” and it confidently making stuff up, I’ve wasted 20 minutes and an unknown number of tokens on it generating a non-functional unit test and then failing to solve the type errors and eslint errors.
There are some times it was faster to use, sure, but only because I don’t have the time to learn the APIs myself due to having to deliver an entire feature in a week by myself (rest of the team doesn’t know frontend) and other shitty high level management decisions.
At the end of the day, I learned nothing by using it, the tests pass but I have no clue if they test the right edge cases, and I guess I get to merge my code and never work on this project again.


> I was being ridiculed in the past and called a slop-generator
I can only imagine why. Surely it’s unrelated to this?
> I’ve completely moved to codex cli as daily driver. I run between 3-8 in parallel in a 3x3 terminal grid
Nah, couldn’t be.


This to me feels like the author trying to understand library code, failing to do so, then complaining that it’s too complicated rather than taking the time to learn why that’s the case.
For example, the example about nalgebra is wild. nalgebra does a lot, but it has only one goal, and it does that goal well. To quote nalgebra, this is its goal:
nalgebra is a linear algebra library written for Rust targeting:
- General-purpose linear algebra (still lacks a lot of features…)
- Real-time computer graphics.
- Real-time computer physics.
Note that it’s a general-purpose linear algebra library, hence a lot of non-game features, but it can be used for games. This also explains its complexity. For example, it needs to support many mathematical operations between arbitrary compatible types (for example, a Vector6 and a Matrix6x6, though nalgebra supports arbitrarily sized matrices, so it’s not just a 6x6 matrix that needs to work here).
Now looking at glam:
> glam is a simple and fast linear algebra library for games and graphics.
“For games and graphics” means glam can simplify itself by disregarding features it doesn’t need for that purpose. nalgebra can’t do that. glam can work with only square matrices up to 4x4 because it doesn’t care about general linear algebra, just what’s needed for graphics and games. This also means glam can’t do general linear algebra and would be the wrong choice if someone wanted to do that. glam was also released after nalgebra, so it should come as no surprise that its developers learned from nalgebra and simplified the interface for their specific needs.
So what about wgpu? Well…
> wgpu is a cross-platform, safe, pure-Rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
GPUs are complicated af. wgpu is also trying to mirror a very actively developed standard by following WebGPU. So why is it so complicated? Because WebGPU is complicated. Because GPUs are very complicated. And because their users want that complexity so that they can do whatever crazy magic they want with the GPU rather than being unable to because the complexity was hidden. It’s abstracted to hell and back because GPU interfaces are all incredibly different. OpenGL is nothing like Vulkan, which is nothing like DirectX 11, which is nothing like WebGPU.
Having contributed to bevy, there are also two things to keep in mind there:
What this article really reminds me of isn’t a whole lot of Rust libraries that I’ve seen, but actually Python libraries. It shouldn’t take an entire course to learn how to use numpy or pandas, for example. But honestly even those libraries have, for the most part, a single goal each that they strive to solve, and there’s a reason for their popularity.
Yep. This was the difference between a silent, recoverable error and a loud failure.
It seems like they’re planning to remove all potential panics based on the end of their article. This would be a good idea considering the scale of the service’s usage.
(Also, for anyone who’s not reading the article, the unwrap caused the service to crash, but wasn’t the source of the issues to begin with. It was just what toppled over first.)
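As a generic illustration of that difference — not the service’s actual code — compare a panicking unwrap with handling the failure in place:

```rust
// A fallible lookup standing in for whatever call actually returned None.
fn lookup(key: &str) -> Option<u32> {
    if key == "known" { Some(7) } else { None }
}

fn main() {
    // Loud failure: unwrap panics (and takes the process down) on None.
    let v = lookup("known").unwrap();
    assert_eq!(v, 7);

    // Recoverable handling: the caller observes the failure and falls back.
    match lookup("missing") {
        Some(v) => println!("got {v}"),
        None => eprintln!("lookup failed; using a fallback"),
    }
}
```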