• 0 Posts
  • 168 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • The conclusion of this experiment is objectively wrong when generalized. At work, to my disappointment, we have been trying for years to make this work, and it has been failure after failure. I wish we’d just stop, but eventually we moved on to more useful things, like building tools adjacent to the problem, which is honestly the only reason I stuck around.

    There are several reasons why this approach cannot succeed:

    1. The outputs of LLMs are nondeterministic. Most problems require determinism. For example, REST API semantics require idempotency for some kinds of requests, and an LLM without both a fixed seed and a temperature of 0 will return different responses at least some of the time.
    2. Most real-world problems are not simple input-output machines. Take, for example, an API endpoint that posts a message to Lemmy: it does a lot of work. It needs to store the message in the database, federate it, and verify that it is safe. It also needs to validate the user’s credentials before any of this, and it needs to record telemetry for observability. An LLM can’t do all of that. It might, if you’re really lucky, generate code that does it, but a single LLM call can’t do it by itself.
    3. Some real-world problems operate on unbounded input sizes. Context windows are bounded and, as currently designed, cannot handle unbounded inputs. Signal processing is one example, and also an example of a problem an LLM cannot solve because it cannot even receive the input.
    4. LLM outputs cannot be deterministically improved. You can tweak prompts and so on, but the output will not monotonically improve as you do; improving one result often means degrading another.
    5. The kinds of models you want to run are not in your control. Using Claude? Well, Anthropic updated the model, and now your outputs have all changed and you need to rework your prompts again. This fucked us over many times.
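    Point 1 can be made concrete. Here’s a minimal, self-contained sketch (the “model” is just a fixed score vector, and the tiny LCG stands in for a real RNG; every name here is illustrative, not any real inference API):

    ```rust
    // Greedy decoding: always pick the highest-scoring token index.
    fn argmax(scores: &[f64]) -> usize {
        scores
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap()
    }

    // Tiny LCG so the example needs no external crates; real samplers use proper RNGs.
    fn next_uniform(state: &mut u64) -> f64 {
        *state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (*state >> 11) as f64 / (1u64 << 53) as f64
    }

    fn sample(scores: &[f64], temperature: f64, state: &mut u64) -> usize {
        if temperature == 0.0 {
            return argmax(scores); // greedy: same input, same output, every time
        }
        // Softmax over temperature-scaled scores, then draw from the distribution.
        let exps: Vec<f64> = scores.iter().map(|s| (s / temperature).exp()).collect();
        let total: f64 = exps.iter().sum();
        let mut r = next_uniform(state) * total;
        for (i, e) in exps.iter().enumerate() {
            r -= e;
            if r <= 0.0 {
                return i;
            }
        }
        exps.len() - 1
    }

    fn main() {
        let scores = [1.0, 2.0, 1.5];
        // Temperature 0: every call returns the same token index.
        let greedy: Vec<usize> = (0..5).map(|_| sample(&scores, 0.0, &mut 42)).collect();
        assert!(greedy.iter().all(|&i| i == 1));
        // Temperature > 0 with an evolving RNG state: outputs can differ call to call.
        let mut state = 42u64;
        let sampled: Vec<usize> = (0..20).map(|_| sample(&scores, 1.0, &mut state)).collect();
        println!("greedy: {greedy:?}\nsampled: {sampled:?}");
    }
    ```

    Serving stacks sample at nonzero temperature by default, which is why the “same request, different response” problem shows up in practice.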

    The list goes on. My suggestion? Just don’t. You’ll spend less time implementing the thing yourself than trying to get an LLM to do it. You’ll save on operating expenses. You’ll be less of an asshole.





  • Used Claude 4 for something at work (not much of a choice here, and that team said they generate all their code). It’s sycophantic af. Between the “you’re absolutely right”s and it confidently making stuff up, I wasted 20 minutes and an unknown number of tokens on it generating a non-functional unit test and then failing to fix the type errors and eslint errors.

    There were some times it was faster to use, sure, but only because I don’t have the time to learn the APIs myself, thanks to having to deliver an entire feature in a week by myself (the rest of the team doesn’t know frontend) and other shitty high-level management decisions.

    At the end of the day, I learned nothing by using it; the tests pass, but I have no clue whether they cover the right edge cases, and I guess I get to merge my code and never work on this project again.



  • This reads to me like the author trying to understand library code, failing to do so, then complaining that it’s too complicated rather than taking the time to learn why it is that way.

    For example, the bit about nalgebra is wild. nalgebra does a lot, but it has only one goal, and it pursues that goal well. To quote nalgebra’s own description:

    nalgebra is a linear algebra library written for Rust targeting:

    • General-purpose linear algebra (still lacks a lot of features…)
    • Real-time computer graphics.
    • Real-time computer physics.

    Note that it’s a general-purpose linear algebra library, hence the many non-game features, but it can still be used for games. This also explains its complexity. For example, it needs to support many mathematical operations between arbitrary compatible types (for example a Vector6 and a Matrix6, though nalgebra supports arbitrarily sized matrices, so it’s not just 6x6 that needs to work here).

    Now looking at glam:

    glam is a simple and fast linear algebra library for games and graphics.

    “For games and graphics” means glam can simplify itself by disregarding features it doesn’t need for that purpose. nalgebra can’t do that. glam can get away with only square matrices up to 4x4 because it doesn’t care about general linear algebra, just what’s needed for graphics and games. This also means glam can’t do general linear algebra and would be the wrong choice for anyone who wants to. glam was also released after nalgebra, so it should come as no surprise that its authors learned from nalgebra and simplified the interface for their specific needs.

    So what about wgpu? Well…

    wgpu is a cross-platform, safe, pure-Rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.

    GPUs are complicated af. wgpu is also trying to mirror a very actively developed standard, WebGPU. So why is it so complicated? Because WebGPU is complicated. Because GPUs are very complicated. And because its users want that complexity, so they can do whatever crazy magic they want with the GPU rather than being blocked because the complexity was hidden from them. It’s abstracted to hell and back because GPU interfaces are all wildly different: OpenGL is nothing like Vulkan, which is nothing like DirectX 11, which is nothing like WebGPU.

    Having contributed to bevy, there are also two things to keep in mind there:

    1. Bevy is not “done”. The code has a lot of churn because they are trying to find the right way to approach a very difficult problem.
    2. The scope is enormous. The goal with bevy isn’t to create a game dev library. It’s to create an entire game engine. Compare it to Godot or Unreal or Unity.

    What this article really reminds me of isn’t most Rust libraries I’ve seen, but Python libraries. It shouldn’t take an entire course to learn how to use numpy or pandas, for example. But honestly, even those libraries mostly have a single goal each that they strive to solve, and there’s a reason for their popularity.



  • For a graphics-intensive application, this (or something custom with egui).

    Bevy also doesn’t need to redraw every N milliseconds. You can create a custom game loop and redraw only when needed, whether that’s at 60fps or only on window events.

    There’s also no reason a Bevy app couldn’t be embedded within a larger application. You can create the Bevy app when needed, render to a render target rather than the window surface, then manually draw that where you need to in your egui app. This also means you can stop the app, or at least the game loop, when it’s not needed anymore.


  • I have a simple approach to comments: do whatever makes the most sense to you, your team, and anyone else who is expected to read or maintain the code.

    All these hard rules around comments (where they should live, whether they should exist, and so on) exist only to be broken by edge cases. Personally, I agree with this post in the given example, but eventually an edge case will come up where it no longer works well.

    I think far too many people focus on comments, especially related to Clean Code. At the end of the day, what I want to see is:

    • Does the code work? How do you know?
    • What does the code do? How do you know? How do I know?
    • Can I easily add to your code without breaking it?

    Whether you use comments at all, where you place them, and whether they are full sentences, fragments, lowercase, sentence case, etc. makes no difference to me, as long as I know what the code does when I see it (assuming sufficient domain knowledge).







  • In Zig, we would just allocate the list with an allocator, store pointers into it for the tag index, and mutate freely when we need to add or remove notes. No lifetimes, no extra wrappers, no compiler gymnastics, that’s a lot more straightforward.

    What happens to the pointers into the list when the list needs to reallocate its backing buffer when an “add” exceeds its capacity?

    Rust’s borrow checker isn’t just a “Rust-ism”. The problems it catches exist in every low-level language, and often in higher-level ones too. Zig doesn’t let you ignore what Rust is protecting against; it just checks it differently and puts more of the responsibility on the developer.
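    The hazard in question can be shown in a few lines of safe Rust (an illustrative sketch; the printed addresses depend on the allocator):

    ```rust
    fn main() {
        let mut notes: Vec<String> = Vec::with_capacity(2);
        notes.push("first".to_string());

        // Analogous to a raw pointer that a tag index might store into the list.
        let old_buffer = notes.as_ptr();

        // Growing past capacity forces a reallocation: the old buffer is freed,
        // and any stored pointer into it now dangles. Rust refuses to let a
        // borrow like `&notes[0]` live across these pushes, at compile time.
        notes.push("second".to_string());
        notes.push("third".to_string());

        println!("old buffer: {old_buffer:?}, current buffer: {:?}", notes.as_ptr());
        assert!(notes.capacity() >= 3);
    }
    ```

    In Zig the equivalent code compiles and runs, and the stale pointers are simply the developer’s problem.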


  • TehPers@beehaw.orgtoRust@programming.devWhich GUI crate?

    The title seems a bit confusing. Do you want a game library, or a GUI library?

    Assuming you’re doing game dev, bevy is probably the furthest along, though there are a few alternatives. You can enable only the features and plugins you need to lower the memory footprint, though it’s not clear to me how low a footprint you’re looking for.

    As far as I know, everything uses winit. If you need the feature enabled, you can add it as a dependency directly (in Cargo.toml) and enable the feature.
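    For example (the version and feature name below are illustrative; check winit’s docs for the feature you actually need):

    ```toml
    [dependencies]
    winit = { version = "0.30", features = ["serde"] }
    ```

    Cargo unifies features across the dependency graph, so enabling it on your direct dependency turns it on for the copy your other crates use too.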

    If you’re having a hard time, maybe consider a completed game engine. Have you looked at Godot? Does it need to be in Rust?