• 1 Post
  • 167 Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • addie@feddit.uk to memes@lemmy.world · ML research · 12 days ago

    Proving a thing that’s only known empirically is extremely valuable, too. We’ve an enormous amount of evidence that the Riemann hypothesis is correct - infinitely many zeros are known to lie on the critical line, in fact - but proving it for all of them is a different matter.
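    To state the claim precisely (my paraphrase, not the original comment’s): the hypothesis says every non-trivial zero of the zeta function sits on the critical line.

```latex
% Riemann hypothesis, for the analytically-continued zeta function:
% every non-trivial zero lies on the critical line Re(s) = 1/2.
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re}(s) > 1)
\qquad
\zeta(s) = 0 \;\text{ and }\; 0 < \operatorname{Re}(s) < 1
\;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}
```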


  • Interesting, but misguided, I think.

    If you’ve selected Python as your programming language, then your problem is likely some text processing, a server-side lambda, or a quick user interface. If you’re using it for e.g. NumPy, then you’re really using Python to load and format some data before handing it to a dedicated maths library for evaluation.
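    A hypothetical sketch of that division of labour - Python parses the text, NumPy’s compiled kernels do the arithmetic (the input data here is made up):

```python
import numpy as np

# Python's job: load and format the (made-up) input text.
raw = "1.0, 2.0, 3.0, 4.0"
data = np.array([float(x) for x in raw.split(",")])

# The maths library's job: the actual number crunching runs in compiled code.
result = float(np.sum(data * data))
print(result)  # 30.0
```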

    If you’ve selected Go as your programming language, then your problem is likely to be either networking related - perhaps to provide a microservice that mediates between network and database - or orchestration of some kind. Kubernetes is the famous one, but a lot of system configuration tools use it to manipulate a variety of other services.

    What these uses have in common is that they’re usually disk- or network-limited and spend most of their time waiting, so it doesn’t matter so much if they’re not super efficient. If you’re planning to peg the CPU at 100% for hours on end, you wouldn’t choose them - you’d reach for C / C++ / Rust. Although Swift does remarkably well, too.

    Seeing how quickly you can solve Fannkuch-Redux using Python is a bit like seeing how quickly you can drive nails into a wall using a screwdriver. Interesting in its way, but you’d be better off picking up the correct tool in the first place.
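    For the curious, Fannkuch-Redux is simple to state: repeatedly flip the prefix whose length is named by the first element until a 1 reaches the front, and report the worst case over all permutations. A minimal brute-force sketch (nothing like the optimised benchmark entries):

```python
from itertools import permutations

def fannkuch(n):
    """Maximum flip count over all permutations of 1..n (brute force)."""
    max_flips = 0
    for perm in permutations(range(1, n + 1)):
        p, flips = list(perm), 0
        while p[0] != 1:
            k = p[0]
            p[:k] = reversed(p[:k])  # flip the first k "pancakes"
            flips += 1
        max_flips = max(max_flips, flips)
    return max_flips

print(fannkuch(7))  # 16, matching the benchmark's Pfannkuchen(7) = 16
```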




  • Oh, that’s obnoxious. I thought it was another ‘button along the bottom’, but it takes up the space that should be ‘right control’? Bastards. Hopefully you can rebind it to something useful, even if the keycap symbol sucks.

    Mind you, I’ve already got caps-lock rebound as ‘control’ and alt-gr rebound as ‘compose’. My laptop has the ‘penguin’ key (it’s a Tuxedo laptop, no Windows key here) used for Sway. (My desktop keyboard is a Model M from before the days of Windows keys, have had to bind ctrl+alt as the ‘Sway Key’.) I’ve already got some ‘useless keys’ that I could rebind to other things - looking at you, print screen - but one you could press with your thumb while chording would always be nice.
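    For comparison, both of those rebinds are a single xkb_options line on Sway (option names are from the standard xkeyboard-config set; this is a sketch, adjust to taste):

```
# ~/.config/sway/config - apply to every keyboard
input type:keyboard {
    xkb_options caps:ctrl_modifier,compose:ralt
}
```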

    Those ZBooks look like fine laptops. If you installed Arch on them, obviously ;-)



  • Indeed.

    In some ways, this kind of thing is ideal for Rust. It’s at its best when you’ve a good idea of what your data looks like, you know where it’s coming from and going to, and what you really want is a clean implementation that you know has no mistakes. Reimplementing ‘core code’ that hasn’t changed much in twenty years to get rid of any foolish overflows or use-after-free bugs is perfect for it.

    Using Rust for exploratory coding, or when the requirements keep changing? I think you’ve picked the wrong tool for the job: invalidate a major assumption and you have to rewrite the whole damn thing. And like you say, an important consideration for big projects is choosing a tool that a lot of people will be able to use. And Windows is very big.

    They’re smoking crack, anyway. A million lines per dev per month? When I’m doing major refactoring, a couple of thousand lines per week in the same language, mostly moving existing stuff into a new home, is a substantial change. Over two orders of magnitude more than that, with a major language conversion on top? Get out of here.
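    A back-of-envelope check on the gap, assuming roughly four working weeks a month:

```python
# Claimed rate vs. a heavy-refactoring rate of "a couple thousand lines a week".
claimed_per_month = 1_000_000
refactoring_per_month = 2_000 * 4  # ~4 weeks in a month

ratio = claimed_per_month // refactoring_per_month
print(ratio)  # 125 - a bit over two orders of magnitude
```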


    Menu bar at the top at least makes some sense - it’s easier to mouse to, since you can’t overshoot the top of the screen. Per-window menus, like Linux has, or like Windows had before big ugly ribbons became the thing, are easier to overshoot. (Which is why I always open my menu bars by pressing ‘alt’ with my left thumb, and then using the keyboard shortcuts that are helpfully underlined. Windows likes to hide those from you now, since they’re ‘ugly’, and also makes you mouse over the pretty icons to get the tooltip that tells you what they are, which is just a PITA. Pretty != usable.)

    Mac OS has had the menu at the top since before it was a multitasking OS. It was there on the first Mac I ever used, a Macintosh Classic II back in 1991 or so, and it was probably like that before then too. It’s not like they’ve been ‘innovating’ that particular feature and annoying their users.


  • Data centre GPUs tend not to have video outputs, and have power (and active cooling!) requirements in the “several kW” range. You might be able to snag one for work, if you work at a university or at somewhere that does a lot of 3D rendering - I’m thinking someone like Pixar. They are not the most convenient or useful things for a home build.

    When the bubble bursts, they will mostly be used for creating a small mountain of e-waste, since the infrastructure to even switch them on costs more than the value they could ever bring.



  • addie@feddit.uk to Selfhosted@lemmy.world · Raspberry Pi 4B · 1 month ago

    Mine was my local Forgejo server, NAS server, DHCP -> DNS server for ad blocking on devices connected to the network, torrent server, syncthing server for mobile phone backup, and Arch Linux proxy, since I’ve a couple of machines that basically pull the same updates as each other.
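    The DHCP-plus-ad-blocking-DNS piece, for instance, only needs a few lines if you use dnsmasq (a hedged sketch - the comment doesn’t say which software, and the addresses and blocklist path here are invented):

```
# /etc/dnsmasq.conf (sketch)
dhcp-range=192.168.1.50,192.168.1.150,12h   # hand out leases on the LAN
server=9.9.9.9                              # upstream resolver
addn-hosts=/etc/blocklist.hosts             # hosts-format file mapping ad domains to 0.0.0.0
```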

    I’ve retired it in favour of a mini PC, so it’s back to being a RetroPie server - loads of old games available in the spare room for when we have a party; amuses children of all ages.

    They’re quite capable machines. If they weren’t so I/O-limited, they’d be amazing. They tend to max out at about 10 megabytes/second to SD card or over USB / ethernet. If you don’t need a faster disk than that, they’re likely to be ideal in the role.


  • addie@feddit.uk to Science Memes@mander.xyz · Makes perfect sense · 1 month ago

    It’s got the most actual quoted lines from the book of any film version, plus you get all of Dickens’s direct-to-reader moralising delivered by Gonzo. And as well as being very faithful to the book, it’s a superb film in its own right.

    Michael Caine excels as Scrooge, too. I wouldn’t say that he was better than Alastair Sim was in his version - that’s a performance that would take some beating - but there’s not much in it.


    systemd-networkd gets installed by default on Arch, integrates a bit better with the rest of systemd, doesn’t have so many VPN surprises, and its configuration is a bit more obvious to me - a few config files rather than NetworkManager’s “loads of scripts” approach. Small niggles rather than big issues.

    Really, I just don’t want duplication of services - more stuff to keep up to date. And if I’ve got systemd anyway, might as well use it…
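    “A few config files” really is the whole story - a single .network unit like this one covers a wired DHCP setup (the interface glob is an assumption; the shape follows the systemd.network documentation):

```
# /etc/systemd/network/20-wired.network
[Match]
Name=en*

[Network]
DHCP=yes
```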


  • NetworkManager dependencies can now be disabled at build time…

    Nice. It was a damned nuisance that Cinnamon brought its own network stack with it. All my headless servers and my Plasma gaming desktop use systemd-networkd, which meant that my Cinnamon laptop needed different configuration. Now they can all be the same.

    Hopefully the new release will bash a few of the remaining Wayland bugs; Plasma is great but I prefer Cinnamon for work, and it’s just too buggy for gaming on a multi-monitor setup at the moment.



  • Java’s biggest strength is that “the worst it can be” is not all that bad, and refactoring tools are quite powerful. Yes, it’s wordy and long-winded. Fine, I’d rather work with that than other people’s Bash scripts, say. And just because a lot of Java developers have no concept of what memory allocation means, and are happy to pull in hundreds of megabytes of dependencies to do something trivial, then allocate fucking shitloads of RAM for no reason doesn’t mean that you have to.

    There is a difference in microservices between those set up by a sane architect:

    • clear data flow and pragmatic service requirements
    • documented responses and clear failure behaviour
    • pact server set up for validation in isolation
    • entire system can be set up with eg. a docker compose file for testing
    • simple deployment of updates into production and easy rollback

    … and the CV-driven development kind by people who want to be able to ‘tick the boxes’ for their next career move:

    • let’s use Kubernetes, those guys earn a fortune
    • different pet language for every service
    • only failure mode is for the whole thing to freeze
    • deployment needs the whole team on standby and we’ll be firefighting for days after an update
    • graduate developers vibe coding every fucking thing and it getting merged on Claude’s approval only

    We mostly do the second kind at my work; a nice Java monolith is bliss to work on in comparison. I can see why others would have bad things to say about them too.
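    The ‘whole system from a compose file’ point in the first list can genuinely be this small (service names and images here are placeholders, not from the comment):

```yaml
# docker-compose.yml (sketch)
services:
  orders:
    image: example/orders-service:1.4
    depends_on: [db]
    ports: ["8080:8080"]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```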


  • Apart from being slow, having discoverability issues, not being able to combine filters and actions so that you frequently need to fall back to shell scripts for basic functionality, it being a complete PITA to compare things between accounts / regions, advanced functionality requiring you to directly edit JSON files, things randomly failing and the error message being carefully hidden away, the poor audit-trail functionality to see who changed what, and the fact that putting anything complex together means spinning so many plates that Terraform’ing all your infrastructure looks like the easy way; I’ll have you know there’s nothing wrong with the AWS Console UI.
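    The Terraform escape hatch at least gets you declarative, diffable text (resource and bucket names here are hypothetical):

```
# main.tf (sketch)
resource "aws_s3_bucket" "audit_logs" {
  bucket = "example-audit-logs"
}
```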


  • Yeah. You know the first time you install Arch (btw), and you realise you’ve not installed a working network stack, so you need to reboot from the install media, remount your drives, and pacstrap the stuff you forgot? Takes, like, three minutes every time? Imagine that, but with a kernel compile as well, so it takes about half an hour.

    Getting Gentoo to boot to a useful command line took me a few hours. Worthwhile learning experience - you come away understanding how boot, the initramfs, init and the core utilities all work together. Compiling the kernel is actually quite easy; understanding all the options is probably a lifetime’s work, but the defaults are okay. Setting some build flags and building the ‘Linux core’ is just a matter of watching it rattle by; it doesn’t take long.

    Compiling a desktop environment, especially a web browser, takes hours, and at the end, you end up with a system with no noticeable performance improvements over just installing prebuilt binaries from elsewhere.

    Unless you’re preparing Linux for e.g. embedded use, where you need to account for basically every byte, or perhaps you’re just super-paranoid and don’t want any pre-built binaries at all, the benefits of Gentoo aren’t all that compelling.