The bigger issue is monetization. YouTube is popular in no small part because creators are trying to make money.
Calling RCS an industry standard is a bit… questionable. Still, I’m happy to see Apple finally implementing it so there’s a good cross-vendor texting implementation.
I wonder how this scales to large voice rooms.
Maybe; it does sound like reducing the size of the driver is potentially possible as well https://www.phoronix.com/news/AMDGPU-Headers-Repo-Idea
Right; any solution they come up with presumably needs to be more scalable than “new drivers” and “old drivers”. Eventually there will be too large a set of “old drivers” and we’ll end up in the same situation with a small “new drivers” driver and a large “old drivers” blob.
I’ve never met a single person who feels this strongly about Twitter, at least that I know of. Most everyone I know was lukewarm on it at best.
It was better for keeping up with news from organizations than for keeping up with people.
Plex is moving in the app direction… so it’s probably moving away from what you want, despite being one of the easiest options.
It would probably be helpful to know what you’re trying to accomplish beyond the “what”. Like, why do you want to host your music and play it via a web browser?
It’s a shame; back when they were Wikia and just hosted MediaWikis with light ads, it was actually a really nice service.
IIRC Telegram does as well.
So, the web uses a system called the chain of trust. There are public keys stored in your system or browser that are used to validate the public keys given to you by various websites.
Both Let’s Encrypt and traditional SSL providers work because their root keys are already on your system in the appropriate place, so certificates they sign are deemed trustworthy.
All that to say, you’re always trusting a certificate authority on some level unless you’re doing self-signed certificates… and then nobody trusts you.
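To make that concrete, here’s a minimal Python sketch of the chain of trust in action (standard-library ssl; example.com is just a placeholder host):

```python
import socket
import ssl

# create_default_context() loads the trusted root CAs that ship with
# your OS or Python build (the pre-installed public keys described above).
context = ssl.create_default_context()

hostname = "example.com"
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # If the server's certificate chain doesn't terminate at one of
        # those trusted roots, the handshake raises
        # ssl.SSLCertVerificationError instead of reaching this line.
        print(tls.getpeercert()["issuer"])
```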
The main advantage of a paid certificate authority is a bit more flexibility and a fancier certificate for your website, one that perhaps also includes the business name.
Realistically… there’s not much of a benefit for the average website or even a small business.
I’d give up any and every point on guns in favor of police reform, proper election and transition-of-power legislation, and climate change.
Actually, I think they have it exactly right. The problem is that Republican voters’ views and priorities have been misaligned with their party representatives’ for at least a decade.
Nowhere is this more evident than in evangelical voters jumping through hoops to justify a detestable candidate of poor morals.
What Trump, the Tea Party before him, etc. represent to the folks who adore them is quite different from what those things actually are.
So the local machine doesn’t really need the firewall; it definitely doesn’t hurt, but your router should be covering this via port forwarding (IPv4) or just straight-up firewall rules (IPv6).
You can basically go two routes to reasonably harden the system, IMO. You can either just set up a user without administrative privileges and use something like a systemd system-level service to start the server as that user and provide control over it from other users… OR… if you’re really paranoid, use a virtual machine and forward the port from the host machine into the VM (rough sketch of the first route below).
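Something like this is what the dedicated-user route might look like as a systemd unit; the user name, paths, and Java flags are all placeholders:

```ini
# /etc/systemd/system/minecraft.service -- hypothetical example unit
[Unit]
Description=Minecraft server
After=network.target

[Service]
# Run as an unprivileged user so a compromised server process can't
# touch the rest of the system.
User=minecraft
WorkingDirectory=/srv/minecraft
ExecStart=/usr/bin/java -Xmx4G -jar server.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now minecraft` starts it at boot, and any admin can control it via systemctl.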
A lot of what you’re doing is… fine stuff to do, but it’s not really going to help much (e.g. building system packages with hardening flags is good, but it only helps if those packages are actually part of the attack surface, i.e. exposed to the remote users in some way).
Your biggest risk is going to be unvetted plugins doing bad things, and really only the VM or the dedicated user account provides an insulation layer there (the VM only adds extra protection against privilege escalation, which is pretty hard to pull off on a patched system).
My advice for most people:
For Minecraft in particular, to properly back things up on a busy server you need to disable auto-save, manually force a save, do the backup, and then re-enable auto-save afterwards. Kopia can issue commands to talk to the server around a snapshot, but you need a plugin running on the server that can react to those commands (or possibly to use the server console via stdin). Realistically though, that’s overkill and you’ll be just fine backing up the files exactly as they are periodically.
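If you do want the full dance, here’s a rough Python sketch of the idea using RCON instead of a plugin (assumes the third-party mcrcon package and that RCON is enabled in server.properties; the host, password, and paths are placeholders):

```python
import subprocess

from mcrcon import MCRcon  # third-party: pip install mcrcon

# Connect to the Minecraft server's RCON console.
with MCRcon("127.0.0.1", "rcon-password") as mcr:
    mcr.command("save-off")  # disable auto-save
    mcr.command("save-all")  # force a full save to disk first
    try:
        # Placeholder backup command; swap in whatever you actually use.
        subprocess.run(
            ["kopia", "snapshot", "create", "/srv/minecraft/world"],
            check=True,
        )
    finally:
        mcr.command("save-on")  # re-enable auto-save even if backup fails
```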
Kopia in particular will do well here because of its deduplication of backed-up data plus a chunking algorithm that breaks files into pieces. That has saved me a crazy amount of storage versus other solutions I’ve tried. Kopia-level compression isn’t needed because the Minecraft region files themselves are already highly compressed.
I got banned from Signal’s subreddit for talking about how Telegram works and the case for it.
So, most dorms don’t want you using your own router because a bunch of student routers causes A LOT of interference.
You should probably reach out not to the dorm folks but to the university networking folks, as they’re the ones who will ultimately decide whether or not to turn things off or disconnect you.
A cheap network switch would probably be okay by them and would get you some more wired connections in your dorm room (routers aren’t really a great way to do that).
As a secondary concern, using a router will put all your connected devices behind a double NAT (universities don’t operate the way ISPs do). That could cause some weird networking shenanigans, particularly for anything peer-to-peer like online games.
I’ve been reading her book, and the truancy thing is interesting. She had data showing that kids who weren’t showing up at school, particularly young ones, didn’t learn to read sufficiently well, fell behind in school, and struggled to catch up; they then struggled later in life, often ending up as either victims or perpetrators of crime.
So, she used the California DA’s office to enforce truancy laws across California, encouraged reaching out to fix the problems at home if at all possible, and also encouraged reaching out to folks who had been written off as “not caring” (she cites an example of a father who hadn’t been paying child support but, upon learning that his daughter wasn’t going to school, started taking her to school every morning and volunteering in her classroom).
Of course this is all by her account, but that sounds overall quite positive to me.
Sure, there’s a cost to breaking things up; all multiprocessing and multithreading comes at a cost. That said, in my evaluation, single-file “unity builds” are garbage; sometimes a few files are used to get some multiprocessing back (as the GitHub link you mentioned references).
They’re mostly a way to minimize the number of translation units so that you don’t have the “I changed a central header that all my files include and now I need to rebuild the world” problem, where the world consists of many, many small translation units (this is arguably worse on Windows because process spawning is more expensive).
Unity builds as a whole are very, very niche, and you’re almost always better off doing a more targeted analysis of where your build (or, often more importantly, your incremental build) is expensive and making appropriate changes. Note that large C++ projects like LLVM, Chromium, etc. do NOT use unity builds, almost certainly because they are not more efficient in any sense.
I’m not even sure how they got started; presumably they were mostly a way to get LTO without LTO. They’re absolutely awful for incremental builds.
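For anyone unfamiliar, the whole trick is just textual inclusion; a hypothetical three-file project becomes one translation unit (file names made up):

```cpp
// unity.cpp -- the entire "unity build": one translation unit that
// #includes the individual source files, so headers shared by a.cpp,
// b.cpp, and c.cpp get parsed once instead of three times.
#include "a.cpp"
#include "b.cpp"
#include "c.cpp"
// The catch: touch any one of these files (or anything they include)
// and this whole unit recompiles, hence "awful for incremental builds".
```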
Slow compared to what exactly…?
The worst part about headers is needing to reprocess the whole header from scratch in every translation unit that includes it… but precompiled headers largely solve that (as does just using smaller, more targeted header files).
Even in those cases, there’s something to be said for the extreme parallelism of a C++ build. You give some of that up with modules in exchange for better code organization; in some cases modules do help build times, but I’ve heard that in others they hurt (a fair bit of that might just be inexperience with the feature and best practices, plus immature implementations, but alas).
There’s no precompiler in C++. There’s a preprocessor, but that’s something entirely different, and it’s typically not a slow portion of the compile process.
C++ is getting to the point where modules might work well enough to do something useful with them, and they remove the need for #include preprocessor directives to share code.
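A minimal sketch of what that looks like with C++20 modules (module and file names are made up, and build-system support still varies by toolchain):

```cpp
// math.cppm -- a module interface unit; no header file, no include guards.
export module math;

// Only entities marked `export` are visible to importers.
export int square(int x) { return x * x; }
```

```cpp
// main.cpp -- consumers write `import` instead of `#include`; the
// compiler reads a compiled binary module interface rather than
// reparsing header text in every translation unit.
import math;

int main() { return square(7); }
```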
https://firstamendment.mtsu.edu/post/first-amendment-protected-mans-cursing-of-police-ohio-appeals-court-rules/