





RAID (except RAID0) is data redundancy; it just isn’t backup (i.e. it doesn’t help if you accidentally delete stuff, if some bug corrupts it, or if you drop the computer while moving it).


Must be different depending on which country you live in?
I had Prime for a while (because people in my family wanted to try Amazon Video). The included discounts were basically ridiculous (like 2%, and often only if you subscribed to a recurring order), and the difference in delivery speed was that I got the “your order has been shipped” email the same day I ordered rather than a couple of days later (which seems to be the norm when I get free delivery without Prime).


we just purposefully delayed non-priority orders by 5 to 10 minutes to make the Priority ones “feel” faster by comparison
Isn’t that how Amazon Prime works?


is there an easier way to do self-signed certs besides spinning up your own certificate authority?
Let’s Encrypt works fine; just use a “real” domain and the DNS challenge.
Your service will need to be on the “real” domain, but it won’t need to be accessible externally and you won’t need a public DNS entry for it (of course your VPS will still need to be able to resolve the backend’s name).
In layman’s speech (my speech) raid 1 and mirroring are essentially the same thing.
Technically, IIUC “RAID” is only used for hardware RAID controllers; ZFS calls its equivalent RAIDZ1 (and I think it stores data on one disk and parity on another?), and both LVM and btrfs call theirs mirroring (each with its nuances). Whichever you pick, it’s a mode where you use two disks at 50% efficiency and your data survives the loss of one disk.
There are configurations that use more disks with higher efficiency than 50%, but I would avoid them in a homelab: the more disks you have, the higher the power drain and the higher the chance that at least one of them will fail. In a homelab scenario, what you really want to minimize is the chance of needing to perform maintenance (replacing a drive in a RAID and restoring from a backup are both a hassle, and it’s not like the first requires significantly less work).
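To put a rough number on that last point, here’s a back-of-the-envelope sketch. The 3% annual failure rate is a made-up illustrative figure, and real drives aren’t truly independent (same batch, same enclosure, same power supply), so treat this as a trend, not a prediction:

```python
# Chance that at least one of n drives fails within a year, assuming
# independent failures: P = 1 - (1 - p)^n.
# p = 0.03 is an illustrative assumption, not a measured failure rate.
p_fail = 0.03

def p_at_least_one_failure(n_drives: int, p: float = p_fail) -> float:
    """P(at least one of n drives fails) = 1 - P(none of them fails)."""
    return 1 - (1 - p) ** n_drives

for n in (1, 2, 4, 8):
    print(f"{n} drives: {p_at_least_one_failure(n):.1%}")
```

The exact numbers don’t matter much; the point is that the probability of *some* maintenance event grows with every drive you add.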
In your shoes (and in mine, whenever I need to redo my RAID1 NAS), I’d skip RAID altogether and use the extra disk for extra backups of the data I care about.
Most of my NAS is filled with movies I’ve ripped, and I honestly wouldn’t really care much if I were to lose them: the movies I may want to re-watch are really few and I can just rip them again (or even buy them again) if the need arises.
Backups are enormously more important than RAID (will RAID do anything for you if you accidentally delete your family photos? what if the NAS floods or gets dropped on the floor?): you should really direct your time/resources/effort towards setting up automatic and monitored backups before worrying about RAID.
A NAS is any computer with space/connectors for drives and an ethernet port… it doesn’t need to be powerful or state-of-the-art, and there’s really no reason it should be expensive (besides the drives).
Of course companies will be more than happy to sell you an outdated J4125-based computer with 4 disk bays for over 500EUR, but that doesn’t mean you have to bite.
As for RAID, if you want to use it, just set up mirrored drives (ZFS, btrfs or even LVM) and be done with it: you’ll need backups anyway, so don’t overthink it. Unless you want to avoid downtime (which probably isn’t a big issue for most of your data?), you can do without RAID and just restore from backup if a drive happens to break.
If you don’t want to build your own PC, I’ve heard good things about these: https://aoostar.com/collections/nas-series (beware: I didn’t try any of them - my N3150-based NAS is not old enough to need replacement yet)


Hopefully, in time, people will learn that articles about LLM-generated stuff are as interesting as articles about what autocomplete suggestions VS Code gives for specific half-written lines of code.


Leverage WhatsApp and hang good old posters around the neighborhood?


…and that’s why you need 16GB and a decent CPU to navigate the web


Did you ask an AI to do the list for you? (no need to answer)


Intriguing.
What’s the mechanism for dealing with spammers?
In Lemmy there’s a clear escalation path that will lead to either the spammer’s instance dealing with the issue or the instance itself being de-federated.
How would that work in a p2p system?
Having each user individually block every spammer will work about as well as it did for email back in the day.


It’s optimized for making money


It straight made up a powershell module, and method call. Completely made up, non existent.
It was just imagining the best way to accomplish the task: instead of complaining, you should have just asked it to give you the source code of that new module.
Your lack of faith in AI is hindering your coding ability.
(do I need to add the /s? no, right?)


the translations on the database are for entities on the db
Oh, then you could consider having one extra table per entity (one-to-many) with the translatable stuff:
create table some_entity (
  id integer primary key, -- concrete types here are illustrative
  -- fields for attributes that are not translated
  price numeric,
  created_on timestamp,
  deleted_on timestamp
  -- ...
);
create table some_entity_i18n (
  id integer primary key,
  some_entity_id integer not null references some_entity(id),
  locale text not null,
  -- one field per translatable attribute
  title text,
  description text,
  -- ...
  unique (some_entity_id, locale) -- one row per entity per locale
);
IMHO putting everything in one big table will only complicate things in the long run.


INSERT INTO TextContent (OriginalText, OriginalLanguage)
VALUES ("Ciao", "it");
Shouldn’t that be TextContent(TextContentId, OriginalText)? Something like
INSERT INTO TextContent (TextContentId, OriginalText) VALUES (1, "Ciao");
(then you should make the id a primary key, index OriginalText and make the id in the other table a foreign key)
I could drop TextContent too, and just have a Translations table with TextContentId
Sure, but then you would have to reference the text via TextContentId in your code, which would be very annoying.
Instead you could have a function, say t("Ciao"), that kinda runs something like the following (of course, loading all the translations in RAM at startup and referencing those would be better than running a query for each and every string):
select t.translation
from textcontent tc
join translations t on t.textcontentid = tc.textcontentid
where tc.originaltext = ?
and t.language = ?
The function could also return the originaltext and log an error message if a translation is not found.
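A sketch of that t() helper in Python with sqlite3, with the translations loaded into RAM once at startup. Table and column names follow the schema being discussed; the sample data and function signature are illustrative assumptions:

```python
# Sketch of a t() helper backed by the textcontent/translations tables.
# All data here is made up for illustration.
import logging
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table textcontent (
        textcontentid integer primary key,
        originaltext text unique
    );
    create table translations (
        textcontentid integer references textcontent(textcontentid),
        language text,
        translation text
    );
    insert into textcontent values (1, 'Ciao');
    insert into translations values (1, 'en', 'Hello'), (1, 'fr', 'Salut');
""")

# Load everything once at startup: {(originaltext, language): translation}
_cache = {
    (orig, lang): text
    for orig, lang, text in conn.execute("""
        select tc.originaltext, t.language, t.translation
        from textcontent tc
        join translations t on t.textcontentid = tc.textcontentid
    """)
}

def t(original: str, language: str) -> str:
    """Return the translation, or log and fall back to the original text."""
    try:
        return _cache[(original, language)]
    except KeyError:
        logging.error("missing %s translation for %r", language, original)
        return original

print(t("Ciao", "en"))  # -> Hello
print(t("Ciao", "de"))  # -> Ciao (falls back, logs an error)
```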
BTW 1: most frameworks/languages have i18n facilities/libraries - you may investigate one and use it instead of rolling your own.
BTW 2: why would you put the translations in a database? what’s the advantage compared to files?


By that reasoning, backup isn’t redundancy because you’ll lose your data if the backup gets corrupted while restoring.
That said, there’s nothing wrong with redefining “redundant” to mean “having two or more duplicates”… you should however tell people when you do, to avoid misleading those who assume the dictionary definition.