• Jo Miran@lemmy.ml · 13 days ago

      I have been an IT professional since 1995. Never have I ever had a personal PC that wasn’t either a refurbished laptop or some sort of Frankenstein abomination that I put together from whatever was on sale and upcycled parts.

      • partial_accumen@lemmy.world · 13 days ago

        I have been an IT professional since 1995. Never have I ever had a personal PC that wasn’t either a refurbished laptop or some sort of Frankenstein abomination that I put together from whatever was on sale and upcycled parts.

        I’ve been in the game for about the same amount of time. I stopped doing that about 15 years ago, when I realized that the electricity I was paying to run older gear equaled or exceeded the cost of buying newer, faster, lower-power hardware.
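        Back-of-the-envelope, the break-even works out something like this (a rough sketch; the wattages, hardware cost, and electricity price below are made-up illustrative numbers, not anyone’s actual bills):

```python
# Rough break-even: keep old power-hungry gear running vs. buy newer,
# more efficient gear. All numbers are illustrative assumptions.

def breakeven_months(old_watts, new_watts, new_hw_cost, price_per_kwh=0.30):
    """Months until the electricity savings pay off the new hardware,
    assuming the machine runs 24/7."""
    saved_kwh_per_month = (old_watts - new_watts) / 1000 * 24 * 30
    saved_per_month = saved_kwh_per_month * price_per_kwh
    return new_hw_cost / saved_per_month

# e.g. a 150 W old server replaced by a 30 W mini PC costing 300:
print(round(breakeven_months(150, 30, 300), 1))  # ~11.6 months
```

        Past that point, the old "free" hardware is the expensive option.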

        • Windex007@lemmy.world · 13 days ago

          Power costs are a poor tax, in the same way that skipping the dentist and getting a root canal later is.

          I’m also in the process of power-efficiency-izing my lab. It just wasn’t a feasible option before; I didn’t have the means, so I just paid interest via electricity.

          • partial_accumen@lemmy.world · 13 days ago

            Do we need to update Sam Vimes ‘Boots’ Theory of Socio-Economic Unfairness to Sam Vimes **‘Compute’** Theory of Socio-Economic Unfairness?

        • Aceticon@lemmy.world · 12 days ago

          Curiously, judging by my recent upgrade parts search, the capability-to-power-used curve on PCs (at least gaming ones) seems to have peaked about a decade ago.

          Signed, a fellow Old Sea Dog Of Tech who has also gone through the same change over a decade ago

      • Fizz@lemmy.nz · 13 days ago

        I swear IT folk have the shittiest hardware and jankiest setups, and create more problems for themselves than any user ever could.

        • tomkatt@lemmy.world · 13 days ago

          It’s why we’re able to fix all the things. We dogfood shit setups, unsupported configurations, and weird edge cases so you don’t have to.

        • cm0002@lemmy.world (OP) · 13 days ago

          I don’t even restart when installing new software that needs it, I just reload whatever service or dependent software on the fly 😎

        • Aceticon@lemmy.world · 12 days ago

          The secret is to give yourself Elitez Hacker objectives like “least maintenance time required” or “maximum computing power, lowest energy consumption” (or its companion, “silent yet powerful”).

          Maybe “I’m fed up with the constant need for tweaking and the jet-plane-like quality of my heater-that-does-computing-on-the-side” is the real mid-life crisis of techies.

          • Fizz@lemmy.nz · 12 days ago

            I try to do everything with 2nd-hand stuff, as cheap as possible. This causes me an unbelievable amount of trouble because I have to get all this ancient shit to work in a spaghetti network. Half the time I don’t even know what I’m doing; I’m just happy to be there.

            • Aceticon@lemmy.world · 11 days ago

              Yeah, I’ve been there - it’s how I learned to upgrade and eventually assemble my own PCs: I couldn’t just buy a new one every time it started to run slow with newer games, so I learned which parts gave the better bang for the buck (back in those days it was often memory), would upgrade them, eventually hit another bottleneck, and upgrade that part, and so on. Once in a while I did need to do a big upgrade (i.e. the motherboard, which usually meant a new CPU and new memory too).

              I was also pretty lost - at least to begin with - back then, but, you know, doing is learning.

              Anyways, I still keep the “no waste” habits from back then. For example, I recently upgraded my CPU to one the benchmarks say is twice as powerful - but my CPU is from 2018 and I didn’t want to replace the motherboard, so the replacement had to be a CPU for the same socket type, i.e. something also from that era. I ended up getting a server-class CPU that cost over €200 back then but, 2nd hand, cost me just €17.

              Over time I have also learned to prioritize other things, and learned that sometimes spending a bit more upfront saves a lot more over time. For example, if I aim for stuff that produces less heat (i.e. that uses less power to do its work - “lower TDP” in today’s technical lingo), I might spend a bit more but save it all and then some in lower electricity costs over time.

              Point being that with a bit of reading and looking around you can learn what you need to better choose what you get, even 2nd hand, in such a way that the result is less of a hassle and sometimes even saves more money (for example, parts that eat a lot of power can, in a year or two, end up costing more than newer parts that consume less, even when the old parts were cheaper to buy).

              Also, as one gets more financially able to afford it, it’s normal to trade money for personal time. I don’t really need a fragile setup held together with chewing gum and string that is constantly giving me problems and eating tons of my time just to keep it going, when for at least some things I can get a ton of extra convenience and save a lot of my time by spending a little more money. There is a monetary value in not having to worry about something breaking all the time and having to constantly tweak and maintain it; you just have to find how much it is worth to you (I can tell you peace of mind and no hassle are worth a lot more to me nowadays than back when I was a teen).

      • Evil_Shrubbery@lemm.ee · 13 days ago

        Isn’t that a bit like buying an old truck instead of a year-old Miata?

        Afaik those CPUs use so much juice when idling … sure, you don’t get all them lanes or ECC, but a PC at the same price with a few-year-old CPU outclasses that CPU by a lot & at a fraction of the running cost (also quietly).

        Just something to keep in mind as an alternative, especially when you don’t intend to fill all the PCIe bussy (several users with several intensive tasks that benefit from a wider bus to RAM & PCIe even with a slow CPU).
        Ok, and you miss out on some fancy admin stuff, but … it’s just for home use …

        • lud@lemm.ee · 13 days ago

          Yeah server hardware isn’t the most efficient if you want to save power. It’s probably better to get a NUC or something.

          With that said, my old Dell PowerEdge R730 only uses around 84 watts (running around 5 VMs that are doing pretty much nothing). The server runs Proxmox and has 128 GB of RAM, two Xeon E5-2667 v4 CPUs, 4 old used 1 TB HDDs I bought for cheap, and 4 old used 128 GB SATA SSDs I also bought for cheap (all storage is 2.5″ drives).

          All I had to do was change a few BIOS settings to prioritize efficiency over performance. 84 watts is obviously still not great but it’s not that bad.
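          For a sense of what a constant 84 W adds up to over a year (the electricity price here is an assumption - plug in your own rate):

```python
# Annual energy and cost of a box drawing a constant 84 W, 24/7.
# The price per kWh is an assumed placeholder; substitute your local rate.

watts = 84
price_per_kwh = 0.30  # assumption

kwh_per_year = watts / 1000 * 24 * 365          # ~736 kWh
cost_per_year = kwh_per_year * price_per_kwh    # ~221 at the assumed rate

print(f"{kwh_per_year:.0f} kWh/year, roughly {cost_per_year:.0f} per year")
```

          Nothing dramatic, but enough that BIOS efficiency settings and pulling unused cards pay for themselves.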

          • Evil_Shrubbery@lemm.ee · 13 days ago

            Sounds nice, but yes, uses quite a bit of power.

            I should measure mine - I have a Ryzen 5900 (24t, 64 MB … some 20k Cinebench score) as the main, and a Core 12700 (16+4t, 12 MB).
            (And Intel gen 7 and gen 2 machines at my parents’. All of them Proxmoxed.)

            Never ever managed to bottleneck anything on them, not really, but got them super cheap used.

            Buying anything server/enterprise that powerful would cost me a lot of moneys. And it would probably have two CPUs, which doubles a lot of the power-hungry bits.

            • lud@lemm.ee · 12 days ago

              The only reason that I have measured my server is that it has that feature built into the iDRAC. I have been thinking of buying an external power meter for years but have never bothered to do that.

              Luckily I got my server for free from work. It was part of an old SAN so it came with 4 dual 16 Gbit fiber channel cards and 2 dual 10 gigabit ethernet cards. Before I took those out of the server it consumed around 150 watts at idle which is crazy.

        • NaibofTabr@infosec.pub · 13 days ago

          I always recommend buying enterprise grade hardware for this type of thing, for two reasons:

          1. Consumer-grade hardware is just that - it’s not built for long-term, constant workloads (that is, server workloads), and it’s not built for redundancy. The Dell PowerEdge has hot-swappable drive bays, a hardware RAID controller, dual CPU sockets, 8 RAM slots, dual built-in NICs, the iDRAC interface, and redundant hot-swappable PSUs. It’s designed to be on all the time, reliably, and can be remotely managed.

          2. For a lot of people who are interested in this, a homelab is a path into a technology career. Working with enterprise hardware is better experience.

          Consumer CPUs won’t perform server tasks the way server CPUs do. If you want to run a server, you want hardware that’s built for server workloads - stability, reliability, redundancy.

          So I guess yes, it is like buying an old truck? Because you want to do work, not go fast.

          • Evil_Shrubbery@lemm.ee · 12 days ago

            Is this mythology? :P
            Server stuff is unusual and mysterious, rare, and expensive - I get the allure.

            I like your second point (tho I wouldn’t say a lot - most of us just want services at home, and Proxmox, or even Linux in general, isn’t the most common hypervisor to learn for getting a job at, say, mid-sized companies). But for the rest - a PC can take loads just as well as enterprise/server gear; this isn’t the 90s or early 2000s when you got shitty capacitors on even the best consumer mobos. Your average second-gen Core PC could run non-stop from its birth to today.
            The exception is hard drives, which homelabbers buy enterprise anyway.
            BTW - who has their homelab on full load all the time (not sarcasm, actually asking for use cases)?

            The rest is just additional equipment one might or might not need. A second CPU socket is irrelevant when buying old servers, RAM slots need to be filled to even take advantage of the extra lanes of server CPUs (and even then older tech might still be slower than dual-channel DDR5), drive bays are cheap to buy … but if you want nice hot-swappable PSUs then you need a server/workstation case.

            Server and consumer CPUs mostly differ in how well they can parallelize tasks, mostly by having more cores and more lanes. But if a modern CPU core outclasses older server CPU cores by something like 10:1, that logic just doesn’t add up anymore. Both do the same work.

            Imho old servers aren’t super cheap but are priced accordingly.

            I think this whole consumer-vs-enterprise hardware debate (except hard drives ofc) can be summed up in a proxy question: do homelabbers need registered ECC RAM?

      • wreckedcarzz@lemmy.world · 13 days ago

        I have a ThinkServer with a similar Xeon, running Proxmox -> Debian, so I was looking like “huh, interesting” until I saw the internals.

        Fuuuuuuuuuuuuuuuuuck all that. Damn it Dell, quit your weird bullshit. It’s just a motherboard, cpu, cooler, and ram. Slap in intake and exhaust fans. Figure it the fuck out.

        E: and it better have a goddamn standard psu, too. Fuck yourself, Dell. I’ve seen your shit.

        • Benjaben@lemmy.world · 13 days ago

          The one saving grace is that their one-off custom damn shit always feels well designed, and they move a lotta units (which helps with repairs when everything is GD custom). Dunno if that’s changed in recent years.

          With that said, I avoid them for personal use, usually for the same reason: why have a desktop if you don’t get the benefit of parts compatibility?!

        • NaibofTabr@infosec.pub · 13 days ago

          Hmm, I don’t have direct experience with ThinkServers, but what I see on eBay looks like standard ATX hardware… which is not really what you want in a server.

          The Dell motherboard has dual CPU sockets and 8 RAM slots. The PSUs are not the common ATX desktop format because there are 2 of them and they are hot-swappable. This is basically a rack server repackaged into a desktop tower case, not an ATX desktop with a server CPU socket.