• Fizz@lemmy.nz · 10 hours ago

    I’d probably just warranty the CPU and assume it was a defect instead of blaming the entire company.

    But yeah, AMD is the better choice for everything atm except power-efficient x86 laptop chips.

  • Decq@lemmy.world · 18 hours ago

    I honestly don’t get why anyone would have bought an Intel in the last 3-4 years. AMD was just better on literally every metric.

      • PalmTreeIsBestTree@lemmy.world · 4 hours ago

        Older Intel CPUs are the only ones that can play 4K Blu-rays in the player itself, rather than just ripping them to a drive (UHD Blu-ray DRM requires Intel’s SGX, which newer chips dropped). Very niche use case, but it’s one I can think of.

    • Quatlicopatlix@feddit.org · 16 hours ago

      Idle power is the only thing they are good at, but for a home server a used older CPU is good enough.

      • Decq@lemmy.world · 16 hours ago

        Was that even true for comparable CPUs? I feel this was only for their N100s etc.

        • Quatlicopatlix@feddit.org · 14 hours ago

          Nah, all the AM4 CPUs have abysmal idle power. AM5 got a little better as far as I know, but the Infinity Fabric was a nightmare for idle power.

          • Decq@lemmy.world · 13 hours ago

            Well, I concede, I guess there was one metric they were better at: doing absolutely nothing.

  • SapphironZA@sh.itjust.works · 12 hours ago

    Just out of interest, why did you buy Intel in the first place? I don’t know of many use cases where Intel is the superior option.

  • 3dcadmin@lemmy.relayeasy.com · 16 hours ago

    It was OK until he said the AMD chip consumed more power. It’s an X3D chip, so that’s pretty much a given; if he’d gone for a non-X3D chip he’d have saved quite a bit of power, especially at idle. Plus he seems to use an AMD chip like an Intel chip, with little or no idea how to tweak its power usage down.
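
    For the curious: one way to sanity-check idle draw on Linux is to read the RAPL energy counters under /sys/class/powercap (the same interface covers Intel and, on recent kernels, AMD Zen parts). A minimal Python sketch; the 5-second interval is an arbitrary choice, and reading energy_uj usually needs root:

    ```python
    import time
    from pathlib import Path

    # Package-level RAPL energy counter, in microjoules. Adjust the path if
    # your system exposes a different powercap domain.
    ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

    def package_watts(interval_s: float = 5.0) -> float:
        """Average package power over the interval, in watts."""
        before = int(ENERGY.read_text())
        time.sleep(interval_s)
        delta = int(ENERGY.read_text()) - before  # ignores counter wraparound
        return delta / 1e6 / interval_s

    if __name__ == "__main__":
        print(f"package power: {package_watts():.1f} W")  # run this at idle
    ```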

  • Vanilla_PuddinFudge@infosec.pub · 20 hours ago

    "Do you need to transcode video?

    Then leave Intel the fuck alone."

    Been my rule for 20 years, and it’s worked well so far.

    • muusemuuse@sh.itjust.works · 17 hours ago

      It’s odd: their GPUs are doing fine, a market they’re young in, but their well-established CPU business is cratering.

      Business majors suck.

      • KingRandomGuy@lemmy.world · 12 hours ago

        Their GPU situation is weird. The gaming GPUs are good value, but I can’t imagine Intel makes much money from them due to the relatively low volume yet relatively large die size compared to competitors (the B580’s die is nearly the size of a 4070’s despite competing with the 4060). Plus they don’t have a major foothold in the professional or compute markets.

        I do hope they keep pushing in this area still, since some serious competition for NVIDIA would be great.

  • KiwiTB@lemmy.world · 23 hours ago

    Looks like they didn’t have adequate cooling for their CPU and killed it… then replaced it without correcting the cooling. If your CPU hits three digits, it’s not cooled properly.

      • frongt@lemmy.zip · 17 hours ago

        The article (or one of the linked ones) says the max design temperature is 105°C, so it doesn’t throttle until it hits that.

        Which makes me think it should be able to sustain operating at that temperature. If not, Intel fucked up by speccing them too high.

        • sugar_in_your_tea@sh.itjust.works · 17 hours ago

          I’d expect it to still throttle before getting to 105C, and then adjust to maintain a temp under 105C. If it goes above 105C, it should halt.

          • frongt@lemmy.zip · 17 hours ago

            Then you misunderstand the spec. That’s the max operating temperature, not the thermal protection limit. It throttles at 105 so it doesn’t hit the limit at 115 or whatever and shut down. I can’t find a detailed spec sheet that might give an exact figure.

            • sugar_in_your_tea@sh.itjust.works · 16 hours ago

              The chip needs to account for thermal runaway, so I’d expect it to throttle before reaching the max operating temperature and then adjust so it stays within that range. So it should downclock a little around 90C or whatever, then throttle harder as needed as it approaches 105C or whatever the max operating temp is. If it goes above that temp, it should aggressively throttle or halt, depending on how far above it went and how quickly.

              • frongt@lemmy.zip · 15 hours ago

                I’d expect it to throttle before reaching max operating temperature

                Again, you misunderstand. The max operating temperature is where Intel has stated that the CPU can safely operate for extended periods of time, including accounting for situations like thermal runaway (though ideally they engineer the chip so that doesn’t happen in the first place).

                If that situation does occur, the chip attempts to throttle at 105, and if that fails then it presumably halts at whatever the protection threshold is, before it hits the actual damage point, as I said.

                • sugar_in_your_tea@sh.itjust.works · 15 hours ago

                  Interesting, so it only throttles at that temp? That’s a bit different from how AMD handles it, IIRC, which I think stops boosting around 80C or so and throttles around 90C, with the max operating temp closer to 100C.

          • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 16 hours ago

            Why? It’s designed to run up to 105c.

            I think it was when AMD’s 7000-series CPUs were running at 95c and everyone freaked out that AMD came out and said the CPUs are built to handle that load 24/7, 365, for years on end.

            And it’s not like this is new to Intel. Intel laptop CPUs have been doing this for a decade now.

            • sugar_in_your_tea@sh.itjust.works · 16 hours ago

              CPUs should throttle as they approach the limit to prevent thermal runaway. As a chip gets closer to that limit, it should adjust the frequency in smaller increments until it settles at that temp, to keep the temperature swings small.
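
              Something like this toy Python sketch of a stepped control loop (the thresholds and the thermal model are made up for illustration; this isn’t any vendor’s actual firmware logic):

              ```python
              # Toy model: heat scales with clock speed, cooling with how far
              # the chip sits above ambient. All numbers are illustrative.
              BOOST_CUTOFF_C = 80    # stop boosting above this
              THROTTLE_START_C = 90  # begin stepping clocks down
              MAX_OPERATING_C = 105  # spec'd max operating temperature
              HALT_C = 115           # emergency shutdown threshold
              AMBIENT_C = 30.0

              def simulate_temp(temp: float, freq_mhz: int) -> float:
                  heat = freq_mhz / 60.0
                  cooling = (temp - AMBIENT_C) * 0.8
                  return temp + 0.05 * (heat - cooling)

              def next_freq(temp: float, freq: int, base: int, boost: int) -> int:
                  if temp >= MAX_OPERATING_C:
                      return max(base, freq - 500)   # aggressive throttle
                  if temp >= THROTTLE_START_C:
                      # Smaller steps the closer we are to the limit, so each
                      # adjustment moves the temperature less as we converge.
                      step = max(25, int(10 * (MAX_OPERATING_C - temp)))
                      return max(base, freq - step)
                  if temp < BOOST_CUTOFF_C:
                      return min(boost, freq + 25)   # creep back toward boost
                  return freq

              temp, freq = 45.0, 5000  # start out boosting at 5 GHz
              for _ in range(2000):
                  temp = simulate_temp(temp, freq)
                  if temp >= HALT_C:
                      raise SystemExit("thermal protection: halt")
                  freq = next_freq(temp, freq, base=3500, boost=5000)
              print(f"settled at ~{freq} MHz, {temp:.1f} C")
              ```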

              • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 15 hours ago

                105c is the max operating temperature. It’s not going to run away the second it hits 106.

                Your CPU starts throttling at 104c so that it almost never sits at 105c for long. If it can’t maintain clocks, it drops them until 104c can mostly be maintained.

                • sugar_in_your_tea@sh.itjust.works · 14 hours ago

                  If you have an improperly mounted cooler, you could very well get to 105C incredibly quickly, and 115C or whatever the halt temp is shortly after.

      • chloroken@lemmy.ml · 12 hours ago

        laughs in 8700k

        When I overclock this old chip (which it was built for) it can hit over 100 with proper cooling. Some chips are hot as fuck. I think this one shuts off at 105.

    • Kyden Fumofly@lemmy.world · 19 hours ago

      That’s not the case. That’s 100% true for new CPUs, but it holds for old ones too.

      My father’s old CPU cooler didn’t make good contact (it had somehow come loose in one corner), and the system would throttle (fan at 100%, making noise, and the PC running slow). After I fixed it on one of my visits, the CPU worked fine for years.

      The system throttles or even shuts down before any thermal damage occurs (at least when temperatures rise normally).

      • lemming741@lemmy.world · 15 hours ago

        Pretty much anything with a heat spreader should be impossible to accidentally kill. Bare die? May dog have mercy on your soul.

    • Victor@lemmy.world · edited · 23 hours ago

      What if it hits around 90°C during Vulkan shader processing? 😅 Otherwise like 42–52 idle. How’s that? I’m wondering if my cooling is sufficient.

      This is an AMD 9950X3D + 9070 XT setup, for reference.

      Any way to do Vulkan shader processing on the GPU perhaps, to speed it up?
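
      Aside: if you want a number to watch while the shaders churn, on Linux you can poll the Tctl reading from the k10temp hwmon sensor. A small Python sketch, assuming the k10temp driver is loaded (as it normally is on Ryzen):

      ```python
      import time
      from pathlib import Path

      def find_tctl() -> Path:
          """Locate the k10temp Tctl sensor (AMD CPUs) under hwmon."""
          for hwmon in Path("/sys/class/hwmon").iterdir():
              if (hwmon / "name").read_text().strip() == "k10temp":
                  return hwmon / "temp1_input"  # Tctl, in millidegrees C
          raise SystemExit("no k10temp sensor found")

      sensor = find_tctl()
      peak = 0.0
      try:
          while True:  # run while the shader job is going; Ctrl-C to stop
              t = int(sensor.read_text()) / 1000.0
              peak = max(peak, t)
              print(f"Tctl: {t:5.1f} C (peak {peak:.1f})", end="\r")
              time.sleep(1)
      except KeyboardInterrupt:
          print(f"\npeak Tctl: {peak:.1f} C")
      ```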

      • Glitchvid@lemmy.world · 23 hours ago

        It’s fine. Modern CPUs boost until they hit amperage, voltage, or thermal constraints; assuming the motherboard isn’t behaving badly, the upper limits for all of those are safe to sit at perpetually.

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 16 hours ago

        AMD’s 7000-series CPUs were designed to boost until they hit 95c, then maintain those temps. The 9000 series behaves differently for boosting, but the silicon can handle it.

        • Victor@lemmy.world · 15 hours ago

          Okay cool, then I feel more confident. This is only my second build, ever, so I’m a little bit nervous. I didn’t buy any extra fans apart from the ones that came with my case. But I did get that beasty Noctua gen 2 air cooler, and it seems to be holding up so far, even in the hot summer air.

      • miss phant@lemmy.blahaj.zone · edited · 20 hours ago

        If you’re talking about the Steam feature, you can safely turn it off; any modern hardware running Mesa’s RADV (the default AMD Vulkan driver in most distros) should be able to compile shaders in real time thanks to ACO.

        • Victor@lemmy.world · 15 hours ago

          What does it mean to “process shaders in real-time”? Wouldn’t it be objectively faster to process them ahead-of-time? Even if it’s only slightly faster while running the game?

          I mean, processing takes like a minute or so, so it’s no big deal. I’m just curious, for the fun of it, whether I could compile them on the GPU. Not sure it’s even possible.

          • miss phant@lemmy.blahaj.zone · 13 hours ago

            What does it mean to “process shaders in real-time”?

            Processing them as they’re loaded, quickly enough that there’s no noticeable frame drop. The usual LLVM-based shader compilers aren’t fast enough for that, but ACO is written specifically to compile shaders for AMD GPUs and makes this feasible.

            Pre-compilation would in theory always yield higher 1% lows, yes, but it’s not really worth the time hit anymore, especially for games that constantly require a new cache to be built or have really long compilation times.

            I think the one additional thing Steam does in that step is transcoding videos so they can be played back with Proton’s codec set, but using something like Proton-GE, Proton-cachyos, or Proton-EM solves this too.

            Disclaimer: I don’t know how the deeply technical stuff of this works so this might not be exact.
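
            As a toy illustration of the trade-off (not how ACO or Steam’s shader cache actually work internally): compile-on-first-use with a cache is the “real-time” strategy, while pre-compilation pays the whole cost up front.

            ```python
            import hashlib
            import time

            class ToyShaderCache:
                """Compile-on-first-use with a cache: the 'real-time' strategy."""

                def __init__(self) -> None:
                    self._cache: dict[str, bytes] = {}

                def get(self, source: str) -> bytes:
                    key = hashlib.sha256(source.encode()).hexdigest()
                    if key not in self._cache:   # first use: compile now
                        time.sleep(0.001)        # stand-in for a ~1 ms fast compile
                        self._cache[key] = source.encode()  # stand-in for the binary
                    return self._cache[key]      # later uses are free

            cache = ToyShaderCache()
            shaders = [f"shader_{i}" for i in range(100)]

            # Pre-compilation: pay everything before the game starts.
            start = time.perf_counter()
            for s in shaders:
                cache.get(s)
            print(f"up-front cost: {time.perf_counter() - start:.3f}s")

            # Real-time: the same total cost, but spread one shader at a time
            # across gameplay; only noticeable if a single compile blows the
            # frame budget, which a fast compiler avoids.
            ```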

            • Victor@lemmy.world · 5 hours ago

              Huh.

              Well, like I said, it only takes like a minute with half of my 32 threads utilized at 100% (so all of my physical cores, I guess?). Might as well keep doing it, I suppose.

  • Knossos@lemmy.world · 1 day ago

    I built a new PC recently. All I needed to see were the benchmarks over the last 5 years. There’s currently no contest.

    • the16bitgamer@programming.dev · 4 hours ago

      I went from Ryzen 1000 to Intel 12000 since I need single-threaded performance above all else (CAD). Plus it was a steal of a deal.

      If Intel ever sorts out their drivers, or it gets cheap enough, I might go for a 14000 chip, but no further.

  • zr0@lemmy.dbzer0.com · 20 hours ago

    I knew Michael Stapelberg from other projects, but I just realized he is the author of the i3 Window Manager. Damn!

  • Vik@lemmy.world · 23 hours ago

    I’d never heard of Arrow Lake dying like Raptor Lake has been? Wild.

  • callouscomic@lemmy.zip · edited · 23 hours ago

    Somehow I figured out Intel was shit early on. Been AMD for like 15-20 years. I think it was a combo of childhood shit computers running Intel, and a lot of advice pointing out what garbage it was and not worth the cost for PC builds.

    Similar reasons I hate Hitachi and Western Digital hard drives. They always fucking fail.

    • acosmichippo@lemmy.world · 12 hours ago

      15-20 years is silly. Intel was the clear leader for a long time before Ryzen in 2017, and arguably a few years after that too.

    • sugar_in_your_tea@sh.itjust.works · edited · 17 hours ago

      I was in team AMD in the 2000s for two reasons: price and competition to Intel. Intel had a massive anti-trust loss to AMD around that time, and I wanted AMD to succeed. I stuck with them until Zen was actually competitive and stayed with them ever since because they actually had better products. Intel was the king in both performance and power efficiency until that Zen release, so I really don’t know where that advice would’ve come from.

      As for Hitachi and Western Digital, WTF? Hitachi hasn’t been a thing for well over a decade since they sold their HDD business to WD, and WD is generally as reliable or better than its competition. It sounds like you were impacted by a couple failures (probably older drives?) and made a decision based on that. If you look at Backblaze stats, there’s not a huge difference between manufacturers, just a few models that do way worse than the rest.

    • Passerby6497@lemmy.world · 18 hours ago

      Similar reasons I hate Hitachi and Western Digital hard drives. They always fucking fail.

      You misspelled Seagate.

      My WD drives have been great, but my Seagates failed multiple times, causing data loss because I wasn’t properly protecting myself.

      • frongt@lemmy.zip · 17 hours ago

        All manufacturers have bad batches. Diversify and keep backups.

        • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 16 hours ago

          Seagate has more than bad batches. When every single one of their 1TB-per-platter Barracuda drives has high failure rates, that’s a design or long-term production issue.

        • Passerby6497@lemmy.world · 17 hours ago

          How likely is it that I got 4 to 5 bad batches over the space of as many years?

          RAID and offline backups these days; I eventually learned my lessons. One of which is: stay away from Seagate.

          • frongt@lemmy.zip · 16 hours ago

            Within the realm of possibility, especially if you treat them harshly (lots of start-stop cycles, low airflow, high temps). Backblaze collects and publishes data, and the AFR for Seagate is slightly higher than for other manufacturers, but not what I’d consider dangerous.
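
            For reference, Backblaze’s AFR figure is failures per drive-year, so fleets of different sizes are comparable. The arithmetic, with made-up numbers:

            ```python
            def annualized_failure_rate(failures: int, drive_days: int) -> float:
                """AFR as a percentage: failures per drive-year of operation."""
                return failures / drive_days * 365 * 100

            # Made-up example: 60 failures across 10,000 drives running a full year.
            print(f"{annualized_failure_rate(60, 10_000 * 365):.2f}% AFR")  # -> 0.60% AFR
            ```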

  • postall@lemmy.world · 21 hours ago

    Ah ha ha. I had my second Ryzen in a row die yesterday. No load, no overclocking, just in the middle of coding. Fack AMD and fack Intel. I’m gonna go buy a Mac Mini.

    • sugar_in_your_tea@sh.itjust.works · 17 hours ago

      Probably a bad motherboard then. CPUs generally don’t just die unless there’s some kind of excess voltage or something. If you weren’t aggressively overclocking, that sounds like the mobo isn’t doing a great job of controlling voltage. It could also be a bad PSU; the CPU is the last thing I’d suspect on a second failure.

      • postall@lemmy.world · 16 hours ago

        The boards are different, an Asus and an ASRock; the power supplies too, a cheap Zalman and an expensive DeepCool. It doesn’t matter. It’s not supposed to happen! And it never happened before they started doing some wild voltage controls.

    • Pieisawesome@lemmy.dbzer0.com · 16 hours ago

      CPUs don’t die very often without something being very wrong with your system.

      Could be the PSU or motherboard