I’d probably just warranty the CPU and assume it was a defect instead of blaming the entire company.
But yeah, AMD is the better choice for everything atm except power-efficient x86 laptop chips.
I honestly don’t get why anyone would have bought an Intel in the last 3-4 years. AMD was just better on literally every metric.
If your use case benefited from Quicksync then Intel was a clear choice.
Older Intel CPUs are the only ones that can play 4K Blu-rays directly on the player itself, not just rip them to a drive. Very niche use case, but it’s one I can think of.
Idle power is the only thing they’re good at, but for a home server a used older CPU is good enough.
Was that even true for comparable CPUs? I feel like this was only for their N100s etc.
Nah, all the AM4 CPUs have abysmal idle power. AM5 got a little better as far as I know, but the Infinity Fabric was a nightmare for idle power.
Well I concede, I guess there was one metric they were better at. Doing absolutely nothing.
Just out of interest: why did you buy Intel in the first place? I don’t know of many use cases where Intel is the superior option.
It was OK until he said the AMD chip consumed more power. It’s an X3D chip, so that’s pretty much a given; if he’d gone for a non-X3D chip he’d have saved quite a bit of power, especially at idle. Plus he seems to use an AMD chip like an Intel chip, with little or no idea how to tweak its power usage down.
I’ve got a 9700X and it absolutely rips at only 65W
Same here, that thing fucks and stays very cool doing it
As I said - a bit of knowledge is a dangerous thing
"Do you need to transcode video?
Then leave Intel the fuck alone."
Been my rule for 20 years, and it’s worked well so far.
It’s odd: their GPUs are doing fine in a market they’re young in, but their well-established CPU business is cratering.
Business majors suck.
Their GPU situation is weird. The gaming GPUs are good value, but I can’t imagine Intel makes much money from them due to the relatively low volume yet relatively large die size compared to competitors (the B580 has a die nearly the size of a 4070’s despite competing with the 4060). Plus they don’t have a major foothold in the professional or compute markets.
I do hope they keep pushing in this area still, since some serious competition for NVIDIA would be great.
they always did, even back in college.
Yeah, Quick Sync is the only reason I put an Intel in my NAS.
Looks like they didn’t have adequate cooling for their CPU and killed it… then replaced it without correcting the cooling. If your CPU hits 3 digits, it’s not cooled properly.
If your CPU hits 3 digits, then throttling isn’t working properly, because it should kick in before it hits that point.
The article (or one of the linked ones) says the max design temperature is 105°C, so it doesn’t throttle until it hits that.
Which makes me think it should be able to sustain operating at that temperature. If not, Intel fucked up by speccing them too high.
I’d expect it to still throttle before getting to 105C, and then adjust to maintain a temp under 105C. If it goes above 105C, it should halt.
Then you misunderstand the spec. That’s the max operating temperature, not the thermal protection limit. It throttles at 105 so it doesn’t hit the limit at 115 or whatever and shut down. I can’t find a detailed spec sheet that might give an exact figure.
The chip needs to account for thermal runaway, so I’d expect it to throttle before reaching the max operating temperature and then adjust so it stays within that range. So it should downclock a little around 90°C or whatever, then increase as needed as it approaches 105°C or whatever the max operating temp is. If it goes above that temp, it should aggressively throttle or halt, depending on how far above it went and how quickly.
I’d expect it to throttle before reaching max operating temperature
Again, you misunderstand. The max operating temperature is where Intel has stated that the CPU can safely operate for extended periods of time, including accounting for situations like thermal runaway (though ideally they engineer the chip so that doesn’t happen in the first place).
If that situation does occur, the chip attempts to throttle at 105, and if that fails it presumably halts at whatever the protection threshold is, before it hits the actual damage point, as I said.
Interesting, so it only throttles at that temp? That’s a bit different from how AMD handles it IIRC, which I think stops boosting around 80°C or so and throttles around 90°C, with the max operating temp closer to 100°C.
My Intel Mac’s CPU (i5-5250U) throttles to maintain 105°C.
Why? It’s designed to run up to 105°C.
I think it was when AMD’s 7000 series CPUs were running at 95°C and everyone freaked out; AMD came out and said the CPUs are built to handle that load 24/7, 365, for years on end.
And it’s not like this is new to Intel. Intel laptop CPUs have been doing this for a decade now.
CPUs should throttle as they approach the limit to prevent thermal runaway. As they get closer to that limit, they should adjust the frequency in smaller and smaller increments, so the temperature changes stay small as they settle at that temp.
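The stepped approach described above can be sketched as a simple control loop. This is purely illustrative pseudologic in Python; the thresholds, step sizes, and frequency cap are made-up values for demonstration, not any vendor’s actual firmware behavior:

```python
# Illustrative sketch of stepped thermal throttling (NOT real firmware logic).
# All thresholds and step sizes below are assumed values for demonstration.

BOOST_CEILING_C = 90    # assumed: stop boosting above this temp
THROTTLE_C = 105        # assumed: max operating temp, throttle hard here
HALT_C = 115            # assumed: emergency shutdown threshold

def next_frequency(freq_mhz: float, temp_c: float) -> float:
    """Pick the next core frequency based on the current temperature."""
    if temp_c >= HALT_C:
        return 0.0                       # halt: thermal protection tripped
    if temp_c >= THROTTLE_C:
        return freq_mhz * 0.80           # aggressive throttle
    if temp_c >= BOOST_CEILING_C:
        # Scale the downclock step with how close we are to the limit,
        # so adjustments shrink as the temperature nears THROTTLE_C.
        overshoot = (temp_c - BOOST_CEILING_C) / (THROTTLE_C - BOOST_CEILING_C)
        return freq_mhz * (1.0 - 0.10 * overshoot)
    return min(freq_mhz * 1.02, 5000.0)  # cool enough: boost toward a cap

print(next_frequency(4500, 85))   # below 90°C: boosts
print(next_frequency(4500, 100))  # between 90 and 105°C: gentle downclock
print(next_frequency(4500, 110))  # above 105°C: hard throttle
print(next_frequency(4500, 120))  # above halt threshold: 0 (shutdown)
```

Real CPUs do this in hardware with far more inputs (power, current, per-core sensors), but the shape of the policy is the same: small corrections early, drastic ones only near the protection limit.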
105°C is the max operating temperature. It’s not going to run away the second it hits 106°C.
Your CPU starts throttling at 104°C so it almost never sits at 105°C for long. If it can’t maintain clocks, it drops them until 104°C can mostly be maintained.
If you have an improperly mounted cooler, you could very well get to 105C incredibly quickly, and 115C or whatever the halt temp is shortly after.
laughs in 8700k
When I overclock this old chip (which it was built for) it can hit over 100 with proper cooling. Some chips are hot as fuck. I think this one shuts off at 105.
That’s not the case, and not just for new CPUs, but for old ones too.
My father’s old CPU cooler didn’t make good contact (it got loose in one corner somehow), and the system would throttle (fan at 100% making noise, PC running slow). After I fixed it on one of my visits, the CPU worked fine for years.
The system throttles or even shuts down before any thermal damage occurs (at least when temperatures rise normally).
Pretty much anything with a heat spreader should be impossible to accidentally kill. Bare die? May dog have mercy on your soul.
What if it hits around 90°C during Vulkan shader processing? 😅 Otherwise it idles at like 42–52°C. How’s that? I’m wondering if my cooling is sufficient.
This is an AMD 9950X3D + 9070 XT setup, for reference.
Any way to do Vulkan shader processing on the GPU perhaps, to speed it up?
It’s fine. Modern CPUs boost until they hit amperage, voltage, or thermal constraints; assuming the motherboard isn’t behaving badly, the upper limits for all of those are safe to sit at perpetually.
AMD’s 7000 series CPUs were designed to boost until they hit 95°C, then maintain that temp. The 9000 series behaves differently for boosting, but the silicon can handle it.
Okay cool, then I feel more confident. This is only my second build, ever, so I’m a little bit nervous. I didn’t buy any extra fans apart from the ones that came with my case. But I did get that beasty Noctua gen 2 air cooler, and it seems to be holding so far, even in the hot summer air.
If you’re talking about the Steam feature, you can safely turn it off; any modern hardware running Mesa RADV (the default AMD Vulkan driver in most distros) should be able to process shaders in real time thanks to ACO.
What does it mean to “process shaders in real-time”? Wouldn’t it be objectively faster to process them ahead-of-time? Even if it’s only slightly faster while running the game?
I mean processing takes like a minute or so, so it’s no big deal. I’m just curious for the fun of it, if I can compile it on the GPU. Not sure it’s even possible.
What does it mean to “process shaders in real-time”?
Processing them as they’re loaded, quickly enough that there’s no noticeable frame drop. The usual LLVM-based shader compilers aren’t fast enough for that, but ACO is written specifically to compile shaders for AMD GPUs and makes this feasible.
Pre-compilation would in theory always yield higher 1% lows yes, but it’s not really worth the time hit anymore especially for games that constantly require a new cache to be built or have really long compilation times.
I think the one additional thing Steam does in that step is transcoding videos so they can be played back with Proton’s codec set, but using something like Proton-GE, Proton-CachyOS, or Proton-EM solves this too.
Disclaimer: I don’t know how the deeply technical stuff of this works so this might not be exact.
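The ahead-of-time vs on-load tradeoff being discussed really boils down to caching: either compile every shader up front, or compile each one on first use and cache the result. A toy sketch of the two strategies; `compile_shader` here is a hypothetical stand-in, not a real Mesa/ACO or Steam API:

```python
# Toy illustration of ahead-of-time vs compile-on-first-use shader caching.
# `compile_shader` is a made-up stand-in for the expensive compilation step.

cache: dict[str, str] = {}

def compile_shader(source: str) -> str:
    # Pretend this is the costly part; ACO makes the real thing fast
    # enough to run at load time without noticeable frame drops.
    return f"binary({source})"

def get_shader(source: str) -> str:
    """Compile on first use, then serve from the cache (on-load strategy)."""
    if source not in cache:
        cache[source] = compile_shader(source)  # one-time cost (possible stutter)
    return cache[source]

def precompile(sources: list[str]) -> None:
    """Ahead-of-time strategy: pay the whole cost up front."""
    for s in sources:
        get_shader(s)

precompile(["water", "skybox"])
print(len(cache))            # → 2: both shaders ready before the game asks
print(get_shader("water"))   # → binary(water): cache hit, no compile at play time
```

Pre-compiling wins if you can tolerate the wait and the cache stays valid; compiling on load wins once the compiler is fast enough, since driver or game updates frequently invalidate the cache anyway.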
Huh.
Well, like I said, it only takes a minute or so with half of my 32 threads at 100% (so all of my cores, I guess?). Might as well keep doing it, I suppose.
How far back does that go? My AMD 6000 series GPU probably doesn’t need it, but what about my old laptop APU (3500U?).
I built a new PC recently. All I needed to see were the benchmarks over the last 5 years. There’s currently no contest.
I went from Ryzen 1000 to intel 12000 since I need single threaded performance above all else (CAD). Plus it was a steal of a deal.
If Intel ever sorts out their drivers, or it gets cheap enough, I might go for a 14000 chip, but no further.
I knew Michael Stapelberg from other projects, but I just realized he is the author of the i3 Window Manager. Damn!
Interesting, so it’s not only their recent-ish (either 12th or 13th gen and up, iirc) laptop CPUs that die under normal load.
I’d never heard of Arrow Lake dying like Raptor Lake has. Wild.
Somehow I figured out Intel was shit early on. Been AMD for like 15-20 years. I think it was a combo of childhood shit computers running Intel, and a lot of advice pointing out what garbage it was and not worth the cost for PC builds.
Similar reasons I hate Hitachi and Western Digital hard drives. They always fucking fail.
15-20 years is silly. Intel was the clear leader for a long time before Ryzen in 2017, and arguably a few years after that too.
I was in team AMD in the 2000s for two reasons: price and competition to Intel. Intel had a massive anti-trust loss to AMD around that time, and I wanted AMD to succeed. I stuck with them until Zen was actually competitive and stayed with them ever since because they actually had better products. Intel was the king in both performance and power efficiency until that Zen release, so I really don’t know where that advice would’ve come from.
As for Hitachi and Western Digital, WTF? Hitachi hasn’t been a thing for well over a decade since they sold their HDD business to WD, and WD is generally as reliable or better than its competition. It sounds like you were impacted by a couple failures (probably older drives?) and made a decision based on that. If you look at Backblaze stats, there’s not a huge difference between manufacturers, just a few models that do way worse than the rest.
Similar reasons I hate Hitachi and Western Digital hard drives. They always fucking fail.
You misspelled Seagate.
My WD drives have been great, but my Seagates failed multiple times, causing data loss because I wasn’t properly protecting myself.
All manufacturers have bad batches. Use diversity and keep backups.
Seagate has more than bad batches. When every single one of their 1TB-per-platter Barracuda drives has high failure rates, that’s a design/long-term production issue.
How likely is it that I got 4 to 5 bad batches over the space of as many years?
RAID and offline backups these days; I eventually learned my lessons. One of which is to stay away from Seagate.
Within the realm of possibility, especially if you treat them harshly (lots of start-stop cycles, low airflow, high temps). Backblaze collects and publishes data, and the AFR for Seagate is slightly higher than other manufacturers’, but not what I’d consider dangerous.
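For a rough sanity check on “how likely is it”, you can turn an annualized failure rate (AFR) into the probability of seeing at least one failure across several drives and years. The AFR values below are assumptions for illustration, not Backblaze’s actual figures, and this treats drive-years as independent, which is only a back-of-the-envelope approximation:

```python
# Back-of-the-envelope: probability of at least one drive failure.
# AFR values used here are illustrative assumptions, not real Backblaze data.
# Assumes independent drive-years, which is a simplification.

def p_at_least_one_failure(afr: float, drives: int, years: float) -> float:
    """P(>=1 failure) given an annualized failure rate per drive."""
    p_survive_one_year = 1.0 - afr
    p_all_survive = p_survive_one_year ** (drives * years)
    return 1.0 - p_all_survive

# e.g. 5 drives over 5 years, comparing an assumed 2% AFR vs 5% AFR
print(round(p_at_least_one_failure(0.02, 5, 5), 3))  # roughly 0.4
print(round(p_at_least_one_failure(0.05, 5, 5), 3))  # noticeably higher
```

So even at a fairly ordinary AFR, several failures spread over “4 to 5 batches in as many years” is unlucky but not statistically outlandish.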
Ah ha ha. I had my second Ryzen in a row die yesterday. No load, no overclocking, just in the middle of coding. Fack AMD and fack Intel. I’m gonna go buy a Mac Mini.
Those M chips ARE pretty amazing.
Probably a bad motherboard then. CPUs generally don’t just die unless there’s some kind of excess voltage or something. If you weren’t aggressively overclocking, it sounds like the mobo isn’t doing a great job of controlling voltage. It could also be a bad PSU; the CPU is the last thing I’d suspect on the second failure.
The boards are different, Asus and ASRock; the power supplies too, a cheap Zalman and an expensive DeepCool. It doesn’t matter. It’s not supposed to happen! And it had never happened before, until they started doing these wild voltage controls.
CPUs don’t die very often without something being very wrong with your system.
Could be the PSU or motherboard