cross-posted from: https://ani.social/post/16779655

| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/TB/s | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141 GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96 GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32 GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD RADEON 9070XT | 16 GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD RADEON 9070 | 16 GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD RADEON 9060XT | 16 GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |

This post is part “hear me out” and part asking for advice.

Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least going by these numbers) to use gaming GPUs instead, either through a Frankenstein build of PCIe switches or over a high-bandwidth network.
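
The derived columns are just price divided by each raw figure; here's a minimal Python sketch to recompute them, or to plug in your own local prices (the ones above are snapshots and will drift):

```python
# Recompute the €/GB, €/TB/s and €/TFLOP16 columns from the table above.
# Prices are point-in-time snapshots; swap in your own quotes.
cards = [
    # (name, vram_gb, price_eur, bandwidth_tbs, tflop16)
    ("NVIDIA H200 NVL",               141, 36284, 4.89,   1671.0),
    ("NVIDIA RTX PRO 6000 Blackwell",  96,  8450, 1.79,    126.0),
    ("NVIDIA RTX 5090",                32,  2299, 1.79,    104.8),
    ("AMD RADEON 9070XT",              16,   665, 0.6446,   97.32),
    ("AMD RADEON 9070",                16,   619, 0.6446,   72.25),
    ("AMD RADEON 9060XT",              16,   382, 0.3223,   51.28),
]

for name, vram, price, bw, tflop in cards:
    print(f"{name:31} {price / vram:6.0f} €/GB"
          f" {price / bw:7.0f} €/TB/s {price / tflop:6.1f} €/TFLOP16")
```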

So my question is: has somebody built a similar setup, and what has their experience been? What performance hit should one expect from the overhead, and can it be made up for by simply having far more raw performance for the same price?
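
For context on what such a setup would actually run: the usual way to spread one model across several cards is tensor parallelism. Below is a minimal sketch using vLLM; the model name and GPU count are placeholders, and on Radeons this assumes a ROCm build of vLLM.

```python
# Tensor-parallel inference sketch: shard one model across 4 GPUs in one box.
# Assumes a CUDA or ROCm build of vLLM and a model that fits in pooled VRAM;
# the model name and tensor_parallel_size below are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model
    tensor_parallel_size=4,  # split each layer's weights across 4 GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Why are datacenter GPUs so expensive?"], params)
print(outputs[0].outputs[0].text)
```

Tensor parallelism synchronizes activations between cards at every layer, so the PCIe switches or network links are exactly where the overhead lands; that interconnect, not raw TFLOPs, usually decides whether such a frankenbuild keeps up.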

  • humanspiral@lemmy.ca · 1 month ago

    For AMD there are the 7900 XT at 20 GB and the 7900 XTX at 24 GB; the 4090 and 3090 are 24 GB. The AMD ones might have similar $/GB and $/TFLOP to the 9070 XT.

  • ZombiFrancis@sh.itjust.works · 2 months ago

    I spent chunks of 2023 and 2024 investigating and testing image gen models after a cryptobro coworker kept talking about it.

    I rigged up an old system and ran it locally to see wtf these things are doing. Honestly, producing slop at 5 seconds per image vs. 5 minutes is meaningless in terms of value if 0% of the slop can be salvaged. And still, a human has to figure out what to do with the best candidates.

    In fact, at a certain speed it begins to work against itself, as no one can realistically analyze AI-generated output as fast as it is produced.

    Conclusion: AI is mostly worthless. It just forces you to accept that human effort is the only thing with intrinsic value. And it’s as tough to get value out of AI as it is to put any in.

    And that’s looking past all the other gargantuan problems with AI models.

  • hendrik@palaver.p3x.de · 2 months ago (edited)

    Well, I wouldn’t call them a “scam”. They’re meant for a different use case. In a datacenter, you also have to pay for rack space and all the servers which accommodate the GPUs. You can either pay for roughly 32 times as many servers full of Radeon 9060XTs, or you buy H200 cards. Sure, you’ll pay about 3x as much for the cards themselves, but you’ll save on the number of servers and everything that comes with them: hardware cost, space, electricity, air-con, maintenance… Less interconnect also makes everything way faster… (rough numbers in the sketch below)

    Of course, different rules apply at home. And it depends a bit on how many cards you want to run, what kind of workload you have… and whether you’re fine with AMD or need CUDA…
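
    Rough numbers behind the 32x and 3x figures, using the table above; a back-of-the-envelope sketch that counts cards only, with servers, power, and interconnect deliberately ignored:

    ```python
    # Back-of-the-envelope: 9060XTs needed to match one H200 NVL on FP16
    # compute, and what those cards alone would cost (table prices; servers,
    # power and interconnect ignored).
    h200_tflop16, h200_price = 1671.0, 36284
    r9060_tflop16, r9060_price = 51.28, 382

    n_cards = h200_tflop16 / r9060_tflop16   # ~32.6 cards
    cards_cost = n_cards * r9060_price       # ~12450 EUR
    print(f"{n_cards:.0f} cards at {cards_cost:.0f} EUR vs one H200 at "
          f"{h200_price} EUR -> ~{h200_price / cards_cost:.1f}x card cost")
    ```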