• wise_pancake@lemmy.ca · 4 hours ago

    This is my use case exactly.

    I do a lot of analysis locally, and this is more than enough for my experiments and research. 64-96 GB of VRAM is exactly the window I need. There are analyses I’ve had to let run for 2 or 3 days at 100% CPU/GPU, and dealing with that in the cloud is annoying.
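
    As a rough back-of-the-envelope for why that window fits 70B-class models (the quantization widths and the flat overhead allowance below are my assumptions, not measurements from this machine):

    ```python
    # Rough VRAM estimate for local LLM inference; the sizes and
    # bit-widths below are illustrative assumptions.
    def est_vram_gb(params_b, bits_per_weight, overhead_gb=8.0):
        """Weights plus a flat allowance for KV cache / activations."""
        weights_gb = params_b * bits_per_weight / 8  # billions of params -> GB
        return weights_gb + overhead_gb

    for name, params_b, bits in [
        ("70B @ ~4.5 bpw (Q4-ish)", 70, 4.5),
        ("70B @ ~8.5 bpw (Q8-ish)", 70, 8.5),
        ("32B @ ~8.5 bpw (Q8-ish)", 32, 8.5),
    ]:
        print(f"{name}: ~{est_vram_gb(params_b, bits):.0f} GB")
    # ~47 GB, ~82 GB, ~42 GB: 64-96 GB covers 70B-class models with headroom.
    ```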

    Plus, this will replace GH Copilot for me. It’ll run voice models, and I have diffusion model experiments (not just image models) I plan to run that are totally out of reach for me locally right now.
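
    For the Copilot replacement, the usual trick is a local server that speaks the OpenAI API; a minimal sketch, assuming llama.cpp’s llama-server (or Ollama) running on localhost, with the port and model name as placeholders:

    ```python
    # Point the standard OpenAI client at a local OpenAI-compatible
    # server (llama.cpp's llama-server, Ollama, etc.). The port and
    # model name are placeholders for whatever is actually running.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")  # key unused locally

    resp = client.chat.completions.create(
        model="local-coder",  # whatever model the server has loaded
        messages=[{"role": "user",
                   "content": "Write a Python function that reverses a linked list."}],
    )
    print(resp.choices[0].message.content)
    ```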

    This basically frees me from paying for any cloud services in my personal life for the foreseeable future. I’m trying to keep as much as I can local.

    I’ve got tons of ideas I can try out risk-free on this machine, and it’s the most affordable “entry level” solution I’ve seen.

    • brucethemoose@lemmy.world · edited · 4 hours ago

      And even better: “testing” it. Maybe I’m sloppy, but between failed runs, errors, hacks, and hours of “tinkering”, optimizing, or just trying to get something to launch, an A100 spends most of its time sitting idle, and that feels like an utter waste… hence I often don’t do it at all.

      One thing to keep in mind is that the compute power of this thing is nowhere near an A100/H100, especially if you hit a big slowdown with ROCm, so what would take you 2-3 days there could take over a week here. It’d be nice if Framework sold a cheap MI300A, but… shrug.
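
      A quick way to gauge that before committing to a multi-day run is to time a big matmul; a minimal sketch, assuming a ROCm build of PyTorch (which exposes the GPU through the CUDA API):

      ```python
      # Sustained-throughput sanity check before a multi-day run.
      # Assumes a ROCm build of PyTorch (GPU shows up via the CUDA API).
      import time
      import torch

      assert torch.cuda.is_available(), "no GPU visible; check the ROCm install"
      print("HIP runtime:", getattr(torch.version, "hip", None))  # set on ROCm builds

      n = 4096
      a = torch.randn(n, n, device="cuda", dtype=torch.float16)
      b = torch.randn(n, n, device="cuda", dtype=torch.float16)

      for _ in range(3):  # warm-up
          a @ b
      torch.cuda.synchronize()

      iters = 50
      t0 = time.time()
      for _ in range(iters):
          a @ b
      torch.cuda.synchronize()

      tflops = 2 * n**3 * iters / (time.time() - t0) / 1e12
      print(f"~{tflops:.1f} TFLOPS fp16 matmul")  # vs ~312 TFLOPS peak on an A100
      ```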

      • wise_pancake@lemmy.ca · 3 hours ago

        I don’t mind that it’s slower; I’d rather wait than waste time fumbling around on machines billed at multiple dollars per hour.
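
        That tradeoff is easy to make concrete; a back-of-the-envelope sketch, where both the machine cost and the hourly A100 rate are placeholder assumptions:

        ```python
        # Break-even for a local box vs. renting by the hour.
        # Both numbers are placeholder assumptions, not quotes.
        machine_cost = 2000.0  # one-time cost, USD (assumed)
        cloud_rate = 2.00      # USD per A100-hour (assumed)

        breakeven_hours = machine_cost / cloud_rate
        print(f"Break-even at ~{breakeven_hours:.0f} GPU-hours")  # ~1000 hours

        # A single 2-3 day run at 100% is 48-72 hours, so on the order of
        # fifteen to twenty such runs pay the machine off, idle tinkering aside.
        ```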

        I’ve never tied up an A100 for that long, though; I’ve used them for full workdays and was glad I wasn’t the one paying directly.