• wewbull@feddit.uk · 1 day ago

    I think it’s different. The fundamental operation of all these models is multiplying big matrices of numbers together. GPUs are already optimised for this. Crypto was trying to make the algorithm fit the GPU rather than it being a natural fit.
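
    To make that concrete, here's a toy JAX sketch (my own illustration; the shapes and parameter names are made up) of a transformer-style block. Every heavy step in it is a dense matrix multiply, which is exactly the operation GPU hardware is already built around:

    ```python
    # Toy sketch: essentially all the compute in a transformer block reduces to
    # dense matmuls (@), which XLA lowers straight onto the GPU's matrix units.
    import jax
    import jax.numpy as jnp

    def attention_and_mlp(x, Wq, Wk, Wv, Wo, W1, W2):
        # Each expensive step is a matmul; the softmax/GELU are cheap by comparison.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))
        x = scores @ v @ Wo
        return jax.nn.gelu(x @ W1) @ W2

    key = jax.random.PRNGKey(0)
    d, h = 512, 2048
    x = jax.random.normal(key, (128, d))
    params = [jax.random.normal(jax.random.fold_in(key, i), s)
              for i, s in enumerate([(d, d), (d, d), (d, d), (d, d), (d, h), (h, d)])]
    y = jax.jit(attention_and_mlp)(x, *params)  # runs as a chain of native GPU matmuls
    ```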

    With FPGAs you take roughly a 10x hit in clock speed, but you can implement precisely the algorithm you want. ASICs then give you the clock speed back.

    GPUs are already ASICs that implement the ideal operation for ML/AI, so FPGAs would be a step backwards.