I would pick up an inference GPU to run models locally. I can see a benefit in that, especially if they're going cheap.


Read his multipart series on arguing with AI boosters. He covers silly arguments like this one.
Also, to paraphrase Cory Doctorow: you're not going to keep breeding these mares to run faster and then one day they'll birth a locomotive…


Does this mean Thunderbolt works yet? It's the only reason I can't switch to Asahi on M1.


I guess kids in Arizona won’t know how many R’s are in strawberry then…
Honestly, memes on Lemmy are orders of magnitude better than the ones I see on Reddit. If growing the user base makes the memes more lame, I don't know if I support it.