The massive computer clusters powering artificial intelligence consume vast quantities of water and energy to answer the world’s queries, but how is Big Tech redressing the balance?
Writing a 100-word email using ChatGPT (GPT-4, latest model) consumes one 500 ml bottle of water and uses 140 Wh of energy, enough for seven full charges of an iPhone Pro Max.
That’s what I always thought when reading this and other articles about the estimated power consumption of GPT-4.
Run a decent 7B LLM on consumer hardware like the Steam Deck and you get your email in a minute with the fans barely spinning up.
Then I read that GPT-4 is supposedly a 1760B model. (https://en.m.wikipedia.org/wiki/GPT-4#Background)
I don’t know exactly how energy usage scales with model size, but it seems plausible that we’re talking orders of magnitude above a typical local LLM.
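As a rough sanity check, here’s a back-of-envelope sketch in Python assuming per-token energy scales linearly with parameter count and that the model is dense (GPT-4 is rumored to be a mixture-of-experts, so active parameters per token may be far fewer). The 15 W / 60 s local figures are my own guesses for a Steam-Deck-class device, not measurements:

    # Back-of-envelope: energy ratio if per-token cost scales linearly
    # with parameter count. All numbers are assumptions, not measurements.
    local_params = 7e9      # a typical local 7B model
    gpt4_params = 1760e9    # rumored GPT-4 size (dense-equivalent)

    ratio = gpt4_params / local_params
    print(f"naive scaling factor: {ratio:.0f}x")  # ~251x

    # Assume the 7B model draws ~15 W for ~60 s per email
    # (roughly a Steam Deck under light load), i.e. ~0.25 Wh:
    local_wh = 15 * 60 / 3600
    print(f"local: {local_wh:.2f} Wh, naive GPT-4 estimate: {local_wh * ratio:.0f} Wh")

Under those (very shaky) assumptions the naive estimate comes out around 63 Wh, which at least lands within a factor of a few of the article’s 140 Wh figure.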
Considering that the email from the local LLM will be good enough 99% of the time, GPT-4 may just be horribly inefficient, in order to score higher on some synthetic benchmarks?
Computational demands scale aggressively with model size.
And if you want a response back in a reasonable amount of time, you’re burning a ton of power to get it. These models are not fast at all.
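To see why, here’s a minimal sketch (all hardware numbers are assumptions): at batch size 1, decoding a dense model is memory-bandwidth bound, since every generated token has to stream the full weights through the chip once, so tokens per second is roughly bandwidth divided by model size in bytes:

    # Rough upper bound on single-stream decode speed for a dense model:
    # each token reads all weights once, so speed ~ bandwidth / model size.
    def tokens_per_sec(params, bandwidth_gb_s, bytes_per_param=2):
        model_bytes = params * bytes_per_param  # 2 bytes/param assumes fp16
        return bandwidth_gb_s * 1e9 / model_bytes

    # 7B model on a Steam-Deck-class APU (~88 GB/s shared LPDDR5):
    print(f"7B, local APU: ~{tokens_per_sec(7e9, 88):.0f} tok/s")
    # 1760B dense model on a single H100-class GPU (~3350 GB/s HBM):
    print(f"1760B, one GPU: ~{tokens_per_sec(1760e9, 3350):.2f} tok/s")

And a 1760B model at fp16 is ~3.5 TB of weights anyway, far more than any single accelerator holds, so in practice it gets sharded across many GPUs that all draw power in parallel just to reach acceptable latency.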