When two of the most influential people in AI both say that today’s large language models are hitting their limits, it’s worth paying attention. In a recent long-form interview, Ilya Sutskever – co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving…
Technically not true. Scaling LLMs toward infinite resource cost yields small accuracy gains at first, measured in percentage points, then in fractions of a percentage point, then in fractions of fractions, and so on until the difference can no longer be measured, asymptotically approaching but never exceeding roughly 94% accuracy relative to human output. And that's assuming no restrictions on the output, that training data even exists for every context (it absolutely does not), and that no AI-generated content is in the training data.
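To make the diminishing-returns point concrete, here's a minimal sketch assuming a generic saturating power law; the 0.94 ceiling, the constant, and the exponent are hypothetical placeholders for illustration, not measured values:

```python
# Toy illustration of diminishing returns from scaling, assuming a
# saturating power law: accuracy = CEILING - K * compute ** (-ALPHA).
# All three constants below are hypothetical placeholders.

CEILING = 0.94   # assumed asymptotic ceiling relative to human output
K = 0.5          # assumed scale constant
ALPHA = 0.3      # assumed power-law exponent

def accuracy(compute: float) -> float:
    """Hypothetical accuracy at a given compute budget."""
    return CEILING - K * compute ** -ALPHA

prev = accuracy(1)
for exponent in range(1, 7):
    compute = 10 ** exponent            # each step costs 10x more resources
    acc = accuracy(compute)
    print(f"10^{exponent}x compute: {acc:.2%} (gain {acc - prev:.2%})")
    prev = acc
```

Each tenfold increase in resources buys a smaller gain than the last, and the curve never crosses the ceiling no matter how far you push it.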
All while the world fucking burns and people die of thirst and hunger because of the power and water costs.
They’re correct that we will never see meaningful further benefits from scaling LLMs, just technically wrong about the theoretical limit.