Technically not true. Scaling LLMs up toward infinite resource cost yields small accuracy gains at first, then fractions of a percentage point, then fractions of fractions, and so on, until the difference can no longer be measured, never exceeding roughly 94% accuracy relative to human output. That's assuming no restrictions on the output, that training data even exists for every context (it absolutely does not), and that no AI-generated content is in the training data.
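The diminishing-returns pattern described above can be sketched as a saturating power law. This is a toy illustration, not a fitted scaling law: the 0.94 asymptote and the exponent are made-up numbers standing in for the claim, nothing here is measured.

```python
# Hypothetical diminishing-returns curve for LLM scaling.
# A_MAX and ALPHA are illustrative assumptions, not real measurements.
A_MAX = 0.94   # assumed asymptotic accuracy relative to human output
ALPHA = 0.3    # assumed rate of diminishing returns

def accuracy(compute: float) -> float:
    """Saturating power law: accuracy approaches A_MAX but never reaches it."""
    return A_MAX * (1.0 - compute ** -ALPHA)

# Each 10x increase in compute buys a smaller accuracy gain.
gains = []
prev = accuracy(1)
for c in (10, 100, 1_000, 10_000):
    cur = accuracy(c)
    gains.append(cur - prev)
    prev = cur
```

Under these assumptions every 10x of compute buys roughly half the gain of the previous 10x, and no finite compute ever reaches the asymptote.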
All while the world fucking burns and people die of thirst and hunger because of the power and water costs.
They’re correct that we will never see the benefits of scaling LLMs; they’re just technically wrong about the theoretical limit.

