LLMs mimic understanding but think bottom-up, unlike humans. Explore why they’re more than autocomplete and why the future is human–AI collaboration, not replacement.
I prefer this phrasing because it doesn't dismiss the evident "understanding" these models display. The prediction task seems to equip the model with non-trivial latent capabilities.
Hate to keep banging on the Chinese room thought experiment, but that's exactly what this is. It's not evident "understanding"; it's the appearance of understanding produced by particularly spicy autocomplete. Fools gonna keep getting fooled.