No.
I prefer this phrasing because it doesn’t dismiss the evident “understanding” these models display. It seems the prediction task equips them with non-trivial latent capabilities.
Hate to keep banging on the Chinese room thought experiment, but that’s exactly what this is. It’s not evident “understanding”; it’s the appearance of understanding produced by particularly spicy autocomplete. Fools gonna keep getting fooled.