They have knowledge: the probability of words and phrases appearing in a larger context of other phrases. Their knowledge of language patterns is probably far more extensive than most humans'. That's why they're so good at coming up with texts for a wide range of prompts. They know how to sound human.
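To make that concrete, here is a minimal sketch of the kind of "knowledge" this means: a language model assigns a probability to each possible next token given the preceding text, nothing more. The choice of GPT-2 via the Hugging Face transformers library is only an illustrative assumption, not a claim about what any particular chatbot runs.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Small, publicly available model used purely for illustration.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The capital of France is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, given the context so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for p, tok_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(tok_id):>10}  {p.item():.3f}")
```

The model will happily rank plausible continuations, but the ranking reflects patterns in its training text, not any check against the world.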
That in itself is a huge achievement.
But they don't know the semantics, the world-context outside of the text, or why it is critical that a particular part of the text refer to a source that actually exists.
The pitfall here is that users might not be aware of this distinction, and even if they are, they might not have the knowledge to verify the output themselves. It seems obvious that this machine is smart enough to understand me and respond appropriately, but we must be aware of just which kind of smart we're talking about.