When recognizing faces and emotions, artificial intelligence (AI) can be biased, such as classifying white people as happier than people from other racial backgrounds. This happens because the data used to train the AI contained a disproportionate number of happy white faces, leading it to correlate race with emotional expression. In a recent study published in Media Psychology, researchers asked users to assess such skewed training data, but most users didn't notice the bias — unless they were in the negatively portrayed group.
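A minimal sketch of the mechanism the article describes, assuming nothing about the study's actual data or models: if happy faces from one group are undersampled during data collection, even a simple logistic regression learns to use group membership as a proxy for emotion. All names, numbers, and the scikit-learn setup here are illustrative assumptions, not the researchers' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B (hypothetical)
smile = rng.normal(size=n)           # a genuine emotion signal

# Ground truth: happiness depends only on the smile signal, never on group.
happy = (smile + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Skewed collection: keep every group-A face and every unhappy face,
# but only ~30% of the happy group-B faces.
keep = (group == 0) | (happy == 0) | (rng.random(n) < 0.3)
X, y = np.column_stack([smile, group])[keep], happy[keep]

model = LogisticRegression().fit(X, y)
print("weight on smile:", round(model.coef_[0, 0], 2))
print("weight on group:", round(model.coef_[0, 1], 2))  # negative: group B read as less happy
```

Running this, the weight on group comes out clearly negative even though the true label never depended on group; the skewed sampling alone creates the correlation the model learns.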
That's kind of the point, right? It wasn't an AI bias; it was a user bias that the AIs picked up through their training data. And users aren't identifying it because it was a user bias to begin with.