Jeremy Price wanted to know whether new AI chatbots, including ChatGPT, were biased on issues of race and class, so he devised an unusual experiment to find out.
Price, an associate professor of technology, innovation and pedagogy in urban education at Indiana University, asked three leading chatbots — ChatGPT, Claude and Google Bard (now called Gemini) — to tell a story about two people meeting and learning from each other, including details like the people’s names and circumstances. He then shared the stories with experts on race and class and asked them to code those narratives for signs of bias.
Because the chatbots are trained on reams of data pulled from the internet that reflect society’s demographics, he expected the experts would find signs of bias.
“The data that goes into chatbots and the way society talks about what learning should be is very white-centric,” he says. “It’s a mirror that reflects our society.”
But his bigger idea is to experiment with building tools and strategies to guide these chatbots toward reducing bias based on race, class and gender. One possibility, he says, is to develop an additional chatbot that reviews an answer from a tool like ChatGPT for bias before it is sent to the user.
“If you put another agent on its shoulder, it can stop the language model when it’s generating text and say, ‘Hold on a second. Is what I’m about to output now biased? Is this going to be useful and helpful to the person I’m chatting to?’ If the answer is yes, then keep outputting it. If the answer is no, then you need to rework it so that it is,” he says.
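In software terms, the check Price describes resembles a reviewer step layered on top of a language model. The sketch below is purely illustrative and is not Price’s implementation: the functions generate_draft, review_for_bias and revise_draft are hypothetical placeholders standing in for calls to whichever models a developer might choose.

```python
# Illustrative sketch of a "reviewer agent" loop: a second model checks a draft
# reply for bias before it reaches the user, and asks for a rework if needed.
# All three helper functions are hypothetical placeholders, not real APIs.

MAX_REVISIONS = 3


def generate_draft(prompt: str) -> str:
    """Placeholder for a call to the primary language model."""
    raise NotImplementedError


def review_for_bias(draft: str) -> tuple[bool, str]:
    """Placeholder for a reviewer model; returns (is_biased, explanation)."""
    raise NotImplementedError


def revise_draft(draft: str, feedback: str) -> str:
    """Placeholder for asking the primary model to rework a flagged draft."""
    raise NotImplementedError


def respond(prompt: str) -> str:
    draft = generate_draft(prompt)
    for _ in range(MAX_REVISIONS):
        biased, feedback = review_for_bias(draft)
        if not biased:
            return draft  # the reviewer approves, so the reply goes to the user
        draft = revise_draft(draft, feedback)  # otherwise rework and re-check
    return draft  # stop after a few passes rather than loop indefinitely
```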
He hopes that tools like this can help people become more aware of and counter their own biases.
And without such interventions, he worries, AI could make the problem even worse.
“We should continue to use generative AI,” he argues, “but we have to be very careful and aware as we move forward.”
Hear Price discuss his research and findings in full on this week’s EdSurge Podcast.
Listen to the episode on Spotify or Apple Podcasts, or watch it in the player below.