Bias (in AI)
When AI models reflect or amplify unfair, unbalanced, or inaccurate patterns in their training data.
Help me explain to...
K–5th
Bias means something isn’t fair. If an AI only sees pictures of one kind of dog, it might think all dogs look like that. It can make mistakes because it hasn’t seen enough different examples.
6–8th
AI bias happens when the computer makes unfair decisions because it was trained on limited or one-sided data. For example, if it only learns from pictures of adults, it might struggle to recognize kids.
9–12th
Bias in AI occurs when models inherit societal, cultural, or historical inequalities from the data they’re trained on. It’s a major issue in hiring tools, facial recognition, and education tech, and it requires critical examination and diverse data practices.
Expeditions
K–5th
Show photos of different animals but only give labels for one kind. Ask students to guess what the others are. Then talk about how AI can be “confused” if it hasn’t seen enough variety.
6–8th
Give students a small dataset with clear bias (e.g., only photos of one type of object or person). Ask them to make predictions based on it. Discuss what was missing and how that affected their results.
9–12th
Have students research a real-world example of AI bias (e.g., facial recognition misidentifying people of color). Lead a discussion on the ethical implications and how to reduce bias through better design.
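For classes that code, the dog example from the K–5 explanation can be sketched in a few lines of Python. This is a toy illustration, not a real AI system: the dataset and the "model" (which simply predicts whatever label it saw most often) are made up to show how a lopsided dataset produces lopsided answers.

```python
from collections import Counter

# Made-up training data: 9 golden retrievers, 1 poodle.
# The imbalance is the "bias" baked into the dataset.
training_labels = ["golden retriever"] * 9 + ["poodle"]

def train_majority_model(labels):
    # A toy "model": it just predicts the most common label it saw.
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda example: most_common_label

model = train_majority_model(training_labels)

# The model calls every new dog a golden retriever,
# even when the correct answer is "poodle".
print(model("a curly-haired small dog"))  # -> golden retriever
```

Students can then fix the bias themselves by adding more poodles to the training data and watching the prediction change, which mirrors the "diverse data practices" discussed in the 9–12th explanation.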