LLMs Inherit Bias: Long Before Humans Write a Prompt
Many people believe bias in AI comes from users. They say: “If people ask bad questions, AI gives bad answers.” That sounds reasonable, but it’s not the full truth. Bias enters AI much earlier, before any prompt is written. Let’s understand how, in a simple way.

Where Do LLMs Learn From?

LLMs learn from data. That data comes from:

- Books
- News articles
- Websites
- Social media
- Public records
- Old opinions and new ones

This data is written by humans, and humans are not neutral. We have:

- Beliefs
- Power systems
- Blind spots
- Fears
- Preferences

So when AI learns from us, it also learns our bias.

Bias Is in the Dataset

Imagine teaching a child using only one type of book. If all the books:

- Praise one group
- Ignore another group
- Repeat the same ideas

then the child’s view of the world becomes narrow.

LLMs are similar. If some voices appear more often in the data, AI thinks those voices are “normal” or “correct.” If some people are missing, AI doesn’t even know they exist (a tiny sketch at the end of this section makes this concrete).

This is not evil. It’s ...
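To see how frequency in data becomes “normal” for a model, here is a minimal sketch. The corpus, the sentences, and the bigram counter are all made-up illustrations, not a real LLM, but the mechanism, counting what appears most often, is the same idea at a much smaller scale.

```python
from collections import Counter, defaultdict

# A deliberately skewed toy corpus: one phrasing appears far more often.
# These are hypothetical example sentences, not real training data.
corpus = [
    "doctors are men",
    "doctors are men",
    "doctors are men",
    "doctors are women",
]

# Count which word follows each word across the corpus (a tiny bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

# Ask the "model" what usually comes after "are".
counts = follows["are"]
total = sum(counts.values())
for word, count in counts.most_common():
    print(f"P({word!r} | 'are') = {count / total:.2f}")

# Prints:
#   P('men' | 'are') = 0.75
#   P('women' | 'are') = 0.25
# The skew in the data becomes the skew in the model. And anything the
# corpus never mentions gets zero probability: for the model, whatever is
# missing from the data simply does not exist.
```

A real LLM is vastly more complex than this counter, but the lesson carries over: the model’s sense of “normal” is whatever its training data repeated most.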