

Featured

LLMs Are Like Humans: Training Shapes What They Say

Large Language Models (LLMs) often feel objective and neutral. But in reality, they are deeply shaped by what they are exposed to, just like humans.

Think about a human being. A person raised hearing only one story, one ideology, one version of history will naturally:

- Repeat those views
- Defend them confidently
- Filter out alternatives
- Believe they are being “truthful”

Not because they are dishonest, but because their training was narrow. LLMs work the same way.

Training Is Influence, Not Truth

An LLM does not “discover” truth. It absorbs patterns from its training data. If you train an LLM mostly on:

- Information favoring a particular community
- Narratives supporting one country
- Text aligned with a specific ideology or belief system

then the model will:

- Echo those perspectives
- Frame answers to support them
- Marginalize or ignore alternatives

It won’t argue. It won’t question. It will comply, fluently.

Humans and LLMs Share This Vulnerability

Humans call it: Conditioning, Soc...
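To make the “training is influence, not truth” point concrete, here is a minimal sketch, not from the post itself: a toy bigram counter trained on a hypothetical one-sided corpus. The corpus, function names, and example sentences are all illustrative assumptions. Real LLMs are vastly larger, but the same principle holds: the most likely completion is simply the dominant pattern in the training data, not a verified fact.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count next-word frequencies; the model 'knows' only these patterns."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def complete(model, word):
    """Return the most frequent continuation seen in training."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else "<unknown>"

# A deliberately one-sided toy corpus: every sentence frames "policy" positively.
biased_corpus = [
    "the policy works",
    "the policy works well",
    "the policy succeeds",
]

model = train(biased_corpus)

# The model fluently repeats its training pattern; it has never seen dissent.
print(complete(model, "policy"))  # -> "works" (majority pattern, not truth)
```

Feed the same code a corpus where every sentence says the policy fails, and the completion flips just as confidently; the model does not argue or question either way.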

Latest Posts

What If We Train LLMs on WhatsApp Forwards and Fake News?

To Understand What Tamil Nadu Is Planning with Sarvam AI, You Must First Understand LLMs

LLMs Can Speak. Humans Must Decide.

Humans Are the Original LLMs

India Will Truly Develop When We Start Inventing—Not Just Taking Aarti of Inventions

The Hidden Machine: How Governments and Big Systems Keep People Sick for Profit

Checkmate for Sridhar Vembu: The $1.7 Billion Bond That Could Expose the Truth Behind Zoho's Billion-Dollar Claims