Hallucination Detection

Learn to spot AI hallucinations — fabricated facts, fake citations, and confident-sounding misinformation. Essential skills for verifying AI outputs.

What Are AI Hallucinations?

AI hallucinations occur when a language model generates information that sounds plausible but is factually incorrect, fabricated, or nonsensical. The model doesn't "know" it's wrong — it's generating text based on statistical patterns, not factual understanding. Hallucinations can range from subtle inaccuracies to completely invented facts, citations, or events.

Common Types of Hallucinations

Fabricated citations are among the most dangerous: models generate realistic-looking paper titles, authors, and journal names that don't exist. Confident factual errors occur when models state incorrect information with high certainty. Temporal confusion happens when models mix up dates, attribute events to the wrong time period, or describe historical events inaccurately. Entity conflation merges details from different people, places, or concepts into a single incorrect description.
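Fabricated citations are also the easiest type to check programmatically, because bibliographic databases expose public search APIs. The sketch below queries the real, keyless Crossref REST API for a claimed paper title; the helper name citation_exists and the strict exact-title match are illustrative choices of ours, and it assumes the third-party requests package is installed. A miss doesn't prove fabrication, since Crossref's coverage has gaps, but a hit is strong evidence the citation exists.

```python
import requests

def citation_exists(title: str, rows: int = 5) -> bool:
    """Search Crossref for a paper title; return True if an exact match is found.

    A missing match does not prove fabrication (coverage gaps exist),
    but a hit is strong evidence the citation is real.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    wanted = title.lower().strip()
    for item in items:
        # Crossref returns each work's title as a list of strings.
        for candidate in item.get("title", []):
            if candidate.lower().strip() == wanted:
                return True
    return False

if __name__ == "__main__":
    print(citation_exists("Attention Is All You Need"))  # a real paper -> True
```

In practice you would loosen the match (fuzzy title similarity, author overlap) before declaring a citation suspect, since real papers are often cited with small title variations.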

Red Flags to Watch For

Be suspicious of highly specific details that are hard to verify (exact statistics, obscure historical dates, specific quotes). Watch for overly fluent and confident language on niche or recent topics. Look for internal contradictions within the same response. Be wary when a model provides a source but the details feel too convenient or perfectly aligned with the question asked.
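Several of these red flags are lexical enough that a crude scanner can surface them for manual review. The patterns below, matching precise percentages, specific years, long direct quotes, and citation-style references, are illustrative assumptions rather than a vetted taxonomy; a match marks a claim worth double-checking, never a confirmed hallucination.

```python
import re

# Crude lexical heuristics: each pattern flags a hard-to-verify specific.
# A hit means "verify this claim", not "this is a hallucination".
RED_FLAG_PATTERNS = {
    "precise statistic": re.compile(r"\b\d{1,3}(?:\.\d+)?\s?%"),
    "specific year": re.compile(r"\b(1[5-9]\d{2}|20\d{2})\b"),
    "direct quote": re.compile(r"\"[^\"]{20,}\""),
    "citation-like reference": re.compile(r"\(\s*[A-Z][a-z]+ et al\.,?\s*\d{4}\s*\)"),
}

def flag_claims(text: str) -> list[tuple[str, str]]:
    """Return (flag type, matched span) pairs for claims worth double-checking."""
    hits = []
    for label, pattern in RED_FLAG_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group(0)))
    return hits

sample = 'The study (Smith et al., 2019) found that 73.4% of users improved.'
for label, span in flag_claims(sample):
    print(f"{label}: {span}")
```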

Verification Methods

Always cross-reference critical claims with authoritative sources. Search for cited papers and authors independently. Check if statistics align with reputable databases. For code, test the output rather than trusting it. For historical or scientific claims, verify against established references. When in doubt, ask the model to provide its reasoning — this won't prevent hallucinations but can sometimes reveal logical gaps.
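"Test the output rather than trusting it" can be made concrete for generated code. The sketch below runs model-generated Python plus a small assertion in a subprocess with a timeout, so crashes and infinite loops stay out of the calling program; the smoke_test helper and the factorial example are hypothetical names of ours. Note that a subprocess is isolation for convenience, not a security sandbox, so read generated code before running it.

```python
import subprocess
import sys
import textwrap

def smoke_test(generated_code: str, test_snippet: str, timeout: int = 5) -> bool:
    """Run model-generated code plus a small assertion in a subprocess.

    A separate process keeps crashes and infinite loops out of the
    calling program. It is NOT a security sandbox: only run code
    you have read first.
    """
    program = textwrap.dedent(generated_code) + "\n" + textwrap.dedent(test_snippet)
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # hung code is a failure, not a pass
    return result.returncode == 0

# Hypothetical model output claiming to compute a factorial.
model_output = """
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)
"""
print(smoke_test(model_output, "assert factorial(5) == 120"))  # True if correct
```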

Ready to test your knowledge?

Take the Hallucination Detection Quiz