What Is AI Hallucination?

When AI generates false, fabricated, or inconsistent content — and why it's worse with tool use.

AI hallucination occurs when a model generates content that is factually incorrect, fabricated, or inconsistent with its training data or provided context. In advisory AI (chatbots, content generation), hallucinations are misleading. In agentic AI (tool use, database writes), hallucinations are dangerous — a hallucinated parameter in a function call can cause data loss, unauthorized access, or system degradation.

Types of Hallucination

1. Factual hallucination — stating false information as fact.
2. Contextual hallucination — contradicting previously stated or provided information.
3. Citation hallucination — fabricating references, papers, or URLs.
4. Schema hallucination — inventing API parameters, database fields, or function names that don't exist.

Schema hallucination is the most dangerous in agentic contexts because it directly affects tool execution.
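To make schema hallucination concrete, here is a minimal sketch comparing a model-emitted tool call against the tool's real parameter set. The tool name and parameters (`send_email`, `cc_all`, etc.) are hypothetical, invented purely for illustration.

```python
# Hypothetical example: tool names and parameters are illustrative,
# not from any real API.

# The tool the agent actually has access to:
real_tool_schema = {
    "name": "send_email",
    "parameters": {"to", "subject", "body"},
}

# A schema-hallucinated call: the model invents a "cc_all" parameter
# and misremembers "body" as "content".
hallucinated_call = {
    "name": "send_email",
    "arguments": {"to": "ops@example.com", "content": "...", "cc_all": True},
}

# Parameters that exist only in the model's output, not in the schema:
unknown = set(hallucinated_call["arguments"]) - real_tool_schema["parameters"]
print(sorted(unknown))  # ['cc_all', 'content']
```

If this call were executed without a check, the invented parameters would either crash the tool or silently change its behavior.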

Why Hallucination Gets Worse with Tool Use

In a text-generation context, a hallucination produces wrong text. In a tool-use context, a hallucination produces wrong code. A hallucinated database column name in a SQL query causes a runtime error — or worse, writes to the wrong column. A fabricated API endpoint routes data to an unauthorized server. An invented function parameter triggers unexpected behavior. The consequences escalate from "misleading content" to "system failure."
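The SQL case above can be caught before execution by checking referenced columns against the live table schema. The sketch below uses a deliberately simplified regex and a hypothetical `users` table; a production check would use a real SQL parser and read the schema from the database itself.

```python
import re

# Known columns for a hypothetical "users" table (assumption: the
# executor can read the live schema, e.g. via information_schema).
KNOWN_COLUMNS = {"id", "email", "created_at"}

def check_insert_columns(sql: str) -> set[str]:
    """Return column names in an INSERT statement that don't exist.

    Deliberately simplified: a real implementation would use a SQL
    parser, not a regex.
    """
    match = re.search(r"INSERT\s+INTO\s+\w+\s*\(([^)]*)\)", sql, re.IGNORECASE)
    if not match:
        return set()
    referenced = {col.strip() for col in match.group(1).split(",")}
    return referenced - KNOWN_COLUMNS

# The model hallucinated a "username" column the table doesn't have.
bad_sql = "INSERT INTO users (email, username) VALUES (?, ?)"
print(check_insert_columns(bad_sql))  # {'username'}
```

A statement that references only real columns returns an empty set and is allowed through; anything else is rejected before it reaches the database.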

Current Defenses and Their Limits

RAG (Retrieval-Augmented Generation) grounds model outputs in retrieved documents — but the model can still hallucinate while summarizing retrieved content. Fact-checking models verify stated facts — but are themselves susceptible to hallucination. Constrained decoding limits model outputs to valid tokens — but doesn't prevent semantically invalid combinations of valid tokens. These defenses reduce hallucination frequency but don't guarantee prevention.
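The constrained-decoding gap can be shown in a few lines: every emitted value can come from an allowed vocabulary while the combination is still nonsense. The field names and values below are hypothetical.

```python
# Illustration of the gap between token-level and semantic validity.
# All field names and values here are hypothetical.

VALID_FIELDS = {"region", "instance_type"}
VALID_VALUES = {"us-east-1", "eu-west-1", "t3.micro", "m5.large"}

def token_valid(field: str, value: str) -> bool:
    # What constrained decoding can guarantee: every emitted token
    # comes from an allowed vocabulary.
    return field in VALID_FIELDS and value in VALID_VALUES

def semantically_valid(field: str, value: str) -> bool:
    # What it cannot guarantee: that the combination makes sense.
    allowed = {
        "region": {"us-east-1", "eu-west-1"},
        "instance_type": {"t3.micro", "m5.large"},
    }
    return value in allowed.get(field, set())

# Each token is individually valid, but the pairing is nonsense:
print(token_valid("region", "t3.micro"))         # True
print(semantically_valid("region", "t3.micro"))  # False
```

This is why token-level constraints alone cannot close the hallucination gap: validity has to be checked against the meaning of the whole call, not just its vocabulary.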

The Execution-Level Defense

Exogram's schema enforcement rule validates every tool call against known schemas before execution. Hallucinated parameters, invented endpoints, and fabricated field names are blocked deterministically — not by another model, but by code. The conflict detection system also catches factual contradictions across sessions. Defense against hallucination at the execution boundary is deterministic: schema match = pass, no match = block. No probability, no error rate.
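Exogram's internals are not shown here, but the idea of deterministic validation at the execution boundary can be sketched in plain Python: a registry of known tool schemas, and a check that rejects unknown tools, hallucinated parameters, missing required parameters, and wrong types. The registry contents (`create_ticket` and its fields) are hypothetical.

```python
# Minimal sketch of deterministic tool-call validation — an
# illustration of the idea, not Exogram's actual implementation.
# The registry contents are hypothetical.

TOOL_REGISTRY = {
    "create_ticket": {
        "required": {"title": str, "priority": str},
        "optional": {"assignee": str},
    },
}

def validate_call(name: str, args: dict) -> tuple[bool, str]:
    schema = TOOL_REGISTRY.get(name)
    if schema is None:
        return False, f"unknown tool: {name}"  # invented endpoint
    known = {**schema["required"], **schema["optional"]}
    extra = set(args) - set(known)
    if extra:
        return False, f"unknown parameters: {sorted(extra)}"  # hallucinated params
    missing = set(schema["required"]) - set(args)
    if missing:
        return False, f"missing parameters: {sorted(missing)}"
    for key, value in args.items():
        if not isinstance(value, known[key]):
            return False, f"wrong type for {key}"
    return True, "ok"

print(validate_call("create_ticket", {"title": "Outage", "priority": "high"}))
print(validate_call("create_ticket", {"title": "Outage", "severity": "high"}))
```

The check is a pure function of the call and the registry: the same input always produces the same pass/block decision, which is what makes the defense deterministic rather than probabilistic.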

Frequently Asked Questions

Can AI hallucination be eliminated?

Not at the model level — hallucination is an inherent property of autoregressive language models. But at the execution level, hallucination-driven failures can be eliminated through deterministic schema validation and action governance.

Is RAG enough to prevent hallucination?

RAG reduces factual hallucination by grounding responses in retrieved documents, but models can still hallucinate while summarizing retrieved content. And RAG doesn't address schema hallucination (inventing API parameters) at all.

How does Exogram handle AI hallucination?

Exogram validates tool call schemas deterministically, catches factual contradictions through conflict detection, and blocks actions that reference non-existent parameters, endpoints, or fields. Schema hallucination is blocked at the execution boundary.