AI Hallucination
Definition
An AI hallucination occurs when a model generates content that is factually incorrect, fabricated, or inconsistent with its training data or provided context. Hallucinations can manifest as:
- Invented facts presented confidently
- Contradictions with previously stated information
- Fabricated citations or references
- Schema hallucination: inventing API parameters or database fields that don't exist
Why It Matters
In advisory contexts, hallucinations are misleading. In agentic contexts, they are dangerous. A hallucinated database column name in a function call causes runtime errors; a fabricated API endpoint can send data to a server that was never meant to receive it. Schema hallucination is particularly dangerous when agents have tool-use capabilities, because invented parameters flow directly into executed actions.
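To make the failure mode concrete, here is a minimal sketch of a hallucinated column name reaching a real database. The table and column names are hypothetical, chosen only for illustration; the point is that without upfront validation, the error surfaces at execution time, inside the agent's action.

```python
import sqlite3

# A small in-memory database standing in for the agent's real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

try:
    # The agent hallucinates a "ship_date" column that the schema never defined.
    conn.execute("SELECT ship_date FROM orders")
except sqlite3.OperationalError as exc:
    # The call fails only at runtime, after the agent has already acted.
    print(f"runtime failure: {exc}")
```

In a read-only query the cost is an error message; in a write or an external API call, the same class of mistake can have irreversible effects, which is why catching it before execution matters.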
How Exogram Addresses This
Exogram's schema enforcement rule validates every tool call against known schemas before execution. Hallucinated parameters, invented endpoints, and fabricated field names are blocked deterministically. The conflict detection system also catches factual contradictions across sessions.
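Exogram's internals are not shown here, but the deterministic check such a rule performs can be sketched as follows. The tool name, schema layout, and `validate_tool_call` function are illustrative assumptions, not Exogram's actual API: the idea is simply to compare every proposed call against a known schema and block anything the schema does not define.

```python
# Hypothetical registry of known tool schemas (not Exogram's real format).
KNOWN_TOOLS = {
    "query_orders": {
        "required": {"table", "column"},
        "allowed": {"table", "column", "limit"},
        # Known database columns, so hallucinated field names are caught too.
        "values": {"column": {"id", "status", "created_at"}},
    }
}

def validate_tool_call(name, args):
    """Return a list of violations; an empty list means the call may execute."""
    schema = KNOWN_TOOLS.get(name)
    if schema is None:
        return [f"unknown tool: {name}"]
    violations = []
    unknown = set(args) - schema["allowed"]
    if unknown:
        violations.append(f"hallucinated parameters: {sorted(unknown)}")
    missing = schema["required"] - set(args)
    if missing:
        violations.append(f"missing required parameters: {sorted(missing)}")
    for key, valid in schema.get("values", {}).items():
        if key in args and args[key] not in valid:
            violations.append(f"hallucinated value for {key!r}: {args[key]!r}")
    return violations

# A call naming a nonexistent column is rejected before anything runs:
print(validate_tool_call("query_orders", {"table": "orders", "column": "shiped_date"}))
```

Because the check is a set comparison against a fixed schema rather than another model judgment, it either passes or fails deterministically, which is what distinguishes this approach from probabilistic filtering.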
Key Takeaways
- Hallucinations aren't bugs; they're an inherent property of probabilistic models
- Schema hallucination (inventing fields or parameters) is the most dangerous form for agents
- Training-time fixes reduce the frequency of hallucinations but cannot eliminate them
- Deterministic schema enforcement is the only way to drive the schema-error rate to zero