RAG Security

Definition

The security considerations specific to Retrieval-Augmented Generation (RAG) systems, where AI models query external knowledge bases to augment their responses. RAG security concerns include: data poisoning (injecting malicious content into the knowledge base), indirect prompt injection (embedding instructions in retrieved documents), context manipulation (influencing what gets retrieved), and action safety (governing what agents do with retrieved context).

Why It Matters

RAG systems extend the model's knowledge but also extend its attack surface. Poisoned documents in the knowledge base can manipulate model behavior. Retrieved context that contains destructive instructions can cause the model to propose harmful tool calls. Securing RAG requires both retrieval integrity and execution governance.

How Exogram Addresses This

Exogram governs what agents do with retrieved context. Even if RAG retrieves poisoned or manipulated content, the execution boundary validates every proposed action. Good context doesn't guarantee safe actions — and bad context can't bypass deterministic enforcement.

Related Terms

medium severityProduction Risk Level

Key Takeaways

  • This concept is part of the broader AI governance landscape
  • Production AI requires multiple layers of protection
  • Deterministic enforcement provides zero-error-rate guarantees

Governance Checklist

0/4Vulnerable

Frequently Asked Questions