Enterprise AI Architecture

SOC 2 Audit Trails for LLM Tool Calls

How to pass SOC 2 Type II audits when deploying autonomous AI agents by using Exogram's cryptographic execution ledger.

01. The Architectural Threat

  • SOC 2 requires strict access controls, change management, and audit trails for all automated systems.
  • LLM agents are "black boxes": they make decisions dynamically, and their prompts change constantly.
  • When an auditor asks, "Why did the system delete this S3 bucket at 3:00 AM on a Sunday?", saying "The AI hallucinated it" results in an immediate SOC 2 failure.

02. The Exogram Resolution

  • Exogram transforms black-box AI behavior into deterministic, cryptographically auditable ledger entries.
  • Every time an agent requests a tool call, Exogram logs the `evaluation_id`, the `payload_hash`, and a `context_snapshot` of the exact graph state at the moment of evaluation.
  • Auditors can definitively see the exact context the AI was provided, the deterministic policy rule that allowed/blocked the action, and the execution timestamp.
  • Exogram makes non-human entities fully accountable under enterprise IAM standards.
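To make the ledger concrete, here is what a single entry for the 3:00 AM S3 deletion scenario might look like as a row in the execution table from the blueprint below. This is an illustrative sketch: the identifiers, UUID, policy rule name, and timestamp are all hypothetical values, not output from a real Exogram deployment.

```sql
-- Hypothetical ledger row: the policy engine blocked the call, and the
-- context_snapshot preserves the exact context the agent was given.
INSERT INTO public.exogram_executions
    (evaluation_id, idempotency_key, user_id, action_type,
     status, context_snapshot, executed_at)
VALUES
    ('eval_7f3a9c',
     'delete-bucket-prod-logs-2024-06-02',            -- request-scoped key
     'a1b2c3d4-0000-0000-0000-000000000001',          -- agent machine identity
     's3.delete_bucket',
     'BLOCKED',
     '{"policy_rule": "deny_destructive_offhours", "bucket": "prod-logs"}',
     '2024-06-02 03:00:14+00');                       -- a Sunday, 3:00 AM UTC
```

An auditor querying this row sees the three things SOC 2 asks for: what context the agent had, which deterministic rule fired, and exactly when.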

Technical Implementation Blueprint

-- Zero-trust SOC 2 audit ledger schema:

CREATE TABLE public.exogram_executions (
    evaluation_id TEXT PRIMARY KEY,
    idempotency_key TEXT NOT NULL,
    user_id UUID NOT NULL, -- The agent's machine identity
    action_type TEXT NOT NULL,
    payload_hash TEXT, -- Hash of the exact tool-call arguments
    status TEXT NOT NULL CHECK (status IN ('EVALUATED', 'EXECUTED', 'BLOCKED')),
    context_snapshot JSONB, -- The exact state of the world to prove decision logic
    executed_at TIMESTAMP WITH TIME ZONE,
    CONSTRAINT unique_tenant_idemp UNIQUE (user_id, idempotency_key)
);
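The `unique_tenant_idemp` constraint is what makes agent retries safe to record: replaying the same request can never produce a duplicate audit row. A minimal sketch of how the writer side would use it (the identifiers and values are illustrative):

```sql
-- A network-level retry with the same (user_id, idempotency_key) pair
-- hits the unique constraint and becomes a no-op instead of a second
-- ledger entry, so the audit trail stays exactly-once per evaluation.
INSERT INTO public.exogram_executions
    (evaluation_id, idempotency_key, user_id, action_type, status)
VALUES
    ('eval_91d2e0',
     'rotate-key-2024-06-03',
     'a1b2c3d4-0000-0000-0000-000000000001',
     'iam.rotate_access_key',
     'EXECUTED')
ON CONFLICT ON CONSTRAINT unique_tenant_idemp DO NOTHING;
```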

Frequently Asked Questions

Can an agent mutate its own audit logs?

No. The Exogram execution table is strictly governed by Postgres Row-Level Security and is completely isolated from the agent's runtime environment.
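A minimal sketch of that isolation in Postgres terms, assuming the agent runtime connects under a dedicated role here called `agent_runtime` and identifies itself via a session setting `app.agent_id` (both names are illustrative, not Exogram's actual configuration):

```sql
-- Row-Level Security is enabled and forced, so even the table owner
-- cannot bypass it within a normal session.
ALTER TABLE public.exogram_executions ENABLE ROW LEVEL SECURITY;
ALTER TABLE public.exogram_executions FORCE ROW LEVEL SECURITY;

-- Agents may only append rows, and only under their own identity.
CREATE POLICY agent_append ON public.exogram_executions
    FOR INSERT TO agent_runtime
    WITH CHECK (user_id = current_setting('app.agent_id')::uuid);

-- No UPDATE or DELETE policies exist for agent_runtime, and the
-- privileges are revoked outright, so existing ledger rows are
-- immutable from the agent's runtime environment.
REVOKE UPDATE, DELETE ON public.exogram_executions FROM agent_runtime;
```

With RLS forced and no UPDATE/DELETE policy defined for the role, mutation attempts are denied by default; the explicit REVOKE is belt-and-braces.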
