AI Tool Use

Definition

The capability of AI agents to interact with external tools, APIs, databases, and systems. Tool use transforms AI from a text generator into an action-taker — reading files, writing code, querying databases, sending emails, and modifying production state. Providers expose this capability through interfaces such as function calling (OpenAI), tool use (Anthropic), and the Model Context Protocol (MCP).
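As a concrete illustration, function-calling interfaces typically describe each tool with a JSON Schema that the model fills in when it decides to act. The sketch below uses the OpenAI-style tool format; the `send_email` tool itself and its fields are hypothetical examples, not part of any provider's built-in toolset.

```python
# Illustrative tool definition in the JSON Schema style used by
# OpenAI-style function calling. The "send_email" tool is hypothetical.
send_email_tool = {
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email on behalf of the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string", "description": "Recipient address"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}
```

The model returns a structured call (a tool name plus arguments matching this schema) rather than free text, and the surrounding agent runtime decides whether and how to execute it.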

Why It Matters

Tool use is what makes AI agents dangerous. A model that only generates text can produce harmful content — a model that uses tools can execute harmful actions. The risk surface expands from "bad outputs" to "bad actions." Every tool an agent can access is a potential attack vector.

How Exogram Addresses This

Exogram governs the boundary between tool selection and tool execution. The deterministic policy engine validates every tool call — checking schema, intent, boundaries, and system state — before the tool actually runs. It works with every tool-use implementation.
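A pre-execution gate of this kind can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration of a deterministic validation step between a model's proposed tool call and its execution; the names (`Policy`, `validate_call`) and the specific checks are assumptions for illustration, not Exogram's actual API.

```python
# Hypothetical sketch of a deterministic pre-execution gate: every proposed
# tool call is checked against an explicit policy before it is allowed to run.
from dataclasses import dataclass, field


@dataclass
class Policy:
    allowed_tools: set[str]                                  # tool allowlist
    required_args: dict[str, set[str]] = field(default_factory=dict)
    blocked_recipients: set[str] = field(default_factory=set)


def validate_call(policy: Policy, tool: str, args: dict) -> tuple[bool, str]:
    """Return (ok, reason) for a proposed tool call, before execution."""
    if tool not in policy.allowed_tools:
        return False, f"tool '{tool}' is not permitted"
    missing = policy.required_args.get(tool, set()) - args.keys()
    if missing:
        return False, f"missing required arguments: {sorted(missing)}"
    # Example boundary check: certain recipients are off-limits.
    if tool == "send_email" and args.get("to") in policy.blocked_recipients:
        return False, "recipient is outside the allowed boundary"
    return True, "ok"


policy = Policy(
    allowed_tools={"send_email", "read_file"},
    required_args={"send_email": {"to", "subject", "body"}},
    blocked_recipients={"all-staff@example.com"},
)

ok, reason = validate_call(
    policy, "send_email",
    {"to": "all-staff@example.com", "subject": "hi", "body": "x"},
)
# Rejected before execution: recipient is outside the allowed boundary.
```

Because the checks are plain code rather than another model, the same input always produces the same verdict, which is what makes this layer deterministic.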

Related Terms

Production Risk Level

Medium severity

Key Takeaways

  • This concept is part of the broader AI governance landscape
  • Production AI requires multiple layers of protection
  • Deterministic enforcement provides zero-error-rate guarantees

Governance Checklist

Score: 0/4 (Vulnerable)

Frequently Asked Questions