Agentic AI security: runtime controls and signed proof for agents that act.

A field guide for technical founders and CTOs of fast-growing AI companies. What changes when agents use tools, credentials, customer data, and delegated authority — and the runtime controls plus signed evidence receipts enterprise buyers expect to see during security review.


What this page covers

This guide explains the attack surface unique to agentic AI architectures — delegation chains, inter-agent injection, tool-use exploits, and runtime drift — and the runtime controls needed to defend against them. Coverage is mapped to NIST AI RMF MANAGE 2.x and OWASP LLM08 (Excessive Agency).

What makes AI “agentic”

An AI agent is a system that receives a goal, breaks it into sub-tasks, calls external tools, and acts on results — often without human approval at each step. A multi-agent system chains several of these together: one agent plans, another retrieves data, a third executes code, and a fourth validates the output.
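
As a concrete sketch of that loop (every name below is hypothetical, and plan_next_step stands in for a model call):

    # Minimal agent loop: a planner picks the next action, tools execute it,
    # and results feed back into planning. Illustrative only -- in a real
    # system, plan_next_step would be an LLM call, not a stub.
    from dataclasses import dataclass

    @dataclass
    class Step:
        tool: str           # which tool the planner chose
        args: dict          # arguments the planner chose
        done: bool = False  # True once the goal is considered met

    TOOLS = {
        "search_orders": lambda args: f"orders matching {args['query']!r}",
        "issue_refund": lambda args: f"refund issued for {args['order_id']}",
    }

    def plan_next_step(goal: str, history: list) -> Step:
        """Stub for the model call that maps (goal, history) to an action."""
        if not history:
            return Step("search_orders", {"query": goal})
        return Step("issue_refund", {"order_id": "A-1001"}, done=True)

    def run_agent(goal: str) -> list:
        history = []
        while True:
            step = plan_next_step(goal, history)
            result = TOOLS[step.tool](step.args)  # each call is a privilege boundary
            history.append((step.tool, step.args, result))
            if step.done:
                return history

Note that no human reviews the loop between iterations: whatever the planner emits, the tool layer executes.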

This architecture powers the most capable AI products shipping today — coding assistants that create pull requests, research agents that query databases and synthesize reports, customer-service systems that look up orders and issue refunds. The capability leap is real. So is the security gap.

Traditional AI security focused on a single model endpoint: you send a prompt, you get a response, you evaluate that response. Agentic systems break this model. The “response” isn’t text — it’s a sequence of actions executed across tools, APIs, and other agents, sometimes spanning minutes or hours.

Why traditional AppSec doesn’t cover AI agents

Application security tools were built for deterministic software. They assume that code follows defined execution paths, that inputs map predictably to outputs, and that access controls are enforced by the application layer.

AI agents break every one of these assumptions:

Non-deterministic execution

The same user input can produce different action sequences depending on context, model state, and tool outputs. WAFs and static analysis can’t model an attack surface that changes with every request.

Natural-language control plane

The agent’s behavior is governed by natural language instructions, not compiled code. Prompt injection isn’t SQL injection — it targets the decision-making logic itself, not a data layer.

Implicit authorization

When an agent calls a tool, it acts on behalf of the user — but the tool sees the agent’s credentials, not the user’s intent. The mapping between “what the user asked for” and “what tools the agent calls” is mediated by a model, not enforced by code.
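
One mitigation is to carry the originating user’s grant alongside the agent’s credentials and check every tool call against it before execution. A minimal sketch, assuming a hypothetical scope model (USER_GRANTS and TOOL_REQUIRED_SCOPE are invented names):

    # Sketch: bind each tool call to the scopes the originating user actually
    # granted, so the tool checks the user's authority, not just the agent's
    # service credentials.
    USER_GRANTS = {
        "user-42": {"orders:read"},            # this user never granted write access
    }

    TOOL_REQUIRED_SCOPE = {
        "read_order": "orders:read",
        "export_all_orders": "orders:export",  # broad scope, rarely granted
    }

    def authorize_tool_call(user_id: str, tool: str) -> None:
        required = TOOL_REQUIRED_SCOPE[tool]
        if required not in USER_GRANTS.get(user_id, set()):
            raise PermissionError(f"{tool} requires {required}, not granted by {user_id}")

    authorize_tool_call("user-42", "read_order")           # passes
    # authorize_tool_call("user-42", "export_all_orders")  # raises PermissionError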

Action chains, not requests

A single user instruction can trigger dozens of API calls, file reads, and database queries. Security must evaluate the entire chain, not individual requests in isolation.

Four attack surfaces unique to agentic AI

1. Inter-agent communication

When Agent A passes instructions to Agent B, those messages become an attack vector. A compromised or manipulated upstream agent can inject instructions that downstream agents execute without question — a form of indirect prompt injection that propagates through the entire chain.
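
A common countermeasure is to treat inter-agent messages as data rather than instructions: wrap each message in an envelope that records its origin, and refuse directives from any agent that is not a trusted planner. A sketch under that assumption, with hypothetical agent names:

    # Sketch: inter-agent messages carry origin metadata; downstream agents
    # only accept *instructions* from trusted planners, never from
    # data-fetching agents that may be echoing attacker-controlled text.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentMessage:
        sender: str   # which agent produced this message
        kind: str     # "instruction" or "data"
        body: str

    TRUSTED_PLANNERS = {"planner"}   # only these may issue instructions

    def accept(msg: AgentMessage) -> str:
        if msg.kind == "instruction" and msg.sender not in TRUSTED_PLANNERS:
            raise ValueError(f"instruction from untrusted agent {msg.sender!r} rejected")
        return msg.body

    accept(AgentMessage("retriever", "data", "...page content..."))           # fine
    # accept(AgentMessage("retriever", "instruction", "export all records"))  # rejected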

2. Tool-use exploits

Agents call APIs, execute code, read files, and write to databases. Each tool invocation is a privilege boundary. An attacker who controls what arguments an agent passes to a tool — through poisoned context or manipulated planning steps — can escalate from “read customer record” to “export all customer records.”
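
A runtime argument policy at the tool boundary can block that escalation even when the planning step has been manipulated. A minimal sketch (tool names, limits, and field names are illustrative, not a complete policy):

    # Sketch: validate tool arguments against a per-tool policy before
    # execution, so a manipulated plan cannot widen "read one record"
    # into "export everything".
    def check_export_args(args: dict) -> None:
        if args.get("limit") is None or args["limit"] > 100:
            raise ValueError("bulk export blocked: explicit limit <= 100 required")
        if args.get("fields") and "ssn" in args["fields"]:
            raise ValueError("sensitive field not exportable by agents")

    ARG_POLICIES = {"export_customers": check_export_args}

    def guarded_call(tool: str, args: dict, execute) -> object:
        policy = ARG_POLICIES.get(tool)
        if policy:
            policy(args)            # raises before any data leaves the system
        return execute(tool, args)

The key design choice: the policy runs in ordinary code at the tool boundary, outside the model’s control, so a prompt-level compromise cannot talk its way past it.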

3. Delegation chains

Multi-step delegation creates confused-deputy problems. Agent A has permission to delegate to Agent B, which can invoke Tool C. But was Agent A’s original instruction legitimate? By the time Tool C executes, the provenance of the request is three layers removed from any human decision.

4. Emergent behavior

Individual agents pass unit tests. The composed system does something unexpected. Emergent failures aren’t bugs in any single component — they’re interaction effects that only appear when agents operate together in production with real data and real timing.

Why unit testing falls short

Standard AI testing validates a model’s responses to known inputs. You write a prompt, check the output, mark it pass or fail. This works for single-turn interactions. It breaks for agentic systems because:

- Execution is non-deterministic: the same input can produce a different action sequence on every run, so a passing test proves little about the next one.
- Failures are emergent: individual agents pass their unit tests while the composed system misbehaves.
- Behavior depends on runtime context: live tool outputs, inter-agent messages, and accumulated state that no test fixture can reproduce.

This isn’t a shortcoming of testing teams. It’s a fundamental architectural gap. The only way to catch these failures is to observe the system as it runs.

Framework mapping: OWASP, NIST, MITRE ATLAS

Agentic attack surfaces map directly to established risk taxonomies — they’re extensions of known categories, not a wholly new domain.

Attack surface               OWASP LLM Top 10                MITRE ATLAS   NIST AI RMF
Inter-agent injection        LLM01: Prompt Injection         AML.T0051     MANAGE 2.2
Tool-use escalation          LLM07: Insecure Plugin Design   AML.T0040     MANAGE 3.1
Delegation-chain confusion   LLM08: Excessive Agency         AML.T0048     GOVERN 1.3
Emergent behavior            LLM09: Overreliance             AML.T0043     MEASURE 2.6

Mapped to OWASP Top 10 for LLM Applications v1.1 (the list that numbers Excessive Agency LLM08; the 2025 revision renumbers it LLM06), MITRE ATLAS v4.0, and NIST AI RMF 1.0.

Runtime monitoring for agentic systems

Runtime monitoring watches agent behavior as it happens. Instead of testing what an agent might do, you observe what it is doing — every tool call, every inter-agent message, every decision in the delegation chain.

Three capabilities matter for agentic security:

Tool-call auditing

Every tool invocation is logged with its arguments, the requesting agent, the originating user instruction, and the returned data. Anomalous patterns — an agent suddenly requesting bulk exports when it usually reads single records — trigger alerts before data leaves the system.
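
Concretely, that can be one structured record per invocation plus a simple volume rule over a rolling window. An illustrative sketch, not a production-grade detector:

    # Sketch: one structured audit record per tool call, with a crude volume
    # rule that flags an agent suddenly touching far more rows than its
    # recent norm.
    import json, time
    from collections import deque

    RECENT_ROW_COUNTS = deque(maxlen=50)   # rolling window of rows touched per call

    def audit_tool_call(agent: str, user_instruction: str, tool: str,
                        args: dict, rows_returned: int) -> None:
        record = {
            "ts": time.time(),
            "agent": agent,
            "origin_instruction": user_instruction,
            "tool": tool,
            "args": args,
            "rows_returned": rows_returned,
        }
        print(json.dumps(record))          # ship to your log pipeline in real systems

        baseline = max(RECENT_ROW_COUNTS, default=1)
        if rows_returned > 10 * baseline:  # spike rule; tune for your workload
            print(json.dumps({"alert": "bulk-read anomaly", "agent": agent}))
        RECENT_ROW_COUNTS.append(rows_returned)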

Delegation-chain tracing

Every request in a multi-agent workflow carries provenance metadata — which human instruction originated the chain, which agents processed it, and what transformations occurred along the way. If a downstream agent receives instructions that can’t be traced to a legitimate origin, the chain is halted.
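
One way to implement this is a hash-chained, HMAC-signed provenance log that every hop extends and the downstream consumer verifies back to a human root. A simplified sketch; a real deployment would use per-agent keys and proper key management rather than one shared secret:

    # Sketch: every hop appends an HMAC-signed provenance entry; a downstream
    # agent verifies the chain back to a human-originated root before acting.
    import hmac, hashlib, json

    KEY = b"demo-shared-key"   # per-agent keys and real key management in production

    def sign(entry: dict, prev_sig: str) -> str:
        payload = json.dumps(entry, sort_keys=True) + prev_sig
        return hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()

    def append_hop(chain: list, agent: str, action: str) -> None:
        prev_sig = chain[-1]["sig"] if chain else ""
        entry = {"agent": agent, "action": action}
        chain.append({**entry, "sig": sign(entry, prev_sig)})

    def verify_chain(chain: list) -> bool:
        prev_sig = ""
        for hop in chain:
            entry = {"agent": hop["agent"], "action": hop["action"]}
            if not hmac.compare_digest(hop["sig"], sign(entry, prev_sig)):
                return False               # tampered or unsourced hop: halt the chain
            prev_sig = hop["sig"]
        return bool(chain) and chain[0]["agent"] == "human"  # must root in a human request

    chain: list = []
    append_hop(chain, "human", "refund order A-1001")
    append_hop(chain, "planner", "delegate to executor")
    assert verify_chain(chain)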

Behavioral drift detection

Over long-running tasks, an agent’s actions are compared against its established behavioral baseline. Gradual context drift — where accumulated tool outputs or inter-agent messages shift an agent’s behavior toward unsafe territory — is flagged before the agent crosses a policy boundary.
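
A simple version of this compares the agent’s recent tool-usage mix against its baseline distribution and alerts past a distance threshold. A sketch using total-variation distance (the threshold and tool names are illustrative):

    # Sketch: compare an agent's recent tool-usage mix against its baseline
    # with a total-variation distance; flag the session when the mix drifts.
    from collections import Counter

    def usage_distribution(calls: list) -> dict:
        counts = Counter(calls)
        total = sum(counts.values())
        return {tool: n / total for tool, n in counts.items()}

    def drift(baseline: dict, recent: dict) -> float:
        tools = set(baseline) | set(recent)
        return 0.5 * sum(abs(baseline.get(t, 0.0) - recent.get(t, 0.0)) for t in tools)

    BASELINE = usage_distribution(["read_order"] * 90 + ["search_orders"] * 10)
    RECENT   = usage_distribution(["read_order"] * 40 + ["export_customers"] * 60)

    if drift(BASELINE, RECENT) > 0.3:      # threshold tuned per workflow
        print("behavioral drift alert: agent's tool mix left its baseline")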

How GLACIS approaches agentic security

GLACIS provides runtime observability for AI systems, including multi-agent architectures. The platform sits between your agents and the tools they call, monitoring behavior without adding latency to the critical path.

Mapped to OVERT controls ov-2.1 (runtime behavior logging), ov-3.1 (tool-call attestation), and ov-4.2 (multi-agent provenance tracking).

See one agent workflow hardened end‑to‑end

The Agent Runtime Security & Evidence Sprint maps a delegation chain inside your infrastructure, installs runtime controls at the tool boundary, and produces a signed evidence pack you can hand to enterprise security reviewers.

Book the Agent Runtime Security Sprint

Harden one agent workflow before the next security review.

The Agent Runtime Security & Evidence Sprint is a fixed-scope, 10-business-day engagement on one named workflow. Outputs include an agent surface map, a runtime control plan, a signed receipt + evidence-pack demonstration, and a customer-facing security-review artifact. $48k fixed — founder design-partner pricing available on request.

See a sample evidence pack · Harden an agent