Natural Selection

The Week Attackers Started Hunting AI Agents

The Body Count

This week: 22 documented prompt injection techniques targeting AI agents in the wild. A new attack category — slopsquatting — where fake software packages are designed specifically to be recommended by AI coding assistants. A quantified "$670,000 Shadow AI Premium" on data breaches. Microsoft Copilot deployments stalling at weeks 6-12 because governance was treated as a checkbox. And Credo AI surveying 371 enterprise leaders to find only 4% governing AI at scale despite 60% deploying it across departments.

The pattern from last week was AI tools failing passively — vulnerabilities, exposed tokens, missing security configurations. This week, failures are active. Attackers are targeting AI agents deliberately, exploiting governance gaps enterprises haven't closed.


Winner of the Week: 22 Prompt Injection Techniques Against Live AI Agents

Check Point Research cataloged 22 indirect prompt injection techniques demonstrated against AI agents reading web content. These aren't theoretical attacks in a lab. Researchers observed real-world bypass of an AI-powered ad review system and documented campaigns abusing interest in AI tools like OpenClaw via fake installers designed to hijack agent decisions and steal credentials.

The attack surface is fundamental: AI agents consume external content as a core function. They read web pages, process documents, ingest repository content. Indirect prompt injection embeds malicious instructions inside that content — instructions the agent follows because it can't distinguish data from commands.

Separately, Help Net Security reported on evolving "agentic attack chains" including slopsquatting — creating fake software packages with names AI coding assistants are likely to recommend. The AI suggests the package. The developer installs it. The malware executes. The attack exploits trust developers place in AI recommendations, turning the assistant into an unwitting distribution channel.

22 techniques. Live demonstrations. Real bypasses. The security community is no longer asking whether AI agents can be attacked. They're cataloging the playbook.


Runner Up: The $670,000 Shadow AI Premium

Virtasant's analysis of enterprise AI incidents puts a dollar figure on the governance gap: shadow AI adds approximately $670,000 to average data breach costs. The IBM Cost of Data Breach Report found 63% of organizations lack AI governance initiatives entirely. Only 2% meet full responsible AI standards.

The economics are straightforward. Ungoverned AI use — personal accounts, unapproved tools, agents connecting to production systems without IT knowledge — creates attack surface nobody monitors, data flows nobody tracks, and incident response playbooks that don't account for AI-specific vectors. When the breach happens, remediation is more expensive because the organization doesn't know what the AI touched.

CISOs are starting to use this number. 96% of employees now use AI — and nearly a third pay for their own subscriptions to bypass corporate filters. Blocking is no longer viable strategy. The $670,000 premium is the cost of pretending it is.


Runner Up: Copilot Deployments Stall at Week 6

Practitioner analysis from 2toLead documents a pattern: Microsoft 365 Copilot deployments stall between weeks 6-12 because governance was treated as a moment, not a process. The initial rollout works. Licenses activate. Users adopt. Then the governance wall hits — identity controls lag, data governance gaps surface, and organizations realize the AI is accessing content nobody intended it to reach.

This is the governed-path problem manifesting at enterprise scale. The ungoverned path (deploy Copilot, hand out licenses) is fast. The governed path (inventory data access, configure identity controls, establish ongoing review) is slow. By the time governance catches up, the AI has been operating ungoverned for weeks.

Microsoft's own internal response is telling: they built two separate control planes — Agent 365 for agents and Copilot controls for Microsoft 365 Copilot — to manage security, governance, and observability as AI scales across the enterprise. If Microsoft needs dual control planes to govern its own AI deployment, the complexity facing everyone else isn't theoretical.


The Honorable Mentions

4% govern at scale, 60% deploy across departments. Credo AI's 2026 State of AI Governance report surveyed 371 enterprise leaders. The gap is stark: a majority deploy AI across multiple departments, a single-digit percentage govern it at scale. Most rely on manual or ad hoc governance. The report flags runtime governance for agentic AI and shadow AI discovery as urgent capability gaps current programs can't meet.

Gartner reframes the governance question. At the Gartner Data and Analytics Summit, messaging shifted: AI governance isn't about checking data fitness anymore. It's about whether decisions should be made with AI at all, and under what constraints. New accountability structures required.

Forrester tells CISOs to act now. Forrester's 2026 risk recommendations urge CISOs to inventory AI systems, embed AI risk into governance processes, and treat AI governance as shared leadership responsibility. Not a compliance exercise. A cross-functional operating requirement.

Non-human identities rival human accounts. Security Boulevard reports AI agents now rival or exceed human accounts in many environments — and identity governance programs were never designed to handle them. The junction of AI risk is identity: who can the agent act as, and what can it reach?

Regulators advance on multiple fronts. Financial regulators are elevating AI governance as a focus area. The U.S. State Department announced a $4M cooperative agreement to address obstacles to international AI deployment and governance. Pressure converges from multiple directions simultaneously.


The Pattern

Last week's incidents were passive failures — AI systems producing vulnerable code, exposing credentials, missing security configurations because constraints were never provided. This week's incidents are different. Attackers are actively targeting AI agents, exploiting the same gaps.

22 prompt injection techniques. Slopsquatting packages designed to be AI-recommended. $670,000 breach premiums from ungoverned AI. The transition from "AI does dumb things accidentally" to "attackers exploit AI doing dumb things deliberately" is selection pressure escalating.

Meanwhile, governance numbers haven't moved. 4% at scale. 63% with no governance initiatives. Copilot deployments stalling because governance was an afterthought. The governed path remains slower than the ungoverned path — and now the ungoverned path has active predators on it.

Natural selection doesn't care about your roadmap. It selects based on what's deployed today.


Sources

  1. Check Point Research. "9th March — Threat Intelligence Report." March 8, 2026.

  2. Help Net Security. "Agentic Attack Chains Advance as Infostealers Flood Criminal Markets." March 11, 2026.

  3. Virtasant. "Enterprise AI Governance: What Regulators Are Already Enforcing." March 9, 2026.

  4. MEXC News. "Why CISOs are Prioritizing AI Governance in 2026." March 13, 2026.

  5. 2toLead. "Microsoft 365 Copilot Governance in 2026: Why Deployments Stall Without It." March 5, 2026.

  6. Microsoft Inside Track. "Shaping AI Management at Microsoft with Agent 365 and Copilot Controls." March 8, 2026.

  7. Credo AI. "The State of AI Governance Report 2026." March 13, 2026.

  8. Eleanor Treharne-Jones. "Gartner Data and Analytics Summit 2026 — Day 1 Recap." March 9, 2026.

  9. Forrester. "2026 Really Is This Risky: Our Top Recommendations For CISOs." March 3, 2026.

  10. Security Boulevard. "AI Has Given You Two New Problems — And Identity Governance Is The Only Place They Meet." March 13, 2026.

  11. FinScan. "Crypto Coordination, AI Governance, and Rising Enforcement Risks." March 12, 2026.

  12. U.S. Department of State. "AI Innovation Adoption Program Funding Opportunity (FAIIA)." March 12, 2026.

Shadow AI AI Security Governance Gap Prompt Injection
Join the conversation Discuss on LinkedIn →
Mar 2026
Practice

How to Actually Benchmark a VPS: What a Day of Testing Taught Us About Getting It Right

15 mins
Mar 2026
AI Governance

Grep 'n Guess: The Research Caught Up

20 mins
Mar 2026
Natural Selection

Natural Selection - About This Series

2mins