Capital Flows to Counter AI Agent Risks

On September 16, researchers disclosed ShadowLeak, a zero-click flaw in ChatGPT’s Deep Research that exfiltrated Gmail data through a connector. No click, no phishing page, just a crafted input. At the same time, SentinelOne demonstrated malware prototypes that leaned on LLMs for runtime evasion. Together, these incidents shifted AI agents from a theoretical worry to a visible attack surface.

Markets responded immediately. Netskope’s IPO raised about $900 million and traded up on debut, a signal that investors remain willing to back cloud-native security platforms when they show a path to scale. CrowdStrike announced acquisitions of Pangea and Onum, framing a new “Agentic Security Workforce” strategy. Check Point bought Lakera to anchor AI runtime protections. Each of these moves points in the same direction: capital now flows toward the governance of agents, tokens, and connectors.

From Identity Abuse to Agent Abuse

Agent abuse matters because it redefines trust. Traditional phishing relies on tricking the user; token abuse relies on stealing existing credentials. Agent abuse bypasses both, targeting the automation layer that acts on the user’s behalf. In that sense, the connector becomes a new “super credential,” with access broader than any single password.

The Salesforce token incidents from last week’s Signals edition already showed how integrations create systemic exposure. ShadowLeak makes the same case for agents: a single connector with weak controls can extend compromise across tenants, services, and data stores. The fragility is structural, rooted in how modern platforms connect to each other.

The Capital Signal

Security markets don’t always move in sync with threat disclosures. This week they did. Netskope’s IPO reception was a vote of confidence in the scalability of cloud-edge security. Investors rewarded growth despite losses, seeing value in controlling data and traffic flows where AI and SaaS converge. The timing was no accident: buyers are looking for platforms that can prove they are ready for AI-driven risks, and Netskope positioned itself squarely in that narrative.

Meanwhile, incumbents pulled forward their M&A agendas. CrowdStrike used its Fal.Con conference to lay out a vision where agent governance sits alongside identity and data in its Falcon platform. The purchases of Pangea and Onum brought policy and telemetry under one roof. Check Point’s Lakera acquisition sent a similar signal: runtime protection for LLMs is not an R&D side project but a core pillar of future platform value.

These transactions validate agent security as a monetizable category. Investors can now point to real exits and public multiples when evaluating startups working on prompt injection defenses, token lifecycle tools, or AI-aware DLP. The message: this is not experimental anymore.

Ecosystem Impact

The impact is layered, and it lands differently for enterprises, consumers, and vendors.

For enterprises, the cost of ignoring agent risk is operational. An exploited connector can leak sensitive mail, CRM records, or code repositories without a single employee mistake. That means incident response must now include “agent hygiene”: inventories of which agents are connected, which scopes they hold, and how tokens are rotated.
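
As a rough illustration of what “agent hygiene” could look like in practice, the sketch below flags connectors that hold broad scopes or tokens that have not rotated recently. The Connector fields, scope labels, and seven-day threshold are illustrative assumptions, not any vendor’s schema or API.

    # Hypothetical agent-hygiene check: flag connectors with broad scopes
    # or stale tokens. All names and thresholds are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Connector:
        name: str               # e.g. "gmail-deep-research"
        owner: str              # team or service account responsible for it
        scopes: list[str]       # OAuth-style scopes the agent holds
        last_rotated: datetime  # UTC timestamp of the last token rotation

    BROAD_SCOPES = {"mail.read_all", "drive.full", "crm.export"}  # assumed labels
    MAX_TOKEN_AGE = timedelta(days=7)                             # assumed policy

    def hygiene_findings(inventory: list[Connector]) -> list[str]:
        # Return human-readable findings for review during incident response.
        now = datetime.now(timezone.utc)
        findings = []
        for c in inventory:
            risky = BROAD_SCOPES.intersection(c.scopes)
            if risky:
                findings.append(f"{c.name}: broad scopes {sorted(risky)} held by {c.owner}")
            if now - c.last_rotated > MAX_TOKEN_AGE:
                findings.append(f"{c.name}: token unrotated for {(now - c.last_rotated).days} days")
        return findings

Run on a schedule, a check like this turns “which agents are connected and what can they touch” into a reviewable report rather than tribal knowledge.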

For consumers, the exposure is indirect but tangible. SaaS breaches already drive higher fraud losses and support burdens. If consumer-facing agents leak data through connectors, recovery processes will look much like identity theft cases: long, painful, and trust-eroding. Complaint volume rises, refunds increase, and churn accelerates when recovery fails.

For vendors, the expectation shifts. It is no longer enough to patch vulnerabilities after disclosure. Buyers and regulators will ask for systematic controls: default-deny connectors, short-lived tokens, mandatory human approval for sensitive actions, and visible authenticity signals. Vendors that deliver these controls will onboard customers more smoothly and retain them longer.
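
One way to make “default deny” concrete is a small policy-as-code check: every connector action is blocked unless an explicit allow rule exists, and individual rules can require human approval. The rule shape, connector names, and action names below are assumptions for illustration, not any vendor’s policy language.

    # Minimal default-deny policy sketch; connectors, actions, and the
    # approval flag are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        connector: str
        action: str
        requires_approval: bool = False

    ALLOW_RULES = {
        Rule("gmail-deep-research", "read_message"),
        Rule("gmail-deep-research", "send_message", requires_approval=True),
        Rule("crm-sync", "read_record"),
    }

    def evaluate(connector: str, action: str, approved_by_human: bool) -> bool:
        # Default deny: permit only explicitly allowed actions, and hold
        # approval-gated actions until a human has signed off.
        for rule in ALLOW_RULES:
            if rule.connector == connector and rule.action == action:
                return approved_by_human if rule.requires_approval else True
        return False  # anything not explicitly allowed is denied

    # Reads pass, unapproved sends are held, unknown connectors are denied.
    assert evaluate("gmail-deep-research", "read_message", approved_by_human=False)
    assert not evaluate("gmail-deep-research", "send_message", approved_by_human=False)
    assert not evaluate("unknown-connector", "export_all", approved_by_human=True)

Expressing the allowlist as code keeps it reviewable and versioned, which is exactly what settings buried in dashboards lack.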

Bottom Line

AI agents are no longer a side note. They are an attack surface, an operational risk, and now a business category. Managing that risk will come down to three baseline controls:

  1. Agent policy enforcement. Which connectors and which actions are allowed, under what scope? Enterprises want policy as code, not settings buried in dashboards.
  2. Token lifecycle management. Short-lived, audience-scoped tokens with automatic revocation after anomalies. Long-lived tokens will be treated as liabilities (see the sketch after this list).
  3. Human oversight. Proof-of-presence or human approval for sensitive flows like email sends, payments, or code commits. AI may act quickly, but high-value actions still require human eyes.
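
As a rough companion to item 2, here is a standard-library sketch of that token lifecycle: tokens are short-lived, bound to a single audience, and revocable the moment an anomaly is detected. The fifteen-minute lifetime and the TokenService shape are assumptions for illustration.

    # Sketch of short-lived, audience-scoped tokens with anomaly-driven
    # revocation; lifetimes and class names are illustrative assumptions.
    import secrets
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    TOKEN_TTL = timedelta(minutes=15)  # assumed short lifetime

    @dataclass
    class AgentToken:
        value: str
        audience: str          # the single service this token may call
        expires_at: datetime   # UTC expiry

    class TokenService:
        def __init__(self) -> None:
            self._revoked: set[str] = set()

        def issue(self, audience: str) -> AgentToken:
            return AgentToken(
                value=secrets.token_urlsafe(32),
                audience=audience,
                expires_at=datetime.now(timezone.utc) + TOKEN_TTL,
            )

        def revoke(self, token: AgentToken) -> None:
            # Called by anomaly detection, e.g. on unusual scope use.
            self._revoked.add(token.value)

        def is_valid(self, token: AgentToken, audience: str) -> bool:
            return (
                token.value not in self._revoked
                and token.audience == audience
                and datetime.now(timezone.utc) < token.expires_at
            )

The point of pairing a short lifetime with audience binding is that a leaked token is useful only briefly and only against one service, which limits how far an abused agent can reach.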

Just as MFA became the baseline a decade ago, agent governance will be the baseline for the coming one. Trust will depend on demonstrating agent governance in production, not in slide decks. Markets are already rewarding those who move first. The next wave of winners will be the vendors who turn agent security from patchwork into platforms.

Supporting Signals

ShadowLeak connector flaw

  • Event: Zero-click flaw in ChatGPT’s Deep Research exfiltrated Gmail data through a connector.
  • Why it matters: Connectors act as super credentials; one weak integration extends compromise across tenants and data stores.
  • Watch: Connector inventories, scope audits, and vendor hardening of agent integrations.

Netskope IPO

  • Event: Netskope raised ~$900M and traded up.
  • Why it matters: Market appetite for cloud-native platforms is intact.
  • Watch: Peers advancing IPO or M&A timelines.

CrowdStrike acquisitions

  • Event: Pangea and Onum deals; launch of Agentic Security Workforce.
  • Why it matters: Platform race now runs through agent governance.
  • Watch: Proof points in reduced agent-related incidents.