Don't Be the Next TeamPCP Victim
This Canadian Startup Is Defending the New Frontier of Software Supply Chain Security
On March 19, 2026, a threat actor operating under the handle TeamPCP exploited a single misconfigured GitHub Actions workflow in the repository for Trivy, Aqua Security's open-source vulnerability scanner and one of the most trusted in the world. They compromised a service account, force-pushed malicious code to 76 of 77 version tags, and quietly embedded a credential stealer into a tool that security teams around the world were actively using to protect themselves.
The payload harvested SSH keys, cloud tokens, Kubernetes secrets, and npm credentials from CI/CD environments. Those stolen tokens fueled CanisterWorm, a self-propagating worm that cascaded through 66+ npm packages using blockchain-based command infrastructure that couldn’t be conventionally taken down. Within eight days, the same campaign had compromised GitHub Actions, Docker Hub, npm, PyPI, and the VS Code extension marketplace. One misconfiguration. Five ecosystems. An estimated 300 GB of exfiltrated data. Over 500,000 stolen credentials.
The attacker didn’t write a zero-day. They didn’t break encryption. They turned a trusted security tool into a weapon, and most organizations never saw it coming.
This Isn’t Your Father’s Supply Chain Attack
The industry has been talking about software supply chain risk since SolarWinds in 2020. But something fundamental has shifted in the last twelve months.
The attack surface is no longer human-scale.
The average application ships with over 1,100 open-source components. Most of those were chosen by nobody on your team: they’re transitive dependencies, packages that your packages depend on. And increasingly, even first-order dependency decisions are no longer made by humans.
AI coding agents (Cursor, GitHub Copilot Workspace, Claude Code, and their descendants) are now writing code, selecting libraries, and opening pull requests without a human ever touching the keyboard. A study of over 117,000 dependency changes found that AI agents select known-vulnerable dependency versions 50% more often than humans. They do this at speeds that compress the security review window to essentially zero.
The same week as TeamPCP, attackers also compromised Axios (the HTTP library downloaded over 100 million times a week) by adding a malicious dependency that ran a remote access trojan on install, then self-destructed before anyone noticed. The industry average time to detect a supply chain breach is 267 days. On 135 monitored endpoints, that malware executed and phoned home to the attacker’s server within 89 seconds of install.
As a16z put it bluntly in their April 2026 analysis: "We are building a world where machines write the code, machines choose the dependencies, and machines ship the updates." If security doesn't keep pace, the organizations deploying those agents will be the ones left holding the bag.
The Three Attack Surfaces Nobody Is Defending
There are now three distinct attack surfaces that most security teams have almost no visibility into.
The developer machine. Developer machines are running MCP servers, AI models, IDE extensions, and browser plugins that directly influence what code gets written and committed. Credentials accumulate across dotfiles, .env files, and environment variables. Security teams typically have no real-time inventory of what’s running.
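A minimal sketch of what such an inventory could look like. The paths and config shape here are assumptions based on common conventions (VS Code keeps extensions in `~/.vscode/extensions`; Cursor- and Claude-style clients list MCP servers under an `mcpServers` key in a JSON config); this is illustrative, not Boost Security's implementation:

```python
import json
from pathlib import Path

def inventory_dev_endpoint(ext_dir: Path, mcp_config: Path) -> dict:
    """Build a minimal endpoint inventory: installed IDE extensions
    plus configured MCP servers."""
    # Each installed extension lives in its own subdirectory.
    extensions = (
        sorted(p.name for p in ext_dir.iterdir() if p.is_dir())
        if ext_dir.is_dir()
        else []
    )
    mcp_servers = []
    if mcp_config.is_file():
        config = json.loads(mcp_config.read_text())
        # Common MCP client configs keep servers under "mcpServers".
        mcp_servers = sorted(config.get("mcpServers", {}).keys())
    return {"extensions": extensions, "mcp_servers": mcp_servers}
```

Run periodically across the fleet (e.g. against `~/.vscode/extensions` and the agent's MCP config) and diff the results, and you have the beginnings of the real-time inventory most teams lack today.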
The coding agent. AI coding agents don’t just suggest code: they act. They install dependencies, invoke external tools via MCP servers, execute builds, and push commits. Most security policies were written for humans. There is no category in your existing tooling for “approve this MCP server plugin before the agent can use it.” Attackers are beginning to exploit this gap explicitly.
AI-generated code. LLMs regularly invent package names that don’t exist. Nearly 20% of AI-recommended packages are fabrications, and attackers register these hallucinated names in advance with malicious payloads. The technique is called “slopsquatting.” One researcher uploaded a dummy package with a commonly hallucinated name and watched it accumulate 30,000 downloads, largely from AI-driven workflows, in weeks.
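One simple defense against slopsquatting is to gate agent-suggested dependencies against an allowlist, flagging anything that is not approved but sits suspiciously close to an approved name. A hedged sketch (the allowlist and thresholds are made up for illustration; a production system would also check registry metadata like package age and publisher):

```python
from difflib import get_close_matches

# Hypothetical allowlist: packages your organization has already vetted.
APPROVED = {"requests", "numpy", "pandas", "flask", "boto3"}

def classify_dependency(name: str) -> str:
    """Triage an AI-suggested package name before it gets installed.

    "approved"   - already on the allowlist
    "suspicious" - not approved, but nearly identical to an approved
                   name (likely typosquat or hallucinated variant)
    "review"     - unknown package; needs a human or behavioral check
    """
    if name in APPROVED:
        return "approved"
    # A near-miss on a vetted name is the classic squatting signature.
    if get_close_matches(name, APPROVED, n=1, cutoff=0.8):
        return "suspicious"
    return "review"
```

The point is not that string distance catches everything; it is that the check runs at suggestion time, before `pip install` or `npm install` ever executes.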
Traditional security tooling is blind to all three surfaces. Most software composition analysis tools work by checking dependencies against CVE databases. But a newly planted backdoor doesn’t have a CVE. Running npm audit on the compromised Axios version returned a clean bill of health because the malware had already self-destructed.
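A concrete example of a behavioral signal that exists whether or not a package has a CVE: npm lifecycle scripts that execute automatically on install, the most common delivery mechanism for install-time malware. A minimal sketch (one signal among the many a real behavioral analyzer would combine):

```python
import json

# Lifecycle hooks that npm runs automatically during `npm install`.
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def install_time_behavior(package_json: str) -> list[str]:
    """Return the scripts a package would run at install time.

    A CVE lookup says nothing here; this asks what the package
    actually *does* the moment it lands on a machine.
    """
    scripts = json.loads(package_json).get("scripts", {})
    return [f"{hook}: {scripts[hook]}" for hook in INSTALL_HOOKS if hook in scripts]
```

A package whose `postinstall` spawns a network client or decodes an obfuscated blob deserves scrutiny on day zero, long before any advisory database hears about it.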
Securing AI Development at the Source
This is the problem Montreal-based Boost Security was built to solve, and it is why we at Amiral Ventures are proud to have backed them.
Boost Security’s core insight is deceptively simple: the right place to stop a supply chain attack is not the CI/CD pipeline gate. By the time code reaches your scanner, credentials may already be exfiltrated and your developer machine may already be compromised. You need to move protection upstream: to the moment a prompt is sent to a coding agent, to the moment a dependency is suggested, to the moment a plugin is installed.
Their newly launched Developer Endpoint Security platform gives security teams centralized visibility and governance across the full AI development workflow:
Developer Endpoint Visibility: A real-time inventory of every coding agent, MCP server, AI model, IDE extension, and package running across your developer fleet, the exact visibility gap that made TeamPCP possible.
Coding Agent Safety: Governance controls ensuring agents only run with approved MCP servers and plugins, with configuration drift flagged before it becomes an incident.
Supply Chain Security: Behavioral analysis of packages and extensions: evaluating what code actually does, not just checking it against a CVE list.
Secure Agentic Code Generation: Guardrails embedded into the coding agent workflow so generated code follows organizational secure coding guidelines before being committed.
Data Leakage Prevention: Outbound prompt scanning to detect and mask credentials and API keys before they reach external LLMs.
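The last item in the list above, outbound prompt scanning, is the easiest to sketch. The patterns below are illustrative only; a real DLP layer would use a large, maintained ruleset plus entropy checks, and nothing here reflects Boost Security's actual implementation:

```python
import re

# Illustrative credential patterns (a production ruleset is far larger).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),              # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]

def mask_prompt(prompt: str) -> str:
    """Mask credential-shaped strings before a prompt leaves the machine
    for an external LLM."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

Sitting between the agent and the model API, even this crude filter would have blunted the kind of credential harvesting that powered CanisterWorm.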
“AI coding agents are fundamentally changing how software gets built, but security has largely remained focused on scanning code after the fact,” said CEO and Founder Zaid Al Hamami. “Developer Endpoint Security moves protection upstream. It secures the developer machine, governs the coding agent, and ensures safer code is generated from the start.”
Why We Invested
A few things stood out when we evaluated Boost Security at Amiral.
The team. Zaid co-founded the company alongside Rajiv Sinha, who built Oracle's first application security program and later worked at Cigital, the leading North American AppSec consulting firm. This is Zaid's second software security startup: his first, Immunio, built the first RASP technology and was acquired by Trend Micro. Between them, that's two decades of seeing every era of software security from the inside. Their VP of Security Research, François Proulx, is a co-founder of NorthSec and a veteran AppSec researcher who has discovered 0-days in Terraform providers, AWS Helm Charts, and major GitHub Actions; he was a recognized voice in supply chain security long before most people knew it was a category.
The timing. The threat landscape shifted materially in the past 12 months. AI coding agents have moved from prototype to production infrastructure at thousands of organizations. The attack surface they create (developer machines running MCP servers, agents making autonomous dependency decisions, LLMs hallucinating package names) didn't meaningfully exist two years ago. Boost Security is building for the world that actually exists now.
The market. After SolarWinds, Log4Shell, XZ Utils, TeamPCP, and the Axios attack, the CISO conversation has shifted from “should we care about this?” to “how do we actually defend against it?” The specific category of AI-native developer security (securing the coding agent workflow, not just the code it produces) is still early. There are very few credible solutions addressing all three attack surfaces described above. Boost Security is one of them.
What This Means for You
If you're a CISO or AppSec leader, here are three questions worth asking your team today:
1. Do you have a real-time inventory of every AI tool, MCP server, and IDE extension running on your developer machines?
2. Do your security policies govern what AI coding agents can connect to and what packages they can install?
3. Are you checking packages for behavioral signals, or just CVE matches?
If the answers are uncertain, this is exactly the gap Boost Security is built to close.