Vercel got breached on April 19. The entry point wasn’t Vercel. It was Context.ai, a third-party AI tool a Vercel employee had connected to their Google Workspace with deployment-level OAuth scopes.
When Context.ai’s OAuth app got compromised, the attacker didn’t need to breach Vercel. They had a badge already. From the employee’s Workspace account they pivoted into Vercel’s internal environments, pulled environment variables belonging to a subset of customers, and claim to have lifted NPM and GitHub tokens in the process. ShinyHunters are reportedly asking $2M.
Last month I wrote that your AI tools are the attack surface. That post was about prompt injection: feed your agent evil text and it exfiltrates tokens. Vercel is the next vector. No prompt injection required. The vendor of your AI tool gets breached and inherits your trust envelope wholesale.
The conclusion most teams will resist: the safest AI tool is still the one you didn’t install.
Context.ai Wasn’t Hacked At Vercel. It Was Hacked As Vercel.
This is the reframe that matters. Context.ai didn’t get through a firewall. It walked in through the front door holding a badge a Vercel employee had issued it during an OAuth flow.
OAuth scopes are the part we consent to and then forget. Deployment-level Google Workspace access lets an app read mail, enumerate users, browse Drive, and chain into any other app tied to the same account. That’s not a software integration. That’s an employee.
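To make the badge analogy concrete, here’s a minimal sketch of the kind of audit this implies: flag any grant whose scopes amount to employee-level access. The scope URIs are real Google OAuth scopes, but the app names and the choice of which scopes count as "broad" are my own illustrative assumptions.

```python
# Sketch: flag OAuth grants whose scopes exceed what the tool plausibly needs.
# The scope URIs are real Google OAuth scopes; the "broad" tier is a judgment call.
BROAD_SCOPES = {
    "https://mail.google.com/",                              # full mail access
    "https://www.googleapis.com/auth/drive",                 # full Drive access
    "https://www.googleapis.com/auth/admin.directory.user",  # enumerate users
}

def risky_grants(grants: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per app, the scopes that amount to employee-level access."""
    return {
        app: scopes & BROAD_SCOPES
        for app, scopes in grants.items()
        if scopes & BROAD_SCOPES
    }

grants = {
    "context-ai": {
        "https://mail.google.com/",
        "https://www.googleapis.com/auth/drive",
    },
    "calendar-widget": {"https://www.googleapis.com/auth/calendar.readonly"},
}
print(risky_grants(grants))  # only context-ai shows up
```

The read-only calendar widget passes; anything holding full-mail or full-Drive scopes is a badge, and should be treated like one.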
“When one OAuth token can compromise dev tools, CI pipeline, secrets and deployment simultaneously, something architectural has gone wrong.” — Hacker News, on the Vercel incident
We have a mental model for this in the physical world. If a contractor comes on site, they get a visitor badge, an escort, and a scoped area. If they leave the company, the badge gets deactivated. If their employer gets breached, you pull the badge immediately.
We do none of this for AI tools.
This Is a Pattern Now
Vercel is not the first node in this chain, and it won’t be the last.
Three weeks ago, the litellm supply-chain attack got in through Trivy, the security scanner thousands of teams run to protect their builds. A compromised scanner stole CI credentials, which were then used to publish a backdoored gateway that by definition holds every LLM key in your org. The tool you installed to defend you became the way in.
A week later npm had a very bad day. Axios got maintainer-hijacked and served a trojan to 100 million weekly downloads. Anthropic shipped a source map that leaked the Claude Code source through the same registry on the same day. One attack, one accident, same pipeline.
Now Context.ai into Vercel. Same shape, different node.
Each compromise lands one hop upstream of the thing you cared about. The package you depend on. The scanner you bought to catch bad packages. The AI tool your employee installed to summarize the scanner’s report. That upstream node is the soft spot, and we keep adding more of them.
Your Stack Has More Of These Than You Think
Count the AI integrations with OAuth scopes across your org right now. A real count, not a guess.
- Every dev with Cursor or Claude Code has connectors to GitHub, Linear, Jira, or Slack
- Every MCP server installed last weekend is a persistent identity with its own scopes
- Every custom GPT or Claude skill with connectors is holding a Workspace grant
- Every AI browser extension with network access is an exfiltration channel
None of these went through procurement. None of them show up in your IAM review. Most of them were installed by an engineer who wanted to ship faster this afternoon.
Ask your team to list every AI tool they’ve granted OAuth scopes to in the last 90 days. Compare against the list your security team thinks exists. The gap is your actual attack surface. Shadow AI is the new shadow IT, except the grants are deeper and the tools update themselves.
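That gap check is worth scripting, because it’s the part people hand-wave. A hedged sketch, assuming you collect grant inventories as simple rows; every app name and date here is invented:

```python
from datetime import date, timedelta

# Hypothetical inventory rows: (app, user, grant_date). All names invented.
reported = [
    ("cursor-github", "alice", date(2025, 4, 1)),
    ("notetaker-workspace", "bob", date(2025, 3, 20)),
    ("mcp-linear", "carol", date(2024, 11, 2)),
]
known_to_security = {"cursor-github"}

def shadow_grants(reported, known, today=date(2025, 4, 25), window_days=90):
    """Grants issued in the window that security has never reviewed."""
    cutoff = today - timedelta(days=window_days)
    return sorted({app for app, _user, granted in reported
                   if granted >= cutoff and app not in known})

print(shadow_grants(reported, known_to_security))
# ['notetaker-workspace'] -- mcp-linear falls outside the 90-day window
```

Everything that prints is an identity in your org that nobody with a security title has ever looked at.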
Why Your Pipeline Guardrails Missed It
This is the bit the guardrail story gets wrong. We spent five years putting controls on the code path. Branch protection. Required reviews. Signed commits. SAST in CI. Secret scanning on push. All good. All necessary. All irrelevant to Vercel.
The Vercel attack didn’t go through a pull request. It didn’t trip a CI check. It didn’t deploy a malicious build. The attacker authenticated as a human, clicked through a dashboard, and read environment variables that the platform was designed to hand out on request.
Code-level guardrails protect against bad code. They don’t protect against legitimate access from compromised identities, and they don’t see which AI tools your engineers installed this week. The identity layer is where the 2026 threat lives, and most teams don’t have an inventory, let alone controls.
Build It Yourself
The first and best mitigation isn’t on any IAM checklist. It’s this: don’t install the tool.
Three years ago, buying a third-party AI tool was the pragmatic choice. Building your own meant weeks of integration work that wouldn’t ship the feature the business asked for. That math flipped. Most of what a SaaS AI tool does for you now is a few hundred lines of script against an API you already have credentials for. A meeting summarizer. A Slack digest. A Jira triage bot. A PR reviewer. An afternoon each, maybe a weekend for the complicated ones.
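To ground the "few hundred lines" claim, here’s roughly what the core of a homegrown Slack digest looks like. The summarization core is pure and testable; in the real script you’d fetch messages with Slack’s `conversations.history` and post the result with `chat.postMessage`, using a bot token you scoped yourself. The sample data is invented.

```python
# Sketch of a homegrown Slack digest. The rollup logic is a pure function;
# fetching and posting would use Slack's Web API with a narrowly scoped bot token.
from collections import Counter

def build_digest(messages: list[dict], top_n: int = 3) -> str:
    """Roll a day's messages into a short digest, busiest threads first."""
    by_thread = Counter(m["thread"] for m in messages)
    lines = [f"Digest: {len(messages)} messages across {len(by_thread)} threads"]
    for thread, count in by_thread.most_common(top_n):
        lines.append(f"- {thread}: {count} messages")
    return "\n".join(lines)

# Real version: pull via conversations.history, post via chat.postMessage.
# A scoped bot token in your secrets manager, not a Workspace-wide OAuth grant.
sample = [
    {"thread": "incident-db-failover", "text": "..."},
    {"thread": "incident-db-failover", "text": "..."},
    {"thread": "release-4.2", "text": "..."},
]
print(build_digest(sample))
```

That’s the whole attack surface: one token, one script, in your repo, reviewed like any other code.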
Briefs as code makes the same argument for planning docs. Karpathy made it for dependencies after litellm went down: yoink the functionality with an LLM rather than pull in the package.
A local script with a scoped token you control is a smaller attack surface than a SaaS vendor’s OAuth grant into your Workspace. The supplier can’t get breached if there’s no supplier.
Not every AI tool can be replaced this way. Some genuinely need vendor-side models, proprietary data, or platform features you can’t reproduce. But most of them can. Before clicking through the next OAuth consent screen, ask whether the thing on the other side is something you could build in a day. Often it is.
For The Tools You Can’t Replace
If the tool genuinely has to stay, the mitigations aren’t exotic. IAM hygiene we already do for humans, applied to the non-humans we keep installing.
- Inventory AI OAuth grants. Workspace admin, GitHub Organization settings, and cloud IAM all expose third-party app grants. Most orgs have never looked.
- Scope ruthlessly. If a tool asks for deployment or full-drive scopes to do autocomplete, say no. Prefer fine-grained tokens.
- Mark secrets as sensitive. Vercel’s sensitive env vars survived because they’re encrypted at rest and can’t be read back through the API. AWS Secrets Manager, GCP Secret Manager, Doppler and 1Password offer the same write-only pattern. The read-back default is the attack surface.
- Rotate on supplier incident. When a vendor discloses a breach, revoke the OAuth grant and rotate every credential it touched. Don’t wait for them to tell you you’re affected. They don’t know yet.
- Govern MCP servers like .env files. Commit the config, review it, and scope each server to the single repo it needs. Don’t wire a general-purpose GitHub MCP to your whole org.
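The rotate-on-incident step only happens fast if it’s scripted before the incident. A minimal dry-run planner, assuming you maintain a mapping from vendors to the grants and credentials they touch; every vendor name and credential in this mapping is invented:

```python
# Dry-run planner: given a breached vendor, emit every revocation and
# rotation you'd execute. The vendor-to-credential mapping is illustrative.
VENDOR_MAP = {
    "context-ai": {
        "oauth_grants": ["google-workspace:alice@example.com"],
        "credentials": ["GITHUB_TOKEN", "NPM_TOKEN", "VERCEL_ENV_VARS"],
    },
    "scanner-co": {
        "oauth_grants": ["github-org:example"],
        "credentials": ["CI_DEPLOY_KEY"],
    },
}

def incident_plan(vendor: str) -> list[str]:
    """Ordered actions: revoke the grant first, then rotate what it touched."""
    entry = VENDOR_MAP[vendor]
    plan = [f"REVOKE {grant}" for grant in entry["oauth_grants"]]
    plan += [f"ROTATE {cred}" for cred in entry["credentials"]]
    return plan

for action in incident_plan("context-ai"):
    print(action)
```

Revoke before you rotate, or the attacker watches you mint the new credentials with the badge they still hold.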
The Uncomfortable Part
We’re adding non-human identities to our orgs at the rate AI tools ship, which is daily. Each one is a contractor with a badge, maintained by a startup whose security posture you have never audited. Vercel audited Context.ai exactly as carefully as you have audited the last AI tool your engineer installed, which is to say, not at all.
The prior post argued the AI tools you use to write code are the attack surface. That’s still true. But the Vercel breach sharpens it. The attack surface isn’t just the tool’s behavior when it reads your untrusted content. It’s the tool’s existence in your IAM graph, holding scopes, waiting for its vendor to get popped.
You wouldn’t let a dozen random-startup contractors roam your codebase unescorted. Stop letting their AI tools do it. Build the thing yourself when you can. Scope, rotate, and revoke what you can’t replace. The safest AI tool is still the one you didn’t install.