1. Headline & intro
Another week, another twist in the Delve saga – and another reminder that much of startup security is still more paperwork than protection. After LiteLLM and Lovable, we now learn that Delve also handled security certifications for Context AI, the company whose app became the entry point for a recent breach at hosting giant Vercel.
This isn’t just inside baseball about one troubled Y Combinator alum. It’s a warning shot for anyone buying or building SaaS and AI tools: the compliance machine we’ve come to rely on is creaking, and trust is being outsourced to the wrong places.
2. The news in brief
According to TechCrunch, Delve – a compliance startup already under heavy scrutiny – was the company that provided security certifications for Context AI, the AI agent training firm connected to Vercel’s recent security incident.
TechCrunch reports that Vercel was breached after an employee installed a Context AI app and linked it to a corporate Google account. Attackers then abused that account to reach some of Vercel’s internal systems and access limited customer data.
Context AI confirmed it had used Delve for its certifications, but said it moved to competitor Vanta and an independent audit firm (Insight Assurance) after allegations about Delve surfaced in March. Earlier, another Delve customer, LiteLLM, was hit by a supply‑chain attack when hackers planted malware in its open-source code; LiteLLM also dropped Delve and is re‑certifying.
Lovable, a former Delve customer, separately disclosed that it had accidentally exposed customer chat data due to a configuration issue and had initially dismissed vulnerability reports. Meanwhile, an anonymous whistleblower known as “DeepDelver” has published further claims about Delve’s behaviour. Delve declined to comment to TechCrunch.
3. Why this matters
On the surface, this looks like a single shaky startup causing trouble for a few unlucky customers. In reality, it exposes a much broader problem: the industrialisation of “check‑the‑box” security.
Delve’s pitch – like that of many rivals – was simple: we’ll automate your path to SOC 2, ISO 27001 and other badges, so you can sell faster to enterprises. When everything works, everyone wins: startups shorten sales cycles, auditors get repeat business and buyers see familiar logos on a PDF.
But as soon as an alleged weak link appears – rubber‑stamped audits, copied tools, fabricated customer data – the whole trust chain starts to wobble. If the certifying company is unreliable, how much are those certificates worth? And if large platforms like Vercel are indirectly affected, the blast radius is no longer “just” one startup’s risk.
Winners in the short term are Delve’s more established competitors, especially those with stronger brands and deeper auditor networks. Independent audit firms also gain leverage; automation alone suddenly looks insufficient without humans who will actually say “no”.
The losers are early‑stage SaaS and AI teams that genuinely took security seriously but leaned on fast‑track compliance vendors to get through procurement. They now face extra scepticism, slower deals and expensive re‑certification – even if they did nothing wrong technically.
Most importantly, customers lose clarity. A SOC 2 report or ISO certificate is supposed to simplify due diligence. After Delve, many buyers will have to start asking a new question: who watched the watcher?
4. The bigger picture
The Delve story sits at the intersection of three trends.
First, the rise of compliance‑as‑a‑service. Over the past five years, tools promising “SOC 2 in weeks” have exploded in Silicon Valley. They connect to your AWS, GitHub and HR systems and spit out dashboards, policies and evidence collections. That’s genuinely useful – but it also tempts everyone to treat audits as an automated subscription instead of a hard‑earned assessment.
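To make concrete what “automated evidence collection” means in practice, here is a minimal sketch: a control check runs over configuration data already pulled from a cloud account and emits a timestamped evidence record for an auditor. The control ID and record shape are invented for illustration, not taken from any real compliance platform.

```python
import json
from datetime import datetime, timezone

def check_public_buckets(buckets: list[dict]) -> dict:
    """Evaluate a pre-fetched list of S3-style bucket configs and
    produce an evidence record an auditor could review."""
    failing = [b["name"] for b in buckets if not b.get("block_public_access")]
    return {
        "control": "CC-6.1-storage-public-access",  # hypothetical control ID
        "status": "fail" if failing else "pass",
        "failing_resources": failing,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Example run against two mock bucket configurations
evidence = check_public_buckets([
    {"name": "prod-logs", "block_public_access": True},
    {"name": "marketing-assets", "block_public_access": False},
])
print(json.dumps(evidence, indent=2))
```

The point of the sketch is how little judgment is involved: the tool can tell you a bucket is public, but only a human auditor can decide whether that finding should block a certification.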
Second, the normalisation of supply‑chain attacks. From SolarWinds to the xz Utils backdoor, attackers increasingly go after the vendors behind your vendors. In this case, the path was an app from Context AI, a customer of Delve, into Vercel. None of the individual incidents TechCrunch describes are spectacular on their own. Together, they show how fragile trust becomes when every tool is integrated with every other tool.
Third, the AI tooling gold rush. Startups like Context AI are racing to plug agents and copilots into corporate accounts, especially Google Workspace, Microsoft 365 and developer platforms. Those integrations are powerful – and a gift to attackers if security and governance aren’t airtight. The Vercel incident is an early example of what “OAuth‑driven” breaches in the AI era will look like.
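The OAuth risk is easy to state in code. A minimal sketch of a pre-install review step: compare the scopes a third-party app requests against known broad-access scopes and suggest narrower alternatives. The scope strings are real Google OAuth scopes; the review policy itself is a hypothetical example, not Vercel’s or anyone else’s actual process.

```python
# Real Google OAuth scope URLs, mapped to what they actually grant
BROAD_SCOPES = {
    "https://mail.google.com/": "full Gmail access",
    "https://www.googleapis.com/auth/drive": "full Drive access",
    "https://www.googleapis.com/auth/admin.directory.user": "directory read/write",
}

# Narrower scopes that often cover the same stated use case
NARROW_ALTERNATIVES = {
    "https://www.googleapis.com/auth/drive": "https://www.googleapis.com/auth/drive.file",
}

def review_scopes(requested: list[str]) -> list[str]:
    """Return human-readable warnings for overly broad scope requests."""
    warnings = []
    for scope in requested:
        if scope in BROAD_SCOPES:
            msg = f"{scope} grants {BROAD_SCOPES[scope]}"
            if scope in NARROW_ALTERNATIVES:
                msg += f"; consider {NARROW_ALTERNATIVES[scope]}"
            warnings.append(msg)
    return warnings

print(review_scopes([
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/userinfo.email",
]))
```

An agent that legitimately needs to read a handful of documents does not need full Drive access – but full access is exactly what makes a compromised integration so valuable to an attacker.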
We’ve been here before in other forms. Fintech had Wirecard, health tech had Theranos: charismatic narratives plus weak oversight, sold to buyers who mostly wanted the comfort of someone else’s guarantee. The difference with Delve and its ecosystem is that the fallout is more subtle – not billions missing, but thousands of companies potentially over‑estimating how safe their stack really is.
5. The European / regional angle
For European companies, this isn’t a distant American drama. Many EU startups, banks and corporates already rely on U.S. compliance platforms and U.S. auditors to obtain SOC 2 or similar reports so they can sell globally. Those reports then get stapled next to GDPR documentation as if they were all part of the same regulatory universe.
But European law doesn’t work like that. Under GDPR and the upcoming EU AI Act, accountability stays with the controller and provider. You cannot point to a certificate from a vendor like Delve and claim you’ve outsourced responsibility. NIS2 and DORA go even further for critical sectors, forcing companies to assess third‑party risk in depth and document it.
The Delve saga gives EU regulators ammunition. It underlines why Brussels keeps insisting on risk‑based approaches and continuous oversight instead of one‑time attestations. Expect more guidance – and maybe enforcement – around how organisations vet their security partners and AI tool vendors.
It also opens a lane for European alternatives: regional compliance tools that integrate EU regulations from the ground up, and local audit firms that understand both SOC 2 and GDPR, not just one of them. For privacy‑conscious markets like Germany or the Nordics, trust in the auditor can matter as much as trust in the software itself.
6. Looking ahead
Where does this go from here?
Delve will likely continue to bleed customers, at least among more mature startups and any company with in‑house security leadership. Whether it survives depends on two things: how regulators view the whistleblower allegations, and whether investors still see a path to rehabilitating the brand. An acqui‑hire or quiet shutdown would not be surprising.
More interesting than Delve’s fate is the second‑order impact on the compliance industry. Expect:
- Enterprise buyers to expand security questionnaires: not just “do you have SOC 2?” but “who issued it, how do they work, and how often are controls re‑tested?”
- Greater demand for independent audit firms that are clearly separate from automation vendors.
- Pressure on accelerators and VCs to stop treating a shiny compliance logo as equivalent to a real security program in their portfolio dashboards.
In AI specifically, security reviews of agent platforms and integrations will become a mandatory step before rollout, not an afterthought. CISOs will want to know exactly what an AI tool can do with a Google or Microsoft account and how revocation, monitoring and anomaly detection work in practice.
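What “monitoring and anomaly detection in practice” might mean, at its simplest: compare recent OAuth token activity against a baseline and flag deviations. This is an illustrative sketch only – the log fields, spike threshold and assumption that both windows cover equal time periods are mine, not any vendor’s real schema.

```python
from collections import Counter

def detect_anomalies(baseline: list[dict], recent: list[dict],
                     spike_factor: float = 3.0) -> list[str]:
    """Flag token activity from previously unseen countries, or
    request-volume spikes relative to the baseline window."""
    known_countries = {e["country"] for e in baseline}
    base_counts = Counter(e["token_id"] for e in baseline)
    recent_counts = Counter(e["token_id"] for e in recent)

    alerts = []
    for e in recent:
        if e["country"] not in known_countries:
            alerts.append(f"token {e['token_id']}: request from new country {e['country']}")
    for token, n in recent_counts.items():
        # Compare against the baseline count, assuming equal-length windows
        if n > spike_factor * max(base_counts.get(token, 0), 1):
            alerts.append(f"token {token}: volume spike ({n} calls)")
    return alerts

baseline = [{"token_id": "t1", "country": "US"}] * 4
recent = [{"token_id": "t1", "country": "US"}] * 15 + [{"token_id": "t1", "country": "RU"}]
for alert in detect_anomalies(baseline, recent):
    print(alert)
```

Even a crude check like this would surface the pattern in the Vercel incident – a newly installed integration suddenly doing things no prior token had done – which is exactly the visibility CISOs will now demand before rollout.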
Unanswered questions remain: Did any buyer rely on Delve’s certifications in legal contracts? Could we see lawsuits if misrepresentation is proven? And how many other “move fast” compliance players cut corners in ways that simply haven’t surfaced yet?
7. The bottom line
Delve is not just a story about one allegedly reckless startup; it’s a mirror held up to an ecosystem that equated PDFs and badges with real security. If we keep outsourcing trust to whoever promises “SOC 2 in a sprint,” we’ll keep being surprised by breaches that were entirely predictable.
If you build or buy SaaS and AI products, the question is no longer “do they have a certificate?” but “do I understand – and trust – the people and processes behind it?”