This report was generated by our Deep Research agent and may contain mistakes.
Delve was a San Francisco-based compliance automation startup founded in 2023 by MIT dropouts Karun Kaushik (CEO) and Selin Kocalar (COO). The company participated in Y Combinator's Winter 2024 batch and raised $35.4M across a seed round and a $32M Series A led by Insight Partners at a $300M valuation, all within roughly 18 months of founding. Its stated product was an AI-native platform that deployed autonomous agents to collect compliance evidence, autofill vendor questionnaires, and continuously monitor infrastructure across frameworks including SOC 2, HIPAA, ISO 27001, GDPR, and PCI DSS.[1]
Delve did not fail from a flawed market thesis or poor timing. The compliance automation market is real, large, and underserved. Instead, the company collapsed under allegations that its core product was a fabrication: that AI automation was simulated through boilerplate evidence generation and low-cost Indian audit mills, that compliance reports were nearly identical across clients, and that a secondary product was an unattributed fork of a competitor's open-source code. The gap between what was sold and what was delivered, if the allegations are accurate, represents one of the more brazen alleged frauds in recent YC history.
As of April 2026, Delve had suspended demos, frozen its sales pipeline, and begun offering complimentary re-audits to customers. Insight Partners scrubbed its investment blog post. At least one named customer, LiteLLM, publicly defected to competitor Vanta. The company had not formally shut down, but its path to recovery appeared structurally compromised. No regulatory findings had been published as of the research date.[2]

Karun Kaushik and Selin Kocalar met during their first week at MIT and dropped out together during their sophomore year in 2023.[3] The decision was deliberate and timed to the AI boom. As Kocalar put it in February 2025: "This is almost like the internet boom. It's a now-or-never opportunity, and we wouldn't sacrifice that for anything."[4]
Both founders carried credentials unusual for their age. Kaushik had conducted AI research at Stanford University's Bintu Lab and helped scale an AI-powered COVID diagnostic tool globally during the pandemic.[5] Kocalar had published eight peer-reviewed papers by age 20 and led an experiment aboard the International Space Station.[6] These credentials made the founding story press-friendly and investor-legible, a fact that, in retrospect, may have accelerated trust beyond what the underlying product warranted.
The pivot to compliance was experience-driven rather than market-mapped. Before Delve, Kaushik and Kocalar were building an AI-powered medical scribe for healthcare providers. When they encountered HIPAA compliance firsthand and found it costly, opaque, and time-consuming, they recognized the problem as a product opportunity.[7] Kaushik articulated the core insight clearly: "Compliance frameworks are standardized. Businesses aren't. That mismatch is why traditional software breaks down and teams fall back to duct-taped workflows across email, Slack, and shared drives."[8]
The insight was genuine. Compliance automation is a real and painful problem for startups seeking SOC 2 or HIPAA certification to close enterprise deals. The existing market, dominated by Vanta and Drata, had grown rapidly but still left room for a more AI-native approach. The founders enrolled in Y Combinator's Winter 2024 batch, relocated from an apartment in Mission Bay to a Financial District office in San Francisco, and began selling aggressively.[9]
What remains unresolved, and is central to the later fraud allegations, is whether the product the founders were selling during this period reflected genuine technical capability or a vision they had not yet built. No independent technical audit of the platform's early architecture has been published. The exact state of the medical scribe product at the time of the pivot is also unknown, making it difficult to assess how much engineering infrastructure carried over into the compliance product.
Delve's stated product was an AI-native compliance platform designed to eliminate the manual labor that makes achieving certifications like SOC 2, HIPAA, ISO 27001, GDPR, and PCI DSS so painful for startups and growth-stage companies.
The core workflow, as marketed, worked roughly as follows: a customer would connect Delve to their internal systems (cloud infrastructure, HR tools, code repositories, access management platforms). Delve's AI agents would then navigate those systems autonomously, capturing screenshots and logs as evidence of compliance controls. The platform would autofill vendor security questionnaires, flag gaps in real time, and maintain continuous monitoring so that compliance didn't decay between annual audits. The customer would then work with an independent auditor, sourced through Delve's network, to issue the final certification.[30]
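The marketed workflow implies evidence records that are traceable to a live system query. As a purely hypothetical illustration of what genuinely machine-collected evidence could look like (Delve's actual implementation was never published; every name, field, and value below is invented for the sketch), consider a record that binds raw API output to a timestamp and a content hash:

```python
"""Hypothetical sketch of machine-collected compliance evidence.
None of this reflects Delve's real platform; it illustrates the kind
of artifact the marketed workflow would produce if genuine."""
import hashlib
import json
from datetime import datetime, timezone

def collect_evidence(control_id: str, source: str, payload: dict) -> dict:
    """Wrap raw system output in a tamper-evident evidence record."""
    body = json.dumps(payload, sort_keys=True)
    return {
        "control_id": control_id,   # e.g. an access-control check ID
        "source": source,           # system the data actually came from
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
    }

# A real agent would pass live API output here (an IAM user listing,
# a repo branch-protection setting); this static payload stands in.
record = collect_evidence(
    "CC6.1",
    "aws-iam",
    {"users_with_mfa": 12, "users_total": 12},
)
```

The point of the hash and timestamp is that fabricated boilerplate cannot be traced back to a concrete query against the client's own systems, which is precisely the gap the later allegations describe.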
The product began with HIPAA compliance, the framework the founders had encountered firsthand. As Kocalar described it: "As our customer base grew, they started asking for support with other frameworks: SOC 2, PCI, GDPR, ISO, basically the whole alphabet soup of compliance."[31] This expansion was framed as customer-driven, which suggested genuine engagement with paying users.
Delve also launched a no-code tool called "Pathways," pitched to prospects as a workflow automation layer that allowed compliance teams to build custom evidence collection processes without writing code.[32] The long-term vision was expansive: Kaushik described plans to eventually automate one billion hours of human work and expand into adjacent areas including cybersecurity, risk management, and internal governance.[33]
What the whistleblowers alleged the product actually was differed sharply from this description. According to the DeepDelver Substack post and the subsequent internal whistleblower, the actual delivery involved boilerplate evidence generation: near-identical compliance documentation populated with client names but otherwise standardized across all customers. A leaked dataset of 493 draft audit reports allegedly showed 99.8% identical text and "keyboard-mashed" logging values, suggesting the evidence was fabricated rather than collected from client systems.[34] Customers were allegedly forced to "choose between adopting fake evidence or performing mostly manual work with little real automation or AI."[35]
The "Pathways" product carried its own allegation: that it was a fork of Sim.ai's open-source SimStudio, rebranded without attribution and pitched as Delve's proprietary technology. Sim.ai CEO Emir Karabeg confirmed to TechCrunch that no license agreement existed, stating: "We knew they planned to use Sim for something and later tried unsuccessfully to sell them an agreement."[36] Notably, Sim.ai was itself a paying Delve customer, which makes the alleged appropriation particularly striking. After the allegations broke, Delve scrubbed all Pathways references from its website along with many other pages, a behavioral signal consistent with awareness of wrongdoing.[37]
No independent technical audit of Delve's actual platform capabilities has been published. The full scope of what the AI agents could and could not do, and whether any genuine automation existed beneath the alleged boilerplate layer, remains unverified.
Delve's primary customers were startups and growth-stage technology companies seeking compliance certifications to unlock enterprise sales. SOC 2 certification, in particular, has become a de facto requirement for selling software to enterprise buyers, a dynamic that creates a large, recurring, and somewhat captive market of companies that need certification to grow. Delve's named customers at Series A (Lovable, Bland, and Wispr Flow) were all AI-native startups in the YC orbit, suggesting the company's initial distribution ran heavily through the YC network.[38]
By January 2026, Delve claimed over 1,000 customers in 50+ countries, with COO Kocalar asserting the platform had helped clients land "nine-figure deals and federal contracts."[39] The federal contract claim is notable: government procurement requires rigorous compliance documentation, and if Delve's evidence generation was as boilerplate as alleged, the implications for those customers extend beyond reputational damage.
The compliance automation market is large and structurally growing. Enterprise software buyers increasingly require SOC 2, ISO 27001, and HIPAA certifications from vendors before signing contracts, creating a mandatory compliance layer for any startup selling upmarket. Traditional compliance consulting is expensive (a SOC 2 audit can cost $30,000–$100,000 in professional services fees), and the software layer that automates evidence collection addresses a genuine pain point. Insight Partners' managing director Praveen Akkiraju framed the opportunity at Series A: "Since compliance touches every part of how a business runs, from scaling operations to closing deals to building customer trust, modernizing this function will modernize the entire organization."[40]
The AI-native framing added a second market narrative: that large language models and autonomous agents could reduce the human labor in compliance evidence collection from weeks to hours. This narrative was credible enough in 2024–2025 to attract $300M valuations and Fortune 500 CISO participation in the Series A.
Delve entered a market with established incumbents and well-funded challengers. Vanta, the category leader, had raised over $200M and built deep integrations with cloud infrastructure providers, giving it a distribution and data advantage that was difficult to replicate quickly. Drata occupied a similar position. Newer entrants including Comp AI, Anecdotes, Leida, and Sprinto competed on price, speed, or specific framework depth.[41]
Delve's competitive positioning rested on two claims: that its AI agents were more automated than competitors' rule-based integrations, and that its pricing was more accessible to early-stage startups. The ~$15,000 ACV was competitive with Vanta's pricing for comparable coverage.[42]
The structural problem with this positioning is that compliance automation is a category where trust is the primary purchase criterion. Customers are buying a certification that their enterprise buyers will rely on to make security decisions. Vanta and Drata had years of auditor relationships, customer references, and regulatory familiarity that Delve could not replicate quickly. Delve's alleged solution to this trust gap, partnering with Accorp and Gradient (described by the whistleblower as Indian certification mills with nominal US presence), may have been an attempt to shortcut the auditor relationship-building that incumbents had spent years developing. If accurate, it was a shortcut that undermined the product's core value proposition: that the certification it produced was worth anything.
The competitive landscape also had a platform risk dimension. As AI tooling matured, larger compliance platforms could plausibly absorb the "AI agent" feature set that Delve was selling as a standalone product. The window for a differentiated AI-native compliance product was real but narrow, and filling it with genuine automation rather than simulated automation was the execution challenge Delve allegedly failed.
Delve operated on an annual subscription model, charging customers for access to its compliance automation platform plus the cost of the associated audit. The average contract value was approximately $15,000 per year, covering a primary compliance framework such as SOC 2 or HIPAA.[43] Under churn pressure, pricing reportedly dropped to as low as $6,000 with ISO 27001 and a penetration test included, a significant discount that suggests the company was prioritizing retention over margin.[44]
Delve never disclosed audited revenue figures. All revenue metrics in the public record come from founder claims or low-confidence third-party databases; the absence of verified financials is itself a signal worth noting.
The implied ARR at Series A is difficult to reconcile with available data. At 500+ customers and a ~$15,000 ACV, the implied ARR would be $7.5M or higher. However, Getlatka reported $2.6M in revenue as of October 2025, three months after the Series A closed.[45] This discrepancy (which is an inference from two low-confidence data points, not a verified fact) could reflect Getlatka's methodology, a lower realized ACV than the stated average, or customer counts that were inflated. It cannot be resolved from public data.
Estimated burn rate (inference, not fact): With a 24-person team as of October 2025 and a San Francisco office, a rough estimate of $15,000–$20,000 per employee per month in fully-loaded costs implies a monthly burn of $360,000–$480,000, or roughly $4.3M–$5.8M annually. Against $2.6M in reported revenue, this would imply the company was not profitable at the time of the Getlatka report, directly contradicting the Series A claim of profitability. This inference depends on unverified inputs and should be treated with caution.
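The ARR and burn reconciliations above are simple arithmetic on the report's own low-confidence inputs, and can be sketched explicitly; nothing here is a verified figure:

```python
# Back-of-envelope reconciliation of the figures cited above.
# All inputs are founder-reported or third-party estimates, not audited.
customers = 500
acv = 15_000                      # reported average contract value, USD/yr
implied_arr = customers * acv     # implied ARR at Series A

reported_revenue = 2_600_000      # Getlatka estimate, October 2025

headcount = 24
cost_per_head = (15_000, 20_000)  # assumed fully-loaded monthly cost range
annual_burn = tuple(12 * headcount * c for c in cost_per_head)

print(f"implied ARR:          ${implied_arr:,}")
print(f"gap vs. reported rev: ${implied_arr - reported_revenue:,}")
print(f"est. annual burn:     ${annual_burn[0]:,} to ${annual_burn[1]:,}")
```

Running the numbers gives a $7.5M implied ARR against $2.6M reported revenue and an estimated $4.3M–$5.8M annual burn, which is the gap the paragraph above describes.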
Delve's reported growth trajectory was extraordinary by any measure, and that extraordinariness is now a central part of the story.
At the January 2025 seed round, the company reported 100+ customers and a run rate of "several million dollars," achieved, the founders claimed, from their apartment before hiring a single salesperson.[46] The company claimed to have generated over $1M in ARR before making its first sales hire.[47]
By July 2025, six months later, the customer count had grown to 500+, a 5x increase in six months, with named customers including AI startups Lovable, Bland, and Wispr Flow.[48] The Series A announcement claimed the company was profitable and had doubled revenue quarter-over-quarter.
By January 2026, COO Kocalar told Inc. that Delve had surpassed 1,000 customers in 50+ countries, a further doubling in six months.[49] A low-confidence database entry from BounceWatch reported 1,500+ clients and eight-figure profitable revenue, though this figure has no verifiable basis.[50]
All of these figures are founder-reported or derived from databases that rely on founder-reported inputs. No audited financials, no independent customer verification, and no third-party revenue confirmation exists in the public record. The Getlatka $2.6M figure, itself low-confidence, is the only third-party revenue estimate available, and it conflicts with the implied ARR from the customer count and ACV data. The co-founders were named to Forbes' 30 Under 30 for 2026 in the AI category,[51] a recognition that further amplified the growth narrative without adding evidentiary weight to the underlying metrics.
The central allegation against Delve is that its core product, AI-automated compliance evidence collection, did not function as marketed. According to the DeepDelver whistleblower post published March 18–19, 2026, and subsequently corroborated by an internal Delve employee on March 28, 2026, the company allegedly generated boilerplate compliance documentation that was not derived from client-specific system data but was instead standardized text populated with client names.[52]
The specific evidence cited was a leaked dataset of 493 draft audit reports allegedly showing 99.8% identical boilerplate text across clients, with "keyboard-mashed" logging values (strings of random characters) substituted for actual system logs.[53] If accurate, this would mean that Delve's AI agents were not navigating client infrastructure and capturing real evidence, but were instead generating plausible-looking documentation that would satisfy a cursory audit review without reflecting actual security controls.
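A finding of near-identical text across hundreds of reports implies a straightforward comparison technique. As a toy illustration only (the leaked dataset is not public, and the strings below are invented), pairwise similarity of templated documents can be scored with Python's standard library:

```python
# Toy sketch of near-duplicate detection across draft reports.
# The example strings are invented; the leaked dataset is not public.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two documents (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

report_a = "Client: Acme Corp. Access reviews are performed quarterly by IT."
report_b = "Client: Beta Inc. Access reviews are performed quarterly by IT."

score = similarity(report_a, report_b)
# Templated reports that differ only in the client name score close to
# 1.0, the pattern the whistleblower dataset allegedly showed at scale.
```

At scale, an analyst would cluster reports by such scores; a cluster of 493 documents agreeing on 99.8% of their text is exactly what a fill-in-the-name template produces.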
The alleged remedy Delve offered customers was binary and damaging: accept the fabricated evidence as their own compliance record, or perform the evidence collection manually with little platform assistance. Neither option delivered the automation that customers had paid for.[54]
Delve's response was internally contradictory. The company simultaneously called the Substack post "misleading," claimed it was "an automation platform that provides data to independent, licensed auditors" (not a compliance issuer), and had COO Kocalar acknowledge on LinkedIn: "We grew too fast and fell short of our own standard. To our customers, we deeply apologize for the inconveniences caused."[55] The acknowledgment of falling short is difficult to reconcile with the simultaneous claim that the allegations were a "targeted cyberattack from a malicious actor."
The internal whistleblower's materials, including screenshots, videos, and recorded conversations, significantly increased the credibility of the allegations beyond what an anonymous Substack post alone could establish. A leaked Notion post allegedly written by CEO Kaushik in November 2025 reportedly contradicted the platform capabilities described in the Series A pitch deck.[56] If verified, this would suggest the founders were aware of the gap between their pitch and their product at the time of fundraising, a distinction that matters for any future regulatory or legal proceedings.
Compliance-as-a-service has a structural vulnerability that Delve allegedly exploited and was ultimately exposed by: the audit relationship. A SOC 2 or HIPAA certification is only as credible as the auditor who issues it. Established platforms like Vanta and Drata had spent years building relationships with recognized CPA firms and accredited auditors. Delve allegedly routed virtually all of its clients through two firms, Accorp and Gradient, described by the whistleblower as operating primarily in India with only a nominal US presence and functioning as "part of the same operation."[57]
This is a structurally dangerous arrangement for customers. A SOC 2 report issued by an unaccredited or low-quality auditor may not satisfy enterprise buyers' security review requirements, meaning customers who paid for certification may not have received a certification their own customers would accept. The implications for Delve's claimed customer wins β including federal contracts β are significant.
The audit partner problem also explains why the December 2025 Google Sheet exposure was so damaging. A publicly-accessible spreadsheet containing hundreds of confidential draft audit reports is not a minor operational security lapse; it is a catastrophic failure of the exact function Delve was selling. For a compliance company, its own data handling practices are a direct product signal. Delve failed that signal in the most visible way possible, which is what triggered the whistleblower investigation.
The Sim.ai allegation added a second dimension of alleged misconduct that, while less financially material than the evidence fabrication claim, is behaviorally significant. Delve allegedly forked Sim.ai's open-source SimStudio (licensed under Apache 2.0, which requires attribution), rebranded it as "Pathways," and pitched it to prospects as Delve's own proprietary no-code tool.[58]
Sim.ai CEO Emir Karabeg confirmed to TechCrunch that no license agreement existed and that Sim.ai had "tried unsuccessfully to sell them an agreement," meaning Delve was aware of the attribution requirement and chose not to comply.[59] The fact that Sim.ai was itself a paying Delve customer at the time makes the alleged appropriation particularly notable. Both companies were YC alumni.
The pattern this suggests, shipping borrowed or superficial technology under time pressure rather than building genuine IP, is consistent with the broader evidence fabrication allegation. A team that had sold a product vision it had not yet built would face constant pressure to fill the gap between demo and delivery. Pathways may represent one instance of that gap-filling strategy.
Delve's immediate scrubbing of all Pathways references from its website after the TechCrunch report is a strong behavioral signal. Companies that believe they have done nothing wrong do not typically delete product pages within hours of a press inquiry.
Beyond company-specific decisions, the Delve case reveals a structural vulnerability in the compliance automation category that is worth naming explicitly.
Compliance certifications are credence goods: the customer cannot easily evaluate the quality of what they received. A startup that pays for SOC 2 certification receives a document. Whether the underlying evidence collection was rigorous or boilerplate, whether the auditor was accredited or a rubber-stamp operation, whether the controls documented actually exist: none of this is visible to the customer without independent verification. This information asymmetry creates a principal-agent problem that is unusually exploitable.
The AI framing amplified this vulnerability. "AI agents that navigate your systems" is a description that most compliance buyers cannot technically evaluate. The gap between a genuine AI agent that reads system logs and a human operator who pastes boilerplate text is invisible to a customer who receives the same-looking PDF at the end of the process. Delve allegedly exploited this invisibility.
The hypergrowth incentives of the 2024–2025 AI funding environment made this worse. A company that could show 5x customer growth in six months and claim profitability at a $300M valuation had strong incentives to fill any product gaps with whatever was available (boilerplate evidence, low-cost audit mills, forked open-source tools) rather than slow down to build genuine automation. Whether this was a deliberate strategy from the start or an emergent response to product gaps discovered after customers were signed is unknown. The outcome was the same either way.
Compliance automation is a credence good category, which makes it uniquely exploitable; it is also uniquely punishing when the exploitation is discovered. Delve's customers could not evaluate whether their SOC 2 evidence was AI-generated from real system data or boilerplate text until a whistleblower published a dataset of 493 reports showing 99.8% identical content. The same information asymmetry that allegedly allowed Delve to sell fabricated compliance for months meant that when the exposure came, it was total: every customer had to question whether their certification was worth anything, and LiteLLM's public defection to Vanta signaled that the trust damage was irreversible.
A compliance company's own operational security is a direct product signal, and Delve failed it catastrophically before any whistleblower published a word. The December 2025 Google Sheet exposure, in which hundreds of confidential draft audit reports were left publicly accessible, was not a minor lapse. It was a demonstration that the company selling compliance infrastructure could not manage its own data handling. This single incident triggered the whistleblower investigation that unraveled the company. Startups selling trust-dependent products are held to a higher operational standard than their product category, not a lower one.
The YC network accelerated Delve's customer acquisition and investor trust in ways that reduced scrutiny of underlying product claims. Delve's seed investors included Y Combinator itself; its early customers were YC-adjacent AI startups; its Series A lead was a top-tier firm. Y Combinator publicly expressed support for the founders even after the whistleblower allegations broke. This network provided distribution and credibility that a less well-connected team could not have accessed, but it also meant that the normal friction of customer due diligence and investor scrutiny was reduced. The Sim.ai IP appropriation is the sharpest illustration: Sim.ai was a fellow YC alum and a paying Delve customer, and still had its open-source code allegedly forked without attribution.
Hypergrowth metrics that cannot be reconciled with each other are a warning sign, not a validation. Delve reported 500+ customers at a ~$15,000 ACV at Series A, implying $7.5M+ ARR, but Getlatka reported $2.6M in revenue three months later. The company claimed profitability while a rough burn rate estimate suggests it was not profitable. None of these discrepancies were flagged publicly before the fraud allegations broke. Investors and journalists who covered the Series A did not attempt to reconcile the numbers. The lesson is not that all fast-growing companies are fraudulent, but that unreconciled metrics in a trust-dependent category deserve more scrutiny than they received here.
Shipping borrowed technology under time pressure (Pathways as a fork of SimStudio) is a pattern, not an isolated incident. The Apache 2.0 license requires attribution; Delve allegedly provided none, and Sim.ai's attempt to sell it a license agreement went nowhere. This is not a mistake made by a team that was unaware of the requirement. Combined with the boilerplate evidence allegation, it suggests a team that had sold a product vision ahead of its technical reality and was filling the gap with whatever was available. The scrubbing of Pathways from the website within hours of the TechCrunch report is the behavioral signature of a team that knew exactly what it had done.