Pit.AI Technologies was a Y Combinator-backed AI hedge fund that aimed to replace human-generated trading hypotheses with reinforcement learning, eliminate the traditional 2-and-20 fee structure, and "solve intelligence for investment management." Founded in December 2016 by Yves-Laurent Kom Samo—a former Goldman Sachs and JPMorgan quant with an Oxford PhD in Machine Learning—the company had one of the most credentialed founding profiles in its YC cohort. It raised $120,000 in seed funding, presented at YC W17 Demo Day, and then spent roughly four years in a research-stage holding pattern before quietly closing in early 2021. The core thesis of failure: Pit.AI chose to operate as an actual hedge fund rather than a software company, which required raising LP capital and generating auditable live trading returns—a regulatory and fundraising gauntlet that a two-person team with no assets under management and a $120K budget could not clear. The underlying technical challenge of extracting signal from financial noise proved harder and slower than the YC timeline assumed.
Yves-Laurent Kom Samo arrived at Pit.AI with credentials that were unusual even by YC standards. He spent two years at Goldman Sachs as an Equities Algorithmic Trading Strategist beginning in 2010, then moved to JPMorgan Chase as an FX Quant Trader in 2012. [1] In 2013, he left finance to pursue a PhD in Machine Learning at the University of Oxford, where he was a Google Fellow in Machine Learning and affiliated with the Oxford-Man Institute of Quantitative Finance—an academic center specifically focused on applying quantitative methods to financial markets. [2]
That combination—practitioner experience at two of the world's largest investment banks, followed by three years of frontier ML research at an institution purpose-built for quantitative finance—gave Kom Samo a specific and pointed view of what was broken in the industry. Traditional quant funds, in his framing, required a human to first postulate a trading hypothesis before machine learning could be applied to test it. The hypothesis-generation step was the bottleneck: it was slow, biased, and limited by human intuition. His insight was to eliminate it entirely.
Kom Samo completed his PhD in 2016 and participated in Entrepreneur First's Batch 7 before joining Y Combinator's Winter 2017 cohort. [3] The dual accelerator path—EF followed by YC—suggests he was actively seeking institutional validation and network before launch, a pattern consistent with someone building toward a regulated financial product rather than a consumer app.
Pit.AI Technologies was formally incorporated on December 12, 2016. [4] The founding vision was explicit: use AI to build what Kom Samo called "AI-Quants"—systems that generate trading hypotheses directly from data—and charge limited partners no management fees, collecting only carry on performance. [5] As he described it: "I am the Founder and CEO of Pit.AI Technologies, a Silicon Valley AI startup (hedge fund) backed by Y Combinator aiming at solving intelligence for investment management, and getting rid of hedge fund management fees." [6]
The ambition was structural, not incremental. Pit.AI was not trying to build a better quant tool for existing funds. It was trying to replace the human judgment layer of the entire asset management industry.
Pit.AI's core product was an AI-powered trading system designed to automate the entire research pipeline of a hedge fund—from data ingestion to strategy generation to portfolio construction—without requiring a human analyst to propose a hypothesis first.

The technical architecture centered on a variant of reinforcement learning (RL). In standard supervised machine learning, a model is trained to predict a specific output—say, whether a stock will go up tomorrow. Pit.AI's approach was different. Rather than predicting per-state returns, its RL system evaluated trading strategies holistically, optimizing directly for portfolio-level metrics like the Sharpe ratio (a measure of risk-adjusted return) and maximum drawdown (the largest peak-to-trough loss). [13] The system treated strategy selection as a sequential decision problem: the agent learned which actions to take across time to maximize a reward function defined in terms of real investment performance metrics.
This was a meaningful technical distinction. Most quant funds—even those using machine learning—still rely on human researchers to generate the initial trading idea. A researcher might hypothesize that momentum in one asset class predicts reversals in another, then use ML to test and refine that hypothesis. Pit.AI's pipeline was designed to skip that step entirely. Its "AI-Quants" were meant to surface trading hypotheses directly from large-scale data, without human priors. [14]
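Pit.AI's actual reward design was never published; the following is a minimal sketch, assuming daily strategy returns, of what "optimizing directly for portfolio-level metrics" can look like in code. The function names and the drawdown penalty weight are illustrative choices, not Pit.AI's.

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    # Annualized Sharpe ratio: mean return over volatility (risk-free rate taken as 0)
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

def max_drawdown(returns):
    # Largest peak-to-trough loss of the cumulative equity curve (a non-positive number)
    equity = np.cumprod(1.0 + returns)
    peaks = np.maximum.accumulate(equity)
    return ((equity - peaks) / peaks).min()

def strategy_reward(returns, drawdown_penalty=2.0):
    # Strategy-level reward an RL agent could maximize: risk-adjusted return
    # penalized by tail loss, rather than a per-step return prediction
    return sharpe_ratio(returns) + drawdown_penalty * max_drawdown(returns)
```

An agent scored this way is rewarded for the shape of the entire equity curve, which is the distinction the article draws against supervised next-day return prediction.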
To train and validate these models, Pit.AI participated in the Fintech Sandbox program, which gave early-stage fintech startups access to institutional-grade financial data. [15] The specific data sources used are not publicly documented, but Fintech Sandbox partnerships typically include market data, alternative data, and historical tick data from providers like Bloomberg, Nasdaq, and ICE.
The business model innovation was as central to the pitch as the technology. Pit.AI planned to charge limited partners no management fees—the standard 2% annual fee on AUM that hedge funds collect regardless of performance—and instead collect only carry, meaning a percentage of profits. [16] The argument was that management fees misalign incentives: they reward funds for growing AUM, not for generating returns. A carry-only structure would only pay Pit.AI when its investors made money.
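A back-of-envelope comparison makes the incentive argument concrete. The numbers below are hypothetical; the only element taken from Pit.AI is the carry-only principle itself.

```python
def two_and_twenty(aum, gross_return, mgmt_fee=0.02, perf_fee=0.20):
    # Traditional structure: 2% of AUM regardless of outcome, plus 20% of profits
    profit = aum * gross_return
    return mgmt_fee * aum + perf_fee * max(profit, 0.0)

def carry_only(aum, gross_return, perf_fee=0.20):
    # Pit.AI's proposed structure: the fund earns nothing unless investors profit
    return perf_fee * max(aum * gross_return, 0.0)

# On a hypothetical $10M allocation in a flat year:
# two_and_twenty(10_000_000, 0.0) -> 200,000.0 in fees
# carry_only(10_000_000, 0.0)     -> 0.0
```

The flat-year case is the whole argument in miniature: the traditional fund still collects $200K, while the carry-only fund collects nothing, which aligns incentives for LPs but leaves a pre-performance fund with zero revenue.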
In July 2018, the company launched a Research Paper Series, publishing on Medium under the Pit.AI Technologies publication. The series articulated the company's philosophy on AI in finance and argued against what the founder called "dogmatic modeling paradigms" in the industry. [10] This shift toward open research publication—roughly 16 months after Demo Day—was the last significant public output from the company before its closure.
Pit.AI's target customers were institutional and high-net-worth limited partners—the investors who allocate capital to hedge funds. The company was not selling software to existing funds; it was positioning itself as a fund, competing directly for LP allocations. This meant its customers were endowments, family offices, fund-of-funds, and accredited individual investors who would commit capital to Pit.AI's strategy in exchange for a share of returns.
The secondary audience, implicit in the Research Paper Series, was the quantitative finance research community. By publishing on AI methodology in finance, Pit.AI was also building credibility with institutional allocators who evaluate funds partly on the intellectual rigor of their investment process.
The global hedge fund industry managed approximately $3.2 trillion in assets under management as of 2017, with quantitative and systematic strategies representing a growing share of that total. [17] The specific segment Pit.AI was targeting—AI-native, systematic long/short equity or multi-asset strategies—was nascent but attracting significant institutional interest following the success of firms like Two Sigma and Renaissance Technologies.
The fee compression trend was real and measurable. Average hedge fund management fees had declined from approximately 1.6% in 2008 to 1.4% by 2017, and performance fees from 19.2% to 17.4% over the same period, as LPs pushed back on the traditional 2-and-20 structure. [17] Pit.AI's carry-only model was a direct response to this trend.
Pit.AI's competitive landscape had two distinct layers.
The first was established quantitative hedge funds: Renaissance Technologies, Two Sigma, D.E. Shaw, and AQR. These firms had decades of track records, billions in AUM, and hundreds of researchers. They were not direct competitors in the LP market—no institutional allocator would compare a pre-AUM two-person startup to Renaissance—but they defined the performance benchmark that Pit.AI's models would eventually need to clear.
The second layer was the emerging class of AI-native hedge fund startups, several of which were better-capitalized and further along. Numerai, founded in 2015, had already built a novel crowdsourced model structure and raised institutional backing. Sentient Technologies had raised over $100 million to apply evolutionary algorithms to trading. Aidyia, a Hong Kong-based AI fund, had launched live trading by 2017. Each of these competitors had either more capital, a live track record, or a differentiated go-to-market strategy that avoided the direct LP fundraising problem Pit.AI faced.
Pit.AI's technical differentiation—assumption-free RL applied to strategy-level optimization rather than return prediction—was genuine. But differentiation at the model architecture level is difficult to communicate to LP allocators, who evaluate funds primarily on audited returns, risk controls, team size, and operational infrastructure. On all of those dimensions, Pit.AI was at a structural disadvantage relative to both incumbents and better-funded peers.
Pit.AI planned to operate as a hedge fund, generating revenue exclusively through performance fees (carry) on LP capital. [16] The company explicitly rejected management fees, which are typically charged as an annual percentage of AUM regardless of performance.
This model had a fundamental cash flow problem for a pre-AUM startup. Management fees exist partly because they cover operating costs—salaries, data, infrastructure, compliance—during periods when a fund is not generating performance. A carry-only fund earns nothing until it has LP capital deployed and generating positive returns. For Pit.AI, that meant the company needed to raise LP capital, deploy it, generate auditable returns, and then collect carry—all before its $120,000 in seed funding ran out. [8]
No evidence suggests Pit.AI ever raised a subsequent funding round or secured LP commitments. The company remained at two employees throughout its operating life, [18] suggesting it never reached the operational scale required to run a compliant fund management operation.
At YC W17 Demo Day in March 2017, Pit.AI's models were running only in paper trading—simulated execution with no real money at risk—and the founder projected live trading within one year. [9] That projection was apparently never met: no evidence of a live fund launch, LP capital raise, or AUM figure appears anywhere in the public record.
The company raised a single seed round of $120,000 from Y Combinator and Zillionize in March 2017. [8] No subsequent funding rounds were recorded on Crunchbase, PitchBook, or CB Insights. [19]
The team remained at two employees for the entirety of its known operating life. [18] The company's official Twitter account, @PitAITech, was created in July 2018—more than a year after Demo Day—and published zero tweets before the company closed. The account had 39 followers as of its last observed state.
The Research Paper Series launched in July 2018 was the company's most substantive public output. No published academic papers, institutional partnerships, or commercial outcomes linked to the series have been identified.
Pit.AI closed quietly in early 2021, approximately four years after its YC Demo Day, with no public shutdown announcement, no post-mortem, and no investor statement. [12] The founder moved on to found KXY Technologies and later joined Google as a Senior Staff Machine Learning Engineer. [11] The absence of any public explanation makes definitive causal analysis impossible, but the evidence points to several compounding failures.
The most structurally fatal decision Pit.AI made was to operate as an actual hedge fund rather than as a software or data company. This choice meant the company's entire revenue model depended on first raising LP capital, then deploying it, then generating auditable returns, and only then collecting carry. Every step in that chain required clearing a bar that a two-person, $120K startup was not equipped to clear.
Institutional LP allocators—endowments, family offices, fund-of-funds—require a minimum track record before committing capital. The industry standard is typically 12–24 months of audited live returns, a compliance infrastructure (SEC registration or exemption, a prime broker relationship, an administrator), and a team large enough to handle operations, risk management, and investor relations separately from portfolio management. Pit.AI had none of these at Demo Day, and its $120,000 in seed funding was insufficient to build them. [8]
The carry-only fee structure compounded this problem. Management fees—the 2% annual charge that Pit.AI explicitly rejected—exist in part because they fund operations during the pre-performance phase. By eliminating them, Pit.AI removed the only revenue stream available to a fund before it starts generating returns. The company had no bridge between its seed capital and its first dollar of carry. No evidence suggests the team attempted to address this by raising a larger seed round, seeking a strategic partnership with an existing fund, or pivoting to a software licensing model.
The founder himself provided the clearest articulation of the second failure in a March 2020 Medium post: "financial markets are noisy, much noisier than physical systems, so much so that overall, for the same model complexity, one would need a lot more financial data than physical data to achieve the same level of accuracy." [20]
This is a precise statement of a well-known problem in quantitative finance: the signal-to-noise ratio in financial data is extremely low. A reinforcement learning system trained on physical systems—robotics, game-playing, logistics—can iterate quickly because the environment is relatively stationary and the reward signal is dense. Financial markets are non-stationary (the relationships between variables change over time), adversarial (other participants adapt to and arbitrage away any detectable pattern), and sparse in signal (a strategy that works 52% of the time is exceptional). Building an RL system that reliably outperforms on these dimensions requires far more data, far more compute, and far more iteration time than the YC 3-month batch cycle assumed.
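The data-hunger point can be made concrete with a standard back-of-envelope calculation (illustrative, not from Pit.AI's research): using the normal approximation to the binomial, how many independent trades are needed before a 52% win rate is statistically distinguishable from a coin flip at roughly 95% confidence?

```python
import math

def trades_to_detect_edge(win_rate, z=1.96):
    # Normal approximation: require z * sigma / sqrt(n) < edge, where sigma = 0.5
    # is the per-trade standard deviation of a fair Bernoulli outcome
    edge = win_rate - 0.5
    return math.ceil((z * 0.5 / edge) ** 2)

# trades_to_detect_edge(0.52) -> 2401: at one trade per trading day, nearly a
# decade of live trading before the edge is even measurable, let alone bankable
```

At 252 trading days per year, 2,401 daily trades is about 9.5 years—an illustration of why the one-year live-trading projection and the three-month accelerator cycle were mismatched with the statistics of the problem.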
The March 2017 projection of live trading within one year was almost certainly optimistic given this constraint. The Research Paper Series launched in July 2018—16 months after Demo Day—may represent the team's recognition that the models were not ready for live deployment and that publishing research was a way to build credibility while continuing to iterate. But no evidence suggests the research series attracted LP interest or accelerated the technical timeline.
Kom Samo identified a third obstacle in the same 2020 post: "cultural and social obstacles" to an AI revolution in finance, including "noise from AI hype and dogmatic modeling paradigms." [21] This is a candid acknowledgment that the institutional finance community was skeptical of AI-native investment approaches in 2017–2020, and that the hype cycle around AI was actively working against credibility with sophisticated allocators.
LP allocators in 2017 had seen a wave of AI-in-finance pitches and were increasingly skeptical of claims that were not backed by live returns. The very language Pit.AI used—"solving intelligence for investment management"—was the kind of framing that sophisticated institutional investors had learned to discount. The company's response was the Research Paper Series, which positioned Pit.AI as a serious research organization rather than a hype-driven startup. But publishing on Medium, without peer-reviewed academic papers or institutional co-authors, was unlikely to move the needle with the endowments and family offices that Pit.AI needed to convince.
Pit.AI remained at two employees throughout its operating life. [18] Running a hedge fund—even a small one—requires simultaneous execution across model research, data engineering, compliance, investor relations, legal, and operations. A two-person team cannot staff these functions. The founder's Oxford academic page included a March 2017 hiring announcement, suggesting he recognized the team needed to grow. But no evidence of successful hires appears in the public record, and the company's headcount never increased above two.
The $120,000 seed round was the binding constraint. At San Francisco salary levels in 2017, $120,000 covered roughly three to four months of a single senior engineer's compensation, leaving nothing for the compliance infrastructure, data costs, and legal fees required to launch a registered fund. Without a follow-on raise—which would have required demonstrating progress toward live trading—the team could not grow, and without team growth, the company could not make progress toward live trading. The loop was closed.
Operating as a regulated financial product requires a different capital structure than a software startup. Pit.AI raised $120,000 to build a hedge fund—a product category that requires audited track records, compliance infrastructure, prime broker relationships, and LP relations before generating a dollar of revenue. [8] Startups entering regulated financial services need to either raise enough capital to clear the compliance threshold or find a go-to-market path (licensing, white-labeling, B2B SaaS) that generates revenue before taking on the full regulatory burden.
Fee model innovation that eliminates operating revenue is a structural liability for pre-revenue companies. Pit.AI's carry-only model was intellectually coherent and LP-friendly, but it removed the management fee revenue that traditional funds use to cover costs during the pre-performance phase. [16] A startup with no AUM and no management fees has no bridge between seed capital and first revenue. Innovative fee structures need to be paired with a financing plan that accounts for the gap.
The signal-to-noise problem in financial ML is a genuine technical constraint, not a marketing challenge. The founder's own acknowledgment that one "would need a lot more financial data than physical data to achieve the same level of accuracy" [20] points to a timeline mismatch between the pace of ML research iteration and the pace of LP fundraising. Companies building ML systems for financial markets should plan for longer research cycles and more capital than comparable ML applications in other domains.
Credibility-building through research publication is a slow substitute for live returns. The Research Paper Series launched in July 2018 was a reasonable response to the credibility gap, but institutional LP allocators evaluate funds on audited performance, not Medium posts. [10] For an AI-native fund, the fastest path to LP credibility is a small live track record—even via a friends-and-family fund or a prop trading account—not open research publication.
The YC batch model is poorly suited to businesses that require regulatory approval and LP fundraising before launch. YC's 3-month cycle and Demo Day format are optimized for software products that can show user growth or revenue within weeks. Pit.AI's product required 12–24 months of live returns before it could raise LP capital. [9] The mismatch between the accelerator's feedback loop and the fund's launch timeline left Pit.AI in a permanent pre-launch state that its seed funding could not sustain.