AudioFocus was an Oakland-based hearing aid startup founded in May 2019 by Shariq Mobin, a UC Berkeley auditory neuroscience PhD and former Google Brain engineer. The company participated in Y Combinator's Summer 2019 batch and set out to solve the most persistent complaint in audiology: hearing aids that fail in noisy environments like restaurants. AudioFocus built a machine learning system that analyzed acoustic echo statistics to spatially isolate nearby voices from distant ones — a fundamentally different approach from conventional noise suppression. The technology demonstrably worked in clinical settings, producing 2–3x improvements on standardized hearing tests. But AudioFocus never crossed the gap between research-grade prototype and consumer product. The company raised only $393K in total, spent four years iterating on a behind-the-ear hardware prototype, and quietly wound down around 2023 when the founder moved to a new employer. The core failure was structural: the embedded-compute and miniaturization requirements of a wearable hearing aid were simply beyond what a four-person team with seed-stage funding could solve on a venture timeline.
Shariq Mobin spent years studying how the human brain processes sound before he ever thought about building a company. His doctoral research at UC Berkeley sat at the intersection of auditory neuroscience and machine learning — a rare combination that gave him both the scientific grounding to understand the hearing problem and the engineering tools to attempt a solution. After completing his PhD, he joined Google Brain as an engineer, where he worked on production-scale ML systems. The combination of academic depth and industry engineering experience made him an unusually credible founder for a deep-tech hardware startup. [1] [2]
The founding insight was straightforward but technically demanding. Difficulty hearing in noisy environments is the number-one complaint among the more than 300 million people with hearing loss worldwide. [3] Conventional hearing aids amplify everything — the person across the table and the kitchen noise behind them — because they lack the spatial intelligence to distinguish nearby voices from distant ones. Mobin's research suggested that the human auditory system solves this problem by analyzing the way sound echoes off nearby surfaces. A voice one meter away produces a different echo signature than a voice three meters away. If you could teach a machine learning model to read those echo statistics, you could replicate the brain's spatial filtering in software.
UC Berkeley's Intellectual Property & Industry Research Alliances (IPIRA) lists Bruno Olshausen — one of the most prominent computational neuroscientists in the world and a Berkeley faculty member — as a co-founder of AudioFocus. [4] Olshausen's theoretical work on sparse coding and neural signal processing is directly relevant to the AudioFocus algorithm. However, he does not appear on the company's public-facing website or in any press coverage, and his operational role remains ambiguous. He may have been an academic advisor listed for IP licensing purposes rather than an active company builder.
The team also included a hearing aid hardware design expert — identified on the team page as Dr. Reza Kassayan, formerly a hardware architect at EarLens — and a small group of engineers. At peak, the company had four employees. [5]
AudioFocus was formally established in May 2019 and entered Y Combinator's S19 batch the same month. [6] The company was headquartered in Oakland, California, close to both UC Berkeley and the Bay Area hardware ecosystem. The YC program gave the team early institutional credibility, a small seed check, and access to a network of investors — but the company's ambitions required far more capital than a standard YC deal provides.
AudioFocus built a hearing aid designed to solve what audiologists call the "cocktail party problem": the inability to follow a single conversation in a room full of competing sounds. Conventional hearing aids amplify all sounds in a given frequency range. They make everything louder, which helps in quiet environments but often makes noisy environments worse. AudioFocus took a fundamentally different approach. [17]
The Core Algorithm: Echo Statistics as a Spatial Filter
The central innovation was acoustics-informed machine learning. When sound travels from a source to a microphone, it arrives both directly and as reflections off nearby surfaces — walls, tables, the human body. The ratio and timing of these reflections encode information about how far away the source is. A voice one meter away produces a distinct echo signature compared to a voice three meters away. AudioFocus trained a machine learning model to read these echo statistics and use them to spatially separate nearby voices from distant ones. The company described this as mimicking how the human auditory cortex processes sound. [18]
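One concrete echo statistic of the kind described above is the direct-to-reverberant ratio (DRR): a nearby source delivers relatively more energy via the direct path than via the room's echo tail. The sketch below illustrates that cue on toy impulse responses; it is our hand-built example under simplified assumptions, not the AudioFocus model, whose details are not public.

```python
import numpy as np

def direct_to_reverberant_ratio(ir, fs=16000, direct_ms=2.5):
    """Energy ratio (dB) of the direct-path window to the reverberant tail.

    A nearby talker yields a higher DRR than a distant one -- one simple
    "echo statistic" of the kind the article describes. Illustrative only.
    """
    peak = int(np.argmax(np.abs(ir)))        # direct-path arrival
    win = int(direct_ms * fs / 1000)         # short window around the peak
    direct = np.sum(ir[peak:peak + win] ** 2)
    tail = np.sum(ir[peak + win:] ** 2)
    return 10 * np.log10((direct + 1e-12) / (tail + 1e-12))

# Toy impulse responses: a direct spike plus a shared decaying echo tail.
# The distant source has a weaker direct path relative to the same room tail.
fs = 16000
t = np.arange(int(0.2 * fs))
tail = 0.1 * np.exp(-t / (0.01 * fs))        # room reverberation (same for both)
near = tail.copy(); near[0] += 1.0           # strong direct path (~1 m)
far = tail.copy();  far[0] += 0.3            # weaker direct path (~3 m)

drr_near = direct_to_reverberant_ratio(near, fs)
drr_far = direct_to_reverberant_ratio(far, fs)
```

A model trained on features like this (among many others) could in principle learn to gate amplification by source distance, which is the behavior the company described.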
Voice Fingerprinting
A secondary capability was speaker-specific filtering. The system could build a "voice fingerprint" of a target individual — a spouse, a close friend — from a few minutes of recorded speech. Once trained, the model would preferentially amplify that specific voice and suppress everything else. [19] This feature addressed a specific use case: the hearing aid user who primarily needs to hear one or two people in their daily life.
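The fingerprinting flow is an enroll-then-match pattern: build an embedding from a short speech sample, then compare candidate voices against it. The embedding below is a deliberately crude stand-in (coarse band log-energies); real systems use learned speaker embeddings, and AudioFocus's actual model is not public, so everything here is illustrative.

```python
import numpy as np

def toy_embedding(signal, n_bands=8):
    """Placeholder 'voice fingerprint': average log-energy in coarse
    frequency bands. A stand-in for a learned speaker embedding, used
    here only to show the enroll-then-match flow."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([b.mean() for b in bands]) + 1e-12)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(fs) / fs
# Two synthetic "voices" with different spectral shapes, plus mild noise.
spouse = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
stranger = np.sin(2 * np.pi * 3000 * t) + 0.3 * np.sin(2 * np.pi * 6000 * t)

enrolled = toy_embedding(spouse + 0.01 * rng.standard_normal(fs))    # enrollment
match = cosine(enrolled, toy_embedding(spouse + 0.01 * rng.standard_normal(fs)))
nonmatch = cosine(enrolled, toy_embedding(stranger + 0.01 * rng.standard_normal(fs)))
```

In a deployed system, a high similarity score would trigger preferential amplification of that speaker's signal; a low score would leave the voice subject to normal suppression.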
The Hardware Platform
AudioFocus built a behind-the-ear (BTE) hearing aid prototype using a BatAndCat BTE device as the physical shell. The deep learning model ran on a Variscite VAR-SOM-MX8 embedded system-on-module — a research-grade board, not a consumer chip. [20] Mobin wrote the real-time audio pipeline in C++, implementing short-time Fourier transforms (STFT), Wiener filters, and resampling to process audio with minimal delay. [21] The team went through three hardware design iteration cycles and built a dedicated Audio Lab at Circuit Launch, an Oakland hardware incubator, for patient testing. [22]
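The DSP stages named above can be sketched in a few lines. Assuming a block-based pipeline with a known noise spectrum (a simplification; the real system's model and its C++ implementation are not public), an STFT analysis, per-bin Wiener gain, and overlap-add resynthesis loop looks like:

```python
import numpy as np

def stft_wiener_denoise(x, noise_psd, n_fft=128, hop=64):
    """Minimal STFT -> Wiener gain -> overlap-add loop, sketching the
    stages the article names (STFT, Wiener filtering). Python illustration
    of signal flow only; the production pipeline was real-time C++."""
    win = np.hanning(n_fft)
    out = np.zeros(len(x))
    for start in range(0, len(x) - n_fft, hop):
        frame = x[start:start + n_fft] * win
        spec = np.fft.rfft(frame)
        sig_psd = np.maximum(np.abs(spec) ** 2 - noise_psd, 0.0)  # signal PSD estimate
        gain = sig_psd / (sig_psd + noise_psd + 1e-12)            # Wiener gain per bin
        out[start:start + n_fft] += np.fft.irfft(gain * spec) * win
    return out

# Demo: a 440 Hz tone buried in white noise, with the noise PSD estimated
# from a noise-only stretch (a common simplifying assumption).
fs = 8000
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noise = 0.5 * rng.standard_normal(fs)
noisy = clean + noise
win = np.hanning(128)
noise_psd = np.mean([np.abs(np.fft.rfft(noise[i:i + 128] * win)) ** 2
                     for i in range(0, fs - 128, 64)], axis=0)
denoised = stft_wiener_denoise(noisy, noise_psd)
```

The hard part, as the company's own latency discussion makes clear, is not this loop but running a learned model inside it within a few milliseconds on milliwatts of power.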
Training Data Infrastructure
One of the most technically sophisticated elements of the system was the training data pipeline. AudioFocus built a custom acoustic ray-tracing engine that simulated how sound behaves in different physical environments. This engine generated tens of gigabytes of synthetic acoustic training data, allowing the model to learn from a far wider range of environments than could be captured through real-world recording alone. [23]
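A standard way to synthesize such data is the image-source method, which models each wall reflection as a mirrored copy of the source. The first-order sketch below is a generic illustration under simplified assumptions (shoebox room, uniform absorption, 1/r spreading); the company's ray-tracing engine is not public, so this shows only the general idea.

```python
import numpy as np

def shoebox_rir(src, mic, room, fs=16000, c=343.0, absorption=0.5, length=0.05):
    """First-order image-source model for a shoebox room: the direct path
    plus one mirror image per wall. Illustrates synthetic acoustic data
    generation in miniature; real engines model many reflection orders."""
    ir = np.zeros(int(length * fs))
    paths = [(np.array(src, dtype=float), 1.0)]
    for axis in range(3):                    # mirror the source across each wall pair
        for wall in (0.0, room[axis]):
            img = np.array(src, dtype=float)
            img[axis] = 2 * wall - img[axis]
            paths.append((img, 1.0 - absorption))
    for pos, gain in paths:
        d = np.linalg.norm(pos - np.array(mic))
        n = int(round(d / c * fs))           # propagation delay in samples
        if n < len(ir):
            ir[n] += gain / max(d, 1e-3)     # 1/r spreading loss
    return ir

# Nearer sources arrive sooner and with a stronger direct tap.
room = (5.0, 4.0, 3.0)
near = shoebox_rir((2.0, 2.0, 1.5), (2.5, 2.0, 1.5), room)  # 0.5 m away
far = shoebox_rir((2.0, 2.0, 1.5), (4.5, 3.5, 1.5), room)   # ~2.9 m away
```

Convolving clean speech with thousands of such simulated impulse responses, across randomized rooms and source positions, is how a model can learn distance cues without an impractical volume of real-world recording.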
The Mobile App
The company also built an Android companion application. [24] The exact function of the app — whether it controlled the hearing aid, served as a standalone noise-suppression tool, or was primarily a demo interface — is not documented in public sources. The Android-only platform suggests limited engineering resources and a likely prioritization of the hardware path.
The Work at a Startup listing described the algorithms as "10x better than modern hearing aids" at suppressing background noise and referenced a public audio demo at audiofocus.io/demo. The company held one patent. [25]
AudioFocus targeted the large population of adults with hearing loss who either do not use hearing aids or are dissatisfied with the ones they have. The company's stated market framing: 37 million US adults have hearing loss, but only 8 million use hearing aids. [26] The gap — 29 million people who need help but don't use the available products — represents both the opportunity and the implicit indictment of existing technology. The primary complaint driving non-adoption is poor performance in noisy environments, which AudioFocus cited as the number-one complaint among the more than 300 million people with hearing loss worldwide. [27]
The immediate target user was someone with mild-to-moderate hearing loss who spends time in social settings — restaurants, family gatherings, meetings — where conventional hearing aids fail. The voice fingerprinting feature suggests a secondary target: users with a specific, recurring communication need, such as hearing a spouse across a dinner table.
The global hearing aid market was valued at approximately $9 billion in 2019 and growing steadily, driven by aging demographics in developed markets. The US market alone represented roughly $3 billion annually. The 29-million-person gap between diagnosed hearing loss and hearing aid adoption in the US represents a significant latent market — one that the industry has consistently failed to capture due to cost, stigma, and performance limitations. AudioFocus's technology, if it had reached consumer form factor, would have addressed the performance barrier directly.
However, the market has structural features that complicate entry. Hearing aids in the US were, until 2022, classified as Class II medical devices requiring audiologist fitting and prescription. The over-the-counter (OTC) hearing aid category was created by the FDA in August 2022 — after AudioFocus had already been operating for three years — which would have opened a direct-to-consumer channel that did not exist at the company's founding.
The incumbent hearing aid manufacturers — Phonak (Sonova), Oticon (Demant), Starkey, Widex, and Signia (WS Audiology) — collectively control roughly 90% of the global market. All of them have invested heavily in noise reduction technology. Phonak's Roger system, for example, uses a remote microphone to transmit a target speaker's voice directly to the hearing aid, addressing the cocktail party problem through a hardware workaround rather than an algorithmic one. Oticon's OpenSound Navigator uses a different approach: rapid scene analysis to selectively attenuate noise sources.
AudioFocus's approach was technically distinct from both. Rather than a remote microphone or scene-level noise gating, it used spatial echo analysis to filter by distance — a more elegant solution that required no additional hardware worn by the conversation partner. But the incumbents had advantages that no seed-stage startup could match: decades of miniaturization expertise, proprietary chip designs, FDA relationships, and established audiologist distribution networks.
In the startup space, companies like Eargo and Olive Union were pursuing OTC hearing aids with a consumer-friendly form factor and price point. These companies competed on accessibility and cost, not algorithmic performance. AudioFocus was not competing in that segment — it was attempting to build a technically superior product for users who needed genuine performance improvement, a harder and more expensive problem.
AudioFocus's intended business model was direct-to-consumer hardware sales of a premium hearing aid. The product would have been priced as a medical-grade device, likely in the $1,000–$3,000 range consistent with the premium hearing aid market. No revenue figures or pricing details were ever made public, and no paying customers are documented at any point in the company's history.
The company's actual funding model during its operational life was a hybrid of venture seed capital (YC, Xoogler network), non-dilutive research grants (NIA/Johns Hopkins A2 Pilot Grant), and institutional relationships (UC Berkeley). [28] [29] This mix is more characteristic of an academic spinout than a venture-backed product company. The NIA grant in particular signals that AudioFocus was operating partly on a clinical research model — generating peer-reviewed evidence of efficacy — rather than a pure product commercialization model. Whether this was a deliberate strategy or a consequence of being unable to raise commercial venture capital is not documented.
AudioFocus's traction was entirely clinical rather than commercial. The most significant result was a 2–3x improvement in noise tolerance measured using two standardized audiological tests: the Quick Speech-in-Noise (QuickSIN) test and the Acceptable Noise Level (ANL) test. [30] These are validated, peer-reviewed instruments used by audiologists to assess hearing aid performance. A 2–3x improvement is a clinically meaningful result — not a marginal gain.
The company secured an A2 Pilot Grant from the National Institute on Aging and Johns Hopkins University, indicating that the research passed peer review. [31] Academic collaborations with Stanford, Johns Hopkins, and the University of the Pacific further validated the technology's scientific credibility. [32]
On the patient side, Mobin began recruiting volunteers as early as November 2019, presenting at the Hearing Loss Association of America East Bay Chapter. [33] By July 2023 — nearly four years later — the company reported "several excited patients" and an ongoing pilot study with a professor in San Francisco. [34]
The phrase "several excited patients" after four years of operation is the most telling traction metric in the public record. It confirms that the technology generated genuine enthusiasm among users who tried it — and that the company never moved beyond a small prototype cohort to any form of commercial distribution.
AudioFocus did not fail because the problem was wrong, the science was bad, or the market was too small. It failed because the gap between a working research prototype and a deployable consumer product was a hardware engineering and capital problem that the company's resources could not bridge. The failure unfolded across four distinct dimensions.
In September 2020 — 16 months after founding — Mobin posted on Hacker News and explicitly named the technical barriers preventing commercialization: a 10ms latency requirement (audio delays above this threshold cause the user to hear an echo of their own voice, making the device unusable), a small power budget (hearing aids run on tiny batteries lasting 3–7 days), consistent real-time performance, and fault tolerance. [35]
These are not software problems. Running a deep learning model that processes audio in under 10ms, on a device drawing milliwatts of power, in a package small enough to fit behind or inside an ear, requires either custom silicon (an application-specific integrated circuit, or ASIC) or an extremely optimized implementation on a specialized low-power processor. Designing a custom chip runs $5–20 million in engineering and fabrication alone — roughly 13–50x AudioFocus's total documented funding of $393K. [36]
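The frame-buffering arithmetic alone shows how tight the 10ms constraint is. This back-of-envelope calculation (our numbers, not the company's) assumes a 16 kHz sample rate and a block-based STFT pipeline, where the device must wait for a full frame of samples before inference can even begin:

```python
# Back-of-envelope latency budget for a block-based audio pipeline.
# Assumptions (ours): 16 kHz sample rate, 10 ms end-to-end budget.
FS = 16_000          # samples per second
BUDGET_MS = 10.0     # delay above which users hear an echo of their own voice

for frame in (512, 256, 128, 64):
    buffering_ms = frame / FS * 1000          # time just to fill one frame
    headroom_ms = BUDGET_MS - buffering_ms    # what remains for inference + I/O
    print(f"{frame:4d}-sample frame: {buffering_ms:5.1f} ms buffering, "
          f"{headroom_ms:5.1f} ms headroom")
```

A 512-sample frame, comfortable for a research model, exceeds the entire budget before any computation happens; a 128-sample frame leaves roughly 2ms for the model, the filter bank, and the audio driver combined. That is the shape of the constraint Mobin described.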
The Variscite VAR-SOM-MX8 board that AudioFocus used for its prototype is a system-on-module roughly the size of a credit card — orders of magnitude larger and more power-hungry than what a consumer hearing aid can accommodate. The team knew this. Mobin stated publicly in September 2020 that the in-ear-canal form factor was "5–10 years away." [37] That statement, made 16 months into the company's life, was effectively a public acknowledgment that the consumer product was beyond the company's near-term reach.
The team attempted to address this by targeting the behind-the-ear form factor first — a larger device with more room for compute — and iterating through three hardware design cycles. [38] By July 2023, the BTE prototype was running. But BTE is still not a consumer product; it is a clinical research device. The attempt to stage the hardware problem — BTE first, in-ear later — was rational, but it did not resolve the fundamental constraint: even the BTE prototype required research-grade compute that no hearing aid manufacturer would ship.
Total documented funding was $393K. [39] This figure is not a seed round for a hardware-AI medical device company — it is approximately one senior engineer's annual compensation in the Bay Area. The investor base (YC, Xoogler network, NIA grant, UC Berkeley) reflects a company that relied heavily on institutional relationships and non-dilutive grants rather than attracting commercial venture capital. [40]
No Series A or follow-on venture round is documented. This is the most direct signal of commercial investor skepticism. Venture investors evaluating AudioFocus in 2020 or 2021 would have seen: a four-person team, a research-grade prototype, a 5–10 year timeline to consumer form factor stated by the founder himself, and a regulatory environment requiring FDA clearance for a prescription medical device. The risk-adjusted return calculation did not work for standard venture timelines.
The company attempted to compensate through grant funding — the NIA A2 Pilot Grant is a meaningful non-dilutive award — but grant funding operates on academic timelines and cannot substitute for the capital required to hire chip designers, run clinical trials at scale, and build manufacturing relationships. The grant validated the science; it did not fund the commercialization.
The NIA grant and the academic collaborations with Stanford, Johns Hopkins, and the University of the Pacific were genuine achievements. [41] They produced credible clinical data — the 2–3x QuickSIN/ANL improvement — and gave the company scientific legitimacy. But they also pulled the company toward an academic research model at the expense of commercial velocity.
Clinical research moves slowly. IRB approvals, patient recruitment, data collection, and analysis take months to years. A company operating on grant funding and academic collaboration timelines cannot iterate at startup speed. By July 2023, AudioFocus was still running a pilot study with "a professor in San Francisco" — a description that sounds like a research collaboration, not a commercial deployment. [42] The company spent four years generating evidence that the technology worked, without building the commercial infrastructure to sell it.
Mobin's credentials were exceptional for building the algorithm and securing academic credibility. A PhD in auditory neuroscience and ML from UC Berkeley, combined with production ML engineering experience at Google Brain, is precisely the background needed to design a novel acoustics-informed deep learning system. [43]
But building a consumer medical device requires a different skill set: chip architecture, FDA regulatory strategy, audiologist channel development, manufacturing partnerships, and Series A fundraising from hardware-focused investors. The team included a hardware expert (Dr. Reza Kassayan, formerly of EarLens), but a four-person team cannot simultaneously advance ML research, embedded hardware design, clinical validation, regulatory strategy, and commercial development. The company appears to have prioritized the first three — where the founders had the deepest expertise — at the expense of the last two.
There was no shutdown announcement, no acqui-hire press release, and no post-mortem blog post. Mobin's LinkedIn profile was updated to list Modal — an AI infrastructure company — as his current employer, and his LinkedIn description of AudioFocus shifted to past tense: "Tackling the low adherence rate of hearing aids will forever hold a special place in my heart, especially because hearing aids have such an impact reducing cognitive decline & social isolation." [44] The YC company page remains live with no indication of shutdown or acquisition. The company appears to have simply run out of money and momentum, with the founder moving on without a formal conclusion.
The absence of a shutdown announcement is itself informative. Companies that are acquired announce it. Companies that fail dramatically often generate press coverage. Companies that quietly exhaust their runway and dissolve — particularly those operating in a research mode with no commercial customers — simply stop. AudioFocus stopped.
The gap between "works in a lab" and "works in a product" is a capital problem, not just an engineering problem. AudioFocus demonstrated 2–3x clinical improvement on validated tests — the algorithm worked. But translating that algorithm from a research-grade embedded board to a consumer hearing aid required custom silicon or highly optimized embedded ML, both of which cost far more than $393K. Deep-tech hardware startups need to raise capital commensurate with the miniaturization challenge, not just the software challenge. A founder who publicly estimates a 5–10 year timeline to consumer form factor in year two of a seed-funded startup has implicitly described a company that cannot succeed on venture timelines without a step-change in funding.
Founder-market fit must extend to commercialization, not just technology. Mobin's auditory neuroscience PhD and Google Brain experience were ideal for building the algorithm and securing academic credibility. They were less suited to the FDA regulatory pathway, audiologist channel development, and hardware-focused Series A fundraising that AudioFocus needed. A co-founder or early executive with medical device commercialization experience — someone who had navigated FDA 510(k) clearance and built relationships with hearing aid distributors — might have changed the company's trajectory.
Grant funding validates science but does not substitute for commercial capital. The NIA A2 Pilot Grant and academic collaborations with Stanford, Johns Hopkins, and the University of the Pacific were genuine achievements that confirmed the technology's clinical value. But grant-funded research operates on academic timelines and produces papers and pilot data, not products. AudioFocus spent four years generating clinical evidence without building the commercial infrastructure to monetize it. Non-dilutive grants are valuable supplements to venture capital; they are not replacements for it.
Publicly stating a 5–10 year timeline to consumer form factor is a fundraising liability. Mobin's September 2020 Hacker News comment was honest and technically accurate. It was also, from a venture investor's perspective, a signal that the commercial opportunity was beyond standard fund timelines. Investors who read that comment in 2020 would have been evaluating a 2025–2030 consumer product launch — outside the return window of most seed and Series A funds. Transparency about technical timelines is admirable; it also has consequences for the company's ability to raise follow-on capital.
The OTC hearing aid market opened too late to help. The FDA's August 2022 creation of the over-the-counter hearing aid category would have given AudioFocus a direct-to-consumer channel without audiologist intermediaries — potentially a significant commercial unlock. But by 2022, AudioFocus had been operating for three years on $393K and still had only a research prototype. The regulatory tailwind arrived after the company had already exhausted its runway and momentum.