Healthcare · Thursday, April 2, 2026 · 2 min read

Kintsugi's depression-detecting AI goes open-source after FDA setback

Source: The Verge AI

TL;DR

After seven years of development, California startup Kintsugi is shutting down after missing FDA clearance timelines, but it's releasing most of its speech-based depression and anxiety detection tech as open-source. The move could accelerate research, enable repurposing (for example, deepfake-audio detection), and surface lessons about AI regulation in healthcare.

Key Takeaways

  1. Kintsugi built AI that analyzes how people speak to detect signs of depression and anxiety.
  2. The company is closing after failing to secure FDA clearance in time, but will open-source most of its technology.
  3. Open-sourcing can broaden access for researchers, clinicians, and developers and speed follow-on innovations.
  4. Parts of the system may find new life outside healthcare, including use cases like deepfake-audio detection.
  5. The story highlights both the promise of clinical AI and the real regulatory hurdles that shape deployment.

Kintsugi shutters but shares its work

After seven years of building a voice-based mental health detection system, California startup Kintsugi has announced it will close operations after failing to secure FDA clearance within the timeframe it needed. Rather than letting the research and engineering vanish, the company is releasing most of its software as open-source — a move that turns an individual setback into a potential win for the wider community.

How the technology worked: Kintsugi focused not on the content of speech but on paralinguistic features — the cadence, tone, and other markers of how someone speaks — to flag signs of depression and anxiety. That approach offered a promising complement to traditional clinical assessments, which rely heavily on questionnaires and interviews rather than objective signal-driven markers.
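To make "paralinguistic features" concrete, here is a minimal sketch of the kind of signal-level markers such a system might start from: pitch, loudness, and voicing activity, computed on a short audio frame. This is a generic illustration only; the article does not describe Kintsugi's actual feature set or models, and the function names and thresholds below are illustrative assumptions.

```python
import numpy as np

def estimate_pitch_hz(signal, sr):
    """Estimate fundamental frequency by autocorrelation, a classic
    building block of paralinguistic analysis (not Kintsugi's method,
    which the article does not detail)."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Search for the autocorrelation peak within a plausible
    # human pitch range (~60-400 Hz).
    lo, hi = int(sr / 400), int(sr / 60)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def speech_features(signal, sr):
    """A few generic 'how you speak' markers for one audio frame:
    pitch, RMS energy (loudness), and zero-crossing rate."""
    return {
        "pitch_hz": estimate_pitch_hz(signal, sr),
        "rms_energy": float(np.sqrt(np.mean(signal ** 2))),
        "zero_crossing_rate": float(
            np.mean(np.abs(np.diff(np.sign(signal))) > 0)
        ),
    }

# A synthetic 200 Hz tone stands in for a voiced speech frame.
sr = 16000
t = np.arange(0, 0.05, 1 / sr)
frame = 0.5 * np.sin(2 * np.pi * 200 * t)
feats = speech_features(frame, sr)
```

A real system would compute features like these over many frames and track their statistics over time (cadence, pauses, pitch variability) before any clinical inference; that downstream modeling is where the open-sourced work would matter most.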

By open-sourcing the code and models, Kintsugi is enabling researchers, clinicians, and developers to study, validate, and adapt the tools. Some components may be repurposed beyond clinical settings — The Verge notes potential applications like detecting deepfake audio — which could amplify the project's real-world impact even as the original company winds down.

Beyond the immediate software release, the story is a constructive reminder that clinical AI progress often follows a longer arc: prototypes and pilots, iterative validation, and close work with regulators. Kintsugi's choice to share its work broadly makes its technical advances available to more teams and could accelerate safer, better-validated AI tools for mental health and adjacent challenges.
