Kintsugi shutters but shares its work
After seven years of building a voice-based mental health detection system, California startup Kintsugi has announced it will cease operations after failing to secure FDA clearance within the timeframe it needed. Rather than letting the research and engineering vanish, the company is releasing most of its software as open source — a move that turns an individual setback into a potential win for the wider community.
How the technology worked: Kintsugi focused not on the content of speech but on paralinguistic features — the cadence, tone, and other markers of how someone speaks — to flag signs of depression and anxiety. That approach offered a promising complement to traditional clinical assessments, which rely heavily on questionnaires and interviews rather than objective signal-driven markers.
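Kintsugi has not published the details of its pipeline, but the general idea of paralinguistic analysis can be sketched in a few lines: slice the audio into short frames and compute signal-level statistics — things like frame energy, zero-crossing rate, and the fraction of near-silent frames (a rough proxy for pauses in speech). Everything below is illustrative, not Kintsugi's actual code; the function name, frame sizes, and silence threshold are assumptions chosen for the sketch.

```python
import numpy as np

def paralinguistic_features(signal, sr, frame_ms=25, hop_ms=10):
    """Toy prosodic-feature extractor (illustrative, not Kintsugi's pipeline).

    Splits a mono waveform into overlapping frames and returns three
    simple signal-level statistics often used as starting points in
    speech analysis: mean energy, mean zero-crossing rate, and the
    fraction of low-energy ("pause") frames.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, hop)]

    # Per-frame energy (mean squared amplitude).
    energy = np.array([np.mean(f ** 2) for f in frames])
    # Per-frame zero-crossing rate (sign changes per sample).
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])

    # Pause ratio: frames whose energy falls below an arbitrary
    # threshold (10% of the median non-zero frame energy).
    voiced = energy[energy > 0]
    threshold = 0.1 * np.median(voiced) if voiced.size else 0.0
    pause_ratio = float(np.mean(energy < threshold))

    return {
        "mean_energy": float(np.mean(energy)),
        "mean_zcr": float(np.mean(zcr)),
        "pause_ratio": pause_ratio,
    }

# Synthetic example: one second of a 220 Hz tone followed by one second
# of silence, standing in for speech followed by a long pause.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.concatenate([0.5 * np.sin(2 * np.pi * 220 * t), np.zeros(sr)])
feats = paralinguistic_features(audio, sr)
```

A real system would go much further — pitch contours, speaking rate, spectral features — and feed them into a trained classifier, which is exactly the kind of component the open-source release now lets outsiders inspect and validate.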
By open-sourcing the code and models, Kintsugi is enabling researchers, clinicians, and developers to study, validate, and adapt the tools. Some components may be repurposed beyond clinical settings — The Verge notes potential applications like detecting deepfake audio — which could amplify the project's real-world impact even as the original company winds down.
Beyond the immediate software release, the story is a constructive reminder that clinical AI progress often follows a longer arc: prototypes and pilots, iterative validation, and close work with regulators. Kintsugi's choice to share its work broadly makes its technical advances available to more teams and could accelerate safer, better-validated AI tools for mental health and adjacent challenges.