Karin Stephan on Designing Technology That Heals, Hosted by Sanjay Puri on the RegulatingAI Podcast
On the RegulatingAI Podcast, Karin Stephan explains how to build ethical, empathetic AI for mental health, focusing on memory, safe regulation, and youth vulnerability.
"One of the most misunderstood parts is that we immediately jump to it's either AI or a human, and that's just simply not the discussion we should have."
— Karin Stephan
WASHINGTON, DC, UNITED STATES, December 12, 2025 /EINPresswire.com/ -- AI promises a solution to the global mental health crisis, yet recent tragedies, such as the case of the Belgian teen who took his own life after an interaction with a chatbot, expose the profound risks of unchecked development. On the RegulatingAI Podcast, host Sanjay Puri spoke with Karin Stephan, Co-founder of Yearick, to cut through the noise and discuss what truly safe, emotionally intelligent technology looks like.
From Music to Mind: Understanding the Human Need
Karin's journey from running a music school for 19 years to becoming a mental health tech entrepreneur was rooted in a simple but profound observation: her students stayed because the school offered a safe space where they felt heard and guided. She realized that being heard is a fundamental, unmet human need. Her pivot to AI was driven by the mission to provide that crucial "companion" and "listening ear" at a scale that human professionals cannot match.
The Crisis Point: Why Current AI Fails the Vulnerable
Karin argues that the danger lies not in the tech itself, but in the misaligned incentives of developers who fail to "obsess with observing" their human users. Teens are uniquely vulnerable. In their formative years, they are driven to explore and take risks, yet lack the fully developed cognitive ability to process or oversee the consequences of their actions.
Threading the Needle: Accessibility and Safe Intervention
AI is necessary, but solutions must be accessible and embedded into people's lives—at work, at school—regardless of financial background. The technology should be a bridge during acute moments, offering a breathing exercise at 3 a.m. or a non-judgmental space to process bullying. However, it is not a fix for systemic failures like abuse or a toxic workplace.
The Regulatory Challenge: Liability vs. User Experience
Karin insists that regulation is essential but warns that current approaches often prioritize liability over user safety. For example, abruptly blocking a user who is opening up about self-harm with a large warning banner—a common liability shield—only frustrates and isolates them at their most vulnerable point. Instead, regulators must demand nuanced, seamless escalation to human resources.
Furthermore, she argues that privacy is technically possible and must be built in from the start, challenging the common practice of collecting user data for marketing purposes.
Building Truly Intelligent Companions
For AI to truly help, it must go beyond reducing complex feelings to basic labels like "sad" or "anxious." The technology must help the user become more nuanced in their emotional literacy.
Finally, Karin highlights what AI can do better than humans in mental health: memory. An AI with deep, persistent memory can connect the dots in a user's story over time, allowing for a much more individualized and effective response. The ultimate goal, she concludes, is not AI or a human, but using the power of AI to augment and amplify human support.
Upasana Das
Knowledge Networks
email us here
Visit us on social media:
LinkedIn
Instagram
Facebook
YouTube
X
Legal Disclaimer:
EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.