Gen Z is turning to AI for therapy. This deep dive explores why it’s happening, the product risks it creates, and what PMs can learn from the ethical dilemmas.
Gen Z’s AI therapy use reveals emerging feedback loops where chatbots adapt to users’ emotions in real time. Yet few tools integrate biometric sensors or verify users’ mental states to tailor interventions.
Very useful overview, thank you - your point on building more slowly and learning to iterate is key. A lot of mental health tech companies are rushing into this space, and while I generally think that's a good thing - these tools can be a great first point of care for people who can't see a therapist - I'm sure there's a lot of variation in their quality. We really need companies that are transparent about how they're using AI and that have third-party ethical review boards at each stage of development.
So you're saying that getting these biomarkers into the mix will make it better?
But won't it also make it riskier? That's a lot of personal data being shared.
Agree. Transparency is a great point. That builds confidence and trust in the product.
People are way more comfortable sharing data when they know how it is used.