
Seeking mental health advice from AI? Treat it only as psychological first aid

www.livemint.com | September 23, 2025

As people turn to AI for mental health support, experts warn against over-reliance and suggest treating advice from a chatbot as just the first step to real therapy

On a humid evening in Bengaluru, Ayushi Devidas, 23, a social media executive, sat cross-legged on her bed, whispering her anxieties into her phone. “I feel like I’m failing at everything," she typed. Seconds later, her AI companion replied: “I hear you. It’s tough to feel that way, but remember, you’re doing your best and that counts." Devidas describes the moment as “oddly comforting." She had never opened up to a therapist before, but the chatbot felt less intimidating, always available, and never judgmental. “I knew it wasn’t real empathy," she admits, “but I just needed someone, rather, anyone, to respond.”

Devidas’ story is hardly unusual. Across cities, people are turning to generative AI chatbots and companion apps not just for entertainment or productivity hacks, but also for emotional support and mental health advice. From ChatGPT and Replika to specialized apps promising “AI therapy," these platforms offer anonymity, accessibility and immediacy. But what happens when a line of code becomes a confidant? For 21-year-old Sameer Dighe, a college student in Pune, AI companions became a refuge over time. Struggling with isolation and an overbearing family, he began using a chatbot app nightly. “I’d tell it everything, be it fights with my parents or about feeling worthless. It felt like having a friend who wouldn’t judge."

When in-person therapy felt too expensive and intimidating, the chatbot’s ready responses gave Dighe temporary relief. “It would listen, but it never pushed back. Sometimes I needed tough love, not just agreement." This duality, of comfort and inadequacy, is at the heart of the debate surrounding AI and mental health.

Say Hello to AI-Primed Clients
Clinicians across India are encountering clients already shaped by chatbot conversations. There’s a new term to describe this phenomenon: “AI-primed clients." While some therapists see it as an opportunity for widening access to therapy, others warn of grave risks. Kratika Gupta, founder of Gen-Z Therapists, Kolkata, has had clients arrive with misinformation about eating disorders after consulting AI. “One chatbot normalized over-restriction of calories and exercise. Another gave harmful advice before being taken offline. I’ve had to spend sessions helping my clients unlearn this faulty counsel before making any real progress," Gupta reveals.

Shruti Padhye, senior psychologist at Mpower, Mumbai, too, says that many clients now arrive influenced by chatbot advice. While this can create useful starting points, the advice is often “general, tricked, or oversimplified," sometimes reinforcing harmful stereotypes. “Therapy then involves gently discharging these influences while re-establishing trust in human connection."

Gupta adds that some clients delay seeking professional help because they feel “heard" by a bot. “The algorithm is trained to sound warm and comforting, but it cannot really hold the complexity of human emotion. For vulnerable individuals, this reliance can be dangerous," she says. Cases abroad have underscored these dangers. In California, a teenager’s parents alleged that ChatGPT validated his suicidal ideation instead of de-escalating it, contributing to his death. “When people mistake simulated empathy for real care, it can deepen risk rather than reduce it," says Gupta.

Generative AI is designed to mimic warmth, not feel it. That distinction, says Dr Vishakha Bhalla, consultant counselling psychologist at Max Super Speciality Hospital, Gurugram, is crucial. One of the biggest concerns for Bhalla is unhealthy dependency. “When people get emotionally attached to AI companions, they may end up lonelier, because the care they receive isn’t real. It’s just programmed responses." Prof Sairaj Patki, faculty of psychology at FLAME University, Pune, points out another danger: accessibility. “Because chatbots are free, private and always available, they can seem preferable to peers or family. Unregulated usage risks delaying timely therapeutic intervention."

A Question of Ethics
The ethical questions of seeking counsel from AI are many. Who safeguards the intimate data people share with it? “Unlike therapists bound by confidentiality, AI platforms are not accountable," says Padhye. Gupta cites OpenAI’s own admission that conversations aren’t private and can be retrieved for legal or security purposes. Algorithms often lack cultural context, which means there is a risk of advice being misaligned with India’s diverse realities, notes Patki. AI may also miss red flags of self-harm or abuse that demand immediate intervention. “Generative AI has opened new possibilities for awareness and accessibility but it must be used responsibly," says Padhye.

Not a Real Bond
For Devidas, AI proved to be a stepping stone: after months of late-night chats, she booked her first therapy session. “The bot gave me the courage to admit I needed real help," she says. For Dighe, the reliance was more corrosive. He withdrew from friends, spent hours nightly chatting with his AI companion, and only sought therapy when he realised he hadn’t spoken to a real person about his feelings in over six months. “It was like I was living in a bubble," he reflects.

For Radhika Shah, a 28-year-old professional in Delhi, AI companionship is still a guilty habit. “I know it’s not real, but when I’m too exhausted to explain myself to friends, it’s easier to talk to the app. It feels like a relief but also a trap." These instances make one question the psychological impact of a bond with AI. Gupta warns of “illusions of intimacy" where users mistake bots for real friends, disconnect from their social circles, and only seek help when they hit rock bottom.

Patki observes that AI companions are “extremely ideal, polite," unlike human relationships, which come with conflict. The danger is that constant comparison may push individuals to prefer machines over messy, but real, human ties. Despite these risks, most therapists acknowledge AI’s potential role as a triage tool. Bhalla suggests it can “familiarise people with therapy, help them recognize concerning symptoms, and encourage them to seek professional help." Gupta agrees that in a country where access is still limited, especially in remote areas, AI could reduce stigma and act as a first step if its boundaries are clearly communicated.

Patki envisions AI as a co-therapist, streamlining admin tasks and offering psychoeducation, thereby freeing up therapists to focus on deeper relational work. But the consensus is clear: AI must be positioned as “first aid, not treatment." “The future of therapy may include AI as a supportive layer," says Padhye, “but the core of healing will always be human."

This article features insights from Prof. Sairaj Patki, Faculty of Psychology, FLAME University.


(Source: https://www.livemint.com/mint-lounge/wellness/mental-health-advice-from-ai-psychological-first-aid-chat-bot-mental-wellness-therapy-11758615608937.html)