FLAME University

MEDIA

FLAME in the news

AI in Mental Health: Can Chatbots Truly Support Well-being?

https://nenow.in/opinion/ai-in-mental-health-can-chatbots-truly-support-well-being.html | April 17, 2025

While AI chatbots or conversational platforms can provide us with a space to express our stress, they cannot comprehend or rightly address deeper-level mental health concerns

In the past two years, the role AI plays in my life has changed dramatically. ChatGPT, the first of many AI platforms I adopted, was my key to finding citations in college. Now, as a working professional, it plays multiple roles: an editor, a proofreader, a friend I bounce pitch ideas off, or a coach who helps me plan my future.

AI chatbots have existed for some time, bringing their own set of issues, such as false positives, regulatory gaps, data privacy, and consent. Recently, however, amid a stressful evening filled with anxiety, I turned to the ChatGPT app on my phone. It felt like a non-judgmental outlet that could offer actionable advice or research-backed explanations for my feelings: essentially, a robotic, logic-based response that could snap me out of my state without bias.

While it did exactly that, to my surprise the suggestions came coupled with words of affirmation I have often heard therapists use. It is not shocking that popular phrases used to express empathy are easily available online. Still, it raises a question: if AI can mimic the advice of mental health professionals and, to a degree, provide comfort, what would this mean for the industry?

There is no denying that many still carry the weight of stigma when it comes to traditional therapy. For those reluctant to visit a counselor or therapist, whether due to fear of societal judgment or the discomfort of sharing struggles with another person, an accessible digital companion can feel like a solution. Studies indicate that AI chatbots can act as a safe (temporary) solution (Fitzpatrick et al., 2017; Vaidyam et al., 2019). In addition, research has shown that before reaching out to human professionals, individuals may find these digital interactions with conversational agents comforting (Provoost et al., 2017).

Early studies suggest that digital interventions can provide relief from symptoms of anxiety by employing techniques similar to cognitive-behavioral therapy (Fitzpatrick et al., 2017; Fulmer et al., 2018). These chatbots can give structured, yet algorithmic, responses that offer a semblance of empathy and understanding (Fulmer et al., 2018; Vaidyam et al., 2019). However, such interventions are not without challenges. The advice provided by AI can often be too generic or out of context, failing to capture the nuanced, personal nature of an individual's emotional distress (Provoost et al., 2017; Hoermann et al., 2017).

However, in the current age of digital evolution, AI-powered mental health support is not confined to specialized chatbots. General-purpose platforms such as ChatGPT and Claude on our smartphones act as quick tools, offering comfort and solutions in the palm of our hand. Whether one is navigating a difficult conversation at work or feeling overwhelmed by personal challenges, these AI channels have growing potential to be adopted as immediate, on-demand stress busters. Even before ChatGPT's release, conversational agents like Siri and Cortana were used for similar purposes (Miner et al., 2016).

Despite the growth of these AI tools, it must be remembered that mental health concerns are complex: each situation is unique, shaped by an individual's history, emotions, and experiences, which no algorithm can fully decipher. While AI chatbots or conversational platforms can give us a space to express our stress, worry, or sadness, and potentially provide temporary relief in the moment, they cannot comprehend or rightly address deeper-level mental health concerns such as psychosis, manic or depressive episodes, or any other condition truly plaguing an individual's life. Thus, while AI might offer a temporary fix, it cannot replace the human touch and the lens of genuine empathy and expertise essential for understanding and resolving the intricacies of our inner dialogue. To put it simply, the human condition is best understood by another human (at least, that is what we know for now).

Another pressing concern in the digital age is data privacy. Research has argued that the rapid evolution of technology is outpacing the rate at which ethical guidelines are developed to protect our data (Torous & Nebeker, 2017). Additionally, while the privacy implications of large language models are still being understood, a study of OpenAI's GPT-3 model argued for robust ethical guidelines and regulatory frameworks to govern the deployment and further development of such AI systems, ensuring that their benefits do not come at an unacceptable ethical cost (Floridi & Chiriatti, 2020).

Looking ahead, as AI continues to evolve and integrate more seamlessly into our daily routines, its potential seems boundless. While no one can know how it will revolutionize our lives, in the context of mental health, I believe it will always be a step behind the nuanced care offered by a mental health practitioner.

Authors: Muskaan Grover, FLAME Alumna & Prof. Moitrayee Das, Faculty of Psychology, FLAME University.

(Source: www.nenow.in)