Artificial Intelligence-powered mental health therapy remains a contentious prospect because of its limitations in sensitive cases, the risk of privacy breaches, and its lack of compassion.
But in a country like India, where there is enormous social stigma around mental illness and a severe shortage of skilled healthcare professionals, chatbot therapy could offer hope.
Researchers at the AI-Natural Language Processing-Machine Learning (AI-NLP-ML) Group in IIT Patna’s Department of Computer Science and Engineering (CSE) have developed an annotated dataset of around 5,000 English dialogues to make chatbots more polite and empathetic.
Researchers Priyanshu Priya, Mauajama Firdaus and Asif Ekbal are the brains behind the dataset, titled POEM (from POliteness and EMotion), which serves as an entry point, or primary assistance tool, for offering mental health and legal counselling to women and child victims of crime.
The AI agents can later use the details collected to refer the victims to specialised support services. The research has been featured on ScienceDirect, a global database of scientific research articles.
India has only 0.3 psychiatrists per 100,000 population. About 7.5% of its population suffers from some form of mental illness, according to the World Health Organisation.
Humane element
Priyanshu, a PhD student in the CSE Department, acknowledged that distressed victims may have reservations about interacting with a machine but said politeness could replicate the human touch.
“The idea is to provide a more consoling, reassuring presence; a support system at once empathetic to the victim and ready with basic legal counsel, like the relevant IPC sections and the extent of punishment,” she told DH.
The project is led by Asif Ekbal, Associate Professor in the Department of CSE. Mauajama was a PhD student at IIT Patna and is currently a postdoctoral researcher at the University of Alberta, Canada.
The annotation exercise involved detecting and classifying politeness and emotion in utterances. For annotating politeness in conversations, three labels were used: polite, impolite, and neutral. Emotions were annotated using 16 categories, including anticipation, confidence, hope, anger, fear, guilt, and trust.
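To make the labelling scheme concrete, a single annotated utterance in such a dataset might be stored along the following lines. The field names and example text here are illustrative assumptions, not the published POEM format.

```python
# Hypothetical record structure for one annotated utterance; the field names
# and values are assumed for illustration, not taken from the POEM release.
utterance_record = {
    "dialogue_id": 42,
    "speaker": "agent",
    "text": "We understand your frustration, and we will try our best to help.",
    "politeness": "polite",  # one of the three labels: polite, impolite, neutral
    "emotion": "trust",      # one of the 16 categories, e.g. hope, fear, guilt
}
```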
A conversation around domestic violence demonstrates how victim engagement improves when stock assurances like “don’t worry” or “try to be calm” are replaced with the more empathetic “we understand your frustration” or “we will try our best”.
The researchers consulted Dr Jyotsna Agrawal, Associate Professor, Department of Clinical Psychology, NIMHANS, and a top legal expert, in drafting guidelines for the dialogues.
The guidelines are aimed at making the agents more patient (not pressurising victims for details), non-judgemental, and respectful of the victims’ privacy. With such datasets, NLP professionals can develop AI systems that respond better to the victims, Priyanshu said.
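As a rough sketch of how an NLP practitioner might put such a politeness-annotated dataset to work, the snippet below fine-tunes an off-the-shelf classifier on the three politeness labels. The base model, file handling and toy examples are assumptions for illustration, not the researchers' published pipeline.

```python
# A minimal sketch: fine-tuning a politeness classifier on POEM-style labels.
# The model choice and toy data are assumptions, not the POEM pipeline itself.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["polite", "impolite", "neutral"]  # the three labels described above

class PolitenessDataset(Dataset):
    """Wraps (utterance, label) pairs; the data source here is hypothetical."""
    def __init__(self, pairs, tokenizer):
        self.pairs = pairs
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        text, label = self.pairs[idx]
        enc = self.tokenizer(text, truncation=True, padding="max_length",
                             max_length=64, return_tensors="pt")
        return {"input_ids": enc["input_ids"].squeeze(0),
                "attention_mask": enc["attention_mask"].squeeze(0),
                "labels": torch.tensor(LABELS.index(label))}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Toy utterances in the spirit of the article's domestic-violence example.
train_pairs = [
    ("We understand your frustration.", "polite"),
    ("Just calm down.", "impolite"),
    ("Could you tell me when this happened?", "neutral"),
]
loader = DataLoader(PolitenessDataset(train_pairs, tokenizer), batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:  # one illustrative pass; real training runs many epochs
    optimizer.zero_grad()
    loss = model(**batch).loss  # cross-entropy over the three labels
    loss.backward()
    optimizer.step()
```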