AI improves the empathy of mental health peer supporters

Artificial intelligence is already making waves in medicine, identifying potential treatments and improving care for patients, and now it is moving into the mental health sector.

HAILEY, an AI chat interface, is showing promise as a tool to help peer supporters on mental health platforms interact more empathically with people seeking support online.

A study of HAILEY’s performance was published in the journal Nature Machine Intelligence.

Mental health problems are widespread. According to a 2021 survey reported by the Australian Institute of Health and Welfare, more than two in five Australians (44%) are estimated to have experienced a mental disorder at some point in their lives. In the 12 months leading up to the survey, an estimated 21% of Australians (4.2 million people) experienced a mental disorder.

Among these conditions, anxiety disorders are the most common, followed by affective disorders and substance use disorders.

For many people, ongoing support is difficult to sustain because of cost and the effects of the illness itself. Access to treatment and counseling is often limited.

Read more: New AI tools can help diagnose rare diseases and predict treatment


Peer-to-peer platforms in non-clinical settings can provide some of this care, and their use has been shown to be significantly associated with improvements in mental health symptoms. In particular, their accessibility makes online mental health support services an important avenue for helping people with mental health conditions, with the potential to change or even save lives. Psychological research shows that in these settings, empathy is critical.

Tim Althoff, an assistant professor of computer science at the University of Washington, and his colleagues designed HAILEY to facilitate communication between peer supporters and support seekers.

The chat interface uses an existing language model that has been specifically trained for empathic writing.

To test HAILEY, the team recruited 300 peer supporters from the TalkLife platform to take part in a controlled trial. Participants were divided into two groups, one of which had access to HAILEY’s help. Supporters responded to real-world posts, which were filtered to exclude crisis-related content.

For one group, HAILEY suggested phrases to replace or insert. Supporters could then choose to accept or ignore HAILEY’s suggestions. For example, HAILEY suggested replacing the phrase “do not worry” with “it must be a real struggle”.
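The Python sketch below illustrates the shape of this kind of AI-in-the-loop flow. It is a minimal illustration under stated assumptions, not the study’s implementation: the lookup-table “model” and the function names are stand-ins for HAILEY’s trained language model.

```python
# Illustrative sketch of an AI-in-the-loop suggestion flow like the one
# described above. The canned replacement below is a toy stand-in;
# HAILEY itself uses a language model trained for empathic writing.

def suggest_rewrite(draft: str) -> str | None:
    """Return a more empathic rewrite of the draft, or None if no suggestion."""
    phrase, replacement = "don't worry", "it must be a real struggle"
    if phrase in draft.lower():
        return draft.lower().replace(phrase, replacement).capitalize()
    return None

def compose_reply(draft: str, accept) -> str:
    """The human supporter stays in control: each suggestion can be
    accepted or ignored before anything is sent to the support seeker."""
    suggestion = suggest_rewrite(draft)
    if suggestion is not None and accept(suggestion):
        return suggestion
    return draft  # no suggestion, or the supporter rejected it

# The supporter reviews the suggestion (auto-accepted here for the demo;
# in the real interface this would be a click in the chat UI).
print(compose_reply("Don't worry, it will pass.", accept=lambda s: True))
# -> "It must be a real struggle, it will pass."
```

The key design point, reflected in compose_reply, is that the AI only proposes text; the human supporter decides what is actually sent.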

The authors found that this collaborative approach between peer supporters and AI led to a 19.6% increase in conversational empathy, as evaluated by a previously validated AI model.
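For a sense of what such an automatic evaluation looks like, here is a toy sketch that compares mean empathy scores across the two conditions. The keyword heuristic empathy_score is a deliberately simple assumption for demonstration; the study used a separately trained and validated classifier, not keyword matching.

```python
# Toy illustration of comparing conversational empathy between conditions.
# The keyword heuristic is an assumption for demonstration only; the study
# scored empathy with a separately trained and validated AI model.

EMPATHIC_MARKERS = ("must be a real struggle", "that sounds",
                    "i hear you", "i understand")

def empathy_score(reply: str) -> float:
    """Toy proxy: fraction of empathic markers present in a reply (0 to 1)."""
    text = reply.lower()
    return sum(marker in text for marker in EMPATHIC_MARKERS) / len(EMPATHIC_MARKERS)

def mean_empathy(replies: list[str]) -> float:
    return sum(empathy_score(r) for r in replies) / len(replies)

human_only = ["Don't worry, it will pass.",
              "I understand, hang in there."]
human_plus_ai = ["It must be a real struggle, but it will pass.",
                 "I understand, that sounds exhausting."]

print(f"human only: {mean_empathy(human_only):.2f}")    # 0.12
print(f"human + AI: {mean_empathy(human_plus_ai):.2f}") # 0.38
```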

The increase in conversational empathy was even larger, at 38.9%, among peer supporters who identified themselves as having difficulty providing support, the authors write.

Their analysis shows that “peer supporters are able to use AI feedback both directly and indirectly without becoming overly reliant on it, while reporting improved self-efficacy following the feedback. Our findings demonstrate the potential of feedback-driven, AI-in-the-loop writing systems to empower humans in open-ended, social and high-stakes tasks such as empathic conversations.”

Read more: ChatGPT is making waves, but what does the AI chatbot mean for the future of writing?


However, the authors note that further research is needed to ensure the safe use of such AI systems “in high-stakes settings such as mental health care”, given considerations surrounding “safety, privacy and bias”.

“There is a risk that AI-based attempts to help could have adverse effects on vulnerable support seekers or peer supporters. Our current study includes a number of measures to reduce such risks and adverse effects. First, our collaborative AI-in-the-loop writing approach ensures that the primary conversation remains between two humans, with the AI providing feedback only when it is deemed helpful, and allowing human supporters to accept or reject that feedback. Affording such human agency is safer than relying on AI alone.”


