National Eating Disorder Helpline Embraces AI Chatbot and Dismisses Staff


In Brief

The National Eating Disorder Helpline has made the decision to replace its human staff with an AI chatbot.

The helpline’s staff union strongly condemns the decision, expressing disappointment with the organization.



The National Eating Disorder Association (NEDA) has decided to shut down its longstanding telephone helpline. From June 1, NEDA will dismiss the small team responsible for managing and operating the helpline and will instead introduce an AI-driven chatbot called “Tessa” to assist individuals seeking support. The move came shortly after the helpline’s employees formed a union.

NEDA’s staff were notified of their termination and the helpline’s closure only four days after they announced a union earlier this month. The workers’ group, Helpline Associates United, claims that NEDA is punishing them for unionizing.

The association’s spokesperson Chase told Gizmodo that NEDA has moved on from the helpline, which was established in 1999 but which the organization now considers outdated in the internet era. Instead, NEDA aims to enhance its online experience and plans to launch an updated website by the end of 2023.

NEDA reported that approximately 70,000 individuals sought assistance from the helpline last year. During the COVID-19 crisis, the number of people seeking help more than doubled and never returned to pre-pandemic levels.

The workers’ union condemned the decision: “A chatbot is no substitute for human empathy, and we believe this decision will cause irreparable harm to the eating disorders community.”

Helpline workers provide a vital service to people who are struggling and need someone to listen and offer support. Replacing them with an AI chatbot raises significant concerns. A chatbot may lack the ability to empathize with people and understand their emotions, and it may struggle to handle complex or sensitive situations that require human judgment and intervention. It is also less able to establish trust or make people feel valued and respected; worse, incorrect automated responses could aggravate the very problems callers are seeking help with.

Agne Cimermanaite

Agne is a journalist who covers the latest trends and developments in the metaverse, AI, and Web3 industries for the Metaverse Post. Her passion for storytelling has led her to conduct numerous interviews with experts in these fields, always seeking to uncover exciting and engaging stories. Agne holds a Bachelor’s degree in Literary Studies from the University of Amsterdam and has an extensive background in writing about a wide range of topics including cybersecurity, travel, art, and culture. She has also volunteered as an editor for the animal rights organization “Open Cages,” where she helped raise awareness about animal welfare issues. Currently, Agne splits her time between Barcelona, Spain, and Vilnius, Lithuania, where she continues to pursue her passion for journalism.


© Metaverse Post 2022