4,000 people underwent AI robot psychotherapy without knowing it
Koko, a non-profit mental health platform, links people in need of help with volunteers.
Users communicate with the Koko bot by answering multiple-choice questions.
4,000 people received unannounced psychological support from OpenAI's GPT-3.
The bot was just as effective in providing therapeutic responses as a human.
Through chat services like Telegram and Discord, the non-profit mental health platform Koko links teenagers and adults in need of mental health care with volunteers. Users log onto the Koko Care's Discord server and communicate directly with the Koko bot, which asks multiple-choice questions such as "What is the darkest idea you have about this?" The bot then anonymously relays the person's worries to another user on the server, who can respond anonymously with a brief message of their own.
This innovative way of connecting people in need of mental health assistance with volunteers has enabled Koko to bridge the gap between those who have not been able to access the help they need and those who are willing to offer it.
However, Rob Morris, the co-founder of Koko, decided to test responses generated by an AI bot rather than written by human volunteers. According to Morris's tweet, approximately 4,000 people received psychological support from OpenAI's GPT-3. The results suggested that the AI bot was just as effective as a human at providing therapeutic responses.
People seemed to appreciate the psychological support and assistance that GPT-3 offered. Yet shortly after announcing the results, Morris removed the feature from Telegram and Discord. Why? This type of therapy stopped working once individuals realized the messages were generated by a machine; simulated empathy can come across as odd and hollow.
Something else might be at play as well. Morris's experiment has drawn criticism on social media and has been labeled unethical. After all, the participants were deceived and unable to give informed consent; instead, they became unwitting subjects in the study.
In a regular scientific study, participants are informed about how the study will be carried out, and they can withdraw their consent at any time up until the publication of the results. Here, there was a major breach of trust: people with mental health issues trusted the platform to provide them with help from another human being, while actually receiving help from an AI. This does not mean the help itself was of poor quality. But in any therapeutic setting, trust is a major factor in outcomes, and this kind of deception could shatter the trust people once had in a very promising platform.
Koko's experiment highlighted how easily people on social media can be deceived, raising concerns that the technology could be used maliciously to spread misinformation and fake news. This time it was done in good faith, but what about the next time?