In Brief
Koko, a non-profit mental health platform, links people in need of help with volunteers.
Users communicate with the Koko bot, which guides them through multiple-choice questions.
Around 4,000 people unknowingly received psychological support from OpenAI's GPT-3.
The bot was just as effective as a human at providing therapeutic responses.
Through chat services like Telegram and Discord, the non-profit mental health platform Koko connects teenagers and adults in need of mental health care with volunteers. Users log onto the Koko Cares Discord server and communicate directly with the Koko bot, which walks them through multiple-choice questions such as "What is the darkest idea you have about this?" The bot then anonymously forwards the person's worries to another user on the server, who can respond anonymously with a brief message of their own.
This innovative way of connecting people in need of mental health assistance with volunteers has enabled Koko to bridge the gap between those who have not been able to access the help they need and those who are willing to offer it.

However, Rob Morris, the co-founder of Koko, decided to test whether an AI bot, rather than human volunteers, could compose the supportive responses. According to Morris's tweet, approximately 4,000 people received psychological support from OpenAI's GPT-3. The results showed that the AI bot was just as effective as a human at providing therapeutic responses.
AI-generated messages received noticeably higher ratings than messages written by humans, and response times dropped by 50%, Rob Morris points out.
People seemed to appreciate the psychological support and assistance that GPT-3 offered. Yet shortly after the announcement, Morris pulled the feature from Telegram and Discord. Why? This type of therapy stopped working once people realized the messages were generated by a machine: simulated empathy can come across as odd and hollow.
However, something else might be at play as well. Rob Morris's experiment has drawn criticism on social media and has been labeled unethical. After all, the participants were deceived and unable to give informed consent; instead, they became unwitting subjects in the study.
In a regular scientific study, participants are informed about how the study will be carried out, and they can withdraw their consent at any time up until the publication of the results. Here, there has been a major breach of trust: people with mental health issues trusted the platform to connect them with another human being, while the help actually came from an AI. That does not mean the help itself was of poor quality. But in any therapeutic setting, trust is a major factor in outcomes, and this kind of deception could shatter the trust people once had in a very promising platform.
Koko's experiment also highlighted how easy it is to deceive people on social media, raising concerns that the technology could be used maliciously by those seeking to spread misinformation and fake news. This time it was done in good faith, but what about the next time?