The new “Stanford experiment” has revealed a seemingly insurmountable vulnerability in people’s linguistic intuition. Because of it, our chances of identifying AI authorship are greatly diminished.
The ChatGPT revolution raises numerous concerns worldwide. The most evident is that an AI like ChatGPT can manufacture fakes so expertly that people cannot tell them apart from the truth. Earlier trials had already demonstrated ChatGPT’s enormous capacity to deceive. What remained unclear, however, was how this superpower works: what does AI possess that makes even highly intelligent, educated people believe it?
A new “Stanford experiment” was conducted by the Stanford Social Networking Lab in collaboration with the Cornell University Research Center to answer this very question.
The results of a series of six experiments involving 4,600 participants are both sensational and depressing.
People were tasked with determining whether self-presentations were written by a person or an AI. According to the researchers, self-presentation is one of the most personal and consequential elements of linguistic communication because our attitude toward any statement largely depends on who (as we believe) its author is.
At the heart of people’s linguistic perception are heuristics. The term refers to the mental shortcuts people take to arrive at decisions, solve problems, and make judgments. These shortcuts ease our mental workload and help us avoid cognitive overload.
“A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language,” the study reads. The experiments showed that when perceiving a text written by AI, people automatically apply the same heuristics they use when communicating with other people. And this is a fundamental mistake: what is intuition for us is something an AI reads easily and manipulates as mechanically as a multiplication table.
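To make the flaw concrete, here is a toy sketch (not code from the study, and deliberately naive) of a “human-likeness” scorer built on exactly the cues the researchers identified: first-person pronouns, contractions, and family topics. The word lists and scoring are illustrative assumptions.

```python
import re

# Toy illustration (NOT the study's method): a naive scorer that counts
# the cues people intuitively read as signs of human-written text.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our", "us"}
FAMILY_WORDS = {"family", "mother", "father", "son", "daughter", "kids"}
CONTRACTION = re.compile(r"\b\w+'(s|t|re|ve|ll|d|m)\b", re.IGNORECASE)

def human_likeness_score(text: str) -> int:
    """Count first-person pronouns, family words, and contractions."""
    tokens = re.findall(r"[\w']+", text.lower())
    score = sum(t in FIRST_PERSON for t in tokens)
    score += sum(t in FAMILY_WORDS for t in tokens)
    score += len(CONTRACTION.findall(text))
    return score

# A language model can deliberately pack these cues into its output,
# so the heuristic rewards exactly the text it ought to flag:
print(human_likeness_score("I'm a father of two; my family means everything to me."))  # → 5
print(human_likeness_score("The report was generated automatically."))  # → 0
```

Because the cues are surface features of the text rather than evidence about its author, anything that can imitate the surface defeats the heuristic.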
As a result, AI can use heuristics to create text that people perceive as “more human than human.” This greatly increases the deceptive potential of AI texts and speech, encouraging us to trust this “most human person” more than the statements of real people.
The chances of people identifying an AI-written self-presentation are 50/50, no better than a coin flip. Things are even worse with romantic communication: almost 70% of adults cannot distinguish a love letter written by ChatGPT from one written by a person.