AI’s decision-making skills are already on par with those of humans, according to the results of a GPT-3 cognitive-ability study carried out at the Max Planck Institute for Biological Cybernetics.
Enthusiasts and skeptics of large language models like GPT-3 continue to argue vehemently over whether the breakthrough achievements of the ChatGPT bot, which is built on GPT-3 technology, prove that the bot is as intelligent as humans. The debate is largely pointless, though: the term “intelligence” is ambiguous. Everyone defines it differently, and the range of definitions is enormous:
- From a definition by Linda Gottfredson, “Intelligence is an integral mental ability that includes the ability to summarize, plan, solve problems, think abstractly, understand complex ideas, and learn quickly from experience.”
- Meanwhile, according to Edward Boring, “intelligence is what an intelligence test measures.”
The situation is complicated by the fact that, whatever one may say, there is no obvious reason why intelligence comparable to a human’s should emerge in large language models at all. After all, the only thing GPT-3 (and ChatGPT) can do is deftly predict the next word, based on vast statistics over word sequences in texts written by people.
However, this skill alone gives ChatGPT surprising creativity: besides answering almost any question it is asked, it can write stories, scientific articles, theses, and even code (enough to pass some exams at a human level).
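The “predict the next word from statistics over human text” mechanism described above can be sketched in miniature. The toy bigram model below is of course not GPT-3 (which uses a neural network over billions of parameters), but the core task is the same: given the current word, output the word that most often followed it in the training text.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word in the corpus."""
    words = corpus.lower().split()
    follow = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follow[cur][nxt] += 1
    return follow

def predict_next(model: dict, word: str):
    """Return the most frequent successor of `word`, or None if unseen."""
    successors = model.get(word.lower())
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# Tiny illustrative corpus (any text would do):
model = train_bigrams("the cat sat on the mat the cat ate the fish")
print(predict_next(model, "the"))  # prints "cat" -- "cat" followed "the" most often
```

GPT-3 does essentially this at enormous scale, conditioning on long contexts rather than a single preceding word, which is where the qualitative leap in capability comes from.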
But, is this reason enough to discuss the emergence of AI (based on GPT-3 and ChatGPT) with intelligence comparable to human intelligence?
To answer this question, let’s recall Gregory Treverton’s definition that “intelligence is ultimately storytelling” (a definition that also explains why English uses the same word, “intelligence,” for both intellect and intelligence-gathering).
The logic here is this:
- Intelligence, in whatever form it is defined, is intended for complex decision-making in non-trivial tasks.
- In making such decisions, narrative thinking plays a huge role alongside formal reasoning. In jurisprudence, for instance, it is narrative rather than formal reasoning that decisively shapes how juries evaluate evidence and reach verdicts.
- So why shouldn’t ChatGPT’s excellently developed narrative and formal thinking be the basis for the emergence of human-like intelligence?
The Max Planck Institute for Biological Cybernetics in Germany decided to test this, conducting a study comparing the cognitive abilities of humans and GPT-3. The researchers administered canonical psychological tests, normally given to people, to assess skills in decision-making, information search, and causal reasoning.
The results of the study, published in the journal Proceedings of the National Academy of Sciences, are striking:
- AI solves the problem of making the right decision based on descriptions as well as or better than humans.
- Even more striking, the AI not only performs at a human level but also makes the same mistakes that people typically make.
Moreover, decision-making ability was tested with tasks described by vignettes: short descriptions of people and/or situations, in responding to which people reveal their ideas, values, social norms, or personal impressions. Seemingly, none of these things can exist in an AI; yet that does not prevent it from making the same decisions humans do.
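To make the vignette format concrete, here is a sketch of one canonical item of the kind such studies draw on: the classic “Linda problem,” which probes the conjunction fallacy, a mistake humans reliably make. The wording below is my own illustrative paraphrase, not necessarily the exact prompt used in the Max Planck study.

```python
# Illustrative vignette item (paraphrased; not the study's exact prompt).
# A conjunction of two events can never be more probable than either
# event alone, so option "Q" is the normatively correct answer --
# yet humans famously tend to pick "F".
VIGNETTE = (
    "Linda is 31 years old, single, outspoken, and very bright. "
    "As a student she was deeply concerned with issues of discrimination "
    "and social justice.\n"
    "Which is more probable?\n"
    "Q: Linda is a bank teller.\n"
    "F: Linda is a bank teller and is active in the feminist movement."
)

def score_answer(answer: str) -> bool:
    """Return True if the answer matches the normatively correct option."""
    return answer.strip().upper() == "Q"

print(score_answer("F"))  # prints False: the typical human (and GPT-3) error
```

The interesting finding is not just whether the model answers correctly, but whether its error pattern mirrors the human one, which is what the study reports.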
In the other two cognitive abilities, AI falls short of humans.
- When searching for information, GPT-3 shows no signs of directed exploration.
- In cause-and-effect problems, GPT-3 performs at the level of a small child, though perhaps only for now.
The authors believe that to catch up with people on these two abilities, AI lacks only active interaction with us and with the rest of the world, a gap that may close quickly: millions of people already converse with ChatGPT.