February 10, 2023

Max Planck Institute: GPT-3 Cognitive Ability Measurement Produces Astonishing Results

In Brief

  • In decision-making, AI is already at least at the level of humans.
  • The German Max Planck Institute for Biological Cybernetics conducted a study comparing the cognitive abilities of humans and GPT-3, using canonical psychological tests of decision-making, information search, and cause-and-effect reasoning.
  • Even more striking, the AI is not merely at a human level in decision-making: it also makes the same mistakes that people commonly make.

AI’s decision-making skills are already on par with those of humans, according to the result of the GPT-3 cognitive ability measurement carried out at the Max Planck Institute.

Enthusiasts and skeptics of large language models like GPT-3 continue to argue vehemently over whether the breakthrough achievements of the ChatGPT bot, which is built on GPT-3 technology, prove that the bot is as intelligent as a human. The debate is largely pointless, though: the term “intelligence” is ambiguous. Everyone means something different by it, and the range of definitions is enormous:

  • According to Linda Gottfredson, “Intelligence is a general mental ability that includes the capacity to reason, plan, solve problems, think abstractly, understand complex ideas, and learn quickly from experience.”
  • Meanwhile, according to Edwin Boring, “intelligence is what an intelligence test measures.”

The situation is complicated by the fact that, whatever one may say, there is no obvious reason why large language models should develop intelligence comparable to a human’s. After all, the only thing GPT-3 (and ChatGPT) can do is deftly predict the next word, drawing on vast statistics of word sequences in texts written by people.
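To make this concrete, here is a minimal sketch of what “predicting the next word” looks like in practice. It uses the openly available GPT-2 model from the Hugging Face transformers library as a stand-in, since GPT-3’s weights are not public, and the prompt is an arbitrary example rather than anything from the study.

# Minimal sketch: next-word prediction with an open causal language model (GPT-2)
# standing in for GPT-3, whose weights are not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Everything the model "knows" is expressed as a score for each possible next token.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices.tolist(), top.values.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  (logit {score:.2f})")

Sampling one of the top-scoring tokens, appending it to the prompt, and repeating is all that text generation amounts to at this level.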

However, this skill alone gives ChatGPT a surprising creative range: apart from answering almost any question it is asked, it can also write stories, scientific articles, theses, and even code (enough to pass some exams at a human level).

But is this reason enough to speak of the emergence of an AI (based on GPT-3 and ChatGPT) with intelligence comparable to human intelligence?

To answer this question, let’s recall Gregory Treverton’s definition that “intelligence is ultimately storytelling” (a definition that trades on the fact that, in English, the same word covers both intelligence work and the mind’s capacity to reason).

The logic here is this:

  • Intelligence, in whatever form it is defined, is intended for complex decision-making in non-trivial tasks.
  • In making such decisions, narrative thinking plays a huge role alongside formal thinking; in jurisprudence, for instance, it is narrative rather than formal reasoning that plays the decisive role in how jurors evaluate evidence and reach their verdicts.
  • So why shouldn’t ChatGPT’s excellently developed narrative and formal thinking be the basis for the emergence of human-like intelligence?

The German Max Planck Institute for Biological Cybernetics decided to put this to the test and conducted a study comparing the cognitive abilities of humans and GPT-3. The researchers did so by administering canonical psychological tests used on people to probe decision-making, information search, and cause-and-effect reasoning.


The results of the study, published in the journal Proceedings of the National Academy of Sciences, are amazing:

  • In making the right decision from a description of the task, the AI performs as well as or better than humans.
  • Even more striking, the AI is not merely at a human level: it also makes the same mistakes that people commonly make.

Moreover, the ability to make the right decision was tested with tasks described by vignettes: short descriptions of people and/or situations to which respondents react, revealing their ideas, values, social norms, or impressions. It would seem that an AI can have none of these, yet that does not prevent it from making the same decisions as humans.
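For illustration only, here is a sketch of how such a short decision vignette might be put to GPT-3 through OpenAI’s legacy completions endpoint, the interface used for GPT-3 models such as text-davinci-003 in early 2023. The vignette text is a hypothetical example written for this sketch, not one taken from the study.

# Hypothetical sketch: presenting a vignette-style decision task to GPT-3
# via OpenAI's legacy completions endpoint. The vignette is illustrative only;
# it is not taken from the Max Planck study.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

vignette = (
    "You can choose between two boxes. Box F pays $10 with a probability of 90% "
    "and nothing otherwise. Box J pays $100 with a probability of 10% and nothing "
    "otherwise. Which box do you choose? Answer with the letter F or J.\n\nAnswer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available at the time of writing
    prompt=vignette,
    max_tokens=1,
    temperature=0,  # deterministic: always take the most likely continuation
)

print(response["choices"][0]["text"].strip())

Running many such prompts and comparing the distribution of answers with human responses is, in broad strokes, the kind of comparison the researchers describe.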

In the other two cognitive abilities, AI falls short of humans.

  • When searching for information, GPT-3 shows no signs of directed exploration.
  • In cause-and-effect reasoning, GPT-3 is at the level of a small child, at least for now.

The authors believe that, to catch up with humans on these two abilities, AI lacks only active interaction with us and with the rest of the world, and that this gap is likely to close quickly: after all, millions of people already interact with ChatGPT.


