April 07, 2023

Dan Hendrycks: Choosing Between AI and Humans, Evolution Will Not Choose Us

In Brief

Dan Hendrycks’s paper “Natural Selection Favors AIs over Humans” reaches a frightening conclusion.

He is an experienced and well-known researcher who has published dozens of scientific papers on assessing the safety of AI systems.

Dan Hendrycks’s research sounds like a death sentence for Homo sapiens. The conclusion of the study “Natural Selection Favors AIs over Humans” is genuinely frightening. It was written not by a popular visionary like Dan Brown but by Dan Hendrycks, director of the Center for AI Safety (CAIS), a California-based non-profit specializing in research and field-building in AI safety.

Dan Hendrycks is not a fringe lunatic panicking over AI advances. He is an experienced and widely respected researcher who has published dozens of scientific papers on assessing the safety of AI systems, testing how good they are at coding, reasoning, understanding laws, and so on. Among other things, he is also the co-inventor of the Gaussian Error Linear Unit (GELU) activation function.

Image: Dan Hendrycks (berkeley.edu)
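
For readers unfamiliar with GELU: it is the activation function used in models such as BERT and GPT, and it weights each input by the standard normal CDF instead of hard-gating it the way ReLU does. Below is a minimal, purely illustrative sketch of the exact form and the common tanh approximation from the original GELU paper (this is not Hendrycks’s own code):

```python
import math

def gelu(x: float) -> float:
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    # Tanh-based approximation given in the original GELU paper.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

print(gelu(1.0), gelu_tanh(1.0))  # both print roughly 0.841
```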

Jack Clark, co-founder of ChatGPT competitor Anthropic, co-chair of Stanford University’s AI Index, co-chair of the OECD’s AI and Compute section, and member of the US Government’s National AI Advisory Committee, discusses the conclusion of Hendrycks’s study: “People reflexively want to brush aside such a statement as coming from some wild-eyed lunatic who lives in a forest hut. I would like to refute this in advance. When an expert who has experience not only in AI research but also in assessing the safety of AI systems writes a paper arguing that future AI systems may act selfishly and not in accordance with the interests of people, we should pay attention.”

A summary of Hendrycks’s paper:

  • If AI agents become more intelligent than humans, this could lead to humanity losing control of its future.
  • This could happen not through any deliberate malice on the part of people or machines, but simply because Darwinian evolutionary pressures apply to the development of AI as well.
  • To minimize the risk of this, the intrinsic motivations of AI agents need to be carefully designed, restrictions placed on their actions, and institutions created to encourage AI collaboration.

Here are the most significant points Hendrycks makes in his 43-page paper:

1. We were afraid of the arrival of the Terminator, but those fears rested on two mistaken assumptions:

a. Anthropomorphizing AI and attributing our own motivations to it. As ChatGPT has shown, AI is a fundamentally different kind of mind, with all the consequences that follow.

b. Treating AI as a single entity, more or less smart, more or less kind. In reality, the world will soon contain a vast number of very different AI entities.

2. There is another fundamental flaw in our ideas about a future with AI: we forgot about the most important mechanism of development, evolution, which drives the development not only of biological agents but also of ideas and meanings, material tools, and intangible institutions.

3. An environment has already begun to take shape on Earth in which many AIs will develop and evolve. This evolution will follow Darwinian logic, with AIs competing among themselves while serving the interests of their “parent” institutions: corporations, the military, and so on.

4. The logic of competitive evolution will lead to the same outcome as in humans: increasingly intelligent AI agents will become more and more selfish, ready to achieve their goals by deceit and force, with power as the main goal.

5. The natural selection of AI agents tends to favor more selfish agents over more altruistic ones. AI agents will behave selfishly and pursue their own interests with little regard for humans, which could lead to catastrophic risks for humanity (a toy illustration of this selection dynamic follows below).
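
The selection pressure described in points 3–5 can be made concrete with a toy model. The sketch below is purely illustrative and not taken from Hendrycks’s paper: it simply assumes that “selfish” agents extract more of a shared resource and therefore reproduce at a higher rate, and it shows that fitness-proportional selection alone is enough for them to take over the population.

```python
import random

# Toy illustration (not from Hendrycks's paper): selfish agents are assumed
# to extract more of a shared resource, so they get higher reproductive fitness.

def next_generation(population: list[str]) -> list[str]:
    # Assumed fitness values: selfish agents extract 1.5 units, altruistic agents 1.0.
    fitness = [1.5 if agent == "selfish" else 1.0 for agent in population]
    # The next generation is sampled in proportion to fitness (Darwinian selection).
    return random.choices(population, weights=fitness, k=len(population))

population = ["altruistic"] * 95 + ["selfish"] * 5
for _ in range(60):
    population = next_generation(population)

print(population.count("selfish") / len(population))  # typically close to 1.0
```

Even with a tiny initial share of selfish agents and no one acting maliciously, the selfish strategy dominates after a few dozen generations, which is the core of the Darwinian argument the paper makes.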

  • The petition to stop developing AI systems more advanced than GPT-4 has polarized society. One group believes progress cannot be stopped, another believes it can, and sometimes should, be stopped, and a third group does not understand how GPT-4 operates in the first place. The key points here are that AI has no consciousness, will, or agency of its own, and that it can cause harm even without malicious people behind it.
  • Geoffrey Hinton is often called the “Godfather of AI” and is considered a leading figure in the deep learning community. His 40-minute interview about ChatGPT stands out because it is easy to follow while offering a depth of understanding accessible to few others. He highlights the importance of the ongoing “intellectual revolution” and the inhuman nature of ChatGPT’s intelligence, that of an artificial agent with advanced digital communication capabilities.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He is an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet.
