AI Experts and Public Figures Raise Alarms on AI Extinction Risk
In Brief
Prominent AI experts and influential public figures have voiced shared concerns over the risks posed by AI.
Despite significant progress in AI, many fundamental challenges in AI safety remain unresolved.
Prominent AI experts and public figures have issued a statement warning that AI could pose a threat to humanity’s survival comparable to pandemics and nuclear war. The 350 signatories argue that AI could have unforeseen and catastrophic consequences, such as unleashing autonomous weapons, disrupting social and economic systems, or creating superintelligent agents that could outsmart and overpower humans.
They call on policymakers to take these risks seriously and to adopt measures that ensure safe and beneficial AI development. The statement is signed by leading figures in AI research and development, including OpenAI CEO Sam Altman, Skype co-founder Jaan Tallinn, DeepMind CEO Demis Hassabis, and computer scientist Geoffrey Hinton, as well as influential public personalities such as musician Grimes and podcaster Sam Harris.
Among the signatories were Geoffrey Hinton and Yoshua Bengio, who shared the 2018 Turing Award for their contributions to deep learning. They were joined by professors from prestigious universities, including Harvard and China’s Tsinghua University, as well as MIT’s Max Tegmark.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,”
reads the statement published by the Center for AI Safety (CAIS).
CAIS, a nonprofit organization based in San Francisco, aims to address the safety concerns surrounding artificial intelligence. While recognizing the potential benefits of AI, CAIS emphasizes the need to develop and deploy it safely. Despite significant advancements in AI, numerous fundamental challenges in AI safety remain unresolved, the organization believes. According to its website, CAIS’ mission is to mitigate these risks by conducting safety research, fostering a community of AI safety researchers, and advocating for safety standards.
“There are many ‘important and urgent risks from AI,’ not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization. These are all important risks that need to be addressed. Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would be reckless to ignore them as well,”
Dan Hendrycks, the director of CAIS, wrote on Twitter.
In March, more than 1,100 tech experts signed an open letter demanding a six-month pause on the training of AI systems more powerful than GPT-4. European lawmakers also recently approved stricter draft legislation, known as the AI Act, to regulate AI tools such as ChatGPT. The act includes requirements for safety checks, data governance, and risk mitigation for foundation models, and it prohibits practices such as manipulative techniques and certain uses of biometrics.
Read more:
- OpenAI Unveils Its Latest Approach to Ensuring AI Safety
- Is Google going to announce a text-to-avatar generator for gamers?
- Amazon’s CodeWhisperer Gives Developers the Edge They Need in the AI Arena
- Vitalik Buterin and MIRI Director Nate Soares Delve into the Dangers of AI
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Agne is a journalist who covers the latest trends and developments in the metaverse, AI, and Web3 industries for the Metaverse Post. Her passion for storytelling has led her to conduct numerous interviews with experts in these fields, always seeking to uncover exciting and engaging stories. Agne holds a Bachelor’s degree in literature and has an extensive background in writing about a wide range of topics, including travel, art, and culture. She has also volunteered as an editor for an animal rights organization, where she helped raise awareness about animal welfare issues. Contact her at [email protected].