May 23, 2023

OpenAI Raises Alarm on Superintelligence and AI’s Potential to Surpass Human Capabilities in the Next Decade

In Brief

OpenAI issues a call for regulation of superintelligence, highlighting the need for governance in light of AI’s rapid advancements.

AI systems are projected to surpass human expertise and corporate productivity within a decade, according to OpenAI.

OpenAI emphasizes the importance of public oversight and democratic control for powerful AI systems.

OpenAI, the creator of ChatGPT, has made a thought-provoking call for the regulation of superintelligence, drawing parallels to the regulation of nuclear energy. In a recent blog post, the company highlighted the potential implications of AI’s rapid advancements and emphasized the pressing need for governance in this evolving landscape. It stated that AI systems could surpass experts in skill and the largest corporations in productivity within the next ten years.

“We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination,” Sam Altman, Greg Brockman, and Ilya Sutskever from OpenAI emphasized. 

Superintelligence describes an entity that exceeds overall human intelligence, or surpasses humans in specific aspects of it. According to the authors, AI superintelligence will wield an unparalleled level of power, with both positive and negative implications.

The Development and Risks of the Inevitable Superintelligence

OpenAI has identified three ideas it considers pivotal to navigating the development of superintelligence successfully: coordination among leading development efforts, the establishment of an international authority akin to the International Atomic Energy Agency (IAEA), and the technical capability to make superintelligence safe.

While OpenAI acknowledges that today’s AI systems carry risks, it considers them comparable to those of other internet technologies. Altman, Brockman, and Sutskever also express confidence that society’s current approaches to managing these risks are suitable. Their main concern, however, is future systems with unprecedented power.

“By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar,” the blog post read.

The authors argue that powerful AI systems require public oversight and democratic control. They also explain why OpenAI continues to build this technology: to create a better world, and because they believe halting development would carry risks of its own. AI already contributes in areas such as education, creativity, and productivity, as well as to broader economic growth.

OpenAI considers it difficult and risky to stop the creation of superintelligence: the technology offers considerable benefits, it gets cheaper every year, more people are working on it, and it is part of the company’s technological path.

Ilman Shazhaev, an AI entrepreneur and co-founder of Farcana Labs, shared a few comments on the news. Projections indicate that, if not properly managed, superintelligence could become one of humanity’s most destructive inventions. However, conversations about deploying the technology remain divisive, since it has not yet been developed. Halting development out of fear of such predictions could deprive humanity of the opportunities the new technology may hold.

“OpenAI’s decentralized governance approach can help maintain its broad safety. With the right regulations, the program could be shut down in the event it poses a threat. Should these safeguards be in place, then Superintelligence may be an innovation worth exploring,” said Shazhaev. 

By openly discussing its views on AI superintelligence and proposing regulatory measures, OpenAI appears to be fostering informed discussion and inviting diverse perspectives.

Sam Altman strongly believes in widespread AI availability to the public. Acknowledging that it’s impossible to anticipate all problems in advance, he advocates for addressing issues at the earliest possible stage. However, Altman also emphasizes the importance of independent audits for systems like ChatGPT before release. He further acknowledges the possibility of implementing measures such as limiting the pace of new model creation or establishing a committee to assess the safety of AI models before market release. Notably, Altman predicts that the quantity of intelligence in the universe will double every 18 months.

About The Author

Agne is a journalist who covers the latest trends and developments in the metaverse, AI, and Web3 industries for the Metaverse Post. Her passion for storytelling has led her to conduct numerous interviews with experts in these fields, always seeking to uncover exciting and engaging stories. Agne holds a Bachelor’s degree in literature and has an extensive background in writing about a wide range of topics including travel, art, and culture. She has also volunteered as an editor for an animal rights organization, where she helped raise awareness about animal welfare issues. Contact her on [email protected].
