OpenAI: AI Could Potentially Do a Lot of Harm to People, But Trying to Stop Progress is Not an Option
In Brief
OpenAI’s document “Planning for AGI and Beyond” suggests a gradual introduction of more capable and “smart” models, independent audits of large systems, and limits on computing resources.
OpenAI wants access to AI to be equally open to everyone and aims to create more consistent and manageable models to reduce bias and undesirable behavior.
OpenAI recently published a document titled “Planning for AGI and Beyond.” It outlines the broad ideas guiding the company’s actions as the world moves closer to the development of AGI (artificial general intelligence).
OpenAI believes that while AI has the potential to cause significant harm to people, attempting to halt progress is not an option. As a result, we must learn how to manage this progress so that it does not go badly wrong.
The document can be found here. Let’s highlight a few crucial elements from it:
- OpenAI wants AGI to help people thrive in all aspects of their lives, both economically and scientifically.
- Instead of creating a super-cool large AI and immediately releasing it into the world, OpenAI will introduce more complicated and “smart” models gradually. The assumption is that gradually increasing AI capabilities will not shock the world, giving humans time to adjust, adapt the economy, and build procedures for interacting with AI.
- OpenAI invites all organizations to formally adopt the aforementioned principle (gradually rolling out powerful AI). Altman also recommends limiting the amount of computing resources that may be used to train models and setting up impartial audits of major systems before they are made available to the general public. Instead of engaging in competitions like “who will be the first to roll out a cooler and bigger model?”, OpenAI asks enterprises to work together to improve the safety of AI.
- OpenAI believes it’s critical that governments are informed about training runs that exceed a particular scale. This is an intriguing concept, and we’d like to know more about OpenAI’s plans in this regard. At first glance, though, this proposal doesn’t seem all that compelling to me.
- OpenAI will try to develop models that are more dependable and controllable, most likely to limit bias and undesirable behavior in AI. OpenAI’s approach here is to release the model to the general public in a very constrained form while giving users the option to personalize it. Since users won’t actually be able to “personalize” much, I’m not sure how this will help make the model as a whole less biased, unless the goal is to absolve the firm of liability for the model’s actions.
Overall, the document points in the right direction. However, there is a minor conflict between “we will not immediately show cool models to the public, but we will tell governments about them” and “people should know about the progress of AI.” In short, we shall await detailed clarifications from OpenAI on all of these topics.
About The Author
Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He is an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet.