In Brief
OpenAI’s document “Planning for AGI and Beyond” proposes a gradual rollout of more capable and “smart” models, independent audits of large systems, and limits on computing resources.
OpenAI wants access to AI to be shared broadly and equitably, and aims to build more reliable and controllable models to reduce bias and undesirable behavior.
OpenAI recently published a document titled “Planning for AGI and Beyond.” It outlines the broad ideas guiding the company’s actions as the world moves closer to the development of AGI (artificial general intelligence).

OpenAI believes that while AI has the potential to cause significant harm to people, attempting to halt progress is not an option. As a result, we must learn to guide this progress so that things do not go badly wrong.
The document can be found here. Let’s highlight a few crucial elements from it:
- OpenAI wants AGI to help people thrive in all aspects of their lives, both economically and scientifically.
- Instead of creating a super-powerful large AI model and immediately releasing it into the world, OpenAI will introduce more complex and “smart” models gradually. The assumption is that gradually increasing AI capabilities will not shock the world, giving people time to adjust, adapt the economy, and build procedures for interacting with AI.
- OpenAI invites all organizations to formally adopt this principle of gradually rolling out powerful AI. Altman also recommends limiting the amount of computing resources that may be used to train models and setting up independent audits of major systems before they are made available to the general public. Instead of competing over who will be first to roll out a bigger and more impressive model, OpenAI asks companies to work together to improve the safety of AI.
- OpenAI believes it is critical that governments be informed about training runs that exceed a particular scale. This is an intriguing concept, and we would like to know more about OpenAI’s plans in this regard; at first glance, the proposal does not seem especially compelling.
- OpenAI will try to develop models that are more reliable and controllable, most likely to limit bias and undesirable behavior in AI. The approach OpenAI takes here is to release the model to the general public in a tightly constrained form while giving users the option to personalize it. Since users will not be able to “personalize” much, it is unclear how this will make the model as a whole less biased, unless it simply absolves the company of liability for the model’s actions.
Overall, the document points in the right direction. However, there is a minor tension between “we will not immediately show powerful models to the public, but we will tell governments about them” and “people should know about the progress of AI.” In short, we await detailed clarifications from OpenAI on all these points.