News Report Technology
October 27, 2023

AI Companies Should Spend 30% of Their Funding on R&D in Safety and Ethics

The damage caused by prejudice and false information is already apparent. There are indications that other hazards may also surface. It’s critical to mitigate current risks and foresee emerging ones.

AI Companies Should Spend 30% of Their Funding on R&D in Safety and Ethics

We wouldn’t know how to ensure advanced autonomous systems or AGI are safe or how to test them if they were available right now. Furthermore, governments lack the institutions necessary to stop abuse and put safe practises in place, even if they did. The authors support developing efficient government oversight and redirecting R&D efforts towards safety and ethics.

Control and honesty (more sophisticated systems can outsmart testing by producing false but convincing answers), robustness (in new conditions with distribution shift or adversarial inputs), interpretability (understanding work), risk assessment (new abilities emerge that are difficult to predict), and the emergence of new challenges (unprecedented failure modes) are some of the R&D challenges that will not be resolved by developing more powerful AI systems.

The authors suggest that safety and ethics should receive at least one-third of the funding for AI R&D.

Standards need to be enforced in relation to both national institutions and global governance. AI lacks these, but the pharmaceutical, financial, and nuclear industries do. There are now incentives for nations and businesses to economise at the expense of security. Companies can profit from AI advancements while leaving society to bear the consequences, much like industries dump waste into rivers.

Strong technical know-how and the ability to move quickly are requirements for national institutions. In the global arena, partnerships and agreements are essential. Bureaucratic obstacles to small and predictable models must be avoided in order to safeguard academic research and low-risk applications. Frontier models—a select group of the most potent systems trained on billion-dollar supercomputers—should receive the most focus.

Governments must be more open about developments in order for regulations to be effective. Regulators ought to mandate model registration, safeguard internal informants, mandate incident reporting, and keep an eye on the development of models and the use of supercomputers.

Regulators must also have access to these systems prior to their release into production in order to evaluate potentially harmful features like pathogen generation, self-replication, and system penetration.

Systems with the potential to be dangerous need a variety of control methods. Frontier model creators must also be held legally accountable for any damage to their systems that could have been avoided. This ought to encourage security investment. More features, such as government licencing, the capacity to halt development in response to potentially dangerous capabilities, access controls, and information security measures impervious to state-level hackers, might be required for extremely capable systems.

Although there are no rules, businesses should quickly clarify their if-then responsibilities by outlining the precise steps they will take if certain model capabilities cross a red line. These measures need to be thoroughly explained and independently confirmed. Thus, it is. A separate Policy supplement compiles a summary of theses.

  • In October, the Frontier Model Forum has introduced an AI Safety Fund of over $10 million, aiming to drive advancements in AI safety research. The fund, a collaboration between the Frontier Model Forum and philanthropic partners, will provide support to independent researchers worldwide affiliated with academic institutions, research organizations, and startups. The primary contributors to the initiative are Anthropic, Google, Microsoft, OpenAI, along with philanthropic organizations such as the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. The AI Safety Fund primarily focuses on bolstering the development of new evaluation techniques and red teaming approaches for AI models, aiming to uncover potential hazards. The Forum plans to establish an Advisory Board in the coming months, and will issue its first call for proposals and grant awards shortly thereafter.
Related: UK Competition and Markets Authority Launches Review of AI Models as Government Regulation Efforts Escalate

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He appears to be an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet. 

More articles
Damir Yalalov
Damir Yalalov

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He appears to be an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet. 

Hot Stories
Join Our Newsletter.
Latest News

The DOGE Frenzy: Analysing Dogecoin’s (DOGE) Recent Surge in Value

The cryptocurrency industry is rapidly expanding, and meme coins are preparing for a significant upswing. Dogecoin (DOGE), ...

Know More

The Evolution of AI-Generated Content in the Metaverse

The emergence of generative AI content is one of the most fascinating developments inside the virtual environment ...

Know More
Join Our Innovative Tech Community
Read More
Read more
This Week’s Top Deals, Major Investments in AI, IT, Web3, and Crypto (22-26.04)
Digest Business Markets Technology
This Week’s Top Deals, Major Investments in AI, IT, Web3, and Crypto (22-26.04)
April 26, 2024
Vitalik Buterin Comments On Centralization Of PoW, Notes It Was Temporary Stage Until PoS
News Report Technology
Vitalik Buterin Comments On Centralization Of PoW, Notes It Was Temporary Stage Until PoS
April 26, 2024
Offchain Labs Reveals Discovery Of Two Critical Vulnerabilities In Optimism’s OP Stack’s Fraud Proofs
News Report Software Technology
Offchain Labs Reveals Discovery Of Two Critical Vulnerabilities In Optimism’s OP Stack’s Fraud Proofs
April 26, 2024
Dymension’s Open Market For Bridging Liquidity From RollApps eIBC Launches On Mainnet 
News Report Technology
Dymension’s Open Market For Bridging Liquidity From RollApps eIBC Launches On Mainnet 
April 26, 2024