US, UK and 16 other Countries Sign International Agreement for “Responsible AI” Development
US, UK and 16 other countries signed the first comprehensive international agreement to work towards responsible AI development.
In a move to ensure the safety of artificial intelligence (AI), the United States, the UK, Singapore and more than a dozen other nations have unveiled the first comprehensive international agreement to work towards responsible AI development.
The initiative, outlined in a 20-page document released on Sunday, aims to establish a framework for companies designing and utilizing AI to “prioritize security measures” from the outset.
Though non-binding, the agreement represents a collaborative effort for AI by 18 countries, including the United States, Singapore, Britain, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, and Nigeria.
Moreover, it underscores the shared understanding that AI systems should be developed and deployed with a primary focus on safeguarding both customers and the broader public from potential misuse.
Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency (CISA), emphasized the significance of the agreement, noting that it marks the first time multiple nations have affirmed the necessity of prioritizing security during the design phase of AI systems.
She highlighted that the guidelines move beyond the allure of features and market competition, emphasizing the importance of security considerations.
The framework addresses key issues related to preventing AI technology from falling into the wrong hands, including recommendations for robust security testing before the release of AI models. Among the stipulations are measures to monitor AI systems for potential abuse, protect data from tampering, and vet software suppliers.
However, the agreement does not delve into more contentious aspects, such as defining appropriate uses of AI or addressing concerns about data-gathering methods.
This international agreement adds to a series of global initiatives attempting to shape the trajectory of AI development. The guidelines signify a collective acknowledgement of the need to prioritize safety considerations in the realm of artificial intelligence.
The Growing Push to Curb “AI Risks”
Last month, the Group of Seven (G7) industrial countries moved to agree on a code of conduct for companies developing advanced artificial intelligence systems, according to a G7 document.
According to the document, the 11-point code aims to promote safe, secure, and trustworthy AI globally, indicating the intent of governments working to mitigate the risks and potential misuse of the technology. It further emphasizes that the code is crucial in leveraging the benefits while addressing associated risks and challenges.
It urges companies to actively undertake measures to identify, evaluate, and mitigate risks throughout the AI lifecycle.
Similarly, the European Union reached an early agreement on the Artificial Intelligence Act, which could become the world’s first comprehensive set of laws regulating the use of AI technology.
Under the act, companies utilizing generative AI tools such as ChatGPT and Midjourney would be required to disclose any copyrighted material used in developing their systems. The legislation is presently at a stage where EU lawmakers and member states are collaborating to finalize the bill’s details.
Under the proposed regulations, AI tools are “categorized based on the risk level” they pose, ranging from low and limited to high and unacceptable.
Looking ahead, global momentum for regulating AI continues to build. The newly signed international accord, the G7’s forthcoming code of conduct for advanced AI systems, and the European Union’s work on AI legislation together signify a concerted effort to address the evolving landscape of artificial intelligence, emphasizing the need for safety measures and responsible development.