EU’s AI Tech Security Evaluations Could Hamper Global Trade and Innovation
The European Union is conducting risk assessments on crucial technologies, including semiconductors, AI, quantum tech, and biotechnology, to safeguard economic security.
These evaluations may lead to export restrictions or investment controls, focusing on potential military and human rights concerns.
The EU is set to conduct assessments on semiconductors, AI, quantum technologies and biotechnology to determine if they pose economic security risks. The assessments may result in export restrictions or investment controls in third countries, particularly focusing on concerns related to military use and human rights abuses.
The European Commission has identified four technologies for immediate review, with six more to be evaluated later, as part of its European Economic Security Strategy unveiled in June. It aims to assess the risks associated with these technologies in collaboration with its 27 member states, supported by consulting firms, before implementing any measures.
Potential responses include fostering investment or seeking partnerships to reduce dependencies. Although the EU’s assessment will not target specific third countries, it emphasizes the importance of partnering with like-minded nations and reducing reliance on certain countries, in an apparent allusion to China.
The four technologies chosen for immediate evaluation encompass advanced semiconductor technologies (with a focus on microelectronics and chip-making equipment), artificial intelligence (including data analytics and object recognition), quantum technologies (covering cryptography, communications, and sensing) and biotechnology (involving genetic modifications and genomic techniques).
The EU plans to propose risk assessments for other technologies in early 2024, aligning with similar security assessments carried out by the U.S., Japan, Britain, and Australia.
Proposed European AI Regulations Draw Mixed Reviews
A few months ago, the European Union proposed a comprehensive framework for regulating AI, which includes rules against biometric surveillance and requirements for generative AI systems to disclose when they produce AI-generated content.
The regulation also mandates companies to disclose copyrighted material used for training AI systems and assess the impact of “high-risk applications” on human rights and the environment.
However, over 150 business leaders, including Meta, Renault, and Siemens, expressed concerns about the proposed rules, arguing that they could harm Europe’s competitiveness and technological sovereignty. OpenAI’s CEO, Sam Altman, also criticized the regulations, suggesting the need to redefine general-purpose AI systems in the EU AI Act.