MIT Research Group Publishes White Papers Addressing AI Governance
In Brief
MIT has published policy papers outlining a governance framework for AI and offering guidance to US policymakers on the safe development of the technology.
A committee of MIT leaders and scholars has published a set of policy briefs outlining governance frameworks for artificial intelligence (AI), aimed at giving US policymakers guidance on developing the technology safely and for the benefit of society.
The proposed approach involves expanding existing regulatory and liability measures to establish a practical means of overseeing AI.
The main policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests regulating AI tools through the existing US government entities responsible for the relevant domains. Its recommendations underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit their applications.
For example, the paper points to the strict licensing laws governing medical practice in the US.
If an AI system is used to prescribe medication or make diagnoses under the guise of being a doctor, it should be clear that this violates the law just as human malpractice does. Similarly, autonomous vehicles that rely on AI systems are subject to regulation in the same manner as other vehicles.
Another important step in establishing regulatory and liability frameworks, according to the paper, involves AI providers proactively defining the purpose and intent of AI applications.
“In a lot of cases, the models are from providers, and you develop an application on top, but they are part of the stack. What is the responsibility there? If systems are not on top of the stack, it doesn’t mean they should not be considered,” said Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT’s Department of Electrical Engineering and Computer Science (EECS).
Having AI providers clearly define the purpose and intent of their tools, and requiring guardrails to prevent misuse, could help determine the extent to which companies or end users are accountable for specific problems.
The project includes multiple additional policy papers covering various specific topics. Some of these papers explore the potential for AI to augment and assist workers rather than being deployed to replace them—an outcome that could contribute to more equitable long-term economic growth across society.
Suggestions for a Regulatory Approach
The policy brief suggests exploring improvements to the audit processes for emerging AI tools, whether initiated by the government, driven by users, or arising from legal liability proceedings.
The paper also proposes examining the possibility of a new government-approved “self-regulatory organization” (SRO) that could accumulate domain-specific knowledge, enabling it to remain adaptable and responsive to the rapidly evolving AI industry.
“We think that if the government considers new agencies, it should really look at this SRO structure. They are not handing over the keys to the store, as it’s still something that’s government-chartered and overseen,” said Dan Huttenlocher, dean of the MIT Schwarzman College of Computing.
As the policy papers make clear, several additional legal matters will need addressing in the realm of AI. Copyright and other intellectual property issues related to AI are already the subject of litigation, while “human plus” legal issues, in which AI has capacities beyond what humans are capable of, such as mass-surveillance tools, may require special legal consideration.
Global Shift in AI Governance Unfolds
The initiative is taking place against the backdrop of increased interest in AI over the past year, coupled with substantial new industry investments in the field.
Concurrently, the European Union is in the process of finalizing AI regulations based on its own approach, which assigns varying levels of risk to specific types of applications.
Within this framework, general-purpose AI technologies like language models have become a new focal point of discussion. Any governance effort must grapple with the complexities of regulating both general and specific AI tools, addressing a range of potential issues including misinformation, deepfakes, surveillance, and more.
In navigating the evolving landscape of AI governance, MIT’s committee proposes a comprehensive framework, urging policymakers to adapt existing measures and establish a nuanced approach that not only safeguards against misuse but also fosters responsible innovation for the benefit of society.
About The Author
Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.