Microsoft president Brad Smith published a blog post on Thursday stressing the importance of responsible AI.
He highlighted the company’s belief that artificial intelligence (AI) has the potential to change the world for the better, but emphasized that developers and organizations must ensure AI systems are developed and used ethically. The post discussed the role of large language models, such as ChatGPT, in shaping the future of AI and called for careful consideration of their impact on society.
As ethical concerns around AI and the use of ChatGPT come to light, Smith noted that language models are trained on vast amounts of data, making it important to ensure they do not perpetuate harmful biases and inaccuracies.
According to a study by Koch, Denton, Hanna, and Foster, more than 50% of the datasets used to train AI come from just 12 institutions, most of them located in the US. Datasets from Africa, Asia, and South America are largely left out, perpetuating Western ethnocentric cultural biases.
Ironically, much of the work behind ChatGPT is done by people from these underrepresented regions, collectively known as the Global South. A large share of the annotation, content moderation, experimentation, and testing of invasive AI apps is carried out in Global South countries, where workers are paid as little as 11 cents per hour.
Microsoft emphasized its commitment to responsible AI and highlighted its recent initiatives, including the creation of the Aether Committee in 2017, which brings together researchers, engineers, and policy experts to guide the development of tools that detect and address bias in AI systems. The company also updated its responsible AI framework last year.
The company also called on other organizations and individuals to prioritize responsible AI and work together to create a more equitable future. Smith outlined three key goals:
- Ensuring that AI is built and used responsibly and ethically. This includes engaging democratic law-making processes in conversations about how AI is protected under the law, as well as developing outcomes-focused, durable AI regulations that are interoperable and adaptive.
- Advancing international competitiveness and national security with AI. This means recognizing the role of the US and other nations committed to democratic values in maintaining technological leadership. “With the combination of OpenAI and Microsoft, and DeepMind within Google, the United States is well placed to maintain technological leadership,” Smith wrote in the blog post.
- Ensuring that AI serves society broadly: empowering workers and students, promoting fair and inclusive economic growth, addressing the climate crisis, and advancing the development of clean energy technology.
By outlining its key goals for responsible AI, Microsoft joins institutions such as The Collaborative AI Responsibility Lab (The CAIR Lab), part of the Center for Governance and Markets at the University of Pittsburgh, which works to increase the adoption of responsible AI by combining research with activism.