News Report Technology
July 10, 2023

Sam Altman: OpenAI’s Approach to Addressing AI “Hallucinations” Aims for Better Explainable AI

In Brief

OpenAI is working on a novel AI model training method called “process supervision” to address AI hallucinations.

The strategy aims to create reasoning engines that can process factual information, while also relying on historical data.

OpenAI’s CEO, Sam Altman, believes that within one and a half to two years, the team will have largely solved the problem of hallucinations.

Keeping with the theme of AI model hallucinations, we came across a recording of a meeting from Sam Altman’s global tour in New Delhi. Hallucinations significantly limit the applicability of models, so one of the visitors there asked how OpenAI deals with them.


On the one hand, Sam has already stated that he wants models to be reasoning engines rather than knowledge repositories. On the other hand, even then, a model must be able to draw on a foundation (our history) and work with data.

I believe that within one and a half to two years, our team will have largely solved the problem of hallucinations. By that time, we will have stopped referring to it as a problem. The model will have to learn to discern when and what you require (whether you can fake it or it just messes up the answer), since there is a delicate balance between being “creative” and “actually accurate.” This is generally one of the biggest issues for us when it comes to the model’s speed and cost per use. And there is no doubt that we are attempting to make things better.

Sam Altman

OpenAI is making significant progress in addressing the issue of AI “hallucinations” by developing a novel AI model training method. Concerns about misinformation generated by AI systems, particularly in domains requiring complex reasoning, have prompted a focus on hallucination mitigation.

AI hallucinations occur when models fabricate information and present it as fact. OpenAI’s new strategy, known as “process supervision,” addresses this by rewarding the model for each correct step of reasoning rather than only for a correct final answer, encouraging more human-like chains of thought. The research treats detecting and correcting logical errors, or hallucinations, as a first step toward developing aligned AI or artificial general intelligence. As part of this effort, OpenAI has released a dataset of 800,000 human labels that were used to train the model referenced in the research paper.
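To make the distinction concrete, here is a minimal, hypothetical Python sketch of the difference between outcome supervision (one label for the final answer) and process supervision (one label per reasoning step). The data structures and reward functions below are illustrative assumptions for this article, not OpenAI’s actual training code or dataset format.

```python
# Illustrative sketch: outcome supervision vs. process supervision.
# All class and function names here are hypothetical, for explanation only.

from dataclasses import dataclass
from typing import List


@dataclass
class ReasoningStep:
    text: str
    is_correct: bool  # hypothetical human label for this individual step


@dataclass
class Solution:
    steps: List[ReasoningStep]
    final_answer_correct: bool  # hypothetical label for the final answer only


def outcome_reward(solution: Solution) -> float:
    """Outcome supervision: a single reward based only on the final answer."""
    return 1.0 if solution.final_answer_correct else 0.0


def process_rewards(solution: Solution) -> List[float]:
    """Process supervision: one reward per reasoning step, so feedback
    indicates where a chain of thought goes wrong."""
    return [1.0 if step.is_correct else 0.0 for step in solution.steps]


if __name__ == "__main__":
    solution = Solution(
        steps=[
            ReasoningStep("2 + 2 = 4", True),
            ReasoningStep("4 * 3 = 13", False),  # logical error caught mid-chain
            ReasoningStep("therefore the result is 13", False),
        ],
        final_answer_correct=False,
    )
    print("outcome reward:", outcome_reward(solution))    # 0.0 — no hint where it failed
    print("process rewards:", process_rewards(solution))  # [1.0, 0.0, 0.0] — error localized
```

The practical point is that step-level labels localize where a chain of reasoning goes wrong, which is feedback a single answer-level label cannot provide.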

While the development of “process supervision” represents a promising advancement, some experts remain cautious. A senior counsel at the Electronic Privacy Information Center expressed skepticism, noting that the research alone does not alleviate concerns about misinformation and inaccurate outputs when AI models are deployed in real-world scenarios. To further evaluate the proposed strategy, OpenAI is likely to submit the research paper for peer review at an upcoming conference. As of now, OpenAI has not responded to requests for comment on a timeline for integrating the new strategy into ChatGPT and its other products.

OpenAI’s CEO, Sam Altman, emphasized the significance of striking a balance between creativity and accuracy within AI models. Altman envisions models that function as reasoning engines, not just repositories of knowledge. However, he also acknowledged the need for models to rely on a foundational base, such as historical data, and effectively process factual information.

The development of this innovative approach and the ongoing efforts to address AI hallucinations showcase OpenAI’s commitment to advancing the field of AI while ensuring responsible and reliable outcomes. As OpenAI continues to refine its strategies and seek solutions to the challenges posed by AI hallucinations, the prospect of achieving better explainable AI becomes increasingly tangible.

  • OpenAI’s ChatGPT, a chatbot powered by the GPT-3.5 and GPT-4 models, has seen unprecedented growth, surpassing 100 million monthly users in a record-breaking two months. With Microsoft’s investment of over $13 billion, OpenAI’s valuation has reached approximately $29 billion.



About The Author

Damir Yalalov is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, the Metaverse, and Web3-related fields. His articles attract an audience of over a million users every month. He has 10 years of experience in SEO and digital marketing and has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor’s degree in physics, which he believes has given him the critical thinking skills needed to succeed in the ever-changing landscape of the internet.

