ChatGPT was taught by the world’s poorest people


In Brief

Some AI ethicists have criticized OpenAI’s decision to outsource the training of its ChatGPT model to Sama, arguing that the company is exploiting low-cost labor.




ChatGPT was developed with the help of people from some of the poorest regions of the world, according to recently leaked documents.

OpenAI, a for-profit artificial intelligence research company, has partnered with Sama to outsource the training of its ChatGPT natural language processing model to low-cost labor.

Sama is a social enterprise that employs workers from some of the poorest parts of the world, including Kenya, Uganda, and India. The company has come under fire in the past for its working conditions, with many employees complaining of long hours and low pay. OpenAI, however, has defended the partnership, arguing that Sama provides much-needed employment to workers who would otherwise be living in poverty.

OpenAI’s decision to outsource the training of its ChatGPT model to Sama has been criticized by some AI ethicists, who argue that the company is exploiting low-cost labor. It was these workers who labeled the data used to train ChatGPT: for $1.32 per hour, they combed through text from across the Internet, flagging harmful content.

Many Sama employees said that their psychological health suffered as a result of the work. OpenAI did not deny outsourcing the work to Sama employees, but emphasized that this labor has pulled many people out of poverty.

“We must not forget that ChatGPT and other generative models are not magical—they are built on enormous supply chains of human labor and extracted data,” AI ethicist Andrew Strait remarked.

  • ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But GPT-3 was not an easy sell, because of its tendency to blurt out violent, sexist, and racist remarks.
  • That is because the model was trained on hundreds of billions of words scraped from the Internet, a vast repository of human language. Since parts of the Internet are rife with toxicity and bias, there was no easy way to scrub those sections of the training data: even a team of hundreds of people would need decades to review such a dataset by hand, as the rough estimate below illustrates.
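To put that claim in perspective, here is a minimal back-of-envelope sketch. Every figure in it is an illustrative assumption (a corpus size roughly at GPT-3’s reported scale, plus guessed values for reading speed, working hours, and team size), not a number from this article:

```python
# Rough back-of-envelope estimate: how long would it take a large team
# to manually review a GPT-3-scale text corpus? Every figure here is
# an illustrative assumption, not a number reported in the article.

DATASET_WORDS = 300e9          # assumed corpus size, roughly GPT-3 scale
WORDS_PER_MINUTE = 200         # assumed reviewer reading speed
HOURS_PER_DAY = 8              # assumed working day
TEAM_SIZE = 500                # "a team of hundreds of people"
WORKDAYS_PER_YEAR = 250        # assumed working calendar

words_per_reviewer_per_day = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY  # 96,000
team_words_per_day = words_per_reviewer_per_day * TEAM_SIZE         # 48 million

years = DATASET_WORDS / team_words_per_day / WORKDAYS_PER_YEAR
print(f"Estimated review time: ~{years:.0f} years")  # ~25 years
```

Even under these fairly generous assumptions, the estimate comes out at roughly 25 years of nonstop work, consistent with the claim that manual review would take decades.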



Damir Yalalov

Damir is the Editor/SEO/Product Lead at mpost.io. He is most interested in SecureTech, Blockchain, and FinTech startups. Damir earned a bachelor's degree in physics.
