OpenAI updates GPT-3 model to generate higher-quality text
OpenAI, an artificial intelligence research lab, announced an updated version of its text-generating GPT-3 model. The new model, dubbed “Davinci,” is said to produce higher-quality text than the original GPT-3. As a result, GPT-3 has become less toxic, less likely to get confused by its input, and generally better across tasks. Even the 1.3B-parameter version of the new model is said to outperform the 175B-parameter original. It looks like Reinforcement Learning is back in vogue, thanks to language models.
Architecturally, this is still the same GPT-3; the key difference is the additional training:
- First, researchers fine-tuned the model on an initial set of human-written demonstrations.
- Then, human labelers ranked the quality of the model’s outputs, and a reward model was trained to predict those rankings.
- Next, a Reinforcement Learning algorithm (PPO) was used to slightly tune GPT-3 against this reward model.
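The second step above, training a reward model from human rankings, can be sketched in miniature. The code below is a toy illustration, not OpenAI’s implementation: it fits a linear reward model to hypothetical pairwise preferences using the pairwise (Bradley–Terry style) loss commonly used in this kind of training. All feature vectors and data are invented for the example.

```python
import math
import random

def train_reward_model(pairs, dim, lr=0.1, epochs=200, seed=0):
    """Fit a linear reward model r(x) = w . x from pairwise preferences.

    pairs: list of (preferred_features, rejected_features) tuples,
    each a feature vector of length `dim` (hypothetical features here).
    """
    rng = random.Random(seed)
    w = [rng.uniform(-0.01, 0.01) for _ in range(dim)]
    for _ in range(epochs):
        for chosen, rejected in pairs:
            # margin = r(chosen) - r(rejected)
            margin = sum(wi * (c - r) for wi, c, r in zip(w, chosen, rejected))
            # gradient of the pairwise loss -log(sigmoid(margin))
            g = 1.0 / (1.0 + math.exp(-margin)) - 1.0
            for i in range(dim):
                w[i] -= lr * g * (chosen[i] - rejected[i])
    return w

def reward(w, feats):
    """Score an output's feature vector under the learned reward model."""
    return sum(wi * f for wi, f in zip(w, feats))
```

After training, outputs resembling the human-preferred ones score higher, and that score is what the PPO step then optimizes the language model against.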
Here’s what the updated version of GPT-3 from OpenAI can do:
- Follows instructions better (achieved with RL via the InstructGPT method).
- Produces higher-quality writing: the model has been tuned on more text and achieves better perplexity.
- Continues writing longer texts: the context limit is 4K tokens, half that of code-davinci-002.
The price is the same as for code-davinci-002, so there is little reason not to use it.
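Trying the updated model is a small change to an ordinary completion request. The sketch below shows what such a request might look like; the prompt and parameter values are hypothetical, and the model name and `openai` package usage reflect OpenAI’s documentation at the time of writing, so check the current docs before relying on them.

```python
# Hypothetical completion request; "text-davinci-002" is the updated
# model name OpenAI announced.
request = {
    "model": "text-davinci-002",
    "prompt": "Explain reinforcement learning in one sentence.",
    "max_tokens": 64,
    "temperature": 0.7,
}

# Sending it requires the openai package and an API key:
# import openai
# openai.api_key = "sk-..."  # your key here
# response = openai.Completion.create(**request)
# print(response["choices"][0]["text"].strip())
```

Since pricing matches code-davinci-002, switching is just a matter of changing the `model` field.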
OpenAI’s goal is to eventually create a model that can generate text that is indistinguishable from human-written text. The updated GPT-3 model is a step in the right direction, but there is still a long way to go.