In Brief
Meta launches Galactica, its first 120B-parameter model trained on scientific texts
Galactica models can be used for science-related tasks like finding citations
Meta AI and Papers with Code have released Galactica, a 120B-parameter language model trained on scientific texts (articles, textbooks, and more). The release is a significant step for the field, promising faster and more reliable ways to search, summarize, and build on scientific knowledge.

Galactica can produce scientific essays, lecture notes, formulas, abstracts, and even computational notebooks. The model's weights and code are fully open-source.
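Because the weights are public, the model can be tried locally with standard tooling. Below is a minimal sketch using Hugging Face transformers; the checkpoint name facebook/galactica-1.3b (a smaller sibling of the 120B model) is an assumption, not something stated in this article.

```python
# Minimal sketch: load a small Galactica checkpoint and generate text.
# The checkpoint name "facebook/galactica-1.3b" is an assumption; other
# sizes of the open-sourced model can be substituted.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b")

prompt = "The Transformer architecture consists of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```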
Solving information overload in science was the original promise of computing. Traditional computers, however, specialized in storage and retrieval rather than pattern recognition, so while their capacity to ingest data has grown, their ability to make sense of it has not.
The main goal of Galactica is to help researchers who are drowning in publications and finding it harder and harder to separate the important from the irrelevant.

It can be used for many tasks, including summarizing academic material, answering scientific questions, and writing scientific code.
- Galactica is a powerful large language model (LLM) trained on more than 48 million scientific papers, books, articles, chemical compounds, proteins, and other sources. The training corpus contains more than 360 million in-context citations and over 50 million unique references normalized across a wide range of sources, which is what allows Galactica to suggest citations and surface related publications (a minimal sketch of citation prompting follows this list).
- Meta has announced a slew of new AI models and tools in recent months, including AI news editors and text-to-video generators.
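Citation suggestion works through special reference tokens: the Galactica paper wraps citations in [START_REF] ... [END_REF], so ending a prompt with [START_REF] asks the model to complete a reference. A hedged sketch, reusing the same assumed checkpoint as above:

```python
# Sketch of citation suggestion: appending the [START_REF] token (from
# the Galactica paper's citation markup) nudges the model to emit a
# plausible reference for the preceding claim.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b")

prompt = "Attention mechanisms in neural machine translation were introduced in [START_REF]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0]))
# The output is a model suggestion, not a verified source: any citation
# it produces should be checked against the actual literature.
```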