XLM-V: A New Multilingual Masked Language Model That Attempts to Address the Vocabulary Bottleneck


In Brief

The paper raises the following problem: language models grow in parameter count and depth, but their vocabularies stay the same size.

The researchers train a new model with a vocabulary of 1 million tokens.

They wanted to see what kind of improvement such a significant increase in vocabulary size would bring.



The issue raised by the paper “XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models” is that while language models grow in parameter count and depth, their vocabulary sizes remain unchanged. For instance, the mT5 model has 13B parameters but a 250K-token vocabulary that covers more than 100 languages. That leaves each language with roughly 2,500 unique tokens, which is obviously very few.


What do the authors do? They train a new model with a vocabulary of 1 million tokens. The model builds on XLM-R, and with this upgrade it becomes XLM-V. The authors wanted to see what kind of improvement such a significant increase in vocabulary size would bring.


What does XLM-V offer that XLM-R did not?

The vocabulary is built following the method from “Improving Multilingual Models with Language-Clustered Vocabularies”, which constructs a lexical representation vector for each language: every language gets a binary vector with one element per token in the shared vocabulary, where a one indicates that the token appears in that language’s dictionary. The authors improve on this representation by instead filling the vector with the negative log probability of each token’s occurrence in the language.
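
To make this concrete, here is a minimal Python sketch of the vector construction, assuming per-language token counts are already available; the toy data and the `lexical_vector` helper are illustrative and not the paper’s actual code.

```python
# Minimal sketch (illustrative, not the paper's code): per-language lexical vectors.
# Each language is represented by one vector over a shared candidate vocabulary.
import numpy as np

# Toy token counts per language (hypothetical data).
token_counts = {
    "en": {"the": 120, "cat": 8, "##ing": 40},
    "de": {"die": 100, "katze": 6, "##ung": 35},
}
shared_vocab = ["the", "die", "cat", "katze", "##ing", "##ung"]

def lexical_vector(counts, vocab, binary=False):
    """Binary scheme: 1 if the token occurs in the language's corpus.
    Improved scheme: negative log probability of the token in that language."""
    total = sum(counts.values())
    vec = np.zeros(len(vocab))
    for i, tok in enumerate(vocab):
        c = counts.get(tok, 0)
        if binary:
            vec[i] = 1.0 if c > 0 else 0.0
        elif c > 0:
            vec[i] = -np.log(c / total)
    return vec

# One vector per language; these are the vectors that get clustered next.
vectors = {lang: lexical_vector(cnt, shared_vocab) for lang, cnt in token_counts.items()}
```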

  1. These vectors are then clustered, and a separate SentencePiece model is trained for each cluster so that vocabulary is not transferred between lexically unrelated languages (see the sketch after this list).
  2. The ALP (average log probability) metric assesses how well a given vocabulary represents a specific language.
  3. The next step applies the ULM (unigram language model) vocabulary-construction algorithm, which starts with a large initial vocabulary and incrementally trims it down until the number of tokens falls below the target vocabulary size.
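
A rough sketch of the clustering and per-cluster tokenizer training could look like the following; the cluster count, corpus paths, and vocabulary budget are assumptions for illustration, with SentencePiece’s unigram trainer standing in for the ULM step.

```python
# Minimal sketch, reusing the `vectors` dict from the previous snippet.
# Languages are clustered by lexical similarity, then one SentencePiece unigram (ULM)
# model is trained per cluster so unrelated languages do not share a sub-vocabulary.
import numpy as np
from sklearn.cluster import KMeans
import sentencepiece as spm

langs = list(vectors)
X = np.stack([vectors[lang] for lang in langs])

k = min(8, len(langs))  # number of lexical clusters (8 is an illustrative choice)
labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)

for cluster_id in range(k):
    members = [lang for lang, c in zip(langs, labels) if c == cluster_id]
    if not members:
        continue
    # The unigram trainer starts from a large seed vocabulary and iteratively
    # prunes it until it fits the requested vocab_size.
    spm.SentencePieceTrainer.train(
        input=",".join(f"corpus/{lang}.txt" for lang in members),  # hypothetical corpus paths
        model_prefix=f"cluster_{cluster_id}",
        vocab_size=32000,  # per-cluster token budget (illustrative)
        model_type="unigram",
    )
```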


