July 27, 2024
New Method to Reduce Size of Multilingual Language Models

Researchers from Johns Hopkins University have introduced a new approach to shrinking multilingual language models (MLMs) by reducing the number of parameters required for each language. MLMs, which can predict, generate, and extract text in many languages, are well suited to cross-lingual communication and translation, but their per-language performance tends to degrade as more languages compete for the same parameter budget.

Traditionally, adding a new language to an MLM means attaching a separate dense language-specific network, so the parameter count balloons with every addition. The researchers instead propose an approach called Language-Specific Matrix Synthesis, which approximates these dense language-specific matrices with low-rank factors, sharply reducing the number of parameters needed for each additional language. By sharing a smaller pool of parameters across languages, the researchers achieved comparable performance without inflating the model's size.
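To make the low-rank idea concrete, the sketch below shows how a language-specific module built from two thin factors could look in PyTorch. The class and parameter names (LowRankLanguageAdapter, hidden_dim, rank, num_languages) are illustrative assumptions, not the authors' Language-Specific Matrix Synthesis implementation; it only illustrates the general pattern of replacing a dense per-language matrix with a low-rank pair.

```python
import torch
import torch.nn as nn


class LowRankLanguageAdapter(nn.Module):
    """Illustrative low-rank, language-specific module (not the paper's exact method).

    Instead of giving each language its own dense hidden_dim x hidden_dim
    weight (hidden_dim**2 parameters per language), each language gets two
    thin factors A (hidden_dim x rank) and B (rank x hidden_dim) with
    rank << hidden_dim, so the per-language cost drops to 2 * hidden_dim * rank.
    """

    def __init__(self, hidden_dim: int, rank: int, num_languages: int):
        super().__init__()
        # One pair of low-rank factors per language, stacked along dim 0.
        self.down = nn.Parameter(torch.randn(num_languages, hidden_dim, rank) * 0.02)
        self.up = nn.Parameter(torch.zeros(num_languages, rank, hidden_dim))

    def forward(self, x: torch.Tensor, lang_id: int) -> torch.Tensor:
        # Synthesize the language-specific update on the fly and apply it
        # as a residual correction to the shared representation.
        delta = x @ self.down[lang_id] @ self.up[lang_id]
        return x + delta


if __name__ == "__main__":
    hidden_dim, rank, num_languages = 768, 8, 95
    adapter = LowRankLanguageAdapter(hidden_dim, rank, num_languages)
    x = torch.randn(2, 16, hidden_dim)   # (batch, sequence, hidden)
    y = adapter(x, lang_id=3)            # route through language 3's factors
    print(y.shape)                       # torch.Size([2, 16, 768])
```

With these example dimensions, each language costs roughly 2 × 768 × 8 ≈ 12K parameters instead of the ~590K a dense 768 × 768 matrix would need, which is the kind of saving that keeps the total model size manageable even across many languages.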

The team tested the method on a model covering 95 languages and found that it delivered superior performance in multilingual settings while using fewer parameters than the conventional dense approach. The resulting reduction in model size could significantly decrease hardware requirements, making it feasible to deploy a single AI application that handles hundreds of languages.

The researchers aim to apply their method to larger MLMs and to build robust AI systems that handle many languages as effectively as they handle English. By reducing the size of language models without sacrificing performance, this work paves the way for deploying truly multilingual AI models on devices of all sizes.
