Stability AI Releases New Small Language Model for Lower-End Devices
As large language models grow more powerful, the hardware required to run them grows as well. In response, small models that can run on lower-end devices have become a new focus in the industry.
Stability AI has recently released Stable LM 2 1.6B, a small generative language model that supports seven languages and was developed using recent advances in language modeling. Despite its smaller scale, Stability AI emphasizes that Stable LM 2 1.6B performs comparably to larger models. On most benchmarks, it outperforms models with fewer than 2 billion parameters, including Microsoft's Phi-2 and TinyLlama 1.1B, and even surpasses Stability AI's own 3-billion-parameter model, Stable LM 3B.
However, the small size of Stable LM 2 1.6B comes with certain limitations, such as a higher incidence of "hallucinations" and weaker safeguards against harmful language. Even with these caveats, the model can run on lower-end devices, lowering the barrier to entry for generative AI and helping to grow the developer community.
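A rough, back-of-envelope calculation illustrates why a 1.6-billion-parameter model fits on consumer hardware where larger models do not. The sketch below estimates the memory needed just to hold the model weights at common precisions; it is a simplified estimate that ignores activations, the KV cache, and runtime overhead, and the precision options shown are generic examples, not a list of formats Stability AI ships.

```python
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Memory needed to hold the model weights alone, in GB (1 GB = 1e9 bytes).

    This excludes activations, KV cache, and framework overhead, so real
    usage at inference time will be somewhat higher.
    """
    return num_params * bits_per_param / 8 / 1e9

PARAMS = 1.6e9  # Stable LM 2 1.6B parameter count, per the announcement

# Typical precisions used for inference, from full precision down to
# aggressive quantization.
for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{weight_memory_gb(PARAMS, bits):.1f} GB")
```

At 16-bit precision the weights come to roughly 3.2 GB, and 4-bit quantization brings that under 1 GB, which is why a model this size can plausibly run on a laptop or even a phone, whereas a 70-billion-parameter model at the same precisions would need tens of gigabytes.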
This development is indicative of the industry's shift toward making AI technology more accessible and practical across a wider range of devices and applications. As language models continue to evolve, demand is clearly growing for models that deliver strong performance while adapting to varied hardware configurations.
Source: Stability AI