Liquid AI releases LFM2-24B-A2B, a 24B-parameter mixture-of-experts model
Liquid AI has released LFM2-24B-A2B, a 24-billion-parameter mixture-of-experts (MoE) model for text generation and conversational applications. The model is now available on Hugging Face.
Model Specifications
LFM2-24B-A2B is a text-generation model built on the LFM2 architecture. It employs a mixture-of-experts approach, in which a learned router activates only a subset of the model's parameters (the "experts") for each input, reducing per-token compute relative to a dense model of the same total parameter count.
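The routing idea can be sketched generically. The following is a minimal top-k MoE forward pass in NumPy; it illustrates the general technique, not Liquid AI's actual implementation, and all names and shapes are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def moe_forward(x, gate_w, experts, k=2):
    """Route input x through the top-k experts chosen by a learned gate.

    x:       (d,) input vector
    gate_w:  (n_experts, d) gating weights
    experts: list of n_experts weight matrices, each (d, d)
    """
    scores = gate_w @ x                    # one gating score per expert
    topk = np.argsort(scores)[-k:]         # indices of the k highest-scoring experts
    weights = softmax(scores[topk])        # renormalize over the selected experts
    # Only the selected experts run, so per-token compute scales with k,
    # not with the total number of experts.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(n_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)   # output has the same shape as x
```

Because only k of the experts execute, total parameter count (model capacity) and per-token compute are decoupled, which is the property that makes 24B-scale MoE models attractive for cost-sensitive deployment.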
The model supports nine languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, and Portuguese. This multilingual capability positions it for deployment across diverse linguistic regions.
Deployment and Access
The model is compatible with Hugging Face endpoints and is available for US-region deployment. Liquid AI has published the model under a custom license (tagged "license:other" on Hugging Face), meaning usage terms differ from standard open-source licenses; users should review the specific terms before deployment.
The model carries the tag "edge," suggesting optimization for edge deployment scenarios where computational resources are constrained.
Technical Context
Liquid AI's LFM2 architecture appears designed to balance model capacity with inference efficiency through its mixture-of-experts design. The "A2B" suffix follows a naming convention used by other MoE releases (total parameters, then active parameters), suggesting roughly 2 billion parameters are active per token, though Liquid AI's listing does not spell this out. At 24 billion total parameters, the model occupies a middle ground in current model sizing: larger than small instruction-tuned models, but significantly smaller than frontier models of 70B+ parameters.
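If the "A2B" suffix does denote about 2 billion active parameters (an assumption based on the naming convention, not confirmed by the source), the rough per-token compute savings versus a dense model of the same size can be estimated with the standard approximation of ~2 FLOPs per active parameter per token:

```python
# Rough forward-pass cost: ~2 FLOPs per active parameter per token.
# The 2e9 active-parameter figure is an ASSUMPTION read off the "A2B"
# suffix; the 24e9 total comes from the model name.
def flops_per_token(active_params):
    return 2 * active_params

dense_24b = flops_per_token(24e9)  # dense model: all 24B params active
moe_a2b = flops_per_token(2e9)     # MoE: only ~2B params active per token
ratio = dense_24b / moe_a2b        # dense model costs ~12x more per token
```

Under these assumptions, inference compute per token drops by roughly an order of magnitude relative to a dense 24B model, which is consistent with the "edge" positioning of the release.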
The model card references arxiv:2511.23404, indicating academic documentation may be available for the underlying architecture.
What This Means
Liquid AI's release of LFM2-24B-A2B adds another option to the growing ecosystem of open and semi-open models. The mixture-of-experts approach and explicit edge-deployment tagging suggest targeting use cases where inference cost and latency matter more than maximum capability. The multilingual support and US endpoint compatibility indicate a focus on commercial deployment rather than a pure research release. Builders evaluating cost-efficient alternatives to larger models should benchmark this model against comparable MoE designs from other providers.