model release

Liquid AI releases LFM2-24B-A2B, a 24B-parameter mixture-of-experts model

TL;DR

Liquid AI has released LFM2-24B-A2B, a 24-billion-parameter mixture-of-experts model designed for text generation and conversational tasks. The model supports nine languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, and Portuguese.


Liquid AI has released LFM2-24B-A2B, a 24-billion parameter mixture-of-experts (MoE) model for text generation and conversational applications. The model is now available on Hugging Face.

Model Specifications

LFM2-24B-A2B is a text-generation model built on the LFM2 architecture. The model employs a mixture-of-experts approach: a gating network activates only a small subset of the parameters for each input, reducing per-token compute compared to a dense model of the same total size.
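To make the routing idea concrete, here is a minimal top-k MoE layer sketch in NumPy. This is purely illustrative: the expert count, dimensions, and gating scheme are assumptions for the example, not details of LFM2's actual architecture, which Liquid AI has not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts_w, gate_w, k=2):
    """Route input x to its top-k experts; only those experts run."""
    logits = x @ gate_w                      # one gating score per expert
    topk = np.argsort(logits)[-k:]           # indices of the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only k expert matmuls execute here, not n_experts of them --
    # this is where the compute savings over a dense layer come from.
    return sum(w * (x @ experts_w[i]) for w, i in zip(weights, topk))

d, n_experts = 8, 4                          # toy sizes, chosen for illustration
x = rng.normal(size=d)
experts = rng.normal(size=(n_experts, d, d))
gate = rng.normal(size=(d, n_experts))

y = moe_layer(x, experts, gate)
print(y.shape)  # (8,)
```

The key property is that parameter count (all experts) and per-token compute (only `k` experts) scale independently, which is what lets MoE models grow total capacity without a proportional inference cost.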

The model supports nine languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, and Portuguese. This multilingual capability positions it for deployment across diverse linguistic regions.

Deployment and Access

The model is compatible with Hugging Face endpoints and is available for US-region deployment. Liquid AI has published the model under a custom license (tagged `license:other` on Hugging Face), meaning the usage terms differ from standard open-source licenses; users should review the specific terms before deployment.

The model carries the tag "edge," suggesting optimization for edge deployment scenarios where computational resources are constrained.

Technical Context

Liquid AI's LFM2 architecture appears designed to balance model capacity with inference efficiency through its mixture-of-experts design. The 24-billion parameter scale represents a middle ground in current model sizing, larger than smaller instruction-tuned models but significantly smaller than frontier 70B+ parameter models.
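A rough back-of-the-envelope calculation shows why this trade-off matters. Note the assumption: by analogy with naming conventions like Qwen3.6-35B-A3B, the "A2B" suffix plausibly denotes roughly 2B activated parameters per token, though Liquid AI has not confirmed this figure here.

```python
# Hedged sketch: forward-pass cost scales roughly with *active* parameters.
total_params = 24e9    # 24B total, from the model name
active_params = 2e9    # ASSUMPTION: "A2B" suffix read as ~2B active per token

# Common rule of thumb: ~2 FLOPs per parameter per generated token.
dense_flops_per_token = 2 * total_params
moe_flops_per_token = 2 * active_params

ratio = moe_flops_per_token / dense_flops_per_token
print(f"{ratio:.3f}")  # 0.083 -- ~12x less compute than a dense 24B model
```

If the assumption holds, the model would serve tokens at roughly the compute cost of a 2B dense model while retaining 24B parameters of capacity, which fits the edge-deployment positioning.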

The model card references arxiv:2511.23404, indicating that academic documentation may be available for the underlying architecture.

What This Means

Liquid AI's release of LFM2-24B-A2B adds another option to the growing ecosystem of open and semi-open models. The mixture-of-experts approach and explicit edge-deployment tagging suggest targeting use cases where inference cost and latency matter more than maximum capability. The multilingual support and US endpoint compatibility indicate focus on commercial deployment rather than pure research release. Builders evaluating cost-efficient alternatives to larger models should benchmark this model against comparable MoE designs from other providers.

Related Articles

model release

Moonshot AI Releases Kimi K2.6: 1T-Parameter MoE Model with 256K Context and Agent Swarm Capabilities

Moonshot AI has released Kimi K2.6, an open-source multimodal model with 1 trillion total parameters (32B activated) and 256K context window. The model achieves 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, and supports horizontal scaling to 300 sub-agents executing 4,000 coordinated steps.

model release

Alibaba Releases Qwen3.6-35B-A3B: 35B Parameter MoE Model with 262K Context Window

Alibaba has released Qwen3.6-35B-A3B, the first open-weight model in the Qwen3.6 series. The model features 35B total parameters with 3B activated, a native 262K context window extensible to 1.01M tokens, and achieves 73.4% on SWE-bench Verified using 256 experts with 8 activated per token.

model release

OpenAI Releases GPT-5.4 Image 2 with 272K Context Window and Image Generation

OpenAI has released GPT-5.4 Image 2, combining the GPT-5.4 reasoning model with image generation capabilities. The multimodal model features a 272K token context window and is priced at $8 per million input tokens and $15 per million output tokens.

model release

OpenAI releases ChatGPT Images 2.0 with 3840x2160 resolution at $30 per 1M output tokens

OpenAI released ChatGPT Images 2.0, pricing output tokens at $30 per million with maximum resolution of 3840x2160 pixels. CEO Sam Altman claims the improvement from gpt-image-1 to gpt-image-2 equals the jump from GPT-3 to GPT-5.
