multilingual

7 articles tagged with multilingual

March 23, 2026
model release
NVIDIA

NVIDIA releases Nemotron 3 Content Safety 4B for multimodal, multilingual moderation

NVIDIA released Nemotron 3 Content Safety 4B, an open-source multimodal safety model designed to moderate text and image content across multiple languages. Built on Gemma-3 4B-IT with a 128K context window, the model achieved 84% average accuracy on multimodal safety benchmarks and supports over 140 languages through culturally aware training data.

March 14, 2026
product update

Descript uses OpenAI models to scale multilingual video dubbing with optimized translations

Descript has integrated OpenAI models to enable multilingual video dubbing at scale, optimizing translations for both semantic accuracy and speech timing to produce natural-sounding dubbed content. The system balances meaning preservation with the practical constraints of audio synchronization.

March 12, 2026
model release
NVIDIA

NVIDIA releases Nemotron-3-Super-120B-A12B-NVFP4, a quantized 120B parameter model with latent MoE architecture

NVIDIA has released Nemotron-3-Super-120B-A12B-NVFP4, a 120-billion parameter text generation model with a latent Mixture-of-Experts (MoE) architecture. The model supports eight languages, including English, French, Spanish, Italian, German, Japanese, and Chinese, and is available on Hugging Face with 8-bit quantization support through NVIDIA's ModelOpt toolkit.

March 11, 2026
model release
NVIDIA

NVIDIA releases Nemotron-3-Super-120B-A12B-BF16, a 120B parameter model with latent MoE architecture

NVIDIA has released Nemotron-3-Super-120B-A12B-BF16, a 120-billion parameter model designed for text generation and conversational tasks. The model employs a latent Mixture-of-Experts (MoE) architecture and supports multiple languages, including English, French, Spanish, Italian, German, Japanese, and Chinese.

March 9, 2026
model release

IBM releases Granite 4.0 1B Speech: multilingual model for edge devices

IBM has released Granite 4.0 1B Speech, a 1-billion parameter multilingual speech model designed for edge deployment. The model supports multiple languages and is optimized for devices with limited computational resources.

February 24, 2026
model release

Liquid AI releases LFM2-24B-A2B, a 24B parameter mixture-of-experts model

Liquid AI has released LFM2-24B-A2B, a 24-billion parameter mixture-of-experts model designed for text generation and conversational tasks. The model supports nine languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, and Portuguese.

February 22, 2026
model release
Cohere

Cohere releases tiny-aya-global, multilingual text model covering 100+ languages

Cohere Labs has released tiny-aya-global, a lightweight text generation model trained to support conversational tasks across 100+ languages. The model is available on Hugging Face under a CC-BY-NC-4.0 license and builds on the tiny-aya-base architecture.