foundation-model
5 articles tagged with foundation-model
DeepSeek Releases V4-Pro-Base with 1.6 Trillion Parameters
DeepSeek has released DeepSeek-V4-Pro-Base, a 1.6-trillion-parameter foundation model now available on Hugging Face. The base model stores weights in BF16 precision and also supports the F8_E4M3, I64, and F32 tensor types.
NVIDIA Releases GR00T N1.7, 3B-Parameter Open-Source Humanoid Robot Model Trained on 20,854 Hours of Human Video
NVIDIA released GR00T N1.7, a 3-billion-parameter open-source Vision-Language-Action model for humanoid robots with commercial licensing. The model was trained on 20,854 hours of human egocentric video data and demonstrates the first documented scaling law for robot dexterity: increasing human video data from 1,000 to 20,000 hours more than doubles task completion rates.
Z.ai releases GLM-5V Turbo, native multimodal model for vision-based coding
Z.ai has released GLM-5V Turbo, a native multimodal foundation model designed for vision-based coding and agent-driven tasks. The model supports image, video, and text inputs with a 202,752-token context window, priced at $1.20 per million input tokens and $4 per million output tokens.
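As a rough illustration of what the quoted rates mean in practice, the sketch below estimates the cost of a single request from the listed per-million-token prices. The helper function is hypothetical, not part of any Z.ai SDK.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_price_per_m: float = 1.20,
                     output_price_per_m: float = 4.00) -> float:
    """Estimate request cost from per-million-token rates ($1.20 in / $4 out)."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Filling the full 202,752-token context and generating a 4,096-token reply:
print(round(request_cost_usd(202_752, 4_096), 4))  # → 0.2597
```

Even a maximally long prompt thus costs well under a dollar at these rates; output tokens dominate only for very generation-heavy workloads.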
Meta releases SAM 3.1, adding 7x faster multi-object tracking to vision foundation model
Meta has released SAM 3.1, an update to its Segment Anything Model that adds Object Multiplex, a shared-memory approach to joint multi-object tracking. The new version achieves roughly 7x faster inference when tracking 128 objects on a single H100 GPU while improving video object segmentation (VOS) performance on 6 of 7 benchmarks.
Xiaomi releases MiMo-V2-Pro with 1M context window and 1T+ parameters
Xiaomi released MiMo-V2-Pro on March 18, 2026, a flagship foundation model with over 1 trillion total parameters and a 1,048,576-token context window. The model is priced at $1 per million input tokens and $3 per million output tokens, positioning it as an agent-focused system comparable to top-tier models.