ai-models

8 articles tagged with ai-models

April 9, 2026
model release

Meta AI app jumps to No. 5 on App Store following Muse Spark launch

Meta's AI app surged from No. 57 to No. 5 on the U.S. App Store within 24 hours of launching Muse Spark, Meta's new multimodal AI model. The model accepts voice, text, and image inputs and features reasoning capabilities for science and math tasks, visual coding, and multi-agent functionality.

April 8, 2026
model release

Meta replaces Llama with Muse Spark AI, launches Contemplating mode for complex reasoning

Meta has discontinued its Llama model line and launched Muse Spark as the foundation of its new AI strategy under Meta Superintelligence Labs. The model features a Contemplating mode for complex reasoning tasks and specializes in multimodal perception, health applications, and agentic tasks. Muse Spark is available today in Meta AI apps, with a private API preview for select partners.

April 2, 2026
model release · Microsoft

Microsoft releases three in-house AI models for speech and images, signaling independence from OpenAI

Microsoft released public preview versions of three proprietary AI models: MAI-Transcribe-1 for speech recognition across 25 languages at 50% lower GPU cost than alternatives, MAI-Voice-1 for speech synthesis generating 60 seconds of audio in under a second, and MAI-Image-2 for text-to-image generation. The models are available exclusively through Microsoft Azure AI Foundry and already power Copilot, Bing, and PowerPoint.

model release · Microsoft

Microsoft releases three multimodal AI models to compete with OpenAI and Google

Microsoft AI released three foundation models on April 2: MAI-Transcribe-1 for speech-to-text across 25 languages, MAI-Voice-1 for audio generation, and MAI-Image-2 for text-to-image generation. The company positions the models as cheaper alternatives to Google and OpenAI offerings. They are available on Microsoft Foundry, with transcription pricing starting at $0.36 per hour.

March 26, 2026
product update · GitHub

GitHub will train Copilot models on user interaction data starting April 2026

GitHub will use Copilot interaction data from Free, Pro, and Pro+ plan users to train AI models starting April 24, 2026, unless users actively opt out. The policy does not affect Copilot Business and Enterprise customers. Data shared will include prompts, outputs, code snippets, filenames, and repository structures.

March 23, 2026
model release · Microsoft

Microsoft's superintelligence team releases MAI-Image-2, ranks third in text-to-image generation

Microsoft's superintelligence team, led by Mustafa Suleyman, has released MAI-Image-2, a text-to-image generator that currently ranks third on the Arena.ai leaderboard for text-to-image models, behind OpenAI's GPT-Image-1.5 and Google's Nano Banana 2. The model is now available for testing in the MAI Playground and will roll out to Copilot and Bing Image Creator, with API access opening to all developers through Microsoft Foundry.

March 15, 2026
product update · Microsoft

GitHub removes premium AI models from free Copilot Student plan

GitHub has removed premium AI models including GPT-5.4, Claude Opus, and Claude Sonnet from its free Copilot Student plan effective March 12, 2026. The change leaves students with access only to lower-cost models: Claude 4.5 Haiku, Gemini 3.1 Pro, and GPT-5.3 Codex. GitHub's decision triggered 2,874 downvotes versus 21 upvotes on the announcement, with students arguing they need premium models to learn industry-standard tools.

February 20, 2026
model release

Google announces Gemini 3.1 Pro for complex problem-solving tasks

Google announced Gemini 3.1 Pro, positioning it for complex problem-solving tasks that require deeper reasoning than earlier versions. The release follows Gemini 3 Pro (November 2025) and Gemini 3 Flash (December 2025).