retrieval
7 articles tagged with retrieval
AWS adds multimodal embeddings to Amazon Bedrock for manufacturing document retrieval
AWS released multimodal embedding capabilities for Amazon Nova on Bedrock, letting manufacturing organizations retrieve information from technical documents that mix text, engineering diagrams, and images. The model supports configurable embedding dimensions from 256 to 3,072 and embeds text, images, and multi-page documents into a shared vector space.
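Once text, images, and documents live in a shared vector space, retrieval reduces to nearest-neighbor search, typically by cosine similarity. A minimal pure-Python sketch; the 3-dimensional vectors and document IDs below are toy stand-ins for real model outputs, not Bedrock API calls:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, top_k=2):
    """Rank indexed documents by cosine similarity to the query vector."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Toy index: in practice each vector comes from the embedding model,
# whether the input was text, an image, or a multi-page document.
index = {
    "pump-manual": [0.9, 0.1, 0.0],
    "wiring-diagram": [0.1, 0.9, 0.1],
    "safety-note": [0.0, 0.2, 0.9],
}
print(retrieve([0.8, 0.2, 0.1], index, top_k=1))  # → ['pump-manual']
```

Because all modalities share one space, the same index answers a text query against diagram embeddings and vice versa.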
IBM Releases Granite Embedding 311M R2 With 32K Context, 200+ Language Support
IBM released Granite Embedding 311M Multilingual R2, a 311-million parameter dense embedding model with 32,768-token context length and support for 200+ languages. The model scores 64.0 on Multilingual MTEB Retrieval (18 tasks), an 11.8-point improvement over its predecessor, and ships with ONNX and OpenVINO models for production deployment.
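Retrieval scores like the 64.0 above are MTEB averages of nDCG@10 across tasks. A minimal sketch of nDCG@k for a single ranked list, using the linear-gain formulation (toy relevance judgments, not MTEB's evaluation harness):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_rels, k):
    """nDCG@k: DCG of the system's ranking divided by DCG of the ideal ranking."""
    ideal = sorted(ranked_rels, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_rels, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance grades of documents in the order the system returned them:
# a grade-1 doc ranked above a grade-0 doc costs a little nDCG.
print(round(ndcg_at_k([3, 2, 0, 1], k=4), 3))  # → 0.985
```

The log discount is what makes the metric rank-sensitive: swapping two documents near the top costs far more than the same swap near position 10.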
AI agent skills fail in real-world conditions, researchers find after testing 34,000 skills
A large-scale study of 34,198 real-world skills finds that AI agent performance drops sharply when moving from curated benchmarks to realistic conditions. Claude Opus 4.6's pass rate fell from 55.4% with hand-selected skills to 38.4% under realistic conditions, while weaker models such as Kimi K2.5 performed below their no-skill baseline.
Microsoft open-sources Harrier embedding model with 27B parameters, 131K context window
Microsoft's Bing team has open-sourced Harrier, a 27-billion-parameter embedding model that supports over 100 languages and features a 131,072-token context window. The model ranks first on the MTEB v2 multilingual benchmark, outperforming proprietary offerings from OpenAI and Amazon, and is available on Hugging Face under the MIT license.
Microsoft releases Harrier embedding models with 32K context window, achieving 74.3 on MTEB v2
Microsoft released the Harrier-OSS embedding model family, comprising three variants with 270M, 600M, and 27B parameters. The largest model achieves 74.3 on the Multilingual MTEB v2 benchmark. All models support a 32,768-token maximum context and multilingual inputs across 40+ languages.
Microsoft releases Harrier embedding models with 32K token context, tops multilingual benchmark
Microsoft has released Harrier-OSS-v1, a family of multilingual text embedding models trained with contrastive learning and knowledge distillation. The 0.6B-parameter variant achieves a 69.0 score on the Multilingual MTEB v2 benchmark, with a 32,768-token context window and support for 45+ languages.
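The contrastive training mentioned above typically uses an in-batch InfoNCE objective: each query's paired document is the positive, and the other documents in the batch serve as negatives. A pure-Python sketch of that loss (the temperature value is illustrative, not Harrier's training setting):

```python
import math

def info_nce_loss(sims, temperature=0.05):
    """In-batch InfoNCE: sims[i][j] is the similarity between query i
    and document j; document i is query i's positive pair."""
    total = 0.0
    for i, row in enumerate(sims):
        logits = [s / temperature for s in row]
        m = max(logits)  # subtract the max for numerical stability
        log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_sum - logits[i]  # negative log-softmax at the positive
    return total / len(sims)

# Perfectly separated batch: positives score 1.0, negatives 0.0,
# so the loss is near zero.
sims = [[1.0, 0.0], [0.0, 1.0]]
print(round(info_nce_loss(sims), 6))  # → 0.0
```

Minimizing this loss pulls query-positive pairs together and pushes all other in-batch pairs apart, which is what shapes the embedding space for retrieval.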
Chroma releases Context-1, a 20B parameter retrieval agent for complex multi-hop search
Chroma has released Context-1, a 20B-parameter Mixture of Experts model trained specifically for retrieval tasks that require multi-hop reasoning. The model decomposes complex queries into subqueries, performs parallel tool calls, and actively prunes its own context mid-search, achieving performance comparable to frontier models at a fraction of the cost and up to 10x faster inference.
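The loop described above (decompose, search in parallel, prune mid-search) can be sketched with stub components. The decomposer and search tool here are deliberately naive placeholders, not Chroma's API; the real model does each step with learned reasoning:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(query):
    """Stub decomposer: split a multi-hop query on ' and ' into subqueries."""
    return [q.strip() for q in query.split(" and ")]

def search(subquery, corpus):
    """Stub search tool: return documents sharing any word with the subquery."""
    words = set(subquery.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def prune(context, limit=3):
    """Keep only the most recent results, mimicking mid-search context pruning."""
    return context[-limit:]

def retrieve_multi_hop(query, corpus):
    subqueries = decompose(query)
    context = []
    # Parallel tool calls, one per subquery; results come back in order.
    with ThreadPoolExecutor() as pool:
        for hits in pool.map(lambda q: search(q, corpus), subqueries):
            context.extend(hits)
            context = prune(context)  # discard stale results as the search grows
    return context

corpus = ["turbine specs", "turbine maintenance log", "sensor calibration guide"]
print(retrieve_multi_hop("turbine specs and sensor calibration", corpus))
```

The pruning step is the notable design choice: instead of letting intermediate results accumulate, the agent keeps only what the remaining hops still need, which is how a small model stays within budget on long searches.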