LLM News

Every LLM release, update, and milestone.

research

TSEmbed combines mixture-of-experts with LoRA to scale multimodal embeddings across conflicting tasks

Researchers propose TSEmbed, a multimodal embedding framework that combines Mixture-of-Experts (MoE) with Low-Rank Adaptation (LoRA) to handle task conflicts in universal embedding models. The approach introduces Expert-Aware Negative Sampling (EANS) to improve discriminative power and achieves state-of-the-art results on the Massive Multimodal Embedding Benchmark (MMEB).
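The paper's exact architecture isn't given here, but the general pattern it describes, a gated mixture of LoRA experts layered on a frozen base weight, can be sketched as follows. All names (`moe_lora_forward`, the gating matrix `G`) and the softmax gating are illustrative assumptions, not TSEmbed's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 16, 4, 3                     # hidden size, LoRA rank, expert count

W = rng.normal(size=(d, d))                    # frozen pretrained weight
A = rng.normal(size=(n_experts, r, d)) * 0.01  # per-expert LoRA down-projections
B = np.zeros((n_experts, d, r))                # per-expert LoRA up-projections (init 0)
G = rng.normal(size=(d, n_experts))            # hypothetical gating weights

def moe_lora_forward(x):
    """Route x through a softmax-gated mixture of LoRA experts on a frozen W."""
    gates = np.exp(x @ G)
    gates /= gates.sum()                       # softmax over experts
    # Each expert contributes a low-rank update B_k @ A_k, weighted by its gate
    delta = sum(g * (Bk @ (Ak @ x)) for g, Ak, Bk in zip(gates, A, B))
    return W @ x + delta
```

Because each `B_k` is initialized to zero, the mixture starts out identical to the frozen model, and each task can, in principle, learn to route toward experts whose low-rank updates do not conflict.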

research

Stable-LoRA addresses feature learning instability in low-rank adaptation fine-tuning

Researchers have identified a fundamental instability in Low-Rank Adaptation (LoRA), the widely used parameter-efficient fine-tuning method, and propose Stable-LoRA as a solution. The new approach applies dynamic weight shrinkage to keep feature learning stable during training while preserving LoRA's efficiency benefits.
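For context, standard LoRA freezes the pretrained weight and trains only a low-rank correction. A minimal sketch, with a purely hypothetical shrinkage step standing in for the paper's (unspecified here) dynamic weight shrinkage:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 16, 4                          # hidden size, LoRA rank

W = rng.normal(size=(d, d))           # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d)) * 0.01    # trainable low-rank factor (down-projection)
B = np.zeros((d, r))                  # trainable low-rank factor (up-projection, init 0)

def lora_forward(x):
    """LoRA forward pass: effective weight is W + B @ A; only A, B are trained."""
    return W @ x + B @ (A @ x)

def shrink(M, lam=0.01):
    """Hypothetical shrinkage step: scale factors toward zero each update to
    bound the norm of the low-rank update B @ A. Stable-LoRA's actual dynamic
    rule is not described in this summary."""
    return (1.0 - lam) * M
```

With `B` initialized to zero, the adapted model starts out exactly equal to the pretrained one; the instability the paper targets arises as `B @ A` grows during fine-tuning.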