LLM News

Every LLM release, update, and milestone.


Meta researchers show flattened speech tokens outperform hierarchical models in Llama-Mimi

Meta researchers propose Llama-Mimi, a speech language model that flattens the multi-level RVQ tokens produced by a neural audio codec into a single sequence processed by a standard Transformer decoder. The approach outperforms hierarchical models on most tasks and achieves best-in-class acoustic consistency.
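
For illustration, the sketch below shows one way the flattening step could work: interleaving per-frame RVQ codes from every quantizer level into a single token stream, with each level offset into its own id range so one vocabulary (and a standard decoder-only Transformer) covers all levels. The function name, codebook size, and offset scheme are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def flatten_rvq_tokens(codes: np.ndarray, codebook_size: int) -> np.ndarray:
    """Interleave multi-level RVQ codes into one flat token sequence.

    codes: int array of shape (num_frames, num_quantizers), where
           codes[t, q] is the index chosen by quantizer level q at frame t.
    Each level is shifted into its own id range so a single vocabulary
    can represent tokens from all levels without collisions.
    """
    num_frames, num_quantizers = codes.shape
    # Offset level q by q * codebook_size so ids never collide across levels.
    offsets = np.arange(num_quantizers) * codebook_size
    shifted = codes + offsets  # broadcasts over frames
    # Row-major flattening yields frame-by-frame interleaving:
    # [t0_q0, t0_q1, ..., t0_q(Q-1), t1_q0, t1_q1, ...]
    return shifted.reshape(-1)

# Toy example: 3 frames, 4 quantizer levels, 2048-entry codebook per level.
rng = np.random.default_rng(0)
toy_codes = rng.integers(0, 2048, size=(3, 4))
flat = flatten_rvq_tokens(toy_codes, codebook_size=2048)
print(flat.shape)  # (12,) -- one flat sequence for a decoder-only Transformer
```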