LLM News

Every LLM release, update, and milestone.

research

Researchers map accent bias in speech recognition to specific neural subspaces

A new audit technique called ACES reveals that accent-discriminative information in speech recognition models concentrates in low-dimensional subspaces at early layers. Testing Wav2Vec2-base on five English accents, researchers found that accent information concentrates in an 8-dimensional subspace at layer 3, but attempting to remove it paradoxically worsens fairness.
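The summary does not describe the ACES method itself, but the core idea of locating a low-dimensional accent-discriminative subspace in layer activations can be sketched with PCA on per-accent mean activations. Everything below is illustrative: the function name `accent_subspace`, the synthetic data, and the choice of between-accent-mean SVD are assumptions, not the published procedure.

```python
import numpy as np

def accent_subspace(embeddings, labels, k):
    """Estimate a k-dimensional accent-discriminative subspace from layer
    activations via SVD of the centered per-accent mean vectors.
    A simplified stand-in for the audit described in the article."""
    classes = np.unique(labels)
    global_mean = embeddings.mean(axis=0)
    # Between-accent scatter: how each accent's mean deviates from the global mean.
    means = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    centered = means - global_mean
    # Top-k right singular vectors span the most accent-discriminative directions.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k], s

# Hypothetical demo: 5 "accents", 768-dim activations (Wav2Vec2-base width),
# with the accent signal injected into only the first 8 dimensions.
rng = np.random.default_rng(0)
dim, per_accent = 768, 40
X, y = [], []
for accent in range(5):
    offset = np.zeros(dim)
    offset[:8] = rng.normal(scale=3.0, size=8)  # accent signal lives in 8 dims
    X.append(rng.normal(size=(per_accent, dim)) + offset)
    y.append(np.full(per_accent, accent))
X, y = np.concatenate(X), np.concatenate(y)

# With 5 accents the between-mean matrix has rank at most 4, so k <= 4 here;
# the paper's 8-dimensional finding comes from the real model, not this toy.
basis, sv = accent_subspace(X, y, k=4)
print(basis.shape)
```

The recovered basis vectors put most of their energy in the first 8 coordinates, where the synthetic accent signal was planted, illustrating how a few directions can carry nearly all accent information.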

research · Apple

Apple Research Identifies 'Text-Speech Understanding Gap' Limiting LLM Speech Performance

Apple researchers have identified a fundamental limitation in speech-adapted large language models: they consistently underperform their text-only counterparts on language understanding tasks. The team terms this the 'text-speech understanding gap' and documents that speech-adapted LLMs trail both the original text models and cascaded speech-to-text pipelines.