Researchers map accent bias in speech recognition to specific neural subspaces
A new audit technique called ACES reveals that accent-discriminative information in speech recognition models concentrates in low-dimensional subspaces at early layers. Testing Wav2Vec2-base on five English accents, researchers found that accent information is most concentrated at layer 3, where it occupies a subspace of just 8 dimensions, yet attempting to remove that subspace paradoxically worsens fairness.
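To make the idea concrete, below is a minimal sketch of the general recipe such audits follow, not the paper's ACES procedure itself: probe a layer's utterance-level representations for accent identity, estimate a small accent-discriminative subspace from the between-class structure, project it out, and probe again. The feature matrix `X` and labels `y` are random placeholders; in practice `X` would hold Wav2Vec2-base layer-3 hidden states averaged over time for each utterance.

```python
# Minimal sketch (not the paper's ACES method) of probing for an
# accent-discriminative subspace and then removing it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_utts, dim, n_accents = 1000, 768, 5      # 768 = Wav2Vec2-base hidden size
k = 8                                      # target subspace rank (paper reports 8)

X = rng.normal(size=(n_utts, dim))         # placeholder layer features
y = rng.integers(0, n_accents, size=n_utts)

def probe_accuracy(feats, labels):
    """Mean cross-validated accuracy of a linear accent probe."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, feats, labels, cv=5).mean()

# 1) How much accent information is linearly decodable before any intervention?
acc_before = probe_accuracy(X, y)

# 2) Estimate a discriminative subspace from the between-class means. With 5
#    accents the class means span at most 4 directions, so this sketch caps the
#    rank at min(k, n_accents - 1); a richer estimator would be needed to
#    recover the 8-dimensional subspace the paper reports.
means = np.stack([X[y == c].mean(axis=0) for c in range(n_accents)])
_, _, Vt = np.linalg.svd(means - means.mean(axis=0), full_matrices=False)
Q = Vt[:min(k, n_accents - 1)].T           # orthonormal basis, shape (dim, r)

# 3) Remove the subspace from every representation and probe again.
X_erased = X - (X @ Q) @ Q.T
acc_after = probe_accuracy(X_erased, y)

print(f"accent probe accuracy, original features: {acc_before:.3f}")
print(f"accent probe accuracy, subspace removed:  {acc_after:.3f}")
```

If the finding holds, probe accuracy should drop sharply after erasure at the identified layer, while the paper's counterintuitive result is that downstream recognition fairness does not improve when such a subspace is removed.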