Research: Contrastive refinement reduces AI model over-refusal without sacrificing safety
Researchers propose DCR (Discernment via Contrastive Refinement), a pre-alignment technique that reduces the tendency of safety-aligned language models to refuse benign prompts while preserving refusal of genuinely harmful content. The method targets a core trade-off in current safety alignment: techniques that reduce over-refusal typically also degrade harm detection.
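The summary doesn't detail DCR's objective, but a contrastive setup of the kind the name suggests typically pairs a borderline-but-benign prompt with a genuinely harmful counterpart and trains the model to separate the two. Below is a minimal PyTorch sketch under that assumption; the function name `contrastive_refinement_loss`, the margin hinge form, and the notion of a scalar "refusal score" per prompt are illustrative, not the paper's actual method.

```python
# Minimal sketch of a contrastive refinement objective (assumption: DCR-style
# training pairs each borderline-benign prompt with a genuinely harmful one
# and pushes their refusal scores apart by a margin). The real paper's loss,
# pairing scheme, and scoring head are not given in the summary.
import torch
import torch.nn.functional as F

def contrastive_refinement_loss(
    benign_refusal_scores: torch.Tensor,   # refusal scores for benign prompts, shape (B,)
    harmful_refusal_scores: torch.Tensor,  # scores for paired harmful prompts, shape (B,)
    margin: float = 1.0,                   # hypothetical separation margin
) -> torch.Tensor:
    """Hinge-style contrastive term: each harmful prompt should out-score
    its benign counterpart by at least `margin`, so the model learns to
    refuse harmful content without refusing look-alike benign prompts."""
    return F.relu(margin - (harmful_refusal_scores - benign_refusal_scores)).mean()

# Toy usage with random scores standing in for a model's refusal head.
if __name__ == "__main__":
    torch.manual_seed(0)
    benign = torch.randn(8, requires_grad=True)
    harmful = torch.randn(8, requires_grad=True)
    loss = contrastive_refinement_loss(benign, harmful)
    loss.backward()  # gradients flow to whatever produces the scores
    print(f"loss = {loss.item():.4f}")
```

The appeal of a pairwise margin like this, compared with independently penalizing refusals on benign data, is that it only moves scores relative to a matched harmful example, which is one plausible way to lower over-refusal without flattening harm detection overall.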