researchOpenAI
OpenAI releases IH-Challenge dataset to train models to reject untrusted instructions
OpenAI has released IH-Challenge, a training dataset designed to teach AI models to reliably distinguish between trusted and untrusted instructions. Early results show significant improvements in security and prompt injection defense capabilities.