researchMicrosoft
Microsoft researchers discover prompt injection attacks via AI summarize buttons
Microsoft security researchers have identified a new prompt injection vulnerability where attackers embed hidden instructions in "Summarize with AI" buttons to permanently compromise AI assistant behavior and inject advertisements into chatbot memory.