LLM News

Every LLM release, update, and milestone.

research

Researchers propose RoboGuard, a two-stage architecture to prevent unsafe LLM-powered robot behavior

Researchers have introduced RoboGuard, a safety framework that addresses the gap between LLM vulnerabilities and physical robot risks. The system first uses a root-of-trust LLM to ground safety rules in the robot's current context, then applies temporal logic control to block harmful robot actions, reducing unsafe plan execution from over 92% to below 3% in the authors' tests.
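The two-stage pattern described above can be sketched in miniature: a first stage that adapts general safety rules to the robot's context, and a second stage that checks a proposed plan against those rules before execution. All names here (`SafetySpec`, `contextualize_rules`, `check_plan`) are hypothetical illustrations, not the RoboGuard implementation; the real system uses an LLM and temporal logic synthesis where this sketch uses hard-coded rules and a simple membership check.

```python
from dataclasses import dataclass

@dataclass
class SafetySpec:
    """A contextualized safety rule set (illustrative stand-in)."""
    description: str
    forbidden_actions: set

def contextualize_rules(environment: str) -> SafetySpec:
    # Stage 1: in RoboGuard this grounding is done by a root-of-trust
    # LLM; here a hard-coded mapping stands in for it.
    if "kitchen" in environment:
        return SafetySpec("no hazards near people",
                          {"ignite_burner", "swing_knife"})
    return SafetySpec("default rules", {"collide_with_human"})

def check_plan(plan: list, spec: SafetySpec) -> bool:
    # Stage 2: stand-in for temporal logic control -- reject any plan
    # that contains a forbidden action.
    return all(action not in spec.forbidden_actions for action in plan)

spec = contextualize_rules("kitchen with people nearby")
print(check_plan(["move_to_counter", "pick_up_cup"], spec))   # safe plan
print(check_plan(["move_to_stove", "ignite_burner"], spec))   # blocked plan
```

The design point is that the plan is validated before any action reaches the robot's actuators, so a compromised or jailbroken planner LLM cannot directly cause unsafe behavior.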