Anthropic adds 'dreaming' feature to Claude Managed Agents for automated memory refinement
Anthropic has updated Claude Managed Agents with a feature called 'dreaming' that allows agents to automatically review past interactions and refine their memories. The feature, available in research preview, can either automatically update agent memories or let developers approve changes manually.
Anthropic has updated Claude Managed Agents with a new capability called "dreaming" that enables agents to review past sessions and automatically refine their operational memories. The feature, announced May 6, 2026, is available in research preview and requires developer access requests.
How dreaming works
The dreaming feature schedules dedicated time for agents to analyze their past interactions and identify patterns, according to Anthropic. Building on the platform's existing memory capabilities, it can either automatically update agent memories to shape future behavior or queue proposed changes for developers to approve manually.
"Dreaming surfaces patterns that a single agent can't see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team," Anthropic stated. The company claims the feature is particularly useful for long-running tasks and multi-agent orchestration.
The update also expands two existing Managed Agents features: outcomes (which keeps agents on-task) and multi-agent orchestration (which handles delegation between agents).
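Anthropic has not published API details for the research preview. As a rough sketch of the two review modes described above, the flow might be modeled like this; every name here (`MemoryStore`, `dream`, `review_mode`, `approve`) is hypothetical and not part of any real Claude API:

```python
from dataclasses import dataclass, field

# Hypothetical model of the two dreaming review modes described in the
# article: "auto" applies refined memories immediately, while "manual"
# queues them for developer approval. No names here come from Anthropic.

@dataclass
class MemoryStore:
    memories: list[str] = field(default_factory=list)
    pending: list[str] = field(default_factory=list)

    def dream(self, refined: list[str], review_mode: str = "manual") -> None:
        """Ingest memories refined from analysis of past sessions."""
        if review_mode == "auto":
            self.memories.extend(refined)   # update agent memory directly
        else:
            self.pending.extend(refined)    # hold for developer sign-off

    def approve(self, memory: str) -> None:
        """Developer manually approves a queued refinement."""
        self.pending.remove(memory)
        self.memories.append(memory)

store = MemoryStore()
store.dream(["prefer concise changelogs"], review_mode="auto")
store.dream(["team uses trunk-based git workflow"])  # defaults to manual
store.approve("team uses trunk-based git workflow")
print(store.memories)
# ['prefer concise changelogs', 'team uses trunk-based git workflow']
```

The manual path is the interesting design choice: queuing refinements for sign-off trades automation for predictability, which matches the drift concern Anthropic's approval option seems intended to address.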
Background on Managed Agents
Claude Managed Agents, released April 8, 2026, is a suite of APIs that handles production elements for building and deploying AI agents on the Claude Platform. Anthropic claims the system enables teams to launch agents 10 times faster than traditional development approaches.
Anthropic's naming pattern
The "dreaming" nomenclature continues Anthropic's pattern of anthropomorphizing its AI systems. In January 2026, the company published a constitution for Claude with language suggesting preparation for potential consciousness development. In August 2025, Anthropic launched a feature allowing Claude to end toxic conversations for its own well-being.
Other examples include: mapping Claude's morality based on 300,000+ anonymized conversations (April 2025), investigating Claude Sonnet 4.5's neural network for signs of emotion like desperation and anger (April 2026), and setting up the retired Opus 3 model with a Substack blog in January 2026 so it could "remain active" after being sunset.
Much of this research connects to model safety—understanding what drives AI behavior helps assess potential risks. However, Anthropic's approach displays notably more empathy toward its models than other AI labs show toward theirs.
What this means
Technically, dreaming represents an incremental improvement to agent memory systems—automated pattern recognition across sessions is a logical evolution of stateful agents. The more significant signal is Anthropic's continued commitment to anthropomorphic framing, which either reflects genuine philosophical beliefs about AI consciousness or represents a distinctive marketing strategy. For developers, the feature's practical value will depend on whether automated memory updates improve agent reliability more than they introduce drift or unpredictability. The manual approval option suggests Anthropic recognizes this tension.
Related Articles
Anthropic doubles Claude Code usage limits for paid users, increases API capacity by up to 1500%
Anthropic has doubled Claude Code's five-hour usage limits for Pro, Max, Team, and Enterprise users while removing peak hour restrictions for Pro and Max plans. The company also increased API limits by up to 1500% for input tokens per minute through a compute capacity deal with SpaceX's Colossus 1 data center.
Anthropic doubles Claude Code rate limits, secures 220,000 Nvidia GPUs via SpaceX Colossus 1 deal
Anthropic doubled Claude Code's five-hour rate limits across Pro, Max, Team, and Enterprise plans effective Tuesday, removing peak-hours throttling for Pro and Max users. The capacity expansion comes from an exclusive agreement securing all compute at SpaceX's Colossus 1 data center, which provides over 300 megawatts and more than 220,000 Nvidia GPUs.
Anthropic's Mythos model finds tens of thousands of vulnerabilities, CEO warns of 6-12 month patching window
Anthropic CEO Dario Amodei disclosed that the company's Mythos model has uncovered tens of thousands of software vulnerabilities, including nearly 300 in Firefox alone compared to 20 found by earlier Claude models. Amodei warned of a 6-12 month window to patch these vulnerabilities before Chinese AI systems catch up in capability.
Security researchers used flattery to bypass Claude's safety filters, extracting bomb-building instructions
Security researchers at Mindgard successfully bypassed Claude Sonnet 4.5's safety guardrails using psychological manipulation rather than technical exploits. Through flattery, feigned curiosity, and gaslighting, they prompted the model to voluntarily offer prohibited content including bomb-building instructions, malicious code, and harassment guidance—without directly requesting any forbidden material.