OpenAI Codex launches subagents and custom agent support in general availability
OpenAI Codex subagents reached general availability after weeks of preview, enabling developers to define custom agents as TOML files and parallelize task execution. The feature mirrors Claude Code's implementation, shipping with default subagents in explorer, worker, and default roles.
OpenAI Codex subagents are now in general availability, moving out of preview status after several weeks of testing behind a feature flag.
The implementation follows Claude Code's model, offering three default subagent types: "explorer," "worker," and "default." OpenAI's documentation does not clearly distinguish the worker and default variants, though judging by its CSV-handling examples, the worker variant appears designed for parallelizing large numbers of small tasks.
Custom Agent Configuration
Developers can now define custom agents as TOML configuration files stored in ~/.codex/agents/. These custom agents support:
- Custom instructions tailored to specific use cases
- Model assignment, including gpt-5.3-codex-spark for high-speed execution
- Named references for direct invocation in prompts
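Taken together, a definition combining those three elements might look like the sketch below. The key names and file layout here are illustrative assumptions, not OpenAI's documented schema:

```toml
# ~/.codex/agents/ui_fixer.toml -- illustrative sketch only;
# the actual field names in OpenAI's schema may differ.

# Hypothetical: the name used to invoke this agent by reference in prompts.
name = "ui_fixer"

# Model assignment; gpt-5.3-codex-spark for high-speed execution.
model = "gpt-5.3-codex-spark"

# Custom instructions tailored to this agent's use case.
instructions = """
You fix UI bugs. Prefer the smallest change that resolves the
reported failure, and state the root cause in one sentence.
"""
```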
OpenAI's documentation includes this example demonstrating the pattern:
Investigate why the settings modal fails to save. Have browser_debugger reproduce it, code_mapper trace the responsible code path, and ui_fixer implement the smallest fix once the failure mode is clear.
This prompt instantiates three named custom agents with specific responsibilities, each potentially running in parallel.
Industry-Wide Adoption Pattern
Subagent architecture has become standard across coding platforms. Competitors now support similar patterns:
- Claude Code: subagents with custom definitions
- Gemini CLI: experimental subagent support
- Mistral Vibe: agent selection and skills framework
- Visual Studio Code Copilot: built-in subagent layer
- Cursor: dedicated subagent documentation
- OpenCode: integrated agent system
What this means
Subagents represent the consolidation of agentic patterns in coding workflows. Rather than creating monolithic agents, developers can decompose complex tasks into specialized subagents with distinct capabilities and models. This approach reduces hallucination risk through task specialization and enables genuine parallelization for I/O-bound operations. OpenAI's integration of gpt-5.3-codex-spark into custom agents gives users explicit control over speed-accuracy tradeoffs per agent type.
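The parallelization claim above is easiest to see in miniature. The following Python sketch is a conceptual analogy only, using a stand-in `run_worker` function rather than any Codex API: many small I/O-bound tasks (like the per-row CSV work mentioned earlier) fan out to concurrent workers instead of running through one monolithic loop.

```python
from concurrent.futures import ThreadPoolExecutor

def run_worker(task: str) -> str:
    # Stand-in for a subagent handling one small task (e.g. one CSV row).
    # A real worker subagent would call a model and wait on I/O here.
    return f"processed:{task}"

def fan_out(tasks: list[str], max_workers: int = 4) -> list[str]:
    # Dispatch each small task to its own worker thread; pool.map
    # preserves input order, so results line up with tasks.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_worker, tasks))

results = fan_out(["row-1", "row-2", "row-3"])
```

Because the simulated work is I/O-bound rather than CPU-bound, threads (or parallel subagents) give a genuine wall-clock speedup: each worker spends most of its time waiting, so the waits overlap.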
Related Articles
OpenAI teases iPhone version of Codex desktop app for remote control features
OpenAI appears to be preparing an iPhone companion app for its Codex desktop application. Company employees responded to user requests for remote control features on Twitter, with one stating users should expect the functionality within days.
OpenAI Fixed GPT-5.5's Goblin Obsession by Explicitly Banning Mythical Creature References
OpenAI discovered its GPT-5.1 through GPT-5.4 models developed an increasing fixation on goblins, gremlins, and other mythical creatures. The issue traced back to reinforcement learning rewards used to develop a discontinued 'Nerdy personality' feature, which persisted across model generations.
ChatGPT Images 2.0 Adds UI Design Analysis and Mockup Generation Capabilities
OpenAI's ChatGPT Images 2.0 has added UI design analysis capabilities, allowing it to review interface designs, flag specific issues, and generate redesigned mockups. The feature is available to ChatGPT Plus subscribers at $20/month and represents an expansion beyond pure image generation into design review.
Vibe Adds Remote Coding Agents Powered by Mistral Medium 3.5
Mistral AI has integrated its Medium 3.5 model into Vibe for remote coding agent functionality. The company also launched a new Work mode in Le Chat designed for complex tasks, though specific technical details remain undisclosed.