AWS Launches Serverless MCP Proxy on Bedrock AgentCore Runtime for Custom Agent Controls
AWS has released support for custom Model Context Protocol (MCP) proxies on Amazon Bedrock AgentCore Runtime, allowing organizations to implement custom governance and security controls on AI agent tool interactions without modifying upstream MCP servers. The serverless proxy runs on AgentCore Runtime with automatic scaling and built-in observability through CloudWatch and OpenTelemetry.
Amazon Web Services has released support for custom Model Context Protocol (MCP) proxies on Amazon Bedrock AgentCore Runtime, enabling organizations to add programmable governance and security controls to AI agent tool interactions. The feature addresses production requirements including input sanitization, audit trail generation, and data redaction at the protocol layer.
How the MCP Proxy Works
The proxy runs as a serverless workload on AgentCore Runtime and acts as an intermediary between MCP clients and upstream MCP servers. At startup, the proxy sends a standard tools/list request to the upstream server to discover available tools, then dynamically registers local versions of each tool using FastMCP. Client requests flow through the proxy, which applies custom logic before forwarding to the upstream server.
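The discover-then-wrap flow described above can be sketched in plain Python. This is a minimal illustration, not the published implementation: the upstream server is faked with local functions, and names like `upstream_call` and `make_local_tool` are assumptions for the sketch (in the real proxy, FastMCP and an MCP client fill these roles).

```python
# Sketch of the proxy's startup flow: discover upstream tools, then
# register a local wrapper for each one. The upstream server is faked
# here; a real proxy would issue tools/list and tools/call via an MCP
# client and register the wrappers with FastMCP.

def upstream_list_tools():
    # Stand-in for a tools/list request to the upstream MCP server.
    return ["get_weather", "search_docs"]

def upstream_call(tool_name, arguments):
    # Stand-in for forwarding a tools/call request upstream.
    return {"tool": tool_name, "echo": arguments}

audit_log = []

def make_local_tool(tool_name):
    # Each locally registered tool applies custom logic, then forwards.
    def local_tool(arguments):
        audit_log.append((tool_name, arguments))    # custom pre-call logic
        return upstream_call(tool_name, arguments)  # forward unchanged
    return local_tool

# "Register" a local version of every discovered upstream tool.
registry = {name: make_local_tool(name) for name in upstream_list_tools()}

result = registry["get_weather"]({"city": "Berlin"})
```

Because the wrapper is generated per tool at startup, the proxy stays transparent to clients: they see the same tool names the upstream server advertises.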
The architecture consists of three layers: the MCP client, the MCP proxy on AgentCore Runtime, and the upstream MCP server. The upstream server can be hosted on AgentCore Runtime, self-hosted infrastructure, or third-party services. AWS recommends AgentCore Gateway as an upstream server for managed tool discovery, credential management, and policy enforcement.
Infrastructure and Authorization
AgentCore Runtime provides serverless infrastructure with automatic scaling, built-in observability through Amazon CloudWatch and OpenTelemetry, and AgentCore Identity for authentication and authorization. Authorization is enforced independently at each layer: agents authenticate to the proxy using AgentCore Identity, and the proxy authenticates to upstream servers as a standard MCP client.
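The layered authorization model can be illustrated with a small sketch. All token values, names, and the `handle_agent_request` helper are hypothetical; in the real service, inbound validation is handled by AgentCore Identity and the upstream hop uses standard MCP client authentication.

```python
# Sketch of independent authorization at each layer: the proxy validates
# the agent's inbound token, then calls upstream with its *own* credential.
# Token strings are illustrative placeholders.

AGENT_TOKENS = {"agent-token-123"}             # tokens accepted from agents
UPSTREAM_CREDENTIAL = "proxy-upstream-secret"  # proxy's own upstream identity

def call_upstream(credential, tool_name, arguments):
    # Stand-in upstream server: it checks the proxy's credential,
    # never the agent's.
    assert credential == UPSTREAM_CREDENTIAL
    return {"ok": True, "tool": tool_name}

def handle_agent_request(inbound_token, tool_name, arguments):
    if inbound_token not in AGENT_TOKENS:
        raise PermissionError("agent not authorized at proxy layer")
    # The agent's token stops here; the proxy authenticates to the
    # upstream server as a standard MCP client in its own right.
    return call_upstream(UPSTREAM_CREDENTIAL, tool_name, arguments)
```

The key point the sketch captures is that the agent's credential never crosses the proxy boundary, so each hop can be revoked or rotated independently.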
The proxy implementation uses FastMCP to handle MCP protocol operations. Because the proxy is a standard Python MCP server, developers can insert custom logic before forwarding tool calls or after receiving responses, without replacing the upstream server's native capabilities.
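As one example of such custom logic, a response-side redaction hook can be a few lines of pure Python run after the upstream reply and before the result is returned to the client. The hook name and regex here are illustrative, not part of the AWS sample:

```python
import re

# Illustrative post-response hook: redact email addresses from tool
# output before it reaches the agent. In the proxy this would run
# between receiving the upstream response and returning it to the
# MCP client.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_response(text):
    return EMAIL_RE.sub("[REDACTED]", text)

clean = redact_response("Contact alice@example.com for access.")
```

Pre-call hooks for input sanitization or audit logging slot in the same way, on the request path instead of the response path.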
Alternative to Lambda Interceptors
While Amazon Bedrock AgentCore Gateway supports Lambda interceptors for running validation and transformation code on every tool invocation, the MCP proxy pattern is designed for organizations with existing MCP filtering logic tightly coupled to internal libraries or on-premises compliance systems. The serverless proxy approach offers portability across multiple systems and hybrid environments without requiring refactoring into Lambda functions.
Availability
The feature is available now on Amazon Bedrock AgentCore Runtime. AWS has published an open-source reference implementation on GitHub as a foundation for deploying custom MCP proxies. Pricing follows standard AgentCore Runtime compute and CloudWatch observability costs.
What This Means
This release gives organizations running AI agents on AWS infrastructure a standardized way to implement custom protocol-layer controls without vendor lock-in to Lambda-specific implementations. The serverless proxy pattern is particularly relevant for enterprises migrating existing MCP governance systems to AWS or operating hybrid environments where tool access policies must be portable across multiple platforms. By supporting standard Python MCP servers rather than requiring AWS-specific handlers, the approach preserves code reusability while gaining the operational benefits of managed serverless infrastructure.