
Building Connected AI Agents with Amazon Bedrock AgentCore Gateway

by Sebastian Hof

Enhancing AI agents with external tool integration using Amazon Bedrock AgentCore Gateway.

Tags: AI, AI Agents, AWS, MCP

This post was also published on Elevate Tech Blog on Medium


Introduction

In my previous post, I explored how Amazon Bedrock AgentCore simplifies deploying AI agents to production. While having a deployed agent is powerful, its capabilities are limited to what’s built into the agent itself. Today’s most effective AI systems don’t operate in isolation — they connect to external tools, databases, and APIs to access real-time information and perform actions in the world.

In this article, I’ll take our research agent to the next level by integrating it with external tools using Amazon Bedrock AgentCore Gateway. This powerful feature allows your deployed agents to securely connect with external services, dramatically expanding their capabilities while maintaining the security and scalability benefits of AgentCore.

Why Tool Integration is Critical for Production AI Agents

The key difference between a demo and a production AI agent is external system connectivity. Tool integration transforms agents from static responders into dynamic systems that can:

  • Access real-time data beyond training cutoffs
  • Execute actions in external systems and databases
  • Leverage specialized services for search, computation, and processing
  • Reduce hallucinations by grounding responses in authoritative data sources
  • Integrate with existing infrastructure through APIs and microservices

Without tool integration, AI agents remain isolated systems limited to their training data — useful for demos, but insufficient for production workloads that require current information and system interactions.

Understanding AgentCore Gateway Architecture

Amazon Bedrock AgentCore Gateway provides a managed integration layer between your AI agents and external services. It implements the Model Context Protocol (MCP) standard, ensuring consistent communication patterns across different tool integrations.

Core Architecture Components


AgentCore Gateway Architecture

AgentCore Gateway acts as an MCP server, providing a unified access point for AI agents to discover and interact with tools. The architecture consists of these key components:

1. Gateway (MCP Server)

  • Provides a single endpoint for agents to access multiple tools
  • Handles MCP protocol translation to REST APIs and Lambda invocations
  • Manages tool discovery and semantic search capabilities
  • Combines multiple tool sources into one unified interface

2. Gateway Targets

The gateway supports three target types that define how tools are exposed:

  • OpenAPI/Smithy Target: Transforms existing REST APIs into MCP-compatible tools using OpenAPI/Smithy specifications
  • Lambda Target: Connects AWS Lambda functions as tools for custom business logic
  • Integration Target: Pre-configured connectors for common enterprise tools (Tavily, Salesforce, Jira, Slack, ServiceNow, Microsoft 365, etc.) with 15+ available providers

3. Authentication Components

  • AgentCore Gateway Authorizer: OAuth-based inbound authentication (agent identity verification)
  • AgentCore Credential Provider: Outbound authentication for accessing external APIs and services

4. Key Capabilities

  • Security Guard: OAuth authorization management
  • Translation: Protocol conversion between MCP and target APIs
  • Composition: Multiple tools combined into single MCP endpoint
  • Secure Credential Exchange: Automatic credential injection per tool
  • Semantic Tool Selection: Context-aware tool discovery and selection

This architecture enables agents to access diverse external capabilities through a standardized MCP interface while maintaining enterprise-grade security and automatic scaling.
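To make the "unified MCP access point" concrete, here is a minimal sketch of the JSON-RPC messages an MCP client exchanges with the gateway endpoint: one request to discover tools across all targets, and one to invoke a tool using the gateway's `{target-name}___{tool-name}` naming convention. The query string is an arbitrary example.

```python
import json

def mcp_request(method: str, params: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request envelope as used by the MCP protocol."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# Discover the tools exposed by every target behind the gateway
list_tools = mcp_request("tools/list", {})

# Invoke one tool; gateway tool names follow `{target-name}___{tool-name}`
call_tool = mcp_request(
    "tools/call",
    {
        "name": "tavily-integration___TavilySearchPost",
        "arguments": {"query": "latest AWS announcements"},
    },
    request_id=2,
)

print(json.dumps(list_tools))
```

The agent framework normally builds these messages for you; the point is that every target type, whether Lambda, OpenAPI, or integration, looks identical to the agent at this layer.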

Hands-On Implementation: Building a Research Agent with Web Search

Let’s enhance our research agent by integrating Tavily’s search API through AgentCore Gateway.

Prerequisites

  • Deployed agent on AgentCore (See previous tutorial)
  • Agent framework with MCP support (LangChain, LlamaIndex, or StrandsAgents)
  • AWS CLI with appropriate permissions

Step 1: External Service Setup

First, establish access to Tavily’s search API:

  • Account Creation: Register at https://tavily.com/
  • API Key Generation: Navigate to API settings and create a new key

Step 2: Credential Management with Outbound Identity

AgentCore Gateway uses Outbound Identity for secure credential management. This approach provides centralized credential storage with automatic encryption via AWS Secrets Manager.

Console Configuration

  1. Navigate to Amazon Bedrock AgentCore Service
  2. Select Identity → Add OAuth client / API Key → Add API Key
  3. Configure:
  • Name: tavily-api-key
  • API Key: <your-tavily-api-key>
  • Description: Tavily search API integration

AWS CLI Configuration

aws bedrock-agentcore-control create-api-key-credential-provider \  
  --name tavily-api-key \  
  --api-key "your-tavily-api-key" \  
  --description "Tavily search API integration"

Step 3: Gateway Configuration

Create the AgentCore Gateway with proper target configuration:

Console Setup

  1. Navigate to “Gateways” → “Create gateway”
  2. Configure:


Create Gateway Configuration

Basic Configuration

  • Name: open-research-agent-gateway
  • Description: Gateway for open research agent tool integration

Inbound Authentication

  • Schema Configuration: Quick create with Cognito

This creates a Cognito User Pool with OAuth2 client credentials flow. You can also use an existing identity provider, such as Okta, AzureAD, or other OIDC-compliant providers.

Permissions

  • Select Use an IAM service role
  • Select Create and use a new service role

Target Configuration


Target Configuration

  • Target Name: tavily-integration
  • Target Description: Integration with Tavily
  • Target Type: Integrations
  • Integration Provider: Tavily
  • Tool Template: Tavily template
  • Outbound Auth: API Key (select your created credential from Step 2)
  3. Click Create gateway

Step 4: Agent Code Integration

Modify your agent code to support MCP tool calling. For the base code used in these modifications, see:

git clone git@github.com:sebastianhof/open_deep_research.git  
git checkout agent-core

MCP Configuration in src/open_deep_research/configuration.py:

TAVILY_MCP_URL = os.getenv("TAVILY_MCP_URL")
...
class Configuration(BaseModel):
    ...
    search_api: SearchAPI = Field(
        default=SearchAPI.NONE,  # We must deactivate the default search API
    ...
    mcp_config: Optional[MCPConfig] = Field(
        default=MCPConfig(
            url=TAVILY_MCP_URL,
            tools=[
                # Format is `{target-name}___{tool-name}`
                'tavily-integration___TavilySearchExtract',
                'tavily-integration___TavilySearchPost'
            ],
            auth_required=True
        ) if TAVILY_MCP_URL else None,
        optional=True,
        metadata={
            "x_oap_ui_config": {
                "type": "mcp",
                "description": "MCP server configuration"
            }
        }
    )
    mcp_prompt: Optional[str] = Field(
        default="You have access to the Tavily API through an MCP (Model Context Protocol) Server. This allows you to perform web searches and gather real-time information from the internet to support your research tasks. Use the available MCP tools to search for current information, verify facts, and gather comprehensive data for your research.",
        optional=True,
        metadata={
            "x_oap_ui_config": {
                "type": "text",
                "description": "Any additional instructions to pass along to the Agent regarding the MCP tools that are available to it."
            }
        }
    )
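For context, this is roughly how an MCP-capable framework consumes the configuration above: the gateway is reached over streamable HTTP, with the Cognito access token sent as a bearer header on every request. The helper below only assembles that connection spec; the commented lines show how it might be handed to an MCP client such as langchain-mcp-adapters (the package and method names there are assumptions, not verified repo code).

```python
def build_mcp_connection(url: str, access_token: str) -> dict:
    """Assemble a connection spec for the gateway's MCP endpoint.

    The gateway speaks streamable HTTP and expects the Cognito bearer
    token on every request.
    """
    return {
        "tavily": {
            "transport": "streamable_http",
            "url": url,
            "headers": {"Authorization": f"Bearer {access_token}"},
        }
    }

# Placeholder values, not a real gateway or token
conn = build_mcp_connection(
    "https://your-gateway-id.gateway.bedrock-agentcore.us-east-1.amazonaws.com/mcp",
    "eyJ...placeholder-token",
)

# With real credentials this spec could be handed to an MCP client, e.g.:
# from langchain_mcp_adapters.client import MultiServerMCPClient
# tools = await MultiServerMCPClient(conn).get_tools()
```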

Authentication Flow in src/open_deep_research/utils.py

async def fetch_tokens(config: RunnableConfig) -> dict[str, Any]:
    current_tokens = await get_tokens(config)
    if current_tokens:
        return current_tokens

    mcp_tokens = await fetch_cognito_token()
    await set_tokens(config, mcp_tokens)
    return mcp_tokens


async def fetch_cognito_token() -> Optional[Dict[str, Any]]:
    """
    Fetch access token from AWS Cognito using client credentials flow.

    Returns:
        dict: Token response if successful, None otherwise
    """
    user_pool_domain = os.getenv("COGNITO_USER_POOL_DOMAIN")
    client_id = os.getenv("COGNITO_CLIENT_ID")
    client_secret = os.getenv("COGNITO_CLIENT_SECRET")

    if not all([user_pool_domain, client_id, client_secret]):
        return None

    # Construct the Cognito OAuth2 token endpoint using the full domain
    token_url = f"{user_pool_domain.rstrip('/')}/oauth2/token"

    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    data = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret
    }

    try:
        connector = aiohttp.TCPConnector(force_close=True, enable_cleanup_closed=True)
        async with aiohttp.ClientSession(connector=connector) as session:
            async with session.post(token_url, headers=headers, data=data) as response:
                if response.status == 200:
                    token_data = await response.json()
                    if token_data.get("access_token"):
                        return token_data
                    return None
                error_text = await response.text()
                logging.error(f"Failed to fetch Cognito token. Status: {response.status}, Error: {error_text}")
                return None
    except Exception as e:
        logging.error(f"Exception occurred while fetching Cognito token: {e}")
        return None
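Cognito access tokens expire (the token response carries an `expires_in` field), so a cache such as `get_tokens`/`set_tokens` should check freshness before reusing a token. A minimal sketch, assuming the cache stamps each token with a `fetched_at` epoch timestamp when storing it; that field is an addition made at cache time, not part of Cognito's response:

```python
import time

def is_token_valid(token: dict, skew_seconds: int = 60) -> bool:
    """Return True while a cached token is still safely usable.

    `expires_in` comes from the Cognito token response; `fetched_at` is
    an assumed timestamp the cache layer records when storing the token.
    The skew avoids using a token that expires mid-request.
    """
    fetched_at = token.get("fetched_at", 0)
    expires_in = token.get("expires_in", 0)
    return time.time() < fetched_at + expires_in - skew_seconds

fresh = {"access_token": "abc", "expires_in": 3600, "fetched_at": time.time()}
stale = {"access_token": "abc", "expires_in": 3600, "fetched_at": time.time() - 7200}
```

With a check like this, `fetch_tokens` can fall through to `fetch_cognito_token()` whenever the cached token is near expiry instead of returning it blindly.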

Step 5: Production Deployment

Deploy the enhanced agent:

Container Build and Push

export AWS_REGION=<YOUR_AWS_REGION>  
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# ECR authentication  
aws ecr get-login-password --region $AWS_REGION | \  
    docker login --username AWS --password-stdin \  
    $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com



# Build and push  
docker buildx build \  
    --platform linux/arm64 \  
    -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/open-research-agent:latest \  
    --push .

AgentCore Runtime Update

Navigate to your agent runtime and update hosting configuration with environment variables:

TAVILY_MCP_URL=https://your-gateway-id.gateway.bedrock-agentcore.region.amazonaws.com  
COGNITO_USER_POOL_DOMAIN=https://your-domain.auth.region.amazoncognito.com  
COGNITO_CLIENT_ID=your_client_id  
COGNITO_CLIENT_SECRET=your_client_secret  
AWS_REGION=your-region
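A missing variable only surfaces later as a failed token fetch or tool call, so a small startup check lets the container fail fast instead. This is a convenience sketch, not part of the repository:

```python
import os

# The variables the MCP integration and Cognito auth flow rely on
REQUIRED_ENV = [
    "TAVILY_MCP_URL",
    "COGNITO_USER_POOL_DOMAIN",
    "COGNITO_CLIENT_ID",
    "COGNITO_CLIENT_SECRET",
    "AWS_REGION",
]

def missing_env(required=REQUIRED_ENV) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

problems = missing_env()
# In real startup code: log `problems` and exit non-zero if the list is non-empty
```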

Step 6: Testing and Validation

Test your deployed agent through the Bedrock AgentCore console:

  1. Navigate to Test → Agent sandbox
  2. Select your Runtime agent and Endpoint
  3. Provide an input prompt:
{  
  "prompt": "<your-research-question>"  
}
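The sandbox sends exactly this JSON document as the invocation payload. If you prefer to test outside the console, the same payload can be built and sent programmatically; the boto3 client and method names in the commented section are assumptions to verify against the current SDK documentation.

```python
import json

def build_payload(prompt: str) -> bytes:
    """Serialize the sandbox-style input for a runtime invocation."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

payload = build_payload("What are the latest developments in quantum computing?")

# Sending it programmatically (requires AWS credentials; names are unverified):
# import boto3
# client = boto3.client("bedrock-agentcore")
# response = client.invoke_agent_runtime(
#     agentRuntimeArn="arn:aws:bedrock-agentcore:<region>:<account>:runtime/<id>",
#     payload=payload,
# )
```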


Each search operation typically consumes ~20 Tavily credits.

Extending Your Gateway: Adding More Targets and Tools

Once you have a working gateway with Tavily integration, you can expand your agent’s capabilities by adding additional targets. AgentCore Gateway supports multiple target types, allowing you to create comprehensive tool ecosystems for your AI agents.

Lambda Targets for Custom Business Logic

Lambda targets enable you to implement custom tools with your own business logic. This is ideal for:

  • Custom Data Processing: Transform or analyze data using your specific algorithms
  • Internal System Integration: Connect to proprietary databases or legacy systems
  • Complex Workflows: Orchestrate multi-step processes that require custom logic
  • Security-sensitive Operations: Keep sensitive logic within your AWS environment

Implementation Details: See the Lambda Target Documentation for complete setup instructions, permission requirements, and tool schema specifications.
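To illustrate the shape of such a tool, here is a minimal handler sketch. It assumes, per the Lambda Target Documentation, that the gateway passes the invoked tool name through the Lambda client context (verify the exact context key there); the `get_order_status` tool and its response are hypothetical.

```python
from types import SimpleNamespace

def lambda_handler(event, context):
    """Route a gateway tool invocation to custom business logic."""
    # The gateway is assumed to pass the invoked tool name in the Lambda
    # client context; check the Lambda target docs for the exact key.
    tool_name = context.client_context.custom.get("bedrockAgentCoreToolName", "")
    # Gateway tool names arrive as `{target-name}___{tool-name}`
    short_name = tool_name.split("___")[-1]

    if short_name == "get_order_status":  # hypothetical example tool
        return {"status": "shipped", "order_id": event.get("order_id")}
    return {"error": f"unknown tool: {short_name}"}

# Local smoke test with a faked Lambda context
fake_ctx = SimpleNamespace(
    client_context=SimpleNamespace(
        custom={"bedrockAgentCoreToolName": "orders-target___get_order_status"}
    )
)
result = lambda_handler({"order_id": "1234"}, fake_ctx)
```

One handler can serve several tools this way, dispatching on the tool name, which keeps related business logic in a single function.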

OpenAPI Targets for REST API Integration

OpenAPI targets allow you to connect to existing REST APIs by providing an OpenAPI 3.0 specification. This is perfect for:

  • Third-party SaaS APIs: Connect to external services with documented APIs
  • Internal Microservices: Integrate with your existing API infrastructure
  • Partner APIs: Access partner systems through standardized interfaces
  • Public Data Sources: Connect to open data APIs and services

Implementation Details: See the OpenAPI Target Documentation for supported features, limitations, and configuration requirements.
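As a reference point, here is a minimal OpenAPI 3.0 document of the kind such a target consumes, written as a Python dict for brevity. The weather service is hypothetical; the gateway derives one tool per operation, and the `operationId` is assumed here to drive the tool name (check the OpenAPI Target Documentation for the exact mapping).

```python
import json

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],  # hypothetical service
    "paths": {
        "/weather": {
            "get": {
                # One tool per operation; operationId assumed to name it
                "operationId": "getCurrentWeather",
                "parameters": [{
                    "name": "city", "in": "query", "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Current conditions"}},
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```

Parameter names, types, and descriptions in the spec become the tool's input schema, so the more precise the spec, the better the agent's tool calls.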

Bring Your Own MCP Server (Future Enhancement)

While AgentCore Gateway provides three powerful target types, the broader MCP ecosystem offers numerous community-built servers that could significantly expand your tool options. These servers, typically deployed and run using Docker, uvx, or npx, include specialized connectors for databases, file systems, Git repositories, and custom APIs.

Currently, AgentCore Gateway cannot directly integrate with these community MCP servers. However, a potential workaround involves implementing MCP server functionality within Lambda functions. This approach would require translating MCP server protocols to Lambda-compatible interfaces, handling MCP tool discovery and invocation patterns, and managing state and connections within the Lambda execution model.

This capability would unlock the broader MCP community ecosystem for AgentCore Gateway users, moving beyond the current three target types to leverage hundreds of specialized tools. The Lambda-based workaround approach needs further investigation to determine feasibility and implementation patterns.

Conclusion

Amazon Bedrock AgentCore Gateway transforms AI agents from isolated systems into connected, capable platforms. By implementing the patterns and practices outlined in this article, you can build production-ready agents that leverage external tools while maintaining security, performance, and reliability.

The key architectural benefits include:

  • Standardized Integration: MCP protocol ensures consistent tool interfaces
  • Managed Security: Automated credential management and rotation
  • Scalable Architecture: Built-in connection pooling and rate limiting

As AI agents become more sophisticated, the ability to integrate with external systems will be crucial for delivering real value in production environments.

What’s Next: Adding Memory to Your Connected Agents

In my next blog post, I’ll explore how to give your tool-enabled agents persistent memory capabilities using Amazon Bedrock AgentCore Memory. I’ll demonstrate how to enhance our research agent with long-term memory, enabling it to remember previous conversations, learn from interactions, and maintain context across sessions.

This memory integration will transform our connected research agent into an intelligent system that can:

  • Remember previous research topics and findings
  • Build knowledge over time from user interactions
  • Maintain conversation context across multiple sessions
  • Provide personalized responses based on historical interactions