The Model Context Protocol has emerged as the foundational infrastructure for building autonomous AI agents at enterprise scale. In the year since its November 2024 release, MCP has evolved from an internal Anthropic tool to a Linux Foundation-governed standard adopted by OpenAI, Google DeepMind, Microsoft, and thousands of development teams worldwide. This transformation reflects a fundamental truth: autonomous agents require standardized integration patterns to connect with enterprise systems, and MCP provides exactly that infrastructure. Building production-ready agents with MCP demands understanding not just the protocol mechanics but the architectural patterns, security requirements, and implementation strategies that separate experimental deployments from scalable enterprise systems.
This article provides comprehensive technical guidance for implementing autonomous AI agents using MCP. We will cover the protocol architecture, building custom MCP servers in multiple languages, implementing OAuth 2.1 security patterns, handling long-running tasks, and establishing the infrastructure patterns required for production deployment. The code examples progress from foundational concepts through production-ready implementations, with working examples in Python, Node.js, and C#.
Understanding MCP Architecture
The Model Context Protocol solves the M×N integration problem that plagued earlier AI implementations. Before MCP, connecting M applications to N data sources required M×N custom integrations. Each AI system needed its own connector for every data source, tool, or service it accessed. This approach does not scale. MCP collapses that complexity into M+N implementations by providing a universal interface where AI systems connect to MCP servers rather than directly to data sources.
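The arithmetic behind the M×N claim is easy to sanity-check; a toy calculation (the counts here are illustrative, not drawn from any survey):

```python
# Illustrative integration counts for the M x N problem (numbers are hypothetical)
apps, sources = 12, 8

point_to_point = apps * sources  # every application ships its own connector per source
with_mcp = apps + sources        # one MCP client per app, one MCP server per source

print(point_to_point)  # 96 custom integrations without MCP
print(with_mcp)        # 20 protocol implementations with MCP
```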
The architecture consists of three core components working together. MCP clients are the AI applications or agent frameworks that need access to external capabilities. These include systems like Claude Desktop, ChatGPT, Cursor, or custom agent implementations. The MCP client initiates connections to servers and translates agent requests into protocol messages. MCP servers expose specific capabilities like accessing databases, calling APIs, executing commands, or retrieving documents. Servers implement the protocol specification to provide standardized interfaces to underlying systems. The transport layer handles communication between clients and servers, supporting both local STDIO transport for development and HTTP/SSE transport for remote deployments.
graph TB
subgraph "AI Application Layer"
A[AI Agent/LLM]
B[MCP Client]
end
subgraph "Protocol Layer"
C[Transport STDIO/HTTP]
D[JSON-RPC Messages]
end
subgraph "MCP Server Layer"
E[MCP Server 1]
F[MCP Server 2]
G[MCP Server N]
end
subgraph "Data Source Layer"
H[Database]
I[REST APIs]
J[File Systems]
K[Enterprise Systems]
end
A -->|Requests| B
B -->|Protocol Messages| C
C -->|JSON-RPC| D
D -->|Tools/Resources| E
D -->|Tools/Resources| F
D -->|Tools/Resources| G
E -->|Query/Execute| H
F -->|HTTP Calls| I
G -->|Read/Write| J
E -->|Integration| K
style A fill:#e1f5ff
style B fill:#c3f0ca
style D fill:#fff4c3
style E fill:#ffd4c3
style F fill:#ffd4c3
style G fill:#ffd4c3

The protocol communication model uses JSON-RPC 2.0 for message exchange. Clients send requests to servers specifying the method and parameters. Servers respond with results or errors. This stateless request-response pattern simplifies implementation and enables servers to scale horizontally. Messages travel through transports that abstract the underlying communication mechanism, allowing the same server code to work locally via STDIO or remotely via HTTP without modification.
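As a concrete illustration, a tools/call exchange can be mocked up as plain JSON-RPC 2.0 messages (the tool name and payload here are invented for the example):

```python
import json

# A client -> server request invoking a tool (illustrative payload)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"query": "SELECT 1"},
    },
}

# A server -> client success response carrying the tool result;
# the response must echo the request id so the client can correlate them
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Query returned 1 rows"}]},
}

wire = json.dumps(request)  # what actually travels over the transport
assert json.loads(wire)["method"] == "tools/call"
```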
MCP defines three primary capability types that servers expose. Tools are executable functions that agents can invoke with parameters, similar to API endpoints. For example, a database tool might accept SQL queries and return result sets. Resources are data sources that agents can read, such as files, documents, or structured data. Resources support listing, reading, and optionally subscribing to changes. Prompts are reusable templates that guide agent behavior for specific tasks. Servers can expose prompts that clients use to structure agent requests consistently.
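Sketched as data, the three capability types differ mainly in shape (these particular definitions are invented for illustration, not taken from a real server):

```python
tool = {
    "name": "query_database",   # executable: invoked with validated arguments
    "description": "Run a read-only SQL query",
    "inputSchema": {"type": "object", "properties": {"query": {"type": "string"}}},
}

resource = {
    "uri": "file:///reports/q3.md",  # readable: addressed by URI, supports listing
    "name": "Q3 report",
    "mimeType": "text/markdown",
}

prompt = {
    "name": "summarize_report",  # reusable template guiding agent behavior
    "description": "Summarize a report for an executive audience",
    "arguments": [{"name": "report_uri", "required": True}],
}

assert {"name", "inputSchema"} <= tool.keys()
```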
Building Your First MCP Server in Python
Python provides the most accessible entry point for MCP server development. The FastMCP framework, now part of the official MCP Python SDK, uses Python type hints and docstrings to automatically generate tool definitions, making it straightforward to expose existing functions as agent capabilities. The example below instead uses the SDK's lower-level Server API, which makes the protocol mechanics explicit; the same server could be expressed more compactly with FastMCP decorators.
The following implementation creates an MCP server that provides database query capabilities. This pattern applies to virtually any backend system where you want agents to have controlled access.
# database_mcp_server.py
import asyncio
import logging
from typing import Any, Optional
from mcp.server import Server
from mcp.server.stdio import stdio_server
import sqlite3
from datetime import datetime
# Configure logging to stderr (stdout would corrupt JSON-RPC)
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[logging.StreamHandler()]
)
logger = logging.getLogger(__name__)
class DatabaseMCPServer:
"""MCP Server providing controlled database access for AI agents."""
def __init__(self, db_path: str):
self.db_path = db_path
self.server = Server("database-mcp")
self._register_tools()
def _register_tools(self):
"""Register available tools with the MCP server."""
@self.server.list_tools()
async def list_tools():
return [
{
"name": "query_database",
"description": "Execute a read-only SQL query against the database",
"inputSchema": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The SQL SELECT query to execute"
},
"parameters": {
"type": "array",
"items": {"type": "string"},
"description": "Optional parameters for parameterized queries"
}
},
"required": ["query"]
}
},
{
"name": "list_tables",
"description": "List all tables in the database",
"inputSchema": {
"type": "object",
"properties": {}
}
}
]
@self.server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[Any]:
"""Execute the requested tool."""
try:
if name == "query_database":
return await self._query_database(
arguments.get("query"),
arguments.get("parameters", [])
)
elif name == "list_tables":
return await self._list_tables()
else:
raise ValueError(f"Unknown tool: {name}")
except Exception as e:
logger.error(f"Tool execution error: {e}")
raise
async def _query_database(
self,
query: str,
parameters: list = None
) -> list[dict]:
"""Execute a read-only database query with safety checks."""
# Security: Only allow SELECT statements
if not query.strip().upper().startswith("SELECT"):
raise ValueError("Only SELECT queries are allowed")
# Additional security: Block dangerous patterns
dangerous_patterns = ["DROP", "DELETE", "INSERT", "UPDATE", "ALTER"]
query_upper = query.upper()
for pattern in dangerous_patterns:
if pattern in query_upper:
raise ValueError(f"Query contains forbidden keyword: {pattern}")
try:
conn = sqlite3.connect(self.db_path)
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
if parameters:
cursor.execute(query, parameters)
else:
cursor.execute(query)
results = [dict(row) for row in cursor.fetchall()]
conn.close()
logger.info(f"Query executed successfully, returned {len(results)} rows")
return [{
"type": "text",
"text": f"Query returned {len(results)} rows:\n{results}"
}]
except sqlite3.Error as e:
logger.error(f"Database error: {e}")
raise ValueError(f"Database query failed: {str(e)}")
async def _list_tables(self) -> list[dict]:
"""List all tables in the database."""
try:
conn = sqlite3.connect(self.db_path)
cursor = conn.cursor()
cursor.execute(
"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)
tables = [row[0] for row in cursor.fetchall()]
conn.close()
return [{
"type": "text",
"text": f"Available tables: {', '.join(tables)}"
}]
except sqlite3.Error as e:
logger.error(f"Error listing tables: {e}")
raise ValueError(f"Failed to list tables: {str(e)}")
async def run(self):
"""Start the MCP server using STDIO transport."""
async with stdio_server() as (read_stream, write_stream):
logger.info("Database MCP Server starting...")
await self.server.run(
read_stream,
write_stream,
self.server.create_initialization_options()
)
async def main():
"""Entry point for the MCP server."""
# In production, use environment variable or config file for db_path
server = DatabaseMCPServer("./enterprise_data.db")
await server.run()
if __name__ == "__main__":
asyncio.run(main())

This implementation demonstrates several production-ready patterns. Security is enforced through query validation that restricts operations to read-only SELECT statements and blocks dangerous SQL keywords. Note that the substring-based blocklist is deliberately conservative: a harmless query referencing a column such as updated_at would be rejected, so production deployments typically pair it with (or replace it by) a read-only database credential. Error handling provides clear feedback when queries fail while logging issues for debugging. The async architecture ensures the server can handle multiple concurrent requests without blocking. Logging writes to stderr rather than stdout to avoid corrupting JSON-RPC messages, a critical detail for STDIO transport.
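The read-only guard in _query_database can be exercised in isolation; a standalone restatement of the same checks (the function name is ours, not part of the server):

```python
def is_readonly_query(query: str) -> bool:
    """Mirror the server's validation: SELECT-only, no mutating keywords."""
    q = query.strip().upper()
    if not q.startswith("SELECT"):
        return False
    # Substring matching, exactly as in the server's blocklist
    return not any(kw in q for kw in ("DROP", "DELETE", "INSERT", "UPDATE", "ALTER"))

assert is_readonly_query("SELECT name FROM users")
assert not is_readonly_query("DELETE FROM users")
# Substring matching is conservative: a benign column name can trip it.
assert not is_readonly_query("SELECT updated_at FROM users")
```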
To deploy this server, create a requirements.txt file with the necessary dependencies and configure your MCP client to connect to it.
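A Claude Desktop style client, for instance, might declare the server like this (the key names follow the common claude_desktop_config.json layout, the paths are placeholders, and the exact schema depends on your client):

```json
{
  "mcpServers": {
    "database-mcp": {
      "command": "/path/to/venv/bin/python",
      "args": ["/path/to/database_mcp_server.py"]
    }
  }
}
```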
# requirements.txt
mcp>=1.2.0
# Install dependencies
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# Run the server
python database_mcp_server.py

Building MCP Servers in Node.js
Node.js provides excellent performance characteristics for MCP servers, particularly when integrating with existing JavaScript ecosystems or building servers that require high concurrency. The official TypeScript SDK offers comprehensive type safety and excellent developer experience. Let us implement an MCP server that provides access to REST APIs, a common enterprise integration pattern.
// api-mcp-server.js
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
ListToolsRequestSchema,
CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";
import axios from "axios";
/**
* MCP Server for accessing enterprise REST APIs
* Provides controlled API access with rate limiting and error handling
*/
class ApiMCPServer {
constructor(config) {
this.config = config;
this.server = new Server(
{
name: "api-mcp-server",
version: "1.0.0"
},
{
capabilities: {
tools: {}
}
}
);
this.setupHandlers();
this.requestCache = new Map();
this.rateLimiter = new Map();
}
setupHandlers() {
// List available tools (the SDK dispatches on request schemas, not method strings)
this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
{
name: "api_get",
description: "Make a GET request to an allowed API endpoint",
inputSchema: {
type: "object",
properties: {
endpoint: {
type: "string",
description: "The API endpoint path"
},
params: {
type: "object",
description: "Query parameters for the request"
},
useCache: {
type: "boolean",
description: "Whether to use cached response if available",
default: true
}
},
required: ["endpoint"]
}
},
{
name: "api_post",
description: "Make a POST request to an allowed API endpoint",
inputSchema: {
type: "object",
properties: {
endpoint: {
type: "string",
description: "The API endpoint path"
},
body: {
type: "object",
description: "Request body data"
}
},
required: ["endpoint", "body"]
}
}
]
}));
// Handle tool execution
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
try {
switch (name) {
case "api_get":
return await this.handleApiGet(args);
case "api_post":
return await this.handleApiPost(args);
default:
throw new Error(`Unknown tool: ${name}`);
}
} catch (error) {
console.error(`Tool execution error: ${error.message}`);
return {
content: [{
type: "text",
text: `Error: ${error.message}`
}],
isError: true
};
}
});
}
async handleApiGet(args) {
const { endpoint, params = {}, useCache = true } = args;
// Security: Validate endpoint against allowlist
if (!this.isAllowedEndpoint(endpoint)) {
throw new Error(`Endpoint not allowed: ${endpoint}`);
}
// Rate limiting
await this.checkRateLimit(endpoint);
// Check cache
const cacheKey = `GET:${endpoint}:${JSON.stringify(params)}`;
if (useCache && this.requestCache.has(cacheKey)) {
const cached = this.requestCache.get(cacheKey);
if (Date.now() - cached.timestamp < this.config.cacheTimeout) {
console.error(`Cache hit for ${endpoint}`);
return {
content: [{
type: "text",
text: JSON.stringify(cached.data, null, 2)
}]
};
}
}
// Make API request
try {
const response = await axios.get(
`${this.config.baseUrl}${endpoint}`,
{
params,
headers: {
"Authorization": `Bearer ${this.config.apiKey}`,
"Content-Type": "application/json"
},
timeout: this.config.requestTimeout || 10000
}
);
// Cache successful response
this.requestCache.set(cacheKey, {
data: response.data,
timestamp: Date.now()
});
return {
content: [{
type: "text",
text: JSON.stringify(response.data, null, 2)
}]
};
} catch (error) {
if (error.response) {
throw new Error(
`API error (${error.response.status}): ${error.response.statusText}`
);
} else if (error.request) {
throw new Error("No response received from API");
} else {
throw new Error(`Request setup error: ${error.message}`);
}
}
}
async handleApiPost(args) {
const { endpoint, body } = args;
// Security: Validate endpoint
if (!this.isAllowedEndpoint(endpoint)) {
throw new Error(`Endpoint not allowed: ${endpoint}`);
}
// Rate limiting
await this.checkRateLimit(endpoint);
try {
const response = await axios.post(
`${this.config.baseUrl}${endpoint}`,
body,
{
headers: {
"Authorization": `Bearer ${this.config.apiKey}`,
"Content-Type": "application/json"
},
timeout: this.config.requestTimeout || 10000
}
);
return {
content: [{
type: "text",
text: JSON.stringify(response.data, null, 2)
}]
};
} catch (error) {
if (error.response) {
throw new Error(
`API error (${error.response.status}): ${error.response.statusText}`
);
}
throw new Error(`Request failed: ${error.message}`);
}
}
isAllowedEndpoint(endpoint) {
// Implement your endpoint allowlist logic
const allowedPatterns = this.config.allowedEndpoints || [];
return allowedPatterns.some(pattern => {
const regex = new RegExp(pattern);
return regex.test(endpoint);
});
}
async checkRateLimit(endpoint) {
const key = endpoint;
const now = Date.now();
const windowMs = 60000; // 1 minute
const maxRequests = this.config.maxRequestsPerMinute || 60;
if (!this.rateLimiter.has(key)) {
this.rateLimiter.set(key, []);
}
const timestamps = this.rateLimiter.get(key);
// Remove timestamps outside current window
const recentTimestamps = timestamps.filter(t => now - t < windowMs);
if (recentTimestamps.length >= maxRequests) {
throw new Error(`Rate limit exceeded for ${endpoint}`);
}
recentTimestamps.push(now);
this.rateLimiter.set(key, recentTimestamps);
}
async run() {
const transport = new StdioServerTransport();
await this.server.connect(transport);
console.error("API MCP Server running on stdio");
}
}
// Configuration
const config = {
baseUrl: process.env.API_BASE_URL || "https://api.example.com",
apiKey: process.env.API_KEY,
allowedEndpoints: [
"/api/v1/users/.*",
"/api/v1/products/.*",
"/api/v1/orders/.*"
],
maxRequestsPerMinute: 60,
cacheTimeout: 300000, // 5 minutes
requestTimeout: 10000
};
// Start server
const server = new ApiMCPServer(config);
server.run().catch(console.error);

This Node.js implementation demonstrates enterprise patterns including endpoint allowlisting for security, rate limiting to prevent API abuse, response caching to improve performance and reduce external API calls, comprehensive error handling with specific error messages, and timeout management to prevent hanging requests. The configuration-driven approach allows deployment across environments without code changes.
Implementing MCP Servers in C# for .NET Environments
Organizations with existing .NET infrastructure benefit from implementing MCP servers in C# to leverage familiar frameworks and tooling. While the official C# SDK is still maturing, building production-ready MCP servers using ASP.NET Core provides enterprise-grade capabilities with excellent performance characteristics. Let us implement a file system integration server that demonstrates key patterns.
// FileSystemMcpServer.cs
using Microsoft.AspNetCore.Mvc;
using System.Text.Json;
using System.Text.Json.Serialization;
namespace EnterpriseFileSystem.Mcp
{
public class McpRequest
{
[JsonPropertyName("jsonrpc")]
public string JsonRpc { get; set; } = "2.0";
[JsonPropertyName("id")]
public string? Id { get; set; }
[JsonPropertyName("method")]
public string Method { get; set; } = "";
[JsonPropertyName("params")]
public JsonElement? Params { get; set; }
}
public class McpResponse
{
[JsonPropertyName("jsonrpc")]
public string JsonRpc { get; set; } = "2.0";
[JsonPropertyName("id")]
public string? Id { get; set; }
[JsonPropertyName("result")]
public object? Result { get; set; }
[JsonPropertyName("error")]
public McpError? Error { get; set; }
}
public class McpError
{
[JsonPropertyName("code")]
public int Code { get; set; }
[JsonPropertyName("message")]
public string Message { get; set; } = "";
}
[ApiController]
[Route("mcp")]
public class FileSystemMcpController : ControllerBase
{
private readonly ILogger<FileSystemMcpController> _logger;
private readonly string _basePath;
private readonly HashSet<string> _allowedExtensions;
public FileSystemMcpController(
ILogger<FileSystemMcpController> logger,
IConfiguration configuration)
{
_logger = logger;
_basePath = configuration["FileSystem:BasePath"] ??
throw new InvalidOperationException("BasePath not configured");
_allowedExtensions = configuration
.GetSection("FileSystem:AllowedExtensions")
.Get<HashSet<string>>() ?? new HashSet<string>();
}
[HttpPost]
public async Task<IActionResult> HandleRequest([FromBody] McpRequest request)
{
try
{
_logger.LogInformation($"Received MCP request: {request.Method}");
var response = request.Method switch
{
"tools/list" => HandleListTools(request),
"tools/call" => await HandleCallTool(request),
"initialize" => HandleInitialize(request),
_ => CreateErrorResponse(
request.Id,
-32601,
$"Method not found: {request.Method}")
};
return Ok(response);
}
catch (Exception ex)
{
_logger.LogError(ex, "Error handling MCP request");
return Ok(CreateErrorResponse(
request.Id,
-32603,
"Internal error"));
}
}
private McpResponse HandleInitialize(McpRequest request)
{
return new McpResponse
{
Id = request.Id,
Result = new
{
protocolVersion = "2025-11-25",
serverInfo = new
{
name = "filesystem-mcp",
version = "1.0.0"
},
capabilities = new
{
tools = new { }
}
}
};
}
private McpResponse HandleListTools(McpRequest request)
{
var tools = new[]
{
new
{
name = "read_file",
description = "Read contents of a file from the allowed directory",
inputSchema = new
{
type = "object",
properties = new
{
filename = new
{
type = "string",
description = "Name of the file to read"
}
},
required = new[] { "filename" }
}
},
new
{
name = "list_files",
description = "List all files in the allowed directory",
inputSchema = new
{
type = "object",
properties = new
{
pattern = new
{
type = "string",
description = "Optional file pattern to filter results"
}
}
}
},
new
{
name = "search_content",
description = "Search for text content within files",
inputSchema = new
{
type = "object",
properties = new
{
searchTerm = new
{
type = "string",
description = "Text to search for"
},
filePattern = new
{
type = "string",
description = "Optional file pattern to limit search"
}
},
required = new[] { "searchTerm" }
}
}
};
return new McpResponse
{
Id = request.Id,
Result = new { tools }
};
}
private async Task<McpResponse> HandleCallTool(McpRequest request)
{
if (!request.Params.HasValue)
{
return CreateErrorResponse(request.Id, -32602, "Invalid params");
}
var toolName = request.Params.Value.GetProperty("name").GetString();
var args = request.Params.Value.GetProperty("arguments");
try
{
var result = toolName switch
{
"read_file" => await ReadFile(args),
"list_files" => await ListFiles(args),
"search_content" => await SearchContent(args),
_ => throw new InvalidOperationException($"Unknown tool: {toolName}")
};
return new McpResponse
{
Id = request.Id,
Result = result
};
}
catch (Exception ex)
{
_logger.LogError(ex, $"Error executing tool: {toolName}");
return CreateErrorResponse(
request.Id,
-32000,
$"Tool execution failed: {ex.Message}");
}
}
private async Task<object> ReadFile(JsonElement args)
{
var filename = args.GetProperty("filename").GetString()
?? throw new ArgumentException("filename is required");
// Security: resolve the path and reject anything escaping the base directory
var fullPath = Path.GetFullPath(Path.Combine(_basePath, filename));
if (!fullPath.StartsWith(Path.GetFullPath(_basePath), StringComparison.Ordinal))
{
throw new UnauthorizedAccessException("Path traversal detected");
}
// Security: enforce the extension allowlist
var extension = Path.GetExtension(fullPath);
if (_allowedExtensions.Count > 0 && !_allowedExtensions.Contains(extension))
{
throw new UnauthorizedAccessException($"File type not allowed: {extension}");
}
var text = await System.IO.File.ReadAllTextAsync(fullPath);
return new { content = new[] { new { type = "text", text } } };
}
private async Task<object> ListFiles(JsonElement args)
{
var pattern = args.TryGetProperty("pattern", out var p)
? p.GetString() ?? "*" : "*";
var files = Directory.GetFiles(_basePath, pattern)
.Select(Path.GetFileName)
.ToArray();
return await Task.FromResult<object>(
new { content = new[] { new { type = "text", text = string.Join("\n", files) } } });
}
private async Task<object> SearchContent(JsonElement args)
{
var searchTerm = args.GetProperty("searchTerm").GetString()
?? throw new ArgumentException("searchTerm is required");
var filePattern = args.TryGetProperty("filePattern", out var fp)
? fp.GetString() ?? "*" : "*";
var matches = new List<string>();
foreach (var file in Directory.GetFiles(_basePath, filePattern))
{
var contents = await System.IO.File.ReadAllTextAsync(file);
if (contents.Contains(searchTerm, StringComparison.OrdinalIgnoreCase))
{
matches.Add(Path.GetFileName(file));
}
}
return new { content = new[] { new { type = "text", text = string.Join("\n", matches) } } };
}
private McpResponse CreateErrorResponse(string? id, int code, string message)
{
return new McpResponse
{
Id = id,
Error = new McpError { Code = code, Message = message }
};
}
}
}

The C# implementation provides enterprise features including path traversal prevention to block access outside allowed directories, file type filtering through extension allowlists, comprehensive logging for audit trails, structured error handling with JSON-RPC error codes, and configuration-driven security policies. The ASP.NET Core foundation enables deployment to Azure App Service, Kubernetes, or on-premises IIS with minimal configuration changes.
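The containment check at the heart of path traversal prevention translates directly to other languages; a Python equivalent (the helper name is ours, not part of any SDK):

```python
import os

def resolve_inside(base_dir: str, user_path: str) -> str:
    """Resolve user_path against base_dir, rejecting escapes like '../'."""
    base = os.path.realpath(base_dir)
    full = os.path.realpath(os.path.join(base, user_path))
    # commonpath is robust against prefix tricks like /data vs /data-evil
    if os.path.commonpath([base, full]) != base:
        raise PermissionError(f"Path escapes base directory: {user_path}")
    return full

assert resolve_inside("/tmp", "notes.txt").endswith("/notes.txt")
try:
    resolve_inside("/tmp", "../etc/passwd")
    raise AssertionError("expected PermissionError")
except PermissionError:
    pass
```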
Implementing OAuth 2.1 Security for Production MCP Servers
The June 2025 and November 2025 MCP specification updates established OAuth 2.1 as the standard authorization mechanism for production deployments. The architecture treats MCP servers as OAuth resource servers that validate tokens issued by dedicated authorization servers. This separation aligns with enterprise security architectures where identity management centralizes in systems like Okta, Auth0, or Azure AD.
The November 2025 specification introduced two critical enterprise features. Client ID Metadata Documents (CIMD) eliminate the need for dynamic client registration by allowing clients to describe themselves via URLs they control. Enterprise-Managed Authorization through Cross App Access (XAA) enables enterprise Identity Providers to maintain visibility and control over MCP client-server connections, addressing the shadow IT concerns that previously plagued MCP deployments.
sequenceDiagram
participant Client as MCP Client
participant Server as MCP Server
participant AS as Authorization Server
participant IdP as Enterprise IdP
Client->>Server: Connect to Server
Server->>Client: 401 + Protected Resource Metadata
Client->>AS: Discover OAuth Endpoints
AS->>Client: OAuth Configuration
Client->>IdP: Authorization Request
IdP->>Client: Authorization Code
Client->>AS: Token Request + PKCE + Resource Indicator
AS->>Client: Access Token (scoped to MCP Server)
Client->>Server: MCP Request + Access Token
Server->>Server: Validate Token (issuer, signature, audience, exp)
Server->>Client: MCP Response
Note over Client,Server: All subsequent requests use validated token
Implementing OAuth protection requires several components working together. The Protected Resource Metadata document informs clients where to find the authorization server. Token validation ensures incoming requests carry valid, unexpired tokens with appropriate audience claims. Resource Indicators prevent token misuse by ensuring tokens issued for one MCP server cannot be used with another. Let us implement OAuth protection in Python using industry-standard libraries.
# oauth_protected_mcp_server.py
from fastapi import FastAPI, Header, HTTPException, Depends
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from jose import jwt, JWTError
from typing import Optional, Dict, Any
import httpx
from functools import lru_cache
import logging
logger = logging.getLogger(__name__)
class OAuthMCPServer:
"""MCP Server with OAuth 2.1 protection."""
def __init__(self, config: Dict[str, Any]):
self.config = config
self.app = FastAPI()
self.security = HTTPBearer()
self._setup_routes()
def _setup_routes(self):
"""Configure FastAPI routes for MCP protocol."""
@self.app.get("/.well-known/oauth-protected-resource-metadata")
async def protected_resource_metadata():
"""RFC 9728 - Protected Resource Metadata."""
return {
"resource": self.config["resource_url"],
"authorization_servers": [
self.config["authorization_server_url"]
],
"bearer_methods_supported": ["header"],
"resource_documentation": self.config.get("documentation_url"),
"resource_signing_alg_values_supported": ["RS256", "ES256"]
}
@self.app.post("/mcp")
async def mcp_endpoint(
request: Dict[str, Any],
token_data: Dict = Depends(self.validate_token)
):
"""Main MCP endpoint with token validation."""
try:
method = request.get("method")
# Check if user has required scope for this method
required_scope = self._get_required_scope(method)
if not self._has_scope(token_data, required_scope):
raise HTTPException(
status_code=403,
detail=f"Insufficient scope: {required_scope} required"
)
# Process MCP request
return await self._handle_mcp_request(request, token_data)
except Exception as e:
logger.error(f"MCP request error: {e}")
raise HTTPException(status_code=500, detail=str(e))
async def validate_token(
self,
credentials: HTTPAuthorizationCredentials = Depends(HTTPBearer())
) -> Dict[str, Any]:
"""Validate OAuth access token."""
token = credentials.credentials
try:
# Get JWKS from authorization server (cached)
jwks = await self._get_jwks()
# Decode and validate token
payload = jwt.decode(
token,
jwks,
algorithms=["RS256", "ES256"],
audience=self.config["resource_url"],
issuer=self.config["authorization_server_url"],
options={
"verify_signature": True,
"verify_aud": True,
"verify_iat": True,
"verify_exp": True,
"verify_nbf": True,
}
)
# Additional validation: Resource Indicator (RFC 8707)
if "aud" not in payload:
raise HTTPException(
status_code=401,
detail="Token missing audience claim"
)
# Validate audience matches this resource
audiences = payload["aud"] if isinstance(payload["aud"], list) else [payload["aud"]]
if self.config["resource_url"] not in audiences:
raise HTTPException(
status_code=401,
detail="Token audience mismatch"
)
logger.info(f"Token validated for user: {payload.get('sub')}")
return payload
except JWTError as e:
logger.error(f"Token validation failed: {e}")
raise HTTPException(
status_code=401,
detail="Invalid token",
headers={"WWW-Authenticate": "Bearer"}
)
async def _get_jwks(self) -> Dict:
"""Fetch and cache JWKS from the authorization server."""
# Note: functools.lru_cache does not work on async methods (it would cache
# the coroutine object, which can only be awaited once), so the JWKS
# document is cached manually on the instance instead.
if getattr(self, "_jwks_cache", None) is not None:
return self._jwks_cache
async with httpx.AsyncClient() as client:
# Get OAuth discovery document
discovery_url = f"{self.config['authorization_server_url']}/.well-known/oauth-authorization-server"
discovery_response = await client.get(discovery_url)
discovery_response.raise_for_status()
discovery = discovery_response.json()
# Get JWKS
jwks_url = discovery["jwks_uri"]
jwks_response = await client.get(jwks_url)
jwks_response.raise_for_status()
self._jwks_cache = jwks_response.json()
return self._jwks_cache
def _get_required_scope(self, method: str) -> str:
"""Map MCP method to required OAuth scope."""
scope_mapping = {
"tools/list": "mcp:read",
"tools/call": "mcp:execute",
"resources/list": "mcp:read",
"resources/read": "mcp:read"
}
return scope_mapping.get(method, "mcp:read")
def _has_scope(self, token_data: Dict, required_scope: str) -> bool:
"""Check if token has required scope."""
scopes = token_data.get("scope", "").split()
return required_scope in scopes or "mcp:admin" in scopes
async def _handle_mcp_request(
self,
request: Dict[str, Any],
token_data: Dict[str, Any]
) -> Dict[str, Any]:
"""Handle validated MCP request."""
method = request.get("method")
# Add user context to request processing
user_id = token_data.get("sub")
logger.info(f"Processing {method} for user {user_id}")
# Implement your MCP method handlers here
if method == "tools/list":
return {
"jsonrpc": "2.0",
"id": request.get("id"),
"result": {
"tools": [] # Your tool definitions
}
}
# Add other method handlers
raise HTTPException(status_code=404, detail=f"Method not found: {method}")
# Configuration
config = {
"resource_url": "https://mcp-server.example.com",
"authorization_server_url": "https://auth.example.com",
"documentation_url": "https://docs.example.com/mcp"
}
# Create and run server
server = OAuthMCPServer(config)
app = server.app
# To run: uvicorn oauth_protected_mcp_server:app --host 0.0.0.0 --port 8000

This OAuth implementation provides production-grade security through several mechanisms. Token validation verifies issuer, signature, audience, and expiration using industry-standard JWT libraries. Resource Indicator support ensures tokens are scoped to the specific MCP server, preventing token misuse. Scope-based authorization maps MCP methods to OAuth scopes, enabling fine-grained access control. JWKS caching reduces latency by avoiding repeated fetches of the authorization server's public keys. Comprehensive logging creates audit trails for security monitoring.
Organizations implementing OAuth protection must establish several operational patterns. Use dedicated authorization servers rather than building custom implementations. Production-grade systems like Keycloak, Auth0, Okta, or Azure AD provide comprehensive admin interfaces, audit logging, token revocation mechanisms, and enterprise identity provider integration. Implement short-lived tokens with refresh token rotation to minimize the impact of token theft. Deploy MCP gateways to centralize policy enforcement and create auditable boundaries at scale. These gateways validate tokens, enrich context, evaluate policies, transform credentials, and forward requests to backend MCP servers while logging all decisions.
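The gateway responsibilities listed above compose naturally as a pipeline; a minimal sketch (the stage names and request shape are our assumptions, not a standard API):

```python
from dataclasses import dataclass, field

@dataclass
class GatewayRequest:
    token_scopes: set
    method: str
    payload: dict
    context: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

def validate(req):      # 1. token validation (stubbed: scopes already extracted)
    if not req.token_scopes:
        raise PermissionError("no valid token")
    req.audit_log.append("validated")

def enrich(req):        # 2. context enrichment (hypothetical tenant lookup)
    req.context["tenant"] = "acme"
    req.audit_log.append("enriched")

def authorize(req):     # 3. policy evaluation: MCP method -> required scope
    required = {"tools/call": "mcp:execute"}.get(req.method, "mcp:read")
    if required not in req.token_scopes:
        raise PermissionError(f"missing scope {required}")
    req.audit_log.append("authorized")

def forward(req):       # 4. forward to the backend MCP server (stubbed)
    req.audit_log.append("forwarded")
    return {"ok": True, "audit": req.audit_log}

def gateway(req):
    validate(req)
    enrich(req)
    authorize(req)
    return forward(req)  # every decision above is recorded for auditing

out = gateway(GatewayRequest({"mcp:execute"}, "tools/call", {}))
assert out["audit"] == ["validated", "enriched", "authorized", "forwarded"]
```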
Conclusion and Next Steps
Building autonomous AI agents with Model Context Protocol requires understanding both the protocol mechanics and the production patterns that separate experimental deployments from enterprise-grade systems. This article covered the foundational architecture, implementation patterns across Python, Node.js, and C#, and the OAuth 2.1 security framework essential for production deployment. The code examples progress from basic tool exposure through production-ready implementations with comprehensive error handling, security validation, and operational patterns.
The next article in this series will examine multi-agent orchestration patterns and architecture. We will explore how to coordinate multiple MCP servers and agents to accomplish complex workflows, implement service discovery and routing, handle agent-to-agent communication, design for fault tolerance and resilience, and establish monitoring and observability across distributed agent systems. These patterns transform individual agents into coordinated systems capable of handling enterprise-scale challenges.
References
- Model Context Protocol – Official Specification (November 2025)
- Build an MCP Server – Model Context Protocol Documentation
- One Year of MCP: November 2025 Spec Release
- Model Context Protocol Spec Updates from June 2025 – Auth0
- Client Registration and Enterprise Management in MCP Authorization
- OAuth for MCP – Emerging Enterprise Patterns – GitGuardian
- Diving Into the MCP Authorization Specification – Descope
- Microsoft MCP for Beginners – GitHub Repository
- A Quick Look at MCP with Large Language Models and Node.js – Red Hat
- How to Build a Custom MCP Server with TypeScript – freeCodeCamp
