
MCP Servers
A comprehensive guide to the Model Context Protocol for connecting AI models to external tools and data.
What is the Model Context Protocol?
The Model Context Protocol (MCP) is an open-source communication standard designed to link powerful AI models (like Claude or ChatGPT) with external resources—your files, databases, APIs, and more.
Think of it as a universal adapter for AI agents. Just as USB-C standardized connections for electronics, MCP standardizes how AI applications access data (Resources) and actions (Tools), enabling them to become truly useful beyond their training data.
Why MCP Matters
Without MCP, every AI integration requires custom code. Want Claude to read your calendar? Custom API integration. Want it to query your database? Another custom solution. MCP provides a single, standardized way to connect AI to anything.
With MCP: one protocol connects your AI to calendars, databases, file systems, APIs, and any custom tools—all through the same interface.
Without MCP: each integration requires custom code, different authentication flows, unique data formats, and separate maintenance.
The Three Pillars of MCP
Every MCP interaction revolves around three fundamental building blocks that an MCP Server exposes to an MCP Client (the AI application).
1. Tools — The Robot's Hands
Tools are functions the AI can call to perform actions. These are active, schema-defined operations the AI invokes based on user requests.
• search_flights — Query available flights between cities
• send_message — Post a message to Slack or Discord
• create_calendar_event — Add an event to Google Calendar
• execute_query — Run a SQL query on your database
• deploy_function — Deploy code to a cloud platform
Tools are model-controlled—the AI decides when to use them based on the conversation. Each tool has a defined schema specifying its parameters, making it easy for the AI to understand how to invoke it correctly.
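As a concrete sketch, here is how a tool like search_flights might be declared with the Python SDK's FastMCP class (the same class used in the full example later in this guide); the flight data below is a hypothetical placeholder, not a real API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("travel-server")

# The parameter names, type hints, and docstring become the schema the AI
# reads to decide when and how to call this tool.
@mcp.tool()
def search_flights(origin: str, destination: str, max_results: int = 10) -> list[dict]:
    """Search available flights between two cities."""
    # Hypothetical placeholder; a real server would call a flight API here.
    return [
        {"flight": "XY123", "origin": origin, "destination": destination, "price": 199},
    ][:max_results]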
2. Resources — The Robot's Eyes
Resources are passive data the AI can read for context. These are structured ways to access files, database schemas, API documentation, or any information the AI needs to understand the environment.
• file:///documents/report.pdf — A local file
• postgres://mydb/schema — Database schema information
• github://repo/README.md — Repository documentation
• calendar://events/today — Today's scheduled events
Resources are application-controlled—your app decides what context to provide to the AI. The client fetches resource data to give the AI relevant background information for its responses.
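A minimal sketch of a resource, again using FastMCP as in the later example; the calendar data here is an invented placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-server")

# A resource is read rather than invoked: the URI template identifies the
# data, and the client chooses when to fetch it as context for the AI.
@mcp.resource("calendar://events/{date}")
def events_for_date(date: str) -> str:
    """Scheduled events for the given date (YYYY-MM-DD)."""
    # Hypothetical placeholder; a real server would query a calendar API.
    return f"Events on {date}: 09:00 Standup, 14:00 Design review"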
3. Prompts — Pre-Programmed Macros
Prompts are reusable templates for complex workflows. These are pre-built instructions that guide the AI to use specific combinations of Tools and Resources to complete multi-step tasks.
• "Plan a vacation" — Combines calendar resources, flight search tools, and hotel booking tools
• "Code review" — Uses file resources, linting tools, and documentation resources
• "Daily standup" — Pulls from project management resources and drafts status updates
Prompts are user-controlled—users invoke them explicitly when they want the AI to follow a specific workflow rather than figure out the approach on its own.
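As an illustrative sketch, the Python SDK also exposes a prompt decorator on FastMCP; the workflow text and names below are invented for the example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("workflow-server")

# A prompt is a reusable template the user invokes explicitly; it returns
# instructions that steer the AI toward specific tools and resources.
@mcp.prompt()
def plan_vacation(destination: str, month: str) -> str:
    """Template for the 'Plan a vacation' workflow."""
    return (
        f"Plan a trip to {destination} in {month}. "
        "Check my calendar resource for free dates, then use the flight "
        "search and hotel booking tools to propose an itinerary."
    )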
Real-World MCP Applications
MCP transforms general-purpose AI into personalized, capable agents that integrate deeply with your tools and data.
Personal Productivity
Development Workflows
A Filesystem Server with access to your project folder enables the AI to:
• Read source files and understand project structure
• Analyze dependencies from package.json or requirements.txt
• Search for patterns across the codebase
• Create or modify files based on instructions
Combined with a Git tool, it can also commit changes and create pull requests.
Enterprise Data Analysis
Architecture: Clients and Servers
Building with MCP requires understanding the two main components and how they communicate.
MCP Servers — The Providers
An MCP Server is a program you write that exposes your data, systems, and logic via the MCP standard. It listens for requests from clients and executes the actual operations.
• Define available Tools with schemas describing parameters and return types
• Expose Resources with URIs and content types
• Handle authentication for sensitive operations
• Execute logic when tools are invoked
• Return structured responses the client can process
Transport Options
MCP Servers communicate via two transport mechanisms:
STDIO (standard input/output)
Best for: Local, desktop-based applications
• Simpler setup—runs as a subprocess
• Used by Claude Desktop, VS Code extensions
• No network configuration required
• Process communication via standard input/output
HTTP/SSE (Server-Sent Events)
Best for: Remote, web-based applications
• Enables cloud-hosted MCP servers
• Supports multiple concurrent clients
• Requires API endpoints and CORS configuration
• Needs proper authentication (OAuth 2.1)
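With the Python SDK, switching transports is typically a one-line change at startup. Treat the sketch below as an assumption: the exact transport identifiers vary between SDK versions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-server")

if __name__ == "__main__":
    # Local desktop clients: talk over stdin/stdout (the default).
    mcp.run(transport="stdio")

    # Remote deployments: serve over HTTP/SSE instead.
    # (Assumption: the transport name may differ by SDK version.)
    # mcp.run(transport="sse")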
MCP Clients — The Consumers
An MCP Client is the application that integrates the AI model and communicates with MCP servers on behalf of the user.
• Discover servers and their capabilities
• Present tools to the AI model for decision-making
• Fetch resources to provide context
• Handle user approval for sensitive tool executions
• Display results back to the user
Popular MCP clients include Claude Desktop, Visual Studio Code (via extensions), and custom applications built with the MCP SDK.
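For a sense of what a custom client does under the hood, here is a hedged sketch using the Python SDK's stdio client; the class and method names reflect the SDK at the time of writing and may change.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the weather server from the next section as a subprocess.
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()             # handshake and capability discovery
            tools = await session.list_tools()     # discover what the server offers
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("get_forecast", {"city": "Paris"})
            print(result)

if __name__ == "__main__":
    asyncio.run(main())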
Building Your First MCP Server
Let's walk through creating a simple MCP server that provides weather information.
Python Example
from mcp.server.fastmcp import FastMCP

# Create a server instance
mcp = FastMCP("weather-server")

# Define a tool the AI can call
@mcp.tool()
def get_forecast(city: str, days: int = 5) -> dict:
    """Get weather forecast for a city."""
    return {
        "city": city,
        "forecast": [
            {"day": 1, "temp": 72, "conditions": "Sunny"},
            {"day": 2, "temp": 68, "conditions": "Partly Cloudy"},
        ][:days],
    }

# Define a resource for current conditions
@mcp.resource("weather://current/{city}")
def current_weather(city: str) -> str:
    """Current weather conditions for a city."""
    return f"{city}: Temperature 70°F, Humidity 45%, Conditions: Clear"

# Run the server
if __name__ == "__main__":
    mcp.run()
Connecting to Claude Desktop
Once your server is built, add it to Claude Desktop's configuration:
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_server.py"]
    }
  }
}
Restart Claude Desktop, and you'll see the weather tools available in your conversations!
Security and Authorization
When building production MCP servers, especially remote ones, security is critical.
OAuth 2.1 Flow
MCP uses OAuth 2.1 for authorization. Here's how it works:
1. Client connects to your protected server
2. Server responds with 401 Unauthorized plus a link to the Protected Resource Metadata
3. Client reads the PRM and discovers the Authorization Server (e.g., Auth0, Keycloak)
4. User logs in via browser and grants consent to the required scopes
5. Client receives an Access Token
6. All future requests include the token for validation
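To make step 2 concrete, a protected server answers unauthenticated requests with a 401 and a pointer to its Protected Resource Metadata document (RFC 9728). The URLs and scope in this sketch are hypothetical placeholders.
# Hypothetical metadata document served at
# https://mcp.example.com/.well-known/oauth-protected-resource
PROTECTED_RESOURCE_METADATA = {
    "resource": "https://mcp.example.com",
    "authorization_servers": ["https://auth.example.com"],
    "scopes_supported": ["weather.read"],
}

def reject_unauthenticated_request() -> tuple[int, dict[str, str]]:
    """Build the 401 response that tells the client where to find the PRM."""
    headers = {
        "WWW-Authenticate": (
            'Bearer resource_metadata='
            '"https://mcp.example.com/.well-known/oauth-protected-resource"'
        )
    }
    return 401, headers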
Human-in-the-Loop
Tools that perform sensitive actions should require explicit user approval:
# Tools with sensitive operations should be marked
# The MCP client will prompt user approval before execution
@mcp.tool()
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email. Client will request user approval."""
    # This only runs after user approves in the client UI
    return email_service.send(to, subject, body)
This ensures the AI can't take irreversible actions without the user explicitly approving each one.
Advanced MCP Features
Elicitation
Allows servers to request additional information from users during tool execution:
A "Book Hotel" tool might pause mid-execution to ask:
"What is your preferred room type? (Sea view / City view / Standard)"
The user's response is passed back to the tool to continue processing.
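A rough sketch of how this might look with the Python SDK's elicitation support; the ctx.elicit call and the shape of its result are assumptions about the current API, so check the SDK documentation for the exact signature.
from pydantic import BaseModel

from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("hotel-server")

class RoomPreference(BaseModel):
    room_type: str  # e.g. "Sea view", "City view", "Standard"

@mcp.tool()
async def book_hotel(hotel_id: str, ctx: Context) -> str:
    """Book a hotel, pausing mid-execution to ask for a room preference."""
    # Assumption: ctx.elicit() and its result fields may differ by SDK version.
    answer = await ctx.elicit(
        message="What is your preferred room type? (Sea view / City view / Standard)",
        schema=RoomPreference,
    )
    if answer.action != "accept":
        return "Booking cancelled."
    return f"Booked hotel {hotel_id} with a {answer.data.room_type} room."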
Roots
Clients can define filesystem boundaries for security:
// Client configuration restricts server filesystem access
{
  "roots": [
    "file:///Users/me/projects/current-project",
    "file:///Users/me/documents/work"
  ]
}
// The server can only access files within these directories
Sampling
Servers can request LLM completions back from the client:
A flight search tool returns 100 options. Instead of dumping all data:
1. Server processes the raw results
2. Server "samples" the client's LLM: "Summarize the top 5 flights by price and convenience"
3. Client's AI generates the summary
4. Server returns the polished result
This leverages the client's AI for summarization while keeping data processing on the server.
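Here is a hedged sketch of that flow in Python; the flight search is a hypothetical stub, and the create_message sampling call reflects the SDK at the time of writing.
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("flight-server")

def search_backend(origin: str, destination: str) -> list[dict]:
    # Hypothetical stub standing in for a real flight API with many results.
    return [{"flight": "XY123", "price": 199}, {"flight": "XY456", "price": 249}]

@mcp.tool()
async def find_best_flights(origin: str, destination: str, ctx: Context) -> str:
    """Search flights, then ask the client's LLM to summarize the results."""
    raw_results = search_backend(origin, destination)
    prompt = f"Summarize the top 5 flights by price and convenience:\n{raw_results}"
    response = await ctx.session.create_message(
        messages=[SamplingMessage(role="user", content=TextContent(type="text", text=prompt))],
        max_tokens=300,
    )
    # The client's AI produced the summary; the server just returns it.
    return response.content.text if response.content.type == "text" else str(response.content)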
Getting Started
Ready to build with MCP? Here's your path forward:
For Beginners
1. Install Claude Desktop — It has built-in MCP client support
2. Follow the quickstart guide — Build a simple local server in Python or TypeScript
3. Connect to Claude — See your tools appear in conversation
4. Iterate — Add more tools and resources as needed
For Production
• Use HTTP/SSE transport for remote accessibility
• Implement OAuth 2.1 for authentication
• Add rate limiting and usage quotas
• Enable logging and monitoring
• Define clear tool schemas with validation
• Require approval for sensitive operations
• Test with various client implementations
Explore Existing Servers
The MCP ecosystem includes servers for:
• Filesystem — Read/write local files
• GitHub — Repository access, issue management
• PostgreSQL — Database queries and schema exploration
• Slack — Messaging and channel management
• Google Drive — Document access and search
• Brave Search — Web search capabilities
• Memory — Persistent knowledge storage
MCP Design Patterns
The Database Agent Pattern
The DevOps Agent Pattern
Summary
The Model Context Protocol transforms AI from a standalone text generator into a connected agent that can interact with your real tools and data.
• Tools let AI perform actions (model-controlled)
• Resources give AI context (application-controlled)
• Prompts define reusable workflows (user-controlled)
• STDIO for local servers, HTTP/SSE for remote
• OAuth 2.1 secures production deployments
• Human-in-the-loop ensures safe sensitive operations
Start simple with a local server, then expand as you discover more ways MCP can enhance your AI workflows. The protocol is designed to grow with your needs—from personal productivity tools to enterprise-grade agent systems.