Introduction to MCP: What You Need to Know
I watched Claude hallucinate API endpoints that didn't exist, confidently call made-up functions, and crash our systems with broken JSON. Then we implemented the Model Context Protocol (MCP), and our error rate dropped from 28% to under 3%.
This is what I wish someone had told me when I started.
The Problem MCP Solves
Before MCP, connecting AI assistants to your tools meant:
- Copy-pasting API docs into prompts and hoping Claude generated valid calls (roughly 30% of the time, it didn't)
- Building custom integrations for every AI tool—we built three different ones in a month
- Letting agents scrape your website like a human (slow, brittle, breaks constantly)
MCP provides a better way: AI assistants discover what's available at runtime, call well-defined tools with validated inputs, and receive structured outputs. No guessing, no hallucinating.
We went from maintaining three separate integrations to one MCP server that works with Claude Desktop, Cursor, and any tool that supports MCP.
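To make "discover what's available at runtime" concrete, here is roughly what a single tool looks like when a client lists a server's capabilities. The field names (name, description, inputSchema) come from the MCP spec; the `create_project` tool itself is a made-up example.

```python
# Illustrative only: a tool descriptor as a client might see it after calling
# the protocol's tools/list method. The create_project tool is hypothetical.
discovered_tool = {
    "name": "create_project",
    "description": "Create a new project in the workspace.",
    "inputSchema": {  # standard JSON Schema, so inputs can be validated
        "type": "object",
        "properties": {
            "name": {"type": "string", "minLength": 3, "maxLength": 64},
            "priority": {"type": "integer", "minimum": 1, "maximum": 5},
        },
        "required": ["name", "priority"],
    },
}
```

Because the schema travels with the tool, the client can reject a malformed call before it ever reaches your code.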
How MCP Works
MCP has three parts:
- Host (Claude Desktop, Cursor): Manages the AI model and user interface
- Client: Protocol adapter that handles communication
- Server: Your code that exposes capabilities
When we switched to MCP, we moved 80% of our validation logic out of Claude's prompts and into our server code—making everything faster and more reliable.
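To make that concrete, here's a minimal sketch of validation living in server code rather than in prompts, using FastMCP with Pydantic constraints. It implements the same hypothetical `create_project` tool from the descriptor above; the limits are invented for illustration.

```python
from typing import Annotated

from fastmcp import FastMCP
from pydantic import Field

mcp = FastMCP("ValidationDemo")  # hypothetical server for illustration

@mcp.tool(description="Create a project with a validated name and priority.")
async def create_project(
    name: Annotated[str, Field(min_length=3, max_length=64, description="Project name")],
    priority: Annotated[int, Field(ge=1, le=5, description="1 (low) to 5 (urgent)")],
) -> dict:
    # If the model sends a bad value, it gets a structured validation error
    # back from the server; nothing in the prompt has to enforce the rules.
    return {"created": name, "priority": priority}
```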
Three Capabilities Your Server Exposes
Tools are actions the AI can perform: `searchDocuments`, `createProject`, `sendEmail`. Each tool specifies exactly what inputs it needs and what it returns. When users ask Claude to "find our Slack integration," it calls the `search_bundles` tool with validated parameters.
Resources are read-only data like documentation or files. When someone asks "How do I set up the Slack bundle?", Claude reads the actual setup guide instead of hallucinating instructions.
Prompts are reusable templates. Honestly, we barely use these yet—tools and resources are where the value is.
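Here's a rough sketch of how all three capabilities can be declared with FastMCP. The bundle-flavored names, URIs, and data are invented for illustration, not our production code.

```python
from fastmcp import FastMCP

mcp = FastMCP("BundleServer")  # hypothetical server

BUNDLES = [{"name": "Slack", "tags": ["chat", "notifications"]}]

@mcp.tool(description="Search available bundles by keyword.")
async def search_bundles(query: str) -> list[dict]:
    # Tool: an action the model can invoke with validated input.
    return [b for b in BUNDLES if query.lower() in b["name"].lower()]

@mcp.resource("docs://bundles/slack-setup")
def slack_setup_guide() -> str:
    # Resource: read-only data the model reads instead of guessing.
    return "1. Install the Slack bundle. 2. Authorize the workspace. 3. Pick channels."

@mcp.prompt()
def troubleshoot_bundle(name: str) -> str:
    # Prompt: a reusable template the host can surface to users.
    return f"Walk me through debugging the {name} bundle step by step."
```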
What Makes MCP Different
MCP isn't just REST with a new name. The mental model is different:
- REST: Static API docs, CRUD operations (GET/POST/PUT/DELETE), client constructs URLs
- MCP: Runtime discovery, action verbs (`sendEmail`, `searchDocs`), server exposes capabilities
Agents think in actions, not HTTP verbs. They want to "send an email," not "POST /api/emails with specific JSON." This alignment is why MCP works so much better than dumping API docs into prompts.
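One way to see the difference: instead of documenting `POST /api/emails` and hoping the model assembles the request correctly, the server exposes the action itself. A hedged sketch, with a hypothetical email tool and delivery stubbed out:

```python
# REST mindset: the client owns the endpoint shape and payload.
#   POST https://api.example.com/api/emails  {"to": ..., "subject": ..., "body": ...}
#
# MCP mindset: the server names the action; the agent just calls it.
from typing import Annotated

from fastmcp import FastMCP
from pydantic import Field

mcp = FastMCP("EmailServer")  # hypothetical server for illustration

@mcp.tool(description="Send an email to a single recipient.")
async def send_email(
    to: Annotated[str, Field(description="Recipient address")],
    subject: Annotated[str, Field(description="Subject line")],
    body: Annotated[str, Field(description="Plain-text body")],
) -> dict:
    # A real implementation would hand off to your mail provider; the point
    # here is that the agent requests an action and gets structured output.
    return {"status": "queued", "to": to, "subject": subject}
```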
Mistakes We Made (Learn From Ours)
Too many tools: Claude struggles with more than 40 tools. Build one powerful `search` tool instead of 20 narrow list endpoints.
Authentication headaches: Don't add api_key
as a parameter to every tool. Extract it from the request headers in your server code.
Wrong transport: Start with stdio for local development (simple, fast). Only use SSE (server-sent events) when you need remote access.
Generic errors: Return structured error messages the AI can understand, not stack traces.
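For that last point, "structured" just means something the model can parse and act on. A rough sketch, with a made-up `get_project` tool and an in-memory store:

```python
from fastmcp import FastMCP

mcp = FastMCP("ProjectServer")  # hypothetical server for illustration

PROJECTS = {"proj_123": {"id": "proj_123", "name": "Slack rollout"}}

@mcp.tool(description="Look up a project by its ID.")
async def get_project(project_id: str) -> dict:
    project = PROJECTS.get(project_id)
    if project is None:
        # Structured, actionable error instead of a stack trace: the model
        # can read the hint and recover on its own.
        return {
            "error": "project_not_found",
            "message": f"No project with id '{project_id}'.",
            "hint": "Call a search tool first to find a valid project id.",
        }
    return project
```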
Building Your First MCP Server (15 Minutes)
Here's a minimal MCP server using FastMCP:
```python
from fastmcp import FastMCP
from pydantic import Field
from typing import Annotated

mcp = FastMCP("MyFirstServer")

@mcp.tool(description="Greet a user by name.")
async def greet_user(
    name: Annotated[str, Field(description="The user's name")]
) -> dict:
    # Returning a dict gives the client structured output it can parse.
    return {"message": f"Hello, {name}! Welcome to MCP."}

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```
Run it:

```bash
pip install fastmcp
fastmcp run server.py
```
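Before wiring it into Claude Desktop, you can smoke-test the server in-process with FastMCP's client (the exact client API may vary by FastMCP version):

```python
# test_server.py: assumes server.py sits next to this file and exposes `mcp`.
import asyncio

from fastmcp import Client
from server import mcp

async def main():
    async with Client(mcp) as client:  # in-memory transport, no subprocess
        tools = await client.list_tools()
        print("tools:", [t.name for t in tools])
        result = await client.call_tool("greet_user", {"name": "Ada"})
        print("result:", result)

asyncio.run(main())
```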
Configure Claude Desktop (on macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "MyFirstServer": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```
Restart Claude Desktop and ask: "Greet me by name."
Why This Matters Now
MCP is evolving fast. A few months ago, agents handled ~60 tools reliably; now it's closer to 40. Anthropic, OpenAI, and Google are all building MCP support into their platforms.
If you're building AI agents that need to do real work with your systems, understanding MCP isn't optional anymore. The protocol has momentum, the tooling is maturing, and the core principles (runtime discovery, validated inputs, action-oriented design) are solid.
We built MCPBundles to make this easier—pre-built integrations so you don't have to build everything from scratch. But whether you use our bundles or build your own, MCP is how AI agents will interact with software going forward.
Key Takeaways
- MCP solves hallucination problems by letting AI discover real capabilities instead of guessing
- One MCP server works everywhere—no more building separate integrations for each AI tool
- Think in actions, not CRUD: `sendEmail` beats `POST /api/emails`
- Start with 5-10 tools, not 100, and use stdio before SSE
- Error rate improvements are real: we went from 28% failures to under 3%
Resources
- Model Context Protocol Spec – Official specification
- Anthropic MCP Docs – How Claude uses MCP
- FastMCP GitHub – Best Python implementation
- MCP TypeScript SDK – Official TypeScript version
- MCP vs API Comparison – Architectural deep dive
Building an MCP server or have questions? We're figuring this out together—reach out.