MCP Servers Should Be Organized by Function, Not by Integration
There's a better way to organize MCP servers that solves the context rot problem plaguing AI workflows everywhere.


Anthropic's recent blog post about code execution with MCP got everyone excited about converting tool calls into code. But I think we're optimizing the wrong thing.

When you're building MCP tools, there's a moment where you realize something counterintuitive: the description field isn't just documentation—it's instruction. Every parameter description you write is a teaching moment where the AI learns not just what a parameter is, but when to use it, why it matters, and how it impacts the operation.
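To make this concrete, here's a minimal sketch of what a "teaching" tool definition might look like. The tool name, parameters, and wording are all hypothetical, not from any real integration; the point is that each `description` tells the model when and why to use the parameter, not just what it is.

```python
# Hypothetical MCP tool definition (name and fields are illustrative).
# MCP tools declare their parameters as a JSON Schema "inputSchema";
# every description below is written as instruction, not documentation.

search_tool = {
    "name": "search_documents",
    "description": (
        "Search the document store. Prefer this over listing all documents "
        "when you already know what you're looking for."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                # Teaching, not documenting: explains how phrasing affects
                # results, rather than just saying "the query string".
                "description": (
                    "Keyword query. Use specific nouns from the user's "
                    "request; broad queries return noisy results and waste "
                    "follow-up calls."
                ),
            },
            "limit": {
                "type": "integer",
                "default": 5,
                # Explains the cost model so the model can choose wisely.
                "description": (
                    "Max results. Keep this small (3-5) unless the user asks "
                    "for an exhaustive list; each result consumes context."
                ),
            },
        },
        "required": ["query"],
    },
}
```

Compare that to a description like "The search query" and you can see the difference: one is a label, the other is a lesson.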
This shift in thinking—from documenting to teaching—changes how you design tools. Let me show you what that looks like in practice.

Here's a problem I kept running into: when you're building an MCP server, you face this weird tension between giving AI agents enough control and not drowning them in options. Build 20 different tools and you're burning context window on redundant functionality. Build 3 tools with no parameters and the AI can't do anything useful.
After shipping dozens of MCP integrations, I found something that actually works: six core tools that balance OpenAI's single-string requirements with rich, parameter-driven operations. It's not arbitrary—there's a reason this number keeps working.
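To illustrate the shape of a function-organized tool (the actual six tools aren't shown here, and this dispatcher is my own sketch, not the article's implementation), here's one parameter-driven tool that covers several related operations instead of splitting each into its own tool:

```python
# Hypothetical sketch: one "function-shaped" MCP tool with an `operation`
# parameter, rather than a separate tool per integration endpoint.
# All names and backends are placeholders for illustration.

from typing import Any


def handle_project_tool(operation: str, **params: Any) -> dict:
    """Dispatch a single tool call to the requested operation."""
    operations = {
        "list": lambda p: {"items": ["demo"]},          # placeholder backend
        "get": lambda p: {"item": p.get("id")},
        "update": lambda p: {"updated": p.get("id")},
    }
    if operation not in operations:
        # Returning the valid options helps the model self-correct on retry
        # instead of burning a turn on a generic error.
        return {"error": f"unknown operation; choose from {sorted(operations)}"}
    return operations[operation](params)
```

The design trade-off: each operation the tool absorbs is one less top-level tool eating context window, at the cost of a richer parameter schema the model has to learn, which is exactly where those teaching descriptions earn their keep.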
