The Six-Tool Pattern: MCP Server Design That Scales
Here's a problem I kept running into: when you're building an MCP server, you face this weird tension between giving AI agents enough control and not drowning them in options. Build 20 different tools and you're burning context window on redundant functionality. Build 3 tools with no parameters and the AI can't do anything useful.
After shipping dozens of MCP integrations, I found something that actually works: six core tools that balance OpenAI's single-string requirements with rich, parameter-driven operations. It's not arbitrary—there's a reason this number keeps working.
The Problem Nobody Talks About
Let me show you what context rot actually looks like. You're building a Weaviate integration, and you think "I need to be thorough," so you create:
- `weaviate_search`
- `weaviate_search_by_collection`
- `weaviate_search_with_limit`
- `weaviate_get_object`
- `weaviate_list_collections`
- `weaviate_list_collections_with_schema`
- `weaviate_list_objects`
- `weaviate_list_objects_with_limit`
- `weaviate_filter_objects`
- `weaviate_insert_object`
- `weaviate_insert_batch`
- `weaviate_update_object`
- ...you get the idea
You end up with 18 tools. Each tool definition consumes context tokens. When the AI needs to list objects, it's evaluating 18 different options, half of which are just variations with slightly different parameters. The AI wastes effort deciding between `weaviate_list_objects` and `weaviate_list_objects_with_limit` when it should be spending that effort understanding your data.
This is context rot—burning the AI's working memory on redundant information instead of actual content.
The opposite mistake is just as bad. You think "I'll keep it minimal" and create tools with almost no parameters:
```python
async def search(query: str) -> dict:
    # Cool, but how do I specify which collection?
    # Or limit results? Or filter by date?
    ...
```
Now the AI has to retrieve everything and filter it client-side, which is slow, expensive, and often impossible with large datasets.
The Six-Tool Pattern
Here's what actually works. You need six tools, split into three categories based on what they do and how they work:
Category 1: The Universal Interface (OpenAI Standard)
These are your `fetch` and `search` tools. They follow OpenAI's MCP requirements—single-string parameters that work with ChatGPT's deep research mode and any other AI that adopts this pattern.
Tool 1: `fetch` - "Get me this specific thing"
Takes one parameter: an ID string like `weaviate:object:CollectionName:uuid-123`. The AI doesn't need to know your database structure or routing—it just passes an ID it got from search results, and you figure out where to get it.
The magic is in the ID format. You're encoding routing information (which collection, which UUID) into one string that your server knows how to parse. The AI sees a simple string; you handle the complexity server-side.
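Here's a rough sketch of that server-side parsing, assuming the illustrative `weaviate:object:<Collection>:<uuid>` format above (the `ObjectRef` helper is hypothetical, just to show the shape of the idea):

```python
from dataclasses import dataclass

@dataclass
class ObjectRef:
    source: str       # e.g. "weaviate"
    kind: str         # e.g. "object"
    collection: str   # e.g. "CollectionName"
    uuid: str         # e.g. "uuid-123"

def parse_object_id(object_id: str) -> ObjectRef:
    """Split an ID like 'weaviate:object:CollectionName:uuid-123' into routing info."""
    parts = object_id.split(":", 3)
    if len(parts) != 4:
        raise ValueError(f"Malformed object ID: {object_id!r}")
    source, kind, collection, uuid = parts
    return ObjectRef(source=source, kind=kind, collection=collection, uuid=uuid)
```

The AI never sees any of this. It just echoes back an ID string it got from a previous search result.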
Tool 2: `search` - "Find things that match this"
Takes one parameter: a query string. But here's the clever part—you can accept both natural language ("customer feedback about pricing") AND structured syntax ("collection:Product limit:20 wireless speakers") in the same parameter.
Your server parses the string. If it sees `collection:Product`, you extract that as a filter. If it's pure natural language, you embed it and do semantic search. The AI just sends a string; you figure out what it meant.
This is way smarter than having separate parameters for collection, limit, date filters, etc. OpenAI requires single-string for a reason—it's simpler for the AI to reason about, and it pushes the intelligence to your server where it belongs.
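A minimal sketch of that parsing, assuming only the `collection:` and `limit:` prefixes from the example above (a real server would support more tokens and validate harder):

```python
import re

def parse_search_query(query: str) -> dict:
    """Pull structured tokens (collection:X, limit:N) out of the query string;
    whatever is left over is treated as the natural-language query."""
    filters: dict = {}
    remaining = []
    for token in query.split():
        match = re.fullmatch(r"(collection|limit):(\S+)", token)
        if match:
            key, value = match.groups()
            filters[key] = int(value) if key == "limit" else value
        else:
            remaining.append(token)
    filters["query"] = " ".join(remaining)
    return filters

# parse_search_query("collection:Product limit:20 wireless speakers")
# -> {"collection": "Product", "limit": 20, "query": "wireless speakers"}
```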
Category 2: Rich List Operations (Parameter-Driven)
Now we get to the interesting part. These tools aren't constrained by OpenAI's single-string requirement because they're not universal discovery tools—they're domain-specific operations. And this is where you can get really expressive with parameters.
Tool 3: `weaviate_list_collections` - "Show me what data you have"
This tool needs maybe 6 parameters: pattern matching for names, whether to include schema info, whether to include object counts, filter by vectorizer type, plus limit and offset for pagination.
Here's the key insight: instead of creating six separate tools (`list_collections`, `list_collections_with_schema`, `list_collections_with_count`, etc.), you create ONE tool with six optional parameters.
The magic happens in your parameter descriptions. Don't just say `with_schema: bool`. Write a description that teaches the AI when to use it:
"Include full schema definition for each collection (properties, data types, vectorizer config). Set to true when you need to understand the structure before querying. Default: false"
The AI reads this and learns: "Oh, if I need to know the structure before I query, I should set this to true." You're teaching through descriptions, not through creating more tools.
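In code, that might look something like the sketch below. I'm assuming the MCP Python SDK's FastMCP decorator (the same `@mcp.tool` style as the deprecation example near the end of this post) and pydantic `Field` metadata for the per-parameter descriptions; the parameter names are illustrative, not a spec:

```python
from typing import Annotated
from pydantic import Field
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weaviate")

# Assumes FastMCP picks up pydantic Field descriptions when it builds the tool's input schema.
@mcp.tool(description="List collections, optionally with schema details and object counts.")
async def weaviate_list_collections(
    name_pattern: Annotated[str | None, Field(
        description="Filter collection names by pattern, e.g. 'Prod*'. Leave empty to list everything."
    )] = None,
    include_schema: Annotated[bool, Field(
        description="Include full schema definition for each collection (properties, data types, "
                    "vectorizer config). Set to true when you need to understand the structure "
                    "before querying. Default: false."
    )] = False,
    include_counts: Annotated[bool, Field(
        description="Include object counts per collection. Slower on large instances. Default: false."
    )] = False,
    limit: Annotated[int, Field(
        description="Maximum collections per request (default: 50, max: 200)."
    )] = 50,
    offset: Annotated[int, Field(
        description="Pagination offset. Combine with limit to page through many collections."
    )] = 0,
) -> dict:
    # The real implementation would call the Weaviate client here.
    return {"collections": []}
```

Every one of those descriptions is doing the work that a separate tool definition would otherwise do.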
Tool 4: `weaviate_list_objects` - "Browse the actual data"
This is your workhorse. The AI wants to explore a collection, and it needs control over:
- Which properties to return (don't send everything if you only need 2 fields)
- Pagination (limit and offset)
- Simple filtering (WHERE clauses)
- Sorting (by which field, in what direction)
- Whether to include vectors (they're huge, make it opt-in)
You could create separate tools for each combination. Or you could create one tool with 8 parameters and write really good descriptions.
The difference is massive. Eight parameters with good descriptions consume way less context than eight separate tools. And the AI learns faster because it sees how parameters compose together.
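For a sense of scale, a signature along these lines (descriptions omitted for brevity, parameter names illustrative) covers every bullet above:

```python
async def weaviate_list_objects(
    collection: str,                       # required: which collection to browse
    properties: list[str] | None = None,   # None = return every property
    where: dict | None = None,             # simple filter, passed through to the data source
    sort_by: str | None = None,            # field to sort on
    sort_order: str = "asc",               # "asc" or "desc"
    limit: int = 10,                       # small default keeps responses cheap
    offset: int = 0,                       # combine with limit for pagination
    include_vector: bool = False,          # vectors are huge, so they're opt-in
) -> dict:
    ...
```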
Here's an example of what a good parameter description looks like:
`properties` - "List of property names to include in results. Leave empty to include all properties. Use this to reduce response size when you only need specific fields. Example: ['title', 'price', 'category'] for product data."
See what that's doing? It's not just defining the parameter—it's explaining the tradeoff (all vs. specific), when you'd use it (when you want smaller responses), and giving a concrete example. The AI reads this once and knows how to use it forever.
Category 3: Write Operations (Unified Actions)
This is where most people mess up. They create separate tools for every operation:
- `weaviate_insert` for creating new objects
- `weaviate_insert_batch` for creating multiple objects
- `weaviate_update` for updating existing objects
- `weaviate_update_batch` for updating multiple objects
That's four tools that do the same thing with minor variations. It's context waste.
Tool 5: `weaviate_upsert` - "Create or update things"
One tool. It handles:
- Insert single object (no ID provided = create new)
- Insert batch (array of objects without IDs)
- Update single object (ID provided = update existing)
- Update batch (array of IDs matching array of objects)
How? Smart parameter design. The `data` parameter accepts either a dict (single object) or an array (batch). The `id` parameter is optional—if you provide it, we update; if you don't, we create. For batch updates, you pass an `ids` array that matches your data array.
The AI doesn't sit there thinking "Hmm, do I need insert or update? Do I need the batch version or single version?" It just calls `upsert` with whatever data it has, and your server figures out the right operation.
This reduces cognitive load. Four operations become one tool call with different parameter combinations.
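A sketch of that routing logic, with the actual Weaviate calls left as comments (this is the idea, not a production implementation):

```python
async def weaviate_upsert(
    collection: str,
    data: dict | list[dict],
    id: str | None = None,
    ids: list[str] | None = None,
) -> dict:
    """One write tool: route to insert or update, single or batch, from the parameters."""
    if isinstance(data, list):
        if ids is not None:
            if len(ids) != len(data):
                return {"error": "IDS_LENGTH_MISMATCH",
                        "hint": "Provide one ID per object, in the same order as the data array."}
            operation = "update_batch"   # real code would call the batch update here
        else:
            operation = "insert_batch"   # real code would call the batch insert here
        count = len(data)
    else:
        operation = "update" if id is not None else "insert"
        count = 1
    # Real code would perform the operation and return the stored IDs.
    return {"operation": operation, "collection": collection, "count": count}
```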
Tool 6: `weaviate_delete` - "Remove things"
Similar pattern. One tool with a `type` parameter:
- `type="object"` - delete one thing
- `type="objects"` - delete multiple things
- `type="collection"` - delete entire collection (with a required `confirm=true` safety check)
You could have three separate tools. Or one tool that routes based on the `type` parameter. Same functionality, way less context waste.
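The routing sketch is even shorter than the upsert one (again, placeholder logic rather than the real client calls):

```python
async def weaviate_delete(
    type: str,                     # "object", "objects", or "collection"
    collection: str,
    ids: list[str] | None = None,
    confirm: bool = False,
) -> dict:
    """One delete tool that routes on the `type` parameter."""
    if type == "collection" and not confirm:
        return {"error": "CONFIRM_REQUIRED",
                "hint": "Deleting a collection is destructive; call again with confirm=true."}
    if type in ("object", "objects") and not ids:
        return {"error": "IDS_REQUIRED",
                "hint": "Pass the ID(s) of the object(s) you want to delete."}
    # Real code would call the matching delete operation here.
    return {"deleted": type, "collection": collection, "count": len(ids or [])}
```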
Why This Actually Works
1. Context is Expensive, Spend It Wisely
When you go from 18 tools to 6, you're not just saving space—you're fundamentally changing how the AI thinks about your system.
With 18 tools, the AI is doing decision tree traversal: "I want to list objects... do I need the with_limit version? Or the with_filter version? Or the with_limit_and_filter version?" That's cognitive overhead before it even starts solving the user's problem.
With 6 tools, the AI learns: "There's a list tool. It has parameters for limiting and filtering. I just set the ones I need." Clean mental model, faster decisions, more context left for actual data.
2. Descriptions Are Teaching Moments
Here's something most people miss: when the AI reads your tool definitions, it's learning. Every parameter description is a teaching moment.
Bad parameter description: limit: int - Maximum number of results
Good parameter description: limit: int - Maximum number of results (default: 10, max: 100). Use smaller values for exploration, larger for bulk operations.
The second one teaches when to use small vs. large values. The AI reads it once and internalizes the pattern. That's way more efficient than creating separate tools for different use cases.
3. Two Standards: Universal and Domain-Specific
This is the clever part. You're actually following two different design patterns in one server:
- Universal tools (`fetch`, `search`): single-string parameters, OpenAI-compatible, work with any AI that adopts the pattern
- Domain tools (everything else): rich parameters with detailed descriptions, optimized for your specific data source
You're not forced to cram everything into single-string parameters. You use that pattern where it matters (discovery and retrieval) and use rich parameters where you need control (browsing and mutations).
4. Unified Operations Reduce Mental Load
Think about how many decisions the AI has to make with separate insert/update tools:
- "Do I need to create or update this?"
- "Wait, does update require an ID parameter?"
- "Is there a different tool for batch operations?"
- "Can I mix creates and updates in one batch?"
With `upsert`, those questions disappear. The AI just passes data. If there's an ID, you update. If there's no ID, you create. Array means batch. The server handles the routing.
It's like having a smart assistant who figures out what you meant instead of making you specify every detail.
5. One Tool, Many Workflows
This is what makes the pattern scale. With `list_objects` and 8 parameters, you can handle dozens of workflows:
- Quick browse: just collection name, use defaults
- Filtered browse: add a WHERE clause
- Paginated results: add limit and offset
- Optimized response: specify which properties you need
- Sorted data: add sort_by and sort_order
Every combination works. One tool definition, dozens of use cases. And the AI figures out which parameters to use by reading your descriptions.
How to Actually Build This
Descriptions Should Teach, Not Just Define
Stop writing lazy parameter descriptions. Every description is an opportunity to teach the AI how to use your tool.
Don't write: limit: int - Number of results
Write: limit: int - Maximum objects per request (default: 10, max: 100). Use 10-20 for exploration, 50-100 for bulk operations.
See the difference? The second one explains defaults, constraints, AND gives guidance on when to use different values. The AI reads this and learns a pattern it can apply forever.
Type Hints Are Documentation
When you use `Literal["asc", "desc"]` instead of just `str`, you're telling the AI exactly what's valid. No guessing, no trial and error. The AI sees the type hint and knows those are the only two options.
This is especially important for parameters like sorting, operators, and resource types where there's a specific set of valid values.
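For example (a tiny illustrative fragment, not a full tool definition):

```python
from typing import Literal

SortOrder = Literal["asc", "desc"]   # the generated schema tells the AI these are the only valid values

async def list_objects(collection: str, sort_order: SortOrder = "asc") -> dict:
    ...
```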
Errors Should Guide, Not Just Reject
When the AI makes a mistake, your error message is a teaching moment. Don't just say "Invalid limit." Explain what went wrong and how to fix it:
```json
{
  "error": "LIMIT_EXCEEDED",
  "message": "Maximum limit is 100, received 1000",
  "hint": "Use pagination (offset + limit) to retrieve more results"
}
```
Next time the AI needs 1000 results, it'll paginate automatically because your error taught it the pattern.
Show Your Work in Responses
When you parse a structured search query like "collection:Product limit:20 wireless speakers", include what you extracted in your response:
```json
{
  "results": [...],
  "parsed_filters": {
    "collection": "Product",
    "limit": 20,
    "query": "wireless speakers"
  }
}
```
The AI sees how its input was interpreted. This feedback loop helps it construct better queries over time.
Smart Defaults Reduce Friction
Make your most common use case work with minimal parameters. If 90% of queries need 10 results, make that the default. If vectors are huge and rarely needed, make `include_vector=false` the default.
Good defaults mean the AI can call `list_objects(collection="Product")` and get something useful without specifying 8 different parameters.
When to Add a Seventh Tool
Six tools cover most data sources, but sometimes you genuinely need more. Here's when:
Complex Domain Operations
If your system has workflows that don't map to the basic CRUD operations, add a tool. Like if you're building a vector store integration and you need "find similar objects based on this object's vector"—that's not quite search (you're not using a query string) and it's not quite fetch (you're getting multiple results). That's a seventh tool.
Batch Operations with Different Semantics
If your batch operation has fundamentally different behavior than just "do this operation N times," that's a separate tool. Like bulk import from CSV with schema validation and transaction rollback—that's not the same as calling upsert 1000 times. It has different error handling, different performance characteristics, different guarantees.
Specialized Read Operations
Sometimes you have a read pattern that doesn't fit search or list. Like "get the last 20 events for this user" or "show me the 5 most recent changes to this object." These have specific semantics that don't map cleanly to general-purpose search.
But be honest with yourself. Most of the time, you can handle the use case by adding a parameter to an existing tool. Don't create a seventh tool just because it feels cleaner—create it when the semantics are genuinely different.
Real-World Impact: What This Looks Like
We rebuilt our Weaviate MCP server using this pattern. Started with 12 tools, ended with 6.
Before: Separate tools for search vs. search_by_collection, get_object vs. get_schema, list_objects vs. filter_objects, insert vs. batch_insert, update vs. batch_update. You get the idea.
After: Six tools total. `fetch` and `search` for OpenAI compatibility. `list_collections` and `list_objects` with rich parameters. `upsert` and `delete` for mutations.
What changed:
- Tool selection got faster—the AI spends less time deciding which variant to call
- Context consumption dropped by half—same functionality, way less overhead
- The AI started using advanced features automatically because it could read parameter descriptions
- Error rates dropped because there were fewer wrong tool combinations to choose from
The functionality didn't change. We can still do everything we could before. But the interface got way more efficient.
How This Compares to Other Approaches
The REST API Trap
Most people start by mapping REST endpoints to MCP tools. GET, POST, PUT, PATCH, DELETE all become separate tools. That's 5-6 tools just for basic CRUD on one resource type.
This works for HTTP APIs where each endpoint is stateless and independent. But in MCP, the AI is building a mental model of your system. Having 6 ways to modify an object is cognitive overhead. Better to have one `upsert` tool that handles create and update.
The GraphQL Approach
GraphQL solves flexibility by letting you write custom queries. Want specific fields? Write a query that requests those fields. Need filtering? Add a WHERE clause to your query string.
The problem with this in MCP: the AI has to learn your query language. It has to construct syntactically correct GraphQL strings and debug them when they're wrong.
The six-tool pattern gets you GraphQL-level flexibility without query language complexity. The AI just sets parameters to values. No syntax to learn, no strings to debug.
Why Six Is the Right Number
Two tools for universal discovery (`fetch`, `search`)—these work with any AI that follows OpenAI's pattern.
Two tools for domain-specific browsing (`list_collections`, `list_objects`)—these give you full control through parameters.
Two tools for mutations (`upsert`, `delete`)—these handle all write operations through smart routing.
That's the minimum set that gives you complete functionality without redundancy.
How to Migrate Existing Servers
Got an MCP server with 15 tools and you want to consolidate? Here's how:
Step 1: Group by Purpose
Write down all your tools and group them by what they actually do:
- Reading single items
- Reading multiple items
- Creating things
- Updating things
- Deleting things
You'll probably find you have 3-4 variations of each operation that differ only in parameters.
Step 2: Identify the Consolidation
Look for tools that do the same thing with minor variations:
- `search` and `search_by_collection` → one `search` tool with optional `collection` parameter
- `list_objects` and `list_objects_with_limit` → one `list_objects` tool with a `limit` parameter
- `insert` and `update` → one `upsert` tool that routes based on whether you provide an ID
Most servers can go from 12-18 tools down to 6-8 through this consolidation.
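A minimal before/after illustration of one of those consolidations (signatures only, names taken from the list above):

```python
# Before: two tools that differ only in one parameter
async def list_objects(collection: str) -> dict: ...
async def list_objects_with_limit(collection: str, limit: int) -> dict: ...

# After: a single tool with an optional, well-described parameter
# (this definition replaces both of the above)
async def list_objects(collection: str, limit: int = 10) -> dict: ...
```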
Step 3: Migrate the Documentation
All that per-tool documentation? Move it into parameter descriptions. The tool description explains what the tool does overall. Each parameter description explains how to use that specific feature.
This is actually better documentation because it's attached to the thing it documents. The AI reads the parameter description right when it's deciding what value to pass.
Step 4: Deprecation Path (If You Have Users)
If you have existing users, don't just break their integrations. Keep the old tools as thin wrappers that call the new tools:
```python
@mcp.tool(description="[DEPRECATED] Use 'upsert' instead")
async def insert(data: dict) -> dict:
    return await upsert(data=data)
```
Give people a migration period, then remove the wrappers.
Step 5: Show Examples
Update your docs with before/after examples so people understand the new pattern. Show how one tool with parameters replaces multiple separate tools.
The Bottom Line
Start with six tools. Not five, not twelve. Six.
Two tools follow OpenAI's single-string pattern for universal discovery. Two tools give rich, parameter-driven control for domain-specific browsing. Two tools handle all write operations through smart routing.
This isn't about being minimal for its own sake. It's about context efficiency. Every tool definition you add costs context tokens and cognitive overhead. The question isn't "can I add another tool?"—it's "does this operation have fundamentally different semantics than what I already have?"
Most of the time, the answer is no. Most of the time, you can handle the use case by adding a parameter with a good description to an existing tool.
The six-tool pattern works because it balances two things:
- OpenAI compatibility where it matters (discovery and retrieval)
- Rich control where you need it (browsing and mutations)
You get GraphQL-level flexibility without query language complexity. You get REST-level completeness without endpoint proliferation. You get full functionality with minimal context waste.
Build your next MCP server with six tools. The AI will thank you with faster tool selection, and your users will thank you when their queries actually work.
Want to see the code? Check out our Weaviate MCP implementation or read about OpenAI's search and fetch requirements that inspired this pattern.