Turn Any API Into an MCP Tool — Automatically

6 min read
MCPBundles

Every AI platform supports MCP tools. But what happens when the tool you need doesn't exist yet? Maybe you're working with an internal API, a niche SaaS, or a service that nobody has wrapped in MCP. Until now, you'd write a custom server.

Now your AI can do it itself.

[Illustration: an AI robot at a desk connecting to many cloud services]

The Problem

MCP tools are powerful — your AI gets structured access to real services with real credentials. But the ecosystem has a gap: if there isn't already an MCP server for the API you need, you're stuck writing one. That's a Python project, a server process, authentication handling, deployment, and maintenance. For a single API endpoint.

Most APIs are straightforward. They have a base URL, they accept JSON, they return JSON, and they need an API key in a header. The infrastructure to wrap them shouldn't be harder than using them.

Two Commands and a Skill

pip install mcpbundles
mcpbundles init

That installs the CLI and authenticates it. But here's the key part: mcpbundles init also generates a skill file — a concise instruction set that teaches your AI agent how to use the CLI. It covers tool discovery, tool calling, REST API registration, and every auth pattern. Add the skill to your AI's context (Cursor rules, Claude Code AGENTS.md, system prompt — wherever your agent reads instructions) and it knows how to build an MCP tool for any API you point it at.

The AI doesn't need prior knowledge of the MCPBundles platform. The skill tells it everything: how to search for tools, how to register new APIs, how to call them, and how to handle errors. From that point, your AI agent — in Cursor, Claude Code, Codex, or any tool with terminal access — can create new API tools on the fly.

Here's what happens when you tell your AI "connect to the NASA API":

mcpbundles call upsert_provider -- \
base_url="https://api.nasa.gov" \
api_key="DEMO_KEY" \
auth_type="query_param" \
auth_header_name="api_key" \
name="NASA"

The CLI creates a provider, a bundle, a credential, and a tool — all in one step. The response tells the AI everything it needs:

{
  "provider_slug": "nasa-2c85",
  "bundle_slug": "nasa",
  "tool_count": 1,
  "mode": "rest_api",
  "base_url": "https://api.nasa.gov",
  "auth_type": "query_param"
}

From that point, the AI can call the API:

mcpbundles call nasa-2c85-api-request --bundle nasa -- \
method="GET" path="/planetary/apod"

And get back real data with full request metadata:

{
  "title": "The Guardians of Rapa Nui beneath the Milky Way",
  "url": "https://apod.nasa.gov/apod/image/2603/rapa_nui_milky_way_1024.jpeg",
  "_request": {
    "method": "GET",
    "url": "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY",
    "status_code": 200,
    "duration_ms": 681
  }
}

The AI didn't need to know how NASA's API works. It just needed the base URL and the API key.

Any Auth Pattern

Not every API uses the same authentication. The tool handles all the common patterns:

  • Bearer token (default) — Authorization: Bearer {key}
  • Custom header — any header name, like X-API-Key or api-key
  • Query parameter — key appended to the URL, like ?api_key=xxx
  • Basic auth — username:password
  • No auth — public APIs

The AI picks the right one for the API it's connecting to. Authentication is injected automatically on every request — the AI never sees or handles the raw credential again.
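To make the five patterns concrete, here is a minimal Python sketch of what credential injection looks like. This is illustrative only, not the platform's implementation; of the `auth_type` values, only `"query_param"` appears in the example above, so the other string names here are assumptions.

```python
import base64

def apply_auth(auth_type, credential, headers, query_params, header_name="Authorization"):
    """Illustrative sketch: inject a stored credential into an outgoing
    request according to the provider's auth type. The auth_type names
    other than "query_param" are assumed, not taken from the CLI docs."""
    if auth_type == "bearer":
        headers["Authorization"] = f"Bearer {credential}"
    elif auth_type == "custom_header":
        headers[header_name] = credential            # e.g. X-API-Key
    elif auth_type == "query_param":
        query_params[header_name] = credential       # e.g. ?api_key=xxx
    elif auth_type == "basic":
        encoded = base64.b64encode(credential.encode()).decode()  # "username:password"
        headers["Authorization"] = f"Basic {encoded}"
    # "none": public API, nothing to inject
    return headers, query_params
```

Whatever the pattern, the credential stays on the platform side; the model only ever supplies the request method, path, and parameters.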

What Makes This Different

The tool that gets created isn't a dumb HTTP wrapper. It's built to work well with any AI model calling it:

Structured errors. A 404 doesn't come back as a raw HTML page. It comes back as {"error": "not_found", "hint": "Endpoint not found. Double-check the path — base URL is https://api.nasa.gov."}. The AI knows what went wrong and what to try next.

Request metadata. Every response includes _request with the exact URL that was hit, the status code, the response time, and the content type. When something goes wrong, the AI can see exactly what it built and self-correct.

Response headers. Rate limits, pagination cursors, and request IDs from the API's response headers are surfaced automatically. The AI can follow Link headers to paginate, or back off when it sees rate limit warnings.
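Following a pagination cursor from a `Link` header is the kind of thing the AI can do once those headers are surfaced. A rough sketch of the parsing involved (standard RFC 8288 `Link` syntax, not MCPBundles-specific code):

```python
def next_page_url(headers):
    """Sketch: extract the rel="next" target from an HTTP Link header,
    e.g. Link: <https://api.example.com/items?page=2>; rel="next"."""
    link = headers.get("Link", "")
    for part in link.split(","):
        segments = part.split(";")
        if any('rel="next"' in s.strip() for s in segments[1:]):
            return segments[0].strip().strip("<>")
    return None
```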

Auto-retry. If the API returns a 429 or 5xx, the tool retries once with backoff — respecting Retry-After headers. The AI doesn't waste a turn manually retrying transient failures.
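The retry behavior described above amounts to something like this sketch (a simplified model of "retry once with backoff, honoring Retry-After", not the actual tool code):

```python
import time

def request_with_retry(do_request):
    """Sketch of retry-once-with-backoff: on a 429 or 5xx response,
    wait (using Retry-After when the API provides it) and retry a
    single time before giving the result back to the model."""
    status, headers, body = do_request()
    if status == 429 or status >= 500:
        delay = float(headers.get("Retry-After", 1))  # seconds; 1s default backoff assumed
        time.sleep(delay)
        status, headers, body = do_request()
    return status, headers, body
```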

Field selection. Large API responses burn context. The select parameter lets the AI extract just the fields it needs: select: ["id", "name", "address.city"]. A 50-field response becomes 3 fields.
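Conceptually, `select` is a projection over the response with dotted-path support. A sketch of that behavior (the flattened output shape here is an assumption for illustration):

```python
def select_fields(response, paths):
    """Sketch of field selection: keep only the requested fields,
    following dotted paths into nested objects (e.g. "address.city").
    Paths that don't resolve are simply dropped."""
    out = {}
    for path in paths:
        node = response
        for key in path.split("."):
            if not isinstance(node, dict) or key not in node:
                node = None
                break
            node = node[key]
        if node is not None:
            out[path] = node
    return out
```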

Binary detection. If the API returns a PDF, image, or zip file, the tool returns metadata (content_type, size_bytes) instead of dumping binary garbage into the conversation.

Truncation with guidance. When a response is too large, the tool truncates it and tells the AI: "Response truncated. Use 'select' to pick specific fields, or query_params to paginate/filter." The AI learns how to handle it.
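The shape of that behavior, sketched in a few lines (the field names and size limit are assumptions; the hint text is the one quoted above):

```python
import json

GUIDANCE = ("Response truncated. Use 'select' to pick specific fields, "
            "or query_params to paginate/filter.")

def truncate_response(payload, limit=4000):
    """Sketch of truncation-with-guidance: if the serialized response
    exceeds the limit, cut it off and attach a hint the model can act on."""
    text = json.dumps(payload)
    if len(text) <= limit:
        return {"data": payload}
    return {"data_truncated": text[:limit], "hint": GUIDANCE}
```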

Full Platform Visibility

This is where it gets interesting. The API tool your AI creates through the CLI isn't just a CLI thing — it's a full MCP tool on the platform.

  • It shows up in your MCPBundles dashboard with the rest of your tools
  • Every execution is logged and visible in the tool history
  • The credential is encrypted and managed — not sitting in a .env file
  • Other team members in the same workspace can use the same tool
  • Any AI platform connected to your MCPBundles workspace can call it — Claude Desktop, Cursor, ChatGPT, or your own app

Your AI creates the tool through the CLI. You see it in the dashboard. Your team uses it everywhere.

The AI Manages Itself

The real shift here is that your AI agent can expand its own capabilities. It hits an API it doesn't have a tool for, creates one, and starts using it — all within the same conversation. No human needs to write a server, configure a provider, or deploy anything.

Tell your AI "get the weather in London" and it can:

  1. Register the Open Meteo API (no auth needed)
  2. Call the forecast endpoint with coordinates
  3. Return the temperature

Tell it "check our Brevo email campaigns" and it can:

  1. Register the Brevo API with your API key
  2. List campaigns
  3. Pull stats on the last send

Each time, a real tool is created on the platform — with encrypted credentials, execution logging, and access from any connected AI client.

Get Started

pip install mcpbundles
mcpbundles init

init authenticates the CLI and generates a skill file. Add that skill to your AI agent's instructions — as a Cursor rule, in your AGENTS.md, or wherever your agent reads context. That's it. Your AI now knows how to discover 10,000+ existing tools, create new ones for any REST API it encounters, and manage credentials and tools on the platform.

One install, one login, one skill. Every API.

The MCPBundles CLI docs cover setup, named connections for teams, and the full command reference.