
When AI Needs Hands: Crowdsourcing Human Workers via MCP

MCPBundles · 8 min read

We ran into a problem a few weeks ago that none of our tools could solve. It wasn't a technical problem — the code was fine, the infra was fine. We just needed someone to go do a thing on a website. Sign up, click around, grab some information, paste it into a form. Repeat a bunch of times.

AI couldn't do it. The sites had captchas, email verification, multi-step flows. We tried browser automation and it broke immediately. We needed a person.

So we thought: what if our AI agent could just hire one?

Cartoon illustration of an AI robot reaching through a portal to hand tasks to human workers around the world

The Missing Tool Category

We've spent the last year giving AI agents access to everything — databases, payment processors, CRMs, analytics platforms, email, cloud infrastructure. Hundreds of APIs wrapped in MCP tools. Machine talks to machine, structured data flows back and forth.

But there's a whole category of work that doesn't fit that pattern. Tasks that need a human body, human judgment, or just a human sitting at a browser:

  • Signing up for a service that requires email verification and a captcha
  • Visiting a physical location and photographing a storefront
  • Judging whether a translation sounds natural or robotic
  • Confirming that a website loads correctly from a specific country
  • Navigating a government portal that was designed in 2003

These aren't edge cases. We kept bumping into them. And every time, the workflow was the same: the AI would hit a wall, stop, and someone on the team would have to go do the thing manually.

That seemed fixable.

Microworkers Has an API

Microworkers is a crowdsourcing marketplace — 2M+ workers in 190+ countries. You post a task with instructions, set a price, workers pick it up and complete it. It's been around since 2009 and it's the kind of platform that most people have never heard of but quietly processes millions of tasks.

The interesting part for us: they have a full REST API. Not a webhook or an embed — a proper API where you can create campaigns, monitor submissions, rate work, and manage payments programmatically.

We wrapped the entire thing — all 53 endpoints — into an MCP provider. Now an AI agent can post work for humans, wait for the results, review what came back, and decide whether to approve or reject it. All through standard tool calls.

The Weird Part

Most MCP tools follow a simple pattern. The AI calls a tool, gets data back, moves on. Stripe returns a payment. Postgres returns query results. HubSpot returns a contact record.

Microworkers is different. The AI posts a task and then... waits. For a person. Somewhere in the world, a human opens their browser, reads the instructions, does the work, and submits their answers. The AI polls for results, reviews what came back, and rates it — OK (worker gets paid), NOK (rejected, with a reason), or REVISE (sent back with instructions on what to fix).

The AI is the employer. It's delegating work to humans, evaluating quality, and managing a workforce. Through tool calls.

AI Agent                        Microworkers                  Human Workers
   │                                 │                              │
   ├─ Create template ──────────────►│                              │
   ├─ Launch campaign ──────────────►│── Distribute task ──────────►│
   │                                 │                              │
   │   (wait for submissions)        │◄── Submit work ──────────────┤
   │                                 │                              │
   ├─ Poll for results ─────────────►│                              │
   ├─ Review answers ───────────────►│                              │
   ├─ Rate: OK / NOK / REVISE ──────►│── Pay or reject ────────────►│
   │                                 │                              │
   └─ Use collected data ───────────►│                              │
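
The agent-side half of that diagram can be sketched as a simple loop. This is a hedged illustration, not the provider's actual interface: the tool names (create_template, launch_campaign, list_slots, rate_slot) and the call_tool callable are hypothetical stand-ins for whatever MCP tool-calling shim the agent uses.

```python
import time

def run_campaign(call_tool, template_html, review, positions=30,
                 payment=0.30, poll_seconds=60, max_polls=120):
    """Create a template, launch a campaign, then poll and rate
    submissions until enough are approved (or we give up).

    `review` maps a worker's answers to ("OK" | "NOK" | "REVISE", reason).
    Assumes list_slots with status="submitted" only returns unrated work.
    """
    template = call_tool("create_template", html=template_html)
    campaign = call_tool("launch_campaign", template_id=template["id"],
                         positions=positions, payment_per_task=payment)
    approved = []
    for _ in range(max_polls):
        for slot in call_tool("list_slots", campaign_id=campaign["id"],
                              status="submitted"):
            rating, reason = review(slot["answers"])
            call_tool("rate_slot", slot_id=slot["id"],
                      rating=rating, comment=reason)
            if rating == "OK":
                approved.append(slot["answers"])
        if len(approved) >= positions:
            break
        time.sleep(poll_seconds)
    return approved
```

The rating step is where the "AI as employer" part lives: the review function is the agent's own quality judgment, and its verdict decides whether the worker gets paid.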

How Templates Work

The quality of crowdsourced work lives or dies on the instructions. Microworkers uses HTML templates — workers see a rendered page and fill out form fields. The platform auto-detects standard HTML elements (<input>, <textarea>, <select>) and tracks them as structured questions.

Here's what a simple verification template looks like:

<h3>Check if this business is still open</h3>
<p>Visit <a href="https://example.com" target="_blank">this business website</a>
and answer the questions below.</p>

<label>Is the website working? (no error pages, loads fully)</label>
<select name="website_working" required>
  <option value="">-- Select --</option>
  <option value="yes">Yes</option>
  <option value="no">No</option>
</select>

<label>What are their current business hours?</label>
<input name="business_hours" type="text" required />

<label>Screenshot of the homepage:</label>
<input name="screenshot" type="file" />

The AI creates the template, launches a campaign, and polls for submissions. Each one comes back with the worker's answers as structured data. No parsing, no scraping — just form responses.
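
Concretely, the answers come back keyed by the name attributes of the template's form fields. The exact envelope around them is an assumption here, not the documented payload shape; the point is that the agent can validate submissions with plain dictionary checks.

```python
# Hypothetical submission for the template above, keyed by field names.
submission = {
    "website_working": "yes",
    "business_hours": "Mon-Fri 9am-5pm",
    "screenshot": "https://example.com/uploads/shot1.png",
}

def is_complete(answers):
    # Both required fields from the template must be present and valid
    # before the agent bothers with a closer quality review.
    return (answers.get("website_working") in ("yes", "no")
            and bool(answers.get("business_hours", "").strip()))
```

A submission failing this check is an easy REVISE candidate; anything passing it moves on to actual quality review.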

What It Costs

Microworkers pricing is straightforward. You set the payment per task (minimums depend on category and region — typically $0.10–0.50 for simple tasks), and the platform adds a 7.5% fee on top. Campaigns need at least 30 positions, but you can stop early if you only need a handful of results.

A 30-task campaign at $0.30 per task costs $9.68 total. Results for simple tasks in popular regions start arriving within minutes.
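
The arithmetic behind that figure, in integer cents to avoid float drift. The half-up rounding of the fee is an assumption, chosen because it reproduces the $9.68 total quoted above ($9.00 + 7.5% = $9.675 → $9.68).

```python
def campaign_cost_cents(payment_cents, positions, fee_bp=750):
    """Total campaign cost in integer cents: per-task payment times
    positions, plus the platform fee (750 basis points = 7.5%),
    rounded half-up to the nearest cent."""
    subtotal = payment_cents * positions
    fee = (subtotal * fee_bp + 5000) // 10000
    return subtotal + fee

# 30 tasks at 30 cents: 900 + 68 fee = 968 cents, i.e. $9.68
```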

You can also enforce unique answers per question across the campaign — useful when you need 30 distinct data points, not 30 copies of the same answer.

Use Cases That Actually Make Sense

Content verification across countries. You ship a new feature and want to know if it works in Brazil, Japan, Germany, and Nigeria. Post a task: "Visit this URL, take a screenshot, report any errors." Workers on real devices in real countries submit actual screenshots. The AI reviews them and flags issues. No VPN gymnastics, no simulated locales — real humans in real places.

Data labeling. Training a classifier and need labeled examples? Post images with a form: "Is this a cat or a dog? Rate your confidence 1–5." Workers label, the AI aggregates results, low-confidence labels get sent back for revision. This is the original crowdsourcing use case and it still works well.

Lead enrichment. You have a list of company names and need to fill in the blanks — who's the CTO, what's their tech stack, do they have an API. Workers visit company websites, LinkedIn pages, job postings, and fill out structured forms. The AI collects, deduplicates, and validates the data.

Competitive research. "Visit [competitor].com/pricing. List every plan name, price, and what's included." Workers navigate the actual site, read the actual page, and fill in structured data. The AI builds a comparison table from real, current information — not cached search results from six months ago.

User testing. "Sign up for our product, try to complete [task], and tell us where you got stuck." Workers walk through your flow on their own devices and report friction. Cheaper than a formal usability study, faster than recruiting testers, and the AI can synthesize the feedback.
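
The data-labeling case above has a natural aggregation step on the agent side. Here's a minimal sketch: majority vote weighted by each worker's self-reported 1–5 confidence, flagging low-agreement items for a REVISE round. The 0.7 threshold and the tuple shapes are illustrative choices, not anything the platform prescribes.

```python
from collections import defaultdict

def aggregate(labels, agreement_threshold=0.7):
    """labels: list of (label, confidence) pairs for one item.
    Returns (winning_label, agreement_ratio, needs_revision)."""
    weight = defaultdict(float)
    for label, confidence in labels:
        weight[label] += confidence
    total = sum(weight.values())
    winner = max(weight, key=weight.get)
    agreement = weight[winner] / total
    return winner, agreement, agreement < agreement_threshold
```

Three workers answering ("cat", 5), ("cat", 4), ("dog", 2) agree strongly enough to accept; an even split gets sent back for another pass.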

Two Campaign Types

Basic campaigns are open to all workers in a geo zone — international, US/Canada/UK, Western Europe, Asia, Latin America. Post it and anyone qualified can pick it up.

Hire group campaigns target specific worker pools. Microworkers has built-in groups (verified workers, mobile-only, etc.) and you can build custom groups from workers who did well on your previous campaigns. Useful when you've found reliable people and want to keep working with them.

The Full Surface

The provider covers every Microworkers API endpoint:

Area          Tools   What they do
Account         1     Balance and account info
Campaigns      10     Full lifecycle — create through delete
Categories      3     Browse categories and pricing
Templates       6     Create and manage task forms
Slots           5     Review submissions, rate, bonus
Tasks           3     Per-worker task data
Task Groups     6     Grouped task management
Hire Groups    10     Worker pool targeting
Geo Zones       1     Geographic targeting
Transactions    2     Payment history
Workers         4     Worker management
Job Queue       1     Async job tracking
Total          53

Every tool has annotations — read-only hints, destructive flags, idempotent markers — so AI agents can reason about what's safe to call without asking first.
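
As an illustration of how a client might use those hints: the hint field names (readOnlyHint, destructiveHint, idempotentHint) come from the MCP spec's tool annotations, but the tool names and the policy below are hypothetical examples, not the provider's actual metadata.

```python
# Hypothetical annotation metadata for three tools in the bundle.
ANNOTATIONS = {
    "get_balance":     {"readOnlyHint": True},
    "rate_slot":       {"idempotentHint": True},
    "delete_campaign": {"destructiveHint": True},
}

def safe_to_call_unprompted(tool):
    """One possible client policy: read-only tools run freely,
    destructive tools always need confirmation, and anything
    else defaults to asking first."""
    hints = ANNOTATIONS.get(tool, {})
    if hints.get("destructiveHint"):
        return False
    return bool(hints.get("readOnlyHint"))
```

Under this policy an agent can check a balance without asking, but deleting a campaign (or any unannotated tool) triggers a confirmation step.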

Getting Started

The Microworkers bundle is on MCPBundles. You need a Microworkers API key (get one here) and some balance in your account. Connect the credential, and your AI agent has access to the full 53-tool surface.

The bundle includes a skill that walks AI agents through the whole workflow — browsing categories, creating templates, launching campaigns, polling for results, rating submissions. Once you've connected the credential, the AI knows what to do.

Why This Matters

We've been building MCP tools for a year and they all follow the same shape: AI talks to machine, machine talks back. The Microworkers provider is the first one where the AI talks to people. It posts work, waits, evaluates what comes back, and pays for quality.

That's a genuinely new capability. When an AI agent hits something it can't do — because it needs a human in a browser, or a local perspective from another country, or a judgment call that requires common sense — it doesn't have to stop anymore. It can get help. Programmatically. For a few cents per task.

The boundary between what AI can handle and what requires a human hasn't gone away. But AI can now reach across it whenever it needs to.