MCP Hit 97 Million Downloads. Here Is What Every Developer Needs to Know.
Quick summary
Model Context Protocol went from 2 million to 97 million monthly downloads in 16 months. With 5,800+ servers and adoption by OpenAI, Google, and Microsoft, MCP has won the agent infrastructure war.
The Model Context Protocol launched in November 2024 with around 2 million monthly SDK downloads. By March 2026 that number had reached 97 million. In 16 months, a protocol most developers had not heard of became foundational infrastructure for AI agent development.
That growth rate — 4,750% in 16 months — mirrors the adoption curves of npm in its early years and REST APIs in the mid-2000s. Both became so universal they stopped being described as technologies and became assumed infrastructure. MCP is on the same path, and if you are building anything that connects AI models to external tools, data, or services, understanding it is no longer optional.
What MCP Actually Is
MCP stands for Model Context Protocol. It is an open standard — created by Anthropic, now maintained cross-industry — that defines how AI models communicate with external tools and data sources.
Before MCP, every AI integration was bespoke. If you wanted Claude to query your database, you wrote a custom function. If you then wanted GPT-4o to do the same thing, you rewrote it in OpenAI's function calling format. If you wanted to add Gemini, you wrote it again. Every model had its own tool format, its own JSON schema expectations, its own way of invoking external functions and receiving results.
MCP solves this with a single protocol layer. You build one MCP server that exposes your database, your API, your file system, or your SaaS tool. Any MCP-compatible AI model — Claude, GPT-4o, Gemini, Grok — can connect to that server and use those capabilities without modification.
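To make that concrete, here is a schematic sketch of why one server works for every client: MCP messages are JSON-RPC 2.0, so the server dispatches on a method name and never needs to know whether the caller is Claude, GPT-4o, or Gemini. This is a toy illustration, not the official SDK, and the `query_database` tool is invented for the example.

```python
import json

# A made-up tool registry. The method names ("tools/list", "tools/call")
# follow the MCP spec; the tool itself is illustrative.
TOOLS = {
    "query_database": lambda args: {"rows": [], "query": args["sql"]},
}

def handle(request_json: str) -> str:
    """Answer one JSON-RPC 2.0 request, whoever the client is."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        params = req["params"]
        result = TOOLS[params["name"]](params.get("arguments", {}))
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

# Every MCP-compatible client speaks this same wire format:
resp = handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
```

The point is the absence of any provider branch: nothing in the server changes when a new model adds MCP support.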
The analogy that actually fits: USB-C for AI tools. One connector, any device.
Why MCP Won
The protocol war for AI agent infrastructure was real. In 2025, at least four competing approaches had meaningful adoption: OpenAI's function calling format, Anthropic's tool use format, LangChain's tool abstraction, and various custom implementations from enterprise vendors.
MCP won for three reasons.
Anthropic open-sourced it immediately. Rather than maintaining MCP as a proprietary Anthropic format, they published the specification, the TypeScript SDK, and the Python SDK under open licenses on day one. This removed the primary objection enterprises have to adopting vendor-specific protocols.
OpenAI committed to MCP support. This was the decisive moment. When OpenAI announced MCP compatibility in early 2025, the format fragmentation effectively ended. A developer building an MCP server no longer had to choose between Claude and GPT-4o support — they got both from a single implementation.
Google and Microsoft followed. With all four major AI providers supporting MCP, the network effects kicked in. Every MCP server built for one provider works with all providers. The 5,800+ servers that exist today were built because that guarantee held.
The 5,800 MCP Servers: What They Cover
The MCP server ecosystem is broader than most developers realise. The 5,800+ community and enterprise servers as of March 2026 cover:
Developer tools: GitHub (create PRs, review code, manage issues), GitLab, Jira, Linear, Notion, Confluence. These are the highest-adoption servers — developers integrating Claude or GPT-4o into engineering workflows reach for these first.
Databases: PostgreSQL, MySQL, MongoDB, Redis, Supabase, PlanetScale. Read and write access, schema introspection, query execution. The database MCP servers are what enable agents to actually interact with production data rather than hallucinating about it.
Cloud providers: AWS (S3, EC2, Lambda, CloudWatch), Google Cloud, Azure, Vercel, Fly.io. Infrastructure management through natural language is the use case — "show me all Lambda functions with errors in the last hour" becomes a single agent instruction rather than a console navigation exercise.
Communication and CRM: Slack, Gmail, Outlook, Salesforce, HubSpot. These are the servers that enterprise deployments reach for — connecting AI agents to the tools where actual business communication happens.
Productivity: Google Docs, Google Sheets, Airtable, Notion. Document creation and editing through agents.
Web and search: Brave Search, Exa, Firecrawl, Puppeteer. These give agents the ability to browse, scrape, and search in real time.
How MCP Works: The Technical Layer
MCP operates on a client-server model with three core primitives.
Resources are data that the server exposes for the model to read. A file system MCP server exposes files as resources. A database server exposes table schemas and query results. Resources are read-only from the protocol perspective — they give the model context without giving it write access.
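On the wire, a resource read looks roughly like this (JSON-RPC 2.0, following the MCP spec's `resources/read` method; the URI and file contents are invented for the example):

```python
import json

# Illustrative resources/read exchange. A file system server exposes
# files as URI-addressed resources; the path here is made up.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "file:///Users/yourname/projects/README.md"},
}
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "contents": [
            {"uri": request["params"]["uri"],
             "mimeType": "text/markdown",
             "text": "# My project\n"}
        ]
    },
}
wire = json.dumps(request)  # what actually crosses the transport
```

Note there is no write method in this exchange — the read-only guarantee is structural, not a convention the model is asked to follow.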
Tools are functions that the model can call to take actions. A GitHub MCP server exposes tools like create_pull_request, merge_branch, add_comment. When the model decides to create a PR, it calls the tool with the appropriate parameters and receives a structured response.
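A tool invocation is a request/response round trip. The sketch below uses the `create_pull_request` tool name from the GitHub server mentioned above, but the arguments and response body are illustrative, not that server's exact schema:

```python
# Illustrative tools/call round trip (JSON-RPC 2.0). Argument names and
# the response text are invented for the example.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_pull_request",
        "arguments": {"title": "Fix login bug", "base": "main", "head": "fix/login"},
    },
}
result = {
    "jsonrpc": "2.0",
    "id": 2,  # responses are matched to requests by id
    "result": {"content": [{"type": "text", "text": "Pull request created"}]},
}
```

The structured response is what the model reasons over next — it sees the tool's output as part of its context, not as an opaque side effect.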
Prompts are reusable instruction templates that the server can provide to the model. These allow server authors to package domain-specific knowledge — a database MCP server might include a prompt template for safe query construction that prevents SQL injection patterns.
The transport layer uses either stdio (for local servers running as subprocesses) or SSE over HTTP (for remote servers). Most developer-facing MCP servers use stdio in development and SSE in production deployments.
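For the stdio case, framing is simply one JSON-RPC message per line over the subprocess's stdin and stdout. A minimal sketch of that framing — the real SDKs implement this (plus the lifecycle handshake) for you, so this is only to show how little is on the wire:

```python
import io
import json
from typing import IO

def send(stream: IO[str], message: dict) -> None:
    # One JSON-RPC message per newline-delimited line, as in MCP's
    # stdio transport.
    stream.write(json.dumps(message) + "\n")
    stream.flush()

def receive(stream: IO[str]) -> dict:
    return json.loads(stream.readline())

# Demonstrated with an in-memory buffer standing in for the subprocess pipe:
buf = io.StringIO()
send(buf, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
buf.seek(0)
msg = receive(buf)
```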
Setting Up MCP: A Practical Starting Point
The fastest path to a working MCP integration is the Claude Desktop app with a local MCP server. Here is the minimal setup for connecting Claude to your file system and a GitHub repo.
Install the Claude Desktop app. Open the config file at ~/Library/Application Support/Claude/claude_desktop_config.json on Mac. Add your MCP server configurations:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token_here"
      }
    }
  }
}
```
Restart Claude Desktop. You now have Claude with direct access to your local project files and your GitHub repositories. You can ask it to read your codebase, create branches, review open PRs, and push changes — all within a single conversation.
For production deployments with Claude Code, MCP servers are configured in the project's .claude/ directory and run as persistent processes rather than on-demand subprocesses.
MCP vs Function Calling: The Actual Difference
Developers already familiar with OpenAI function calling or Anthropic tool use sometimes ask why MCP is needed — they already have a way to give models tools.
The difference is portability and ecosystem.
Function calling definitions live in your application code. If you define a query_database tool for your GPT-4o integration, that definition is specific to your project and your provider. You cannot share it, you cannot reuse it across providers, and when you switch models you rewrite the integration.
An MCP server is a standalone service. You build it once, you publish it (optionally), and any MCP-compatible model can use it. The 5,800 community servers exist because the ecosystem compounds — each server benefits every developer using any MCP-compatible model.
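The duplication MCP removes can be sketched in a few lines: the same tool described once, then projected into two provider-specific formats. The shapes below follow OpenAI's function-calling and Anthropic's tool-use APIs as they stood in early 2025; the `query_database` tool is illustrative.

```python
# One neutral tool definition...
tool = {
    "name": "query_database",
    "description": "Run a read-only SQL query",
    "schema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

def to_openai(t: dict) -> dict:
    # OpenAI wraps the JSON Schema under "function" / "parameters".
    return {"type": "function",
            "function": {"name": t["name"], "description": t["description"],
                         "parameters": t["schema"]}}

def to_anthropic(t: dict) -> dict:
    # Anthropic keeps it flat and calls the schema "input_schema".
    return {"name": t["name"], "description": t["description"],
            "input_schema": t["schema"]}
```

With plain function calling, you maintain both projections by hand in every project. An MCP server publishes the definition once, and every compatible client consumes it.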
The other difference is statefulness. MCP maintains a persistent connection between the model and the server during a session. The server can push updates to the model (new resources, changed tool availability) without the model polling. Function calling is stateless — each call is independent.
For simple tool use in a single application with a single provider, function calling is fine. For anything that spans multiple models, multiple sessions, or that you want to share with other developers, MCP is the right architecture.
What the 2026 MCP Roadmap Adds
The MCP specification team published a 2026 roadmap in February. The three additions most relevant to developers:
Multi-agent coordination — the current spec assumes a single model connecting to servers. The 2026 roadmap adds a formal mechanism for one MCP server to spawn and coordinate sub-agents, each with their own server connections. This enables proper hierarchical agent architectures without custom orchestration code.
Authentication standardisation — current MCP implementations handle auth ad-hoc. The roadmap includes a standard OAuth 2.0 flow for MCP servers, which will significantly reduce the friction of building enterprise-grade MCP integrations.
Streaming tool responses — tools currently return complete responses synchronously. The roadmap adds streaming, which matters for tools that generate large outputs (database queries returning thousands of rows, file operations on large files) and for tools that need to report progress.
The Developer Opportunity Right Now
97 million downloads means the ecosystem is large but not yet saturated. The existing MCP servers cover the obvious integrations — GitHub, Slack, databases. The servers that do not yet exist, or exist only as low-quality implementations, cluster in three areas:
Domain-specific APIs — every vertical SaaS has an API that does not yet have a well-maintained MCP server. Healthcare records systems, logistics platforms, financial data providers, ERP systems. Building and publishing a high-quality MCP server for a specific industry API is currently a real opportunity.
Enterprise internal tools — most enterprise internal tools have no MCP server. The developer who builds an MCP server for their company's internal systems becomes the person who made every AI tool in the company dramatically more useful.
Monitoring and observability — Datadog, PagerDuty, Grafana, Sentry all have APIs. MCP servers that give AI agents read and triage access to production monitoring data are genuinely useful and not yet commoditised.
Key Takeaways
- MCP reached 97 million monthly SDK downloads in March 2026, up from 2 million at launch in November 2024 — 4,750% growth in 16 months
- The format war is over: OpenAI, Google, Microsoft, and Anthropic all support MCP — one server works with all major models
- 5,800+ community servers cover developer tools (GitHub, Linear, Jira), databases (PostgreSQL, MongoDB, Redis), cloud providers (AWS, GCP, Azure), and communication tools (Slack, Gmail)
- Three primitives: Resources (read-only data), Tools (callable functions), Prompts (reusable instruction templates)
- MCP vs function calling: MCP is portable across providers and composable across the ecosystem; function calling is project-specific and provider-specific
- 2026 roadmap adds multi-agent coordination, OAuth 2.0 standardisation, and streaming tool responses
- The opportunity: domain-specific API servers, enterprise internal tool integrations, and observability tooling are underbuilt relative to current demand
Written by
Abhishek Gautam
Software Engineer based in Delhi, India. Writes about AI models, semiconductor supply chains, and tech geopolitics — covering the intersection of infrastructure and global events. 355+ posts cited by ChatGPT, Perplexity, and Gemini. Read in 121 countries.