MCP (Model Context Protocol) in 2026: What It Is, Why It Hit 97M Downloads, and How to Use It

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that standardizes how AI systems connect to external tools, data sources, and services. Instead of writing custom integrations for every AI-tool pair, MCP provides a universal connector: one AI, any tool. By March 2026, MCP had crossed 97 million monthly SDK downloads, up from 100,000 at launch: a 970x increase in roughly 16 months. In the same window, 78% of enterprise AI teams put at least one MCP-backed agent in production, 67% of CTOs surveyed named MCP their default agent-integration standard, and OpenAI, Google, Microsoft, and Salesforce all shipped MCP support within 13 months of launch.
MCP is not hype. It is becoming the integration standard for AI agents the way REST became the standard for web APIs.
What MCP Solves
Before MCP, connecting an AI to your company's tools required custom code for every combination:
Claude → Salesforce: custom integration
Claude → PostgreSQL: custom integration
Claude → Slack: custom integration
GPT-5.5 → Salesforce: different custom integration
GPT-5.5 → PostgreSQL: different custom integration
This created an integration matrix problem. Ten AI models × ten tools = one hundred integrations to build and maintain.
MCP collapses this to a single integration per tool:
Salesforce MCP Server ← Claude, GPT-5.5, Gemini (all use same server)
PostgreSQL MCP Server ← Claude, GPT-5.5, Gemini (all use same server)
Slack MCP Server ← Claude, GPT-5.5, Gemini (all use same server)
Build the MCP server once. Every MCP-compatible AI uses it automatically.
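The matrix arithmetic can be sanity-checked in a few lines (the ten-by-ten counts are the illustrative example above, not real ecosystem numbers):

```python
# Without MCP: every (model, tool) pair needs its own custom integration.
models, tools = 10, 10
integrations_without_mcp = models * tools  # 100 pairwise integrations
# With MCP: one server per tool, reused by every MCP-compatible model.
integrations_with_mcp = tools              # 10 servers
print(integrations_without_mcp, integrations_with_mcp)  # 100 10
```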
How MCP Works (Technical Architecture)
MCP uses a client-server model:
- MCP Host: The AI application (Claude Desktop, your custom agent, Cursor)
- MCP Client: The protocol client inside the host that manages connections
- MCP Server: A lightweight server you build that exposes your tools and data
Your AI Agent (MCP Client)
│
├──→ Salesforce MCP Server (your CRM tools)
├──→ PostgreSQL MCP Server (your database)
├──→ GitHub MCP Server (your code repos)
└──→ Slack MCP Server (your team comms)
Three primitives MCP servers expose:
| Primitive | What It Is | Example |
|---|---|---|
| Tools | Functions the AI can call (causes side effects) | create_ticket(title, priority), send_email(to, body) |
| Resources | Data the AI can read (no side effects) | customers/{id}/profile, documents/policy.pdf |
| Prompts | Reusable prompt templates | /summarize-support-ticket |
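Under the hood, these primitives travel as JSON-RPC 2.0 messages. A sketch of the `tools/call` request a client sends to invoke a tool; the tool name and arguments reuse the illustrative `create_ticket` example from the table, and real request ids vary per session:

```python
import json

# The JSON-RPC 2.0 envelope an MCP client sends to invoke a server tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_ticket",
        "arguments": {"title": "Login page returns 500", "priority": "high"},
    },
}
print(json.dumps(request, indent=2))
```

The server responds with a matching-id result containing the tool's content blocks, so any client that speaks this envelope can use any server.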
Building Your First MCP Server (Python)
```python
# pip install mcp psycopg2-binary
import asyncio
import json
import os

import psycopg2

from mcp import types
from mcp.server import Server
from mcp.server.stdio import stdio_server

# Connection string comes from the environment (set in the client config below).
DATABASE_URL = os.environ["DATABASE_URL"]

server = Server("postgres-mcp-server")


# Tools: functions the AI can call
@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="query_database",
            description="Execute a read-only SQL query against the application database",
            inputSchema={
                "type": "object",
                "properties": {
                    "sql": {
                        "type": "string",
                        "description": "The SQL SELECT query to execute. Must be read-only.",
                    }
                },
                "required": ["sql"],
            },
        ),
        types.Tool(
            name="get_customer",
            description="Get customer details by ID",
            inputSchema={
                "type": "object",
                "properties": {"customer_id": {"type": "integer"}},
                "required": ["customer_id"],
            },
        ),
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    conn = psycopg2.connect(DATABASE_URL)
    try:
        cur = conn.cursor()
        if name == "query_database":
            sql = arguments["sql"]
            # Safety: only allow SELECT statements
            if not sql.strip().upper().startswith("SELECT"):
                return [types.TextContent(type="text", text="ERROR: Only SELECT queries are allowed")]
            cur.execute(sql)
            rows = cur.fetchall()
            # default=str handles dates, decimals, and other non-JSON types
            return [types.TextContent(type="text", text=json.dumps(rows, default=str))]
        elif name == "get_customer":
            cur.execute("SELECT * FROM customers WHERE id = %s", (arguments["customer_id"],))
            row = cur.fetchone()
            return [types.TextContent(type="text", text=json.dumps(row, default=str))]
        raise ValueError(f"Unknown tool: {name}")
    finally:
        conn.close()


# Resource: expose the database schema so the AI can write correct SQL
@server.list_resources()
async def list_resources() -> list[types.Resource]:
    return [
        types.Resource(
            uri="db://schema",
            name="Database Schema",
            description="The full database schema with table and column definitions",
            mimeType="application/json",
        )
    ]


@server.read_resource()
async def read_resource(uri) -> str:
    if str(uri) != "db://schema":
        raise ValueError(f"Unknown resource: {uri}")
    conn = psycopg2.connect(DATABASE_URL)
    try:
        cur = conn.cursor()
        cur.execute("""
            SELECT table_name, column_name, data_type
            FROM information_schema.columns
            WHERE table_schema = 'public'
            ORDER BY table_name, ordinal_position
        """)
        return json.dumps(cur.fetchall())
    finally:
        conn.close()


async def main():
    # stdio transport: the client launches this script and talks over stdin/stdout
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())


if __name__ == "__main__":
    asyncio.run(main())
```
Connecting to Claude Code
Once your MCP server is written, register it so Claude Code can launch it over stdio. Project-scoped servers go in a `.mcp.json` file at the repository root (you can also add them with `claude mcp add`):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "python",
      "args": ["/path/to/your/postgres_mcp_server.py"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/mydb"
      }
    }
  }
}
```
Claude Code now has access to your database tools. You can ask: "Query the database and find all customers who signed up in the last 30 days but haven't made a purchase" and Claude will call query_database with the right SQL.
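For that request, the generated query might look something like the following. This is an illustrative sketch: the `customers` and `purchases` tables and their column names are hypothetical, and your schema will differ.

```sql
SELECT c.id, c.email
FROM customers c
LEFT JOIN purchases p ON p.customer_id = c.id
WHERE c.signup_date >= CURRENT_DATE - INTERVAL '30 days'
  AND p.id IS NULL;
```

Because the server also exposes the `db://schema` resource, the model can read the real table and column names before writing the query.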
The MCP Ecosystem in 2026
The public MCP server registry crossed 12,000 servers by Q2 2026. Ready-to-use MCP servers exist for:
| Category | Popular Servers |
|---|---|
| Databases | PostgreSQL, MySQL, MongoDB, Supabase, PlanetScale |
| CRM / Sales | Salesforce, HubSpot, Pipedrive |
| Code / Dev | GitHub, GitLab, Jira, Linear, Sentry |
| Communication | Slack, Gmail, Microsoft Teams, Notion |
| Cloud | AWS, GCP, Azure, Vercel |
| Data | Snowflake, BigQuery, dbt, Airbyte |
| Documents | Google Drive, Confluence, SharePoint |
| Observability | Grafana, Datadog, PagerDuty |
For most enterprise use cases, you do not need to build an MCP server — you configure an existing one and grant the AI access. Our LLM integration team handles this configuration for enterprise deployments.
Enterprise Security Checklist
Before deploying MCP in production:
- Principle of least privilege: each MCP server only exposes the tools the agent actually needs
- Input validation: validate all tool arguments before executing (prevent prompt injection)
- Read-only by default: only grant write tools when explicitly required for the workflow
- Audit logging: log every tool invocation with timestamp, tool name, arguments, and caller identity
- Rate limiting: limit tool call frequency to prevent runaway agent loops
- OAuth 2.1: use enterprise SSO for remote MCP servers (part of the 2026 MCP roadmap)
- Sandboxed execution: run MCP servers with minimal OS permissions (Docker containers recommended)
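Several of these controls amount to a few lines of code. A minimal sketch of the input-validation, read-only, and audit-logging items, built around a hypothetical `run_tool` entry point (this wrapper is not part of the MCP SDK; wire it into your server's tool handler):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

# Tokens that indicate a write or a multi-statement payload.
# Note: the semicolon check also rejects a trailing ";" on a single SELECT,
# trading a little convenience for a simpler policy.
FORBIDDEN = ("INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE", ";")

def validate_readonly_sql(sql: str) -> bool:
    # Read-only by default: accept only a single SELECT statement.
    upper = sql.strip().upper()
    return upper.startswith("SELECT") and not any(tok in upper for tok in FORBIDDEN)

def run_tool(name: str, arguments: dict, caller: str) -> str:
    # Audit logging: record every invocation with timestamp, tool, args, caller.
    audit.info(json.dumps({"ts": time.time(), "tool": name,
                           "args": arguments, "caller": caller}))
    if name == "query_database" and not validate_readonly_sql(arguments.get("sql", "")):
        return "ERROR: query rejected by read-only policy"
    return "ok"  # hand off to the real tool implementation here

print(run_tool("query_database", {"sql": "DROP TABLE users"}, caller="agent-42"))
```

A keyword blocklist is a coarse guard; for production, prefer a read-only database role so the policy is enforced by PostgreSQL itself rather than by string matching.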
2026 MCP Roadmap
The official MCP roadmap for 2026 focuses on three areas:
- Enterprise authentication: OAuth 2.1 + SAML + enterprise IdP integration (replacing API keys)
- Multi-agent coordination: agent-to-agent tool calling — one AI agent can invoke another AI agent's MCP tools
- MCP registry: curated, verified server directory with security ratings and compliance certifications
The multi-agent coordination feature is the most significant: it enables hierarchical AI agent architectures where an orchestrator agent delegates subtasks to specialist agents, each exposing capabilities via MCP.
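The shape of that delegation can be sketched in plain Python. Everything here is a hypothetical stand-in: the specialist names and handlers are illustrative, not the real multi-agent MCP API, which is still on the 2026 roadmap:

```python
# Orchestrator-to-specialist routing, sketched without any MCP machinery.
# In the roadmap's design, each specialist would expose these handlers
# as MCP tools that the orchestrator agent invokes over the protocol.
SPECIALISTS = {
    "sql": lambda task: f"[sql-agent] ran query for: {task}",
    "crm": lambda task: f"[crm-agent] updated record for: {task}",
}

def orchestrate(subtasks):
    # Route each (specialty, task) pair to the matching specialist agent.
    return [SPECIALISTS[kind](task) for kind, task in subtasks]

results = orchestrate([("sql", "monthly churn"), ("crm", "ACME renewal")])
print(results)
```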
Ortem Technologies builds custom MCP servers for enterprise AI deployments — connecting AI agents to internal databases, CRMs, ERPs, and proprietary APIs without exposing raw database access. Explore our LLM integration services → | AI agent development → | Talk to our team →
About Ortem Technologies
Ortem Technologies is a premier custom software, mobile app, and AI development company. We serve enterprise and startup clients across the USA, UK, Australia, Canada, and the Middle East. Our cross-industry expertise spans fintech, healthcare, and logistics, enabling us to deliver scalable, secure, and innovative digital solutions worldwide.
Sources & References
1. MCP Hits 97M Downloads - Digital Applied
2. 2026: The Year for Enterprise-Ready MCP Adoption - CData
3. The 2026 MCP Roadmap - Model Context Protocol Blog
About the Author
Director – AI Product Strategy, Development, Sales & Business Development, Ortem Technologies
Praveen Jha is the Director of AI Product Strategy, Development, Sales & Business Development at Ortem Technologies. With deep expertise in technology consulting and enterprise sales, he helps businesses identify the right digital transformation strategies - from mobile and AI solutions to cloud-native platforms. He writes about technology adoption, business growth, and building software partnerships that deliver real ROI.
Frequently Asked Questions
- **What is MCP (Model Context Protocol)?** MCP is an open standard introduced by Anthropic in November 2024 that defines how AI systems connect to external tools and data sources. Before MCP, connecting an AI to a database required custom code per model per database. With MCP, you build one MCP server for your database, and any MCP-compatible AI (Claude, GPT-5.5, Gemini, etc.) can connect to it using the same protocol. MCP uses a client-server architecture: the AI (client) connects to MCP servers that expose tools and resources. Tools are functions the AI can call; resources are data the AI can read.
- **Why did MCP adoption grow so fast?** MCP adoption accelerated because all major AI providers adopted it within 13 months. OpenAI, Google, Microsoft, and Salesforce all shipped MCP support by early 2026. Once adoption became multi-provider, the ecosystem value compounded: any MCP server (there are 12,000+ as of Q2 2026) works with any MCP-compatible AI. For enterprises, MCP solves the integration fragmentation problem — instead of separate integrations for each AI tool and each data source, MCP provides a single integration layer that works with all of them.
- **How does an MCP server work, and how do I build one?** An MCP server exposes tools (functions the AI can call) and resources (data the AI can read) over a standardized protocol. In Python: install the mcp SDK, define your tools as decorated async functions (as in the server example above), and run the server. The MCP SDK handles all protocol communication. The AI sends a tool-call request with arguments; your server executes the function and returns the result. MCP servers can run locally (stdio transport) or remotely (HTTP/SSE transport) for multi-user or cloud deployments.
- **What is the difference between MCP tools and MCP resources?** MCP tools are functions the AI can call to perform actions: query a database, call an API, send an email, execute code, write a file. Tools are the AI's actuators — they cause side effects. MCP resources are data sources the AI can read: a file, a database table, a web page, a document. Resources are the AI's sensors — they provide information without causing side effects. The practical distinction: use resources when the AI needs to read context before deciding what to do. Use tools when the AI needs to take action. Most production MCP servers expose both.
- **How secure is MCP for enterprise deployments?** MCP's 2026 roadmap addresses the main enterprise security concerns: OAuth 2.1 authentication for enterprise identity provider integration (replacing simple API keys), audit logging for all tool calls (required for SOC 2 and HIPAA), and permission scoping (tools can be restricted to specific roles or user groups). The main risks in current MCP deployments: (1) overly permissive tool definitions that give the AI more access than needed — use principle of least privilege; (2) prompt injection through malicious resource content — validate all inputs; (3) missing audit trails — log all MCP tool invocations to your SIEM.

