The Model Context Protocol (MCP) has done in eighteen months what most standards take a decade to do: become the assumed integration layer for AI agents. Introduced by Anthropic on 25 November 2024 with open-source SDKs in Python and TypeScript (C#, Java, and Kotlin SDKs followed), MCP gave the industry one answer to the question every enterprise AI team was asking: how do agents talk to the tools and data they need without rewriting integrations for every model?
This guide covers what MCP is, the architecture in plain terms, how it differs from function calling, production deployment patterns for the enterprise, the security controls that must be in place before you expose internal systems to an agent, and how MCP fits with bring-your-own-model (BYOM) strategies and the broader agentic stack.
What MCP Is, in One Paragraph
MCP is an open protocol that defines how a host (your AI application) connects to an MCP server (a process that exposes capabilities). The host runs one MCP client per server it talks to. The server advertises three primitives — tools (callable functions), resources (read-only data fetched by URI), and prompts (reusable templates) — and the host decides which to invoke. The transport is JSON-RPC 2.0 over stdio for local servers and over streamable HTTP for remote ones (newer spec revisions superseded the original HTTP+SSE transport). The result is one standard wire protocol between agents and the systems they need.
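The wire format described above can be sketched with nothing but the standard library. The method names `tools/list` and `tools/call` come from the MCP specification; the tool name and its arguments here are hypothetical:

```python
import json

def make_request(req_id: int, method: str, params: dict) -> str:
    """Serialise a JSON-RPC 2.0 request as a single line, as the stdio transport expects."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# The host first asks the server what it advertises...
discover = make_request(1, "tools/list", {})

# ...then invokes a tool by name with schema-conformant arguments.
call = make_request(2, "tools/call", {
    "name": "query_database",          # hypothetical tool
    "arguments": {"sql": "SELECT 1"},
})
```

Each such line is written to the server's stdin (local) or POSTed (remote); the response comes back as a JSON-RPC result with the same `id`.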
Why MCP Won
Three reasons every major model provider adopted MCP within a year:
- It solved a coordination problem. Before MCP, every agent integrated with every tool through bespoke wrappers, so N agents and M tools meant N×M integrations. MCP turns N×M into N+M: each agent implements the protocol once, and each tool is wrapped once.
- It is genuinely open. The specification is governed in the open, the SDKs are MIT-licensed, and the reference servers ship with the spec. No vendor controls the standard.
- It is model-agnostic. An MCP server works with Claude, GPT, Gemini, Llama, Mistral, or any client implementation. This made it safe for vendors to adopt — they were not handing customers to a competitor.
The community has built thousands of MCP servers. The official registry covers the integrations every enterprise needs first — file systems, GitHub, Git, Slack, Postgres, Drive, web search, browser automation. The SDKs and reference servers are maintained by Anthropic, with contributions from Microsoft and others.
MCP vs Function Calling vs Plugins
Function calling is the per-vendor mechanism where an LLM is taught to emit JSON that names a function in your application. It works inside one vendor's API. Plugins (the now-deprecated ChatGPT plugin model) were an earlier attempt at standardisation that never escaped one platform. MCP is what survived: a protocol, not a feature, with the trust boundary between host and server made explicit.
| Dimension | Function calling | MCP |
|---|---|---|
| Scope | In-process, one application | Cross-process, reusable |
| Portability | Per vendor | Any MCP-aware client |
| Discovery | Hard-coded in app | Server advertises capabilities |
| Trust boundary | Implicit in app | Explicit between host and server |
| Best for | Tightly coupled in-app tools | Reusable, shared, third-party integrations |
Function calling is not going away — it is still the right pattern for tools tightly coupled to one application's logic. MCP is for everything you want to share, audit, or substitute.
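The discovery row of the table is the crux. A sketch of the server side, with a hypothetical tool registry standing in for what a real MCP SDK would manage: the server answers `tools/list` from its registry, so any MCP-aware host can enumerate the same tools without hard-coding their schemas.

```python
# Hypothetical registry; a real server would declare tools via an MCP SDK.
TOOLS = {
    "create_ticket": {
        "description": "Open a support ticket",
        "inputSchema": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
            "required": ["title"],
        },
    },
}

def handle(method: str, params: dict) -> dict:
    """Answer a capability-discovery request from the registry."""
    if method == "tools/list":
        return {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    raise ValueError(f"unknown method: {method}")

advertised = handle("tools/list", {})
```

With function calling, that schema would live inside one application's request payload; here it is served to whichever host asks.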
The Enterprise Deployment Pattern
The reference deployment for an enterprise MCP estate has four layers:
- Internal MCP servers — one per significant internal system (CRM, billing, ticketing, data warehouse, document management). Each runs in your VPC, behind your IdP, with scoped permissions per consumer.
- Trusted external servers — vendor-published servers for SaaS you already use (GitHub, Slack, Drive, observability). Connected through a controlled gateway with audit logging.
- An MCP gateway — a policy enforcement point between agents and servers. Handles authentication, authorization, rate limiting, content classifiers on incoming resources, redaction of sensitive outputs, and the audit trail.
- Agent hosts — your AI applications consuming the gateway, with per-agent allow-lists of which servers and which tools they may call.
Without the gateway layer, every agent holds a sprawl of direct connections to internal systems — exactly the architecture security teams have spent twenty years dismantling. The gateway is non-optional for any enterprise with a meaningful agentic estate.
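A minimal sketch of the gateway's decision point, assuming each agent carries an identity and the gateway holds per-agent allow-lists of (server, tool) pairs; every decision, allow or deny, lands in the audit trail. All names are illustrative:

```python
from datetime import datetime, timezone

# Hypothetical per-agent allow-list: which (server, tool) pairs each agent may call.
ALLOW = {
    "billing-agent": {
        ("billing-server", "get_invoice"),
        ("crm-server", "lookup_account"),
    },
}

AUDIT: list[dict] = []

def authorize(agent: str, server: str, tool: str) -> bool:
    """Allow the call only if it is on the agent's allow-list; audit either way."""
    allowed = (server, tool) in ALLOW.get(agent, set())
    AUDIT.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "server": server,
        "tool": tool,
        "decision": "allow" if allowed else "deny",
    })
    return allowed
```

In a real gateway this check sits alongside authentication, rate limiting, and output redaction; the point is that denial is the default and every call leaves a record.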
Security Risks That Demand Attention
Over-broad tool permissions
An agent connected to a CRM MCP server with full read-write access can, on a single bad reasoning step, modify or delete records it should never touch. The fix is per-tool grants, scoped credentials, and write operations behind explicit user confirmation in high-stakes contexts.
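The confirmation gate can be sketched as a thin wrapper, assuming tools are tagged as writes at registration time and the `confirm` callback is supplied by the application; both are assumptions of this sketch, not part of the protocol:

```python
# Hypothetical tagging: which tools mutate state.
WRITE_TOOLS = {"update_record", "delete_record"}

def call_tool(name: str, args: dict, execute, confirm) -> dict:
    """Execute a tool call, but gate write tools behind explicit confirmation."""
    if name in WRITE_TOOLS and not confirm(name, args):
        return {"status": "rejected", "reason": "write not confirmed"}
    return {"status": "ok", "result": execute(name, args)}

# A declining confirm callback blocks the write outright.
result = call_tool(
    "delete_record", {"id": 42},
    execute=lambda n, a: "done",
    confirm=lambda n, a: False,
)
```

Reads pass through untouched; only the tagged writes ever reach the human.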
Indirect prompt injection through resources
MCP resources are content fetched by URI — documents, web pages, database rows. If that content can be authored by an attacker, it becomes a prompt-injection vector. Treat retrieved content as data, not instructions; classify before passing to the model; apply the same prompt-injection defences you would apply to any RAG pipeline. See our companion guide on prompt injection defence for LLMs.
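The structural idea, not a real defence: retrieved content is wrapped as inert data and screened before it reaches the model. The regex below stands in for a proper prompt-injection classifier and would catch almost nothing in practice:

```python
import re

# Placeholder for a trained classifier; a pattern match is not a defence.
SUSPECT = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)

def prepare_resource(uri: str, content: str) -> dict:
    """Wrap fetched resource content as data, flagged if it looks instruction-like."""
    return {
        "role": "data",   # never injected as system or user instructions
        "uri": uri,
        "flagged": bool(SUSPECT.search(content)),
        "content": content,
    }
```

The `role` field is the real point: downstream prompt assembly treats this payload as quoted material, never as directives.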
Third-party server trust
Connecting your enterprise agent to a community MCP server is connecting your data to whoever maintains that server. Inventory every server, pin versions, run them in isolated environments, and treat them like any other vendor in your TPRM regime.
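That inventory discipline can live in a simple manifest; the fields below are illustrative, not a standard format:

```json
{
  "servers": [
    {
      "name": "github",
      "source": "modelcontextprotocol/servers",
      "pinned_version": "2025.4.1",
      "runtime": "isolated-container",
      "tprm_review_date": "2025-11-01",
      "allowed_agents": ["code-review-agent"]
    }
  ]
}
```

The version pin and review date are what turn a community dependency into something your TPRM process can actually track.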
Identity propagation gaps
The MCP authorization specification builds on OAuth 2.1, but the community has flagged that some implementation details conflict with modern enterprise SSO and IdP practice. Production deployments add identity propagation, so that the end user's identity, not just the agent's, reaches the MCP server, and they mint short-lived, scoped tokens for every server interaction.
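The token shape can be sketched with the standard library; a production deployment would use the IdP's OAuth 2.1 token exchange rather than this hand-rolled HMAC, which only illustrates the claims a gateway should mint: user identity, narrow audience and scope, tight expiry.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"gateway-signing-key"   # illustrative; use a managed secret in practice

def mint(user: str, server: str, tools: list[str], ttl: int = 60) -> str:
    """Mint a short-lived token scoped to one server and a few tools."""
    claims = {"sub": user, "aud": server, "scope": tools,
              "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str) -> dict:
    """Check the signature and expiry, then return the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    return claims
```

The server verifying the token sees who the user is, not merely which agent called, which is the identity-propagation gap the sketch closes.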
MCP, BYOM, and Agent Skills
MCP is the integration layer that makes BYOM practical. If your application uses MCP for tools and data access, swapping the underlying model from Claude to GPT to Llama leaves your integrations intact — only the model client changes. This is how mature enterprise AI estates avoid lock-in without paying the cost of bespoke integration per model.
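The seam that makes the swap cheap is an abstract model client: the tool schemas discovered over MCP stay fixed while the provider class changes. `StubClient` below is a placeholder, not a real vendor SDK:

```python
from typing import Protocol

class ModelClient(Protocol):
    """The only surface that changes when you swap providers."""
    def complete(self, prompt: str, tools: list[dict]) -> str: ...

class Agent:
    def __init__(self, model: ModelClient, tools: list[dict]):
        self.model = model
        self.tools = tools   # MCP-discovered schemas: model-agnostic

    def run(self, prompt: str) -> str:
        return self.model.complete(prompt, self.tools)

class StubClient:
    """Stand-in for a provider-specific client (Claude, GPT, Llama, ...)."""
    def complete(self, prompt, tools):
        return f"answered with {len(tools)} tools available"

agent = Agent(StubClient(), [{"name": "lookup_account"}])
```

Swapping models means writing one new `ModelClient`; the MCP servers, the gateway, and the tool schemas are untouched.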
And MCP is the natural packaging for agent skills. A skill — say, "verify a vendor invoice against its PO" — exposed as an MCP server becomes discoverable, versioned, evaluable, and portable across every agent that speaks MCP. Internal skill libraries published as private MCP servers are the emerging pattern for sharing capabilities across enterprise teams.
What to Build First
The natural sequence for an enterprise adopting MCP:
- Stand up an MCP gateway with auth, rate limiting, and audit logging
- Connect three or four high-value internal systems via MCP servers in your VPC
- Adopt one or two trusted external servers (GitHub, Slack) through the gateway
- Refactor your existing function-calling integrations to MCP where the tools are reused across agents
- Publish your first internal skills as MCP servers, with version pinning and per-agent allow-lists
By the time you have done this, your agents can swap models freely, your security team has a single audit point, and your skill library is reusable across teams. That is the payoff that justified the standardisation in the first place.
What to Watch Through 2026
Three areas where the standard and ecosystem are evolving fast: tighter enterprise authorization patterns (better identity propagation, per-tool scopes, IdP integration); richer resource semantics (typed schemas, streaming, partial reads); and content provenance signals (so agents can distinguish trusted from untrusted MCP content). Enterprises building on MCP today should track the spec quarterly and plan to absorb minor breaking changes in the protocol layer through 2026.