Model context protocol: the next big step in generating value from AI

You are going to start hearing a lot more about model context protocol (MCP) in the coming months. Here’s why.

The Model Context Protocol (MCP) is an open-source, application-layer communication standard originally developed by Anthropic to facilitate seamless interaction between large language models (LLMs) and external data sources, tools, and applications. It provides a standardized method for integrating AI systems with outside resources, enabling more efficient and context-aware AI-driven workflows.

With this kind of potential, it’s no surprise that MCP is starting to get a lot of attention. In a recent blog post, Colin Masson, Director of Research for Industrial AI at ARC Advisory Group, calls MCP a “universal translator” that replaces the need for custom-built connections between AI models and industrial systems. Last month, Jim Zemlin, Executive Director of the Linux Foundation, said in a LinkedIn post that MCP is “emerging as a foundational communications layer for AI systems” and compared its potential impact to what HTTP did for the Internet.

Key features of model context protocol

MCP serves as a bridge between AI models and the environments they operate in, allowing models to access and interact with external data sources, APIs, and tools in a structured and secure manner. By standardizing the way AI systems communicate with external resources, MCP simplifies the integration process and enhances the capabilities of AI applications. Here are some of the features expected to improve AI functionality:


Modular and Message-Based Architecture: MCP follows a client-server model over a persistent stream, typically mediated by a host AI system. It uses JSON-RPC 2.0 for communication, supporting requests, responses, and notifications.
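To make that concrete, here is a minimal sketch of the JSON-RPC 2.0 request/response shapes MCP builds on, using only Python’s standard library. The tool name and arguments are illustrative placeholders; the namespaced method `tools/call` follows the convention used in the MCP specification.

```python
import json

def make_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request object of the kind MCP exchanges."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def make_response(req_id, result):
    """Build the matching JSON-RPC 2.0 response, correlated by id."""
    return {"jsonrpc": "2.0", "id": req_id, "result": result}

# A client asking a server to invoke a tool. MCP methods are namespaced,
# e.g. "tools/call"; the tool name and arguments here are made up.
request = make_request(1, "tools/call",
                       {"name": "get_weather", "arguments": {"city": "Boston"}})
wire = json.dumps(request)   # serialized as UTF-8 JSON on the wire
echoed = json.loads(wire)
response = make_response(echoed["id"],
                         {"content": [{"type": "text", "text": "72°F and sunny"}]})
```

The `id` field is what lets a client match each response to its originating request, which matters once notifications (which carry no `id`) are interleaved on the same stream.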

Transport Protocols: Supports standard input/output (stdio), HTTP with Server-Sent Events (SSE), and optionally extended via WebSockets or custom transports.
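The stdio transport frames each JSON-RPC message as one newline-delimited JSON line. The following sketch simulates both ends of such a pipe with an in-memory buffer (the helper names are illustrative, not part of any MCP SDK):

```python
import io
import json

def write_message(stream, message):
    """Frame one message as a single newline-delimited JSON line."""
    stream.write(json.dumps(message) + "\n")

def read_messages(stream):
    """Yield parsed messages from a newline-delimited JSON stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# An in-memory buffer stands in for the child process's stdin/stdout.
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
write_message(buf, {"jsonrpc": "2.0", "id": 1, "result": {}})
buf.seek(0)
messages = list(read_messages(buf))
```

In a real deployment the host launches the MCP server as a subprocess and attaches these streams to its stdin and stdout, which is why no network configuration is needed for local tools.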

Data Format: Utilizes UTF-8 encoded JSON, with alternative binary encodings like MessagePack supported by custom implementations.

Security and Authentication: Employs a host-mediated security model, process sandboxing, HTTPS for remote connections, and optional token-based authentication (e.g., OAuth, API keys).
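The token check a remote MCP server might perform before accepting a request can be sketched as follows. The header name follows the standard HTTP Bearer convention; the token value and function are illustrative, and real deployments would verify an OAuth token or look up an API key rather than compare against a constant.

```python
import hmac

EXPECTED_TOKEN = "s3cret-api-key"  # illustrative placeholder, not a real key

def authorize(headers):
    """Validate a bearer token on an incoming HTTP request.
    hmac.compare_digest gives a constant-time comparison, avoiding
    leaking the token through timing differences."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return hmac.compare_digest(auth[len("Bearer "):], EXPECTED_TOKEN)

ok = authorize({"Authorization": "Bearer s3cret-api-key"})
bad = authorize({"Authorization": "Bearer wrong"})
```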

Developer SDKs: Provides SDKs in Python, TypeScript/JavaScript, Rust, Java, C#, and Swift, maintained under the Model Context Protocol GitHub organization.

MCP has been applied across various domains. In software development, it’s integrated into IDEs like Zed, platforms like Replit, and code intelligence tools such as Sourcegraph to provide coding assistants with real-time code context. Companies in many industries are using it to help internal assistants retrieve information from proprietary documents, CRM systems, and company knowledge bases. Applications like AI2SQL leverage MCP to connect models with SQL databases, enabling plain-language queries. In manufacturing, it supports agentic AI workflows involving multiple tools (e.g., document lookup and messaging APIs), enabling chain-of-thought reasoning over distributed resources.
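To make the database case concrete, here is a hedged sketch of what an MCP-style tool handler for SQL access might look like, using an in-memory SQLite database. The handler name, schema, and result shape are illustrative, not AI2SQL’s actual implementation; in practice the SQL string would be generated by the model from a plain-language question.

```python
import sqlite3

def handle_query_tool(conn, sql):
    """An MCP-style tool handler: run a read-only SQL query and return
    the rows wrapped in a text content block, the general shape MCP
    tool results take."""
    rows = conn.execute(sql).fetchall()
    text = "\n".join(", ".join(str(v) for v in row) for row in rows)
    return {"content": [{"type": "text", "text": text}]}

# An in-memory database stands in for a real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 19.99), (2, 5.00)])

# This SQL would normally come from the model, translating a question
# such as "which orders exceed $10?".
result = handle_query_tool(conn, "SELECT id, total FROM orders WHERE total > 10")
```

Because the protocol only defines how the tool is discovered and invoked, the same handler could sit in front of PostgreSQL or any other backend without changing what the model sees.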

MCP adoption and ecosystem

  • OpenAI announced support for MCP across its Agents SDK and ChatGPT desktop applications on March 26, 2025.
  • Google DeepMind confirmed MCP support in the upcoming Gemini models and related infrastructure.
  • Dozens of MCP server implementations have been released, including community-maintained connectors for Slack, GitHub, PostgreSQL, Google Drive, and Stripe.
  • Platforms like Replit and Zed have integrated MCP into their environments, providing developers with enhanced AI capabilities.

Comparing MCP to other systems

MCP differs from other AI integration frameworks in several ways:

OpenAI Function Calling: While function calling lets LLMs invoke user-defined functions, MCP offers a broader, model-agnostic infrastructure for tool discovery, access control, and streaming interactions.

OpenAI Plugins and “Work with Apps”: These rely on curated partner integrations, whereas MCP supports decentralized, user-defined tool servers.

Google Bard Extensions: These are limited to Google’s own products, whereas MCP allows arbitrary third-party integrations.

LangChain / LlamaIndex: While these libraries orchestrate tool-use workflows, MCP provides the underlying communication protocol they can build upon.

MCP represents a significant step forward in AI integration, offering a standardized and secure method for connecting AI systems with external tools and data sources. Its growing adoption across major AI platforms and developer tools underscores its potential to transform AI-driven workflows.

Written by

Michael Ouellette

Michael Ouellette is a senior editor at engineering.com covering digital transformation, artificial intelligence, advanced manufacturing and automation.