Tuesday, March 24, 2026

The Model Context Protocol (MCP): Bridging AI and Actionable Data

My day job has recently introduced a new concept for me to get my head around (like I need more new concepts to understand): the Model Context Protocol (MCP)...

What is MCP?

MCP is an open standard designed to unify how AI assistants and large language models (LLMs) connect with external data sources, tools, and environments. An MCP Server acts as a secure gateway or bridge between the AI application (the client) and external systems, such as databases, file systems, or APIs. It is frequently compared to a "USB-C port for AI," as it provides a universal, standardized interface for plugging external capabilities into AI systems.

For example, this is incredibly useful if you are building a chatbot for your organisation and want your AI assistant to have access to your internal customer support database, using it as a knowledge base for resolving common issues and answering questions about your company (i.e. it provides context to your AI service). Without it, you would somehow have to expose all that information to the larger LLM, which is just not gonna happen.

An MCP server exposes three core primitives to AI applications:

  • Tools: Executable functions that the AI can actively call to perform actions, such as writing to a database, executing a web search, or modifying a file.

  • Resources: Passive, read-only data sources that provide the AI with context, such as database schemas, API documentation, or your customer support database.

  • Prompts: Reusable instruction templates that help structure interactions and guide the AI through specific workflows. These are refined by the architect of the MCP server to give you more meaningful responses, saving you the time of generating and testing them from scratch.
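To make the three primitives concrete, here is a toy sketch in plain Python (not the real MCP SDK) of what a hypothetical support-desk server might advertise. Every name in it (log_ticket, support_kb, triage_issue) is made up for illustration:

```python
# A toy sketch of the three MCP primitives a hypothetical
# "support-desk" server might expose. Plain Python dicts only;
# all names here are invented examples, not a real server.

server_capabilities = {
    "tools": [
        {
            "name": "log_ticket",  # executable action the AI can call
            "description": "Create a new support ticket",
            "inputSchema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
            },
        }
    ],
    "resources": [
        {
            "uri": "support://kb/common-issues",  # read-only context
            "name": "support_kb",
            "description": "Knowledge base of common customer issues",
        }
    ],
    "prompts": [
        {
            "name": "triage_issue",  # reusable instruction template
            "description": "Guide the model through triaging a ticket",
        }
    ],
}

# Tools act, resources inform, prompts guide.
assert set(server_capabilities) == {"tools", "resources", "prompts"}
```

The split is the point: the AI can change things only through tools, read context only through resources, and gets nudged toward good workflows by prompts.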

Why Would You Need an MCP Server?

  • To Eliminate Fragmented Integrations: Before MCP, developers had to write custom API integrations for every single external tool or system an AI needed to access. By implementing an MCP server, developers can build an integration once and grant the AI access to a vast, standardized ecosystem of resources without maintaining dozens of custom codebases.

  • To Enable Safe Action and Execution: LLMs are limited to the data they were trained on and lack built-in environments to safely execute code or make network requests on their own. An MCP server acts as a controlled execution layer. It keeps sensitive elements like API keys hidden from the model while the server handles the actual safe execution of tasks.

  • For Dynamic Tool Discovery: Unlike static API specifications (like OpenAPI) that must be pre-loaded into an LLM, MCP allows AI applications to query servers at runtime to dynamically discover what tools and resources are currently available.

  • To Ensure Security and Access Control: MCP servers are designed with enterprise security in mind, utilizing OAuth 2.1 for authentication and centralizing permissions management. This ensures that AI applications only interact with authorized data and that user-specific contexts are strictly respected so data does not leak between users.

  • For Portability Across Applications: Because MCP is vendor-agnostic and model-agnostic, you can build a toolset once via an MCP server and plug it into any compatible AI application or IDE—such as Claude Desktop, Cursor, Windsurf, or LangChain—without needing to rewrite the integration.

  • To Support Agentic Workflows: MCP facilitates conversational, multi-turn interactions through real-time updates and streaming (using Server-Sent Events). This allows AI agents to dynamically interact with multiple data sources, handle intermediate steps, and maintain persistent context over complex, multi-step tasks.
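The discover-then-act loop those bullets describe can be sketched in a few lines. The fake functions below stand in for a real MCP server; there is no real network, LLM, or SDK involved:

```python
# A toy agent loop: discover tools at runtime, call one, feed the
# result back into context. Both "fake_server_*" functions are
# stand-ins for a real MCP server's tools/list and tools/call.

def fake_server_list_tools():
    """Stand-in for the MCP 'tools/list' call."""
    return [{"name": "search_orders", "description": "Find customer orders"}]

def fake_server_call_tool(name, arguments):
    """Stand-in for 'tools/call'; the server does the real work."""
    if name == "search_orders":
        return {"orders": [{"id": "A-1", "status": "delayed"}]}
    raise ValueError(f"unknown tool: {name}")

# 1. Discover what is available at runtime (no hard-coded spec).
tools = fake_server_list_tools()
assert tools[0]["name"] == "search_orders"

# 2. The model picks a tool; the client executes it via the server.
result = fake_server_call_tool("search_orders", {"customer": "Acme"})

# 3. The result flows back into the model's context for the next turn.
assert result["orders"][0]["status"] == "delayed"
```

In a real agentic workflow steps 1 to 3 repeat over many turns, with the server streaming updates as the task progresses.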

Why use MCP rather than an API?

While both APIs and MCPs aid in communication between systems, their core audiences, mechanisms, and philosophies differ significantly. A helpful way to frame the difference is that APIs connect machines, whereas MCP connects intelligence to machines.

Here are the primary differences between the two:

Target Audience and Optimization

  • APIs are built for human developers to write code against, optimizing software-to-software communication.

  • MCP is built specifically for AI models to streamline agentic interactions where an AI needs to reason about the data it receives.

Static vs. Dynamic Discovery

  • APIs rely on static contracts that must be pre-loaded, read, and manually interpreted to formulate requests.

  • MCP features dynamic discovery. An AI agent can query an MCP server at runtime to ask, "What tools can you offer?", and the server will automatically respond with a structured list of available tools, their descriptions, and parameter schemas. This means the AI always has an up-to-date view of its capabilities without needing manually updated documentation.
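Under the hood that discovery handshake is JSON-RPC 2.0. Here is a hand-written sketch of roughly what a tools/list exchange looks like; the get_weather tool is an invented example, and a real response may carry more fields:

```python
# Sketch of an MCP 'tools/list' round trip as JSON-RPC 2.0 messages.
# Hand-written example payloads; the tool shown is made up.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The agent reads names, descriptions, and parameter schemas
# directly, with no pre-loaded documentation required.
discovered = {t["name"] for t in response["result"]["tools"]}
assert "get_weather" in discovered
```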

Security and Execution

  • APIs are exposed over the network and assume the caller can securely manage tokens, headers, and request formatting. However, AI models do not have built-in execution environments and cannot safely hold secrets like API keys.

  • MCP introduces a secure intermediary layer. The AI model never sees API keys or sensitive URLs. Instead, the AI asks the MCP server to use a specific tool, and the MCP server validates the input, securely executes the API call using its own hidden credentials, and returns only the safe results. MCP also standardizes security governance, utilizing protocols like OAuth 2.1 to ensure the AI only accesses data the user has explicitly authorized.
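A minimal sketch of that intermediary idea, with invented names throughout (WEATHER_API_KEY, the tool, the pretend API response) and no real outbound request:

```python
import os

# Sketch of the secure-intermediary pattern: the server, not the
# model, holds the credential and makes the outbound call.

def call_weather_tool(arguments):
    """Validate model-supplied input, then execute with hidden creds."""
    city = arguments.get("city")
    if not isinstance(city, str) or not city:
        return {"isError": True, "content": "city must be a non-empty string"}

    # The secret lives in the server's environment; the model never sees it.
    api_key = os.environ.get("WEATHER_API_KEY", "server-side-secret")
    # In a real server the HTTPS request using api_key happens here.
    raw = {"temp_c": 18, "city": city}  # pretend API response
    return {"isError": False, "content": f"{raw['city']}: {raw['temp_c']} C"}

result = call_weather_tool({"city": "Dublin"})
assert result["isError"] is False
assert "api_key" not in result  # only sanitized results leave the server
```

The model only ever says "use the weather tool for Dublin"; validation, credentials, and the actual call all stay server-side.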

Granularity and Abstraction

  • APIs typically expose granular, entity-based endpoints (e.g., /users or /weather).

  • MCPs are less granular and focus on driving broader use cases. An MCP server exposes high-level capabilities (e.g., get_weather or get_open_supportIssues). A single MCP tool might execute several underlying REST API calls to gather all the necessary context for the AI.
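The fan-out is easy to picture in code. In this toy sketch (all functions are stand-ins, no real REST calls), one high-level tool quietly makes three lower-level calls and pre-joins the results:

```python
# Sketch of granularity: one high-level MCP-style tool wrapping
# several fake low-level REST endpoints. Everything here is invented.

def rest_get_user(user_id):
    return {"id": user_id, "name": "Ada"}

def rest_get_tickets(user_id):
    return [{"id": "T-7", "open": True}, {"id": "T-8", "open": False}]

def rest_get_sla(ticket_id):
    return {"ticket": ticket_id, "hours_remaining": 4}

def get_open_support_issues(user_id):
    """One tool call = several underlying REST calls, pre-joined."""
    user = rest_get_user(user_id)
    open_tickets = [t for t in rest_get_tickets(user_id) if t["open"]]
    slas = [rest_get_sla(t["id"]) for t in open_tickets]
    return {"user": user["name"], "open": open_tickets, "slas": slas}

summary = get_open_support_issues("u-42")
assert summary["user"] == "Ada"
assert len(summary["open"]) == 1
```

The AI asks one question and gets one coherent answer, instead of orchestrating three endpoints itself.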

LLM-Native Features

  • APIs are generally stateless request-response mechanisms.

  • MCP supports multi-turn, long-lived sessions (often using Server-Sent Events) that allow an AI agent to have back-and-forth interactions with a tool. Furthermore, MCP includes AI-specific features like sampling, which allows the MCP server to leverage the LLM's reasoning abilities. For example, an MCP server could fetch open issues and then use sampling to ask the LLM to filter them by "highest security impact"—a subjective analysis that a traditional REST API cannot natively perform.
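The sampling idea can be sketched like this. fake_llm is a stand-in for the real sampling round trip back to the client's model, and it "ranks" with a crude keyword check rather than actual reasoning:

```python
# Toy sketch of MCP "sampling": the server fetches data, then hands a
# subjective judgement back to the client's LLM. fake_llm stands in
# for that round trip and fakes the ranking with a keyword test.

def fake_llm(prompt):
    """Pretend model call; sorts security-related issues first."""
    issues = prompt["issues"]
    return sorted(issues, key=lambda i: "security" not in i["title"])

def list_critical_issues():
    issues = [
        {"id": 1, "title": "UI typo on login page"},
        {"id": 2, "title": "security: token leak in logs"},
    ]
    # The subjective filtering step a plain REST API cannot do natively.
    return fake_llm({"task": "rank by security impact", "issues": issues})

top = list_critical_issues()[0]
assert top["id"] == 2
```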

Output Formatting

  • APIs return machine-readable data, such as raw JSON payloads and database entity IDs.

  • MCP is designed to return data optimized for an LLM's context window, often formatting responses as human-readable Markdown with fully hydrated entity names instead of raw IDs.
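A small sketch of that output shaping, with an invented payload: raw IDs go in, and Markdown with hydrated names comes out for the model's context window:

```python
# Sketch of LLM-friendly output shaping: opaque IDs from a raw API
# payload are replaced with names the model can reason about.
# The payload and lookup tables are invented examples.

raw = {"ticket_id": 918273, "assignee_id": 55, "status_code": 2}
users = {55: "Grace Hopper"}
statuses = {2: "In Progress"}

def to_markdown(payload):
    """Render a ticket as Markdown with hydrated entity names."""
    return (
        f"### Ticket {payload['ticket_id']}\n"
        f"- Assignee: {users[payload['assignee_id']]}\n"
        f"- Status: {statuses[payload['status_code']]}"
    )

md = to_markdown(raw)
assert "Grace Hopper" in md   # names, not IDs
assert "55" not in md         # the raw assignee ID never surfaces
```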

How secure is my data behind an MCP Server?

Because the Model Context Protocol (MCP) acts as a bridge between untrusted, model-generated inputs and sensitive external systems, a single weak point can turn that bridge into a pathway for exploitation. Securing an MCP deployment requires a "shared responsibility" model, where the server stands as a fortified wall protecting resources, and the client acts as a vigilant gatekeeper ensuring the AI does not overstep its bounds.

Academic research breaks down MCP threats into four main categories: malicious developers, external attackers, malicious users, and security flaws. In practice, these manifest as prompt injection, command execution, token theft, excessive permissions, and unverified endpoints.

To protect yourself, you must implement strict safeguards across both MCP servers and MCP clients, and quite honestly 99.9% of it goes straight over my head. It can be made “dead secure” is what I’ll say on the matter.

How does MCP make my life easier?

So let’s list out a few scenarios where an MCP server would make sense. At the end of the day it sits in the background and makes interaction with AI more meaningful, because the AI has access to more capabilities and more context about the organisation you are talking to.

Software Development and Debugging

The AI coding assistant is greatly enhanced by using MCP to connect directly to local filesystems and version control systems like Git or GitHub. Instead of manually pasting code snippets into a chat, the AI can securely browse your local files, read repository code, search codebases, review pull requests, and even commit changes directly within environments like Cursor or Claude Desktop.

Automated Travel Planning

The true power of multi-server MCP architecture shines here by combining multiple disparate services into one workflow. By connecting a Travel Server, a Weather Server, and a Calendar Server, an AI agent can autonomously read your calendar to find available dates, check destination weather forecasts, search and book flights, and automatically add the itinerary to your schedule while emailing you a confirmation.

Workflow and Communication Automation

AI can connect seamlessly to platforms like Slack, Gmail, or Google Drive. An AI assistant can search through your team's Slack history to pull project context, summarize past decisions, and automatically draft and send emails based on a simple natural language request, all without you needing to switch tabs.

Data Analysis and Visualization

MCP allows AI models to connect directly to SQL databases, Google Sheets, or financial APIs. The AI can read raw data like customer feedback or stock market history, execute complex queries, and instantly generate interactive charts or analytical dashboards. For instance, an AI can use the Alpha Vantage MCP server to fetch 10 years of historical coffee prices and immediately plot an interactive visual graph for you.

Enterprise Knowledge Management

A multi-agent MCP setup can fully automate a Training Management System, using specialized MCP agents to automatically ingest uploaded PDF documents, extract key learning objectives, generate structured course modules, and create custom multiple-choice assessments without manual human intervention.

Ultimately, the core benefit of MCP in these scenarios is that it transforms AI from a passive text generator into an active, context-aware participant. By utilizing standardized tools, resources, and prompts, you gain a modular, secure way to grant AI access to your personal and business data without needing to write custom integrations for every single application.

That was a big chunk of stuff I learned this week… What next?


