
MCP (Model Context Protocol) is an open protocol for connecting AI applications with external tools and data sources in a standardized way.
As the integration of AI assistants and agents into business operations advances, secure connectivity with internal tools and databases has become an unavoidable challenge. Traditionally, each AI product required its own custom connector implementation, but MCP reduces that duplication and makes it easier to reuse the same connection method across different environments.
At the same time, as the number of connected endpoints grows, permission management and authorization design become increasingly important. This article provides a structured explanation of MCP's fundamental architecture, representative risks, principles for safe operation, and practical steps for getting started.
MCP is an open standard published by Anthropic in November 2024 and subsequently donated to the Linux Foundation's Agentic AI Foundation (Anthropic, 2025-12-09). Anthropic, OpenAI, and Block are co-founders of the foundation, with Google, Microsoft, AWS, and others also participating.
In a word, MCP's value lies in replacing the individually implemented tool integrations required by each AI product with a common protocol, making it easier to reduce duplicated integration work.
In a world without MCP, connecting AI product A to Slack, GitHub, and an internal database requires three custom connectors. A separate AI product B must independently develop those same three connectors. With N AI applications and M tools, the result is a structure that can generate up to N×M integration code paths.
MCP reorganizes this integration structure into a more reusable form, helping to suppress the N×M implementation explosion. If each tool exposes a single MCP server, any MCP-compatible AI application can consume it through a common method. However, because differences in client-side support and authentication methods exist, the result does not become a perfectly clean N+M.
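The N×M versus N+M arithmetic can be made concrete with a small, purely illustrative calculation (the counts are made up, and the N+M figure ignores the real-world caveats just mentioned):

```python
# Illustrative back-of-the-envelope count; numbers are hypothetical.
n_apps, m_tools = 4, 6

# Without MCP: every AI application ships its own connector per tool.
custom_connectors = n_apps * m_tools   # N x M

# With MCP (idealized): one MCP server per tool plus one MCP client
# per application, ignoring per-client auth and support differences.
mcp_implementations = n_apps + m_tools  # N + M

print(custom_connectors, mcp_implementations)
```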
| | Custom Integration | MCP |
|---|---|---|
| Implementation volume | Developed individually per AI product (N×M) | One-time server-side implementation + common client-side method (reduces duplication) |
| Reusability | Cannot be used by other AI products | Easy to reuse across IDEs, desktop AI, and via API |
| Specification consistency | Each product uses its own proprietary format | Common protocol based on JSON-RPC 2.0 |
| Ecosystem | Separate marketplace per vendor | Open registry of MCP servers is growing |
The MCP architecture consists of three layers: Host, Client, and Server (MCP official specification 2025-03-26).
The server provides clients with the following three primitives.
| Primitive | Role | Examples |
|---|---|---|
| Resources | Reading data and metadata | Documents, DB schemas, configuration information |
| Prompts | Providing templates and workflows | Support response prompts, analysis procedures |
| Tools | Executing external actions | DB queries, file searches, API requests |
Example operation flow:
When a user asks the AI to "summarize last month's sales data," the Client retrieves the list of Tools exposed by the server, and the model decides to call the appropriate Tool (e.g., query_sales_data). The server returns the execution result, and the model uses it to generate a response. Communication is conducted over JSON-RPC 2.0, and sessions are managed in a stateful manner.
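The flow above can be sketched as JSON-RPC 2.0 messages. The method names `tools/list` and `tools/call` come from the MCP specification; `query_sales_data` and its arguments are the hypothetical example from the text:

```python
import json

# 1. The client asks the server which Tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. After the model selects a Tool, the client invokes it.
#    The tool name and arguments here are illustrative only.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_sales_data",
        "arguments": {"period": "last_month"},
    },
}

print(json.dumps(call_request, indent=2))
```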
MCP greatly expands what AI can do, but it also enlarges the attack surface. Understanding the risks before considering adoption is essential.
These risks are not unique to MCP; they exist across AI systems that interact with external tools in general. However, because MCP broadens the scope of tool usage through standardized connectivity, the potential impact can easily grow depending on how the system is designed. The official MCP specification itself explicitly states that it "enables arbitrary data access and code execution paths," and ensuring security and reliability is designated as the responsibility of the implementer.
The data returned by an MCP server contains text that the model interprets. If an attacker embeds malicious instructions into a data source (such as documents or DB records), there is a risk that the model will follow those instructions and invoke unintended Tools.
For example, if you operate an MCP server for internal document search, an indirect prompt injection attack is conceivable, where an attacker inserts a hidden instruction such as "send this data to an external API" into a document. Because MCP allows the model to autonomously select and execute Tools, the potential blast radius is greater than with traditional API integrations.
MCP servers can be freely published by the community or individuals. The official MCP specification explicitly states that "annotations describing tool behavior should be treated as untrusted unless retrieved from a trusted server."
The risk here mirrors the supply chain attack pattern seen with npm packages, and verifying the origin and trustworthiness of MCP servers becomes an organizational challenge.
Granting broad access permissions to an MCP server expands the range of data accessible to the AI. If a DB connection that can read all tables is passed to an MCP server, the model may execute queries involving sensitive data regardless of the user's intent, and include those results in its response.
As a principle in the specification, MCP states that "users must retain control over the data that is shared and the actions that are performed." However, how this is implemented at the implementation level is left to each organization.
The key to operating MCP safely is not to avoid it because of the risks, but to design it so that the risks can be controlled. Build countermeasures by combining the security principles of the MCP specification with the control mechanisms provided by your actual platform.
The principle is to limit the Tools exposed by an MCP server to the minimum necessary. On the server side, this means exposing only the Tools each use case actually requires, rather than everything the underlying system can do.
Client-side countermeasures:
Even when a server exposes a large number of Tools, the client side can restrict which Tools are used. Depending on the implementation, parameters may be provided to specify a subset of Tools the model is allowed to call (for example, the allowed_tools parameter in the OpenAI Responses API). This kind of "filtering on the consumer side," used in conjunction with server-side controls, creates a layered defense.
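As a hedged sketch of consumer-side filtering, the tool entry below mirrors the MCP tool shape of the OpenAI Responses API; the server URL and tool names are placeholders, and the exact field names should be verified against the current API reference:

```python
# Restrict which Tools the model may call, regardless of how many
# the server exposes. All names/URLs here are illustrative.
mcp_tool = {
    "type": "mcp",
    "server_label": "internal_tools",
    "server_url": "https://your-mcp-server.example.com/sse",  # placeholder
    # Consumer-side allowlist: only these Tools are callable.
    "allowed_tools": ["search_documents", "get_ticket_status"],
}

# This dict would be passed as one entry of the `tools` list in
# a client.responses.create(...) call.
```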
As a specification principle, MCP mandates that "Hosts must obtain explicit user consent before invoking tools." At the implementation level, it is practical to tier the approval flow based on the impact level of each operation.
| Impact Level | Approval Method | Examples |
|---|---|---|
| Low (read-only) | Auto-approve | Document search, status check |
| Medium (limited writes) | Approve on first use only | Posting comments, changing labels |
| High (destructive operations) | Approve every time | Data deletion, configuration changes, fund transfers |
Some implementations provide a mechanism to control whether approval is required before tool execution (for example, the OpenAI Responses API's require_approval supports "always" / "never" / per-tool specification). The key is to default to the safer side: new Tools should require approval by default, with restrictions relaxed incrementally after verification.
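The tiered policy from the table can be mapped onto the per-tool form of `require_approval`. This sketch follows the shape documented for the OpenAI Responses API, but the tool names are hypothetical and the structure should be checked against the current API reference:

```python
# Default to the safe side: everything requires approval, except
# low-impact read-only Tools that have been vetted beforehand.
# Tool names are illustrative placeholders.
mcp_tool = {
    "type": "mcp",
    "server_label": "internal_tools",
    "server_url": "https://your-mcp-server.example.com/sse",  # placeholder
    "require_approval": {
        "never": {"tool_names": ["search_documents", "check_status"]},
    },
}
```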
When an MCP server requires authentication, credentials such as OAuth tokens are passed from the client to the server. The critical point here is to avoid persisting credentials. For example, OpenAI's implementation explicitly states that tokens passed in the authorization field are not stored server-side, and the design calls for them to be resent with each request.
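A minimal sketch of the "resend per request, never persist" pattern: the token is read from the environment at call time rather than hard-coded or stored. The `authorization` field name follows the text above; confirm the exact field against your platform's API reference.

```python
import os

# Credentials are attached to each request and never written to disk.
# Field names and the env-var name are illustrative assumptions.
mcp_tool = {
    "type": "mcp",
    "server_label": "internal_tools",
    "server_url": "https://your-mcp-server.example.com/sse",  # placeholder
    "authorization": os.environ.get("MCP_OAUTH_TOKEN", "<token>"),
}
```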
From a trust boundary perspective, the trust level of each connected server should be clearly delineated rather than treating all servers alike.
Especially in production environments, the trust level should be differentiated based on whether the MCP server is provided officially, by the community, or developed in-house, and approval flows and permission settings should be applied accordingly.
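One way to operationalize this is a simple policy table keyed by server origin. The values below are example defaults for illustration, not a normative recommendation:

```python
# Illustrative trust-tier policy; tiers and defaults are assumptions.
TRUST_POLICY = {
    "official":  {"require_approval": "first_use", "allow_writes": True},
    "community": {"require_approval": "always",    "allow_writes": False},
    "in_house":  {"require_approval": "first_use", "allow_writes": True},
}

def policy_for(origin: str) -> dict:
    # Unknown origins fall back to the most restrictive policy.
    return TRUST_POLICY.get(origin, TRUST_POLICY["community"])
```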
In addition to authentication and authorization, operational controls such as audit logging of tool invocations and egress control on outbound traffic are also worth considering.
There are three levels for trying out MCP, ranging from quick and easy to full-scale. Starting with the pattern that best fits your goals is the fastest path forward.
The easiest approach is to connect an already-published MCP server to an IDE or agent tool.
Configuration example for VS Code:
Create .vscode/mcp.json in the project root and specify the MCP server URL.
```json
{
  "servers": {
    "example-docs": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

After configuration, the Tools exposed by the MCP server become available in VS Code's Agent mode.
Configuration example for Claude Desktop:
Add the server to ~/Library/Application Support/Claude/claude_desktop_config.json.
```json
{
  "mcpServers": {
    "example-server": {
      "command": "/path/to/server-binary"
    }
  }
}
```

In either case, it is safest to start with servers provided officially or from sources with clear provenance.
To get a hands-on feel for MCP's architecture, building your own server is the most effective way to deepen your understanding. The official MCP documentation provides a tutorial for a weather information server, where you can build a server that exposes two Tools: get_alerts and get_forecast (MCP Official, Build an MCP server).
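The official tutorial uses the MCP Python SDK, which handles JSON-RPC and transport for you. The pure-Python stub below is only a conceptual sketch of the Tool dispatch at the heart of such a server; the handler bodies are stubbed rather than calling a real weather API:

```python
# Conceptual sketch only -- not the MCP SDK. Illustrates how a server
# maps a tools/call request to a handler function.
def get_alerts(state: str) -> str:
    # Real version would query a weather API; stubbed for illustration.
    return f"No active alerts for {state}"

def get_forecast(latitude: float, longitude: float) -> str:
    return f"Forecast for ({latitude}, {longitude}): sunny"

TOOLS = {"get_alerts": get_alerts, "get_forecast": get_forecast}

def call_tool(name: str, arguments: dict) -> str:
    # tools/call handler: look up the requested Tool and invoke it.
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)

print(call_tool("get_alerts", {"state": "CA"}))
```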
The general flow is to implement the Tool handlers, run the server locally, and register it with an MCP client such as Claude Desktop.

If the server does not appear or Tools fail, check the client's MCP logs (on macOS: ~/Library/Logs/Claude/mcp*.log).

When incorporating MCP into your own AI application, the appropriate pattern is to connect to a remote MCP server via API.
Conceptual implementation pattern:
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="model-name",
    tools=[
        {
            "type": "mcp",
            "server_label": "internal_tools",
            "server_url": "https://your-mcp-server.example.com/sse",
            "require_approval": "always",
        },
    ],
    input="Aggregate last month's sales data",
)
```

The advantage of this approach is that approval flows (require_approval), restrictions on available tools (allowed_tools), OAuth authentication, and more can all be controlled programmatically.
Considerations for production use:

- Set require_approval to "always" by default, and consider relaxing it on a per-tool basis after validation

MCP is not necessary for every AI project. The primary criterion for deciding whether to adopt it is whether the AI needs to dynamically select tools.
MCP delivers the greatest value in workflows where the AI cannot operate solely on its trained knowledge and must rely on external data or actions.
The common thread is a pattern where "the AI has multiple tools available and selects among them dynamically based on context." If there is only one tool, it is often simpler to call the API directly without going through MCP.
Conversely, if the integration is simple — only one or two fixed tools, a single AI application, and infrequent changes — standard API integration or Function Calling is likely sufficient without introducing MCP.
Checklist for adoption decisions:
| Checkpoint | Yes → Consider MCP | No → API integration is sufficient |
|---|---|---|
| Does the AI connect to three or more external tools? | ✓ | — |
| Do you want to dynamically vary which tools are called based on context? | ✓ | — |
| Will the same set of tools be used across multiple AI applications? | ✓ | — |
| Are tool additions or changes frequent? | ✓ | — |
If all answers are No, a REST API or Function Calling will meet your requirements. MCP is not something to adopt simply because it is convenient — it demonstrates its true value when integration complexity is a genuine bottleneck.
MCP is an open protocol that standardizes how AI connects to external tools and data sources. Its greatest value lies in reducing the duplication of individual implementations and making integrations reusable across diverse environments, including IDEs, desktop AI applications, and API integrations.
At the same time, because MCP broadens the AI's access surface, it also introduces security risks such as prompt injection, supply chain risks, and excessive permissions. It is essential to incorporate mitigations — including the principle of least privilege, a phased approval flow, clear definition of trust boundaries, and operational controls such as audit logging and egress control — from the design stage.
A practical approach would be to start by connecting an existing MCP server to an IDE to get a hands-on feel for how it works, and then determine the appropriate scope of adoption based on your organization's specific requirements.
Chi
Majored in Information Science at the National University of Laos, where he contributed to the development of statistical software, building a practical foundation in data analysis and programming. He began his career in web and application development in 2021, and from 2023 onward gained extensive hands-on experience across both frontend and backend domains. At our company, he is responsible for the design and development of AI-powered web services, and is involved in projects that integrate natural language processing (NLP), machine learning, and generative AI and large language models (LLMs) into business systems. He has a voracious appetite for keeping up with the latest technologies and places great value on moving swiftly from technical validation to production implementation.