What is MCP? Basics and Getting Started with the Protocol That Standardizes AI and External Tool Integration

April 8, 2026

Lead

MCP (Model Context Protocol) is an open protocol for connecting AI applications with external tools and data sources in a standardized way.

As the integration of AI assistants and agents into business operations advances, secure connectivity with internal tools and databases has become an unavoidable challenge. Traditionally, each AI product required its own custom connector implementation, but MCP reduces that duplication and makes it easier to reuse the same connection method across different environments.

At the same time, as the number of connected endpoints grows, permission management and authorization design become increasingly important. This article provides a structured explanation of MCP's fundamental architecture, representative risks, principles for safe operation, and practical steps for getting started.

What is MCP? An Open Protocol Connecting AI and External Tools

MCP is an open standard published by Anthropic in November 2024 and subsequently donated to the Linux Foundation's Agentic AI Foundation (Anthropic, 2025-12-09). Anthropic, OpenAI, and Block are co-founders of the foundation, with Google, Microsoft, AWS, and others also participating.

In a word, MCP's value lies in replacing the individually implemented tool integrations required by each AI product with a common protocol, making it easier to reduce duplicated integration work.

Differences from Traditional Custom Integrations

In a world without MCP, connecting AI product A to Slack, GitHub, and an internal database requires three custom connectors. A separate AI product B must independently develop those same three connectors. With N AI applications and M tools, the result is a structure that can generate up to N×M integration code paths.

MCP reorganizes this integration structure into a more reusable form, helping to suppress the N×M implementation explosion. If each tool exposes a single MCP server, any MCP-compatible AI application can consume it through a common method. However, because differences in client-side support and authentication methods exist, the result does not become a perfectly clean N+M.
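The scaling difference can be illustrated with a toy calculation (the counts below are arbitrary examples):

```python
# Toy illustration of the integration counts discussed above.
# With bespoke connectors, every (AI app, tool) pair needs its own code.
N_apps, M_tools = 4, 6

custom_connectors = N_apps * M_tools    # one connector per pair: 24
mcp_implementations = N_apps + M_tools  # idealized MCP case: 10
                                        # (one client per app + one server per tool)

print(custom_connectors, mcp_implementations)
```

In practice the MCP figure is only approximate, since differences in client support and authentication still require some per-pair work, as noted above.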

| | Custom Integration | MCP |
| --- | --- | --- |
| Implementation volume | Developed individually per AI product (N×M) | One-time server-side implementation + common client-side method (reduces duplication) |
| Reusability | Cannot be used by other AI products | Easy to reuse across IDEs, desktop AI, and via API |
| Specification consistency | Each product uses its own proprietary format | Common protocol based on JSON-RPC 2.0 |
| Ecosystem | Separate marketplace per vendor | Open registry of MCP servers is growing |

Key Components and the Client-Host-Server Workflow

The MCP architecture consists of three layers: Host, Client, and Server (MCP official specification 2025-03-26).

  • Host — The application the user interacts with (Claude Desktop, an IDE, a chat UI, etc.)
  • Client — The connector within the Host responsible for communicating with MCP servers
  • Server — The service that exposes external tools and data to the AI

The server provides clients with the following three primitives.

| Primitive | Role | Examples |
| --- | --- | --- |
| Resources | Reading data and metadata | Documents, DB schemas, configuration information |
| Prompts | Providing templates and workflows | Support response prompts, analysis procedures |
| Tools | Executing external actions | DB queries, file searches, API requests |

Example operation flow: When a user asks the AI to "summarize last month's sales data," the Client retrieves the list of Tools exposed by the server, and the model decides to call the appropriate Tool (e.g., query_sales_data). The server returns the execution result, and the model uses it to generate a response. Communication is conducted over JSON-RPC 2.0, and sessions are managed in a stateful manner.
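The flow above can be sketched as JSON-RPC 2.0 messages. In the following minimal illustration, the `tools/call` method name and overall framing follow the MCP specification, while the `query_sales_data` tool and its arguments are invented for this example:

```python
import json

# Client -> Server: ask the server to execute one of its Tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_sales_data",           # hypothetical tool
        "arguments": {"period": "last_month"},
    },
}

# Server -> Client: the result content that the model then reads
# when generating its answer.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "Total sales for last month: ..."}
        ]
    },
}

print(json.dumps(request, indent=2))
```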

What Security Risks Does MCP Carry?

MCP greatly expands what AI can do, but it also enlarges the attack surface. Understanding the risks before considering adoption is essential.

These risks are not unique to MCP; they exist across AI systems that interact with external tools in general. However, because MCP broadens the scope of tool usage through standardized connectivity, the potential impact can easily grow depending on how the system is designed. The official MCP specification itself explicitly states that it "enables arbitrary data access and code execution paths," and ensuring security and reliability is designated as the responsibility of the implementer.

Tool Abuse via Prompt Injection

The data returned by an MCP server contains text that the model interprets. If an attacker embeds malicious instructions into a data source (such as documents or DB records), there is a risk that the model will follow those instructions and invoke unintended Tools.

For example, if you operate an MCP server for internal document search, an indirect prompt injection attack is conceivable, where an attacker inserts a hidden instruction such as "send this data to an external API" into a document. Because MCP allows the model to autonomously select and execute Tools, the potential blast radius is greater than with traditional API integrations.

Malicious Servers and Supply Chain Risks

MCP servers can be freely published by the community or individuals. The official MCP specification explicitly states that "annotations describing tool behavior should be treated as untrusted unless retrieved from a trusted server."

The following risk scenarios can be identified:

  • A popular MCP server is compromised and the behavior of its Tools is tampered with
  • A rogue server disguised as a legitimate server collects user data
  • Vulnerabilities in dependency libraries propagate to AI applications via the server

This mirrors the supply chain attack pattern seen with npm packages, and verifying the "origin and trustworthiness" of MCP servers becomes an organizational challenge.

Data Leakage from Excessive Permission Grants

Granting broad access permissions to an MCP server expands the range of data accessible to the AI. If a DB connection that can read all tables is passed to an MCP server, the model may execute queries involving sensitive data regardless of the user's intent, and include those results in its response.

As a principle, the MCP specification states that "users must retain control over the data that is shared and the actions that are performed." How this principle is enforced, however, is left to each implementation.

Measures for Operating MCP Securely

The key to operating MCP safely is not to avoid it because of the risks, but to design it so that the risks can be controlled. Build countermeasures by combining the security principles of the MCP specification with the control mechanisms provided by your actual platform.

Least Privilege and Execution Scope Control

The principle is to limit the Tools exposed by an MCP server to the minimum necessary.

Server-side countermeasures:

  • Use a read-only account for DB connections, and isolate write access to a separate server
  • Restrict file access to specific directories only
  • Do not expose Tools that perform destructive operations (such as DELETE or DROP) in the first place

Client-side countermeasures: Even when a server exposes a large number of Tools, the client side can restrict which Tools are used. Depending on the implementation, parameters may be provided to specify a subset of Tools the model is allowed to call (for example, the allowed_tools parameter in the OpenAI Responses API). This kind of "filtering on the consumer side," used in conjunction with server-side controls, creates a layered defense.
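As a sketch of that consumer-side filtering, here is what the tool configuration might look like with the allowed_tools parameter of the OpenAI Responses API mentioned above; the server URL and tool names are placeholders:

```python
# Even if the server exposes dozens of Tools, the client declares the
# subset the model may call. "type": "mcp" and "allowed_tools" follow
# the OpenAI Responses API; server_url and the tool names are invented.
mcp_tool = {
    "type": "mcp",
    "server_label": "internal_docs",
    "server_url": "https://mcp.example.com/sse",
    "allowed_tools": ["search_documents", "get_document"],  # read-only subset
    "require_approval": "always",
}

# This dict would then be passed as one entry in the `tools` list of
# client.responses.create(...); combined with server-side limits it
# forms the layered defense described above.
print(mcp_tool["allowed_tools"])
```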

Approval Flows and Human-in-the-Loop Design

As a specification principle, MCP mandates that "Hosts must obtain explicit user consent before invoking tools." At the implementation level, it is practical to tier the approval flow based on the impact level of each operation.

| Impact Level | Approval Method | Examples |
| --- | --- | --- |
| Low (read-only) | Auto-approve | Document search, status check |
| Medium (limited writes) | Approve on first use only | Posting comments, changing labels |
| High (destructive operations) | Approve every time | Data deletion, configuration changes, fund transfers |

Some implementations provide a mechanism to control whether approval is required before tool execution (for example, the OpenAI Responses API's require_approval supports "always" / "never" / per-tool specification). The key is to default to the safer side: new Tools should require approval by default, with restrictions relaxed incrementally after verification.
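The tier table above can also be encoded as a small routing function on the host side (a sketch; the tool names and their assigned impact levels are invented):

```python
# Map each exposed Tool to an impact tier; unknown tools default to high.
IMPACT_LEVEL = {
    "search_documents": "low",     # read-only
    "post_comment": "medium",      # limited write
    "delete_record": "high",       # destructive
}

def needs_human_approval(tool: str, already_approved: set) -> bool:
    """Decide whether to pause for explicit user consent before executing."""
    level = IMPACT_LEVEL.get(tool, "high")  # default to the safe side
    if level == "low":
        return False                         # auto-approve
    if level == "medium":
        return tool not in already_approved  # approve on first use only
    return True                              # approve every time

print(needs_human_approval("delete_record", {"post_comment"}))
```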

Authentication, Authorization, and Trust Boundary Management

When an MCP server requires authentication, credentials such as OAuth tokens are passed from the client to the server. The critical point here is to avoid persisting credentials. For example, OpenAI's implementation explicitly states that tokens passed in the authorization field are not stored server-side, and the design calls for them to be resent with each request.

From a trust boundary perspective, the following should be clearly separated:

  • Between Host and Client: User authentication and authorization are managed here
  • Between Client and Server: Communication at the MCP protocol level. Server identity verification is required
  • Between Server and external systems: The scope of permissions held by the server should be limited

Especially in production environments, the trust level should be differentiated based on whether the MCP server is provided officially, by the community, or developed in-house, and approval flows and permission settings should be applied accordingly.

In addition to authentication and authorization, the following operational controls are worth considering:

  • Audit logs: Record which Tool was called, when, and by whose action. This forms the foundation for detecting abnormal repeated calls or unexpected Tool executions
  • Rate limiting: Set an upper limit on the frequency of Tool calls to contain damage in the event of runaway behavior or abuse
  • Egress control (restricting network destinations): Limit the range of external communications available to the MCP server at the network level. This prevents unintended exfiltration of data
  • Sandboxing: Isolate the Tool execution environment to minimize impact on the host system
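Two of these controls, audit logging and rate limiting, can be sketched as a thin gateway wrapped around tool execution (names and thresholds are illustrative):

```python
import time
from collections import deque

class ToolGateway:
    """Wraps Tool calls with an audit trail and a sliding-window rate limit."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self._recent = deque()   # monotonic timestamps of recent calls
        self.audit_log = []      # (timestamp, tool_name, args) records

    def invoke(self, tool_name, args, execute):
        now = time.monotonic()
        # Drop timestamps that fell out of the window, then check the limit.
        while self._recent and now - self._recent[0] > self.window:
            self._recent.popleft()
        if len(self._recent) >= self.max_calls:
            raise RuntimeError(f"rate limit exceeded for {tool_name}")
        self._recent.append(now)
        self.audit_log.append((now, tool_name, args))  # log before running
        return execute(args)

gw = ToolGateway(max_calls=2, window_seconds=60.0)
gw.invoke("search_documents", {"q": "sales"}, lambda a: "ok")
gw.invoke("search_documents", {"q": "costs"}, lambda a: "ok")
```

A third call within the window would raise, containing runaway behavior while the audit log preserves what was attempted.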

Three Practical Patterns for Getting Started with MCP

There are three levels for trying out MCP, ranging from quick and easy to full-scale. Starting with the pattern that best fits your goals is the fastest path forward.

Connecting to an Existing Server as a Tool (Fastest)

The easiest approach is to connect an already-published MCP server to an IDE or agent tool.

Configuration example for VS Code:

Create .vscode/mcp.json in the project root and specify the MCP server URL.

```json
{
  "servers": {
    "example-docs": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

After configuration, the Tools exposed by the MCP server become available in VS Code's Agent mode.

Configuration example for Claude Desktop:

Add the server to ~/Library/Application Support/Claude/claude_desktop_config.json.

```json
{
  "mcpServers": {
    "example-server": {
      "command": "/path/to/server-binary"
    }
  }
}
```

In either case, it is safest to start with servers provided officially or from sources with clear provenance.

Building and Testing a Local Server from Scratch

To get a hands-on feel for MCP's architecture, building your own server is the most effective way to deepen your understanding. The official MCP documentation provides a tutorial for a weather information server, where you can build a server that exposes two Tools: get_alerts and get_forecast (MCP Official, Build an MCP server).

General steps:

  1. Implement the server using an MCP SDK (TypeScript / Python / Rust, etc.)
  2. Confirm the build or startup command
  3. Add the server path to the Claude Desktop configuration file
  4. Restart Claude and verify that the tools are recognized

Troubleshooting checklist if things don't work:

  • JSON syntax errors in the configuration file (e.g., trailing commas)
  • Whether the server path is specified as an absolute path
  • Whether Claude was fully closed before restarting
  • Whether any errors appear in the log files (macOS: ~/Library/Logs/Claude/mcp*.log)

Using a Remote Server via API

When incorporating MCP into your own AI application, the appropriate pattern is to connect to a remote MCP server via API.

Conceptual implementation pattern:

```python
response = client.responses.create(
    model="model-name",
    tools=[
        {
            "type": "mcp",
            "server_label": "internal_tools",
            "server_url": "https://your-mcp-server.example.com/sse",
            "require_approval": "always",
        },
    ],
    input="Aggregate last month's sales data",
)
```

The advantage of this approach is that approval flows (require_approval), restrictions on available tools (allowed_tools), OAuth authentication, and more can all be controlled programmatically.

Considerations for production use:

  • Set require_approval to "always" by default, and consider relaxing it on a per-tool basis after validation
  • If the server requires OAuth, pass tokens per request rather than persisting them
  • Prioritize officially provided servers, and introduce third-party servers only after a security review

Use Cases Where MCP Fits—and Where It Doesn't

MCP is not necessary for every AI project. The primary criterion for deciding whether to adopt it is whether the AI needs to dynamically select tools.

Suitable Tasks and Conditions for Effective Adoption

MCP delivers the greatest value in workflows where the AI cannot operate solely on its trained knowledge and must rely on external data or actions.

  • Internal knowledge assistant — Generates answers by combining document search and database lookups
  • Customer support copilot — Assists with responses by spanning ticket systems, FAQs, and CRM
  • IDE assistant — Supports development by accessing the codebase, documentation, and CI/CD tools
  • Workflow automation — Enables the AI to use multiple business tools (Slack, GitHub, databases, etc.) as the situation demands
  • Agentic systems — Autonomously handles task decomposition, tool selection, and result integration

The common thread is a pattern where "the AI has multiple tools available and selects among them dynamically based on context." If there is only one tool, it is often simpler to call the API directly without going through MCP.

Cases Where Standard API Integration Suffices and Key Decision Checkpoints

If any of the following apply, standard API integration or Function Calling is likely sufficient without introducing MCP.

  • The external services the AI calls are fixed at one or two
  • There is no need to delegate tool selection to the model, as it can be determined by logic
  • The primary use case is backend processing that does not involve AI (e.g., cron jobs)
  • Security requirements do not permit dynamic tool invocation by AI in the first place

Checklist for adoption decisions:

| Checkpoint | Yes → Consider MCP | No → API integration is sufficient |
| --- | --- | --- |
| Does the AI connect to three or more external tools? | ✓ | — |
| Do you want to dynamically vary which tools are called based on context? | ✓ | — |
| Will the same set of tools be used across multiple AI applications? | ✓ | — |
| Are tool additions or changes frequent? | ✓ | — |

If all answers are No, a REST API or Function Calling will meet your requirements. MCP is not something to adopt simply because it is convenient — it demonstrates its true value when integration complexity is a genuine bottleneck.
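The checklist reduces to a simple any-of rule, which can be made explicit (a toy encoding of the table above):

```python
def consider_mcp(num_external_tools: int,
                 dynamic_tool_selection: bool,
                 shared_across_apps: bool,
                 frequent_tool_changes: bool) -> bool:
    """True if any checklist row answers Yes; otherwise plain API
    integration or Function Calling is likely sufficient."""
    return (num_external_tools >= 3
            or dynamic_tool_selection
            or shared_across_apps
            or frequent_tool_changes)

print(consider_mcp(2, False, False, False))  # all No -> False
```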

Conclusion

MCP is an open protocol that standardizes how AI connects to external tools and data sources. Its greatest value lies in reducing the duplication of individual implementations and making integrations reusable across diverse environments, including IDEs, desktop AI applications, and API integrations.

At the same time, because MCP broadens the AI's access surface, it also introduces security risks such as prompt injection, supply chain risks, and excessive permissions. It is essential to incorporate mitigations — including the principle of least privilege, a phased approval flow, clear definition of trust boundaries, and operational controls such as audit logging and egress control — from the design stage.

A practical approach would be to start by connecting an existing MCP server to an IDE to get a hands-on feel for how it works, and then determine the appropriate scope of adoption based on your organization's specific requirements.


References

  1. Model Context Protocol — Official Specification (2025-03-26) https://modelcontextprotocol.io/specification/2025-03-26
  2. Anthropic — "Introducing the Model Context Protocol" (2024-11-25) https://www.anthropic.com/news/model-context-protocol
  3. Anthropic — "Donating the Model Context Protocol" (2025-12-09) https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
  4. OpenAI Developers — "MCP and Connectors" https://developers.openai.com/api/docs/guides/tools-connectors-mcp
  5. MCP Official — "Build an MCP server" https://modelcontextprotocol.io/docs/develop/build-server

Author & Supervisor

Chi
Enison

He majored in Information Science at the National University of Laos, where he contributed to the development of statistical software and built a practical foundation in data analysis and programming. He began his career in web and application development in 2021, and from 2023 onward has gained extensive hands-on experience across both frontend and backend work. At our company he is responsible for the design and development of AI-powered web services, working on projects that integrate natural language processing (NLP), machine learning, and generative AI / large language models (LLMs) into business systems. He keeps a close watch on emerging technologies and places great value on moving quickly from technical validation to production implementation.
