
The MCP Pattern: Giving AI Tools It Can Actually Use

Model Context Protocol from first principles.

The Prompt Engineering Project · March 16, 2025 · 12 min read

Quick Answer

The Model Context Protocol is a standardized interface that connects AI language models to external tools, data sources, and services. MCP defines a client-server architecture where models can discover available tools, understand their schemas, and invoke them through a consistent protocol. This eliminates custom integration code and creates a universal plug-in system for AI applications.

Language models are remarkably capable at reasoning about information. They are remarkably incapable at interacting with the systems that contain that information. A model can analyze a database schema with expert precision, but it cannot query the database. It can reason about a deployment pipeline with sophisticated understanding, but it cannot trigger a deployment. It can draft the perfect API request, but it cannot send it.

This is the tool gap -- the structural disconnect between what models can think about and what they can act upon. Closing this gap is the central problem of AI infrastructure, and the Model Context Protocol (MCP) is the most promising pattern to emerge for solving it. MCP provides a standardized protocol for registering tools with AI models, invoking those tools during generation, and integrating tool results back into the model's reasoning process.

This article explains MCP from first principles: what it solves, how it works architecturally, and the design patterns that determine whether your tools actually help the model or just add complexity. We will ground the discussion in real TypeScript implementations, including patterns from the PEP project's own MCP server.

The Tool Gap

Before MCP, the pattern for giving AI models access to external systems was ad hoc and fragmented. Each AI provider had its own function-calling format. Each integration required custom serialization. Error handling was inconsistent. Tool discovery was nonexistent -- the client application had to know in advance exactly which tools to register with exactly which parameter schemas.

The consequences were predictable. Teams building AI agents spent more time on integration plumbing than on the actual intelligence. Tools that worked with one model provider had to be rewritten for another. Error messages from failed tool calls were often swallowed or misinterpreted by the model, leading to hallucinated results that the user could not distinguish from real data.

1. No standard tool format. Every provider defined tools differently. OpenAI used one JSON schema. Anthropic used another. Local models used yet another. Building tools meant building multiple adapters.

2. No tool discovery. Clients had to hardcode which tools were available. There was no mechanism for a tool server to advertise its capabilities to clients dynamically.

3. No context delivery. Tools returned raw data with no guidance on how the model should interpret or present it. Context about the tool result -- its freshness, completeness, reliability -- was lost.

4. No error protocol. When a tool call failed, there was no standard way to communicate the failure type, whether to retry, or what fallback behavior to adopt.

The MCP Architecture

MCP solves these problems by introducing a clean separation between three concerns: the AI client (which hosts the model and manages the conversation), the MCP server (which owns the tools and the systems they interact with), and the transport layer (which handles communication between them).

The architecture follows a client-server model that will be familiar to anyone who has worked with Language Server Protocol (LSP) in code editors. In fact, MCP was explicitly inspired by LSP's success in standardizing the interface between editors and language tooling. The bet is that the same architectural pattern -- a protocol-level contract between a capability consumer and a capability provider -- will work equally well for AI tool use.

mcp-architecture.txt
MCP ARCHITECTURE
================

+-------------------+         +-------------------+
|   AI CLIENT       |         |   MCP SERVER      |
|                   |         |                   |
| - Hosts the model |  JSON   | - Owns the tools  |
| - Manages context | <-----> | - Connects to     |
| - Renders output  |  RPC    |   external systems|
| - Routes tool     |         | - Validates input |
|   calls           |         | - Returns results |
+-------------------+         +-------------------+
        |                              |
        |                              |
  [User Interface]           [Databases, APIs,
                              File Systems,
                              Services]

The key insight of this architecture is that the MCP server is responsible for its own tools. It defines them, validates inputs to them, executes them, and formats their outputs. The AI client does not need to know how a tool works internally -- it only needs to know the tool's name, description, and parameter schema. This separation of concerns means tools can be developed, tested, and deployed independently of the AI client.
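Tool discovery is what makes this separation work in practice. The exchange below is a simplified sketch of the JSON-RPC messages involved; the tool shown is illustrative, and a real server returns whatever tools it has registered:

```typescript
// Simplified sketch of MCP's tool-discovery exchange (JSON-RPC 2.0).
// The client asks the server what it can do; the server answers with
// names, descriptions, and JSON Schema parameter definitions.

// Client -> Server: "what tools do you offer?"
const listToolsRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/list',
}

// Server -> Client: the advertised capabilities.
const listToolsResponse = {
  jsonrpc: '2.0' as const,
  id: 1,
  result: {
    tools: [
      {
        name: 'get-prompt-by-id',
        description: 'Retrieves a specific prompt template by its unique identifier.',
        inputSchema: {
          type: 'object',
          properties: {
            promptId: {
              type: 'string',
              description: 'The unique identifier of the prompt',
            },
          },
          required: ['promptId'],
        },
      },
    ],
  },
}

// The client can now register these tools with the model without
// knowing anything about how the server implements them.
const toolNames = listToolsResponse.result.tools.map((t) => t.name)
console.log(toolNames)
```

Because discovery happens at runtime, a server can add, remove, or change tools without any client-side code changes.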

MCP supports multiple transport mechanisms: stdio for local processes and HTTP with Server-Sent Events for remote servers, with room for custom transports such as WebSocket where a persistent connection is needed. The transport is abstracted from the protocol, so the same server implementation works across transport types.
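To see why the transport can be swapped out freely, it helps to notice how little the protocol layer actually needs from it. The sketch below uses a deliberately simplified, hypothetical Transport interface (not the SDK's actual type) and an in-memory transport pair to show the same request/response logic working with no process or network involved:

```typescript
// Hypothetical, simplified transport interface: the protocol layer only
// needs "send a message" and "hand me incoming messages".
interface Transport {
  send(message: string): void
  onMessage(handler: (message: string) => void): void
}

// In-memory transport pair for illustration: what one side sends, the
// other side receives. A stdio, SSE, or WebSocket transport would
// implement the same interface over its respective channel.
function createTransportPair(): [Transport, Transport] {
  let aHandler: (m: string) => void = () => {}
  let bHandler: (m: string) => void = () => {}
  const a: Transport = {
    send: (m) => bHandler(m),
    onMessage: (h) => { aHandler = h },
  }
  const b: Transport = {
    send: (m) => aHandler(m),
    onMessage: (h) => { bHandler = h },
  }
  return [a, b]
}

// The same "server" logic works unchanged over any transport.
const [clientSide, serverSide] = createTransportPair()
serverSide.onMessage((msg) => {
  const req = JSON.parse(msg)
  serverSide.send(JSON.stringify({ jsonrpc: '2.0', id: req.id, result: { ok: true } }))
})

let reply: string | undefined
clientSide.onMessage((msg) => { reply = msg })
clientSide.send(JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'ping' }))
console.log(reply)
```

The point is not the toy transport itself, but the boundary: nothing in the server's message handling knows or cares how bytes move between the two sides.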

How Tool Definitions Work

A tool definition in MCP has three components: a name, a description, and a parameter schema defined using JSON Schema. The name is the identifier the model uses to invoke the tool. The description is the natural language explanation the model uses to decide when to invoke it. The parameter schema constrains and validates the inputs the model can provide.

Each of these components matters more than it appears. The name determines how naturally the model can reference the tool in its reasoning. The description is, effectively, a prompt -- it tells the model what the tool does, when to use it, and what to expect. The schema prevents an entire class of errors by rejecting malformed inputs before they reach your business logic.

tool-definition.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { z } from 'zod'

// `db` is the application's data-access layer, defined elsewhere in the project
import { db } from './db.js'

const server = new McpServer({
  name: 'pep-tools',
  version: '1.0.0',
})

// Tool: get-prompt-by-id
// Follows the verb-noun naming convention
server.tool(
  'get-prompt-by-id',
  // Description acts as a prompt for the model
  'Retrieves a specific prompt template by its unique identifier. ' +
  'Use this when the user references a prompt by name or ID, or ' +
  'when you need to inspect a prompt before suggesting modifications. ' +
  'Returns the full prompt text, metadata, and version history.',
  {
    // Parameter schema using Zod (validated automatically)
    promptId: z.string()
      .describe('The unique identifier of the prompt (e.g., "sys-codereview-v3")'),
    includeHistory: z.boolean()
      .default(false)
      .describe('Whether to include previous versions of this prompt'),
  },
  async ({ promptId, includeHistory }) => {
    const prompt = await db.prompts.findById(promptId)

    if (!prompt) {
      return {
        content: [{
          type: 'text' as const,
          text: `No prompt found with ID "${promptId}". ` +
                `Use list-prompts to see available prompt IDs.`,
        }],
        isError: true,
      }
    }

    const result: Record<string, unknown> = {
      id: prompt.id,
      name: prompt.name,
      version: prompt.version,
      text: prompt.text,
      lastModified: prompt.updatedAt,
    }

    if (includeHistory) {
      result.history = await db.promptVersions.findByPromptId(promptId)
    }

    return {
      content: [{
        type: 'text' as const,
        text: JSON.stringify(result, null, 2),
      }],
    }
  }
)

Several patterns in this example are worth highlighting. The tool name follows a verb-noun convention: "get-prompt-by-id" rather than "promptRetrieval" or "fetchPrompt." Verb-noun naming reads naturally in model reasoning ("I need to get-prompt-by-id to inspect the template") and groups related tools by action when sorted alphabetically.

The description is written for the model, not the developer. It explains not just what the tool does, but when to use it and what it returns. This is tool-level prompt engineering -- the description is the instruction set for a specific capability.

Tool Naming Conventions

Tool naming is not a cosmetic concern. The name is the primary interface between the model's reasoning process and your tool's functionality. A poorly named tool gets invoked at the wrong times, with the wrong expectations, producing confusing results. A well-named tool is self-documenting -- the model understands from the name alone what the tool does and whether it is the right tool for the current task.

The Verb-Noun Pattern

The most effective naming convention for MCP tools is verb-noun: the action being performed followed by the resource being acted upon. This mirrors how both humans and models think about tool use -- "I need to [do something] to [some thing]."

naming-examples.ts
// GOOD: verb-noun, clear action and target
'get-prompt-by-id'        // Retrieves a specific prompt
'list-prompts'            // Lists available prompts
'update-prompt-version'   // Updates a prompt to a new version
'search-prompt-library'   // Searches across all prompts
'validate-prompt-syntax'  // Checks a prompt for common errors
'compare-prompt-versions' // Diffs two versions of a prompt

// BAD: ambiguous, noun-only, or inconsistent
'prompt'                  // What does this do? Get? Create? Delete?
'promptManager'           // Not an action, it is a class name
'doPromptThing'           // Meaningless verb
'fetch_and_validate'      // Two actions in one tool (split them)
'getPromptGetVersions'    // Overloaded (split into two tools)

When the model sees a list of available tools, it performs a selection task: which tool best matches the current need? Verb-noun naming makes this selection task trivially easy. The model scans the verbs to find the right action, then checks the nouns to confirm the right target. Ambiguous names force the model to read descriptions, which is slower and less reliable.

Parameter Design

Tool parameters are the input contract between the model and your server. Every parameter should be typed, constrained, and documented. Loose parameters -- untyped strings that accept anything -- are an invitation for the model to pass values your handler does not expect. Tight parameters -- specific types with documented constraints -- reduce errors and make the model's job easier.

parameter-design.ts
// GOOD: Typed, constrained, documented parameters
server.tool(
  'search-prompt-library',
  'Searches the prompt library by keyword, tag, or category. ' +
  'Returns matching prompts sorted by relevance. ' +
  'Use this when the user asks to find prompts or browse the library.',
  {
    query: z.string()
      .min(2)
      .max(200)
      .describe('Search query: keywords, prompt name, or natural language description'),
    category: z.enum(['system', 'user', 'few-shot', 'chain-of-thought'])
      .optional()
      .describe('Filter results to a specific prompt category'),
    limit: z.number()
      .int()
      .min(1)
      .max(50)
      .default(10)
      .describe('Maximum number of results to return'),
    sortBy: z.enum(['relevance', 'date', 'name'])
      .default('relevance')
      .describe('How to sort results'),
  },
  async ({ query, category, limit, sortBy }) => {
    // Parameters arrive validated and typed
    // No need for manual type checking or sanitization
    const results = await promptLibrary.search({
      query,
      category,
      limit,
      sortBy,
    })

    return {
      content: [{
        type: 'text' as const,
        text: JSON.stringify({
          total: results.total,
          returned: results.items.length,
          items: results.items,
        }, null, 2),
      }],
    }
  }
)

Notice the design choices in this parameter schema. The query string has length constraints that prevent both empty searches and context-window-filling inputs. The category uses an enum, not a free string -- the model cannot pass an invalid category. The limit has both min and max constraints. The sortBy has a sensible default. Each parameter has a .describe() call that tells the model what to pass.

These constraints are not just validation. They are documentation. When the model inspects the tool schema, it sees not just the parameter names but the types, the allowed values, the defaults, and the descriptions. A well-constrained schema reduces the model's uncertainty about how to call the tool, which reduces errors, which reduces retry loops, which reduces latency and cost.
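For illustration, here is roughly the JSON Schema the model would be shown for the search tool above. The exact serialization depends on the SDK version, so treat this as a sketch of the shape, not the literal output:

```typescript
// Approximately what the model sees for search-prompt-library: every
// constraint from the Zod schema surfaces as machine-readable
// documentation in JSON Schema form.
const searchPromptLibrarySchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      minLength: 2,
      maxLength: 200,
      description: 'Search query: keywords, prompt name, or natural language description',
    },
    category: {
      type: 'string',
      enum: ['system', 'user', 'few-shot', 'chain-of-thought'],
      description: 'Filter results to a specific prompt category',
    },
    limit: {
      type: 'integer',
      minimum: 1,
      maximum: 50,
      default: 10,
      description: 'Maximum number of results to return',
    },
    sortBy: {
      type: 'string',
      enum: ['relevance', 'date', 'name'],
      default: 'relevance',
      description: 'How to sort results',
    },
  },
  // Only query is required; the optional and defaulted parameters are not.
  required: ['query'],
}
```

Reading this schema, the model knows it cannot invent a fifth category, cannot ask for 500 results, and can safely omit limit and sortBy.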

Context Delivery Through Tools

Tools are not just action mechanisms. They are context delivery channels. Every tool response is an opportunity to inject structured, relevant information into the model's context at exactly the moment the model needs it. This is fundamentally more efficient than pre-loading context into the system prompt, because it delivers information on demand rather than consuming context budget upfront.

The pattern is straightforward: instead of stuffing everything the model might need into the system prompt, provide tools that let the model fetch specific context when its reasoning requires it. A code review tool does not need the entire codebase in the system prompt. It needs a tool that can retrieve specific files when the review process identifies a need for additional context.

Tools are not just functions the model can call. They are context delivery mechanisms that provide exactly the right information at exactly the right time.

context-delivery.ts
// Tool that delivers context, not just data
server.tool(
  'get-prompt-context',
  'Retrieves a prompt along with its related context: the prompts ' +
  'it references, the tools it mentions, and its evaluation results. ' +
  'Use this when you need to understand a prompt in its full operational context.',
  {
    promptId: z.string().describe('The prompt identifier'),
  },
  async ({ promptId }) => {
    const prompt = await db.prompts.findById(promptId)
    if (!prompt) {
      return {
        content: [{
          type: 'text' as const,
          text: `Prompt "${promptId}" not found. Use list-prompts to see available IDs.`,
        }],
        isError: true,
      }
    }

    // Deliver rich context, not just raw data
    const context = {
      prompt: {
        id: prompt.id,
        text: prompt.text,
        version: prompt.version,
      },
      // Related prompts this one references or chains with
      relatedPrompts: await db.prompts.findRelated(promptId),
      // Tools this prompt instructs the model to use
      referencedTools: extractToolReferences(prompt.text),
      // Recent evaluation scores
      evaluations: await db.evaluations.getRecent(promptId, 5),
      // Usage guidance for the model
      guidance: 'This prompt is part of the code review pipeline. ' +
        'Evaluation scores below 0.8 indicate areas needing improvement. ' +
        'Related prompts are chained in the order shown.',
    }

    return {
      content: [{
        type: 'text' as const,
        text: JSON.stringify(context, null, 2),
      }],
    }
  }
)

The "guidance" field in the response is a critical pattern. It is natural language context about the data -- telling the model not just what the data is, but how to interpret it. This is metadata for the model, embedded in the tool response. Without it, the model has to infer how to use the data from the data itself, which is less reliable than explicit guidance.

Error Handling Patterns

Error handling in MCP tools requires a different philosophy than traditional API error handling. In a traditional API, errors are consumed by code that can branch on error types. In MCP, errors are consumed by a language model that needs to understand what went wrong, whether to retry, and how to communicate the failure to the user. This means error responses must be informative, actionable, and written in natural language.

error-handling.ts
// Error handling pattern for MCP tools
server.tool(
  'update-prompt-version',
  'Creates a new version of an existing prompt. Use this when the user ' +
  'wants to modify a prompt and preserve its version history.',
  {
    promptId: z.string().describe('The prompt to update'),
    newText: z.string().min(1).describe('The updated prompt text'),
    changeNote: z.string().optional().describe('Description of what changed and why'),
  },
  async ({ promptId, newText, changeNote }) => {
    try {
      const existing = await db.prompts.findById(promptId)

      if (!existing) {
        return {
          content: [{
            type: 'text' as const,
            text: `Error: Prompt "${promptId}" does not exist. ` +
                  `To see available prompts, use the list-prompts tool. ` +
                  `To create a new prompt, use create-prompt instead.`,
          }],
          isError: true,
        }
      }

      if (newText === existing.text) {
        return {
          content: [{
            type: 'text' as const,
            text: `No changes detected. The new text is identical to version ` +
                  `${existing.version} of "${promptId}". If you intended to make ` +
                  `changes, please verify the new text differs from the current version.`,
          }],
          isError: true,
        }
      }

      const updated = await db.prompts.createVersion({
        promptId,
        text: newText,
        changeNote: changeNote || 'No change note provided',
        previousVersion: existing.version,
      })

      return {
        content: [{
          type: 'text' as const,
          text: JSON.stringify({
            success: true,
            promptId,
            previousVersion: existing.version,
            newVersion: updated.version,
            changeNote: updated.changeNote,
          }, null, 2),
        }],
      }
    } catch (error) {
      return {
        content: [{
          type: 'text' as const,
          text: `Failed to update prompt "${promptId}": ${error instanceof Error ? error.message : 'Unknown error'}. ` +
                `This may be a temporary issue. You can retry this operation, ` +
                `or use get-prompt-by-id to verify the current state of the prompt.`,
        }],
        isError: true,
      }
    }
  }
)

Every error response should answer three questions for the model: What went wrong? Is the error recoverable? What should the model do next? An error message that says "Not found" answers only the first question. An error message that says "Prompt not found. Use list-prompts to see available IDs." answers all three.
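One way to enforce the three-question discipline is a small helper shared by every tool on the server. The helper below is illustrative (it is not part of the MCP SDK), but it shows how the structure of the error message can be made impossible to skip:

```typescript
// Hypothetical helper: every error must answer what went wrong,
// whether it is recoverable, and what the model should do next.
interface ToolErrorInfo {
  what: string         // What went wrong
  recoverable: boolean // Is the error recoverable?
  nextStep: string     // What should the model do next
}

function toolError({ what, recoverable, nextStep }: ToolErrorInfo) {
  const retryHint = recoverable
    ? 'This may be a temporary issue; you can retry this operation.'
    : 'Retrying will not help.'
  return {
    content: [{
      type: 'text' as const,
      text: `Error: ${what} ${retryHint} ${nextStep}`,
    }],
    isError: true,
  }
}

// Usage inside a tool handler:
const response = toolError({
  what: 'Prompt "sys-codereview-v3" does not exist.',
  recoverable: false,
  nextStep: 'Use list-prompts to see available IDs, or create-prompt to make a new one.',
})
console.log(response.content[0].text)
```

Because the handler must fill in all three fields to construct an error at all, "Not found" with no guidance can never reach the model.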

Building With MCP

MCP is not just a protocol specification. It is a design philosophy for how AI models should interact with the external world. The core principles -- standardized tool registration, typed parameter schemas, context-rich responses, informative error handling -- apply regardless of which specific MCP library or transport you use. They apply even if you are building custom function-calling integrations outside the MCP ecosystem.

The tools you expose to a model are, in a very real sense, the model's hands. The quality of those tools determines the quality of the model's actions. A model with well-designed tools -- clearly named, tightly parameterized, richly documented, gracefully failing -- will outperform a more capable model with poorly designed tools. Tool quality is a multiplier on model capability.

The PEP project's MCP server implements these patterns across its entire tool surface: prompt management, evaluation, library search, and deployment. Each tool was designed not just to perform a function, but to be a good collaborator with the model -- providing context, handling errors gracefully, and constraining inputs to the range of values that produce useful results.



Key Takeaways

1. MCP standardizes tool registration, invocation, and response handling between AI clients and tool servers, eliminating the integration plumbing that slows AI development.

2. Use verb-noun naming for tools (get-prompt-by-id, search-prompt-library). Models select tools by scanning names first, then reading descriptions -- clear names reduce selection errors.

3. Design parameters with specific types, constraints, enums, and descriptions. Tight parameter schemas reduce model errors and serve as self-documentation.

4. Tools are context delivery mechanisms, not just action functions. Include interpretive guidance in tool responses to help the model understand the data, not just receive it.

5. Error responses must answer three questions for the model: what went wrong, is it recoverable, and what should the model do next.

