THE AI ENGINEERING REVIEW Model Context Protocol · Deep Dive 2025 Edition
Featured Article · Architecture & Implementation

Model Context Protocol

The universal language that lets AI models talk to your tools, databases, and file systems — explained from first principles.

ELI18 · Plain English for the technically curious
ELI10 · Analogies anyone can follow
CONFIG · Real-world setup walkthroughs
EXAMPLES · Filesystem + SQLite in action

What Is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard introduced by Anthropic that defines a universal way for AI models to connect to external tools, data sources, and services. Think of it as a common plug socket — just as all your appliances use the same wall socket regardless of who made them, MCP allows any AI model to connect to any tool using the same standard interface.

Before MCP, every integration between an AI and a tool was bespoke. A company wanting Claude to read their database would need to write custom code, different from the custom code needed to connect GPT-4 to that same database. MCP eliminates this duplication by defining a protocol that any model and any tool can implement once and then interoperate freely.

"MCP is to AI tools what USB is to computer peripherals — one standard, infinite possibilities." — Model Context Protocol Design Philosophy

At its core, MCP defines three primitives that any server can expose:

01 · Tools

Functions the AI can call to take actions — like running a SQL query, reading a file, or sending an API request. Tools are the verbs of MCP.

02 · Resources

Data sources the AI can read from — like a file, a database table, or a URL. Resources are the nouns of MCP — they provide context to the model.

03 · Prompts

Pre-written instruction templates that guide the AI on how to use the server effectively. Prompts are the instructions that come in the box.
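To make the first of these concrete: when a client asks a server what it can do, the server answers with tool definitions — a name, a description, and a JSON Schema for the inputs. The sketch below follows the tool definition shape in the MCP specification; the query tool shown is illustrative, modelled on the SQLite server described later in this article.

tools/list response · how a server advertises a tool (illustrative)
{
  "tools": [
    {
      "name": "query",
      "description": "Execute a SQL statement against the connected database",
      "inputSchema": {
        "type": "object",
        "properties": {
          "sql": { "type": "string", "description": "SQL statement to execute" }
        },
        "required": ["sql"]
      }
    }
  ]
}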

From Custom Integrations to a Universal Standard

The story of MCP is the story of a problem that every AI developer kept running into — and one team's decision to solve it properly rather than patch around it.

2022–2023
The Age of Custom Integrations
As large language models become practical, developers start building tool-use capabilities. Every integration is bespoke — OpenAI plugins, LangChain tools, custom function calling. The ecosystem is fragmented and duplicative.
Early 2024
Anthropic Identifies the Pattern
Teams at Anthropic repeatedly encounter the same integration friction. Writing a tool connector for one model means rewriting it for another. A standardised protocol is sketched internally, inspired by the Language Server Protocol (LSP) from the developer tooling world.
November 2024
MCP Goes Public
Anthropic publicly releases the Model Context Protocol specification and open-sources a suite of reference server implementations including filesystem, SQLite, GitHub, Google Drive, Slack, PostgreSQL and more. The specification is published at modelcontextprotocol.io.
Early 2025
Ecosystem Explodes
The community builds hundreds of MCP servers. Major platforms including Replit, Sourcegraph, Zed, and others adopt the standard. Microsoft, Google and OpenAI begin evaluating compatibility. Community npm packages like mcp-sqlite emerge.
Mid 2025
Industry Standard Emerges
MCP becomes the de facto standard for AI tool integration. Claude Desktop ships with MCP support built in. Enterprise adoption accelerates as organisations realise they can build one server and connect any AI to it.

How MCP Works Under the Hood

MCP follows a clean client–server architecture built on top of JSON-RPC 2.0. Every component has a defined role and communicates over a well-specified channel.

🖥️ HOST APPLICATION · Claude Desktop / Claude Code
      │  JSON-RPC 2.0 over stdio / SSE
      ▼
🔌 MCP CLIENT · embedded in the host; manages connections to servers
      │  tool calls · resources · prompts
      ▼
⚙️ MCP SERVER · filesystem / sqlite / github / etc.
      │
      ▼
📁 LOCAL FILES · 🗄️ DATABASES · 🌐 APIS / WEB · ☁️ CLOUD SERVICES

MCP servers sit between the AI and your data sources.

The Communication Flow

When you ask Claude to "query my database", here is exactly what happens:

  1. Claude (the model running inside the host application) decides it needs to use a tool. The host sends a tools/call request to the MCP client.
  2. The MCP client routes the request to the correct server process over stdin/stdout (for local servers) or HTTP+SSE (for remote servers).
  3. The MCP server receives the JSON-RPC message, executes the real operation (e.g. runs the SQL query), and returns a structured result.
  4. The result flows back through the client into Claude's context, and Claude uses it to formulate its response to you.

Transport Mechanisms

MCP supports two transport layers depending on where your server lives:

stdio (Standard Input/Output)
Used for local servers. The host launches the server as a child process and communicates via stdin/stdout pipes. This is what your npx commands use. Zero network overhead, inherits the user's local permissions.
HTTP + SSE (Server-Sent Events)
Used for remote servers. The server runs as a web service. Claude connects via HTTP and receives streaming events via SSE. This is how cloud-hosted MCP servers (Google Drive, Slack, etc.) work.
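You can watch the stdio transport in action without any host application: launch a server as a child process and pipe newline-delimited JSON-RPC into it, one message per line. The sketch below assumes the official filesystem server and sends the initialize handshake that servers expect before answering requests; the protocolVersion string and client name are illustrative.

Terminal · talking to a stdio server by hand (illustrative)
# Each JSON-RPC message is one line on stdin; responses come back on stdout.
printf '%s\n' \
  '{"jsonrpc":"2.0","id":0,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"manual-test","version":"0.0.1"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' \
  | npx -y @modelcontextprotocol/server-filesystem ~/Desktop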

MCP for the Technically Curious

You already know what an API is. An API is a contract — here are my endpoints, here are the inputs I accept, here are the outputs I return. MCP is essentially an API specification, but it's designed specifically for the interaction pattern between an AI model and a tool.

The critical difference is that MCP is model-agnostic and tool-agnostic. Traditional APIs are point-to-point: you write a specific integration between System A and System B. MCP is hub-and-spoke: you write one MCP server that exposes your tool's capabilities, and any MCP-compatible AI client can use it immediately — no further integration work.

Without MCP
Write a custom integration for Claude + your DB
Write another integration for GPT-4 + your DB
Write another integration for Gemini + your DB
Each integration is bespoke, brittle, hard to maintain
Switching AI models means rewriting integrations
No standard for auth, errors, streaming, or discovery
vs
With MCP
Write one MCP server for your DB — done
Claude connects to it automatically
Any other MCP-compatible model connects too
Standard error handling, auth patterns, streaming
Switch AI models without touching your tool code
Discover available tools via standard capability listing

The JSON-RPC Layer

Under the hood, MCP uses JSON-RPC 2.0 — a lightweight remote procedure call protocol. When Claude wants to list your database tables, it sends something like this over stdin to the MCP server process:

JSON-RPC request from Claude → SQLite MCP server
{ "jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": { "name": "list_tables", "arguments": {} } }

The SQLite server processes this, queries sqlite_master, and returns:

JSON-RPC response from SQLite MCP server → Claude
{ "jsonrpc": "2.0", "id": 1, "result": { "content": [{ "type": "text", "text": "[{\"name\":\"bronze_raw_sales\"},{\"name\":\"silver_customers\"},...]" }] } }

MCP in Plain English

Imagine you have a very smart robot assistant. This robot is incredibly good at thinking and answering questions, but it lives inside a box and can't see or touch anything in the real world by itself.

🤖 The Robot (Claude)
The AI lives inside a box. It's brilliant at reasoning, but it can only work with information you directly hand to it. On its own it can't open your files or check your database.
🔧 The Helper Machines (MCP Servers)
These are little specialist machines that sit next to the robot. One knows how to read files. Another knows how to talk to databases. Each one is really good at one specific job.
📋 The Common Language (MCP Protocol)
MCP is like a walkie-talkie system all the machines use. Everyone speaks the same language — so the robot can ask any helper machine for help without needing to learn a different way to talk to each one.
💬 A Real Conversation
You ask: "What's in my database?" The robot says to the database helper: "List tables please." The helper looks inside and says: "There are 10 tables." The robot tells you the answer.
"You don't need to teach the robot how databases work. You teach the database helper machine to speak the robot's language — once — and then they can always work together."

The really clever part is that because all the helper machines speak the same language (MCP), you only have to build each helper machine once. After that, any robot — not just Claude, but any AI that speaks MCP — can use it. It's like building a LEGO connector piece that works with every LEGO set ever made.

3 · Core Primitives: Tools · Resources · Prompts
2 · Transport Layers: stdio · HTTP+SSE
1 · Open Standard: modelcontextprotocol.io
∞ · Possible Integrations: Any tool, any model

Setting Up MCP for Claude Desktop & Claude Code

MCP servers are configured in a JSON file that Claude reads on startup. The file tells Claude which servers to launch, how to launch them, and what arguments to pass.

Claude Desktop — config file location

🪟 Windows
%APPDATA%\Claude\claude_desktop_config.json
🍎 macOS
~/Library/Application Support/Claude/claude_desktop_config.json

Anatomy of a Config File

claude_desktop_config.json
{ "mcpServers": { // ── Filesystem Server (Official — Anthropic) ────────────── "filesystem": { "command": "npx", // launch via Node.js package runner "args": [ "-y", // auto-accept install prompt "@modelcontextprotocol/server-filesystem", "/Users/yourname/Desktop" // root directory Claude can access ] }, // ── SQLite Server (Community — mcp-sqlite) ──────────────── "sqlite": { "command": "npx", "args": [ "-y", "mcp-sqlite", // community package "/path/to/your/database.db" ] }, // ── Remote Server via HTTP+SSE (e.g. Google Drive) ──────── "google-drive": { "type": "sse", "url": "https://drivemcp.googleapis.com/mcp/v1" } } }

Claude Code — CLI Setup

If you use Claude Code (the terminal-based agentic coding tool), you add MCP servers directly from the command line:

Terminal — Claude Code MCP setup
# Add the filesystem server
claude mcp add filesystem -- npx -y @modelcontextprotocol/server-filesystem /your/path

# Add a SQLite server
claude mcp add sqlite -- npx -y mcp-sqlite /path/to/database.db

# List all configured servers
claude mcp list

# Remove a server
claude mcp remove sqlite

Prerequisites

For stdio-based MCP servers running via npx, you need Node.js 18+ installed on your machine. Verify with:

Terminal
node --version   # should print v18.0.0 or higher
npx --version    # comes bundled with Node

The Filesystem & SQLite Servers in Action

The two most commonly used local MCP servers are the filesystem server (official, by Anthropic) and the SQLite server (community-built). Here's what they expose and how they differ.

Filesystem Server — Tools

Built by Anthropic · TypeScript · github.com/modelcontextprotocol/servers

Tool What it does Example use
read_file Read full contents of a file "Read my config.json"
write_file Create or overwrite a file "Save this code to main.py"
edit_file Make targeted partial edits "Fix line 42 in app.js"
list_directory Show files and folders in a directory "What's on my desktop?"
create_directory Make a new folder "Create a /reports folder"
move_file Move or rename a file "Rename draft.txt to final.txt"
search_files Recursive search by name pattern "Find all .csv files"
get_file_info Metadata: size, dates, type "When was report.pdf last modified?"

SQLite Server — Tools

Community-built · TypeScript · better-sqlite3 driver · npmjs.com/package/mcp-sqlite

Tool What it does Example use
query Execute any raw SQL statement "SELECT * FROM silver_sales"
list_tables List all user tables in the DB "What tables exist?"
get_table_schema Get column definitions for a table "Show me the schema of gold_customer_360"
create_record Insert a row via JSON object "Add a new product record"
update_records Update rows matching conditions "Mark PROD010 as inactive"
delete_records Delete rows matching conditions "Remove the test batch"
db_info File path, size, table count, modified date "How big is my database?"
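Each of these tools is invoked through the same tools/call method shown earlier; only the name and arguments change. As a sketch, here is what the first row of the table might look like on the wire, assuming the query tool takes a single sql argument (the request id is illustrative):

JSON-RPC request · calling the SQLite server's query tool (illustrative)
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "sql": "SELECT * FROM silver_sales" }
  }
}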

MCP Works With Any Database

SQLite is the simplest entry point — it's a local file, zero infrastructure, perfect for learning. But the MCP database ecosystem covers the full spectrum of production-grade databases. Because every MCP server exposes the same three primitives (Tools, Resources, Prompts), swapping from SQLite to PostgreSQL or Snowflake requires only a config file change — your Claude prompts and workflows stay identical.

"Write your data workflows once in natural language. Point the MCP config at a different database. Everything still works."

Official & Community Database MCP Servers

Database Package / Server Type Best For
SQLite mcp-sqlite Community · Local Local files, prototyping, learning MCP
PostgreSQL @modelcontextprotocol/server-postgres Official · Local/Remote Production apps, full SQL, JSONB, extensions
MySQL / MariaDB mcp-mysql Community · Remote Web apps, legacy systems, LAMP stacks
Microsoft SQL Server mcp-mssql Community · Remote Enterprise, Azure, .NET ecosystems
MongoDB mcp-mongodb Community · Remote Document stores, flexible schemas, real-time apps
Snowflake mcp-snowflake Community · Cloud Data warehousing, analytics at scale, BI workloads
BigQuery mcp-bigquery Community · Cloud Google Cloud analytics, petabyte-scale queries
Redis mcp-redis Community · Remote Caching layers, session stores, pub/sub
DuckDB mcp-duckdb Community · Local In-process analytics, Parquet/CSV querying, OLAP

Migrating From SQLite to PostgreSQL — Just a Config Change

The medallion architecture we built in SQLite could be pointed at a production PostgreSQL database with one edit to the config file. The natural language instructions to Claude stay exactly the same.

claude_desktop_config.json — swap SQLite for PostgreSQL
{ "mcpServers": { // ── BEFORE: local SQLite file ───────────────────────────── // "sqlite": { // "command": "npx", // "args": ["-y", "mcp-sqlite", "/path/to/medallion.db"] // }, // ── AFTER: production PostgreSQL ────────────────────────── "postgres": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-postgres", "postgresql://user:password@localhost:5432/medallion" ] }, // ── OR: Snowflake data warehouse ─────────────────────────── "snowflake": { "command": "npx", "args": [ "-y", "mcp-snowflake", "--account", "myorg-myaccount", "--database", "MEDALLION_DB", "--warehouse", "COMPUTE_WH" ] } } }

Multi-Database Workflows

You can connect multiple MCP database servers simultaneously. Claude will route each query to the right server automatically based on context. A real example: Claude reads raw CSV files from the filesystem MCP, queries a staging table in the PostgreSQL MCP, and writes results into a Snowflake warehouse MCP — all in one conversation.
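Connecting several servers at once is just a matter of listing them side by side in the same config. A sketch of the three-server setup described above, with placeholder paths and connection strings:

claude_desktop_config.json · three servers in one session (illustrative)
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data/raw_csv"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:password@staging:5432/etl"]
    },
    "snowflake": {
      "command": "npx",
      "args": ["-y", "mcp-snowflake", "--account", "myorg-myaccount", "--database", "WAREHOUSE_DB", "--warehouse", "COMPUTE_WH"]
    }
  }
}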

🏗️ Local Development
Use SQLite or DuckDB for fast local iteration. No server setup, no credentials, instant feedback. Build and test your entire schema and transform logic locally before touching production.
🏭 Production
Promote the exact same workflow to PostgreSQL, MySQL, or SQL Server — the databases your production apps already run on. Same Claude instructions, same layer logic, real data.
📊 Analytics Scale
For data warehouse workloads, connect Snowflake or BigQuery via their MCP servers. Claude can orchestrate medallion transforms across billions of rows in a cloud warehouse.
🔀 Hybrid Pipelines
Mix databases in a single workflow: ingest from MySQL → transform in PostgreSQL → archive to Snowflake. Claude coordinates the movement across all three through their respective MCP servers.

Building a Medallion Architecture with MCP

A powerful demonstration of MCP in practice: using the SQLite MCP server to design and populate a full Bronze → Silver → Gold medallion data architecture — entirely through natural language instructions to Claude, with no SQL IDE open.

Medallion Architecture · medallion.db · 10 Tables

🥉 Bronze: bronze_raw_sales · bronze_raw_customers · bronze_raw_products
      ▼  type cast · dedup · DQ flags · derive columns
🥈 Silver: silver_sales · silver_customers · silver_products
      ▼  aggregate · join · calculate KPIs · RFM signals
🥇 Gold: gold_sales_daily · gold_customer_360 · gold_product_performance · gold_sales_by_category

What Claude Did Through MCP

Phase 1 — Schema Design (filesystem + sqlite MCPs)

Using the SQLite MCP's query tool, Claude executed CREATE TABLE statements for all 10 tables with appropriate column types, constraints, default values, and indexes — all in response to plain English instructions like "create bronze layer tables for raw data ingestion with a load timestamp."
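The article doesn't reproduce the full DDL, but a sketch of one bronze table in the spirit of that instruction might look like this (column names are illustrative, not the exact schema Claude produced):

SQLite · bronze layer DDL sketch (illustrative)
-- Bronze keeps everything as raw TEXT; typing and cleaning happen in silver.
CREATE TABLE IF NOT EXISTS bronze_raw_sales (
    sale_id      TEXT,
    customer_id  TEXT,
    product_id   TEXT,
    quantity     TEXT,
    unit_price   TEXT,
    sale_status  TEXT,
    _loaded_at   TEXT DEFAULT (CURRENT_TIMESTAMP)   -- the requested load timestamp
);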

Phase 2 — Seeding Dirty Bronze Data

Claude inserted realistic but intentionally messy raw data: duplicate customer records, a sale with a negative quantity, a product where unit cost exceeded unit price, mixed-case text values. This simulated real-world ingestion problems.

Phase 3 — Bronze → Silver Transforms

Claude wrote INSERT INTO silver_* SELECT ... FROM bronze_* transforms that cast TEXT to proper types, derived computed columns (email domain, full name, margin %), normalised inconsistent values (ACTIVE/active/true/1 → 1), deduped records, and flagged bad rows with dq_is_valid = 0.
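A sketch of what one such transform could look like, using the illustrative columns from the bronze sketch above (the dedup relies on a window function, available in SQLite 3.25+):

SQLite · bronze → silver transform sketch (illustrative)
INSERT INTO silver_sales (sale_id, customer_id, product_id, quantity, unit_price, revenue, dq_is_valid)
SELECT
    sale_id,
    customer_id,
    product_id,
    CAST(quantity AS INTEGER),
    CAST(unit_price AS REAL),
    CAST(quantity AS INTEGER) * CAST(unit_price AS REAL),        -- derived column
    CASE WHEN CAST(quantity AS INTEGER) > 0 THEN 1 ELSE 0 END    -- flag bad rows
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY sale_id ORDER BY _loaded_at DESC) AS rn
    FROM bronze_raw_sales
)
WHERE rn = 1;   -- dedupe: keep only the latest load of each sale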

Phase 4 — Silver → Gold Aggregations

Claude joined and aggregated silver tables into four business-ready gold tables: daily sales with status pivot, customer 360 with RFM signals, product performance with stock coverage days, and category revenue with month-over-month growth — all verified to produce consistent revenue totals of $5,279.65 across every gold table.
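As a simplified sketch, the daily sales aggregation could be expressed like this, counting only rows that passed the silver quality checks (column names illustrative; the article's full version also pivots by order status):

SQLite · silver → gold aggregation sketch (illustrative)
INSERT INTO gold_sales_daily (sale_date, order_count, total_revenue)
SELECT
    sale_date,
    COUNT(*)      AS order_count,
    SUM(revenue)  AS total_revenue
FROM silver_sales
WHERE dq_is_valid = 1          -- exclude rows flagged in the silver transform
GROUP BY sale_date;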

"The entire medallion architecture — 10 tables, 3 layers, all transforms, all sample data — was built through natural language. Zero SQL IDE. Zero manual query writing."

CI/CD Pipelines Powered by AI Agents & MCP

One of the most powerful — and underexplored — applications of MCP is using it to give AI agents a role inside your CI/CD pipeline. Rather than just generating code or answering questions, an agent equipped with MCP tools can autonomously execute pipeline steps: run migrations, validate data quality, commit schema changes, and trigger deployments.

"An agent with MCP tools doesn't just tell you what to do — it does it, checks the result, and reports back. That's CI/CD."

What a Traditional CI/CD Pipeline Looks Like

📥 CODE PUSH → 🧪 TESTS → 🏗️ BUILD → 🚀 DEPLOY → ✅ VERIFY

An AI agent with MCP can participate in any of these stages.

How MCP Enables Agentic CI/CD

When Claude Code (or any MCP-compatible agent) runs inside your pipeline, it connects to the same MCP servers you use interactively. This means the exact same capabilities available in a chat session are available to the agent running headlessly in GitHub Actions, GitLab CI, or Jenkins.

Example 1 — Schema Migration Agent (GitHub Actions)

Triggered on every pull request that touches /migrations/, a Claude Code agent uses the PostgreSQL MCP to connect to a staging database, inspect the current schema, apply the migration SQL, verify the result, and post a structured report as a PR comment — all without a human manually running psql.

.github/workflows/db-migration-agent.yml
name: Database Migration Agent

on:
  pull_request:
    paths:
      - 'migrations/**'

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Claude Code migration agent
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          DATABASE_URL: ${{ secrets.STAGING_DB_URL }}
        run: |
          # Claude Code picks up MCP config from .claude/mcp.json in the repo;
          # -p runs headless (print mode) so the job exits when the task completes
          claude -p --mcp-config .claude/mcp.json \
            "Review all SQL files in /migrations that are new in this PR, connect to the staging database, apply them in order, verify each table exists after migration, and output a markdown report of what changed."
Example 2 — Medallion Data Quality Gate

After each nightly ETL run, an agent connects via the SQLite or PostgreSQL MCP, queries all silver and gold tables, checks that dq_is_valid = 0 row counts are within acceptable thresholds, compares revenue totals across gold tables for consistency, and either approves the pipeline run or triggers a Slack alert via a third MCP server — no human reviewer needed for routine runs.
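A sketch of the checks such an agent might run through the database MCP, reusing the medallion tables from the previous section (column names and the zero-delta expectation are illustrative):

SQL · data quality gate sketch (illustrative)
-- 1. Invalid-row count must stay within the agreed threshold
SELECT COUNT(*) AS invalid_rows
FROM silver_sales
WHERE dq_is_valid = 0;

-- 2. Revenue must agree across gold tables (delta should be 0)
SELECT (SELECT SUM(total_revenue) FROM gold_sales_daily)
     - (SELECT SUM(total_revenue) FROM gold_sales_by_category) AS revenue_delta;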

Example 3 — AI Code Review + Schema Drift Detection

An agent uses the filesystem MCP to read newly committed SQL migration files, the database MCP to compare the expected schema against what's actually in production, and the GitHub MCP to post a detailed PR comment flagging any drift, missing indexes, or type mismatches — acting as an automated database reviewer on every pull request.

The Agentic CI/CD Stack

⚡ Claude Code (Headless)
Runs non-interactively inside CI runners. Accepts a task as a command-line argument and executes it autonomously using whichever MCP servers are configured in the repo's .claude/mcp.json.
🔌 MCP Servers (In-Pipeline)
The same servers you run locally — filesystem, database, GitHub — run inside the CI environment. The agent has the same tool access it does on your laptop, but pointed at staging or production credentials.
🔁 Trigger Patterns
Run agents on: PR opened, file path changed, scheduled cron (nightly DQ check), deployment completed (post-deploy verify), or manual workflow dispatch for on-demand agentic tasks.
🛡️ Safety Controls
Scope MCP servers to read-only credentials in CI for DQ checks. Use separate write-enabled credentials only for migration agents. All agent actions are logged — the CI run log is your audit trail.
"The same MCP config file that powers your interactive Claude session can be committed to your repo and picked up by CI. Your agent's capabilities travel with your code."

Resources to Go Deeper

The MCP ecosystem is growing rapidly. These are the highest-signal resources for learning more.