Build, Secure, and Deploy a Custom MCP Server: From Tool Definition to Production
Before you begin
- Familiarity with MCP concepts — complete [How to Set Up MCP-Powered Coding Agents](/tutorials/mcp-coding-agents-setup) first
- Node.js 20+ and npm installed
- Basic PostgreSQL knowledge
- Docker and Docker Compose installed
What you'll learn
- Scaffold an MCP server with the official TypeScript SDK
- Define tools with Zod schemas for safe database querying
- Expose database schema as MCP resources for agent discovery
- Add API key authentication middleware
- Implement input validation, SQL injection prevention, and sensitive column masking
- Write unit and integration tests for MCP tools
- Containerize and deploy with Docker Compose
Most MCP tutorials stop at “here is a tool that returns hello world.” That is fine for learning the protocol, but it is useless when you need to connect AI agents to a real database, enforce authentication, prevent SQL injection, and package the whole thing for deployment. This tutorial closes that gap.
You will use the official TypeScript SDK to build, from scratch, an MCP server that goes well beyond a hello-world demo. By the end, you will have a fully working server that connects to PostgreSQL, exposes safe database querying tools, provides schema discovery resources, enforces API key authentication, validates all inputs, masks sensitive columns, handles errors cleanly, passes a test suite, and runs inside Docker Compose. The complete working code lives in the mcp-enterprise-starter repository.
If you have not worked with MCP before, start with How to Set Up MCP-Powered Coding Agents to get the fundamentals in place. If you want the conceptual overview of why custom MCP servers matter for enterprise workflows, read the companion blog post: Building Custom MCP Servers.
Step 1: Scaffold the MCP Server
Start by initializing the project and getting a minimal MCP server running that an agent can connect to.
Initialize the project
Create the project directory and install the dependencies you will need throughout this tutorial:
mkdir mcp-enterprise-starter && cd mcp-enterprise-starter
npm init -y
npm install @modelcontextprotocol/sdk zod pg
npm install -D typescript @types/node @types/pg vitest tsx
Set "type": "module" in your package.json since the MCP SDK uses ES modules.
Create a tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "dist",
"rootDir": "src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"declaration": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "tests"]
}
Create the server entry point
The MCP SDK provides a Server class that handles protocol negotiation, capability advertisement, and message routing. Your job is to instantiate it and connect it to a transport; you will register request handlers for tool listing and tool calls in the next step.
File: src/server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
function log(level: string, message: string, data?: Record<string, unknown>) {
const entry = { timestamp: new Date().toISOString(), level, message, ...data };
process.stderr.write(JSON.stringify(entry) + "\n");
}
const server = new Server(
{ name: "mcp-enterprise-starter", version: "1.0.0" },
{ capabilities: { tools: {}, resources: {} } },
);
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
log("info", "MCP Enterprise Starter server running on stdio");
}
main().catch((error) => {
log("error", "Failed to start server", {
error: error instanceof Error ? error.message : String(error),
});
process.exit(1);
});
A few things to notice here. The server uses stdio transport, which is the right choice for local development and for agents like Claude Desktop that spawn the server as a child process. All logging goes to stderr as structured JSON, because stdout is reserved for MCP protocol messages.
Never write to stdout in an MCP server. The MCP protocol uses stdout for communication between client and server. Any stray console.log will corrupt the protocol stream and cause connection failures.
Transport options beyond stdio
For remote or multi-user deployments where the server runs as a standalone HTTP service, use Streamable HTTP transport — the current standard in the MCP spec, replacing the earlier standalone SSE transport. The MCP TypeScript SDK provides Streamable HTTP server transports for this purpose. This tutorial focuses on stdio transport, which is the right choice for local development with Claude Desktop and single-user setups.
Test with Claude Desktop
Add scripts to package.json:
{
"scripts": {
"build": "tsc",
"start": "node dist/server.js",
"dev": "tsx src/server.ts",
"test": "vitest run",
"test:watch": "vitest"
}
}
To verify the server starts correctly, configure Claude Desktop to connect to it. Add this to your Claude Desktop MCP configuration file, using the absolute path to the entry point so the server does not depend on the client's working directory:
{
  "mcpServers": {
    "enterprise-starter": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-enterprise-starter/src/server.ts"]
    }
  }
}
Restart Claude Desktop and check the MCP server list. You should see mcp-enterprise-starter listed with no tools yet. If the server fails to start, check stderr output in the Claude Desktop logs.
You now have a working MCP server skeleton. It does not do anything useful yet, but the protocol handshake works.
Step 2: Define Your First Tool — Safe Database Query
This is where the server becomes useful. You will create a query_database tool that lets agents run read-only SQL queries against PostgreSQL with safety guardrails built in from the start.
Set up the database connection pool
File: src/utils/db.ts
import pg from "pg";
const { Pool } = pg;
let pool: pg.Pool | null = null;
export function getPool(): pg.Pool {
if (!pool) {
pool = new Pool({
connectionString:
process.env.DATABASE_URL ||
"postgres://mcp_user:mcp_password@localhost:5432/mcp_enterprise",
max: 10,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000,
});
}
return pool;
}
export async function healthCheck(): Promise<boolean> {
  try {
    const client = await getPool().connect();
    try {
      await client.query("SELECT 1");
      return true;
    } finally {
      // Release the client even if the probe query throws,
      // so a failed health check cannot leak a pooled connection.
      client.release();
    }
  } catch {
    return false;
  }
}
export async function shutdown(): Promise<void> {
if (pool) {
await pool.end();
pool = null;
}
}
The connection pool uses a DATABASE_URL connection string from the environment, falling back to local development credentials that match the Docker Compose setup in Step 8. The lazy initialization pattern means the pool is only created when the first query comes in. The shutdown function ensures clean disconnection when the server stops.
Use a dedicated read-only database user for your MCP server. Even if the application-level guardrails fail, the database user cannot execute DDL or DML write operations. This is defense in depth.
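Nothing shown in this tutorial actually calls shutdown(); the server imports it in Step 2 but never wires it up. Here is a minimal sketch (an assumed addition, not shown in the repository) that runs it on termination signals, with an injectable exit function so the behavior can be tested:

```typescript
// Sketch (assumed addition): run an async cleanup -- the pool shutdown()
// from src/utils/db.ts -- when the process receives SIGINT or SIGTERM,
// then exit. The exit parameter is injectable so tests can observe it.
type Cleanup = () => Promise<void>;

export function onShutdown(
  cleanup: Cleanup,
  exit: (code: number) => void = (code) => process.exit(code),
): () => void {
  const handler = () => {
    // Never let a failed cleanup block process exit.
    cleanup().catch(() => {}).then(() => exit(0));
  };
  process.once("SIGINT", handler);
  process.once("SIGTERM", handler);
  return handler;
}

// Usage in src/server.ts (names assume the files built in this tutorial):
//   import { shutdown } from "./utils/db.js";
//   onShutdown(shutdown);
```

The handler is returned only so tests can trigger it without sending a real signal; production code can ignore the return value.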
Define the query tool
The tool needs a Zod schema so the agent knows what inputs it accepts, and the handler needs to enforce safety rules before executing anything.
File: src/tools/query-database.ts
import { z } from "zod";
import { getPool } from "../utils/db.js";
import {
isSelectOnly,
enforceRowLimit,
maskSensitiveFields,
containsBlockedKeywords,
} from "../utils/sanitize.js";
import { validateInput, getAllowedTables } from "../middleware/validation.js";
import { queryError, validationError, timeoutError } from "../utils/errors.js";
export const QueryDatabaseInputSchema = z.object({
query: z.string().min(1).describe("SQL SELECT query to execute"),
params: z
.array(z.unknown())
.optional()
.default([])
.describe("Parameterized query values"),
});
export type QueryDatabaseInput = z.infer<typeof QueryDatabaseInputSchema>;
export const queryDatabaseTool = {
name: "query_database",
description:
"Execute a read-only SQL query against the database. Only SELECT statements are allowed. Results from sensitive columns (email, SSN) are automatically masked. Queries are limited to a maximum number of rows.",
inputSchema: {
type: "object" as const,
properties: {
query: {
type: "string",
description: "SQL SELECT query to execute",
},
params: {
type: "array",
items: {},
description: "Parameterized query values",
},
},
required: ["query"],
},
};
export async function handleQueryDatabase(args: unknown) {
const input = validateInput(QueryDatabaseInputSchema, args);
// Check for blocked keywords (DROP, DELETE, INSERT, etc.)
const blocked = containsBlockedKeywords(input.query);
if (blocked) {
throw validationError(
`Query contains blocked keyword: ${blocked}. Only SELECT queries are allowed.`,
);
}
// Ensure the query is SELECT-only
if (!isSelectOnly(input.query)) {
throw validationError(
"Only SELECT queries are allowed. Write operations are not permitted.",
);
}
// Validate referenced tables against the allowlist
const allowedTables = getAllowedTables();
const tablePattern = /\bFROM\s+(\w+)|\bJOIN\s+(\w+)/gi;
let tableMatch;
while ((tableMatch = tablePattern.exec(input.query)) !== null) {
const table = (tableMatch[1] || tableMatch[2]).toLowerCase();
if (!allowedTables.includes(table)) {
throw validationError(
`Query references table '${table}' which is not in the allowed tables list. Allowed tables: ${allowedTables.join(", ")}.`,
);
}
}
// Enforce row limits
const rowLimit = parseInt(process.env.ROW_LIMIT || "100", 10);
const maxRowLimit = parseInt(process.env.MAX_ROW_LIMIT || "1000", 10);
const limitedQuery = enforceRowLimit(input.query, rowLimit, maxRowLimit);
const pool = getPool();
try {
    const timeoutMs = 30000;
    // Promise.race only abandons the client-side promise; the query can
    // keep running on the server. Pair this with a statement_timeout on
    // the database role for a hard cap.
    const result = await Promise.race([
      pool.query(limitedQuery, input.params),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(timeoutError()), timeoutMs),
      ),
    ]);
const maskedRows = maskSensitiveFields(
result.rows as Record<string, unknown>[],
);
return {
content: [
{
type: "text" as const,
text: JSON.stringify(
{
rowCount: result.rowCount,
rows: maskedRows,
fields: result.fields.map((f) => ({
name: f.name,
dataType: f.dataTypeID,
})),
},
null,
2,
),
},
],
};
} catch (error) {
if (error instanceof Error && error.name === "McpToolError") throw error;
const message = error instanceof Error ? error.message : "Unknown query error";
throw queryError(`Query execution failed: ${message}`, true);
}
}
There is a lot happening here, and all of it is intentional. The BLOCKED_KEYWORDS check catches destructive SQL before it reaches the database. The table allowlist (loaded from the ALLOWED_TABLES environment variable) ensures agents can only query tables you have explicitly approved. The row limit prevents an agent from running SELECT * FROM users on a table with ten million rows and blowing up the response. Parameterized queries are supported through the params field so that values are never interpolated into the SQL string.
The sanitization helpers (containsBlockedKeywords, isSelectOnly, enforceRowLimit, maskSensitiveFields) live in a separate src/utils/sanitize.ts module. You will build that in Step 5.
Define a table listing tool
Agents need a way to discover what tables are available before they try to query them.
File: src/tools/list-tables.ts
import { getAllowedTables } from "../middleware/validation.js";
export const listTablesTool = {
name: "list_tables",
description:
"List all database tables available for querying. Returns table names from the configured allowlist.",
inputSchema: {
type: "object" as const,
properties: {},
required: [] as string[],
},
};
export async function handleListTables() {
const tables = getAllowedTables();
return {
content: [
{
type: "text" as const,
text: JSON.stringify(
{
tables,
count: tables.length,
note: "Use the get_schema tool to inspect column details for any table.",
},
null,
2,
),
},
],
};
}
Register the tools with the server
Update src/server.ts to register both tools using setRequestHandler:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { requireAuth } from "./middleware/auth.js";
import { checkRateLimit } from "./middleware/rate-limit.js";
import {
queryDatabaseTool,
handleQueryDatabase,
} from "./tools/query-database.js";
import { listTablesTool, handleListTables } from "./tools/list-tables.js";
import { getSchemaTool, handleGetSchema } from "./tools/get-schema.js";
import { McpToolError } from "./utils/errors.js";
import { shutdown } from "./utils/db.js";
// ... log function and server instantiation from Step 1 ...
server.setRequestHandler(ListToolsRequestSchema, async () => {
log("info", "Listing tools");
return {
tools: [queryDatabaseTool, listTablesTool, getSchemaTool],
};
});
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
const startTime = Date.now();
log("info", "Tool call received", { tool: name, args });
try {
const apiKey = process.env.MCP_API_KEY || process.env.API_KEYS?.split(",")[0];
const authCtx = requireAuth(apiKey);
checkRateLimit(authCtx.apiKey);
let result;
switch (name) {
case "query_database":
result = await handleQueryDatabase(args);
break;
case "list_tables":
result = await handleListTables();
break;
case "get_schema":
result = await handleGetSchema(args);
break;
default:
return {
isError: true,
content: [{ type: "text" as const, text: `Unknown tool: ${name}` }],
};
}
log("info", "Tool call completed", {
tool: name,
durationMs: Date.now() - startTime,
});
return result;
} catch (error) {
if (error instanceof McpToolError) {
return error.toResponse();
}
return {
isError: true,
content: [{ type: "text" as const, text: "An unexpected error occurred" }],
};
}
});
Notice the tool descriptions. They tell the agent what the tool does, when to use it, and what to expect. This matters because the agent decides whether to call a tool based primarily on its name and description. Vague descriptions like “query stuff” lead to unreliable tool selection. Clear descriptions make the agent consistently pick the right tool for the right task.
Also notice that every tool call goes through auth and rate limiting before reaching the tool handler. The McpToolError catch ensures structured error responses flow back to the agent, while unexpected errors get a generic message that does not leak internal details.
Test this by restarting Claude Desktop and asking: “What tables are available?” The agent should call list_tables and return the allowlist.
Step 3: Add a Resource — Schema Discovery
Tools let agents take actions. Resources let agents read structured data. For a database MCP server, the most useful resource is the schema itself — table names, column names, data types, and constraints. When an agent can read the schema before writing a query, it produces dramatically better SQL.
Create the schema resource
File: src/resources/schema.ts
import { getPool } from "../utils/db.js";
import { getAllowedTables, isTableAllowed } from "../middleware/validation.js";
import { validationError } from "../utils/errors.js";
export function getSchemaResources() {
const tables = getAllowedTables();
return tables.map((table) => ({
uri: `db://schema/${table}`,
name: `${table} table schema`,
description: `Column definitions and constraints for the ${table} table`,
mimeType: "application/json",
}));
}
export function getSchemaResourceTemplates() {
return [
{
uriTemplate: "db://schema/{table_name}",
name: "Table schema",
description: "Column definitions and constraints for a database table",
mimeType: "application/json",
},
];
}
export async function readSchemaResource(uri: string) {
const match = uri.match(/^db:\/\/schema\/(.+)$/);
if (!match) {
throw validationError(`Invalid resource URI: ${uri}`);
}
const tableName = match[1];
if (!isTableAllowed(tableName)) {
throw validationError(
`Table '${tableName}' is not in the allowed tables list.`,
);
}
const pool = getPool();
const result = await pool.query(
`SELECT column_name, data_type, is_nullable, column_default, character_maximum_length
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = $1
ORDER BY ordinal_position`,
[tableName],
);
const schema = {
table: tableName,
columns: result.rows.map((row) => ({
name: row.column_name,
type: row.data_type,
nullable: row.is_nullable === "YES",
default: row.column_default,
maxLength: row.character_maximum_length,
})),
};
return {
contents: [
{
uri,
mimeType: "application/json",
text: JSON.stringify(schema, null, 2),
},
],
};
}
Register resources with the server
Add these handlers to src/server.ts alongside your tool handlers:
import {
ListResourcesRequestSchema,
ReadResourceRequestSchema,
ListResourceTemplatesRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import {
getSchemaResources,
getSchemaResourceTemplates,
readSchemaResource,
} from "./resources/schema.js";
server.setRequestHandler(ListResourcesRequestSchema, async () => {
log("info", "Listing resources");
return { resources: getSchemaResources() };
});
server.setRequestHandler(ListResourceTemplatesRequestSchema, async () => {
return { resourceTemplates: getSchemaResourceTemplates() };
});
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const { uri } = request.params;
log("info", "Reading resource", { uri });
return await readSchemaResource(uri);
});
The template URI db://schema/{table_name} lets the agent request schema for a specific table. The resource list exposes all allowlisted tables upfront so the agent can discover them. Both respect the table allowlist, so even the resource layer cannot leak schema information about tables you have not approved.
MCP resources are agent-readable by default. An agent that can read your full database schema could learn about tables containing sensitive data, even if it cannot query them directly. The allowlist prevents that.
Step 4: Add Authentication
Everything you have built so far is open — any process that can connect to the server can use it. For local development with Claude Desktop over stdio, that is acceptable because access is already controlled by who can run the process. For production deployments, you need authentication.
Create the auth middleware
File: src/middleware/auth.ts
import { authError } from "../utils/errors.js";
export interface AuthContext {
apiKey: string;
permissions: "read" | "read-write";
}
function getApiKeys(): Map<string, AuthContext> {
const keys = new Map<string, AuthContext>();
const envKeys = process.env.API_KEYS || "";
for (const key of envKeys.split(",").map((k) => k.trim()).filter(Boolean)) {
keys.set(key, {
apiKey: key,
permissions: "read",
});
}
return keys;
}
export function authenticate(apiKey: string | undefined): AuthContext {
if (!apiKey) {
throw authError(
"Missing API key. Provide an API key via the x-api-key header or MCP_API_KEY environment variable.",
);
}
const validKeys = getApiKeys();
const context = validKeys.get(apiKey);
if (!context) {
throw authError("Invalid API key. Check your API key and try again.");
}
return context;
}
export function requireAuth(apiKey: string | undefined): AuthContext {
return authenticate(apiKey);
}
The auth system is deliberately simple. API keys are loaded from the API_KEYS environment variable as a comma-separated list (e.g., dev-key-1,dev-key-2). Each key maps to a permission level. For production systems, you would replace this with OAuth2 or mTLS, but API keys are the right starting point because they are easy to configure in Claude Desktop and other MCP clients.
Never hardcode API keys in your MCP server configuration or source code. Always load credentials from environment variables. For production, use a secrets manager. For development, use a .env file that is excluded from version control.
Wire authentication into tool handlers
Authentication is already wired into the CallToolRequestSchema handler from Step 2. Every tool call runs through requireAuth(apiKey) before reaching the tool handler. If the key is missing or invalid, the server returns a structured auth error that the agent can understand.
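That handler also calls checkRateLimit from src/middleware/rate-limit.ts, a module the tutorial never shows. Here is a minimal in-memory sketch consistent with the RATE_LIMIT_RPM variable used in Step 8; it throws a plain Error where the repository version would throw rateLimitError() from src/utils/errors.ts:

```typescript
// Minimal sliding-window rate limiter sketch for src/middleware/rate-limit.ts.
// Tracks call timestamps per API key over a fixed 60-second window.
const WINDOW_MS = 60_000;
const callLog = new Map<string, number[]>();

export function checkRateLimit(apiKey: string, now = Date.now()): void {
  const limit = parseInt(process.env.RATE_LIMIT_RPM || "60", 10);
  // Keep only calls that are still inside the window.
  const recent = (callLog.get(apiKey) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= limit) {
    const retryAfter = Math.ceil((WINDOW_MS - (now - recent[0])) / 1000);
    // In the repo this would be: throw rateLimitError(retryAfter)
    throw new Error(`Rate limit exceeded. Retry after ${retryAfter} seconds.`);
  }
  recent.push(now);
  callLog.set(apiKey, recent);
}
```

Because the state lives in process memory, this only works for a single server instance; a multi-instance deployment would need a shared store such as Redis.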
For stdio transport, the API key comes from the MCP_API_KEY or API_KEYS environment variable set in the MCP client configuration. For HTTP transport, it would come from request headers. This design means the same auth code works for both transports.
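For example, a Claude Desktop entry that supplies the key through the environment might look like this (the env block is part of Claude Desktop's MCP configuration; the key values are the development keys from the Step 8 compose file):

```json
{
  "mcpServers": {
    "enterprise-starter": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-enterprise-starter/src/server.ts"],
      "env": {
        "API_KEYS": "dev-key-1,dev-key-2",
        "MCP_API_KEY": "dev-key-1"
      }
    }
  }
}
```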
Step 5: Add Input Validation and Safety Guardrails
You already have the query tool referencing sanitization helpers from Step 2. This step builds those helpers — the validation middleware, SQL sanitization, and sensitive column masking.
Create the validation middleware
File: src/middleware/validation.ts
import { z } from "zod";
import { validationError } from "../utils/errors.js";
export function validateInput<T>(schema: z.ZodSchema<T>, input: unknown): T {
const result = schema.safeParse(input);
if (!result.success) {
const issues = result.error.issues
.map((i) => `${i.path.join(".")}: ${i.message}`)
.join("; ");
throw validationError(`Invalid input: ${issues}`);
}
return result.data;
}
export function getAllowedTables(): string[] {
const tables = process.env.ALLOWED_TABLES || "departments,users,projects";
return tables.split(",").map((t) => t.trim().toLowerCase());
}
export function isTableAllowed(tableName: string): boolean {
return getAllowedTables().includes(tableName.toLowerCase());
}
The validateInput function wraps Zod’s safeParse and throws a structured McpToolError on failure. This means validation errors automatically flow back to the agent as structured responses with clear messages about what was wrong.
Build the sanitization helpers
File: src/utils/sanitize.ts
const BLOCKED_KEYWORDS = [
"DROP", "DELETE", "INSERT", "UPDATE", "ALTER", "CREATE",
"TRUNCATE", "GRANT", "REVOKE", "EXEC", "EXECUTE",
"INTO", "SET", "MERGE", "REPLACE", "LOCK", "UNLOCK",
];
const BLOCKED_PATTERN = new RegExp(
`\\b(${BLOCKED_KEYWORDS.join("|")})\\b`, "i",
);
export function containsBlockedKeywords(query: string): string | null {
const match = query.match(BLOCKED_PATTERN);
return match ? match[1].toUpperCase() : null;
}
export function isSelectOnly(query: string): boolean {
const trimmed = query.trim().replace(/;+$/, "").trim();
return /^SELECT\b/i.test(trimmed) &&
!containsBlockedKeywords(trimmed.replace(/^SELECT\b/i, ""));
}
export function enforceRowLimit(
query: string,
limit: number,
maxLimit: number,
): string {
const effectiveLimit = Math.min(limit, maxLimit);
const trimmed = query.trim().replace(/;+$/, "");
if (/\bLIMIT\s+\d+/i.test(trimmed)) {
return trimmed.replace(
/\bLIMIT\s+(\d+)/i,
(_match, n) => `LIMIT ${Math.min(parseInt(n, 10), effectiveLimit)}`,
);
}
return `${trimmed} LIMIT ${effectiveLimit}`;
}
export function maskSensitiveFields(
rows: Record<string, unknown>[],
): Record<string, unknown>[] {
const sensitive = getSensitiveColumns();
return rows.map((row) => {
const masked: Record<string, unknown> = {};
for (const [key, value] of Object.entries(row)) {
if (sensitive.includes(key.toLowerCase()) && typeof value === "string") {
masked[key] = maskValue(key.toLowerCase(), value);
} else {
masked[key] = value;
}
}
return masked;
});
}
function getSensitiveColumns(): string[] {
const envCols = process.env.SENSITIVE_COLUMNS;
if (envCols) {
return envCols.split(",").map((c) => c.trim().toLowerCase());
}
return ["email", "ssn", "social_security"];
}
function maskValue(columnName: string, value: string): string {
if (columnName === "email") {
const [local, domain] = value.split("@");
if (local && domain) {
return `${local[0]}${"*".repeat(Math.max(local.length - 1, 2))}@${domain}`;
}
}
if (columnName === "ssn" || columnName === "social_security") {
return `***-**-${value.slice(-4)}`;
}
return "***MASKED***";
}
The sensitive columns are configurable via the SENSITIVE_COLUMNS environment variable. This means you can adapt the masking to your database without changing code. The enforceRowLimit function handles three cases: no LIMIT clause (adds one), LIMIT higher than the max (reduces it), and LIMIT within bounds (leaves it alone).
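To make the three cases concrete, here is a small demonstration; the function body is copied verbatim from src/utils/sanitize.ts above so the snippet runs on its own:

```typescript
// Copy of enforceRowLimit from src/utils/sanitize.ts, repeated here so
// this demo is self-contained.
function enforceRowLimit(query: string, limit: number, maxLimit: number): string {
  const effectiveLimit = Math.min(limit, maxLimit);
  const trimmed = query.trim().replace(/;+$/, "");
  if (/\bLIMIT\s+\d+/i.test(trimmed)) {
    return trimmed.replace(
      /\bLIMIT\s+(\d+)/i,
      (_match, n) => `LIMIT ${Math.min(parseInt(n, 10), effectiveLimit)}`,
    );
  }
  return `${trimmed} LIMIT ${effectiveLimit}`;
}

// Case 1: no LIMIT clause, so one is appended
console.log(enforceRowLimit("SELECT * FROM users", 100, 1000));
// → SELECT * FROM users LIMIT 100

// Case 2: LIMIT above the cap, so it is reduced
console.log(enforceRowLimit("SELECT * FROM users LIMIT 5000", 100, 1000));
// → SELECT * FROM users LIMIT 100

// Case 3: LIMIT within bounds, left alone (trailing semicolon stripped)
console.log(enforceRowLimit("SELECT * FROM users LIMIT 50;", 100, 1000));
// → SELECT * FROM users LIMIT 50
```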
The masking happens at the MCP server layer, not in the database and not in the agent. This gives you a single enforcement point that works regardless of what query the agent writes. Never rely on the agent to avoid selecting sensitive columns.
For deeper context on why this kind of output filtering matters in agentic AI systems, see Secure Agentic AI Apps and API Security Best Practices.
Step 6: Error Handling and Structured Responses
When an MCP tool returns an error, the agent needs enough information to understand what went wrong and decide whether to retry, try a different approach, or give up. Stack traces are useless to agents. Structured error categories are not.
Define error types
File: src/utils/errors.ts
export type ErrorCategory =
| "validation_error"
| "auth_error"
| "query_error"
| "rate_limit_error"
| "timeout_error"
| "internal_error";
export interface StructuredError {
error: string;
category: ErrorCategory;
message: string;
retryable: boolean;
}
export class McpToolError extends Error {
constructor(
public readonly category: ErrorCategory,
message: string,
public readonly retryable: boolean = false,
) {
super(message);
this.name = "McpToolError";
}
toResponse() {
const structured: StructuredError = {
error: this.category,
category: this.category,
message: this.message,
retryable: this.retryable,
};
return {
isError: true as const,
content: [{ type: "text" as const, text: JSON.stringify(structured) }],
};
}
}
export function validationError(message: string) {
return new McpToolError("validation_error", message, false);
}
export function authError(message: string) {
return new McpToolError("auth_error", message, false);
}
export function queryError(message: string, retryable = false) {
return new McpToolError("query_error", message, retryable);
}
export function rateLimitError(retryAfterSeconds: number) {
return new McpToolError(
"rate_limit_error",
`Rate limit exceeded. Retry after ${retryAfterSeconds} seconds.`,
true,
);
}
export function timeoutError(message = "Request timed out") {
return new McpToolError("timeout_error", message, true);
}
The retryable flag tells the agent whether it makes sense to try again. Timeout and rate limit errors are retryable. Validation and auth errors are not — the agent needs to change its approach. This small detail significantly improves how agents reason about failures.
The toResponse() method returns the exact format MCP expects for error responses: { isError: true, content: [{ type: "text", text: "..." }] }. The JSON in the text field gives the agent structured data to parse and reason about.
Step 7: Testing
An MCP server without tests is an MCP server that will break in production. You need unit tests for individual tool handlers and integration tests for the middleware layer.
Unit tests for sanitization logic
File: tests/tools.test.ts
import { describe, it, expect, beforeEach } from "vitest";
import {
containsBlockedKeywords,
isSelectOnly,
enforceRowLimit,
maskSensitiveFields,
} from "../src/utils/sanitize.js";
describe("Query Sanitization", () => {
describe("containsBlockedKeywords", () => {
it("blocks DROP statements", () => {
expect(containsBlockedKeywords("DROP TABLE users")).toBe("DROP");
});
it("blocks DELETE statements", () => {
expect(containsBlockedKeywords("DELETE FROM users")).toBe("DELETE");
});
it("allows SELECT statements", () => {
expect(containsBlockedKeywords("SELECT * FROM users")).toBeNull();
});
it("is case insensitive", () => {
expect(containsBlockedKeywords("drop table users")).toBe("DROP");
});
});
describe("enforceRowLimit", () => {
it("adds LIMIT when none present", () => {
const result = enforceRowLimit("SELECT * FROM users", 100, 1000);
expect(result).toBe("SELECT * FROM users LIMIT 100");
});
it("reduces LIMIT when over max", () => {
const result = enforceRowLimit("SELECT * FROM users LIMIT 5000", 100, 1000);
expect(result).toBe("SELECT * FROM users LIMIT 100");
});
});
});
describe("Sensitive Column Masking", () => {
beforeEach(() => {
process.env.SENSITIVE_COLUMNS = "email,ssn";
});
it("masks email columns", () => {
const rows = [{ name: "Alice", email: "alice@example.com" }];
const masked = maskSensitiveFields(rows);
expect(masked[0].email).not.toBe("alice@example.com");
expect(masked[0].email).toMatch(/^a\*+@example\.com$/);
});
it("masks SSN columns", () => {
const rows = [{ name: "Alice", ssn: "123-45-6789" }];
const masked = maskSensitiveFields(rows);
expect(masked[0].ssn).toBe("***-**-6789");
});
it("does not mask non-sensitive columns", () => {
const rows = [{ name: "Alice", role: "Engineer" }];
const masked = maskSensitiveFields(rows);
expect(masked[0].name).toBe("Alice");
expect(masked[0].role).toBe("Engineer");
});
});
Authentication tests
File: tests/auth.test.ts
import { describe, it, expect, beforeEach } from "vitest";
import { authenticate } from "../src/middleware/auth.js";
import { McpToolError } from "../src/utils/errors.js";
describe("Authentication", () => {
beforeEach(() => {
process.env.API_KEYS = "test-key-1,test-key-2";
});
it("authenticates with a valid API key", () => {
const ctx = authenticate("test-key-1");
expect(ctx.apiKey).toBe("test-key-1");
expect(ctx.permissions).toBe("read");
});
it("rejects missing API key", () => {
expect(() => authenticate(undefined)).toThrow(McpToolError);
});
it("rejects invalid API key", () => {
expect(() => authenticate("wrong-key")).toThrow(McpToolError);
});
it("returns structured error response", () => {
try {
authenticate("wrong-key");
} catch (e) {
const response = (e as McpToolError).toResponse();
expect(response.isError).toBe(true);
const parsed = JSON.parse(response.content[0].text);
expect(parsed.category).toBe("auth_error");
expect(parsed.retryable).toBe(false);
}
});
});
The suites above exercise pure functions that require no database connection; the repository applies the same pattern to rate limiting and input validation. The auth tests verify both success and failure paths, including the structure of error responses.
Run the tests:
npm test
Run tests in watch mode during development with npm run test:watch. Vitest will re-run affected tests when you save a file.
Step 8: Containerize and Deploy
The final step is packaging the server for local development and testing with Docker. You will create a multi-stage Dockerfile, a Docker Compose configuration that includes PostgreSQL with seeded sample data, and a health check so you can verify the server works end-to-end before deploying.
Create the Dockerfile
File: Dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci --ignore-scripts
COPY tsconfig.json ./
COPY src/ ./src/
RUN npm run build
# Production stage
FROM node:20-alpine
RUN addgroup -g 1001 -S mcpuser && \
adduser -S mcpuser -u 1001
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci --omit=dev --ignore-scripts && npm cache clean --force
COPY --from=builder /app/dist ./dist
USER mcpuser
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD pgrep -f "node dist/server.js" > /dev/null || exit 1
CMD ["node", "dist/server.js"]
The multi-stage build keeps the image small by excluding TypeScript source, dev dependencies, and build tools. The health check verifies the Node.js process is running. For Streamable HTTP transport deployments, replace this with an HTTP endpoint check (e.g., curl -f http://localhost:3000/health). The non-root mcpuser follows container security best practices.
Create Docker Compose with PostgreSQL
File: docker-compose.yml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: mcp_enterprise
      POSTGRES_USER: mcp_user
      POSTGRES_PASSWORD: mcp_password
    ports:
      - "5432:5432"
    volumes:
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U mcp_user -d mcp_enterprise"]
      interval: 5s
      timeout: 5s
      retries: 5

  mcp-server:
    build: .
    environment:
      DATABASE_URL: postgres://mcp_user:mcp_password@postgres:5432/mcp_enterprise
      API_KEYS: dev-key-1,dev-key-2
      ALLOWED_TABLES: departments,users,projects
      SENSITIVE_COLUMNS: email,ssn
      ROW_LIMIT: "100"
      MAX_ROW_LIMIT: "1000"
      RATE_LIMIT_RPM: "60"
      MCP_TRANSPORT: stdio
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  pgdata:
Create the seed data
File: seed.sql
The seed data creates a realistic schema with departments, users, and projects — enough data to demonstrate queries, joins, filtering, and sensitive column masking.
CREATE TABLE IF NOT EXISTS departments (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  budget DECIMAL(12,2),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS users (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  email VARCHAR(255) NOT NULL,
  ssn VARCHAR(11),
  department_id INTEGER REFERENCES departments(id),
  role VARCHAR(50),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS projects (
  id SERIAL PRIMARY KEY,
  name VARCHAR(200) NOT NULL,
  description TEXT,
  department_id INTEGER REFERENCES departments(id),
  status VARCHAR(20) DEFAULT 'active',
  start_date DATE,
  end_date DATE
);
INSERT INTO departments (name, budget) VALUES
('Engineering', 2500000.00),
('Marketing', 800000.00),
('Sales', 1200000.00),
('Human Resources', 600000.00),
('Finance', 900000.00);
INSERT INTO users (name, email, ssn, department_id, role) VALUES
('Alice Chen', 'alice.chen@example.com', '123-45-6789', 1, 'Senior Engineer'),
('Bob Martinez', 'bob.martinez@example.com', '234-56-7890', 1, 'Staff Engineer'),
('Carol Williams', 'carol.williams@example.com', '345-67-8901', 1, 'Engineering Manager'),
('David Kim', 'david.kim@example.com', '456-78-9012', 2, 'Marketing Director'),
('Grace Lee', 'grace.lee@example.com', '789-01-2345', 3, 'Sales Manager'),
('James Taylor', 'james.taylor@example.com', '012-34-5678', 4, 'HR Director'),
('Leo Garcia', 'leo.garcia@example.com', '222-33-4444', 5, 'CFO');
INSERT INTO projects (name, description, department_id, status, start_date, end_date) VALUES
('Platform Migration', 'Migrate legacy monolith to microservices', 1, 'active', '2026-01-15', '2026-06-30'),
('API Gateway', 'Implement centralized API gateway', 1, 'active', '2026-02-01', '2026-04-30'),
('Brand Refresh', 'Complete brand identity redesign', 2, 'active', '2026-01-01', '2026-03-31'),
('Enterprise Sales Portal', 'Self-service portal for enterprise customers', 3, 'active', '2025-11-01', '2026-04-30'),
('Compliance Audit', 'Annual SOC 2 Type II audit preparation', 5, 'planning', '2026-04-01', '2026-07-31');
Create the environment example
File: .env.example
DATABASE_URL=postgres://mcp_user:mcp_password@localhost:5432/mcp_enterprise
API_KEYS=dev-key-1,dev-key-2
ALLOWED_TABLES=departments,users,projects
SENSITIVE_COLUMNS=email,ssn
ROW_LIMIT=100
MAX_ROW_LIMIT=1000
RATE_LIMIT_RPM=60
MCP_TRANSPORT=stdio
LOG_LEVEL=info
Run it
cp .env.example .env
docker compose up --build
Once both containers are healthy, configure Claude Desktop to connect locally via stdio. This configuration runs the MCP server as a local subprocess that connects to the Dockerized PostgreSQL:
{
  "mcpServers": {
    "enterprise-db": {
      "command": "node",
      "args": ["dist/server.js"],
      "cwd": "/path/to/mcp-enterprise-starter",
      "env": {
        "DATABASE_URL": "postgres://mcp_user:mcp_password@localhost:5432/mcp_enterprise",
        "API_KEYS": "dev-key-1",
        "ALLOWED_TABLES": "departments,users,projects",
        "SENSITIVE_COLUMNS": "email,ssn"
      }
    }
  }
}
Test the full workflow:
- Ask the agent: “What tables are available?” — it should call list_tables.
- Ask: “Show me the schema for the users table” — it should read the db://schema/users resource.
- Ask: “Find all users in the Engineering department” — it should call query_database with a JOIN and return masked email and SSN values.
- Ask: “Drop the users table” — the server should reject the query with a clear validation error.
If all four behave as expected, your MCP server is working correctly and ready for further development.
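The rejection in the fourth check can be reduced to a pure guard along these lines. The `isReadOnlyQuery` name is illustrative, and the repository's validation is more thorough; this sketch only shows the shape of the logic:

```typescript
// Illustrative read-only guard (an assumption, not the repo's exact logic).
// Rejects statement stacking, non-SELECT statements, and destructive keywords.
const FORBIDDEN =
  /\b(drop|delete|update|insert|alter|truncate|create|grant|revoke)\b/i;

function isReadOnlyQuery(sql: string): boolean {
  const stmt = sql.trim().replace(/;+\s*$/, ""); // strip trailing semicolons
  if (stmt.includes(";")) return false;          // no multi-statement input
  if (!/^select\b/i.test(stmt)) return false;    // must start with SELECT
  return !FORBIDDEN.test(stmt);                  // no destructive keywords
}

console.log(isReadOnlyQuery("SELECT * FROM users;"));        // true
console.log(isReadOnlyQuery("DROP TABLE users"));            // false
console.log(isReadOnlyQuery("SELECT 1; DELETE FROM users")); // false
```

A keyword denylist like this is a backstop, not a substitute for the parameterized queries and allow-listed tables the server already enforces.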
Common Setup Problems
Server starts but Claude Desktop does not list it
Check that your cwd path is correct in the MCP config and that node dist/server.js (or npx tsx src/server.ts) runs without errors when you execute it manually in a terminal. If the process exits immediately, the MCP client will silently drop the connection.
Protocol errors or garbled output
Something is writing to stdout. Check for stray console.log statements in your code. All logging must go to stderr. The MCP protocol owns stdout completely.
Database connection refused
If you are running PostgreSQL via Docker Compose but connecting from a local dev server (not inside Docker), make sure your DATABASE_URL points to localhost:5432, not postgres:5432. The postgres hostname only resolves inside the Docker network.
Authentication errors in stdio mode
For stdio transport, the API key comes from the env block in your MCP client config. Make sure API_KEYS in your environment matches the key you are passing. If API_KEYS is empty, no requests will authenticate.
Tests fail with import errors
Make sure "type": "module" is set in package.json and your tsconfig.json uses "module": "NodeNext". The MCP SDK uses ES modules, and mixing CommonJS with ESM causes import failures.
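The two settings in question look like this (excerpts only; your actual files contain more fields):

```jsonc
// package.json (excerpt)
{
  "type": "module"
}
```

```jsonc
// tsconfig.json (excerpt)
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext"
  }
}
```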
Wrap-Up
You now have an MCP server that goes well beyond hello-world demos. It connects to a real database with connection pooling, exposes safe query tools with Zod schemas, provides schema discovery resources, enforces API key authentication, validates all inputs, blocks destructive queries, masks sensitive columns, returns structured errors that agents can reason about, passes a test suite, and runs in Docker Compose. To take this to a full production deployment, you would add Streamable HTTP transport for remote access, OAuth 2.1 authorization for multi-user scenarios, monitoring and alerting, and a secrets manager for credential storage.
More importantly, you have a pattern. The same architecture — tools with validation, resources for discovery, auth middleware, structured errors, and safety guardrails — applies whether you are connecting agents to a database, an internal API, a ticket system, or a CI/CD pipeline. The mcp-enterprise-starter repository is designed to be forked and adapted.
What about remote deployments and multi-user auth? This tutorial uses stdio transport with API key authentication, which is appropriate for local single-user setups like Claude Desktop and VS Code. For remote MCP servers that multiple users connect to over the network, the MCP specification requires OAuth 2.1 authorization. The official MCP Authorization spec covers the required flows. API key auth works well for internal single-tenant servers and local development — OAuth 2.1 is the path for shared, multi-tenant production deployments.
For the conceptual overview of when and why to build custom MCP servers, and of how MCP fits into the broader AI coding agent landscape in 2026, read the companion post: Building Custom MCP Servers. If you would rather extend existing agents with MCP tools than build your own server, see Extend GitHub Copilot with MCP Tools and Set Up MCP-Powered Coding Agents.
Related Articles
How to Extend GitHub Copilot Coding Agent with MCP Tools
Learn how to extend GitHub Copilot coding agent with MCP tools, connect external context, validate tool use, and keep permissions safe.
How to Set Up MCP-Powered Coding Agents in GitHub Copilot and Xcode
Learn how to set up MCP-powered coding agents in GitHub Copilot and Xcode, connect tools, run real tasks, and review output safely.
Implement LLM API Rate Limiting and Cost Controls: Token Budgets, Per-Key Throttling, and Usage Dashboards
Build and deploy an LLM API proxy with per-key rate limiting, token budgets, exact-match caching, cost dashboards, and webhook alerting using TypeScript and SQLite.