
A live MCP server that lets AI systems query Hannah's professional data directly.

Ask Hannah MCP

Most portfolios are static websites. Ask Hannah MCP is a live API that AI systems can call directly. Recruiters and hiring managers who connect it to Claude can query her background, projects, metrics, voice statements, and a role-focused hiring brief the same way they would query any other data source. The product demonstrates in its own structure what it claims about the builder: she understands how AI systems connect to each other, how to design tools for agentic workflows, and how to ship infrastructure that works. It exposes ten structured tools, and the structured data in this server is the canonical fact source for Hannah's professional content.

Product Lead, API Designer, Full-Stack Implementation · April 2026
mcp · agentic-ai · infrastructure · developer-tooling
Status: Live

The proof point

Ask Hannah MCP is registered as a live connector in Claude.ai. A hiring manager can connect it, open Claude, and query Hannah's background from structured data she controls: not a nav bar to browse, but a queryable layer inside tools they already use. That demonstrates agentic infrastructure instinct: she can ship tools that downstream models call reliably.

The proof point is structural. The medium is the message.

The problem

The stakes are not abstract

10

live MCP tools

Deployed to Railway

1

Claude.ai connector

Publicly registered

0

databases required

Stateless by design

Live MCP

Use Ask Hannah MCP in Claude

  1. Open Claude.ai and sign in.
  2. Add a custom connector (in Claude's settings or connectors area — labels vary by version).
  3. Paste this MCP server URL when prompted:
https://ask-hannah-mcp-production.up.railway.app/mcp

Start a new chat and ask Claude to use the Ask Hannah tools — for example the profile, projects, metrics, or hiring brief tools, or resume and cover letter generation from her structured data.

Optional: open /health in a new tab for raw JSON status. Claude must use the streamable /mcp URL above.

Process

How it was built

STEP 01 — TOOL SCHEMA

Tool schema design

Ten tools, each scoped to a single job. Profile, voice, projects, metrics, skills, FAQ answers, hiring brief, and generation. Schema clarity was the first decision because downstream AI synthesis is only as good as the structure it receives. Stack: TypeScript, Zod, MCP SDK

Pivots

What changed and why

The first working build used stdio transport because that is the simplest MCP pattern and most documentation assumes local use. It compiled, passed local tests, and produced correct tool outputs—then failed to register as a Claude.ai connector. Claude.ai expects a public HTTP MCP endpoint. The entire transport layer was rebuilt around Express and streamable HTTP before the server could go public.

When building for an external platform's integration spec, read the integration spec first, not the SDK quickstart. They are not the same document.
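The shape of that fix can be sketched with Node's built-in http module standing in for the project's actual Express wiring. The point is structural: Claude.ai needs a public POST endpoint speaking JSON-RPC, which a stdio-only build never exposes. The routing logic below is a simplified stand-in, not the real streamable transport:

```typescript
import * as http from "node:http";

// Simplified JSON-RPC dispatch standing in for the real MCP transport.
function routeMcpRequest(body: { jsonrpc: string; id: number; method: string }) {
  if (body.method === "ping") {
    return { jsonrpc: "2.0", id: body.id, result: {} };
  }
  return { jsonrpc: "2.0", id: body.id, error: { code: -32601, message: "Method not found" } };
}

// A public POST /mcp endpoint: the part a stdio build never has.
const server = http.createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/mcp") {
    res.writeHead(404).end();
    return;
  }
  let data = "";
  req.on("data", (chunk) => (data += chunk));
  req.on("end", () => {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(routeMcpRequest(JSON.parse(data))));
  });
});
// server.listen(process.env.PORT) is what would make it publicly reachable.
```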

The first version of hannah_generate_resume used a loosely structured prompt; outputs were good but inconsistent—section ordering drifted, metric phrasing varied. The fix enforces a strict contract: model returns document body only from verified data; empty responses and API failures map to explicit error codes with retry guidance.

Generation tools in production need a contract, not a suggestion. The model does what the prompt allows.

Early design considered a tool that would score Hannah's fit against a job description in real time. It was cut: a fit-scoring tool that can return a low score is a liability in a job search context. Resume and cover letter generation already handle tailoring more usefully.

Scope is a product decision. Not every technically interesting feature belongs in the product.

Once trust and structure were solid, conversion was the bottleneck. The hiring brief gained compact summary mode, role-specific CTA wording, and explicit contact fallback order: Calendly first, then LinkedIn, then email. Outbound links gained consistent UTM tagging so MCP traffic is attributable.

A portfolio MCP is still a product surface. If the next step is fuzzy, the artifact does not ship its full value.
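Both conversion mechanics are straightforward to sketch. The UTM values and channel names below are assumptions for illustration, not the live server's actual scheme:

```typescript
// Tag an outbound link so MCP-originated traffic is attributable.
function tagOutboundLink(rawUrl: string, tool: string): string {
  const url = new URL(rawUrl);
  url.searchParams.set("utm_source", "mcp"); // hypothetical values
  url.searchParams.set("utm_medium", "connector");
  url.searchParams.set("utm_campaign", tool);
  return url.toString();
}

// Explicit contact fallback order: Calendly first, then LinkedIn, then email.
type Channel = "calendly" | "linkedin" | "email";
function nextContact(available: Set<Channel>): Channel | null {
  for (const channel of ["calendly", "linkedin", "email"] as const) {
    if (available.has(channel)) return channel;
  }
  return null;
}
```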

What shipped

Every layer, production-ready

Ten structured tools

Profile, project list and detail, metrics (with trust metadata), skills, first-person voice, FAQ by topic, recruiter hiring brief, resume and cover letter generation—outputs as Zod-validated document JSON with fact verification and ATS-oriented plain modes.

Generation pipeline (Phases 0–4)

Configurable models, bounded retries, privacy-safe stderr telemetry, and an optional POST /mcp rate limit; long job descriptions get a validated JSON extract into a compact JOB SIGNALS block; outputs are machine-validated JSON first, then deterministic Markdown or ATS plain text; corpus-backed fact-drift checks raise errors; the resume header is owned by profile data.
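The fact-drift idea (Phase 3) reduces to a small check: pull numeric claims out of model prose and fail closed on anything the corpus cannot vouch for. The regex and corpus representation here are simplified stand-ins for the real verification layer:

```typescript
// Return every dollar amount or percentage in the prose that is
// absent from the verified corpus; a non-empty result should fail closed.
function findFactDrift(prose: string, corpusNumbers: Set<string>): string[] {
  const numbers = prose.match(/\$?\d[\d,]*(?:\.\d+)?%?/g) ?? [];
  return numbers.filter((n) => !corpusNumbers.has(n));
}
```

In this sketch, a resume bullet claiming "$2,000" when only "17%" is in the corpus would be flagged and the output rejected before the user sees it.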

HTTP MCP and Railway

Streamable Express transport for Claude.ai, Zod on all inputs, /health and diagnostics, auto-deploy on push—stateless, no database, facts in hannah-data.ts at compile time.

Modular codebase and tests

Handlers in src/tool-handlers, shared logic in src/lib, npm test for regressions, sample JSON shapes documented in-repo.

MCP SDK (@modelcontextprotocol/sdk) — Claude.ai connector compatibility
HTTP streamable transport (Express) — required for public registration; stdio only locally
Zod — schema enforcement on all tool inputs before handler execution
Anthropic Messages API — document JSON, Phase 3 fact checks, Phase 4 atsMode; optional JD extract
Node.js 20
TypeScript
hannah-data.ts (compile-time data layer)
Railway (auto-deploy, HTTPS)
src/lib, src/tool-handlers

What this demonstrates

For every audience

Built a live MCP server, deployed it to production, registered it as a Claude.ai connector—functional proof she builds for the agentic AI layer, not just chat UIs.

Strict generation contracts, Zod-validated document JSON, deterministic rendering, corpus-backed fact verification, ATS modes, privacy-safe telemetry—without logging job posting bodies.

Closed the static-portfolio gap for AI PM roles; cut liability features; added a hiring brief and conversion path so screening does not dead-end; scoped for hiring managers already inside Claude.

Field names and return shapes are design decisions: profile facts versus first-person voice shape how synthesized output reads to humans.

A job-search artifact and portfolio proof point: its value is that using it demonstrates the claims about the builder, not commercial revenue.

The honest summary

Three ways to understand this work

TECHNICAL UNDERSTANDING

Technical understanding

Understands MCP transport architecture well enough to identify and fix the stdio versus public HTTP mismatch that blocked registration. Applied strict generation contracts and standardized error handling to resume and cover letter tools.

Phase 0 generation operations add env-configurable models, bounded retries on transient API failures, privacy-safe stderr telemetry (lengths and timings, not job description text), and an optional /mcp rate limit behind Railway-friendly proxy trust settings. Phase 1 adds a validated JSON extraction pass for long job descriptions so the main generator sees compact JOB SIGNALS instead of raw posting walls, with excerpt fallback and separate jd_extract logs.

Phase 2 makes resume and cover letter outputs machine-validated JSON first, then deterministic Markdown, with the resume header and contact lines owned by the server from hannah-data so the model cannot rewrite identity fields. Phase 3 adds a deterministic verification layer on model-owned prose and bullets so invented dollar amounts, percentages, banned references, and the wrong experience-year framing fail closed before the user sees output. Phase 4 adds ATS-oriented plain-text modes with separate prompt rules and structured metadata so the same JSON can be emitted as human Markdown or parser-friendly text without changing the fact floor.

Refactored into modules with tests so changes do not regress silently. Stateless by design was a conscious tradeoff, not a gap.
PRODUCT UNDERSTANDING

Product understanding

Scoped out a fit-scoring tool that would have been technically interesting but created product liability in a job search context. Made the voice tool distinct from the profile tool because the downstream synthesis use case for each is different. Added a hiring brief and conversion path so screening does not dead-end after reading. Designed for a specific user journey: hiring manager opens Claude, connects the server, asks questions, gets grounded answers, moves to outreach.
DESIGN UNDERSTANDING

Design understanding

The UX of an MCP tool is its schema. Field names, return shape, and the structure of voice answers are all design decisions that affect how the synthesized output reads to a human. "She has 17 years of experience" is a third-person summary. "I did not come to AI from a whiteboard. I came from the floor." is a voice statement. Same fact, different tool, different output. That distinction was designed, not accidental.
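The same-fact, different-tool distinction is concrete enough to sketch. The values below are illustrative stand-ins for the server's data layer:

```typescript
// One underlying fact, two tool-shaped answers.
const fact = { yearsExperience: 17 };

// Profile tool: third-person summary, templated from structured facts.
function profileAnswer(): string {
  return `She has ${fact.yearsExperience} years of experience.`;
}

// Voice tool: first-person statements stored verbatim, never templated,
// so synthesized output can quote rather than paraphrase.
function voiceAnswer(): string {
  return "I did not come to AI from a whiteboard. I came from the floor.";
}
```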
Any hiring manager with a Claude.ai account can connect this server right now and ask it anything. It will answer from structured data Hannah controls. That is a different kind of leave-behind.