A live MCP server that lets AI systems query Hannah's professional data directly.
Ask Hannah MCP
Most portfolios are static websites. Ask Hannah MCP is a live API that AI systems can call directly. Recruiters and hiring managers who connect it to Claude can query her background, projects, metrics, voice statements, and a role-focused hiring brief the same way they would query any other data source, through ten structured tools. That structured data is the canonical fact source for Hannah's professional content. The product demonstrates in its own structure what it claims about the builder: she understands how AI systems connect to each other, how to design tools for agentic workflows, and how to ship infrastructure that works.
The proof point
Ask Hannah MCP is registered as a live connector in Claude.ai. A hiring manager can connect it, open Claude, and query Hannah's background from structured data she controls: not a nav bar to click through, but a queryable layer inside tools they already use. That demonstrates agentic infrastructure instinct: designing tools that downstream models can call reliably.
The proof point is structural. The medium is the message.
The problem
The stakes are not abstract
- 10 live MCP tools, deployed to Railway
- Claude.ai connector, publicly registered
- 0 databases required, stateless by design
Live MCP
Use Ask Hannah MCP in Claude
- Open Claude.ai and sign in.
- Add a custom connector (in Claude's settings or connectors area — labels vary by version).
- Paste this MCP server URL when prompted:
- Start a new chat and ask Claude to use the Ask Hannah tools, for example profile, projects, metrics, the hiring brief, or resume generation from Hannah's structured data.
Optional: open /health in a new tab for raw JSON status. Claude must use the streamable /mcp URL above.
Process
How it was built
STEP 01 — TOOL SCHEMA
Tool schema design
Ten tools, each scoped to a single job. Profile, voice, projects, metrics, skills, FAQ answers, hiring brief, and generation. Schema clarity was the first decision because downstream AI synthesis is only as good as the structure it receives. Stack: TypeScript, Zod, MCP SDK
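The single-job tool pattern can be sketched in plain TypeScript. This is an illustrative stand-in, not the server's actual code: the real build uses Zod and the MCP SDK, while here a hand-rolled `validate` function plays Zod's `parse()` role, and the tool name and fields are hypothetical.

```typescript
// Illustrative sketch of a registry of single-purpose tools.
type ToolDef = {
  name: string;
  description: string;
  // validate returns the typed input or throws, mimicking Zod's parse()
  validate: (input: unknown) => Record<string, unknown>;
  handler: (input: Record<string, unknown>) => unknown;
};

const tools = new Map<string, ToolDef>();

function registerTool(def: ToolDef): void {
  if (tools.has(def.name)) throw new Error(`duplicate tool: ${def.name}`);
  tools.set(def.name, def);
}

// One tool, one job: return profile facts. Name and fields are hypothetical.
registerTool({
  name: "hannah_profile",
  description: "Structured profile facts (canonical source).",
  validate: (input) => {
    const obj = (input ?? {}) as Record<string, unknown>;
    if (obj.section !== undefined && typeof obj.section !== "string") {
      throw new Error("section must be a string");
    }
    return obj;
  },
  handler: () => ({ name: "Hannah", focus: "AI product" }),
});

// Dispatch always validates before the handler runs.
function callTool(name: string, input: unknown): unknown {
  const def = tools.get(name);
  if (!def) throw new Error(`unknown tool: ${name}`);
  return def.handler(def.validate(input));
}
```

The point of the pattern: schema validation sits in front of every handler, so downstream synthesis only ever sees well-formed structure.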
Pivots
What changed and why
The first working build used stdio transport because that is the simplest MCP pattern and most documentation assumes local use. It compiled, passed local tests, and produced correct tool outputs—then failed to register as a Claude.ai connector. Claude.ai expects a public HTTP MCP endpoint. The entire transport layer was rebuilt around Express and streamable HTTP before the server could go public.
When building for an external platform's integration spec, read the integration spec first, not the SDK quickstart. They are not the same document.
The first version of hannah_generate_resume used a loosely structured prompt; outputs were good but inconsistent—section ordering drifted, metric phrasing varied. The fix enforces a strict contract: model returns document body only from verified data; empty responses and API failures map to explicit error codes with retry guidance.
Generation tools in production need a contract, not a suggestion. The model does what the prompt allows.
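The contract idea can be sketched as a small result type. The error codes, messages, and function shape below are hypothetical illustrations of the approach, not the server's actual API.

```typescript
// Sketch: every generation outcome maps to an explicit, typed result.
type GenResult =
  | { ok: true; body: string }
  | { ok: false; code: "EMPTY_RESPONSE" | "API_FAILURE"; retry: string };

function enforceContract(raw: string | null, apiError?: Error): GenResult {
  if (apiError) {
    // API failures surface as an explicit code with retry guidance,
    // never as a silent empty document.
    return { ok: false, code: "API_FAILURE", retry: "Retry with backoff; check model configuration." };
  }
  if (!raw || raw.trim().length === 0) {
    return { ok: false, code: "EMPTY_RESPONSE", retry: "Retry once; if it repeats, report upstream." };
  }
  // Contract satisfied: the model returned document body only.
  return { ok: true, body: raw.trim() };
}
```

A caller can then branch on `ok` and `code` instead of guessing why output is missing or malformed.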
Early design considered a tool that would score Hannah's fit against a job description in real time. It was cut: a fit-scoring tool that can return a low score is a liability in a job search context. Resume and cover letter generation already handle tailoring more usefully.
Scope is a product decision. Not every technically interesting feature belongs in the product.
Once trust and structure were solid, conversion was the bottleneck. The hiring brief gained compact summary mode, role-specific CTA wording, and explicit contact fallback order: Calendly first, then LinkedIn, then email. Outbound links gained consistent UTM tagging so MCP traffic is attributable.
A portfolio MCP is still a product surface. If the next step is fuzzy, the artifact does not ship its full value.
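The conversion plumbing described above reduces to two small pieces: a fixed contact fallback order and consistent UTM tagging on outbound links. A minimal sketch, assuming illustrative UTM parameter values (the real tag scheme is not specified here):

```typescript
// Explicit fallback order: Calendly first, then LinkedIn, then email.
const CONTACT_FALLBACK = ["calendly", "linkedin", "email"] as const;

// Tag every outbound link so MCP-originated traffic is attributable.
// The utm_* values are illustrative, not the server's actual scheme.
function tagOutboundLink(url: string, tool: string): string {
  const u = new URL(url);
  u.searchParams.set("utm_source", "mcp");
  u.searchParams.set("utm_medium", "connector");
  u.searchParams.set("utm_campaign", tool);
  return u.toString();
}
```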
What shipped
Every layer, production-ready
Ten structured tools
Profile, project list and detail, metrics (with trust metadata), skills, first-person voice, FAQ by topic, recruiter hiring brief, resume and cover letter generation—outputs as Zod-validated document JSON with fact verification and ATS-oriented plain modes.
Generation pipeline (Phases 0–4)
Configurable models, retries, privacy-safe stderr telemetry, and an optional POST /mcp rate limit. Long job descriptions are reduced to a validated JOB SIGNALS JSON extract; the model returns machine-validated JSON that renders deterministically to Markdown or ATS plain text; corpus-backed fact drift raises explicit errors; the resume header comes from profile-owned data.
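The fact-drift check is the pipeline's trust backstop: a claim in generated output that is not in the corpus fails loudly. A minimal sketch, with a hypothetical corpus and return shape:

```typescript
// Illustrative compiled-in fact corpus (the real one lives in hannah-data.ts).
const FACT_CORPUS = new Set(["10 MCP tools", "0 databases", "deployed on Railway"]);

// Any claim not backed by the corpus is reported as drift, not shipped.
function checkFactDrift(claims: string[]): { ok: boolean; drifted: string[] } {
  const drifted = claims.filter((c) => !FACT_CORPUS.has(c));
  return { ok: drifted.length === 0, drifted };
}
```

In the pipeline, a non-empty `drifted` list would map to an explicit error rather than a silently altered fact.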
HTTP MCP and Railway
Streamable Express transport for Claude.ai, Zod on all inputs, /health and diagnostics, auto-deploy on push—stateless, no database, facts in hannah-data.ts at compile time.
Modular codebase and tests
Handlers in src/tool-handlers, shared logic in src/lib, npm test for regressions, sample JSON shapes documented in-repo.
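The regression tests follow a simple pattern: call a handler, assert on the documented JSON shape. The handler and field names below are hypothetical stand-ins for the modules in src/tool-handlers:

```typescript
// Stand-in for a real handler module: returns metrics with trust metadata.
function metricsHandler(): { metrics: { label: string; value: string; verified: boolean }[] } {
  return { metrics: [{ label: "tools", value: "10", verified: true }] };
}

// Regression-style check: output must match the documented sample shape.
function assertMetricsShape(): void {
  const out = metricsHandler();
  for (const m of out.metrics) {
    if (typeof m.label !== "string" || typeof m.value !== "string") {
      throw new Error("metric shape drifted from documented JSON");
    }
    if (!m.verified) throw new Error("unverified metric in output");
  }
}
```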
What this demonstrates
For every audience
Built a live MCP server, deployed it to production, registered it as a Claude.ai connector—functional proof she builds for the agentic AI layer, not just chat UIs.
Strict generation contracts, Zod-validated document JSON, deterministic rendering, corpus-backed fact verification, ATS modes, privacy-safe telemetry—without logging job posting bodies.
Closed the static-portfolio gap for AI PM roles; cut liability features; hiring brief and conversion path so screening does not dead-end; scoped for hiring managers already inside Claude.
Field names and return shapes are design decisions: profile facts versus first-person voice shape how synthesized output reads to humans.
A job-search artifact and portfolio proof point: value is that using it demonstrates the claims about the builder—not commercial revenue.
The honest summary
Three ways to understand this work
Technical understanding
Product understanding
Design understanding
Any hiring manager with a Claude.ai account can connect this server right now and ask it anything. It will answer from structured data Hannah controls. That is a different kind of leave-behind.