Case Study: Production system

Career-Ops: How I Built My Own AI Job Search Tool

A multi-agent system that scores offers across 10 dimensions, acts as an AI resume builder for each listing, and automates job applications with HITL. 631 evaluations, 12 modes, one person.

Santiago Fernández de Valderrama
Mar 17, 2026 · 18 min read
Career Pipeline: tracker dashboard with 516 evaluated offers, scores 4.0-4.5, companies like Datadog, Langfuse, OpenAI, LangChain
Live production system — actively in use

631 evaluations · 302 applications · 12 modes · 10 dimensions · 680 URLs deduped

A multi-agent system built with Claude Code that automates the job search: scores offers across 10 dimensions (A-F), generates ATS-optimized PDFs per offer, fills forms via Playwright, and batch-processes with parallel workers. HITL design: AI analyzes, I decide.

The irony: the system demonstrates the exact competencies the target roles require — multi-agent architecture, automation, LLMOps, and HITL design. And no, it is not gaming the system: Career-Ops automates analysis, not decisions. I read every report and review every PDF before sending.

I no longer apply to jobs. A multi-agent system evaluates them, generates my personalized resume, and prepares the application. I review and decide. Week one of my AI job search was all manual: read JDs, map skills, fill forms. By week two I had stopped applying — I was building Career-Ops.

631 evaluations later, Career-Ops makes more filtering decisions than I do. An AI-powered job search tool built as a multi-agent system: reads job descriptions, scores them across 10 dimensions, generates AI resumes per listing, and automates job applications. I review and decide. The AI does the analytical work.

The Problem

Searching for senior AI engineering roles is a full-time job in itself. Each offer requires reading the JD, mapping your skills against requirements, adapting the CV, writing personalized responses, and filling 15-field forms. Multiply that by 10 offers per day.

1. Repetitive reading. 70% of offers are a poor fit. You find out after reading 800 words of JD.
2. Generic CVs. A static PDF cannot highlight the proof points relevant to each specific offer.
3. Manual forms. Every platform asks the same questions in different formats. Copy-paste 15 times per application.
4. No tracking. Without a system, you forget where you applied. Duplicate effort or lose follow-up entirely.
5. Zero feedback. Apply, wait, and never know if the problem was fit, the CV, or timing.
6. Global market. The AI sector moves internationally. Local referrals do not scale when you apply to companies across 6 different countries.

The work is not hard. It is repetitive. And repetitive work gets automated.

The 12 Modes — Multi-Agent System

Career-Ops is not a script or an auto-apply bot. It is a multi-agent system with 12 operational modes, each a Claude Code skill file with its own context, rules, and tools. An agent that reasons about the problem domain and executes the right action.

Why Modes, Not One Prompt

- Precise context. Each mode loads only the information it needs. auto-pipeline skips contact rules. apply skips scoring logic.
- Testability. One mode gets tested in isolation. Changing PDF logic never touches evaluation.
- Independent evolution. Adding a new mode never breaks existing ones. Training mode shipped 3 weeks after first deploy.

- auto-pipeline: Full pipeline. Extract JD, evaluate A-F, generate report, PDF, and tracker entry.
- oferta: Single-offer evaluation with 6 blocks: summary, CV match, level, compensation, personalization, interview.
- ofertas: Multi-offer comparison and ranking.
- pdf: ATS-optimized PDF personalized per offer with proof points and keywords.
- pipeline: Batch URL processing from inbox.
- scan: Offer discovery. Navigates job boards and careers pages of target companies. Many offers never appear on aggregators.
- batch: Parallel processing with conductor + workers. 122 simultaneous URLs in queue.
- apply: Interactive form-filling with Playwright. Reads the page, retrieves cached evaluation, generates responses.
- contacto: LinkedIn outreach helper.
- deep: Deep company research.
- tracker: Application status dashboard.
- training: Evaluates courses and certifications against the North Star.
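Each mode is a Claude Code skill file with its own context, rules, and tools. As an illustration only (the structure and contents here are my assumption, not the actual skill files), one such file might look like:

```markdown
# Mode: pdf

## Context
- cv-master.md (proof points)
- extracted JD for the current offer

## Rules
- Inject 15-20 JD keywords; reformulate, never fabricate.
- Single-column, ATS-safe layout with self-hosted fonts.

## Tools
- generate-pdf (Node.js script, Puppeteer renderer)
```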

Scan mode: background agent searching AI/LLM offers on DailyRemote with automatic dedup, reading pipeline.md and scan-history.tsv for dedup

10-Dimension Scoring

Every offer runs through an evaluation framework with 10 weighted dimensions. The output: a numeric score (1-5) and an A-F grade. Not all dimensions carry equal weight — Role Match and Skills Alignment are gate-pass: if they fail, the final score drops.

| Dimension | What It Measures | Weight |
|---|---|---|
| Role Match | Alignment between requirements and CV proof points | Gate-pass |
| Skills Alignment | Tech stack overlap | Gate-pass |
| Seniority | Stretch level and negotiability | High |
| Compensation | Market rate vs target | High |
| Geographic | Remote/hybrid/onsite feasibility | Medium |
| Company Stage | Startup/growth/enterprise fit | Medium |
| Product-Market Fit | Problem domain resonance | Medium |
| Growth Trajectory | Career ladder visibility | Medium |
| Interview Likelihood | Callback probability | High |
| Timeline | Closing speed and hiring urgency | Low |
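As a minimal sketch of how gate-pass weighting could work (the numeric weights, threshold, and cap are illustrative assumptions; only the dimension names, the grade bands, and the gate-pass idea come from the article):

```javascript
// Illustrative gate-pass scorer. Weights, threshold, and cap are
// assumptions for the sketch, not Career-Ops' real values.
const WEIGHTS = {
  roleMatch:           { weight: 3,   gate: true },
  skillsAlignment:     { weight: 3,   gate: true },
  seniority:           { weight: 2,   gate: false },
  compensation:        { weight: 2,   gate: false },
  geographic:          { weight: 1,   gate: false },
  companyStage:        { weight: 1,   gate: false },
  productMarketFit:    { weight: 1,   gate: false },
  growthTrajectory:    { weight: 1,   gate: false },
  interviewLikelihood: { weight: 2,   gate: false },
  timeline:            { weight: 0.5, gate: false },
};

const GATE_THRESHOLD = 3.0; // a gate dimension under this fails the gate
const GATE_CAP = 2.9;       // a failed gate drags the final score down

function score(dims) {
  let total = 0, weightSum = 0, gateFailed = false;
  for (const [name, value] of Object.entries(dims)) {
    const { weight, gate } = WEIGHTS[name];
    total += value * weight;
    weightSum += weight;
    if (gate && value < GATE_THRESHOLD) gateFailed = true;
  }
  const weighted = total / weightSum;
  return gateFailed ? Math.min(weighted, GATE_CAP) : weighted;
}

// Grade bands follow the score distribution below; the D/F split
// at 2.0 is my assumption.
function grade(s) {
  return s >= 4.5 ? "A" : s >= 4.0 ? "B" : s >= 3.0 ? "C" : s >= 2.0 ? "D" : "F";
}
```

This captures the stated behavior: a strong offer with one failed gate dimension can never score above the cap, no matter how well the other dimensions rate.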
Real evaluation: Datadog Staff AI Engineer (MCP Services) — score 4.55/5, detected archetype, structured role summary
Block B) CV Match: each JD requirement mapped against real CV proof points, with strength rating (Strong/Very Strong/Moderate)
CV Match (cont.) + Gaps: the system identifies gaps and proposes mitigation with severity
Block C) Level and Strategy: seniority detection + honest positioning plan

Score Distribution

- Score >= 4.5 (A): 21
- Score 4.0-4.4 (B): 52
- Score 3.0-3.9 (C): 71
- Score < 3.0 (D-F): 51

74% of evaluated offers score below 4.0. Without the system, I would have spent hours reading JDs that never fit.

The Pipeline

auto-pipeline is the flagship mode. A URL goes in, and out comes an evaluation report, a personalized PDF, and a tracker entry. Zero manual intervention until final review.

1. Extract JD. Playwright navigates to the URL, extracts structured content from the offer.
2. Evaluate 10D. Claude reads JD + CV + portfolio and generates scoring across all 10 dimensions.
3. Generate report. Markdown with 6 blocks: executive summary, CV match, level, compensation, personalization, and interview probability.
4. Generate PDF. HTML template + keyword injection + adaptive framing. Puppeteer renders to PDF.
5. Register tracker. TSV with company, role, score, grade, URL. Auto-merge via Node.js script.
6. Dedup. Checks scan-history.tsv (680 URLs) and applications.md. Zero re-evaluations.
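The dedup check (exact URL match plus a normalized company+role key) can be sketched like this; the normalization rules are my illustration, not the system's actual code:

```javascript
// Sketch of the dedup step: exact URL match against scan-history.tsv
// plus a normalized company+role key against applications.md.
// Normalization rules are illustrative assumptions.
function normalizeUrl(url) {
  const u = new URL(url);
  u.search = ""; // strip tracking parameters
  u.hash = "";
  return u.toString().replace(/\/$/, "").toLowerCase();
}

function companyRoleKey(company, role) {
  const norm = (s) => s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
  return `${norm(company)}::${norm(role)}`;
}

// seenUrls: URLs from scan-history.tsv; seenKeys: entries from applications.md
function isDuplicate(offer, seenUrls, seenKeys) {
  return seenUrls.has(normalizeUrl(offer.url)) ||
         seenKeys.has(companyRoleKey(offer.company, offer.role));
}
```

The two-level check matters: the same role often appears under different URLs on different boards, so URL matching alone would miss duplicates that the company+role key catches.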

Demo: auto-pipeline evaluating the Datadog Staff AI Engineer offer in real time

Batch Processing

For high volume, batch mode launches a conductor that orchestrates parallel workers. Each worker is an independent Claude Code process with 200K context. The conductor manages the queue, tracks progress, and merges results.

122 URLs in queue · 200K context/worker · 2x retries per failure

Fault-tolerant: a worker failure never blocks the rest. Lock file prevents double execution. Batch is resumable — reads state and skips completed items.
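The conductor loop can be sketched as a shared queue with a per-URL retry budget and a completed set that makes the batch resumable. The 2-retry budget matches the article; everything else (function names, concurrency default) is illustrative:

```javascript
// Sketch of the conductor: shared queue, per-URL retry budget,
// and a completed set for resumability. Illustrative, not the
// system's actual orchestration code.
async function runBatch(urls, worker, { workers = 4, retries = 2, completed = new Set() } = {}) {
  const queue = urls.filter((u) => !completed.has(u)); // resume: skip finished items
  const failures = new Map();

  async function workerLoop() {
    while (queue.length > 0) {
      const url = queue.shift();
      try {
        await worker(url);
        completed.add(url);
      } catch (err) {
        const attempts = (failures.get(url) || 0) + 1;
        failures.set(url, attempts);
        if (attempts <= retries) queue.push(url); // re-queue within the budget
      }
    }
  }

  // Each loop catches its own errors, so one failing worker
  // iteration never blocks the rest of the batch.
  await Promise.all(Array.from({ length: workers }, workerLoop));
  return { completed, failures };
}
```

Passing the previous run's `completed` set back in is what makes the batch resumable: already-processed URLs are filtered out before any worker starts.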

AI Resume Builder — Personalized

A generic CV loses. Career-Ops works as an AI resume builder that generates a different ATS-optimized resume for each offer, injecting JD keywords and reordering experience by relevance. Not a template: a resume built from real CV proof points.

1. Extract 15-20 keywords from the JD. Keywords land in the summary, first bullet of each role, and skills section.
2. Detect language. English JD generates English CV. Spanish JD generates Spanish CV.
3. Detect region. US company generates Letter format. Europe generates A4.
4. Detect archetype. 6 North Star archetypes. The summary shifts based on the profile.
5. Select projects. Top 3-4 by relevance. Jacobo for agent roles. Business OS for ERP/automation.
6. Reorder bullets. The most relevant experience moves up. The rest moves down — nothing disappears.
7. Render PDF. Puppeteer converts HTML to PDF. Self-hosted fonts, single-column ATS-safe.
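Step 6 (reorder bullets by relevance) can be sketched with a naive keyword-overlap score; this is an illustration, not the system's actual relevance model:

```javascript
// Sketch of bullet reordering: rank each bullet by how many JD
// keywords it contains. Naive substring overlap, for illustration.
function reorderBullets(bullets, jdKeywords) {
  const kws = jdKeywords.map((k) => k.toLowerCase());
  const relevance = (bullet) => {
    const text = bullet.toLowerCase();
    return kws.filter((k) => text.includes(k)).length;
  };
  // Stable sort: equally relevant bullets keep their original order,
  // and nothing is dropped; less relevant bullets just move down.
  return [...bullets].sort((a, b) => relevance(b) - relevance(a));
}
```

The stable sort is the point: as the article says, nothing disappears from the CV; less relevant experience just moves down.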

Personalized, ATS-optimized CV for Wave: rewritten summary, competencies adapted to Voice AI + Multi-Agent, bullets reordered by relevance
Personalized cover letter for Wave: gradient header, Jacobo as voice + WhatsApp proof point, links to case studies and dashboard

6 Archetypes

| Archetype | Primary Proof Point |
|---|---|
| AI Platform / LLMOps | Self-Healing Chatbot (71 evals, closed-loop) |
| Agentic Workflows | Jacobo (4 agents, 80h/mo automated) |
| Technical AI PM | Business OS (2,100 fields, 50 automations) |
| AI Solutions Architect | pSEO (4,730 pages, 10.8x traffic) |
| AI FDE | Jacobo (sold, running in production) |
| AI Transformation Lead | Exit 2025 (16 years, buyer kept all systems) |
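Archetype detection (step 4 of the resume pipeline) could look something like this; the signal keyword lists are my illustration, and only the six archetype names come from the table:

```javascript
// Sketch of archetype detection: count signal-keyword hits per
// archetype in the JD text and pick the best match. Signal lists
// are illustrative assumptions.
const ARCHETYPE_SIGNALS = {
  "AI Platform / LLMOps":   ["llmops", "evals", "observability", "inference platform"],
  "Agentic Workflows":      ["agent", "orchestration", "workflow"],
  "Technical AI PM":        ["product manager", "roadmap", "stakeholder"],
  "AI Solutions Architect": ["solutions architect", "pre-sales", "integration"],
  "AI FDE":                 ["forward deployed", "customer deployment"],
  "AI Transformation Lead": ["transformation", "change management", "adoption"],
};

function detectArchetype(jdText) {
  const text = jdText.toLowerCase();
  let best = null, bestHits = 0;
  for (const [name, signals] of Object.entries(ARCHETYPE_SIGNALS)) {
    const hits = signals.filter((s) => text.includes(s)).length;
    if (hits > bestHits) { best = name; bestHits = hits; }
  }
  return best; // null when no signal matches
}
```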

Same CV. 6 different framings. All real — keywords get reformulated, never fabricated.

Before and After

| Dimension | Manual | Career-Ops |
|---|---|---|
| Evaluation | Read JD, mental mapping | 10D automated scoring, A-F grade |
| CV | Generic PDF | Personalized PDF, ATS-optimized |
| Application | Manual form | Playwright auto-fill |
| Tracking | Spreadsheet or nothing | TSV + automated dedup |
| Discovery | LinkedIn alerts | Scanner: job boards + target company careers pages |
| Batch | One at a time | 122 URLs in parallel |
| Dedup | Human memory | 680 URLs deduplicated |

Results

The system has been in production for 2 months. 631 reports across 516 unique offers (some re-evaluated after criteria changes). Live numbers — the tracker grows every day.

631 reports generated · 68 applications sent · 354 PDFs generated · 0 re-evaluations

Stack

- Claude Code: LLM agent for reasoning, evaluation, content generation
- Playwright: browser automation for portal scanning and form-filling
- Puppeteer: PDF rendering from HTML templates
- Node.js: utility scripts (merge-tracker, cv-sync-check, generate-pdf)
- tmux: parallel sessions, conductor + workers in batch
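As an example of what a merge-tracker utility might do (the newest-wins merge policy is an assumption; only the column set and the TSV format come from the article):

```javascript
// Sketch of a merge-tracker utility: merge new TSV rows into the
// tracker keyed by URL. Column order (company, role, score, grade,
// url) follows the article; newest-wins is an assumed policy.
const HEADER = "company\trole\tscore\tgrade\turl";

function mergeTracker(existingTsv, newRows) {
  const byUrl = new Map();
  const addLine = (line) => {
    const url = line.split("\t")[4];
    if (url) byUrl.set(url, line); // newest-wins dedup by URL
  };
  existingTsv.trim().split("\n").slice(1).forEach(addLine); // skip header
  newRows.forEach(addLine);
  return [HEADER, ...byUrl.values()].join("\n");
}
```

Keying on URL means a re-evaluated offer updates its existing tracker row instead of producing a duplicate entry.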

Lessons

1. Automate analysis, not decisions. Career-Ops evaluates 631 offers. I decide which ones get my time. HITL is not a limitation — it is the design. AI filters noise, humans provide judgment.
2. Modes beat a long prompt. 12 modes with precise context outperform a 10,000-token system prompt. Each mode loads only what it needs. Less context means better decisions.
3. Dedup is more valuable than scoring. 680 deduplicated URLs mean 680 evaluations I never had to repeat. Dedup saves more time than any scoring optimization.
4. A CV is an argument, not a document. A generic PDF convinces nobody. A CV that reorganizes proof points by relevance, injects the right keywords, and adapts framing to the archetype — that CV converts.
5. Batch over sequential, always. Batch mode with parallel workers processes 122 URLs while I do something else. The investment in parallel orchestration pays off on the first run.
6. The system IS the portfolio. Building a multi-agent system to search for multi-agent roles is the most direct proof of competence. I do not need to explain that I can do this — I am using it.

FAQ

Is this gaming the system?

Career-Ops automates analysis, not decisions. I read every report before applying. I review every PDF before sending. Same philosophy as a CRM: the system organizes, I decide.

Why Claude Code and not a script pipeline?

A script cannot reason. Career-Ops adapts scoring based on company context, reformulates keywords without fabricating, and generates narrative reports — not filled templates.

What does it cost to run?

Zero marginal cost per evaluation. Career-Ops runs on my Claude Max 20x plan ($200/mo), which I use for everything: portfolio, chatbot, articles, and Career-Ops. 631 evaluations without a single extra invoice.

Does the apply mode fill forms automatically?

It reads the page with Playwright, retrieves the cached evaluation, and generates coherent responses matching the scoring. I review before submitting — always.

What happens when the scanner finds a duplicate?

scan-history.tsv stores 680 seen URLs. Dedup by exact URL match plus normalized company+role match against applications.md. Zero re-evaluations.

Is it replicable?

Requires Claude Code with Playwright access and a structured working directory. Skill files define the logic for each mode. Replicable, but not plug-and-play.


Santiago Fernández de Valderrama

AI Product Manager · Solutions Architect · AI FDE · Teaching Fellow at AI Product Academy

Built and sold a 16-year business in 2025. Now bringing that same systems thinking to enterprise AI.

© 2026 Santiago Fernández de Valderrama. All rights reserved.