Case Study · Prior to Pelican · 2025

The estimate was six weeks.
It took eight days.

Before I took on client work, I needed to know this actually worked at production scale. This is the project that answered that question.

8 days to complete
14 production endpoints
80% time reduction
6-week traditional estimate

A legacy API inside a monolith that couldn't stay there.

A legacy API had accumulated inside a monolith over several years. It touched shared database tables, carried implicit dependencies on adjacent services, and offered no clean boundary to cut along. The team knew it needed to come out. Extracting it the traditional way meant a dedicated engineer working through each endpoint carefully, writing tests along the way, and validating the API contracts against every downstream consumer. The project estimate was six weeks.

This project predated Pelican. I was working in an engineering context where legacy extraction was familiar territory, and I wanted to answer a specific question: whether an AI-augmented workflow could hold up on a real production extraction, with real dependencies, real test requirements, and real downstream consumers that would surface any regression.

This was production code with a production test suite, and a passing test suite was the only acceptable definition of done.

Manual coordination, applied with the same discipline I formalized later.

The workflow I used on this project was a precursor to the orchestration system I now bring to every engagement. At the time, it was manual coordination: I directed Claude Code endpoint by endpoint, using file-based plan templates for each one, reviewing the output, running the tests, validating the contracts, then starting again with the next endpoint. I was the coordination layer, sequencing every phase and making every go/no-go decision.

The approach was the same one I formalized later: no endpoint moved forward until the previous phase was solid. I wrote the plan for each endpoint before writing any code. I kept implementation separate from review. Tests ran after implementation, and I validated the contracts after the tests passed. Only then did I move to the next endpoint.

This ran fourteen times.

The Result

Eight days.
All tests passing.

The extraction completed in eight days. I migrated, tested, and validated all fourteen endpoints, with API contracts confirmed against every downstream consumer. The test suite was passing with full coverage at handoff.

Traditional estimate
6 weeks (42 days)
Actual completion
8 days
The Workflow

The same five phases, applied to each of fourteen endpoints.

The discipline of the workflow was as important as the tooling. Each endpoint moved through the same sequence with no phases skipped or abbreviated, and each required my explicit approval before the next phase began.

Plan

I wrote a plan for the endpoint being extracted before any code was written: what it did, what it depended on, what the tests needed to cover, and what success looked like. The plan was saved to disk and required my approval before implementation started.

Implement

Claude Code implemented the endpoint against the approved plan. I reviewed the output before any tests ran. Implementation did not start without an approved plan, and no output moved forward without a review.

Test

The test suite ran after implementation. If tests failed, the implementation loop continued until they passed. The criterion for moving forward was a passing test suite, not code that looked correct.

Validate

I validated the API contracts against all downstream consumers. A passing test suite and a validated contract were both required before an endpoint was considered complete.

Next endpoint

Once both criteria were met, I moved to the next endpoint and repeated the process from the plan phase. This ran fourteen times, in sequence, with the same discipline applied to each.
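The gated sequence above can be sketched as a simple loop. This is an illustrative sketch only: the phase names mirror the case study, but the functions, the `approve` callback, and the endpoint names are hypothetical stand-ins, not the actual tooling used on the project.

```python
# Illustrative sketch of the gated, per-endpoint workflow.
# Every name here is a stand-in; the real work was manual coordination
# of Claude Code with file-based plan templates.

PHASES = ["plan", "implement", "test", "validate"]

def run_endpoint(endpoint, approve):
    """Move one endpoint through all phases; each gate needs explicit approval."""
    for phase in PHASES:
        result = f"{phase}:{endpoint}"            # stand-in for real phase output
        if not approve(endpoint, phase, result):  # the human go/no-go decision
            return False                          # halt: no skipping ahead
    return True

def extract(endpoints, approve):
    """Run endpoints strictly in sequence; stop at the first unapproved phase."""
    completed = []
    for ep in endpoints:
        if not run_endpoint(ep, approve):
            break
        completed.append(ep)
    return completed

# Fourteen endpoints, every phase approved:
eps = [f"endpoint_{i}" for i in range(1, 15)]
done = extract(eps, lambda ep, ph, res: True)
print(len(done))  # → 14
```

The design choice the sketch captures is that approval is per phase, not per endpoint: a rejection at any gate stops the run entirely rather than letting later endpoints proceed on an unvalidated foundation.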

What This Proved

The workflow held up when the work was real.

The question was whether this approach would hold up when the work was real, the constraints were production, and the tests were the judge. It held up.

The speed came from restructuring what I had previously done sequentially in my own head. Planning, implementation, testing, and validation ran as distinct phases for each endpoint rather than as a blurred continuous process. The AI coordination meant I was spending my time on the decisions that required judgment rather than on the mechanics of sequencing work.

Every phase still required my review and approval before the next started. What changed was the pace at which that judgment could be applied to something well-structured and already passing tests.

This project is why the AI-augmented delivery claim on this site comes with specific numbers. I ran it on a production system, measured what happened, and those are the numbers referenced across this site.

Current workflow: how the orchestration system works today
Bring It to Your Project

This is the workflow I bring
to every engagement.

If your project has similar complexity, a discovery call is the right place to start. Thirty minutes, no obligation.