From Executor to Orchestrator: My Journey Building with AI Agents
Thoughts for a Technologist: #6
"The pattern is becoming clear once again: the technology doesn't replace the skill, it elevates it to a higher level of abstraction." - Danilo Alonso
The Paradigm Shift
We're witnessing a fundamental transformation in how we build. In the past year, I've moved from being an executor—someone who writes every line of code—to becoming an orchestrator who conducts AI agents that support how I work and build systems.
I've spent late nights obsessed with finding new ways to integrate AI into my developer workflow. After coming up for air, I've been thinking about what it all means. The excitement about AI in engineering isn't just about smarter autocomplete or never having to code again. It comes from reimagining what building means when intelligence[1] is on tap.
This raises profound questions: When everyone has access to this intelligence, what becomes the differentiator? Is it agency? Grit? Money? What does a future with no barrier to intelligence and the ability to execute look like?
My Evolution Timeline
“With each passing day, technology feels a bit more boundless and I a bit more unbounded.”
The Gateway Drug (2022-2023)
ChatGPT arrived just as I was learning to code. I used it to supplement my CS50 coursework, generating explainers on concepts I wanted to grok better. Sometimes I would have it explain things in the style of the CS50 instructor, David J. Malan. This supercharged my development and understanding.
The Deep Dive (2024-2025)
Claude evolved from being my debugging partner to being my consultant. I started using it across a wider scope of tasks, moving from reactive problem-solving to proactive system design, and eventually treating it as a thinking partner to explore and refine my own ideas.
The Orchestration Era (2025-????)
Now I'm entering what I call my orchestration era. Yesterday's workflow: I used Claude to draft a refactoring spec and Gemini CLI to implement it, orchestrating between the two based on a meeting I'd processed through NotebookLM.
My use of AI has evolved from it being a knowledge base to becoming an executor of my will. With the recent developments in AI agent function calling, we're giving the proverbial monkey the tool. It's exciting and slightly scary.
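To make "function calling" concrete, here's a minimal sketch of handing a model a tool it can choose to invoke, using the Anthropic messages API as one example. The write_file tool, the prompt, and the model name are placeholders of mine, not a prescription.

```python
# A minimal sketch of "giving the agent the tool": we declare a function the
# model may call, here via the Anthropic messages API. The write_file tool is
# a hypothetical example; any side-effecting function could sit behind it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "write_file",
    "description": "Write text content to a file at the given relative path.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string"},
            "content": {"type": "string"},
        },
        "required": ["path", "content"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Create a hello-world Python script."}],
)

# The model only *requests* the call; our code performs the real-world effect.
for block in response.content:
    if block.type == "tool_use" and block.name == "write_file":
        with open(block.input["path"], "w") as f:
            f.write(block.input["content"])
```

The key point sits in that last loop: the model requests the call, but your code executes it, which is exactly where both the excitement and the slight scariness live.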
The Technical Reality: What This Actually Looks Like
Case Study: Product Discovery
I'm currently building something to solve a problem my neighbour has at work. Here's how I orchestrated an entire product discovery flow using AI:
Meeting → PRD: I had a discovery session with him to understand his issue. I put that conversation into NotebookLM to extract themes from the conversation, identify pain points, and draft a PRD.
PRD → Architecture: I fed the complete PRD into Claude to generate a technical specification focusing on the core features for a differentiated MVP.
Architecture → Implementation: I ran an initial "one-shot" prompt to generate the code in V0, downloaded the source into my IDE (Windsurf), and then alternated between generating tasks in Claude Desktop and handing them to Gemini CLI to implement, building up the app's features.
This is orchestration in action and it has changed how I experience building.
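For the curious, here's what that flow might look like compressed into a single script. It's an illustrative sketch, not my actual setup: I moved the artefacts by hand between NotebookLM, Claude, V0, and Gemini CLI, and the prompts, file name, and model name below are assumptions.

```python
# An illustrative sketch of the discovery pipeline as one script, with each
# stage's output feeding the next. Prompts and names are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt to the model and return the text of its reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

meeting_notes = open("discovery_session.md").read()

# Stage 1 -- Meeting -> PRD: extract themes and pain points, draft a PRD.
prd = ask("Extract the key themes and pain points from these meeting notes, "
          "then draft a PRD around them:\n\n" + meeting_notes)

# Stage 2 -- PRD -> Architecture: a technical spec for a differentiated MVP.
spec = ask("From this PRD, write a technical specification covering only the "
           "core features of a differentiated MVP:\n\n" + prd)

# Stage 3 -- Architecture -> Implementation: tasks sized for a coding agent.
tasks = ask("Break this spec into small, self-contained implementation tasks "
            "I can hand to a coding agent one at a time:\n\n" + spec)

print(tasks)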
The Three Schools of AI Development
Through experimentation, I've identified three distinct approaches:
A. The One-Shot Method: Trading Comprehension for Speed
Trade context for accuracy: give the model comprehensive information and let it build everything at once. You can get something decently accurate and race ahead to a much more complete version of your app. But there's a hidden cost: you spend brain capacity trying to understand the generated codebase. I frequently found myself burnt out trying to reverse-engineer why the AI made certain architectural decisions.
I've discovered two fascinating challenges that emerge from this method:
The Legacy Codebase Problem
This method feels exactly like joining a new company and inheriting a legacy codebase. You can work around the complexity without fully understanding it, but then you're relying on the model's accuracy for things you aren’t even aware of. This dependency gives me anxiety—almost like waiting for the other shoe to drop.
The Hyper-specificity Paradox
The more specific you are about certain aspects, the more you notice when other areas get left to the model's implicit reasoning (and how that might differ from yours!). You describe the authentication flow in detail, but forget to mention the error handling, so the AI fills in gaps with its own assumptions. I tried being more specific to fix this, but I’ve realised there’s an ironic threshold where the effort to be comprehensive starts approaching the effort of just building it yourself.
B. The Incremental Method: Maintaining Control
An alternative is building incrementally—pages, functions, enhancements. It's slower, but you maintain understanding and architectural control throughout. You're never more than a few steps away from code you fully comprehend[2].
C. The Layering Method: My Hybrid Approach
I now use a mix of the two, depending on what I'm doing. I tend to one-shot to establish a solid MVP foundation quickly, then switch to incremental improvements. Sometimes I'll generate detailed specs in Claude, then feed simplified, actionable prompts to Gemini for execution. This gives me speed when I need it and control when I want it.
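Here's a rough sketch of that layering hand-off: a detailed spec from one model split into simplified tasks, each passed to a coding agent. The task format, file name, and the assumption that Gemini CLI accepts a -p/--prompt flag for non-interactive runs are mine; treat it as a pattern, not a recipe.

```python
# A rough sketch of the layering hand-off: a detailed spec is split into
# simplified, actionable tasks, and each is passed to a coding agent's CLI.
# Assumptions: the spec lists tasks as "- " bullets, and Gemini CLI accepts
# a -p/--prompt flag for non-interactive runs.
import subprocess

spec = open("refactoring_spec.md").read()

# Naive task extraction for illustration: one task per "- " bullet line.
tasks = [line[2:] for line in spec.splitlines() if line.startswith("- ")]

for i, task in enumerate(tasks, start=1):
    print(f"[{i}/{len(tasks)}] {task}")
    # Hand the agent one simplified prompt at a time, then review the diff
    # before continuing -- this is where you keep architectural control.
    subprocess.run(["gemini", "-p", f"In this repository, {task}"], check=True)
```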
The Reality Check: These tools are powerful, but they demand new forms of discipline and workflow design. For example, I recently spent a weekend untangling code Claude had given me for a feature, and I felt so burnt out afterwards that I nearly deleted the codebase[3].
The Builder's Perspective
My thinking on how AI will impact software development is that it now comes down to how well we adapt to become better orchestrators. The orchestrator's new skill set includes:
Prompt engineering as a core skill: Prompting is about communicating your intent for the agent to execute. Until models can accurately extrapolate intent even from poorly written prompts, you'll need to describe precisely what you're trying to achieve (see the sketch after this list).
Understanding AI model strengths and limitations: This helps with properly separating concerns and assigning tasks. I haven't gone deep into formal model evaluation yet, but I'm developing patterns for learning which agent to use for what.
Maintaining technical depth while leveraging AI efficiency: Understanding technical systems enhances my ability to build with AI. It helps with prompting, judging the quality of what I receive, and reasoning about how something has been built (e.g. when debugging or extending a feature).
Recognising and developing orchestration patterns for complex workflows: This starts with being more mentally engaged than "vibe coding" might suggest, and actively looking at how best to coordinate your agents to get the most effective results.
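To illustrate the prompt-engineering point above, here's the same request expressed vaguely and then with intent made explicit. The structure (goal, context, constraints, output) is just my own convention, and the file path is hypothetical.

```python
# The same request, twice: the point of prompt engineering is making goal,
# constraints, and expected output explicit rather than leaving them implicit.
vague_prompt = "make the login page better"

structured_prompt = """\
Goal: Improve error handling on the login page.

Context: Next.js app; the form lives in components/LoginForm.tsx
(hypothetical path, for illustration).

Constraints:
- Keep the existing layout and styling untouched.
- Show inline validation errors; never a bare alert().
- Do not change the authentication API calls themselves.

Output: the updated component only, plus a one-paragraph summary of changes.
"""
```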
What's Next?
I'm developing my intuition and technical skillset through building—seeing which patterns work and which don't, hitting walls, getting wins, becoming more knowledgeable. It's in fully sinking my teeth into these tools that I get to explore what we're really working with. I'm super excited for the coming months.
My current experiments:
Using AI orchestration patterns to build a functioning product for my neighbour
Optimising my multi-agent workflows to be cost-effective
Discovering orchestration patterns and philosophies through real projects
Questions I'm investigating:
What does it look like to be in the next generation of "orchestrator" developers?
What is the optimal human-AI collaboration ratio for maintaining product quality at AI speed?
How can I connect multiple AI agents to shared context while orchestrating? What would building context as a service look like? How would this differ from just another LLM API or orchestration framework?[4]
As always, I’ll keep you posted on what I learn and what I’m building.
Best,
Nkechi
Want to discuss AI-assisted building and orchestration? Building an AI product and looking for someone to experiment with it? Please reach out! I'm always experimenting with new approaches and would love to compare notes with fellow builders pushing these boundaries.
[1] By intelligence, I don't just mean knowledge; I mean the ability to act on that knowledge and produce real-world effects. With agents gaining tool access, we're getting both the intellect and the execution.
[2] I think as models get better at understanding intent, we'll have fewer situations where we feel compelled to go slow in case the model gets something wrong.
[3] I didn't. I just needed a nap.
[4] I find this very interesting because I'm developing ways to have continuity across different agents while building. I'd be interested in discussing this!