It has been one year since I wrote about how much faster AI can make a team.
Back then, the conversation was about speed. Today, it is about discipline.
If you are still coding the same way you did two years ago, you are underperforming. Not because you are not capable. But because the leverage has moved.
AI did not just make us faster. It changed what matters.
What changed in one year
One year ago, using Cursor or Claude Code already felt powerful. You describe a feature and get working code in minutes. That alone reshaped daily engineering work.
Today, multi-agent workflows are common. Structured reviews are expected. Running parallel agents in isolated worktrees no longer feels experimental.
You can let one agent draft a plan, another implement, and a third review. You can compare outputs from different models and choose deliberately instead of accepting the first answer.
In my day-to-day workflow with AI DevKit, I usually start by brainstorming with Claude Code using the /new-requirement command. I then switch to Codex CLI to review both the requirement and the design through /review-requirement and /review-design, and finally move back to Claude Code for the implementation. This lets me leverage the strengths of different tools and models, using each one where it performs best instead of forcing a single agent to handle everything.
I try to keep my agents busy. While I review the requirement with one agent, I let others handle implementation or review the design in parallel.
The tools evolved fast. But more importantly, the expectations evolved.
Speed is no longer the differentiator. Discipline is.
Engineer A vs Engineer B
Let me contrast two senior engineers.
Engineer A uses AI every day. He writes prompts, gets code, scans the diff, and merges. He is clearly faster than before. He ships more tickets per sprint. He feels productive.
Engineer B also uses AI every day. But his workflow looks different.
He starts from a helicopter view. The requirement is ambiguous at first. Instead of jumping to code, he lets the agent drive the clarification loop. He answers in chat. He pushes the agent to keep asking until edge cases, constraints, and trade-offs are explicit.
Engineer A edits the code directly when something looks off.
Engineer B communicates with the agent. He explains why it is wrong. He updates constraints. He adjusts the plan. He asks the agent to restate the decision before touching implementation. He treats the agent as a collaborator, not a code generator.
Engineer A trusts the output if it compiles and the happy path works.
Engineer B strengthens the tests. He asks the agent to generate a test suite, then adds edge cases around null inputs, boundary values, concurrency, and error handling. He forces the agent to explain why each test exists. He treats testing as the backbone, not a checkbox.
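The kind of suite Engineer B pushes for can be sketched in a few lines. The function under test here, `parse_price`, is invented for illustration; the point is the categories of cases (null inputs, boundaries, error handling), not this specific API.

```python
def parse_price(text):
    """Parse a price string like '19.99' into integer cents, rejecting bad input."""
    if text is None:
        raise ValueError("price is required")
    text = text.strip()
    if not text:
        raise ValueError("price is empty")
    value = float(text)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("price cannot be negative")
    return round(value * 100)

# Happy path: the only case Engineer A would check.
assert parse_price("19.99") == 1999

# Null and empty inputs.
for bad in (None, "", "   "):
    try:
        parse_price(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Boundary values.
assert parse_price("0") == 0
assert parse_price("0.01") == 1

# Error handling for malformed and negative input.
for bad in ("abc", "-1"):
    try:
        parse_price(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each assertion answers the question Engineer B forces the agent to answer: why does this test exist?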
Both engineers are faster than they were two years ago.
This difference in behavior looks small. But it changes everything.
The discipline gap
AI amplifies your engineering discipline.
If your thinking is vague, AI produces vague structure at scale.
If your requirements are weak, AI generates confident but misaligned code.
If your review process is shallow, AI accelerates bugs.
The dangerous part is that both engineers think they are doing fine. Both are shipping. Both are faster than before. The difference only shows up months later.
The more I practice agentic workflows, the more I realize that testing becomes more important, not less.
When code generation becomes cheap, validation becomes the bottleneck.
In an agentic workflow, I recommend responding in chat. Let the agent drive the clarification loop. It will keep asking questions until the requirement is clear enough to move forward.
Requirements, design notes, and decisions are not documentation. They are execution inputs.
If the agent does something wrong, most of the time it is not about intelligence. It is about context.
Managing context is now part of the job.
Senior engineers must learn how to enrich context deliberately, and how to keep it lean. Too little context creates mistakes. Too much context creates noise. Both require judgment.
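Deliberate context management can be made concrete. The sketch below ranks candidate snippets by a relevance score and packs them under a size budget. The keyword-overlap scoring is a stand-in assumption; real setups typically use embeddings or retrieval, but the judgment call is the same: include what helps, drop what is noise.

```python
def build_context(task, snippets, budget_chars=2000):
    """Pick the most relevant snippets that fit within budget_chars."""
    task_words = set(task.lower().split())

    def relevance(snippet):
        # Crude relevance: shared words between the task and the snippet.
        return len(task_words & set(snippet.lower().split()))

    chosen, used = [], 0
    for snippet in sorted(snippets, key=relevance, reverse=True):
        if relevance(snippet) == 0:
            break  # too little context creates mistakes, but noise is worse
        if used + len(snippet) > budget_chars:
            continue  # skip what does not fit; keep the context lean
        chosen.append(snippet)
        used += len(snippet)
    return "\n\n".join(chosen)

snippets = [
    "def parse_price(text): ...  # price parsing with validation",
    "CHANGELOG: bumped the logging library version",
]
context = build_context("fix rounding bug in price parsing", snippets)
```

With this task, the price-parsing snippet is included and the unrelated changelog entry is dropped, which is the whole discipline in miniature.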
From implementer to orchestrator
The role of a senior engineer quietly changed.
Before AI, your value often came from writing the hardest 500 lines in the system.
With AI, implementation cost drops. But the decision cost increases.
Now your value comes from defining the 50 lines that must never be wrong. The parts of the system where a small mistake creates long-term damage.
A senior engineer orchestrates both humans and machines.
You design the workflow. You define the phases. You decide when to branch into parallel agents. You decide when to compare multiple model outputs. You decide when to stop the agent and take over manually.
Multi-agent setups are powerful. You can run two models on the same task and pick the better solution. But this introduces cognitive load.
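The "run two models, pick the better answer" pattern can be sketched as below. The agent functions are stubs invented for illustration; in practice they would call different model backends. The important part is the explicit, written-down scoring rule, so the choice is deliberate rather than gut feel.

```python
from concurrent.futures import ThreadPoolExecutor

def agent_a(task):
    # Stub standing in for one model's output.
    return {"agent": "A", "patch": f"fast draft for: {task}", "tests_pass": False}

def agent_b(task):
    # Stub standing in for a second model's output.
    return {"agent": "B", "patch": f"careful draft for: {task}", "tests_pass": True}

def score(result):
    # Deliberate criteria: passing tests first, then the smaller patch.
    return (result["tests_pass"], -len(result["patch"]))

def best_of(task, agents):
    # Fan the same task out to all agents in parallel, then compare.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(lambda agent: agent(task), agents))
    return max(results, key=score)

winner = best_of("implement retry logic", [agent_a, agent_b])
```

Because the scoring function is explicit, you can explain why one output won, which is exactly the accountability that keeps the cognitive load manageable.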
If you cannot clearly explain what each agent is responsible for, you are not orchestrating.
Context switching becomes a real skill. Energy management becomes part of engineering.
If you cannot hold the mental model of what each agent is doing and why, you lose control of the system.
The compounding effect
In the first month, the difference between Engineer A and Engineer B looks small.
Engineer A finishes a feature in one day. Engineer B also finishes in one day, maybe slightly slower because he invests in tests and structure.
After three months, Engineer B’s codebase is cleaner. Tests are stronger. Review loops are tighter. Decisions are documented as inputs, not afterthoughts.
After six months, Engineer B has reusable commands, templates, and structured phases. He can spin up new features with clarity. He can onboard agents faster. His feedback loops are consistent.
After twelve months, the gap compounds.
Engineer A is still faster than in his pre-AI days. But he is fighting entropy. He can only focus on one problem at a time because his workflow is still single-threaded. He prompts, waits, reviews, merges, then moves to the next task. He was never trained to coordinate multiple agents running in parallel, so his output scales with his attention and hours, not with the number of agents he can control.
Engineer B is operating a system. He can scale the number of things he builds by increasing the number of agents he can clearly define, coordinate, and control.
The exponential effect does not come from typing speed. It comes from accumulated workflow leverage.
Why workflow must be portable
One more lesson I learned: do not couple your discipline to a single tool.
Cursor, Claude Code, Codex. They are interfaces. They will continue to evolve.
An orchestrator does not depend on one instrument.
If your process only works inside one UI, it is fragile. The moment the tool changes, your leverage resets.
Your workflow should be portable.
Structured phases. Reusable commands. Explicit review steps. Clear testing loops. Configurations that can move across environments.
Wherever the agent runs, your discipline should follow.
On your local machine. On a different laptop. On a teammate’s machine. Inside a CI pipeline. On a remote server. On the cloud.
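One way to make a workflow portable is to define it as plain data that any agent frontend can consume. The sketch below is invented for illustration and is not AI DevKit's actual configuration format; the /new-requirement, /review-requirement, and /review-design commands come from earlier in the post, while the others are placeholders.

```python
# Phases and review gates as tool-agnostic data, not UI settings.
WORKFLOW = [
    {"phase": "requirement", "start": "/new-requirement", "review": "/review-requirement"},
    {"phase": "design",      "start": "/new-design",      "review": "/review-design"},       # /new-design is a placeholder
    {"phase": "implement",   "start": "/implement",       "review": "code review + tests"},  # /implement is a placeholder
]

def next_phase(completed):
    """Return the first phase that has not been completed yet."""
    for step in WORKFLOW:
        if step["phase"] not in completed:
            return step["phase"]
    return "done"
```

Because the phases live in data rather than in one tool's UI, the same discipline travels with you to any machine or runner.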
This is one of the reasons I built AI DevKit. Not to lock into one model. But to standardize how I work with any agent.
The tool matters less than the workflow.
The uncomfortable truth
If you are still using AI as autocomplete, you are slowly being left behind.
If you treat the agent as a faster junior engineer without redesigning your process, you are leaving leverage on the table.
Last year was about discovering speed.
This year is about building discipline.
Agentic engineering is not about clever prompts. It is about workflow design. It is about context management. It is about structured review. It is about testing as a first class citizen.
It is orchestration.
Rethink your workflow. Upgrade it.
Speed was only the beginning.
If this perspective resonates with you, subscribe to my blog. I share what I learn while building real systems with AI in the loop. You can also follow me on X or Threads for more thoughts and ongoing experiments.