My Engineering Workflow in CursorAI

Every Software Engineer follows a similar rhythm:

Understand requirement → Design → Plan → Implement → Review → Test → Deploy → Monitor.

That rhythm hasn’t changed in a long time. What has changed, now that AI is part of the workflow, is the speed. I also mentioned this in my FREE eBook about adapting AI in Software Engineering.

Before AI, a typical cycle for a mid-sized feature often stretched over several weeks. Each step involved back-and-forth reviews, context switching, and waiting for feedback. But with tools like Cursor (and yes, even Claude Code), the same workflow can now run faster. I mentioned this in the post How much faster can AI actually make your team?

I’ve been experimenting with this new way of working for a while; the last time I talked about this was in my post about my learning after using CursorAI for 9 months. Eventually, I decided to turn my setup into a reusable toolkit called ai-devkit, so that I could share it with others and keep AI-assisted workflows consistent across projects.

Let’s walk through how my engineering workflow actually looks today, step by step.

Engineering Workflow (Before AI)

Whether you’re at a startup or a big tech company, the core process looks like this:

  1. Understand Requirement: Read the PRD or tickets, clarify what’s being asked.
  2. Review Requirement: Identify gaps, edge cases, or unclear assumptions.
  3. Design: Draft your architecture, data model, or API contract.
  4. Review Design: Discuss with peers or leads to validate approach.
  5. Plan: Break it into tasks, estimate effort, and define milestones.
  6. Implement: Write the code, debug, and document.
  7. Review Implementation: Peer review for quality and maintainability.
  8. Testing: Unit, integration, or E2E coverage before shipping.
  9. Deploy & Monitor: Release safely and track metrics or logs.

Each phase produces something tangible: a document, a design, a Pull Request (PR) or Merge Request (MR), or a deployment. And every phase waits on human feedback or context setup, so even a mid-sized feature could take weeks or months.

AI-Accelerated Workflow

To make this section easier to navigate, I’ve broken it down into two parts: Feature development and Understanding existing code.

Feature development

The workflow itself didn’t change; we just execute it differently now. I personally use Cursor as my default editor and have tightened the workflow around it, even though I was a Neovim user for many years.

I use Cursor Commands to create reusable workflows so that I don’t need to repeat myself.

You can create the commands yourself, or you can scaffold the whole working environment with npx ai-devkit init. It lets you choose your environment, such as Cursor or Claude Code, and sets up the files accordingly.
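
For reference, the scaffolding is a single command run at the root of your project. The comments about where the generated command files land reflect where Cursor and Claude Code typically keep project commands; check what init actually generated in your repo, since your ai-devkit version may lay things out differently.

    # Scaffold the ai-devkit workflow files for this project.
    # The init step asks which environment you use (Cursor, Claude Code, ...)
    # and sets up the files accordingly.
    npx ai-devkit init

    # Cursor usually keeps project commands under .cursor/commands/,
    # Claude Code under .claude/commands/; verify what init created for you.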

Here’s how it looks with ai-devkit:

  1. Understand Requirement
    • In the Cursor AI Chat (or Claude Code), use the /new-requirement command to summarize goals, constraints, and success metrics.
    • This prompt will try to highlight what’s unclear and what assumptions you’re making.
  2. Review Requirement
    • After /new-requirement finishes, you have all the required documents in docs/ai/ (see the directory sketch after this list). At this stage, review the files so that nothing gets missed. I usually do this review manually so that I stay on top of the work.
    • Ask Cursor to review the requirement by using the command /review-requirement.
    • The output helps spot what the original specs might have missed.
  3. Design
    • When you run /new-requirement, it also proposes a design in docs/ai/design.
    • I often generate multiple options, compare trade-offs, and refine manually.
  4. Review Design
    • To support this, I have a /review-design command that lets Cursor act as a simulated peer reviewer: it challenges design decisions and points out things I overlooked.
  5. Plan
    • Convert the design into concrete steps and a checklist.
  6. Implement
    • This is where AI tools such as Cursor or Claude Code shine.
    • Once all the design and plan documents are reviewed, run /execute-plan; the tool will pick up the tasks one by one and execute them.
    • When the implementation is done, run /check-implementation to make sure the code follows the requirements.
    • You still need to review the code manually, with /code-review as support, so that you keep owning the quality of the code.
  7. Testing
    • Run /writing-test to create unit and integration tests targeting high coverage.
  8. Deploy & Monitor
    • Cursor helps generate release notes, deployment steps, monitoring metrics, and alerts.

I still think through the same steps, but AI keeps the flow unbroken. It helps me stay in problem-solving mode longer.

Understanding existing code

A big part of coding is understanding the code that’s already there. For years, reading and making sense of existing code has been one of the hardest things, especially for newer engineers.

Now, AI can help speed up that process. It can describe what a piece of code is doing, show you where it’s being used, and help you understand how different parts fit together.

With ai-devkit, the /capture-knowledge command helps you understand how existing code works: it analyzes the code from any entry point and generates comprehensive documentation with visual diagrams.
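
For example, in the Cursor chat it can look something like this. The file path is made up, and I’m assuming here that the command accepts the entry point inline; it may instead prompt you for one.

    /capture-knowledge src/checkout/processOrder.ts
    # analyzes everything reachable from that entry point and generates
    # documentation with diagrams describing how the pieces fit together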

Know when to use MCP

When working inside Cursor or Claude Code, not everything needs to run through a Model Context Protocol (MCP) server. MCP is extremely useful when you need live integration between tools, but many workflows are still more practical using existing CLIs.

For example, I use the Figma MCP server to fetch design details directly into Cursor. It saves time when I need to reference a component or color spec while coding. Similarly, I rely on the Atlassian MCP server to fetch Jira ticket or Confluence information automatically when running /new-requirement, which keeps requirements in sync.

However, not every integration benefits from MCP. For GitLab, I prefer to use the GitLab CLI instead of setting up an MCP server. It’s faster, simpler, and perfectly fine for tasks like creating an MR. Sometimes, adding MCP just for the sake of it adds unnecessary complexity.
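
As a small sketch of what I mean, assuming glab is installed and authenticated (glab auth login); the branch name, title, and description below are just placeholders.

    # Push the branch, then open a Merge Request straight from the terminal.
    git push -u origin feature/checkout-validation
    glab mr create \
      --title "Add checkout validation" \
      --description "Implements the plan in docs/ai/plan" \
      --target-branch main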

Use MCP when context-sharing between AI and external tools makes your workflow smoother, but don’t force everything into the MCP pattern. Command-line tools and direct editor integrations are often more efficient for straightforward tasks.

Why I Built ai-devkit

Before ai-devkit, I used to keep a folder of prompts. Every time I needed to start something new, I copied one prompt into Cursor, adjusted the wording, and ran it. It worked, but it was manual, inconsistent, and error-prone. Sometimes the context got lost along the way.

So I bundled everything into a single, reusable setup: ai-devkit. It gives me consistency across projects and lets others benefit too.

You can use it directly in Cursor or in Claude Code; that choice is yours. The structure is universal; it just depends on how you integrate the prompts into your workspace.

Real Example: Better Output, Lower Cost

In my previous post, I compared Claude Sonnet 4.5, GPT-5 Codex, and Grok Code Fast 1. Grok Code Fast 1 was fast, but produced incomplete code.

When I tested the same task again using ai-devkit with the /new-requirement command in Cursor and Grok Code Fast 1, it was fast, the result was good on the first try, and the cost was much lower than with the other models.

Model matters, but what we give to the model matters more.

The future of AI-assisted Engineering

AI is now part of the engineering workflow. It amplifies human engineers rather than replacing them; we are augmented by it.

The phases remain the same, but the cycle time is reduced. You still need to think clearly, design responsibly, and test thoroughly. AI just helps you get there faster, with less friction between steps.

I see this as the new normal: Engineers still own the craft, but they now have powerful copilots to accelerate it.

If you’re curious to explore this kind of workflow, try ai-devkit. Use it, experiment with your own commands, or even extend it with better prompts. All contributions are welcome.



5 thoughts on “My Engineering Workflow in CursorAI”

    1. Yes, I did. spec-kit is a good tool; I like how it handles spec generation independently. For ai-devkit, I want to go a bit deeper into the full workflow and integrate tightly with the tools engineers are already using, so it feels native to them.

  1. Hi anh,

    Thanks for sharing this. I’m applying this workflow in my own development and I have a question: What is the role of the documents in the docs/ai/implementation directory?

    I noticed they don’t seem to be used in the /execute-plan command. Am I missing something?

    Thanks!

    1. Thanks for the question. Currently `docs/ai/implementation` is for taking notes on implementation actions, so that we can trace back and update the code. When executing the plan, the tool mainly depends on `docs/ai/plan`, which is the implementation plan derived from `docs/ai/design`. Hope this helps.
