What I learned using CursorAI every day as an Engineer

CursorAI is an AI-powered code editor (built on VS Code) that integrates a large language model (LLM) directly into your development workflow. Instead of just autocompleting code, Cursor can predict multi-line edits, apply changes across your codebase, and even answer questions about your code.

I’ve shared before that using AI effectively can cut your coding time by roughly half. This post is a reflection on what I’ve learned using Cursor every day as an engineer. I started as a free user in July 2024 and became a paid user in September 2024. I’ll walk through my nine-month journey with it: how I used Cursor to prompt the LLM, speed up code generation, debug issues, write and improve tests, refactor, document my work, and even understand code faster. Everything here comes from my day-to-day experience, shared in a simple and practical way to help others make the most of Cursor in real engineering.

Prompting Effectively with Cursor

Prompting is the way you tell the AI exactly what you need in a way it understands. In Cursor, effective prompts are key to getting helpful code suggestions or fixes. Here are some prompting tips and features to keep in mind.

Be Clear and Specific

Describe the task or problem in plain language, mentioning relevant details like framework (NestJS, React), function names, or expected behavior. For example, “Implement a NestJS route to GET /users returning a list of users from the database”.

The more precisely you describe the goal, the better the AI can help. Remember that the context is important. The more structured, semantic, and contextual the input, the smarter your AI becomes.

Leverage context with @ mentions

Cursor lets you reference files or documentation in your prompt using the @ symbol. You can refer to your code (e.g., @user.service.ts) or even external docs (via @Docs entries) to give the AI more knowledge.

For instance, if you’re using a specific library or NestJS module, you could add its documentation with @Docs and then say, “Use @NestDocs to implement the authentication guard”.

This ensures the AI follows the official patterns. Adding documentation links in prompts is especially useful for less common libraries or for enforcing a framework’s best practices.

Inline vs. Chat/Composer

Cursor provides multiple ways to prompt the AI.

Inline Edits (Cmd/Ctrl + K)

You can highlight a few lines of code and press Cmd+K to bring up a small prompt box for quick fixes or generation. This is great for small refactors or one-off questions about a specific snippet. After you type a prompt and submit, Cursor shows the changes as a diff, with removed lines in red and added lines in green, similar to reading a pull request on GitHub. You can review and accept the changes.

Chat/Composer (Cmd/Ctrl + L)

For larger tasks involving multiple files or more discussion, open the chat interface (also known as the Composer). This is like having a conversation with the AI. You can ask it to generate bigger pieces of code, design entire modules, or perform complex refactors across the codebase. In the chat, you can use @ mentions to include multiple files or docs as context. Cursor indexes your codebase, so it can answer questions or apply changes across files. Use this mode for broad changes like “Refactor the authentication module to use JWT instead of sessions”. You can attach all relevant files (user model, auth service, etc.) with @ so the LLM considers them together.

Use TypeScript over JavaScript

This tip is specific to my experience, since I spend most of my time in the Node.js and JavaScript ecosystem. When possible, write code in strongly typed TypeScript rather than plain JavaScript. Strong types give the AI meaningful clues about how to proceed, and it can check its output against TypeScript definitions, reducing errors.

For example, if you prompt “create a function to compare two Event objects”, having TypeScript types will guide the LLM to handle it properly or catch type mismatches.
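To make this concrete, here is a minimal sketch of what such a prompt might produce when types are in scope. The Event shape below is an assumption invented for the example; in a real project the LLM would pick up your actual definitions:

```typescript
// Hypothetical Event type; in practice the LLM reads this from your codebase.
interface Event {
  id: string;
  startTime: Date;
  title: string;
}

// With the type available, the generated comparison can handle each field
// correctly — e.g., comparing Dates by value rather than by reference.
function eventsAreEqual(a: Event, b: Event): boolean {
  return (
    a.id === b.id &&
    a.startTime.getTime() === b.startTime.getTime() &&
    a.title === b.title
  );
}
```

Without the interface, the model has to guess which fields exist and how to compare them; with it, mismatched fields show up as compile errors.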

Iterate and Refine

Treat interacting with Cursor as an iterative process. If the AI’s first answer isn’t perfect, you can clarify or adjust your prompt and try again. Often, shorter, focused prompts work best.

For example, rather than asking, “Build a whole e-commerce app frontend”, break it into steps: “Create a React component for the product list with props for items”, then “Now add a button that adds an item to cart, updating state”, etc.

You can also ask follow-up questions in chat if something is unclear or needs tweaking. If one approach doesn’t yield good results, rephrase the request. Many times, trying a few different prompt strategies will lead the AI to a correct solution.

Set Project-Specific Rules

This is an advanced feature. Cursor allows custom AI rules. Previously, they allowed you to create a .cursorrules file in the root of the project to guide the AI’s style and preferences. Now, to create rules specific to a project, you can put the files in the .cursor/rules directory. They are automatically included when matching files are referenced.

With rules, you can instruct the AI to always follow certain conventions without repeating them in every prompt. For example, for a NestJS project, you might add a rule: “Always use NestJS dependency injection and @Injectable() for services”.
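As an illustration, a project rule file in .cursor/rules might look roughly like this. The frontmatter fields follow Cursor’s rules format; the rule contents themselves are just made-up examples of the kind of conventions you might enforce:

```
---
description: NestJS service conventions
globs: ["src/**/*.service.ts"]
alwaysApply: false
---

- Always use NestJS dependency injection and @Injectable() for services.
- Prefer async/await over raw Promise chains.
- Throw NestJS HttpException subclasses instead of generic errors.
```

With the globs field set, the rule is pulled in automatically whenever a matching file is part of the context.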

These rules help ensure consistency and save you time by baking your project’s standards into the AI’s behavior. While prompting with Cursor, you can also ask it to reflect on and update the rule files.

By applying these prompting practices, you have the basic foundation for starting with Cursor. Now, let’s look at how Cursor can assist in various parts of your development workflow.

Understanding Unfamiliar Code with AI

As an experienced engineer, you often inherit or interface with code you didn’t write. Cursor can act as a smart guide to help you understand such code quickly. Instead of manually tracing through every line, you can ask the LLM for explanations.

If you come across a complex function or class, highlight the code and press Cmd+K, then ask Cursor, “Explain what this code does”. This saves time when deciphering unfamiliar logic.

In the chat interface, you can ask higher-level questions about the codebase. For instance, “Summarize what the UserService class does in our NestJS app”. Because Cursor indexes your codebase, it can retrieve relevant info to answer. It might tell you that UserService handles user creation, retrieval, and authentication by interacting with the database and JWT module. The AI effectively “knows your codebase” and can use that knowledge to give you insights. This is extremely helpful when onboarding onto a new project or reviewing a large pull request. You can get quick overviews of different pieces of the system.

You can use the chat to find where something is defined or used by just asking. For example, “Where in the codebase is the forgot password function defined?”. Cursor will search the indexed codebase and point you to the file and line number if possible. Similarly, you could ask, “Which files reference the UserEntity class?” and the AI can list those references. This Q&A style search helps navigate large projects faster than manual grep, as the AI does the lookup for you.

If you see an unfamiliar framework pattern (say a NestJS decorator or a React hook you haven’t used), you can query Cursor about it. For instance, “What does the @Throttle() decorator do in NestJS?”. If the knowledge isn’t in your codebase, you could add @Docs for NestJS documentation and then ask. Cursor will combine the documentation and its own trained knowledge to explain that @Throttle() is used for rate-limiting requests in NestJS. With this, you learn about new libraries or patterns without leaving your editor.

Using Cursor as a code-reading companion can cut down the time needed to understand existing code or third-party modules. It’s like having a teammate who instantly summarizes or clarifies code for you. Always double-check critical sections yourself, but for a first pass understanding or confirming your interpretations, the AI is incredibly useful.

Code Generation and Implementation

One of Cursor’s biggest benefits is speeding up code writing. Instead of writing boilerplate or repetitive code by hand, you can ask the AI to generate it for you.

As you type, Cursor will suggest code completions. It can auto-complete entire lines or blocks of code that “just make sense” in context. Get comfortable hitting Tab to accept these completions; it’s a quick way to stub out code.

For larger chunks of code, use natural language prompts. Open an empty file or position your cursor where you want the code, then activate the inline prompt (Cmd+K). Describe what you want. For example, “Create a NestJS service class named TasksService with methods to get all tasks, get one by ID, create, update, and delete tasks. Use an in-memory array for storage”. Cursor will then produce the code based on your description. You can then refine these results by either editing yourself or prompting Cursor to fill in details. The key is that you have 80-90% of the boilerplate written in seconds, which you can then customize.
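As a rough sketch of what that prompt might return (trimmed to plain TypeScript here; in the actual NestJS output the class would carry an @Injectable() decorator from @nestjs/common and live inside a module):

```typescript
// Hypothetical Task shape for this example.
interface Task {
  id: number;
  title: string;
  done: boolean;
}

// In a real NestJS app this class would be decorated with @Injectable();
// that is omitted so the sketch runs standalone.
export class TasksService {
  private tasks: Task[] = [];
  private nextId = 1;

  getAll(): Task[] {
    return this.tasks;
  }

  getOne(id: number): Task | undefined {
    return this.tasks.find((t) => t.id === id);
  }

  create(title: string): Task {
    const task: Task = { id: this.nextId++, title, done: false };
    this.tasks.push(task);
    return task;
  }

  update(id: number, changes: Partial<Omit<Task, "id">>): Task | undefined {
    const task = this.getOne(id);
    if (!task) return undefined;
    Object.assign(task, changes);
    return task;
  }

  delete(id: number): boolean {
    const before = this.tasks.length;
    this.tasks = this.tasks.filter((t) => t.id !== id);
    return this.tasks.length < before;
  }
}
```

This is exactly the kind of boilerplate you would then refine, for example by swapping the in-memory array for a repository.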

When starting a new feature that touches multiple files (say a new module with a model, service, and controller in NestJS, or a new component plus context in React), you can use the Composer (chat interface). List all relevant files using @ (or use @codebase to refer to the whole project) and then describe the overall change. For example: “Create a new AuthModule with a controller, service, and guard. The controller should have login and signup endpoints, the service should validate users, and use JWT strategy for authentication”. Cursor can generate multiple files or suggest diffs across files in one go. This is like having a junior developer draft an entire feature, which you can then review and polish.

After each generation, always review the code. Cursor’s strength is speed, but it might not get everything perfect, especially for complex logic. Verify types and logic, the advantage is that you now have something concrete to tweak rather than a blank page. If something is slightly off, you can correct it manually or just prompt Cursor again with a refined request. For instance, “Now add input validation to the signup DTO using class-validator decorators”, and it can modify the code to include @IsEmail(), @MinLength(), etc. Gradually, you guide the AI to the desired solution.

You’ll find you’re typing less boilerplate and focusing more on high-level logic. It feels like pair programming: you describe the intent, the AI writes the initial code, and you oversee and adjust as needed. This can accelerate development while keeping you in control of the final code.

Debugging and Troubleshooting

When you encounter an error or stack trace, you can ask Cursor to interpret it. Copy the error message or exception and prompt something like: “What does this error mean, and how do I fix it?”. This saves you from searching Stack Overflow for an explanation; the answer comes to you in the chat. It might not work every time, but you can often pick up hints before searching further on the internet. I also think this is an area where LLMs still need to improve.

If a function isn’t producing the expected result and you can’t spot why, ask Cursor. For instance, “The calculateTotalAmount() function is returning 0 sometimes when it shouldn’t. Can you find the bug?”. Provide the relevant code either by selecting it and using the inline prompt, or by ensuring the chat has access to that file (e.g., mention @order.utils.ts). The AI will analyze the logic and, in many cases, not only pinpoint the problem but also suggest a fix, given enough context about what the code should do. You get a second set of eyes on your code, which is helpful for tricky issues.

Sometimes, you’re not sure what’s wrong. You can literally talk through the problem with Cursor. Explain the situation: “After implementing the login API, I get a 401 on every request, even with correct credentials. Here’s the @AuthService.login() method. Why might this be happening?”. By giving the AI the code (and maybe the expected behavior), it can reason about possible causes. This kind of higher-level debugging advice can point you in the right direction, even if the AI can’t directly run the code. As an experienced engineer, though, use your judgment about which suggestions to follow and which to ignore.

When using Cursor for debugging, always back up your code or use version control (which you should as an experienced engineer!). That way, if an AI-suggested change doesn’t work out, you can revert easily. In practice, you’ll find the AI catches things you overlooked, or at least provides a fresh perspective on the bug. Debugging with an AI co-pilot can reduce the time spent stuck on issues, getting you to a solution faster.

Writing Tests

Writing tests used to take me a lot of time. Cursor can boost your productivity by generating test cases and even helping you adopt a test-driven approach. Whether you’re using Jest for your NestJS backends or React apps, or any other testing framework, the workflow is similar.

After writing a new function or feature, you can ask Cursor to create tests for it. For example, if you have a function formatName(firstName, lastName) in a file, you can prompt: “Write a Jest test suite for the formatName function, covering cases like normal names, missing last name, and all-caps input”.

This saves you the time of writing boilerplate describe/it blocks for each case. You may need to tweak expectations or add edge cases. The AI knows common testing patterns (like using expect and organizing tests) and will align with your project’s testing libraries (especially if you mention them, e.g., Mocha vs Jest).

If you’re following TDD or just want to nail down behavior first, you can actually have Cursor write a test before the implementation. Describe what the code should do and ask for a test. For instance, “Write a unit test for a NestJS AuthService.validateUser method: it should return a user object when credentials are valid, or null when invalid”. This prompt yields a test for a function that doesn’t exist yet. Once you have the test, you can either implement the function yourself or ask Cursor to implement it to make the test pass. This approach is powerful: you use the AI to formalize the requirements in code (the test), then use the AI again to fill in the implementation. It’s like having a junior dev who writes the test specs and then attempts the code under your guidance.

Beyond unit tests, you can ask Cursor for higher-level tests. For a NestJS app, you might prompt: “Create an e2e test for the Auth module using Supertest. It should start the Nest app, call the signup and login endpoints, and expect a JWT in the response”. The AI can draft an end-to-end test, including the setup of the testing module and HTTP calls.

A workflow that I found useful is to run your test suite after Cursor generates or modifies code. If some tests fail, feed those failure messages back into Cursor (as part of a prompt) to get it to fix the code.

When using Cursor for testing, remember the golden rule: don’t blindly trust generated tests. Review them to ensure they actually test what you intend. Sometimes AI-written tests might assert the wrong expectation (misinterpreting the requirement). Use them as a time-saver and template, then refine.

Refactoring Code

Refactoring the structure of code without changing its functionality is a task that Cursor handles very well. It allows you to perform structural changes using natural language, which can boost your productivity.

For localized refactoring, highlight the code you want to change and use the inline prompt (Cmd+K). Describe the desired refactor in a simple prompt. For example: “Refactor this function to use async/await instead of Promises.”. These are things you might do manually with find-replace or multiple steps, but Cursor can often execute them in one prompt.
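For instance, a before-and-after of that Promise-to-async/await prompt might look like this (getUserName and its fetchUser dependency are made up for the example):

```typescript
// Before: a hypothetical Promise chain.
function getUserNameThen(
  fetchUser: (id: number) => Promise<{ name: string }>,
  id: number
): Promise<string> {
  return fetchUser(id)
    .then((user) => user.name)
    .catch(() => "unknown");
}

// After: the same logic with async/await, as Cursor would typically rewrite it.
async function getUserName(
  fetchUser: (id: number) => Promise<{ name: string }>,
  id: number
): Promise<string> {
  try {
    const user = await fetchUser(id);
    return user.name;
  } catch {
    return "unknown";
  }
}
```

The behavior is identical; the win is that the AI applies the mechanical rewrite (then/catch into try/catch) consistently across every branch.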

For refactoring that spans across files or requires understanding of the project structure, use the Composer chat. For instance, say you want to reorganize a NestJS module or change an API route across controllers. You can instruct: “Rename the Users module to Members, update the module name, folder name, and all references in imports or paths”. Add @ references to the key files (the users.module.ts, maybe the app.module.ts where it’s imported, etc.) or even @codebase for a broad refactor. Cursor will attempt to update all those files in one go, showing you a combined diff. It might update class names, filenames, and strings like route URLs if mentioned. This is incredibly powerful: a task like renaming a widely used class can be done in moments, whereas doing it manually might be error-prone. Always verify each changed file in the diff to ensure it did what you intended, especially with broad find-and-replace style changes.

You can ask Cursor to make code more optimized. For example: “Refactor this code to use modern ES2020 features” or “Optimize this loop using array methods”. If you have a chunk of JavaScript that uses older approaches, the AI can transform it (e.g., replace for loops with .map or .reduce where appropriate). If you have a long function, you might say “Refactor into smaller functions each doing one task, apply Single Responsibility Principle”. The AI could split the function into multiple helper functions.
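As a small example of the kind of loop-to-array-method transformation such a prompt produces (the cart data is invented for illustration):

```typescript
interface CartItem {
  price: number;
  quantity: number;
}

// Older style: an imperative for loop with a mutable accumulator.
function cartTotalLoop(items: CartItem[]): number {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].price * items[i].quantity;
  }
  return total;
}

// What Cursor might rewrite it to: the same computation with reduce.
function cartTotal(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
```

Since both versions must compute the same totals, this is also a good place to let existing tests confirm the refactor didn’t change behavior.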

Using natural language to refactor feels like telling a junior developer, “Hey, clean this up for me”, and getting the result instantly. Just be sure to keep an eye on the changes and back them up in version control in case you need to revert; a safety net is always wise when making sweeping changes.

Documentation and Explanations

If you want to add explanatory comments or JSDoc/TSDoc comments to your code, ask Cursor to do it for you. For example, highlight a function and prompt: “Add a JSDoc comment explaining this function’s purpose, parameters, and return value”. This provides immediate documentation for anyone reading the code later. Similarly, you can ask for comments inside complex logic.
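For example, asking for a JSDoc comment on a small utility (the truncate function here is hypothetical) might yield something like:

```typescript
/**
 * Truncates a string to a maximum length, appending an ellipsis
 * when the input is longer than the limit.
 *
 * @param text - The string to truncate.
 * @param maxLength - Maximum length of the result, including the ellipsis.
 * @returns The original string if it fits, otherwise a truncated copy ending in "…".
 */
function truncate(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  return text.slice(0, Math.max(0, maxLength - 1)) + "…";
}
```

The generated comment describes purpose, parameters, and return value, which is usually enough for hover tooltips and doc generators; you still review it for accuracy.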

Cursor can also help you maintain external documentation. Let’s say you have a README.md or a docs/ folder describing your project. After implementing new features or changes, you can prompt the AI to update the docs accordingly. For example: “Update the @README.md to include usage instructions for the new AuthModule”. By referencing the README in the prompt (using @README.md), Cursor knows to modify that file. Always review the changes to ensure they are accurate, but this can save a ton of effort in writing documentation from scratch. If you need examples in your documentation (say, how to use a certain API or component), you can ask Cursor to generate them. You can also mention these documents in other prompts to give the AI more context.

A powerful workflow is maintaining a “plan” or “spec” in a Markdown file and using it to drive both code and docs updates. For instance, you might write a PLAN.md that outlines a new feature (functions to create, data models, etc.). You can have Cursor implement the code according to PLAN.md and also ensure the documentation matches it. By referencing the plan in prompts, e.g., “Make sure @README.md reflects the current state of the @codebase according to @PLAN.md”, you create a loop where the AI uses the plan to update docs and code consistently. This reduces the chance of docs drifting out of date since the AI always refers to the single source of truth you provided.

Tips and Customizations

We touched on custom AI rules earlier; let’s dive a bit deeper. By creating a rule file in your project, you can set guidelines that the AI will always follow. This is like configuring the AI’s coding style and preferences. For example, in a TypeScript project, you might add rules like:

  • “Prefer async/await over promises.”
  • “Use functional components exclusively in React (no class components).”

Once these are in place, you don’t have to repeat these instructions in every prompt; the AI will adhere to them by default. This leads to more consistent code generation that matches your project’s conventions. It’s a powerful way to tailor Cursor to your team’s style. You can find community-curated examples of rules for various frameworks to get ideas.

For frameworks or any tools the AI might not be trained on, use Cursor’s @Docs feature to feed it the info. For instance, if you have an internal library, you can add its docs URL (or local markdown files) to Cursor. Then, when prompting, reference those docs. The AI will then base its answers on both its trained knowledge and your custom docs.

The underlying LLM has context length limits. If you try to include too many files or an entire huge codebase in a single prompt, the model might get overwhelmed or ignore some context. A good practice is to include only the relevant pieces. If you find the AI is getting confused or timing out, consider reducing the prompt scope. For huge refactors, do it in parts.

Ensure you are using the best model available for your needs; check Cursor’s settings for the model in use. If one model struggles (rare, but possible), you could try another. Keep in mind that different models might produce different styles of output. In my experience, Gemini 2.5 is very effective for planning and breaking down tasks, while Sonnet 3.7 is much better for code generation.

A few other notes while using Cursor:

  • If you rename files or move things around, Cursor might not immediately know the new paths (it has an index, but it could reference old filenames until it updates). If you notice weird references, try restarting Cursor or re-indexing the project.
  • Ensure the framework or library versions are consistent. If your project uses React 18 but the AI suggests a pattern from React 16, gently correct it in your prompt: “use hooks (React 18) not older classes” or update your rules to enforce patterns.
  • Be mindful not to accidentally share sensitive info. Enable Cursor’s privacy mode if needed to avoid sending code to cloud models; keep in mind that your prompts might otherwise be collected by the Cursor team for tool improvement.
  • AI is not infallible. It can produce insecure code or logical errors. Always use your expertise to review critically. Think of Cursor as an assistant that does the heavy lifting, but you are the final quality gate.

Final words

Cursor represents a new way of AI-assisted software development. The key is to collaborate with the AI: treat it as a partner that needs guidance. Provide context, set rules, and iterate with it. In return, you’ll write code faster, spend less time on routine tasks, and maybe even learn new insights from the AI along the way.

As an experienced engineer, you have the foundation to judge the AI’s output and steer it in the right direction. Cursor amplifies what you can do, but it’s your direction that ensures the work is correct and high-quality. Use it responsibly!

If you have better ways of using Cursor, please let others know in the comment section.


