AI Copilot in VS Code: Tips for Using Claude Sonnet 4 Effectively

I have a bit of a love-hate relationship with automatic coding agents like Claude Sonnet 4. On the one hand, they really do make life easier: they can generate boilerplate, wire up standard patterns, and generally take care of work that is not very interesting but still needs to be done. On the other hand - a bit like always driving with GPS and slowly losing your own sense of direction - relying too much on an AI helper can make you less sharp over time as a developer.

There is another side to this, though. Used thoughtfully, an AI coding assistant can clear away a lot of routine work and free your attention for the parts of development that actually require design and judgment. The shift from assembly to C, and later from C++ to Python, already moved us up the abstraction ladder so we could think more about what we want to build instead of every detail of how to express it. Prompting an AI is just one more step in that direction. You are still communicating with a machine, just in a more indirect and less deterministic way.

In the end, programmers are basically using languages to shape logic. Writing prompts for an AI is another language you can learn. The important questions are when it makes sense to use it, when you are better off writing the code yourself, and how to phrase requests so the output is actually useful.

After a few months of using Claude Sonnet 4 as a kind of pair programmer inside VS Code, I’ve settled into some habits that work reasonably well. Below are a few of them.

1. Use It for Boilerplate, Not for the Tricky Core

These models are very good at producing standard patterns and boilerplate. That is where I let them do most of the work. For example, I’m happy to have Claude generate a basic Node.js REST API skeleton, a simple React component, or a standard configuration file. These are cases where the problem is well known and the space of reasonable solutions is narrow.

For logic that is unusual, subtle, or central to the design, I’m much more cautious. If the task requires a careful understanding of the domain or some non‑obvious reasoning, I usually prefer to write that part myself. Models are good at sounding confident even when they are slightly off, and that’s exactly what you do not want in the core of your system.

So I treat the AI as support for the routine layers of the code, not as the author of the key ideas.

2. Let It Write Tests - and Run Them

One habit that has helped a lot is asking the AI to generate tests for any code it just wrote, and then actually running those tests right away.

It is very common for an AI to return code that looks reasonable at a glance but contains small mistakes or misses some edge cases. If you immediately run unit tests against it, you get a quick signal about what is wrong. You can then feed that back into the next prompt and let the model fix its own output.

If you are paying per request, there is an extra benefit: combining “write the code” and “write and run the tests” into a single interaction can cut down on the number of round‑trips. Fewer separate prompts mean less time waiting and less cost, and you end up with tested code instead of just plausible‑looking code.

3. Keep Files Small and Solutions Simple

Left on its own, an AI can easily overcomplicate a solution. If you keep asking for tweaks and additions in the same file, that file can gradually grow into a long, tangled piece of code that is hard to understand.

To avoid this, I set some informal limits. I usually ask Claude to keep individual files below roughly 300-350 lines of code and to avoid elaborate designs unless they are clearly needed. After the first version is generated, I often follow up with prompts like “Please refactor this to be simpler and more concise” or “Remove duplication and make the logic clearer.”

Encouraging a DRY (Don’t Repeat Yourself) mindset in your prompts helps keep things from sprawling. Smaller, more focused files are easier to reason about for humans, and they also seem to work better with the model’s context handling. In my experience, very long files cause the model to lose track of details or slow down as it tries to consider everything at once. Keeping code in modestly sized, well‑separated modules leads to quicker and more reliable responses.

4. Ask for Documentation Alongside the Code

Another useful habit is to ask the AI to document what it is doing. That can mean inline comments, a short explanation block at the top of the file, or a small usage example in a docstring or markdown section.

Good documentation makes it easier for you to understand the code later without re‑deriving everything from scratch. It also helps in future prompts: if you come back the next day and want Claude to extend or modify that code, you can paste the documented version, and the explanations give the model immediate context about the intent and assumptions.

In that sense, comments and docstrings act as a bridge between your understanding and the AI’s interpretation. They also make life easier for any human collaborators who might work with the same codebase.
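
As a rough illustration of the level of documentation I ask for (the function and its spec are invented for this example):

```javascript
/**
 * Compute delays for an exponential backoff schedule.
 *
 * Assumes the caller sleeps for each returned delay, in order, between retries.
 *
 * @param {number} attempts - number of retries to schedule
 * @param {number} baseMs - delay before the first retry, in milliseconds
 * @returns {number[]} delays that double on each attempt
 *
 * @example
 * backoffDelays(3, 100); // [100, 200, 400]
 */
function backoffDelays(attempts, baseMs) {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}
```

Pasted back into a later prompt, a JSDoc block like this tells the model (and any human reader) the intent and assumptions without re-deriving them.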

5. Keep Control Over What Gets Created

One pattern I’ve noticed is that Claude, like many AI assistants, can be a bit too eager to be helpful. When you ask it to explore an idea or debug something, it may start producing extra files: scratch scripts, notebooks, temporary helpers, or README fragments that you never explicitly requested.

Sometimes these are useful for a moment, but if you just leave them there, your project slowly fills up with half‑finished pieces. To keep things tidy, I make it explicit in my instructions that the AI should only create new files when asked, and that temporary artifacts should be removed afterwards or clearly marked as disposable.

When I notice a throwaway file has been created, I will delete it and mention in the next prompt that we should focus on the main project files. Over time, this keeps the repository cleaner and reduces the risk of someone later mistaking experimental code for production code.

6. Spend Time on Clear Prompts and Iteration

How you talk to the AI matters quite a lot. Vague requests tend to produce vague results. I try to write prompts that are specific about the task, constraints, and expectations. If the first answer isn’t close to what I had in mind, I see it as a draft and refine the request.

Often, it works better to split a complex task into stages. For example, I might first ask for an outline or high‑level plan, then for an implementation of just one part, and only later for refactoring and tests. This mirrors how you might work with a junior developer: first agree on the approach, then move into details.

Over time you get a feel for which phrases and patterns produce the most reliable output. Prompting starts to feel less like “talking to a black box” and more like giving structured instructions to a teammate who is fast but occasionally careless.

A Shared Instructions File for Projects

To make this way of working more consistent, I’ve started adding a small instructions file for Claude to each repository. For example, I might include a .github/instructions.md file that describes how I want the assistant to behave in that particular project.

This file usually encodes the principles above: focus on boilerplate and standard patterns, keep files small and simple, write and run tests, document the code, avoid creating unnecessary files, and so on. I then adjust the details depending on the language, framework, or domain of the project.

Having these rules written down gives the AI a stable reference point and also reminds me how I intend to use it. It turns the ad‑hoc tricks that evolved in one project into something reusable across many.

Example:

- Focus on implementing well-known patterns and boilerplate. Avoid tackling very niche or complex domain logic without
  explicit guidance.
- Whenever you generate code for a feature, also write corresponding unit tests and execute them to verify the code
  works as intended.
- Keep each file short (around 300 lines max) and avoid repetition. If a solution starts getting too long or complex,
  refactor and simplify it.
- Do a few passes of simplification of any final code you produce. Aim for clarity and conciseness while not
  compromising on the requested functionality.
- Write clear comments and documentation for any code you produce. This helps clarify the code's purpose for both humans
  and future AI prompts.
- Do not create additional files or artifacts unless explicitly asked. Avoid littering the project with extra scripts,
  notebooks, or READMEs that were not requested.

For new projects I also add:

- Do not make the code backwards-compatible. Make sure the rest of the code knows the interface of the new code, but do
  not worry about supporting deprecated features.
- Do not worry about supporting multiple major versions of dependencies. Use the latest stable versions.

Happy coding!

AI Copilot in VS Code: Tips for Using Claude Sonnet 4 Effectively - Roman Semko