AI Copilot in VS Code: Tips for Using Claude Sonnet 4 Effectively
I have a bit of a love-hate relationship with autonomous coding agents like Claude Sonnet 4. On one hand, these AI helpers make life easier by churning out boilerplate code and handling mundane tasks. On the other hand—much like driving everywhere with GPS and losing your sense of direction—leaning too heavily on an AI can make you a bit "dumber" as a developer in the long run.
That said, there’s a flip side: when used wisely, an AI coding assistant can free up your mental bandwidth for more complex, high-level problem solving. It’s similar to how using C++ instead of assembly (and later, Python instead of C++) lets you focus on what you want to accomplish rather than how to do it in painstaking detail. At the end of the day, programmers are linguists of logic. Writing prompts for an AI is just another form of communication with the computer—higher-level and less deterministic than writing code, but still a skill that can be honed. The key is knowing how to craft good prompts, when to leverage the AI’s help, and when you’re better off writing the code manually.
After several months of using Claude Sonnet 4 as my AI pair programmer in VS Code, I’ve picked up a number of best practices. Below are a few tips I now follow to get the most out of this tool while avoiding its pitfalls:
1. Use AI for Boilerplate, Not Complex Logic
AI excels at generating common patterns and boilerplate code, so stick to using it for the mundane stuff. For example, I’ll happily let it spin up a Node.js REST API skeleton or a boilerplate React button component. These are well-known, generic tasks where the AI is likely to produce correct and efficient code. However, I avoid delegating “exotic” or highly complex logic to the AI. If your problem is unusual or requires careful, nuanced reasoning, the model might struggle or produce something incorrect or nonsensical. In those cases, you're better off writing the critical code yourself. Use the AI as a booster for the easy parts of coding, not the core ingenious parts of your solution.
2. Have It Write and Run Tests
One of the most effective habits I’ve adopted is to ask the AI to generate tests for any code it writes, and even execute those tests. Often, Claude will produce code that looks plausible but hides logical errors or edge-case bugs. By immediately running unit tests against the AI’s code, you force the model to double-check its work. In many cases, the tests will fail on the first try—prompting Claude to spot the mistake and fix it in the next iteration. This tight feedback loop greatly improves accuracy. Plus, if you’re using a paid model where each request costs money (say, around $0.04 per call), combining code generation and testing in a single request can halve the number of requests you need. Fewer back-and-forth fixes mean saved time and lower API costs.
3. Limit File Size and Keep Code Simple
AI models like Claude have a tendency to over-engineer solutions if you let them run wild. The more you prompt for changes or additions to a single file, the more bloated and convoluted that file can become over time. I’ve learned to set some boundaries up front. Specifically, I instruct the AI to keep files under about 300–350 lines of code and to avoid overly complex designs. Once the initial code is generated, I often ask for a couple of simplification passes (usually added by default in my instructions file below): “Now refactor this to be cleaner and more concise,” or “Simplify the logic and remove any duplication.” Encouraging a DRY (Don’t Repeat Yourself) philosophy in the prompts goes a long way. Shorter files with focused responsibilities not only make for cleaner architecture, they also play better with Claude’s limitations. In my experience, Claude struggles with very long files or sprawling contexts—it can lose track of details and slow down trying to analyze them. Keeping each piece of code bite-sized and well-separated leads to faster responses and fewer mistakes.
4. Ask for Documentation in the Code
Another tip is to have the AI document the code it generates. This means asking for meaningful comments or even a short usage example in a docstring or markdown section. Well-documented code is easier for you to understand later, and interestingly it also helps the AI itself in subsequent sessions. For instance, if you come back the next day and ask Claude to extend a piece of code, having those comments means you can paste the code (with comments) into the prompt and the AI will immediately grasp the context and intent. Essentially, you’re giving it a breadcrumb trail to follow. Documentation acts as a bridge between your understanding and the AI’s, ensuring continuity over multiple coding sessions. Plus, as a bonus, you end up with code that any human teammate can read and understand more easily.
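Here is a sketch of the level of documentation I ask for: a short JSDoc header with a usage example, so both a human teammate and a future AI session can pick up the intent at a glance. The function and the module it mentions are hypothetical illustrations.

```javascript
/**
 * Clamp a number to the inclusive range [min, max].
 *
 * Used by the (hypothetical) chart module to keep axis values in bounds.
 *
 * @param {number} value - The input to constrain.
 * @param {number} min - Lower bound of the range.
 * @param {number} max - Upper bound of the range.
 * @returns {number} The clamped value.
 *
 * @example
 *   clamp(15, 0, 10); // -> 10
 */
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}
```

Pasting this back into a later prompt, comments and all, gives the model the context it needs without me re-explaining what the code is for.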
5. Make the AI Clean Up After Itself
One of the quirks I noticed with Claude (and AI coders in general) is that it can be too eager to please—sometimes going so far as to create extra files or outputs that you didn’t ask for. For example, while debugging an issue or following an instruction, it might spontaneously generate a new helper script, open a notebook with some test code, or drop a README file with notes. In theory it’s trying to be helpful, but in practice this can bloat your project with junk. My solution: explicitly tell the AI not to create unnecessary files, or if it must, to clean them up afterward. If I notice it made a throwaway script to illustrate a point, I’ll delete it and remind Claude that we should stick to the files relevant to the project. By reining in this behavior, you keep your workspace tidy and maintain control over what's being added to your codebase.
6. Craft Clear Prompts and Iterate
Using an AI coding assistant effectively is as much about how you ask as what you ask. I always try to write clear, specific prompts that outline exactly what I need, including any constraints or expectations. If the first answer isn’t what I wanted, that’s okay—treat it as a draft. I’ll refine the prompt or give the AI feedback (“That isn't quite right, please try a different approach for X part”) and run it again. Often, breaking a complex task into smaller, sequential prompts gets better results than one giant request. For instance, I might first ask for an outline or plan for the solution before having the AI write any code. Remember, this is an iterative collaboration, not a one-shot delegation. By guiding Claude step by step and honing your queries, you can coax out much better solutions. Over time, you also learn the phrasing that works best, almost like learning to talk to a teammate or a junior developer. Prompting is a skill, and investing in that skill pays off in the quality of the AI’s output.
Base AI Instructions for Projects
I’ve found it helpful to codify the above principles into a project-specific instructions file for Claude. In each of my repositories, I include a file (for example, .github/instructions.md) that contains guidelines for the AI assistant. This file serves as a baseline that I tweak per project type or context. By having these rules written down, I ensure consistency in how I use the AI across projects. Here’s an example excerpt from such an instructions file, reflecting the tips discussed:
- Focus on implementing well-known patterns and boilerplate. Avoid tackling very niche or complex domain logic without
explicit guidance.
- Whenever you generate code for a feature, also write corresponding unit tests and execute them to verify the code
works as intended.
- Keep each file short (around 300 lines max) and avoid repetition. If a solution starts getting too long or complex,
refactor and simplify it.
- Do a few passes of simplification of any final code you produce. Aim for clarity and conciseness while not
compromising on the requested functionality.
- Write clear comments and documentation for any code you produce. This helps clarify the code's purpose for both humans
and future AI prompts.
- Do not create additional files or artifacts unless explicitly asked. Avoid littering the project with extra scripts,
notebooks, or READMEs that were not requested.
For new projects I also add:
- Do not make the code backwards-compatible. Make sure the rest of the code knows the interface of the new code, but do not worry about supporting deprecated features.
- Do not worry about supporting different major versions of dependencies. Use the latest stable versions.
Happy coding!