
Vibe Coding Tips

I. Setting the Stage: Foundations & Strategy

Before you even think about prompting an AI for code, laying the right groundwork is crucial.

Tip 1: Use Widely Documented Tech Stacks

  • The Gist: Stick to popular, well-documented technologies (e.g., React, FastAPI, Supabase, Firebase, Stripe).
  • Why: LLMs are trained on vast amounts of public code and documentation. Popular stacks mean more training data, leading to more accurate code generation and fewer "hallucinations" (incorrect or nonsensical outputs).
  • Action: When starting a new project, research and select technologies known for their extensive community support and comprehensive documentation.

Tip 2: Plan Extensively Ahead

  • The Gist: Dedicate a significant portion of your time (aim for 60-70%) to planning before writing code.
  • Why: LLMs excel at narrowly defined tasks. They are not (yet) great at architecting complex systems and coding them simultaneously. A solid plan provides the necessary structure.
  • Action: Your plan should include the following (a brief sketch of the last two items follows this list):
    • Clear requirements
    • System architecture
    • Chosen tech stack
    • User stories
    • Database schemas
    • API designs (if applicable)
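
For example, the last two items might be captured up front in a form the AI can be handed directly. This is a minimal sketch for a hypothetical task-tracking app; every name in it is a placeholder, not part of any real project.

```typescript
// Hypothetical planning sketch for a task-tracking app (placeholder names).

// Database schema, captured as types the AI can be pointed at later:
interface User {
  id: string;        // UUID primary key
  email: string;     // unique
  createdAt: string; // ISO 8601 timestamp
}

interface Task {
  id: string;
  ownerId: string;   // foreign key -> User.id
  title: string;
  done: boolean;
  dueDate?: string;  // optional ISO 8601 timestamp
}

// API design, captured as endpoint signatures:
// GET    /tasks     -> Task[]       (list the current user's tasks)
// POST   /tasks     -> Task         (create a task from a CreateTaskBody)
// PATCH  /tasks/:id -> Task         (partial update)
// DELETE /tasks/:id -> { ok: true } (delete)
type CreateTaskBody = Pick<Task, "title" | "dueDate">;
```

Passing a sketch like this along with each prompt keeps the generated endpoints and queries consistent with the plan.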

Tip 18: MVP First, Then Enhance

  • The Gist: Focus on building a Minimum Viable Product (MVP) – the simplest version of your application that still delivers core value – before adding complex features.
  • Why: This approach validates your core idea quickly and prevents wasting effort on features users might not need or want. It keeps the initial scope manageable for AI-assisted development.
  • Action: Define the absolute essential features that deliver core value. Get that working, test it, and then iterate with enhancements and new features based on feedback and your roadmap.

II. Mastering AI Interaction & Prompting

How you communicate with your AI coding assistant dramatically impacts the quality of its output.

Tip 3: Use Context Wisely

  • The Gist: Be deliberate about the context you provide to the AI.
  • Why: While some IDEs auto-select context, you often have a better understanding of what's relevant. Providing too much or irrelevant information can overwhelm the LLM's context window, leading to poor or unfocused responses.
  • Action: Manually select only the relevant files, code snippets, or documentation sections needed for the current task.

Tip 6: Ask for Multiple Perspectives

  • The Gist: If the AI struggles or its initial solution isn't ideal, prompt it to brainstorm multiple different approaches.
  • Why: This forces the AI to think more broadly and can uncover more creative or suitable solutions you hadn't considered.
  • Action: Ask: "Can you suggest three different ways to solve this? Explain the pros and cons of each, and rank them by likelihood of success for my specific use case."

Tip 8: Strategic Model Selection

  • The Gist: Different LLMs (e.g., Claude 3.x Sonnet/Opus, Google's Gemini, various open-source models) excel at different tasks.
  • Why: Some models are better for creative brainstorming, others for logical code generation, and others for concise explanations or debugging.
  • Action: Be mindful of the task at hand. Don't hesitate to switch models within your AI tool if one seems better suited (e.g., Opus for planning, Sonnet for quick coding, a specialized model for debugging).

Tip 9 & 19: Implement Project Rules

  • The Gist: Define project-specific rules and guidelines and consistently pass them as context to the LLM.
  • Why: This ensures consistency in coding style, tech stack usage, and architectural patterns. It also helps prevent the AI from injecting unwanted libraries or deviating from your project's standards.
  • Action: Create a "PROJECT_RULES.md" file (or use features like Cursor.directory) and include it as part of your context for relevant prompts; a brief sketch follows below. It should outline:
    • Frontend/backend conventions
    • Styling preferences
    • Approved libraries and frameworks (and those to avoid)
    • Security guidelines
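
A minimal sketch of what such a file might contain. Every concrete choice below (React, Tailwind, FastAPI, and so on) is a placeholder to adapt to your own stack, not a recommendation.

```markdown
# PROJECT_RULES.md (placeholder example)

## Stack & conventions
- Frontend: React + TypeScript, functional components and hooks only.
- Backend: FastAPI; every endpoint uses typed request/response models.

## Styling
- Tailwind CSS only; no inline styles, no additional CSS frameworks.

## Libraries
- Approved: react-query, zod, stripe.
- Do not add any new dependency without asking first.

## Security
- Never log secrets or tokens.
- Validate all user input on the server; parameterize every database query.
```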

Tip 10: Detailed Task Planning

  • The Gist: Break down your high-level project plan (from Tip 2) into granular, step-by-step tasks.
  • Why: Smaller, well-defined tasks are much easier for AI to handle effectively. This makes the overall project more manageable.
  • Action: For each feature or module, list out the individual coding, configuration, and testing steps. Tools like "Claude Task Master" (open-source) can assist in automating this breakdown.
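
For a sense of the granularity that tends to work well, here is a hypothetical breakdown of a single "password reset" feature; the steps are illustrative, not a prescription.

```markdown
Feature: password reset (hypothetical example)
1. Add a password_reset_tokens table (token, user_id, expires_at).
2. Implement POST /auth/forgot-password: create a token, email the reset link.
3. Implement POST /auth/reset-password: validate the token, update the password hash.
4. Build the "request reset" and "set new password" screens.
5. Write tests: expired token, reused token, unknown email.
6. Review, commit, and update the task list.
```

Each numbered step is small enough to hand to the AI as a single focused prompt.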

Tip 11: Design-First Approach

  • The Gist: Design your UI screens and their various states before you start coding features.
  • Why: A clear visual and experiential target makes it easier for the AI (and you) to build towards a specific outcome. It reduces ambiguity.
  • Action: Mock up or wireframe each of the following (a brief code sketch of these states follows the list):
    • Blank states
    • Error states
    • Success states
    • Loading states
    • User interactions and animations
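
Once the states are sketched, they often translate directly into code. Here is a minimal TypeScript sketch of modeling them explicitly; the names and messages are hypothetical.

```typescript
// Hypothetical: enumerating every screen state up front so none is forgotten.
type ScreenState<T> =
  | { kind: "blank" }                   // nothing requested yet
  | { kind: "loading" }                 // request in flight
  | { kind: "error"; message: string }  // request failed
  | { kind: "success"; data: T };       // request succeeded

function render(state: ScreenState<string[]>): string {
  switch (state.kind) {
    case "blank":   return "Search for something to get started.";
    case "loading": return "Loading...";
    case "error":   return `Something went wrong: ${state.message}`;
    case "success": return state.data.join(", ");
  }
}
```

Sharing a state model like this alongside the mockups makes "handle the error and loading states" an unambiguous request.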

Tip 13: Create Custom Modes

  • The Gist: Leverage features in AI IDEs (like RooCode's custom modes) to create specialized "personas" or instruction sets for recurring tasks.
  • Why: This streamlines your workflow by pre-loading specific instructions, contexts, or roles for the AI, saving you from repetitive prompting.
  • Action: Identify common tasks (e.g., "debugging JavaScript," "refactoring Python," "writing React components," "architecting a new feature") and create custom modes with tailored pre-prompts or system messages.
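
The configuration format differs from tool to tool, but the core of a custom mode is its pre-prompt. A hypothetical "Refactorer" mode might carry a system message like this:

```text
You are the Refactorer. You only restructure existing code; you never add
features or new dependencies. Preserve behavior exactly, keep each change
small and reviewable, follow PROJECT_RULES.md, and explain every change in
one sentence.
```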

Tip 14: Configure Up-to-Date Documentation

  • The Gist: Ensure your AI tool has access to the latest documentation for your chosen libraries and frameworks.
  • Why: LLM training data can be months or even years out of date. Current docs are crucial for accurate code and avoiding deprecated features.
  • Action:
    • Use IDE features that allow direct indexing of documentation URLs (e.g., Cursor, Windsurf).
    • Manually scrape docs into a project folder and add them as context.
    • Use MCP (Model Context Protocol) servers like Context7 to pull in docs.
    • Regularly re-index or update these documentation sources.

Tip 21: Create New Conversations As Needed

  • The Gist: Start a fresh AI conversation if the previous context is no longer highly relevant to your new task.
  • Why: Many AI IDEs send the entire conversation history as context. Long, winding conversations can become bloated, unfocused, and consume valuable context window space, leading to less effective AI responses.
  • Action: When switching to a significantly different feature or problem, consider starting a new chat/conversation to give the AI a clean slate focused on the current task.

III. Code Management & Quality Assurance

AI generates code, but you own it. Maintaining quality and control is paramount.

Tip 4: Commit Between Conversations

  • The Gist: Regularly commit your code to version control (like Git), especially between distinct AI "conversations" or when a logical chunk of work is complete.
  • Why: This creates a safety net. If an AI suggestion breaks things or you go down a wrong path, you can easily revert to a known good state.
  • Action: git add ., git commit -m "feat: implement user login via AI assist (session XYZ)", git push.

Tip 5: Commit Solutions to Memory

  • The Gist: Utilize your AI tool's memory features (e.g., MCP servers, built-in memories).
  • Why: When you solve a significant problem or establish a preferred pattern (e.g., for styling, API error handling, or a specific debugging approach), saving it to the AI's memory helps it apply this solution consistently in the future, preventing redundant work and prompts.
  • Action: After a successful resolution or establishing a pattern with the AI, explicitly tell it: "Remember this approach for handling X," or use the tool's specific command to save the relevant part of the conversation or code.

Tip 7: Evaluate Outputs Against a Standard

  • The Gist: Don't blindly accept AI-generated code. Provide the AI with your standards and ask it to evaluate its own output.
  • Why: This reinforces quality and helps the AI learn your preferences. It also acts as a first-pass quality check.
  • Action:
    • Feed the AI your style guides, security best practices, aesthetic preferences, or performance requirements.
    • Prompt: "Review the code you just generated. Does it adhere to [our company's React style guide / OWASP security principles / these performance benchmarks]? If not, please revise it."
    • Consider using other AI agents (e.g., Exa AI, Perplexity) to research and critique specific code snippets or approaches.

Tip 17: Understand AI-Generated Code Before Accepting

  • The Gist: Always read and strive to understand the code generated by the AI.
  • Why: This is crucial for learning, catching subtle errors the AI might miss, debugging effectively, and maintaining ultimate control and ownership of your project.
  • Action: If any part of the code is unclear, ask the AI: "Can you explain this function line by line?" or "Why did you choose this approach instead of X?". Don't merge code you don't understand.

Tip 20: Regular Security Checks

  • The Gist: Periodically run security checks on your codebase, especially for critical components.
  • Why: Even with project rules, AI might introduce vulnerabilities. This is particularly important if your project rules for security aren't yet fully comprehensive.
  • Action:
    • Focus on areas like authentication, authorization, input validation, and data handling.
    • Use static analysis security testing (SAST) tools or prompt the AI specifically: "Review this authentication module for common security vulnerabilities like XSS, SQL injection, or insecure direct object references." (A sketch of the kind of flaw to look for follows this list.)
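
As one concrete example of what such a review should catch, compare string-built SQL with a parameterized query. This is a generic sketch; the db.query call stands in for whichever database client you actually use.

```typescript
// Hypothetical database client with a query(text, params) method,
// as many clients provide.
declare const db: { query(text: string, params?: unknown[]): Promise<unknown> };

// Vulnerable: user input is concatenated straight into the SQL string,
// so a crafted email value can inject arbitrary SQL.
async function findUserUnsafe(email: string) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Safer: the input travels as a bound parameter, never as SQL text.
async function findUser(email: string) {
  return db.query("SELECT * FROM users WHERE email = $1", [email]);
}
```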

IV. Debugging & Iteration with AI

AI can be a powerful debugging partner if you guide it correctly.

Tip 15: Add Early Returns & Logging When Debugging

  • The Gist: When debugging, explicitly ask the AI to insert early return statements and robust console.log() (or equivalent) statements.
  • Why: LLMs can often identify issues more effectively when provided with the output of these logs and the state of variables at different points in the execution flow. Early returns help isolate the problematic code section.
  • Action: Prompt: "Help me debug this function. Insert console logs for these variables [var1, var2] at these points [before X, after Y]. Also, add an early return if [condition] is met to check its value." Then, provide the log output back to the AI.
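
A minimal sketch of what that instrumentation looks like in practice; the discount logic and variable names are hypothetical.

```typescript
// Hypothetical function under investigation: why is the total sometimes wrong?
function applyDiscount(total: number, discountPercent: number): number {
  console.log("[applyDiscount] inputs:", { total, discountPercent });

  // Early return added while debugging, to isolate the suspicious input case.
  if (discountPercent < 0 || discountPercent > 100) {
    console.log("[applyDiscount] early return: discountPercent out of range");
    return total;
  }

  const discounted = total * (1 - discountPercent / 100);
  console.log("[applyDiscount] result:", discounted);
  return discounted;
}
```

Paste the resulting log output back into the chat; the concrete values usually let the AI pinpoint the faulty assumption faster than the code alone.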

Tip 16: Use Checkpoint Restores

  • The Gist: Take advantage of "checkpoint restore" features within AI chat interfaces if available.
  • Why: If a series of prompts leads your code or the AI's understanding into a broken or undesirable state, you can easily revert the conversation (and sometimes associated code changes) to an earlier, working checkpoint.
  • Action: Look for options like "restore from here" or "revert to this point" in your AI chat tool. Use it when a debugging path proves fruitless or an AI suggestion derails progress.

V. Choosing Your Tools

The right tools can make or break your Vibe Coding experience.

Tip 12: Pick the Right IDE for You

  • The Gist: Choose an AI-powered IDE (e.g., Cursor, Windsurf, RooCode, Codium) that fits your specific needs.
  • Why: Different IDEs offer varying features, levels of AI integration, model access, and pricing. What works for one person might not be optimal for another.
  • Action: Don't just follow trends; try a few if possible, and consider your:
    • Experience level
    • Preferred workflow
    • Budget
    • Specific features needed (e.g., direct doc indexing, custom modes, specific model access)