# Working with Claude Code: Capabilities, Limitations, and Best Practices

## Overview
This guide explains how to work effectively with Claude Code (and LLM-based coding assistants in general). Understanding what I can and can't do, and how to ask for things, will help you get better results faster.
## Core Capabilities

### What I'm Good At

#### 1. Reactive Problem Solving
When you give me a specific task or problem, I can:
- Implement features end-to-end
- Debug errors and fix issues
- Refactor code with clear objectives
- Migrate between technologies/patterns
- Generate boilerplate and repetitive code
Example:

```text
"Add CSV export to the [reports] page: a download button, server-side file generation, and a test for the happy path."
```

#### 2. Pattern Recognition
I can identify:
- Common anti-patterns and code smells
- Inconsistencies in configuration
- Security vulnerabilities (SQL injection, XSS, hardcoded secrets)
- Duplicate code or dependencies
- Deprecated API usage
Example:

```text
"Scan [package] for string-concatenated SQL and hardcoded secrets, and list anything suspicious."
```

#### 3. Domain Expertise
I have deep knowledge of:
- Modern web frameworks (Next.js, React, Node.js)
- TypeScript and JavaScript ecosystems
- Database patterns and ORMs
- Authentication and security
- Monorepo tooling (Turborepo, pnpm workspaces)
- CSS frameworks (Tailwind, shadcn)
- Testing frameworks
- DevOps and deployment patterns
Example:

```text
"What's the idiomatic way to share a Tailwind config across apps in a pnpm workspace?"
```

#### 4. Systematic Review
When pointed in a direction, I can:
- Audit codebases for specific issues
- Check compliance with style guides
- Verify architectural patterns
- Generate documentation
- Create test cases
Example:

```text
"Go through [directory] and flag any component that doesn't follow our naming conventions."
```

#### 5. Contextual Understanding
I can:
- Understand project structure from files I've read
- Remember conversations within a session
- Reference previous work from summaries
- Understand relationships between components
- Follow coding patterns established in the codebase
Example:

```text
"Add the new endpoint the same way the existing routes are structured; follow whatever pattern that folder already uses."
```

## Limitations

### What's Hard for Me

#### 1. Proactive Discovery
I won't automatically:
- Scan entire codebases looking for issues
- Notice problems outside the current task context
- Remember to check related areas without prompting
- Suggest improvements unless asked or in context
Why: it would be token-expensive and potentially annoying (unsolicited advice).

Workaround:

```text
"Now that the task is done: did you notice anything else nearby worth fixing?"
```

#### 2. Broad Awareness Without Context
I need to read files to understand them:
- Can't "just know" what's in your codebase
- Need to load context before analyzing
- Limited by conversation history/token limits
Workaround:

```text
"Before we start, read [files]; that's where this feature lives."
```

#### 3. Judgment Calls on Intent
Sometimes "bad practice" is contextual:
- Is this a quick prototype or production code?
- Is this technical debt you're aware of?
- Is this an intentional pattern or a mistake?
Workaround:

```text
"Context: this is a throwaway prototype, so favor speed over polish and skip the tests."
```

#### 4. Long-term Memory
Between sessions:
- I get a summary, not full conversation history
- Important details might be lost or compressed
- Context needs to be rebuilt
Workaround:

```text
"Re-read .claude/project-structure.md before we start; it records the decisions from previous sessions."
```

#### 5. Non-deterministic Behavior
LLMs are probabilistic:
- Different sessions might take different approaches
- Code style might vary slightly
- Need explicit style guides for consistency
Workaround:

```text
"Follow the conventions in .claude/coding-standards.md for all new code."
```

## How to Ask for Things

### Being Explicit vs. Implicit

#### ❌ Too Vague

```text
"Clean up the auth code."
```

#### ✅ Specific

```text
"In the auth package, merge the three duplicated token-validation helpers into one shared function without changing the error messages."
```

#### ✅ Contextual

```text
"We're preparing the auth package for an external security audit. Merge the duplicated token-validation helpers, and flag anything else you touch that looks risky."
```

### Task Types and How to Frame Them
#### Implementation Tasks

```text
"Implement [feature] in [location], following the pattern in [existing file]. Done means: [success criteria]."
```

#### Debugging Tasks

```text
"[Command] fails with the error below. Expected: [behavior]. Stack trace: [paste]."
```

#### Review Tasks

```text
"Review [file or package] for [specific concern: security, performance, consistency]."
```

#### Refactoring Tasks

```text
"Refactor [code] from [current state] to [target state] without changing behavior; tests must stay green."
```

#### Documentation Tasks

```text
"Document how [system] works, aimed at [audience] with [level of detail]."
```

#### Learning Tasks

```text
"Explain how [code or concept] works and why it's done this way."
```

## Strategies for Better Results
### 1. Progressive Refinement

Start broad, then iterate:

```text
"First: outline an approach for [feature]. Then: implement step 1. Then: refine based on my feedback."
```

### 2. Reference Existing Patterns
Point to good examples:

```text
"Make the new [settings] page behave like the existing [profile] page: same layout, same loading states."
```

### 3. Provide Context
Help me understand the bigger picture:

```text
"This endpoint will be public and unauthenticated, so treat every input as hostile."
```

### 4. Use Checklists
Create review checklists I can reference:

```markdown
- [ ] No hardcoded secrets
- [ ] Errors are handled and logged
- [ ] New code follows existing naming conventions
```

Then ask: "Complete this task and run through the review checklist."

### 5. Batch Related Tasks
Group related work together:

```text
"While you're in the auth package: fix the lint warnings, add the missing return types, and update its README."
```

### 6. Ask for Explanations
Don't just accept code; understand it:

```text
"Before I accept this: why a transaction here, and what happens if the second write fails?"
```

### 7. Iterative Review
For complex changes:

```text
"Show me the plan first. After each step, pause so I can review the diff before you continue."
```

## Proactive Suggestions: How to Enable Them
Since I'm better at reactive analysis than proactive discovery, here are strategies to get proactive suggestions:
### Option A: Post-Task Reviews

After completing a task, ask:

```text
"Now that this is done, what related issues did you notice? Anything nearby you'd improve?"
```

### Option B: Review Checklists
Create `.claude/review-checklist.md`:

```markdown
# Review Checklist

- [ ] Inputs validated at trust boundaries
- [ ] No secrets committed to source
- [ ] Existing error-handling patterns followed
```

Then I can reference this during tasks.

### Option C: Explicit Review Requests
Periodically ask for systematic reviews:

```text
"Do a systematic pass over [package] for [concern], and report findings before changing anything."
```

### Option D: Pattern Documentation
Document your preferred patterns in `.claude/`:

```markdown
# Preferred Patterns

- Data fetching: server-side, never in client effects
- Forms: shared form components with schema validation
```

Then I can flag deviations automatically.

## What I Need to Work Effectively
### 1. File Access

I need to read files to understand them. If I'm making wrong assumptions, point me to the right files:

```text
"Before changing anything, read [file]; that's where this logic actually lives."
```

### 2. Clear Objectives
Tell me what success looks like:

```text
"Done means: users can reset their password via email, and the happy path has a test."
```

### 3. Error Messages
If something's broken, give me the error:

```text
"Here's the full stack trace and the command that produced it: [paste]."
```

### 4. Constraints
Tell me about limitations:

```text
"No new dependencies, and this must keep working on Node 18."
```

### 5. Preferences
Share your coding preferences:

```text
"Prefer early returns over nested conditionals, and avoid classes where a plain function will do."
```

## Common Scenarios
### Scenario 1: "Something's Broken"

#### ❌ Less Effective

```text
"The app is broken, please fix it."
```

#### ✅ More Effective

```text
"The build fails after I upgraded [dependency]. It worked yesterday. Here's the full error: [paste]."
```

### Scenario 2: "Add a Feature"
#### ❌ Less Effective

```text
"Add search."
```

#### ✅ More Effective

```text
"Add client-side search to the [list] page: filter by [fields], debounce the input, and match the styling of the existing filter controls."
```

### Scenario 3: "Review My Code"
#### ❌ Less Effective

```text
"Review my code."
```

#### ✅ More Effective

```text
"Review [file] for [specific concern], and flag anything that deviates from .claude/coding-standards.md."
```

### Scenario 4: "Refactor Something"
#### ❌ Less Effective

```text
"This code is messy, clean it up."
```

#### ✅ More Effective

```text
"Extract the duplicated [logic] in [files] into a shared helper in [package]. Behavior must not change; keep the tests passing."
```

### Scenario 5: "Explain Something"
#### ❌ Less Effective

```text
"How does this work?"
```

#### ✅ More Effective

```text
"Walk me through how [feature] works end-to-end, starting from [entry point]. I mainly want to understand [specific question]."
```

## Working with Monorepos
Special considerations for monorepo projects like yours:
### 1. Scope Clarification

```text
"This change should only touch [app]; don't modify the shared packages."
```

### 2. Cross-Package Changes

```text
"Rename [export] in [package] and update every consumer across the workspace."
```

### 3. Dependency Management

```text
"Check whether [dependency] is pinned to the same version in every package.json."
```

### 4. Consistency Checks

```text
"Make sure the new app uses the same tsconfig and lint setup as the existing apps."
```

## Session Management
### Starting a Session

Give me context if needed:

```text
"We're continuing the [feature] work. Relevant files: [list]. The remaining step is [task]."
```

### During a Session

I maintain context, but you can help:

```text
"Remember we decided to [decision]; keep that in mind for this next change."
```

### Ending a Session

If you want to document decisions:

```text
"Summarize what we changed and why, and update .claude/project-structure.md to match."
```

## Documentation Strategies
### What to Document

#### Project Structure

Keep `.claude/project-structure.md` updated with:
- App purposes and relationships
- Package responsibilities
- Key files and their roles
- Recent major changes
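As a sketch, such a file might look like the following (all app and package names are illustrative, not taken from your repo):

```markdown
# Project Structure

## Apps
- `apps/web`: customer-facing app
- `apps/admin`: internal dashboard

## Packages
- `packages/ui`: shared components
- `packages/auth`: session and token handling

## Recent Changes
- Extracted shared components into `packages/ui`
```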
#### Coding Standards

Create `.claude/coding-standards.md`:
- Import patterns
- Component patterns
- Error handling approaches
- Naming conventions
- Testing requirements
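For example, a minimal standards file might start like this (contents illustrative; adjust to your own conventions):

```markdown
# Coding Standards

- Imports: absolute paths, no deep relative chains
- Components: function components with typed props
- Errors: never swallow exceptions silently; log with context
- Naming: camelCase for functions, PascalCase for components
- Tests: every exported function gets at least one unit test
```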
#### Architecture Decisions

Document in `fs/storage/docs/`:
- Why certain patterns were chosen
- Trade-offs that were considered
- Security considerations
- Integration points
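A lightweight architecture decision record covers all four points above; an illustrative sketch (the decision itself is a made-up example):

```markdown
# ADR: Session Storage

- Decision: store sessions server-side rather than in stateless tokens
- Why: immediate revocation matters more than statelessness here
- Trade-off: one extra lookup per request, mitigated by caching
- Security: session IDs are opaque and rotated on refresh
```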
### When to Update Documentation
Ask me to update docs:
- After major refactoring
- When adding new patterns
- After architectural decisions
- When onboarding new features
## Advanced Techniques

### 1. Multi-Step Planning

For complex tasks, ask for a plan first:

```text
"Before writing any code, outline the steps to implement [feature]. I'll approve the plan, then we'll execute it step by step."
```

### 2. Parallel Work
I can handle multiple independent tasks:

```text
"Three independent tasks: (1) fix the lint warnings in [package], (2) add tests for [module], (3) update the README."
```

### 3. Incremental Validation
For risky changes, validate incrementally:

```text
"Change one file at a time and run the affected tests before moving on to the next."
```

### 4. Template Creation
Create reusable patterns:

```text
"Turn this page into a template for the other CRUD screens, and note which parts to customize."
```

### 5. Debugging Assistance
Use me as a debugging partner:

```text
"Here's what I've tried so far: [steps]. What would you check next?"
```

## Red Flags to Watch For
### When I Might Be Wrong
I can make mistakes. Watch for:
- **Over-Simplification**
  - "Just do X" without considering edge cases
  - Missing error handling
  - Ignoring security implications
- **Cargo Culting**
  - Copying patterns without understanding context
  - Using outdated approaches
  - Over-engineering simple problems
- **Assumption Errors**
  - Assuming file structure without reading
  - Guessing at API contracts
  - Making up variable names
### How to Catch Issues

```text
"Before we ship this: what could go wrong? Which edge cases does it not handle?"
```

### When to Push Back

```text
"That seems more complex than the problem needs. Is there a simpler approach?"
```

## Tools and Workflows

### File Operations
I can:
- ✅ Read files
- ✅ Write files
- ✅ Edit files (find/replace)
- ✅ Search for patterns (grep)
- ✅ Find files (glob)
I cannot:
- ❌ Watch files for changes
- ❌ Run long-lived processes (servers, watchers)
- ❌ Access files without permission
- ❌ Modify files outside the workspace
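Combined with the search tools above, requests can be precise about where to look; an illustrative prompt (the helper name is a placeholder):

```text
"Grep the workspace for uses of [deprecated helper] and list every call site with its file path before changing anything."
```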
### Command Execution

I can run bash commands:

- ✅ Build commands (`npm run build`)
- ✅ Test commands (`npm test`)
- ✅ Git operations (`git status`, `git diff`)
- ✅ Package management (`pnpm install`)
I should not:
- ❌ Run destructive commands without confirmation
- ❌ Commit/push without asking
- ❌ Modify git config
- ❌ Run commands that require interactive input
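You can also make the confirmation rule explicit up front (illustrative prompt):

```text
"Show me any git push, rm, or database migration command before running it, and wait for my OK."
```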
### Best Practices

```text
"Run the affected tests after each change, and show me the diff before committing anything."
```

## Examples of Effective Collaboration

### Example 1: Feature Implementation
```text
"Add password reset to the auth package, using the email-verification flow as a template. Done means: request endpoint, token validation, and tests."
```

### Example 2: Debugging Session

```text
"Tests fail in [package] since this morning's merge, but nothing in that package changed. Here's the output: [paste]. I suspect a shared dependency."
```

### Example 3: Refactoring Project

```text
"Plan first, then execute: move the date-formatting helpers into the shared utils package, update imports across the workspace, and keep tests green after each step."
```

### Example 4: Learning Session

```text
"Walk me through how session handling works, from login to token refresh. Assume I know the framework but not our auth code."
```

## Quick Reference

### Ask Me To...
- ✅ Implement: "Add feature X using pattern Y"
- ✅ Debug: "Fix error Z, here's the stack trace"
- ✅ Review: "Check file X for security issues"
- ✅ Refactor: "Move code from A to B"
- ✅ Document: "Explain how X works"
- ✅ Analyze: "Find all uses of pattern X"
- ✅ Validate: "Check if approach X will work for case Y"
- ✅ Plan: "Outline steps to implement feature X"
### Don't Expect Me To...

- ❌ Automatically know what's in files I haven't read
- ❌ Remember specific details across session boundaries
- ❌ Proactively scan codebases without being asked
- ❌ Make judgment calls without context
- ❌ Know your unstated preferences
- ❌ Run long-lived processes
- ❌ Make destructive changes without confirmation
### For Best Results...
- Be specific about what you want
- Provide context about why you want it
- Reference examples of good patterns to follow
- Share constraints that limit the solution space
- Ask for explanations when you don't understand
- Iterate rather than expecting perfection on the first try
- Document patterns you want me to follow
## Summary
I'm most effective when:
- Given clear, specific tasks
- Pointed toward relevant context
- Asked to follow established patterns
- Given the "why" along with the "what"
- Encouraged to explain my reasoning
I'm least effective when:
- Expected to proactively discover issues
- Working with vague requirements
- Missing critical context
- Making judgment calls without guidance
- Operating without clear success criteria
Think of me as:
- A highly capable but literal pair programmer
- Great at execution, pattern matching, and systematic analysis
- Needs direction but can run with it once pointed
- Better at depth than breadth
- Effective when you're explicit about intent
The more you help me understand your goals, patterns, and constraints, the better I can help you build. Don't hesitate to course-correct, ask questions, or challenge my suggestions. We're collaborating, not just executing commands.
## Additional Resources
- Project Structure - Current monorepo layout
- Theming Guide - How to work with design tokens
- Your coding standards (create `.claude/coding-standards.md`)
- Your review checklist (create `.claude/review-checklist.md`)
Questions or feedback about working together? Just ask!