New conversation for every task. Prevents context pollution.
Plan mode FIRST. Whether using Cline or Claude Code, always review the approach before AI writes code. For fixes and debugging, this is critical.
Minimal prompting. “Can we plan task 2.3?” is enough — AI reads your docs for context.
Verify and score. Check the work, assign confidence, close, next task.
1. Open new Cline chat / Claude Code session
2. "Can we please plan task X.X?"
3. AI reads docs, proposes approach
4. Review plan, ask questions, adjust
5. "Proceed" → execution
6. Approve commands as needed
7. AI completes work, provides confidence score
8. Verify it works (run app, check browser, run tests)
9. Close conversation
10. Next task
Every task follows this cycle. No exceptions.
Long conversations accumulate garbage: by message 80, AI is confused and you're paying for 80 messages of context on every response.
New conversation = fresh start. Your documentation provides continuity, not chat history.
The pattern: for new tasks following a sprint plan, plan mode confirms alignment. For fixes, debugging, and tweaks, it is absolutely critical:
I need to fix this: [describe the problem]
Please investigate and propose a plan before making any changes.
Why this matters for fixes: the AI investigates and explains the problem before touching code, instead of patching the first symptom it sees.

In Cline: Use the explicit Plan Mode toggle.
In Claude Code: Ask it to plan, or use the --plan flag.
You don’t need elaborate prompts. This works:
“Can we please plan task 2.3?”
AI will read:

- `.clinerules` / `CLAUDE.md` for rules
- `ARCHITECTURE.md` for the system design
- `LEARNINGS.md` for gotchas

If your docs are good, the prompt can be simple.
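What "good docs" look like varies by project, but the rules file can be short. An illustrative `CLAUDE.md` excerpt (hypothetical content, drawn from this workflow, not a prescribed format):

```markdown
# CLAUDE.md (illustrative excerpt)

## Workflow rules
- Plan first; wait for "Proceed" before writing code.
- Read ARCHITECTURE.md and LEARNINGS.md before proposing a plan.
- End every task with a confidence score plus Done / Deferred / Notes.
```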
When to add context: only when the task depends on something your docs don't cover yet. Otherwise, trust your docs.
Once the plan looks right:
“Looks good. Proceed.”
AI starts working — creating files, editing code, running commands.
Terminal approval (Cline):
Cline wants to run: npm install bcrypt jsonwebtoken
[Approve] [Reject] [Edit]
Quick approval for standard stuff (install, test). Careful review for anything destructive (rm, database operations, git operations).
Claude Code: Review the proposed changes in your terminal. You can stop it at any point.
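The "standard vs. destructive" split above can be captured as a tiny checklist. A hypothetical sketch (the prefix list is illustrative, not exhaustive, and real approval decisions should still be made by a human):

```javascript
// Hypothetical helper: flag commands worth a careful look before
// approving. The prefix list is illustrative, not exhaustive.
const RISKY_PREFIXES = ["rm ", "rm -", "git push --force", "git reset --hard", "drop "];

function needsCarefulReview(command) {
  const c = command.trim().toLowerCase();
  return RISKY_PREFIXES.some((p) => c.startsWith(p));
}

console.log(needsCarefulReview("npm install bcrypt jsonwebtoken")); // false
console.log(needsCarefulReview("rm -rf node_modules"));             // true
```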
When AI finishes, it should provide:
## Confidence: 8/10
**Done:**
- Login endpoint working
- Register endpoint working
- JWT tokens generating correctly
- Tests passing (8/8)
- Smoke tested in browser
**Deferred:**
- Rate limiting (Sprint 2 per roadmap)
- Refresh tokens (V2)
**Notes:**
- Used 24h token expiry as discussed
- Added entry to LEARNINGS about bcrypt cost factor
Your job: verify the claims yourself. Run the app, check the browser, run the tests. Watch for these signals:
AI seems confused: Your docs might be incomplete or contradictory. Check them.
AI does something unexpected: Stop execution, go back to plan mode, discuss.
Confidence below 8: Don’t move on. Ask what’s missing and fix it.
Task taking way too long: Probably too big. Split into subtasks.
Going in circles on a bug: Stop. Write a task doc capturing what’s been tried and what’s left. Start fresh. (See Context Management for details.)
During a task, AI (or you) notices something else that needs fixing in a different part of the app.
Wrong approach: “While we’re here, let’s fix that too.”
Right approach: “That’s important but not our current task. Write a quick task doc for it and let’s stay focused.”
Resist the temptation. Each conversation has limited context, and piling unrelated work into it crowds out the task you actually planned.
Write the task doc. Start a fresh conversation for it later.
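A "quick task doc" really can be quick. A hypothetical minimal template:

```markdown
# Task: [short name]

**Found during:** [task X.X]
**Problem:** [what is wrong, and where]
**Tried so far:** [anything already attempted]
**Suggested approach:** [one or two lines]
```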
| Phase | What Happens |
|---|---|
| New conversation | Fresh context, AI reads docs |
| Plan Mode | AI proposes approach, you review |
| Execution | AI writes code, you approve commands |
| Completion | Confidence score, verification |
| Close | Done. Next task gets new conversation |
Next: Task Patterns — How to document completed tasks.