After months of working with AI coding assistants, I’ve discovered the secret to a productive partnership. It isn’t crafting the perfect prompt or finding the most capable model. It’s something far more fundamental: the art of context management.
Let me take you through my journey from chaos to clarity, and share the hard-won lessons about managing context, state, and workflow when building real applications with AI assistance.
The Early Days: Simple and Sweet
When I first started using Claude Code for development work, everything felt magical. I’d describe what I wanted, and out would come working code. For simple scripts and small utilities, this worked beautifully. My workflow was straightforward:
- Open a session
- Describe the problem
- Get a solution
- Move on with my life
The context was clean, the scope was limited, and the AI had everything it needed right there in our conversation. Life was good.
The First Growing Pains
As my projects grew more ambitious, I started noticing something interesting. The quality of the AI’s output was inversely proportional to how much context I was trying to maintain in a single session. The more I tried to explain about the broader system, the more the AI would lose track of specific implementation details. The more history we built up in our conversation, the more likely it was to contradict earlier decisions or forget key constraints.
This led to my realization: AI assistants work best with focused, limited context.
Clearing your session context before you start a new task produces much better output than letting the context grow, filled with irrelevant information, and then compacting it. So it’s best to clear the session after each step in the process: requirements discussion, architecture planning, and each individual task implementation. The catch is that while an overburdened context is bad, the AI still needs SOME context to complete the current task. So some state, context, and reference information needs to be kept around, somewhere…
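In practice, the rhythm I settled into looks roughly like this (a sketch, not a prescription; /clear is Claude Code’s command for wiping the conversation history, which I now reach for far more often than /compact):

```
requirements discussion   → write the outcome down somewhere
/clear
architecture planning     → write the outcome down somewhere
/clear
implement task 1          → note what was done
/clear
implement task 2
…
```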
The Markdown Revolution
My first breakthrough came when I started using markdown files as external state storage. Instead of trying to maintain everything in the conversation, I began creating simple documents:
- architecture.md – High-level technical decisions and system design
- decisions.md – Key choices made and their rationale
- tasks.md – What needed to be done (this quickly evolved into a tasks directory with individual task files in todo, current, and done sub-directories)
- completed.md – What was finished and any important notes
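To make that concrete, here’s roughly what the layout ended up looking like in one of my projects (the structure is the real one; the task file names are just for illustration):

```
project/
├── architecture.md
├── decisions.md
├── completed.md
└── tasks/
    ├── todo/
    │   ├── user-login-form.md
    │   └── send-password-reset-email.md
    ├── current/
    │   └── session-middleware.md
    └── done/
        └── project-scaffolding.md
```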
This approach had several immediate benefits:
- Persistent state across sessions – I could start fresh conversations without losing critical information
- Clear separation of concerns – The AI could focus on the task at hand while still having access to broader context when needed
- Human-readable documentation – These files became valuable project documentation, not just AI food
The Model Selection Insight
Around this time, I made another crucial discovery: not all tasks require the same level of AI horsepower.
I started using larger, more expensive models (like Claude Opus or Gemini 2.5 Pro) for:
- Initial architecture and system design
- Complex problem-solving
- Creating the overall development plan
- Making key technical decisions
Then I’d switch to faster, more affordable models (like Claude Sonnet or GPT-5) for:
- Implementing individual functions
- Writing unit tests
- Code formatting and cleanup
- Simple refactoring tasks
This wasn’t just about cost optimization (though that was nice). The smaller models actually performed better on focused tasks because they weren’t overthinking things. Give a powerful model a simple task, and it might architect a distributed system where a simple function would do.
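I eventually wrote this split down so I’d stop re-deciding it on every task. A short note like the following (purely illustrative, and the model names will date quickly) lives alongside the other project files:

```
## Model selection guideline

- Architecture, system design, overall plan    → Claude Opus / Gemini 2.5 Pro
- Complex problem-solving, key decisions       → Claude Opus / Gemini 2.5 Pro
- Implementing functions, writing unit tests   → Claude Sonnet / GPT-5
- Formatting, cleanup, simple refactoring      → Claude Sonnet / GPT-5
```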
Scaling Up: The Hierarchy Emerges
As my projects grew from simple utilities to full applications, my flat task list started breaking down. I needed more structure. This led to adopting a more traditional project management hierarchy:
- Epics – Large features or major components
- Features – Specific functionality within an epic
- Tasks – Individual, implementable units of work
Each level got its own markdown file with appropriate detail. An epic might describe the business goal and high-level approach. A feature would specify the user-facing functionality. A task would contain specific implementation requirements.
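To give a sense of the level of detail at each tier, here’s a made-up example of a single task file; the feature and epic it points to each have their own, higher-level documents:

```
# Task: send password-reset email

Feature: password-reset (epic: user-accounts)

## Requirements
- Generate a single-use, time-limited reset token
- Send it through the existing mailer module
- Log the request, never the token

## Out of scope
- Rate limiting (tracked as a separate task)
```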
This hierarchy served two purposes:
- It helped me organize my own thinking
- It allowed me to give the AI exactly the right level of context for what I was asking it to do
The Context Window Strategy
Through trial and error, I developed what I call the “context window strategy.” For any given AI session, I’d include:
- The immediate task – What we’re building right now
- One level up – The feature this task belongs to
- Critical constraints – Only the relevant parts of the architecture doc, the coding conventions doc, and so on
- Recent decisions – But only if they affect this task
Everything else stayed in markdown files that the AI could reference if needed but didn’t clutter the active context.
This is like the difference between a cluttered desk where you can’t find anything and a clean workspace with clearly labeled filing cabinets. The AI performs much better with the latter.
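A fresh session for a task therefore starts with a short, assembled briefing along these lines (illustrative, reusing the made-up file names from earlier):

```
# Context for this task

Task:     tasks/current/send-password-reset-email.md
Feature:  features/password-reset.md

Relevant constraints
- architecture.md: all outbound email goes through the mailer module
- conventions.md: new code needs unit tests before a task is marked done

Recent decisions that affect this task
- decisions.md: reset tokens expire after 30 minutes
```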
Lessons Learned
After months of refinement, here are my key takeaways for productive AI-assisted development:
1. Embrace Session Boundaries
Don’t try to maintain one long conversation for an entire project. Start fresh sessions for new features or major tasks. It’s liberating and leads to better results.
2. Externalize Everything Important
If a decision, constraint, or piece of information matters beyond the current task, put it in a markdown file. Your future self (and the AI) will thank you.
3. Match Model to Task
Use your most powerful models for planning and architecture. Use faster models for implementation. It’s like using a sledgehammer for demolition and a finish hammer for trim work.
4. Structure Scales
What works for a simple script breaks down for a complex application. Don’t be afraid to add organizational structure as your project grows.
5. Context is Currency
In AI-assisted development, context is your most valuable and limited resource. Spend it wisely. Include what’s necessary, exclude what’s not, and always know where to find what you’ve excluded.
6. Documentation is Not Overhead
Those markdown files aren’t just for the AI. They become valuable project documentation that helps you understand your own system months later.
The Path Forward
Working with AI assistants has fundamentally changed how I approach development, but not in the way I expected. It’s not about writing better prompts or finding smarter models. It’s about building systems that allow both human and AI to work at their best.
The key insight is that AI assistants are incredibly capable but fundamentally stateless. They’re like brilliant consultants who show up fresh each day with no memory of yesterday. Your job is to build the systems that give them exactly what they need to be productive, no more and no less.
This approach has transformed my development workflow from frustrating conversations where the AI keeps forgetting things to smooth, productive sessions where each task gets completed efficiently and correctly. The overhead of maintaining these markdown files and managing context pays for itself many times over in reduced frustration and improved output quality.
The future of AI-assisted development isn’t about more capable models (though those are nice). It’s about better workflows, smarter context management, and systems that play to the strengths of both human creativity and AI capability.
In my next post, I’ll share the specific tooling and automation I’ve built to make this workflow even more efficient. But even with just markdown files and a thoughtful approach to context management, you can dramatically improve your AI-assisted development experience today.
The robots are here to help. We just need to learn how to work with them effectively.