Working smarter with AI: a builder's guide to staying in control
One of my main frustrations when working with my Claude Pro account is the usage restrictions. I get why they're needed, but I regularly have to top up my account to keep working.
So when I saw a popular Substack post doing the rounds that lists 23 habits for saving Claude credits, I was intrigued.
Some of it is useful. Most of it is written for passive consumers who treat AI like a subscription they're trying to get value from.
That's not you.
If you're like me and using something similar to the Outcome Path approach, you're building with AI tools. You're writing PRDs, validating hypotheses, structuring build sequences, and using Claude Code or Cursor to get working software out the other side. The stakes are different. The habits need to be different too.
This isn't a list of tricks. It's a structured guide to working with AI tools in a way that keeps you in control, keeps your sessions coherent, and keeps your builds moving forward.
Why AI sessions go wrong
Before habits, you need to understand the mechanism. Everything else follows from this.
Every time you send a message to an AI model, it reads the entire conversation from the beginning. Message one. Message two. All the way down to whatever you just typed. It doesn't have memory between sessions. It doesn't retain a mental model of your project. It starts fresh each time, and the only thing it knows is what's currently in its context window.
This has two practical consequences.
First, the longer a conversation runs, the more the model is spending its capacity re-reading history rather than thinking about your current problem. A 30-message session is dramatically less coherent than three 10-message sessions covering the same ground.
Second, when a session runs long and the history becomes noisy, the model starts making inferences based on earlier context that's no longer relevant. It loses the thread. You get outputs that feel like the model forgot what you were building. Because in a meaningful sense, it did.
This is why Claude Code and Cursor often condense conversation history as sessions get longer. The model is internally summarising earlier exchanges to keep running, which means early decisions get paraphrased rather than preserved precisely. If those decisions were important, you want them in a document, not in a chat history the model is quietly compressing.
Understanding this changes how you structure your sessions entirely.
Before you open a session
The most expensive thing you can do is start without knowing what you want out of it. Vague prompts lead to iterative corrections. Corrections stack up history. History dilutes coherence.
Write your intent before you open the tool.
This is already baked into the Outcome Path flow. Your hypothesis defines the session before it starts. But it's worth being explicit: before you open Claude, Cursor, or Claude Code, write one or two sentences describing the concrete output you need. Not the goal. The output. There's a difference.
"Help me think through my auth approach" is a goal. It will take ten messages to get anywhere.
"Review this auth section of my PRD and identify the two highest-risk assumptions" is an output. It takes one message and a clear response.
Sometimes you will need to do the former. Thinking out loud and pulling on threads is a legitimate part of the process. But once you've sharpened your goal into a concrete output, start a new session for that output.
Don't do everything in a single conversation or thread. More on this later.
Keep your context files lean.
If you're using Projects in Claude or a CLAUDE.md file in Claude Code, what you put in there gets read at the start of every session. That's useful. But it also means bloated context files cost you before you've typed a single prompt.
Keep your persistent context to what the model genuinely needs to do its job: the hypothesis, the current phase of the build, key constraints, and any conventions it should follow. If it's not decision-relevant, cut it.
A good CLAUDE.md is under 300 words. A good project context file covers the hypothesis, the stack, and the current milestone. Nothing else.
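As an illustration, a lean CLAUDE.md for a hypothetical project might look like this (the project, stack, and conventions are all invented for the example):

```markdown
# Project: Invoice Reminder SaaS (hypothetical example)

## Hypothesis
Freelancers will pay for automated invoice chasing if setup takes under five minutes.

## Current phase
MVP build, milestone 2: reminder scheduling.

## Stack
Next.js, Postgres, Stripe. Deployed on Vercel.

## Conventions
- TypeScript strict mode, no `any`.
- One migration file per schema change.
- Ask before adding new dependencies.
```

Everything in there is decision-relevant. Anything the model could work out for itself by reading the code stays out.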
During a session: PRD and planning work
Planning sessions are where most people lose control. You're thinking out loud, the model is responding at length, and before long you have a 25-message conversation that no longer reflects the decisions you've actually made.
One session, one job.
Don't write your problem statement, refine your hypothesis, draft your user stories, and review your assumptions in the same conversation. Each of those is a distinct job. Give each one its own session with its own clear starting context.
When you finish a planning session, extract the output before you close it. Copy the final PRD section, the confirmed hypothesis, the assumption list. Don't rely on being able to scroll back later. The session is ephemeral. The document is not.
Use the model to stress-test, not just generate.
This is where Outcome Path thinking separates from generic AI use. Most people use Claude to produce content. Use it to challenge yours instead.
"Here is my hypothesis. What assumptions does it rest on that I haven't made explicit? What would have to be true for this to fail within 90 days?"
That kind of prompt produces more useful output than "write me a PRD" and costs far fewer tokens to act on.
When something is wrong, edit rather than correct.
If the model misunderstood your prompt, don't send a follow-up explaining what you actually meant. Go back and edit the original message. In Claude's chat interface, every message has an edit option. Use it. Follow-up corrections stack history. Edits replace it.
During a session: build work in Cursor and Claude Code
Build sessions have different failure modes to planning sessions. The risk isn't losing the thread. It's scope creep and uncontrolled exploration.
Scope every prompt like a user story or task.
Before you type a build prompt, ask yourself: could I put this on a user story with a clear acceptance criterion? If not, it's too vague. Claude Code in particular will explore widely if given room to. It will read directories, check dependencies, investigate adjacent files. That's useful when you want it. It's a problem when you don't.
"Add form validation to the email field on the signup page. Return an inline error if the format is invalid. Do not touch any other component" is a scoped prompt.
"Make the signup form better" is not.
It's the same discipline as working with a human. You wouldn't hand a developer a vague brief and expect a precise result.
Maintain a build-plan.md.
Across a multi-session build, the model has no memory of what you did last time. Your build-plan.md is the bridge. Keep it updated at the end of every session: what was completed, what's next, any decisions made that affect future tasks.
Start every new build session by pointing the model at it: "Read build-plan.md. We're picking up from the task marked in progress. Here's the relevant context..."
This costs almost nothing. It saves you from re-explaining your entire project at the start of every session.
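To make this concrete, here's a sketch of what a minimal build-plan.md might contain (the tasks and decisions are invented for illustration):

```markdown
# Build plan

## Done
- [x] Auth: email/password signup and login
- [x] DB schema for invoices and reminders

## In progress
- [ ] Reminder scheduling job
  (Decision: use the platform's scheduled jobs, not a worker queue)

## Next
- [ ] Checkout flow for the paid tier

## Decisions affecting future work
- Reminders are idempotent by invoice ID, so retries are safe.
```

The structure matters less than the habit: done, in progress, next, and any decisions your future sessions need to respect.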
Know when to restart.
If a build session has gone sideways and you've sent several corrective messages that haven't resolved the problem, stop. Don't keep going. The history is now working against you.
Summarise what you were trying to do, what went wrong, and what the current state of the code is. Open a fresh session. Paste the summary. Start clean.
A fresh session with good context outperforms a long session with noisy history every time.
Managing context across the full build
Use Projects for persistent reference material.
If you're referencing the same PRD, the same architecture notes, or the same design decisions across multiple sessions, put them in a Claude Project. Files uploaded to a Project are cached and don't burn through your context the way repeated uploads to individual chats do.
Think of a Project as your shared working memory with the model. Anything that doesn't change session to session belongs there. Anything session-specific goes in the chat.
Generate a session recap before you close.
At the end of any meaningful session, ask the model to produce a brief summary: decisions made, outputs produced, next steps. Paste that into your build-plan.md or your PRD notes. This is your handoff document to your next session's context.
It takes 30 seconds and it's the single most effective habit for maintaining coherence across a multi-day build.
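The recap prompt itself can be boilerplate you reuse at the end of every session. One version of such a prompt:

```markdown
Before we close, summarise this session in three short sections:
1. Decisions made (one-line rationale each)
2. Outputs produced (files, PRD sections, code changes)
3. Next steps, in order, with anything blocked flagged

Keep it under 150 words so it pastes cleanly into build-plan.md.
```

The word limit is the important part. A recap that's as long as the session it summarises defeats the purpose.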
Don't carry dead context forward.
If you finished a phase of the build, archive the notes from that phase. Don't carry your hypothesis validation notes into your deployment session. Don't keep your early assumption list in context once it's been resolved. Dead context adds noise and costs you coherence. Trim it regularly.
The underlying principle
Everything in this guide comes back to one idea: the model doesn't know what you know. It only knows what you've told it, in this session, right now.
The builders who get the most out of AI tools are not the ones who write the cleverest prompts. They're the ones who manage context deliberately, structure their sessions around clear outputs, and stay in the driver's seat throughout.
That's the Outcome Path approach. Product thinking first. AI as a capable collaborator that needs good direction, not a magic box that rewards longer conversations.
Keep your sessions lean. Keep your intent clear. Keep your context current.
The tools work better when you do.