This seems tailored to the Claude web/chat interface. Does anyone have experience or systems specific to Claude Code?
I've been using Opencode alongside Claude, trying to push as much easy/rote work as possible to Opencode so I don't blow through my Claude context, but it's a pain in the rear. I'm sure someone here has solved this for themselves, and I'd love to hear what people are doing in the "token efficiency" realm.
In the Claude Code realm I use GSD, and it manages all of that and keeps token use low. It also writes memory to a file, so I can type /clear to reset the token count and it still remembers where I left off for the next session. Check it out here:
https://github.com/gsd-build/get-shit-done
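The write-progress-to-a-file trick can be approximated by hand, too. A minimal sketch of the pattern (the file name, format, and helper names here are hypothetical, not GSD's actual implementation):

```python
from pathlib import Path
from datetime import datetime, timezone

# Hypothetical notes file; GSD maintains its own files and format.
NOTES = Path("session-notes.md")

def save_progress(summary: str) -> None:
    """Append a timestamped summary before running /clear."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(f"## {stamp}\n{summary}\n\n")

def load_progress() -> str:
    """Read the notes back; paste into the fresh session to restore context."""
    return NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""

save_progress("Refactored the parser; next: add tests for edge cases.")
print(load_progress())
```

The point is just that the memory lives on disk instead of in the context window, so /clear costs you nothing but a paste.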
I tend to just /compact when I reach a good stopping point within the task at hand, then start a new chat when that task is complete. Good documentation, planning, and structure tend to help a lot as well.
I also use /compact. It's slightly easier than copy-pasting into a new chat. But I forget not everybody uses Claude Code.
/compact also takes an optional instruction parameter, e.g. exclude the data format discussion.
Points 1, 2 and 3 also guard against the LLM going down the wrong path. Keep that context clean.