Pre-compaction topic splitting for OpenClaw agents. Never lose context again.
The problem: Every AI agent has a context window, a limit on how much conversation it can hold. When that window fills up, the system compacts: it compresses old messages into a summary. Sounds harmless, but it's not. Specific numbers, the reasoning behind decisions, open threads, the "why" behind choices. All gone. Your agent starts forgetting things. You start repeating yourself. The longer you work together, the worse it gets.
Context Keeper makes compaction irrelevant. It watches your sessions, warns you before the window fills, analyzes the conversation for natural topic boundaries, and transplants the full substance (not a summary) into fresh, focused sessions. Every decision, every detail, every lesson learned moves to a new home with a clean context window. Nothing gets compressed. Nothing gets lost.
One "RV Park World" session at 86%: product features, data pipelines, marketing strategy, SEO, pricing debates, scraper status, all competing for the same 200K-token window. Compaction about to crush everything into a vague summary.
↓
"RV Park Data/Ops": fresh session with full pipeline state, cron schedules, batch limits, data counts, infrastructure rules
↓
"RV Park Marketing": fresh session with pricing decisions ($499/yr, annual only), SEO status, content strategy, outreach drafts, competitor analysis
↓
"RV Park World Product": the original session, now focused purely on product/UI decisions, lean and fast
Three focused sessions instead of one bloated one. Zero information lost. All three start with full context windows.
| | Compaction | Context Keeper |
|---|---|---|
| Method | Lossy compression | Lossless transplant |
| Control | Automatic, no approval | Human-in-the-loop |
| Details preserved | Summarized away | Fully preserved |
| Organization | One blob summary | Categorized by topic |
| Timing | Happens TO you | Happens FOR you, with warning |
| Recovery | Can't undo | Original context lives on |
These principles aren't generic; they were developed through real-world usage and iterative self-critique:
• Measure token volume, not topic count: Assistant responses are where the real bloat lives. A topic generating heavy code output is a bigger split candidate than one with lots of short messages.
• Don't force weak clusters: If a topic overlaps with everything else, it's the core of the conversation. Leave it in the main session. Only split topics that are genuinely distinct.
• Split noise from signal: The best split candidate is always high-volume / low-decision content, such as pipeline logs, status reports, and automated outputs. That's what eats context without adding decision-making value.
• Start with 2 splits: The urge to create 3-4 groups feels thorough but creates "where does this go?" confusion. Two splits plus keeping the original is usually right. Split again later if needed.
• Self-critique once, then ship: The first analysis gets the structure. One critique sharpens it. After that you're spinning. Move.
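The first two principles can be sketched as a simple scoring pass. Everything below (the message schema, field names, weights, and thresholds) is illustrative, not Context Keeper's real internals: group messages by topic, weight assistant output more heavily, and only propose topics that are both heavy and genuinely distinct.

```python
from collections import defaultdict

def split_candidates(messages, min_tokens=20_000, max_overlap=0.5):
    """Score topics by token volume and skip topics that overlap
    heavily with the rest of the conversation.

    `messages` is a list of dicts: {"topic", "role", "tokens", "refs"},
    where `refs` is the set of other topics a message touches.
    Hypothetical schema for illustration only.
    """
    volume = defaultdict(int)
    cross_refs = defaultdict(int)
    counts = defaultdict(int)
    for m in messages:
        # Assistant responses (code dumps, logs) are where bloat lives,
        # so weight them double in the volume score.
        weight = 2 if m["role"] == "assistant" else 1
        volume[m["topic"]] += weight * m["tokens"]
        counts[m["topic"]] += 1
        cross_refs[m["topic"]] += len(m.get("refs", ()))

    candidates = []
    for topic, vol in volume.items():
        # A topic referenced everywhere is the conversation's core:
        # leave it in the main session rather than forcing a weak cluster.
        overlap = cross_refs[topic] / counts[topic]
        if vol >= min_tokens and overlap <= max_overlap:
            candidates.append((vol, topic))
    # Heaviest distinct topics first; cap at two splits.
    return [t for _, t in sorted(candidates, reverse=True)][:2]
```

Capping the return at two mirrors the "start with 2 splits" principle: the main session keeps everything that didn't clear the bar.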
Context Keeper is one layer in a full memory stack:
• Memory files: daily notes plus long-term MEMORY.md catch what compaction misses
• Semantic search: memory_search finds anything written down, by meaning, not just keywords
• Mission Control dashboard: visual context gauges show which sessions are running hot
• HANDOFF.md: carries session state and vibe between restarts
• Context Keeper: prevents compaction from ever being a surprise
No single piece solves context loss. Together they make compaction basically irrelevant.
1. Copy context-keeper/ into your workspace skills/ directory
2. Add a daily cron job pointing to check.py (see SKILL.md for the full cron command)
3. Configure the threshold in check.py if 65% doesn't suit your workflow
4. When it flags a session, you approve the splits and create the new chat groups; the agent handles the rest
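A minimal sketch of the kind of threshold check `check.py` performs, assuming you can read each session's used and total tokens; the data shape and function name here are hypothetical, so see SKILL.md and check.py for the real implementation:

```python
THRESHOLD = 0.65  # warn at 65% of the context window, per step 3

def sessions_to_flag(sessions, threshold=THRESHOLD):
    """Return the names of sessions whose context usage has crossed
    the threshold, hottest first.

    `sessions` maps session name -> (used_tokens, window_tokens).
    Illustrative data shape, not Context Keeper's actual input.
    """
    flagged = []
    for name, (used, window) in sessions.items():
        usage = used / window
        if usage >= threshold:
            flagged.append((usage, name))
    # Sort hottest first, like the Mission Control context gauges.
    return [name for _, name in sorted(flagged, reverse=True)]
```

Run daily from the cron job in step 2, a check like this is what turns compaction from a surprise into a scheduled, human-approved split.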