🛡️ Context Keeper

Pre-compaction topic splitting for OpenClaw agents. Never lose context again.

The problem: Every AI agent has a context window — a limit on how much conversation it can hold. When that window fills up, the system compacts: it compresses old messages into a summary. Sounds harmless, but it's not. Specific numbers, the reasoning behind decisions, open threads, the "why" behind choices — all gone. Your agent starts forgetting things. You start repeating yourself. The longer you work together, the worse it gets.

Context Keeper makes compaction irrelevant. It watches your sessions, warns you before the window fills, analyzes the conversation for natural topic boundaries, and transplants the full substance — not a summary — into fresh, focused sessions. Every decision, every detail, every lesson learned moves to a new home with a clean context window. Nothing gets compressed. Nothing gets lost.

How It Works

1. Watches — A daily cron monitors every conversation session's context usage. Configurable threshold (default: 65%). You see the gauges on your dashboard filling up like fuel tanks.
2. Flags — When a session crosses the threshold, the agent reads the full conversation history and identifies natural topic clusters. Not random splits — it looks at what's genuinely distinct and what's actually eating the most tokens.
3. Proposes — The agent messages you: "This session is at 78%. I see 2 topics that should split: Data Ops and Marketing. Want me to proceed?" You approve, adjust, or reject. Nothing happens without your sign-off.
4. Transplants — This is the core. For each approved topic, the agent extracts everything: decisions made, current state, open threads, key details, lessons learned, dead ends to avoid. It dumps a structured context transfer into each new session. Not a summary — the full substance.
5. Clean slate — The original session can compact safely because everything important already lives in focused sessions with full context windows. You pick up right where you left off — in the right room, with the right context.
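The watch-and-flag steps above boil down to a simple threshold check. Here is a minimal sketch in Python; the session names, token counts, and the 200K window size are illustrative assumptions, not the skill's actual data model:

```python
# Sketch of steps 1-2: flag sessions whose context usage crosses
# the threshold. All field names and numbers here are assumptions.

THRESHOLD = 0.65          # default: warn at 65% of the window
WINDOW_TOKENS = 200_000   # window size used in the example below

def flag_sessions(sessions):
    """Return (name, usage_pct) for sessions that should be analyzed for splits."""
    flagged = []
    for name, used_tokens in sessions.items():
        usage = used_tokens / WINDOW_TOKENS
        if usage >= THRESHOLD:
            flagged.append((name, round(usage * 100)))
    return flagged

sessions = {"RV Park World": 172_000, "Side Project": 40_000}
print(flag_sessions(sessions))  # [('RV Park World', 86)]
```

Only the 86%-full session gets flagged; everything else is left alone until it runs hot.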

Real Example

Before Context Keeper

One "RV Park World" session at 86% — product features, data pipelines, marketing strategy, SEO, pricing debates, scraper status all competing for the same 200K token window. Compaction about to crush everything into a vague summary.

After Context Keeper

✅ "RV Park Data/Ops" — fresh session with full pipeline state, cron schedules, batch limits, data counts, infrastructure rules
✅ "RV Park Marketing" — fresh session with pricing decisions ($499/yr annual only), SEO status, content strategy, outreach drafts, competitor analysis
✅ "RV Park World Product" — original session now focused purely on product/UI decisions, lean and fast

Three focused sessions instead of one bloated one. Zero information lost. All three start with full context windows.

What a Context Transplant Looks Like

🔄 CONTEXT TRANSPLANT — [Topic Name]
Source: [Original Session] ([X]% context)
Date: [date]

═══ CURRENT STATE ═══
[Specific numbers, statuses, configurations]

═══ KEY DECISIONS ═══
[Every decision with reasoning — not just what, but why]

═══ ACTIVE PROCESSES ═══
[Running automations, cron jobs, scheduled tasks]

═══ OPEN THREADS ═══
[Unresolved items, pending dependencies]

═══ WHAT NOT TO DO ═══
[Dead ends, failed approaches, lessons learned]
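A transplant message like the template above can be rendered from structured fields. The function below is a hypothetical sketch, not the skill's actual API; the section titles mirror the template, but every name and signature here is an assumption:

```python
def render_transplant(topic, source, pct, date, sections):
    """Render a context-transplant message in the template's format.

    `sections` maps section titles (e.g. "CURRENT STATE") to lists of
    bullet lines. Illustrative sketch only, not a fixed interface.
    """
    lines = [
        f"🔄 CONTEXT TRANSPLANT — {topic}",
        f"Source: {source} ({pct}% context)",
        f"Date: {date}",
    ]
    for title, items in sections.items():
        lines.append(f"═══ {title} ═══")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

msg = render_transplant(
    "RV Park Marketing", "RV Park World", 86, "2025-01-15",
    {"KEY DECISIONS": ["$499/yr annual only; monthly tier rejected"]},
)
print(msg)
```

Keeping the sections as explicit lists is what makes the transfer lossless: nothing is paraphrased on the way into the new session.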

Context Keeper vs Compaction

|                    | Compaction             | Context Keeper                 |
|--------------------|------------------------|--------------------------------|
| Method             | Lossy compression      | Lossless transplant            |
| Control            | Automatic, no approval | Human-in-the-loop              |
| Details preserved  | Summarized away        | Fully preserved                |
| Organization       | One blob summary       | Categorized by topic           |
| Timing             | Happens TO you         | Happens FOR you, with warning  |
| Recovery           | Can't undo             | Original context lives on      |

Built-In Analysis Rules

These aren't generic โ€” they were developed through real-world usage and iterative self-critique:

• Measure token volume, not topic count — Assistant responses are where the real bloat lives. A topic generating heavy code output is a bigger split candidate than one with lots of short messages.

• Don't force weak clusters — If a topic overlaps with everything else, it's the core of the conversation. Leave it in the main session. Only split topics that are genuinely distinct.

• Split noise from signal — The best split candidate is always high-volume / low-decision content: pipeline logs, status reports, automated outputs. That's what eats context without adding decision-making value.

• Start with 2 splits — The urge to create 3-4 groups feels thorough but creates "where does this go?" confusion. Two splits plus keeping the original is usually right. Split again later if needed.

• Self-critique once, then ship — First analysis gets the structure. One critique sharpens it. After that you're spinning. Move.
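The first rule, measuring token volume rather than topic count, can be sketched as a ranking pass over the conversation. The message shape and the 4-characters-per-token heuristic below are rough assumptions for illustration, not the skill's actual tokenizer:

```python
# Rank topics by the estimated token weight of their assistant
# messages, not by how many messages mention them. The message
# dict shape and chars-per-token ratio are assumptions.

def estimate_tokens(text):
    return len(text) // 4  # crude heuristic, not a real tokenizer

def rank_split_candidates(messages):
    """messages: list of dicts with 'role', 'topic', 'content' keys."""
    weights = {}
    for m in messages:
        if m["role"] == "assistant":  # where the real bloat lives
            weights[m["topic"]] = (
                weights.get(m["topic"], 0) + estimate_tokens(m["content"])
            )
    return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)

msgs = [
    {"role": "assistant", "topic": "data-ops", "content": "log " * 500},
    {"role": "user", "topic": "marketing", "content": "thoughts?"},
    {"role": "assistant", "topic": "marketing", "content": "Keep $499/yr."},
]
print(rank_split_candidates(msgs))  # data-ops ranks first by far
```

A topic with a wall of pipeline logs outranks one with many short, decision-heavy messages, which is exactly the "split noise from signal" behavior the rules describe.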

Works Best With

Context Keeper is one layer in a full memory stack:

• Memory files — daily notes + long-term MEMORY.md catch what compaction misses
• Semantic search — memory_search finds anything written down, by meaning not just keywords
• Mission Control dashboard — visual context gauges show which sessions are running hot
• HANDOFF.md — carries session state and vibe between restarts
• Context Keeper — prevents compaction from ever being a surprise

No single piece solves context loss. Together they make compaction basically irrelevant.

What's Included

skills/context-keeper/
├── SKILL.md            # Full methodology, transplant format, cron setup
├── check.py            # Detection script — reads state.json, flags sessions
└── splitter-state.json # Auto-created at runtime, tracks proposals
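A detection pass like check.py's might look roughly like this: read per-session usage from a state file and report anything over the threshold. The actual script and schema ship with the skill, so treat every field name below (including `context_pct` and the `sessions` key) as an assumption:

```python
# Hypothetical sketch of a check.py-style detection pass.
# The state-file schema here is assumed, not the skill's real format.
import json
from pathlib import Path

THRESHOLD = 65  # percent; Setup step 3 says to adjust this if needed

def check(state_path="splitter-state.json"):
    """Return (session_name, context_pct) for sessions over the threshold."""
    state = json.loads(Path(state_path).read_text())
    return [
        (name, info["context_pct"])
        for name, info in state.get("sessions", {}).items()
        if info["context_pct"] >= THRESHOLD
    ]
```

Run daily from cron, a pass like this is cheap: one file read, one comparison per session, and only the flagged sessions trigger the heavier topic analysis.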

Setup

1. Copy context-keeper/ into your workspace skills/ directory
2. Add a daily cron job pointing to check.py (see SKILL.md for the full cron command)
3. Configure the threshold in check.py if 65% doesn't suit your workflow
4. When it flags a session, you approve the splits and create the new chat groups — the agent handles the rest
