diff --git a/CLAUDE.md b/CLAUDE.md
index 767a110..bb016db 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -6,6 +6,149 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 **Fading Memories** - Hierarchical conversation compression with natural memory decay
 
+A memory system for AI conversations that mimics human memory: vivid recent details progressively compress into summaries, with frequently accessed memories staying sharp while neglected ones gradually fade into oblivion.
+
+### Core Concept
+
+```
+Time 0: [Full detailed conversation - 50,000 tokens]
+         │
+Time 1: [Summary L1 - 5,000 tokens] ──→ [Details accessible via links]
+         │
+Time 2: [Summary L2 - 500 tokens] ──→ [L1 accessible] ──→ [Details if accessed]
+         │
+Time 3: [Summary L3 - 50 tokens] ──→ [Faded memories pruned if never accessed]
+```
+
+### Memory Decay Model
+
+```
+┌────────────────────────────────────────────────────────────┐
+│                                                            │
+│ Strength                                                   │
+│    │                                                       │
+│ ███│████                                                   │
+│ ███│████████                                               │
+│ ███│████████████   ← Accessed memories stay vivid          │
+│ ███│████████████████                                       │
+│ ███│█████████████████████                                  │
+│ ███│████████████████████████████                           │
+│    │                            ████████                   │
+│    │                                    ████──→ 0 (fade)   │
+│    └─────────────────────────────────────────────────────  │
+│                                                  Time      │
+└────────────────────────────────────────────────────────────┘
+```
+
+**Factors affecting decay:**
+- **Access frequency** - Viewed memories decay more slowly
+- **Importance markers** - Explicitly marked memories persist
+- **Reference count** - Memories linked by other memories persist longer
+- **Recency** - Recent memories have higher base strength
+
+### Architecture
+
+```
+src/fading_memories/
+├── __init__.py
+├── __main__.py            # CLI entry point
+├── models/
+│   ├── memory.py          # Memory node (content, strength, links)
+│   ├── conversation.py    # Conversation container
+│   └── hierarchy.py       # Memory tree structure
+├── compression/
+│   ├── summarizer.py      # LLM-based summarization
+│   ├── linker.py          # Extract/preserve important links
+│   └── strategies.py      # Compression strategies
+├── decay/
+│   ├── strength.py        # Strength calculation
+│   ├── scheduler.py       # When to compress/prune
+│   └── pruner.py          # Remove faded memories
+├── storage/
+│   ├── sqlite.py          # SQLite backend
+│   └── export.py          # Export to markdown/json
+└── api/
+    ├── server.py          # REST API
+    └── routes.py          # Endpoints
+```
+
+### Data Model
+
+```python
+class Memory:
+    id: str                  # Unique identifier
+    content: str             # The actual content
+    level: int               # Compression level (0=raw, 1=summary, etc.)
+    parent_id: str | None    # Link to more detailed version
+    children: list[str]      # Links to compressed versions
+
+    # Decay tracking
+    strength: float          # 0.0 to 1.0, below threshold = prune
+    created_at: datetime
+    last_accessed: datetime
+    access_count: int
+
+    # Metadata
+    importance: float        # User-marked importance
+    tokens: int              # Token count
+    tags: list[str]
+
+class Conversation:
+    id: str
+    memories: list[Memory]   # Hierarchy of memories
+    root_id: str             # Most compressed summary
+
+    def access(self, memory_id: str) -> Memory:
+        """Access a memory, boosting its strength."""
+
+    def drill_down(self, memory_id: str) -> Memory | None:
+        """Get more detailed parent memory if it exists."""
+
+    def summarize(self) -> str:
+        """Get current top-level summary."""
+```
+
+### Compression Flow
+
+1. **Ingest** - Raw conversation comes in
+2. **Chunk** - Split into semantic chunks
+3. **Summarize** - Create L1 summary, link to chunks
+4. **Store** - Save with initial strength = 1.0
+5. **Decay** - Over time, strength decreases
+6. **Access** - When accessed, strength is boosted
+7. **Compress** - When strength drops, create L2 summary
+8. **Prune** - When strength ≈ 0 and nothing still links to it, delete
+
+### API Endpoints
+
+```
+POST /conversations                    # Create new conversation
+GET  /conversations/:id                # Get conversation summary
+GET  /conversations/:id/memory/:mid    # Access specific memory (boosts strength)
+POST /conversations/:id/drill          # Drill down to more detail
+GET  /conversations/:id/tree           # Get full memory hierarchy
+POST /decay/run                        # Trigger decay cycle
+```
+
+### CLI Usage
+
+```bash
+# Add a conversation
+fading-memories add conversation.txt
+
+# View current summary
+fading-memories view
+
+# Drill into details
+fading-memories drill
+
+# Run decay cycle
+fading-memories decay --threshold 0.1
+
+# Export before it fades
+fading-memories export --format markdown
+```
+
 ## Development Commands
 
 ```bash
@@ -15,34 +158,33 @@ pip install -e ".[dev]"
 
 # Run tests
 pytest
 
-# Run a single test
-pytest tests/test_file.py::test_name
+# Start API server
+fading-memories serve --port 8080
+
+# Run decay scheduler
+fading-memories daemon
 ```
 
-## Architecture
+## Key Design Decisions
 
-*TODO: Describe the project architecture*
+1. **Hierarchical, not flat** - Memories link to more/less detailed versions
+2. **Lazy deletion** - Only prune under storage pressure or on explicit request
+3. **Boost on access** - Reading a memory reinforces it
+4. **Configurable decay** - Different decay curves for different use cases
+5. **Export before fade** - Always allow exporting before deletion
 
-### Key Modules
+## Use Cases
 
-*TODO: List key modules and their purposes*
-
-### Key Paths
-
-- **Source code**: `src/fading-memories/`
-- **Tests**: `tests/`
-- **Documentation**: `docs/` (symlink to project-docs)
+- **Long-running AI conversations** - Keep context without unbounded growth
+- **Chat history archival** - Compress old chats while keeping them searchable
+- **Meeting notes** - Detailed notes fade to action items over time
+- **Learning systems** - Spaced repetition based on access patterns
 
 ## Documentation
 
 Documentation lives in `docs/` (symlink to centralized docs system).
 
-**Before updating docs, read `docs/updating-documentation.md`** for full details on visibility rules and procedures.
-
 Quick reference:
 - Edit files in `docs/` folder
 - Use `public: true` frontmatter for public-facing docs
-- Use `` / `` to hide sections
 - Deploy: `~/PycharmProjects/project-docs/scripts/build-public-docs.sh fading-memories --deploy`
-
-Do NOT create documentation files directly in this repository.
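
The decay factors listed in the patch suggest a simple scoring function. A minimal sketch of one way `decay/strength.py` might combine them — the exponential form, the 72-hour half-life, and the log-based access bonus are all illustrative assumptions, not part of the design above:

```python
import math
from datetime import datetime, timedelta

def strength(last_accessed: datetime, access_count: int, importance: float,
             now: datetime, half_life_hours: float = 72.0) -> float:
    """Hypothetical strength score in [0, 1]: exponential decay since the last
    access, slowed by past accesses and floored by user-marked importance."""
    hours_idle = (now - last_accessed).total_seconds() / 3600.0
    # Each past access stretches the half-life, so viewed memories decay slower.
    effective_half_life = half_life_hours * (1.0 + math.log1p(access_count))
    decayed = 0.5 ** (hours_idle / effective_half_life)
    # Importance acts as a floor, so explicitly marked memories persist.
    return max(importance, min(1.0, decayed))

now = datetime(2024, 1, 10)
vivid = strength(last_accessed=now, access_count=5, importance=0.0, now=now)
faded = strength(last_accessed=now - timedelta(days=9), access_count=0,
                 importance=0.0, now=now)
pinned = strength(last_accessed=now - timedelta(days=9), access_count=0,
                  importance=0.8, now=now)
```

With a model like this, the `decay --threshold 0.1` command above would prune any memory whose score has dropped below 0.1.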