Elite Longterm Memory 🧠

The ultimate memory system for AI agents. Never lose context again.

npm version npm downloads License: MIT


Works With

Claude AI GPT Cursor LangChain

Built for: Clawdbot • Moltbot • Claude Code • Any AI Agent


Combines 7 proven memory approaches into one bulletproof architecture:

  • Bulletproof WAL Protocol — Write-ahead logging survives compaction
  • LanceDB Vector Search — Semantic recall of relevant memories
  • Git-Notes Knowledge Graph — Structured decisions, branch-aware
  • File-Based Archives — Human-readable MEMORY.md + daily logs
  • Cloud Backup — Optional SuperMemory sync
  • Memory Hygiene — Keep vectors lean, prevent token waste
  • Mem0 Auto-Extraction — Automatic fact extraction, 80% token reduction

Quick Start

# Initialize in your workspace
npx elite-longterm-memory init

# Check status
npx elite-longterm-memory status

# Create today's log
npx elite-longterm-memory today

Architecture

┌─────────────────────────────────────────────────────┐
│              ELITE LONGTERM MEMORY                  │
├─────────────────────────────────────────────────────┤
│  HOT RAM          WARM STORE        COLD STORE     │
│  SESSION-STATE.md → LanceDB      → Git-Notes       │
│  (survives         (semantic       (permanent      │
│   compaction)       search)         decisions)     │
│         │              │                │          │
│         └──────────────┼────────────────┘          │
│                        ▼                           │
│                   MEMORY.md                        │
│               (curated archive)                    │
└─────────────────────────────────────────────────────┘

The 5 Memory Layers

Layer          File/System          Purpose               Persistence
1. Hot RAM     SESSION-STATE.md     Active task context   Survives compaction
2. Warm Store  LanceDB              Semantic search       Auto-recall
3. Cold Store  Git-Notes            Structured decisions  Permanent
4. Archive     MEMORY.md + daily/   Human-readable        Curated
5. Cloud       SuperMemory          Cross-device sync     Optional
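The tiers above behave like a lookup cascade: try the fastest layer first, fall through to slower ones. A minimal sketch (the layer objects and their `lookup` method are hypothetical illustrations, not part of this package's API):

```javascript
// Tiered recall: hot RAM, then warm store, then cold store.
// Layer objects and their lookup() method are illustrative only.
async function recall(query, layers) {
  for (const layer of [layers.hot, layers.warm, layers.cold]) {
    const hit = await layer.lookup(query);
    if (hit !== undefined && hit !== null) return hit; // first tier that answers wins
  }
  return null; // nothing in any tier
}
```

The ordering matters: SESSION-STATE.md is cheapest to read, so it is consulted before any vector search or git lookup.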

The WAL Protocol

Critical insight: Write state BEFORE responding, not after.

User: "Let's use Tailwind for this project"

Agent (internal):
1. Write to SESSION-STATE.md → "Decision: Use Tailwind"
2. THEN respond → "Got it — Tailwind it is..."

If you respond first and crash before saving, context is lost. WAL ensures durability.

Why Memory Fails (And How to Fix It)

Problem              Cause                    Fix
Forgets everything   memory_search disabled   Enable + add OpenAI key
Repeats mistakes     Lessons not logged       Write to memory/lessons.md
Sub-agents isolated  No context inheritance   Pass context in task prompt
Facts not captured   No auto-extraction       Use Mem0 (see below)

Mem0 Auto-Extraction

Auto-extract facts from conversations for roughly 80% token reduction.

# Install and configure
npm install mem0ai
export MEM0_API_KEY="your-key"

const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Auto-extracts facts from messages
await client.add(messages, { user_id: "user123" });

// Retrieve relevant memories  
const memories = await client.search(query, { user_id: "user123" });

For Clawdbot/Moltbot Users

Add to ~/.clawdbot/clawdbot.json:

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"]
  }
}

Files Created

workspace/
├── SESSION-STATE.md    # Hot RAM (active context)
├── MEMORY.md           # Curated long-term memory
└── memory/
    ├── 2026-01-30.md   # Daily logs
    └── ...

Commands

npx elite-longterm-memory init      # Initialize memory system
npx elite-longterm-memory status    # Check health
npx elite-longterm-memory today     # Create today's log
npx elite-longterm-memory help      # Show help

Built by @NextXFrontier
