Initial commit with translated description

2026-03-29 13:08:38 +08:00
commit 00cea8d126
5 changed files with 821 additions and 0 deletions

README.md
# Elite Longterm Memory 🧠
**The ultimate memory system for AI agents.** Never lose context again.
[![npm version](https://img.shields.io/npm/v/elite-longterm-memory.svg?style=flat-square)](https://www.npmjs.com/package/elite-longterm-memory)
[![npm downloads](https://img.shields.io/npm/dm/elite-longterm-memory.svg?style=flat-square)](https://www.npmjs.com/package/elite-longterm-memory)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square)](https://opensource.org/licenses/MIT)
---
## Works With
<p align="center">
<img src="https://img.shields.io/badge/Claude-AI-orange?style=for-the-badge&logo=anthropic" alt="Claude AI" />
<img src="https://img.shields.io/badge/GPT-OpenAI-412991?style=for-the-badge&logo=openai" alt="GPT" />
<img src="https://img.shields.io/badge/Cursor-IDE-000000?style=for-the-badge" alt="Cursor" />
<img src="https://img.shields.io/badge/LangChain-Framework-1C3C3C?style=for-the-badge" alt="LangChain" />
</p>
<p align="center">
<strong>Built for:</strong> Clawdbot • Moltbot • Claude Code • Any AI Agent
</p>
---
Combines 7 proven memory approaches into one bulletproof architecture:
- **Bulletproof WAL Protocol** — Write-ahead logging survives compaction
- **LanceDB Vector Search** — Semantic recall of relevant memories
- **Git-Notes Knowledge Graph** — Structured decisions, branch-aware
- **File-Based Archives** — Human-readable MEMORY.md + daily logs
- **Cloud Backup** — Optional SuperMemory sync
- **Memory Hygiene** — Keep vectors lean, prevent token waste
- **Mem0 Auto-Extraction** — Automatic fact extraction, 80% token reduction
## Quick Start
```bash
# Initialize in your workspace
npx elite-longterm-memory init
# Check status
npx elite-longterm-memory status
# Create today's log
npx elite-longterm-memory today
```
## Architecture
```
┌─────────────────────────────────────────────────────┐
│ ELITE LONGTERM MEMORY │
├─────────────────────────────────────────────────────┤
│ HOT RAM WARM STORE COLD STORE │
│ SESSION-STATE.md → LanceDB → Git-Notes │
│ (survives (semantic (permanent │
│ compaction) search) decisions) │
│ │ │ │ │
│ └──────────────┼────────────────┘ │
│ ▼ │
│ MEMORY.md │
│ (curated archive) │
└─────────────────────────────────────────────────────┘
```
## The 5 Memory Layers
| Layer | File/System | Purpose | Persistence |
|-------|-------------|---------|-------------|
| 1. Hot RAM | SESSION-STATE.md | Active task context | Survives compaction |
| 2. Warm Store | LanceDB | Semantic search | Auto-recall |
| 3. Cold Store | Git-Notes | Structured decisions | Permanent |
| 4. Archive | MEMORY.md + daily/ | Human-readable | Curated |
| 5. Cloud | SuperMemory | Cross-device sync | Optional |
## The WAL Protocol
**Critical insight:** Write state BEFORE responding, not after.
```
User: "Let's use Tailwind for this project"
Agent (internal):
1. Write to SESSION-STATE.md → "Decision: Use Tailwind"
2. THEN respond → "Got it — Tailwind it is..."
```
If you respond first and crash before saving, context is lost. WAL ensures durability.
## Why Memory Fails (And How to Fix It)
| Problem | Cause | Fix |
|---------|-------|-----|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
| Sub-agents isolated | No context inheritance | Pass context in task prompt |
| Facts not captured | No auto-extraction | Use Mem0 (see below) |
## Mem0 Integration (Recommended)
Auto-extract facts from conversations. 80% token reduction.
```bash
npm install mem0ai
export MEM0_API_KEY="your-key"
```
```javascript
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extracts facts from messages
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
```
## For Clawdbot/Moltbot Users
Add to `~/.clawdbot/clawdbot.json`:
```json
{
"memorySearch": {
"enabled": true,
"provider": "openai",
"sources": ["memory"]
}
}
```
## Files Created
```
workspace/
├── SESSION-STATE.md # Hot RAM (active context)
├── MEMORY.md # Curated long-term memory
└── memory/
├── 2026-01-30.md # Daily logs
└── ...
```
## Commands
```bash
elite-memory init # Initialize memory system
elite-memory status # Check health
elite-memory today # Create today's log
elite-memory help # Show help
```
## Links
- [Full Documentation (SKILL.md)](./SKILL.md)
- [ClawdHub](https://clawdhub.com/skills/elite-longterm-memory)
- [GitHub](https://github.com/NextFrontierBuilds/elite-longterm-memory)
---
Built by [@NextXFrontier](https://x.com/NextXFrontier)

SKILL.md
---
name: elite-longterm-memory
version: 1.2.3
description: "The ultimate AI agent memory system for Cursor, Claude, ChatGPT, and Copilot."
author: NextFrontierBuilds
keywords: [memory, ai-agent, ai-coding, long-term-memory, vector-search, lancedb, git-notes, wal, persistent-context, claude, claude-code, gpt, chatgpt, cursor, copilot, github-copilot, openclaw, moltbot, vibe-coding, agentic, ai-tools, developer-tools, devtools, typescript, llm, automation]
metadata:
openclaw:
emoji: "🧠"
requires:
env:
- OPENAI_API_KEY
plugins:
- memory-lancedb
---
# Elite Longterm Memory 🧠
**The ultimate memory system for AI agents.** Combines 6 proven approaches into one bulletproof architecture.
Never lose context. Never forget decisions. Never repeat mistakes.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ ELITE LONGTERM MEMORY │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ HOT RAM │ │ WARM STORE │ │ COLD STORE │ │
│ │ │ │ │ │ │ │
│ │ SESSION- │ │ LanceDB │ │ Git-Notes │ │
│ │ STATE.md │ │ Vectors │ │ Knowledge │ │
│ │ │ │ │ │ Graph │ │
│ │ (survives │ │ (semantic │ │ (permanent │ │
│ │ compaction)│ │ search) │ │ decisions) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ └────────────────┼────────────────┘ │
│ ▼ │
│ ┌─────────────┐ │
│ │ MEMORY.md │ ← Curated long-term │
│ │ + daily/ │ (human-readable) │
│ └─────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ SuperMemory │ ← Cloud backup (optional) │
│ │ API │ │
│ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## The 6 Memory Layers
### Layer 1: HOT RAM (SESSION-STATE.md)
**From: bulletproof-memory**
Active working memory that survives compaction. Write-Ahead Log protocol.
```markdown
# SESSION-STATE.md — Active Working Memory
## Current Task
[What we're working on RIGHT NOW]
## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...
## Pending Actions
- [ ] ...
```
**Rule:** Write BEFORE responding. Writes are triggered by user input, not by the agent remembering to save later.
### Layer 2: WARM STORE (LanceDB Vectors)
**From: lancedb-memory**
Semantic search across all memories. Auto-recall injects relevant context.
```bash
# Auto-recall (happens automatically)
memory_recall query="project status" limit=5
# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9
```
### Layer 3: COLD STORE (Git-Notes Knowledge Graph)
**From: git-notes-memory**
Structured decisions, learnings, and context. Branch-aware.
```bash
# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h
# Retrieve context
python3 memory.py -p $DIR get "frontend"
```
### Layer 4: CURATED ARCHIVE (MEMORY.md + daily/)
**From: OpenClaw native**
Human-readable long-term memory. Daily logs + distilled wisdom.
```
workspace/
├── MEMORY.md # Curated long-term (the good stuff)
└── memory/
├── 2026-01-30.md # Daily log
├── 2026-01-29.md
└── topics/ # Topic-specific files
```
### Layer 5: CLOUD BACKUP (SuperMemory) — Optional
**From: supermemory**
Cross-device sync. Chat with your knowledge base.
```bash
export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."
```
### Layer 6: AUTO-EXTRACTION (Mem0) — Recommended
**NEW: Automatic fact extraction**
Mem0 automatically extracts facts from conversations. 80% token reduction.
```bash
npm install mem0ai
export MEM0_API_KEY="your-key"
```
```javascript
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
```
Benefits:
- Auto-extracts preferences, decisions, facts
- Deduplicates and updates existing memories
- 80% reduction in tokens vs raw history
- Works across sessions automatically
## Quick Setup
### 1. Create SESSION-STATE.md (Hot RAM)
```bash
cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md — Active Working Memory
This file is the agent's "RAM" — survives compaction, restarts, distractions.
## Current Task
[None]
## Key Context
[None yet]
## Pending Actions
- [ ] None
## Recent Decisions
[None yet]
---
*Last updated: [timestamp]*
EOF
```
### 2. Enable LanceDB (Warm Store)
In `~/.openclaw/openclaw.json`:
```json
{
"memorySearch": {
"enabled": true,
"provider": "openai",
"sources": ["memory"],
"minScore": 0.3,
"maxResults": 10
},
"plugins": {
"entries": {
"memory-lancedb": {
"enabled": true,
"config": {
"autoCapture": false,
"autoRecall": true,
"captureCategories": ["preference", "decision", "fact"],
"minImportance": 0.7
}
}
}
}
}
```
### 3. Initialize Git-Notes (Cold Store)
```bash
cd ~/clawd
git init # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start
```
### 4. Verify MEMORY.md Structure
```bash
# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory
```
### 5. (Optional) Setup SuperMemory
```bash
export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence
```
## Agent Instructions
### On Session Start
1. Read SESSION-STATE.md — this is your hot context
2. Run `memory_search` for relevant prior context
3. Check memory/YYYY-MM-DD.md for recent activity
### During Conversation
1. **User gives concrete detail?** → Write to SESSION-STATE.md BEFORE responding
2. **Important decision made?** → Store in Git-Notes (SILENTLY)
3. **Preference expressed?** → `memory_store` with importance=0.9
### On Session End
1. Update SESSION-STATE.md with final state
2. Move significant items to MEMORY.md if worth keeping long-term
3. Create/update daily log in memory/YYYY-MM-DD.md
### Memory Hygiene (Weekly)
1. Review SESSION-STATE.md — archive completed tasks
2. Check LanceDB for junk: `memory_recall query="*" limit=50`
3. Clear irrelevant vectors: `memory_forget id=<id>`
4. Consolidate daily logs into MEMORY.md
## The WAL Protocol (Critical)
**Write-Ahead Log:** Write state BEFORE responding, not after.
| Trigger | Action |
|---------|--------|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |
**Why?** If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
## Example Workflow
```
User: "Let's use Tailwind for this project, not vanilla CSS"
Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it — Tailwind it is..."
```
## Maintenance Commands
```bash
# Audit vector memory
memory_recall query="*" limit=50
# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart
# Export Git-Notes
python3 memory.py -p . export --format json > memories.json
# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/
```
## Why Memory Fails
Understanding the root causes helps you fix them:
| Failure Mode | Cause | Fix |
|--------------|-------|-----|
| Forgets everything | `memory_search` disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
## Solutions (Ranked by Effort)
### 1. Quick Win: Enable memory_search
If you have an OpenAI key, enable semantic search:
```bash
openclaw configure --section web
```
This enables vector search over MEMORY.md + memory/*.md files.
### 2. Recommended: Mem0 Integration
Auto-extract facts from conversations. 80% token reduction.
```bash
npm install mem0ai
```
```javascript
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extract and store
await client.add([
{ role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });
// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });
```
### 3. Better File Structure (No Dependencies)
```
memory/
├── projects/
│ ├── strykr.md
│ └── taska.md
├── people/
│ └── contacts.md
├── decisions/
│ └── 2026-01.md
├── lessons/
│ └── mistakes.md
└── preferences.md
```
Keep MEMORY.md as a summary (<5KB), link to detailed files.
## Immediate Fixes Checklist
| Problem | Fix |
|---------|-----|
| Forgets preferences | Add `## Preferences` section to MEMORY.md |
| Repeats mistakes | Log every mistake to `memory/lessons.md` |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check `OPENAI_API_KEY` is set |
## Troubleshooting
**Agent keeps forgetting mid-conversation:**
→ SESSION-STATE.md not being updated. Check WAL protocol.
**Irrelevant memories injected:**
→ Disable autoCapture, increase minImportance threshold.
**Memory too large, slow recall:**
→ Run hygiene: clear old vectors, archive daily logs.
**Git-Notes not persisting:**
→ Notes are not pushed by default; run `git push origin 'refs/notes/*'` to sync with the remote.
**memory_search returns nothing:**
→ Check OpenAI API key: `echo $OPENAI_API_KEY`
→ Verify memorySearch enabled in openclaw.json
---
## Links
- bulletproof-memory: https://clawdhub.com/skills/bulletproof-memory
- lancedb-memory: https://clawdhub.com/skills/lancedb-memory
- git-notes-memory: https://clawdhub.com/skills/git-notes-memory
- memory-hygiene: https://clawdhub.com/skills/memory-hygiene
- supermemory: https://clawdhub.com/skills/supermemory
---
*Built by [@NextXFrontier](https://x.com/NextXFrontier) — Part of the Next Frontier AI toolkit*

_meta.json
{
"ownerId": "kn7ewywaj7mf48drbjw1baa5298016yv",
"slug": "elite-longterm-memory",
"version": "1.2.3",
"publishedAt": 1770799020241
}

bin/elite-memory.js
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const TEMPLATES = {
'session-state': `# SESSION-STATE.md — Active Working Memory
This file is the agent's "RAM" — survives compaction, restarts, distractions.
Chat history is a BUFFER. This file is STORAGE.
## Current Task
[None]
## Key Context
[None yet]
## Pending Actions
- [ ] None
## Recent Decisions
[None yet]
---
*Last updated: ${new Date().toISOString()}*
`,
'memory-md': `# MEMORY.md — Long-Term Memory
## About the User
[Add user preferences, communication style, etc.]
## Projects
[Active projects and their status]
## Decisions Log
[Important decisions and why they were made]
## Lessons Learned
[Mistakes to avoid, patterns that work]
## Preferences
[Tools, frameworks, workflows the user prefers]
---
*Curated memory — distill insights from daily logs here*
`,
'daily-template': `# {{DATE}} — Daily Log
## Tasks Completed
-
## Decisions Made
-
## Lessons Learned
-
## Tomorrow
-
`
};
const commands = {
init: () => {
console.log('🧠 Initializing Elite Longterm Memory...\n');
// Create SESSION-STATE.md
if (!fs.existsSync('SESSION-STATE.md')) {
fs.writeFileSync('SESSION-STATE.md', TEMPLATES['session-state']);
console.log('✓ Created SESSION-STATE.md (Hot RAM)');
} else {
console.log('• SESSION-STATE.md already exists');
}
// Create MEMORY.md
if (!fs.existsSync('MEMORY.md')) {
fs.writeFileSync('MEMORY.md', TEMPLATES['memory-md']);
console.log('✓ Created MEMORY.md (Curated Archive)');
} else {
console.log('• MEMORY.md already exists');
}
// Create memory directory
if (!fs.existsSync('memory')) {
fs.mkdirSync('memory', { recursive: true });
console.log('✓ Created memory/ directory');
} else {
console.log('• memory/ directory already exists');
}
// Create today's log
const today = new Date().toISOString().split('T')[0];
const todayFile = `memory/${today}.md`;
if (!fs.existsSync(todayFile)) {
const content = TEMPLATES['daily-template'].replace('{{DATE}}', today);
fs.writeFileSync(todayFile, content);
console.log(`✓ Created ${todayFile}`);
}
console.log('\n🎉 Elite Longterm Memory initialized!');
console.log('\nNext steps:');
console.log('1. Add SESSION-STATE.md to your agent context');
console.log('2. Configure LanceDB plugin in clawdbot.json');
console.log('3. Review SKILL.md for full setup guide');
},
today: () => {
const today = new Date().toISOString().split('T')[0];
const todayFile = `memory/${today}.md`;
if (!fs.existsSync('memory')) {
fs.mkdirSync('memory', { recursive: true });
}
if (!fs.existsSync(todayFile)) {
const content = TEMPLATES['daily-template'].replace('{{DATE}}', today);
fs.writeFileSync(todayFile, content);
console.log(`✓ Created ${todayFile}`);
} else {
console.log(`${todayFile} already exists`);
}
},
status: () => {
console.log('🧠 Elite Longterm Memory Status\n');
// Check SESSION-STATE.md
if (fs.existsSync('SESSION-STATE.md')) {
const stat = fs.statSync('SESSION-STATE.md');
console.log(`✓ SESSION-STATE.md (${(stat.size / 1024).toFixed(1)}KB, modified ${stat.mtime.toLocaleString()})`);
} else {
console.log('✗ SESSION-STATE.md missing');
}
// Check MEMORY.md
if (fs.existsSync('MEMORY.md')) {
const stat = fs.statSync('MEMORY.md');
const lines = fs.readFileSync('MEMORY.md', 'utf8').split('\n').length;
console.log(`✓ MEMORY.md (${lines} lines, ${(stat.size / 1024).toFixed(1)}KB)`);
} else {
console.log('✗ MEMORY.md missing');
}
// Check memory directory
if (fs.existsSync('memory')) {
const files = fs.readdirSync('memory').filter(f => f.endsWith('.md'));
console.log(`✓ memory/ (${files.length} daily logs)`);
} else {
console.log('✗ memory/ directory missing');
}
// Check LanceDB
const lancedbPath = path.join(require('os').homedir(), '.clawdbot/memory/lancedb'); // os.homedir() works even when $HOME is unset
if (fs.existsSync(lancedbPath)) {
console.log('✓ LanceDB vectors initialized');
} else {
console.log('• LanceDB not initialized (optional)');
}
},
help: () => {
console.log(`
🧠 Elite Longterm Memory CLI
Commands:
init Initialize memory system in current directory
today Create today's daily log file
status Check memory system health
help Show this help
Usage:
npx elite-longterm-memory init
npx elite-longterm-memory status
`);
}
};
const command = process.argv[2] || 'help';
if (commands[command]) {
commands[command]();
} else {
console.log(`Unknown command: ${command}`);
commands.help();
}

package.json
{
"name": "elite-longterm-memory",
"version": "1.2.3",
"description": "Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol, vector search, git-based knowledge graphs, cloud backup. Never lose context again.",
"keywords": [
"memory",
"ai-agent",
"long-term-memory",
"vector-search",
"lancedb",
"git-notes",
"wal",
"persistent-context",
"claude",
"gpt",
"chatgpt",
"openclaw",
"moltbot",
"cursor",
"copilot",
"github-copilot",
"ai",
"llm",
"automation",
"context-management",
"mem0",
"auto-extraction",
"fact-extraction",
"vibe-coding",
"ai-tools",
"developer-tools",
"devtools",
"typescript"
],
"optionalDependencies": {
"mem0ai": "^1.0.0"
},
"author": "NextFrontierBuilds",
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/NextFrontierBuilds/elite-longterm-memory"
},
"homepage": "https://github.com/NextFrontierBuilds/elite-longterm-memory",
"bugs": {
"url": "https://github.com/NextFrontierBuilds/elite-longterm-memory/issues"
},
"bin": {
"elite-memory": "./bin/elite-memory.js"
},
"files": [
"SKILL.md",
"bin/",
"templates/",
"README.md"
]
}