Metadata-Version: 2.1
Name: rewind-memory
Version: 0.5.7
Summary: Bio-inspired persistent memory for AI agents. 5-layer architecture with FTS5, vector search, knowledge graph, and HybridRAG fusion.
Author-email: SARAI Defence <vova@saraidefence.com>
License: Source-Available-Noncommercial
Project-URL: Homepage, https://saraidefence.com
Project-URL: Documentation, https://saraidefence.com/docs
Project-URL: Repository, https://github.com/saraidefence/rewind-memory
Keywords: memory,ai,agents,rag,bio-inspired,knowledge-graph
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pyyaml>=6.0
Requires-Dist: httpx>=0.24
Requires-Dist: watchdog>=3.0
Provides-Extra: vector
Requires-Dist: sqlite-vec>=0.0.1; extra == "vector"
Provides-Extra: mcp
Requires-Dist: httpx>=0.24; extra == "mcp"
Provides-Extra: proxy
Requires-Dist: uvicorn>=0.27; extra == "proxy"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: ruff>=0.1; extra == "dev"

# 🧠 Rewind Memory

**Bio-inspired persistent memory for AI agents.**

Rewind gives AI agents structured, persistent memory that works like biological memory — keyword search, knowledge graphs, semantic vectors, score fusion, and intelligent retrieval. Every memory is classified by type (user, feedback, project, reference), weighted by recency, and checked for drift. Runs fully local on SQLite with zero external dependencies.

## Architecture

```
┌─────────────────────────────────────────┐
│            L2 Orchestrator              │
│    (fusion · ranking · deduplication)   │
├─────────┬─────────┬─────────┬──────────┤
│   L0    │   L1    │   L3    │    L4    │
│Sensory  │ System  │  Graph  │  Vector  │
│ Buffer  │  Files  │         │  Search  │
│         │         │         │          │
│ SQLite  │  File   │ SQLite  │sqlite-vec│
│  FTS5   │ system  │  Graph  │ (vec0)   │
└─────────┴─────────┴─────────┴──────────┘
```
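
The "fusion" in the orchestrator can be sketched with reciprocal rank fusion (RRF), a standard way to merge ranked lists from heterogeneous backends. This is an illustrative sketch, not Rewind's actual scoring code:

```python
def fuse(ranked_lists, k=60):
    """Reciprocal rank fusion: merge several best-first result lists.

    A document appearing near the top of multiple lists accumulates
    the largest fused score, without needing comparable raw scores.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-layer results for one query:
keyword = ["doc_a", "doc_b", "doc_c"]   # L0 FTS5
graph = ["doc_b", "doc_d"]              # L3 graph
vector = ["doc_b", "doc_a", "doc_e"]    # L4 vectors
fused = fuse([keyword, graph, vector])  # doc_b ranks first: it scores well everywhere
```

Rank-based fusion is a natural fit here because BM25 scores, graph activation, and cosine similarities live on incompatible scales, so no per-layer score normalization is needed.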

## Layers

| Layer | Name | Bio Analogy | Backend | Purpose |
|-------|------|-------------|---------|---------|
| **L0** | Sensory Buffer | Sensory cortex | SQLite FTS5 | Fast keyword search, BM25 ranking |
| **L1** | System Memory | Working memory | File system | Identity, preferences, context — loaded every session |
| **L2** | Orchestrator | Prefrontal cortex | In-process | Fuses keyword + graph + vector results |
| **L3** | Knowledge Graph | Association cortex | SQLite | Entity relationships, spreading activation, Hebbian learning |
| **L4** | Vector Search | Episodic memory | sqlite-vec | Semantic similarity via local embeddings (Ollama) |
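
The L3 row mentions spreading activation and Hebbian learning; both fit in a few lines over a weighted adjacency map. The graph structure and constants below are illustrative assumptions, not Rewind's schema:

```python
def spread(graph, seeds, decay=0.5, hops=2):
    """Spreading activation: propagate relevance outward from seed entities.

    Each hop, a node passes decay * activation * edge_weight to its
    neighbours; the result ranks how strongly entities relate to the seeds.
    """
    activation = {s: 1.0 for s in seeds}
    frontier = dict(activation)
    for _ in range(hops):
        nxt = {}
        for node, act in frontier.items():
            for neigh, weight in graph.get(node, {}).items():
                nxt[neigh] = nxt.get(neigh, 0.0) + decay * act * weight
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return activation

def hebbian_update(graph, a, b, lr=0.1):
    """Hebbian rule: entities retrieved together strengthen their edge."""
    w = graph.setdefault(a, {}).get(b, 0.0)
    graph[a][b] = w + lr * (1.0 - w)
    graph.setdefault(b, {})[a] = graph[a][b]

graph = {"auth": {"oauth2": 0.8}, "oauth2": {"auth": 0.8, "pkce": 0.5}}
activation = spread(graph, ["auth"])  # "pkce" gains activation via "oauth2"
```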

## Quick Start

```bash
pip install rewind-memory
rewind doctor                    # Auto-diagnose and fix memory stack
rewind ingest-chats              # Backfill historical OpenClaw conversations
rewind watch                     # Real-time conversation indexing (L0 keyword search)
rewind search "what did we discuss about X"
```

> **Full walkthrough:** [docs/QUICKSTART.md](docs/QUICKSTART.md) — 5 minutes to first search.

Or, step by step with vector search enabled:

```bash
# 1. Install Ollama for vector search (optional but recommended)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nomic-embed-text  # ~274MB

# 2. Install Rewind
pip install rewind-memory

# 3. Ingest your files
rewind ingest ~/my-project/docs/

# 4. Search
rewind search "what was decided about auth"
```

## Commands

| Command | Description |
|---------|-------------|
| `rewind ingest <path>` | Ingest files into memory |
| `rewind search <query>` | Search memory |
| `rewind health` | Health check |
| `rewind doctor` | Auto-diagnose and fix issues |
| `rewind watch` | Real-time session watcher (added in 0.4.7) |
| `rewind ingest-chats` | Historical conversation backfill (added in 0.4.7) |
| `rewind watch-sessions` | Real-time conversation capture — indexes sessions as they happen |
| `rewind serve` | API server + background file watcher |
| `rewind remember <text>` | Store a manual note in memory |
| `rewind bench` | Run LoCoMo benchmark |
| `rewind proxy` | Memory-augmented LLM proxy |
| `rewind migrate` | Migrate backends (Pro) |

## Free vs Pro

| Feature | Free | Pro |
|---------|------|-----|
| L0 Keyword Search (FTS5) | ✅ | ✅ |
| L3 Knowledge Graph (SQLite) | ✅ | ✅ |
| L4 Vector Search (sqlite-vec) | ✅ (Ollama) | ✅ (NV-Embed-v2) |
| `rewind doctor` | ✅ | ✅ |
| `rewind watch` | ✅ (L0) | ✅ (L0 + L5) |
| `rewind watch-sessions` | ✅ (L0 + L3) | ✅ (L0 + L3 + L5) |
| `rewind ingest-chats` | ✅ (L0) | ✅ (L0 + L5) |
| OpenClaw gateway autopatcher | ✅ | ✅ |
| Memory Proxy | ✅ | ✅ |
| L5 Semantic Search (Qdrant) | — | ✅ |
| L6 Document Store | — | ✅ |
| Multi-channel awareness | — | ✅ |
| Cloud GPU embeddings | — | ✅ |
| Cross-encoder reranking | — | ✅ |
| Memory extraction (post-turn) | — | ✅ |
| Neo4j Knowledge Graph | — | ✅ |

### Usage

```python
from rewind.client import RewindClient
from rewind.config import default_config

client = RewindClient(default_config("free"))

# Search across all layers
results = client.search("what is Hebbian learning")
for r in results:
    print(f"{r.score:.2f} [{r.layer}] {r.text[:80]}")

# Ingest a document
client.ingest("notes/meeting.md")

# Health check
print(client.health())
```

### CLI

```bash
# Search
rewind search "memory consolidation" --limit 10

# Ingest files
rewind ingest ./documents/

# Health check
rewind health
```

### Integrations (Claude Code, Cursor, OpenClaw)

Rewind ships with an MCP server that works with any MCP-compatible tool:

**Claude Code** — add to `~/.claude/settings.json`:
```json
{
  "mcpServers": {
    "rewind": {
      "command": "rewind-mcp"
    }
  }
}
```

**OpenClaw** — native hook (recommended) or gateway patch:
```bash
# Route memory_search through Rewind
rewind-openclaw setup --url http://localhost:8031

# Native hook (recommended — survives OpenClaw updates)
rewind-openclaw hook
rewind-openclaw hook --verify
rewind-openclaw hook --remove

# Gateway patch (legacy — needs re-apply after updates)
rewind-openclaw patch
```

**Cursor / Windsurf / Cline** — add `rewind-mcp` as an MCP server in settings.

See [docs/INTEGRATIONS.md](docs/INTEGRATIONS.md) for full setup guides.

### Memory Proxy (zero-config, works with any tool)

The proxy auto-injects memory into every LLM call. No MCP needed — just change your API URL:

```bash
pip install rewind-memory[proxy]

# Ingest your project first
rewind ingest ./my-project/

# Start the memory proxy
rewind proxy --port 8080
```

Then point your tool at it:

```bash
# Cursor
OPENAI_BASE_URL=http://localhost:8080/v1 cursor .

# Aider
OPENAI_API_BASE=http://localhost:8080/v1 aider

# Any OpenAI-compatible tool
export OPENAI_BASE_URL=http://localhost:8080/v1
```

The proxy:
- Searches memory on every prompt (keyword + graph + vector fusion)
- Injects relevant context into the system message
- Auto-budgets injection size based on model context limits
- Passes through your API key to the upstream provider
- Supports streaming responses
- Works with OpenAI, Anthropic, NVIDIA, any OpenAI-compatible API

```bash
# Point at Anthropic instead of OpenAI
rewind proxy --port 8080 --upstream https://api.anthropic.com/v1

# Point at a local model
rewind proxy --port 8080 --upstream http://localhost:11434/v1
```
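
The auto-budgeting step can be sketched as: reserve most of the window for the conversation, then greedily pack the highest-ranked memory snippets into what remains. The reserve ratio and the 4-chars-per-token estimate are illustrative assumptions, not Rewind's actual heuristics:

```python
def budget_injection(snippets, context_limit, reserve_ratio=0.75):
    """Pack relevance-sorted memory snippets into the leftover token budget.

    Tokens are approximated as ~4 characters each (a rough heuristic);
    packing stops at the first snippet that would overflow the budget.
    """
    budget = int(context_limit * (1 - reserve_ratio))
    chosen, used = [], 0
    for text in snippets:  # assumed already sorted best-first
        cost = max(1, len(text) // 4)
        if used + cost > budget:
            break
        chosen.append(text)
        used += cost
    return chosen

memories = ["auth decision: use OAuth2 with PKCE", "meeting notes " * 500]
injected = budget_injection(memories, context_limit=128)  # only the short snippet fits
```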

## Configuration

```yaml
# ~/.rewind/config.yaml
tier: free
workspace_path: ~/my-project

embedding:
  provider: ollama
  model: nomic-embed-text
  url: http://localhost:11434

layers:
  l0_sensory: true
  l1_stm: true
  l3_graph: true
  l4_workspace: true
```
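
A loader for the config above takes a few lines with `pyyaml` (already a core Rewind dependency). The `load_config` helper and its defaults are a sketch, not part of the public API:

```python
import yaml  # pyyaml, a core Rewind dependency
from pathlib import Path

def load_config(path="~/.rewind/config.yaml"):
    """Read the YAML config shown above, defaulting missing layer toggles to on."""
    with open(Path(path).expanduser()) as f:
        cfg = yaml.safe_load(f) or {}
    cfg.setdefault("tier", "free")
    cfg.setdefault("layers", {})
    for layer in ("l0_sensory", "l1_stm", "l3_graph", "l4_workspace"):
        cfg["layers"].setdefault(layer, True)
    return cfg
```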

## Upgrading

Rewind Core is free and fully functional as a local memory stack. For additional layers (communications, documents, bio lifecycle) and cloud GPU embeddings, see [Rewind Pro](https://saraidefence.com/#pricing).

Pro extends the Core package via a plugin — install both and Pro layers activate automatically:

```python
from rewind.client import RewindClient
from rewind.config import default_config

client = RewindClient(default_config("pro"), api_key="rw_live_...")
```

| Feature | Core (Free) | Pro ($9/mo) | MOS (Custom) |
|---------|-------------|-------------|--------------|
| Keyword search (FTS5) | ✅ | ✅ | ✅ |
| System memory | ✅ | ✅ | ✅ |
| Knowledge graph (SQLite) | ✅ | ✅ | ✅ |
| Semantic vector search | ✅ local Ollama | ✅ cloud GPU (A10G) | ✅ custom GPU |
| HybridRAG fusion | ✅ 5-layer (L0–L4) | ✅ full 7-layer (L0–L6) | ✅ full 7-layer |
| **Memory type taxonomy** | ✅ (user/feedback/project/reference) | ✅ | ✅ |
| **Recency weighting** | ✅ (type-aware decay) | ✅ | ✅ |
| **Query-intent matching** | ✅ (boosts matching types) | ✅ | ✅ |
| **Memory drift detection** | ✅ (flags stale references) | ✅ | ✅ |
| **OpenClaw gateway autopatcher** | ✅ (pre-turn memory injection) | ✅ | ✅ |
| Communications memory (L5) | — | ✅ | ✅ |
| Document store (L6) | — | ✅ | ✅ |
| **LLM relevance selection** | — | ✅ (cheap model side-query) | ✅ |
| **Cross-encoder reranking** | — | ✅ (GPU-accelerated) | ✅ |
| **Memory extraction (post-turn)** | — | ✅ (auto-extracts durable memories) | ✅ |
| **Partial compaction** | — | ✅ (preserves task intent) | ✅ |
| Cloud GPU embeddings (NV-Embed-v2) | — | 25K/mo included | custom volume rates |
| Retrieval feedback learning | — | ✅ | ✅ |
| Migration assist (→ Neo4j/Qdrant) | — | ✅ | ✅ |
| Bio lifecycle (decay, pruning) | — | — | ✅ |
| Air-gapped deployment | — | — | ✅ |
| Dedicated engineer + SLA | — | — | ✅ |
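
The type-aware recency decay in the table can be modelled as exponential decay with per-type half-lives. The half-life values here are illustrative assumptions, not Rewind's tuning:

```python
# Hypothetical half-lives in days: feedback ages fastest, reference barely decays.
HALF_LIFE_DAYS = {"user": 90, "feedback": 14, "project": 30, "reference": 365}

def recency_weight(age_days, mem_type):
    """Exponential decay: the weight halves every HALF_LIFE_DAYS[mem_type] days."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[mem_type])

# A two-week-old feedback memory has lost half its weight,
# while a two-week-old reference memory is still near full strength.
```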

## Real-Time Conversation Capture

Capture conversations as they happen — no manual backfill needed:

```bash
# Watch all OpenClaw session files, index new turns into L0 + L3 + L5
rewind watch-sessions

# Custom session directory
rewind watch-sessions --session-dir /path/to/sessions

# With specific backends
rewind watch-sessions --qdrant-url http://localhost:6333 --embed-url http://localhost:8041/v1/embeddings
```

`watch-sessions` uses `watchdog` to monitor OpenClaw session JSONL files. When new conversation turns are written, they're immediately indexed into:
- **L0** (BM25 keyword search)
- **L3** (Knowledge graph — entity extraction + co-occurrence edges)
- **L5** (Qdrant semantic vectors — if available)

This complements the gateway pre-turn hook: the hook **reads** memory before each turn, `watch-sessions` **writes** new conversations into memory after each turn. Together they form a closed loop.

> **Requires:** `pip install 'watchdog>=3.0'`
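
The offset-tracking core of such a watcher fits in a short class. In the real command a watchdog `on_modified` event would drive it; here it is a plain method, and `index_turn` stands in for the L0/L3/L5 writes (a sketch, not Rewind's implementation):

```python
import json

class SessionTailer:
    """Track per-file byte offsets so each call only indexes new JSONL turns.

    Assumes writers append whole lines, as OpenClaw session files do.
    """

    def __init__(self, index_turn):
        self.offsets = {}        # path -> bytes already consumed
        self.index_turn = index_turn

    def on_change(self, path):
        # Binary mode keeps seek/tell byte-exact across calls.
        with open(path, "rb") as f:
            f.seek(self.offsets.get(path, 0))  # skip turns indexed earlier
            while True:
                line = f.readline()
                if not line:
                    break
                if line.strip():
                    self.index_turn(json.loads(line))
            self.offsets[path] = f.tell()
```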

## Development

```bash
git clone https://github.com/saraidefence/rewind-memory.git
cd rewind-memory
pip install -e ".[dev]"
pytest tests/
```

## Patent

The 7-layer bio-inspired memory architecture is patent pending.

## License

Source-available, noncommercial. See [LICENSE](LICENSE) for full terms.

---

Built by [SARAI Defence](https://saraidefence.com). Ukrainian-built. Patent pending.
