OpenClaw Setup Guide
Infrastructure

Local Memory Search

On-device semantic search for memory recall

Uses a local embedding model to search memory and workspace files semantically. When the agent needs to recall something, it searches on-device -- no API calls, no network round-trips, no per-query cost.

Setup

Download a GGUF embedding model (I use Qwen3-Embedding-4B) and configure:

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "local",
        "local": {
          "modelPath": "~/.openclaw/models/Qwen3-Embedding-4B-Q8_0.gguf"
        },
        "sync": {
          "watch": true
        }
      }
    }
  }
}

The watch: true setting keeps the index up to date as memory and workspace files change. The Q8_0 build of Qwen3-Embedding-4B is roughly 4GB on disk; if RAM is tight, pick a smaller model or a lower-bit quantization.
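Conceptually, a watch-style sync just re-embeds files whose modification time changed since the last pass. A rough polling sketch of that bookkeeping (illustrative only -- OpenClaw's actual watcher may use filesystem events rather than polling):

```python
import os

def scan(root: str) -> dict[str, float]:
    # Map each file path under root to its last-modified time.
    out = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            out[path] = os.path.getmtime(path)
    return out

def changed(prev: dict[str, float], cur: dict[str, float]) -> list[str]:
    # Files that are new, or whose mtime moved, need re-embedding;
    # everything else keeps its cached vector.
    return [p for p, mtime in cur.items() if prev.get(p) != mtime]
```

An indexer would call scan periodically (or on filesystem events), re-embed only the paths changed returns, and drop vectors for paths that disappeared.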
