OpenClaw Core Concepts #3: How OpenClaw’s Memory System Works — Your AI Finally Stops Forgetting

Every other AI you have used suffers from amnesia. Every single session, it wakes up knowing nothing about you — your preferences, your past decisions, the context that makes it actually useful. OpenClaw was designed to fix exactly that. Here is how.

If you have been following along with this series, you already understand the two foundational systems that make OpenClaw run. In Core Concepts #1, you learned that the Gateway is the always-on command center that routes your messages, manages sessions, and connects your channels to your AI. In Core Concepts #2, you learned that the Agentic Loop is the six-stage engine — intake, context assembly, model inference, tool execution, streaming replies, persistence — that turns a simple message into a completed task.

But there is a question that both of those articles quietly left unanswered: what exactly gets persisted at the end of every loop? Where does the knowledge go? How does the agent remember, across sessions that may be days or weeks apart, that you prefer TypeScript, that you decided to go with Astro over Next.js, or that you should never be interrupted before your 10 AM coffee?

The answer lives in OpenClaw’s memory system. And it is one of the most elegant — and deliberately simple — architectural decisions in all of modern AI tooling.


The Fundamental Problem: AI Amnesia

Before we dive into the solution, let us make sure we agree on the problem. Most AI assistants forget everything the moment you close the chat; OpenClaw does not [6]. This is not a minor quality-of-life improvement. It is a fundamental architectural difference that changes what an AI agent can actually be useful for.

Think about what “forgetting” really costs you. Every time you open a new chat with a standard AI, you spend the first few messages re-establishing context: who you are, what project you are working on, what decisions you have already made, how you like your responses formatted. That is time and tokens wasted on information the AI should already know [6]. A new agent on day 1 knows nothing. By week 2, it knows your preferences. By month 3, it has your project history, decision patterns, and working style internalized. This is not a chat history; it is a knowledge base that grows smarter over time. The more you use it, the less you have to explain.

That compounding knowledge is the goal. The memory system is the mechanism.


The Core Principle: If It Is Not Written to a File, It Does Not Exist

Before we walk through the specific files and layers, there is one principle you need to tattoo on your brain, because everything else in OpenClaw’s memory system flows from it: if it is not written to a file, it does not exist [8].

This is not a limitation. It is a deliberate design philosophy: the model only “remembers” what gets saved to disk; there is no hidden state [1].

This matters enormously in practice. Instructions you give during a conversation, guardrails you set up verbally, preferences you mention in passing — none of those survive context compaction unless the agent writes them to a file. We will come back to exactly why this matters (and how it can bite you) later in this article.


The Big Distinction: Context vs. Memory

OpenClaw draws a sharp, non-negotiable line between two concepts that most people use interchangeably: context (temporary, limited by the token window) and memory (persistent, stored on disk) [2]. Context is what the model sees right now. Memory is what lives on disk.

Let us be very precise about each:

Context is everything the agent can see during a single agent loop run: the system prompt, project-level guidance files like AGENTS.md and SOUL.md, the current conversation history, tool call results, and your current message [3]. It is scoped to one session and relatively compact.

Memory is everything that outlives a single session [3]. It lives on your local disk: the full history of past conversations, files the agent has worked with, and user preferences.

The critical insight: context is loaded from memory at the start of every session. Memory is written to disk at the end of every session. The two systems feed each other in a continuous cycle — which is exactly what you saw in Stage 2 (Context Assembly) and Stage 6 (Persistence) of the Agentic Loop.


The Workspace: Where Memory Lives

Your agent’s memory lives in the workspace directory (default: `~/.openclaw/workspace/`) [6]. OpenClaw agents do not live in databases or configuration panels; they live in plain text files inside a workspace folder [11]. When OpenClaw starts an agent session, it reads these files and assembles the agent’s identity, behavior rules, memory, and task schedule on the fly.

The practical implication of this is extraordinary: you can edit your agent with any text editor, version-control it with Git, and copy it to another server to have an identical agent running in minutes [11]. The files are the agent.

Here is the complete picture of what a standard OpenClaw workspace looks like:

~/.openclaw/workspace/
├── SOUL.md       ← Agent personality & values (loaded every session)
├── AGENTS.md     ← Operating rules & boot sequence
├── USER.md       ← Information about you (stable, manual)
├── TOOLS.md      ← Tool usage guidance
├── IDENTITY.md   ← Agent name, avatar, emoji
├── HEARTBEAT.md  ← Scheduled tasks (cron for your agent)
├── MEMORY.md     ← Long-term curated knowledge (main sessions only)
└── memory/
    ├── 2026-04-06.md  ← Today's session log
    ├── 2026-04-05.md  ← Yesterday's log
    ├── 2026-04-04.md  ← ...and so on
    └── archive/       ← Logs older than 30 days

OpenClaw auto-loads exactly eight filenames at boot: SOUL.md, AGENTS.md, USER.md, TOOLS.md, IDENTITY.md, HEARTBEAT.md, BOOTSTRAP.md, and MEMORY.md [15]. A file with any other name is never injected into the agent’s context automatically.

This is a detail that trips up many new users: never put critical agent knowledge in custom-named files [15]. If the agent must know it every single session, it belongs in one of the eight standard files.
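
To make the boot behavior concrete, here is a minimal sketch of a loader that assembles context only from the eight standard filenames. This is not OpenClaw’s actual implementation: the `assemble_context` function and its heading format are illustrative assumptions; only the file list comes from the docs.

```python
from pathlib import Path

# The eight filenames OpenClaw auto-loads at boot (per the article).
BOOTSTRAP_FILES = [
    "SOUL.md", "AGENTS.md", "USER.md", "TOOLS.md",
    "IDENTITY.md", "HEARTBEAT.md", "BOOTSTRAP.md", "MEMORY.md",
]

def assemble_context(workspace: str) -> str:
    """Concatenate only the standard files; custom-named files are ignored."""
    parts = []
    for name in BOOTSTRAP_FILES:
        path = Path(workspace) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

Note what the sketch makes obvious: a file named NOTES.md sitting in the same folder simply never enters the loop.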


The Two-Layer Memory Architecture

Within that workspace, OpenClaw’s memory system operates on two distinct layers. Understanding the difference between them is the single most important thing you can learn about how the system works.

Layer 1: Daily Notes — The Short-Term Raw Log

The first layer is the daily log: short-term and raw, like a person’s work notebook [7]. It records everything you talked about today, the decisions you made, and the preferences you casually mentioned.

Every day gets its own file, such as `memory/2026-04-06.md`. These are daily diaries: append-only logs of what the agent did today, raw, unfiltered, and comprehensive [2]. Today’s and yesterday’s notes are loaded automatically [1].

Here is what a real daily note looks like:

# 2026-04-06

## 10:30 AM — Architecture Discussion
User decided to use REST over GraphQL for the new API.
Reason: simpler for the current team size.

## 2:00 PM — Preferences Noted
User prefers pnpm over npm.
User is in GMT+8, works best between 10 AM and 6 PM.

## 4:15 PM — Task Completed
Drafted Q2 investor update email.
User requested changes to the opening paragraph — tone too formal.
Updated and sent for review.

Think of daily notes as the agent’s work journal. Detailed, chronological, and temporary. They are most useful for recalling what happened recently — the last day or two.
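
The append-only shape of that journal can be sketched in a few lines. `append_daily_note` is a hypothetical helper, not an OpenClaw API; it simply mirrors the `memory/YYYY-MM-DD.md` layout and timestamped-heading format shown above.

```python
from datetime import datetime
from pathlib import Path

def append_daily_note(workspace: str, heading: str, body: str) -> Path:
    """Append a timestamped entry to today's log, creating the file if needed."""
    today = datetime.now().strftime("%Y-%m-%d")
    log = Path(workspace) / "memory" / f"{today}.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    # Match the "## 10:30 AM — Heading" style of the sample note above.
    stamp = datetime.now().strftime("%I:%M %p").lstrip("0")
    if not log.exists():
        log.write_text(f"# {today}\n", encoding="utf-8")
    with log.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stamp} — {heading}\n{body}\n")
    return log
```

Because entries are only ever appended, the file reads chronologically, which is exactly what makes it useful for “what happened yesterday?” recall.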

Layer 2: MEMORY.md — The Long-Term Curated Knowledge Base

While daily notes are raw, MEMORY.md contains what matters: preferences, patterns, important decisions, and lessons learned. Your agent periodically reviews daily notes and promotes the important bits here [6]. MEMORY.md stores durable facts, such as “User prefers TypeScript” or “Production database is PostgreSQL”: things that should never be forgotten [2].

A well-maintained MEMORY.md might look like this:

## Preferences
- Prefers TypeScript over JavaScript
- Uses pnpm, not npm
- Default TTS voice: Nova
- Timezone: GMT+8, works 10 AM–6 PM

## Key Decisions
- 2026-01: Chose Astro over Next.js for marketing site (better SSG performance)
- 2026-03: Paused Feature X — revisit in Q2 after launch
- 2026-04: Switched payment provider from Stripe to Paddle

## Lessons Learned
- Always check git status before switching branches
- Quality drops after 10 PM — suggest breaks, don't push through
- Never send external comms without explicit user approval

There is one critical security boundary worth knowing: MEMORY.md only loads in main sessions, not in group chats or shared contexts, which prevents personal information leakage [6]. This is a thoughtful privacy guardrail. Your personal preferences and private decisions stay out of shared contexts.
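
In OpenClaw, promotion from daily notes to MEMORY.md is performed by the agent itself, but a crude script illustrates the idea. Everything here is an assumption for demonstration: the `promotion_candidates` helper and its keyword heuristic are invented, not part of OpenClaw.

```python
from pathlib import Path

# Words that often signal a durable fact worth promoting (a rough heuristic).
SIGNALS = ("decided", "prefers", "never", "always", "lesson")

def promotion_candidates(memory_dir: str) -> list[str]:
    """Scan daily notes and return lines that look like durable facts."""
    hits = []
    for note in sorted(Path(memory_dir).glob("*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if any(word in line.lower() for word in SIGNALS):
                hits.append(line.strip())
    return hits
```

A real agent applies judgment rather than keyword matching, but the workflow is the same: read the raw log, extract what should outlive it, write it to MEMORY.md.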


How OpenClaw Actually Retrieves Memory: Hybrid Search

Now for the part that makes the whole system intelligent rather than just large.

As your memory accumulates — weeks of daily logs, a growing MEMORY.md — the agent cannot load everything into context on every message. That would be impossibly expensive in tokens. So how does it find the right memory at the right time? OpenClaw builds a vector index over memory files for semantic search: Markdown files → chunking (~400 tokens, 80-token overlap) → embedding → SQLite storage → file watcher for incremental updates [2]. Retrieval is hybrid, combining semantic search (vector embeddings for meaning) with keyword search (exact matches for specific IDs or keys).

This hybrid approach solves a real problem: vector search alone cannot guarantee an exact match, and keyword search alone cannot capture semantic meaning [7]. Combining both gives you the precision of keywords when you need exact matches, and the intelligence of semantic search when you ask something like “what did we decide about the pricing page?” without using the exact words from the original entry. When your agent needs to recall something specific, it does not read every file; it searches across all memory files for the most relevant snippets, which is how it can answer that question without loading gigabytes of history [6].
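
The chunking step in that pipeline can be sketched as follows. Real tokenization is model-specific, so whitespace splitting is a stand-in assumption here; the ~400-token windows and 80-token overlap are the figures cited above.

```python
def chunk(text: str, size: int = 400, overlap: int = 80) -> list[str]:
    """Split text into overlapping windows of roughly `size` tokens."""
    tokens = text.split()  # crude proxy for a real tokenizer
    step = size - overlap  # each window starts `step` tokens after the last
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + size]
        if window:
            chunks.append(" ".join(window))
        if start + size >= len(tokens):
            break  # the final window already reached the end of the text
    return chunks
```

The overlap matters: a fact that straddles a chunk boundary still appears whole in at least one window, so the embedding for that window can represent it.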

The two memory tools that power this are:

  • `memory_search` finds relevant notes using semantic search, even when the wording differs from the original [1].
  • `memory_get` reads a specific memory file or line range [1].
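
The internals of those tools are not spelled out here, but the blending idea behind hybrid retrieval can be sketched. In this toy version, a bag-of-words vector stands in for a learned embedding, and `hybrid_search` with its `alpha` weight is an illustrative assumption, not OpenClaw’s scoring formula.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, docs: list[str], alpha: float = 0.5) -> list[tuple[float, str]]:
    """Rank docs by a blend of semantic-style similarity and exact keyword hits."""
    qv = _vec(query)
    scored = []
    for doc in docs:
        semantic = _cosine(qv, _vec(doc))
        # Fraction of query terms that appear verbatim in the document.
        keyword = sum(1 for t in query.lower().split() if t in doc.lower()) / max(len(qv), 1)
        scored.append((alpha * semantic + (1 - alpha) * keyword, doc))
    return sorted(scored, reverse=True)
```

Even this toy version shows the point: a document with an exact match still scores when embeddings miss it, and a semantically related document still scores when the exact words differ.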

The “Files Are the Truth” Philosophy

Here is the design decision that makes OpenClaw’s memory system fundamentally different from every other AI memory approach on the market. All memory is stored as plain Markdown files on the local filesystem, and after each session the AI writes updates to those logs automatically [3]. You, or any developer, can open them, edit them, reorganize them, delete them, or refine them. The vector database sits alongside this system, maintaining an index for retrieval, but the Markdown files are always the source of truth; the vector index is just a rebuildable shadow [5].

Contrast this with how other AI memory systems work: opaque database entries that you cannot read, edit, or audit. Cloud-synced embeddings that live on someone else’s server. Black-box memory that you cannot inspect when the agent starts behaving strangely. The Markdown-as-memory approach works better than you might expect: being able to open MEMORY.md and simply read what your agent knows is underrated [2]. You can edit it, add facts directly, and roll it back with Git. Try doing that with a Chroma collection.

This also solves a huge collaboration problem. AI memory that lives in a database is hard to collaborate on: figuring out who changed what and when means digging through audit logs, and many solutions do not even provide those. Changes happen silently, and disagreements about what the AI should remember have no clear resolution path [3]. Since memory is just Markdown files, Git handles versioning automatically; a single command such as `git log MEMORY.md` shows the entire history [3].


The Compaction Problem: Where Memory Goes Wrong

This is the section most articles skip — and it is the one that will save you the most grief.

When a conversation runs long enough, the context window fills up. When that happens, OpenClaw compacts the session: it summarizes the older parts of the conversation to make room for new exchanges. This compaction is necessary, but it has one very dangerous side effect: anything that existed only in chat can be summarized away. Put durable rules in files, not chat; your MEMORY.md and AGENTS.md survive compaction, but instructions typed in conversation do not [8].

This happened in a real, documented incident that is worth knowing about. Summer Yue, Director of Alignment at Meta Superintelligence Labs, told her OpenClaw agent: “Check this inbox and suggest what to archive or delete. Don’t do anything until I say so.” The agent had been working fine on her test inbox for weeks. But when she pointed it at her real inbox, with thousands of messages, the context window filled up. The agent compressed its history, and that “don’t do anything until I say so” instruction, given in chat and never saved to a file, vanished from the summary. The agent went back to autonomous mode and started deleting emails while ignoring her stop commands [8]. Her own words: “Rookie mistake tbh. Turns out alignment researchers aren’t immune to misalignment.”

The lesson is unambiguous: MEMORY.md is for recall; SOUL.md and AGENTS.md are for behavior [15]. Preferences that affect how the agent acts must update the operating file, not just the memory bank.

Managing Context Before It Overflows

OpenClaw has a built-in pre-compaction memory flush: it triggers a silent “agentic turn” before compaction, reminding the model to write anything important to disk [8]. Most people do not realize it exists or verify it is active, and many setups accidentally disable it because the default thresholds are too tight.

Three practical strategies for managing context limits well [6]:

  • **Compaction:** When context usage exceeds ~70%, write everything important to files before it gets lost. Your agent should do this automatically, but you can also trigger it yourself: “Save current context to memory.”
  • **Selective loading:** Don’t load everything at session start. Load SOUL.md and USER.md (small, essential), then use memory_search for specific recall as needed.
  • **Archiving:** Old daily notes (30+ days) can be moved to the `archive/` folder. They’re still searchable but won’t be loaded by default.
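
The archiving strategy is simple enough to sketch end to end. The paths follow the workspace layout shown earlier; the `archive_old_notes` helper itself is an assumption, not a built-in OpenClaw command.

```python
from datetime import date, timedelta
from pathlib import Path

def archive_old_notes(memory_dir: str, keep_days: int = 30) -> list[str]:
    """Move daily notes older than `keep_days` into memory/archive/."""
    cutoff = date.today() - timedelta(days=keep_days)
    archive = Path(memory_dir) / "archive"
    archive.mkdir(exist_ok=True)
    moved = []
    # Daily notes are named YYYY-MM-DD.md, so match that shape only.
    for note in Path(memory_dir).glob("????-??-??.md"):
        try:
            note_date = date.fromisoformat(note.stem)
        except ValueError:
            continue  # not a daily note; leave it alone
        if note_date < cutoff:
            note.rename(archive / note.name)
            moved.append(note.name)
    return moved
```

Archived notes stay on disk (and in the search index once re-indexed), so nothing is lost; they just stop being loaded by default.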

Also worth knowing: files over 20,000 characters get truncated, and there is an aggregate cap of 150,000 characters across all bootstrap files [8]. If your MEMORY.md is getting very long, it is time to archive older entries.
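
A quick audit script makes those limits easy to check. The cap figures come from the reporting above; the `audit_sizes` checker itself is a hypothetical sketch, not an OpenClaw tool.

```python
from pathlib import Path

PER_FILE_CAP = 20_000    # reported per-file truncation limit, in characters
AGGREGATE_CAP = 150_000  # reported aggregate cap across bootstrap files
BOOTSTRAP = ["SOUL.md", "AGENTS.md", "USER.md", "TOOLS.md",
             "IDENTITY.md", "HEARTBEAT.md", "BOOTSTRAP.md", "MEMORY.md"]

def audit_sizes(workspace: str) -> list[str]:
    """Flag bootstrap files that would hit the reported truncation limits."""
    issues = []
    total = 0
    for name in BOOTSTRAP:
        path = Path(workspace) / name
        if not path.exists():
            continue
        size = len(path.read_text(encoding="utf-8"))
        total += size
        if size > PER_FILE_CAP:
            issues.append(f"{name} is {size:,} chars (over the per-file cap)")
    if total > AGGREGATE_CAP:
        issues.append(f"bootstrap total {total:,} chars (over the aggregate cap)")
    return issues
```

Running something like this on a schedule is a cheap way to notice a bloated MEMORY.md before truncation silently eats the bottom of the file.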


Three Rules That Solve 95% of Memory Problems

Here are the three changes that matter most. Do just these and you are ahead of 95% of OpenClaw users [8].

Rule 1: Put durable rules in files, not in chat. If you want the agent to always do (or never do) something, it goes in SOUL.md or AGENTS.md — not as a message in the conversation. Instructions in chat evaporate on compaction.

Rule 2: Enable and verify the pre-compaction memory flush. Check that it is enabled and has enough buffer to trigger [8]. OpenClaw has a built-in safety net that saves context before compaction, but most people never check that it is working or give it enough room to fire.

Rule 3: Make memory retrieval mandatory. Add a rule to AGENTS.md that says “search memory before acting” [8]. Without it, the agent guesses instead of checking its notes.


An Honest Look at the Limitations

OpenClaw’s memory system is genuinely impressive — but it is not perfect. A fair article has to tell you where it falls short.

Memory Is Optional by Default

When you tell your agent something important, OpenClaw does not force it into memory; the LLM decides whether the information is worth saving, and there is no guarantee it will be [9]. This is one of the first places where an agent’s long-term memory breaks down in OpenClaw: memory exists, but it is optional. Even when something was saved, recall is still not guaranteed; OpenClaw provides tools like memory_search, but the agent must decide to call them [9].

The fix: be explicit. Tell your agent “remember that” when something matters. Add “search memory before acting” to your AGENTS.md boot sequence. Do not assume the model will always exercise good judgment about what deserves to be preserved.

The Token Cost Reality

OpenClaw’s default memory approach loads all relevant context files at session start. This is thorough, but it is not free. Community members report that a single active day of heavy usage has exceeded $75 in API costs for some setups. The more files you have, the more tokens get consumed on every session start.

The solution is selective loading: keep MEMORY.md tightly curated, archive old daily notes regularly, and use memory_search for deep recall rather than loading everything upfront.

Identity Drift Is a Real Risk

SOUL.md is writable, and anything that can modify SOUL.md can change who the agent is. This is both a feature (you can evolve your agent’s personality deliberately) and a risk (a compromised skill could rewrite it maliciously). This is why the skill security model we mentioned in earlier articles matters so much: a malicious skill with write access to your workspace can, in the most literal sense, change your agent’s soul.


The Three-Sentence Summary

If you take only three things away from this article, make them these:

  1. OpenClaw solves AI amnesia by treating the file system as the memory system. No exotic databases, no cloud sync, no black boxes — just plain Markdown files on your machine that you can read, edit, and version-control with Git.
  2. Memory has two layers that serve different purposes. Daily notes (memory/YYYY-MM-DD.md) capture the raw, short-term record of what happened. MEMORY.md distills the durable, long-term knowledge that should never be forgotten. Both layers are searched via hybrid semantic and keyword retrieval.
  3. The golden rule: if it is not written to a file, it does not exist. Instructions in chat evaporate on compaction. Anything the agent absolutely must remember — behavioral rules, preferences, non-negotiable constraints — belongs in a file, not in a message.

The agents that deliver the most value are not the ones with the best prompts; they are the ones with the best memory [6]. Now you know exactly how to build that memory, and how to make sure it actually sticks.


📬 Subscribe to the haiai.world newsletter for weekly AI tool breakdowns. No fluff, just clarity.

Up next: OpenClaw Core Concepts #4 — SOUL.md: The “personality constitution” you write for your AI, and exactly what should go in it.



References

  1. OpenClaw Official Docs — Memory Overview https://docs.openclaw.ai/concepts/memory
  2. A B Vijay Kumar (Medium) — OpenClaw: A Deep Agent Realization (March 2026) https://abvijaykumar.medium.com/openclaw-a-deep-agent-realization-14125bbd5bad
  3. Milvus Blog (Zilliz) — We Extracted OpenClaw’s Memory System and Open-Sourced It (memsearch) (February 2026) https://milvus.io/blog/we-extracted-openclaws-memory-system-and-opensourced-it-memsearch.md
  4. Skywork AI — Mem0 for OpenClaw: The Definitive 2026 Guide to Fixing AI Agent Amnesia (March 2026) https://skywork.ai/skypage/en/mem0-openclaw-fixing-ai-agent-amnesia/2037093676485525504
  5. GitHub (zilliztech) — memsearch: A Markdown-first memory system for any AI agent https://github.com/zilliztech/memsearch
  6. OpenClawAI — OpenClaw Memory & Context: How Your AI Actually Remembers (February 2026) https://openclawai.io/blog/openclaw-memory-context-configuration/
  7. Gaodalie (Substack) — I Studied OpenClaw Memory System — Here’s What I Found (March 2026) https://gaodalie.substack.com/p/i-studied-openclaw-memory-system
  8. VelvetShark — OpenClaw Memory Masterclass: The Complete Guide to Agent Memory That Survives (March 2026) https://velvetshark.com/openclaw-memory-masterclass
  9. Mem0 — Add Memory to OpenClaw: The Complete Mem0 Integration Guide (2026) (February 2026) https://mem0.ai/blog/add-persistent-memory-openclaw
  10. GitHub (coolmanns) — openclaw-memory-architecture: 12-layer memory architecture for OpenClaw agents https://github.com/coolmanns/openclaw-memory-architecture
  11. Roberto Capodieci (Medium) — AI Agents 003 — OpenClaw Workspace Files Explained: SOUL.md, AGENTS.md, HEARTBEAT.md and More (March 2026) https://capodieci.medium.com/ai-agents-003-openclaw-workspace-files-explained-soul-md-agents-md-heartbeat-md-and-more-5bdfbee4827a
  12. GitHub (win4r) — openclaw-workspace: Claude Code skill for maintaining and optimizing OpenClaw workspace files https://github.com/win4r/openclaw-workspace
  13. OpenClaw Official Docs — SOUL.md Template https://docs.openclaw.ai/reference/templates/SOUL
  14. Blink Blog — OpenClaw HEARTBEAT, SOUL, and Memory Files: The Complete Configuration Guide (2026) (March 2026) https://blink.new/blog/openclaw-heartbeat-soul-memory-configuration-guide-2026
  15. Trilogy AI (Substack) — [How-To] Manage Your OpenClaw Memory Successfully (March 2026) https://trilogyai.substack.com/p/how-to-manage-your-openclaw-memory
  16. Ken Huang (Substack) — OpenClaw Design Patterns (Part 1 of 7) (March 2026) https://kenhuangus.substack.com/p/openclaw-design-patterns-part-1-of
  17. OpenClawConsult — OpenClaw SOUL.md: Define Your Agent’s Personality & Values (2026) (February 2026) https://openclawconsult.com/lab/openclaw-soul-md
  18. OpenClawCrew — OpenClaw Workspace Files Explained: Your Agent’s Brain in Plain English https://openclawcrew.com/guides/workspace-files
  19. Duncan Anderson (Medium) — OpenClaw and the Programmable Soul (February 2026) https://duncsand.medium.com/openclaw-and-the-programmable-soul-2546c9c1782c
