Replies: 2 comments 1 reply
Hey there! Since I value personality very strongly in agents, and I think a strong personality can only be achieved through (inter)action, reflection, AND remembering/forgetting, the principles seem to align very much with cortex-engine! I mainly use openclaw, and I couldn't tell whether you intended/tried to use it with openclaw? (Thoughts on why not, if not ^^, would also be valuable.) I was also thinking it might be combinable with two other memory systems: Lossless Claw and QMD.

Would really appreciate your thoughts on this!
Hey Horo — appreciate you reading deeply enough to see the alignment, and that follow-up is honestly the most important question.

How long, and does it actually work: About a month of daily production use. The agent runs interactive sessions with me and autonomous sessions on a VPS (up to 12/day via cron). Right now there are ~625 consolidated memories, ~3,400 graph edges, and ~3,000 observations — all processed, zero backlog. So it's not a weekend experiment, but I'm also not going to pretend a month is years.

The degradation problem you're describing — I know exactly what you mean, and it's the reason cortex has a forgetting system at all. Most agent memory projects hit a wall around the point where retrieval starts returning noise instead of signal: you search for something relevant and get back 10 results that are all technically related, but none of them is the thing. That's what happens when everything is equally important forever. cortex fights that in a few ways. Memories have salience that decays when unused — stuff you don't reference fades naturally. There's also a consolidation cycle (we call it …).

On what cortex is actually for: The way I think about it — and this took a while to articulate — cortex is trying to shift the agent from assistant to coworker. Most memory systems help an agent recall facts; cortex is trying to help an agent understand context well enough to make judgment calls. In practice that looks like: the agent I run on cortex noticed through its own observations that I tend to start ambitious projects and not finish them. Nobody put that in a system prompt. It formed that belief from watching the pattern across sessions, and decided on its own that helping me close things out was going to be a long-term priority. That's a goal the agent set for itself based on accumulated experience — not one I assigned. It also builds up working knowledge of the user that a normal agent would need hand-fed.
If it knows someone is strong in one language but less comfortable in another, it factors that into suggestions — sometimes recommending the unfamiliar language when the benefits clearly outweigh the friction, sometimes just going with what the user asked for because the task is simple and switching would waste more time than it saves. That judgment comes from experience, not a static config file. And when things change — like the user picking up a new language — that evolution gets tracked, not overwritten. The old context stays as history, so the agent can look back, notice "they used to avoid this, now they prefer it," and act on the trajectory, not just the current state.

None of that requires you to write a long system prompt describing who your user is and what the agent should care about. The agent figures it out. And critically, it can change its mind — beliefs can be contradicted, validated, refined. Those changes leave an auditable trail. So it's not a black box that silently drifts; you can see exactly when and why the agent's understanding shifted.

On OpenClaw: cortex started getting built right around the time OpenClaw was taking off. We were watching the ecosystem form, and it was clear pretty quickly that if your agent's memory is a feature of one runtime, you're locked in: OpenClaw changes its memory API, and your agent loses its mind. So cortex was built runtime-agnostic from the start — MCP server, REST API, works with whatever can make HTTP calls. There's a skill published on ClawHub (…).

On Lossless Claw and QMD: These fill gaps that cortex doesn't try to fill, which is what makes the composability interesting. Lossless Claw solves the context window problem — keeping earlier conversation alive during long sessions. cortex doesn't manage context windows at all; it manages what the agent knows and believes across sessions. QMD gives you high-quality retrieval over local markdown files.
cortex stores structured knowledge — it can run fully local on SQLite or scale to Firestore when you want cloud features like cross-device sync or API access. So an agent running all three gets session continuity + local file search + a persistent identity that evolves with the user. Each one is doing something the other two genuinely can't — that's not a polite "they're all great"; they actually operate on different layers of the problem.

Curious what your system looks like and where the degradation hit. That's the problem everyone building in this space runs into eventually, and I think the solutions end up being pretty different depending on whether your bottleneck is retrieval quality, storage scale, or context relevance.
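(Quick illustration of the runtime-agnostic point: anything that can POST JSON over HTTP can talk to a memory server, so the agent's memory survives a runtime swap. The class name, endpoint path, and payload fields below are made up for the sketch — they are not cortex-engine's documented API.)

```python
import json
import urllib.request

class CortexClient:
    """Hypothetical thin client. Any runtime — OpenClaw, a cron
    script on a VPS, an MCP bridge — only needs plain HTTP to
    read and write memories, so nothing is locked to one host."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def remember(self, text: str) -> urllib.request.Request:
        # Build (but don't send) a JSON POST; call urlopen() on the
        # returned Request against a real server to actually store it.
        return urllib.request.Request(
            self.base_url + "/memories",  # illustrative endpoint
            data=json.dumps({"text": text}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )

req = CortexClient("http://localhost:8000/").remember("user prefers Rust")
```

The point of the sketch is the shape, not the endpoint: because the contract is just HTTP + JSON, swapping the runtime on either side doesn't cost the agent its memory.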
Hey! If you've tried cortex-engine, we'd love to hear from you. Even a one-liner helps.
Three questions:
No wrong answers. Drop a comment below or open a separate issue if it's a specific bug.
Thanks for trying it out.