
feat(openclaw-plugin): add topic-judge pre-filter to skip auto-recall on topic continuation#1379

Open
duoyidavid-eng wants to merge 1 commit into MemTensor:main from duoyidavid-eng:feat/topic-judge-pre-filter

Conversation

@duoyidavid-eng

Problem

The before_agent_start auto-recall hook fires on every user message, triggering embedding + vector search + LLM filter calls even when the user is simply continuing the current conversation (follow-up questions, "ok", typos, error feedback). This wastes API calls and adds latency on every turn.

Solution

Add a topic-judge pre-filter using the existing Summarizer.judgeNewTopic() method (already implemented in all providers), extracted as a standalone topicJudgePreFilter() function for clean separation from the main recall logic.

Flow:

before_agent_start fires
  → Extract last N rounds from event.messages (default: 4 rounds)
  → Strip injected metadata, merge consecutive same-role
  → If < 2 context lines → skip recall (conservative)
  → Call summarizer.judgeNewTopic(context, query)
  → SAME → skip recall entirely (saves API calls)
  → NEW  → proceed with search
  → ERROR → fallback to search (safe default)
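The flow above can be sketched as a standalone function. This is an illustrative sketch, not the plugin's actual code: the `Msg` and `Summarizer` shapes are assumptions, and only `topicJudgePreFilter` and `judgeNewTopic` are names taken from the PR itself.

```typescript
// Hypothetical message and summarizer shapes for illustration only.
interface Msg { role: "user" | "assistant"; content: string; }

interface Summarizer {
  // Assumed contract: resolves "NEW" when the query opens a new topic, "SAME" otherwise.
  judgeNewTopic(context: string, query: string): Promise<"NEW" | "SAME">;
}

async function topicJudgePreFilter(
  messages: Msg[],
  query: string,
  summarizer: Summarizer,
  rounds = 4,
): Promise<"skip" | "proceed"> {
  if (rounds <= 0) return "proceed"; // feature disabled: always run recall

  // Take the last N rounds (one round = a user turn + an assistant turn).
  const recent = messages.slice(-rounds * 2);

  // Merge consecutive same-role messages into single "role: text" context lines.
  const lines: string[] = [];
  for (const m of recent) {
    const text = m.content.trim();
    if (!text) continue;
    const prev = lines[lines.length - 1];
    if (prev && prev.startsWith(`${m.role}:`)) {
      lines[lines.length - 1] = `${prev} ${text}`;
    } else {
      lines.push(`${m.role}: ${text}`);
    }
  }

  // Too little context to judge reliably: conservatively skip recall.
  if (lines.length < 2) return "skip";

  try {
    const verdict = await summarizer.judgeNewTopic(lines.join("\n"), query);
    return verdict === "NEW" ? "proceed" : "skip";
  } catch {
    return "proceed"; // LLM error: fall back to a normal recall search
  }
}
```

Note the asymmetry of the two fallbacks: too little context skips the search (cheap, low risk), while an LLM failure proceeds with it (never silently drops memories).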

Config

New option recall.topicJudgeRounds (default: 4). Set to 0 to disable:

{
  "plugins": {
    "entries": {
      "memos-local-openclaw-plugin": {
        "config": {
          "recall": {
            "topicJudgeRounds": 4
          }
        }
      }
    }
  }
}
| Value | Behavior |
| --- | --- |
| 0 | Disabled — recall runs on every turn |
| 4 (default) | Uses last 4 rounds as context for LLM topic judgment |
| N > 0 | Uses last N rounds |
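The defaulting behavior in the table might be resolved as follows. This is a minimal sketch assuming a hypothetical `resolveTopicJudgeRounds` helper and `RawRecallConfig` shape; the PR's actual `config.ts` may differ.

```typescript
// Hypothetical raw config shape; only topicJudgeRounds is taken from the PR.
interface RawRecallConfig { topicJudgeRounds?: number; }

function resolveTopicJudgeRounds(raw?: RawRecallConfig): number {
  const n = raw?.topicJudgeRounds;
  // Missing or invalid values fall back to the default of 4;
  // an explicit 0 is preserved, which disables the pre-filter.
  return typeof n === "number" && Number.isInteger(n) && n >= 0 ? n : 4;
}
```

The key design point is that `0` must survive resolution rather than being treated as falsy and replaced by the default.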

Design decisions

  • Standalone function topicJudgePreFilter() returning "skip" | "proceed" — no inline logic in the hook
  • Configurable rounds — users can tune context window or disable entirely
  • Graceful fallback — LLM error → proceed with recall (never silently drops memories)
  • Cost — one small LLM call per turn (max_tokens=10, ~100 input tokens) versus the full embedding + search + filter pipeline (~2-3 s) it saves on a SAME verdict
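The "skip" | "proceed" contract keeps the hook itself trivial. The wiring below is a hypothetical sketch of that hook-side logic: the event shape, `deps` object, and `runRecallSearch` are illustrative names, not the plugin's real API.

```typescript
type Verdict = "skip" | "proceed";

// Hypothetical hook wiring: the pre-filter decides, the hook obeys.
async function onBeforeAgentStart(
  event: { messages: { role: string; content: string }[]; query: string },
  deps: {
    preFilter: (e: { messages: { role: string; content: string }[]; query: string }) => Promise<Verdict>;
    runRecallSearch: (query: string) => Promise<string[]>;
  },
): Promise<string[]> {
  // "skip" short-circuits the whole embedding + vector search + LLM filter
  // pipeline; "proceed" (including the error fallback) runs it as before.
  if ((await deps.preFilter(event)) === "skip") return [];
  return deps.runRecallSearch(event.query);
}
```

Because the pre-filter returns a string verdict rather than performing the search itself, it stays unit-testable in isolation from the recall pipeline.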

Files changed

  • apps/memos-local-openclaw/index.ts — add topicJudgePreFilter(), call it before search
  • apps/memos-local-openclaw/src/types.ts — add recall.topicJudgeRounds type
  • apps/memos-local-openclaw/src/config.ts — resolve topicJudgeRounds with default 4

@duoyidavid-eng force-pushed the feat/topic-judge-pre-filter branch from 977a4a9 to a28b435 on March 29, 2026 at 14:59
… on topic continuation

Add a lightweight LLM-based pre-filter before auto-recall search to avoid
unnecessary embedding + vector search + LLM filter calls when the user is
continuing the current conversation.

- Extract topicJudgePreFilter() as a standalone function returning 'skip'|'proceed'
- Configurable via recall.topicJudgeRounds (default: 4, set 0 to disable)
- Uses existing Summarizer.judgeNewTopic() (already implemented in all providers)
- Graceful fallback: too-few-lines → skip; LLM error → proceed with recall
- Only one small LLM call (max_tokens=10); a SAME verdict saves the full search pipeline
@duoyidavid-eng force-pushed the feat/topic-judge-pre-filter branch from a28b435 to 57aaafd on March 29, 2026 at 15:01
