Behavioral guardrails for Claude Code — enforced workflows, persistent context, and quality gates for complex tasks.
Current version: 0.6.0 (2026-02-28) | Changelog
If Meridian helps your work, please star the repo and share it. Follow updates: X (@markmdev) • LinkedIn
Claude Code is powerful, but on complex tasks it struggles with:
| Problem | What happens |
|---|---|
| Context loss | After compaction, Claude forgets decisions, requirements, and what it was working on |
| No built-in memory | Claude can't remember lessons learned — it repeats the same mistakes because it doesn't know it already made them |
| Forgets prompt details | With large context, Claude starts ignoring parts of your CLAUDE.md instructions |
| Shallow planning | Plans lack depth, miss integration steps, and break during implementation |
| No task continuity | When you return to a task next session, Claude doesn't know what was done, decided, or tried |
Writing instructions in CLAUDE.md doesn't solve this: with large context, Claude starts forgetting details from the prompt.
Meridian uses Claude Code's hooks system to enforce behaviors automatically:
| Capability | How it works |
|---|---|
| Context survives compaction | Hooks re-inject task state, guidelines, and your docs after every compaction |
| Session continuity | Agent workspace (WORKSPACE.md) tracks decisions, discoveries, and context across sessions — Claude picks up where it left off |
| Pre-compaction warning | Monitors token usage and prompts Claude to save context before compaction happens |
| Detailed plans that work | Planning skill guides Claude through thorough discovery, design, and integration planning |
| Quality gates | Plan-reviewer and code-reviewer agents validate work before proceeding |
| Your custom docs injected | Add docs to .meridian/docs/ with summary + read_when frontmatter — they're auto-discovered and injected when relevant |
Your behavior doesn't change. You talk to Claude the same way. Meridian works behind the scenes.
Meridian is designed for large, complex, long-running tasks where:
- Work spans multiple sessions
- Context loss would be costly
- Quality matters
- You want Claude to learn from past mistakes
For simple tasks (quick edits, one-off questions), Meridian won't help much — but it won't hurt either. It stays out of the way.
```mermaid
flowchart TB
    subgraph Claude["Claude Code"]
        User[Developer]
    end
    subgraph Hooks["Hooks (Enforce Workflow)"]
        H1[SessionStart]
        H2[PreToolUse]
        H3[PostToolUse]
        H4[Stop]
    end
    subgraph Skills["Skills (Structured Workflows)"]
        S1[planning]
    end
    subgraph Agents["Agents (Quality Gates)"]
        A1[plan-reviewer]
        A2[code-reviewer]
        A3[docs-researcher]
    end
    subgraph Files[".meridian/ (Persistent State)"]
        F1[WORKSPACE.md]
        F2[api-docs/]
        F3[CODE_GUIDE.md]
    end
    User -->|talks to| Claude
    Claude -->|triggers| Hooks
    H1 -->|injects context| Files
    H2 -->|enforces review| Agents
    H3 -->|reminds task creation| Skills
    H4 -->|requires updates| Files
    Skills -->|read/write| Files
    Agents -->|validate against| Files
```
```mermaid
sequenceDiagram
    participant Dev as Developer
    participant CC as Claude Code
    participant Hook as Hooks
    participant Skill as Skills
    participant Agent as Agents
    participant Files as .meridian/
    Note over Dev,Files: Session Start
    Dev->>CC: Opens project
    CC->>Hook: SessionStart triggers
    Hook->>Files: Reads tasks, guides, context
    Hook->>CC: Injects context
    Hook->>CC: Blocks until acknowledged
    Note over Dev,Files: Planning Phase
    Dev->>CC: Describes complex task
    CC->>Skill: Uses planning skill
    Skill->>CC: Guides through methodology
    CC->>Hook: Tries to exit plan mode
    Hook->>Agent: Spawns plan-reviewer
    Agent->>Files: Reads CODE_GUIDE, context
    Agent->>CC: Returns score + findings
    alt Score < 9
        CC->>CC: Iterates on plan
    else Score >= 9
        Hook->>CC: Allows exit
    end
    Note over Dev,Files: Implementation Phase
    CC->>Hook: PreToolUse triggers
    Hook->>Files: Checks token count
    alt Approaching limit
        Hook->>CC: Prompts to save context
        CC->>Files: Updates workspace
    end
    CC->>CC: Implements plan
    Note over Dev,Files: Completion
    Dev->>CC: Requests stop
    CC->>Hook: Stop triggers
    Hook->>CC: Blocks until updates done
    CC->>Agent: Spawns code-reviewer
    Agent->>Files: Reviews changes
    Agent->>CC: Returns issues (if any)
    CC->>Files: Updates task status
    CC->>Files: Updates workspace
    Hook->>CC: Allows stop
```
```shell
# 1. Install the plugin
/plugin marketplace add markmdev/claude-plugins
/plugin install meridian@markmdev

# 2. Scaffold project files
cd /path/to/your/project
curl -fsSL https://raw.githubusercontent.com/markmdev/meridian/main/install.sh | bash
```

To update later:

```shell
# Update project scaffolding (.meridian/)
meridian-update

# Update hooks, agents, skills
/plugin update meridian@markmdev
```

To check the installed version:

```shell
cat .meridian/.version
```

| | CLAUDE.md | Meridian |
|---|---|---|
| Large context | Claude forgets prompt details as context grows | Hooks reinforce key behaviors throughout the session |
| Task continuity | None — each session starts fresh | Context files track progress, decisions, next steps |
| Quality gates | None | Plan review + code review before proceeding |
| Custom docs | Must be read manually each session | Docs in .meridian/docs/ with read_when frontmatter hints, auto-discovered and injected |
CLAUDE.md is a static prompt. Meridian hooks actively enforce behaviors and inject context throughout the session.
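The read_when mechanism can be illustrated with a minimal sketch. The parser below is hypothetical, not Meridian's actual discovery code; it only shows the shape of a doc with `summary` and `read_when` frontmatter:

```python
import re

# Hypothetical sketch of parsing summary/read_when frontmatter from a
# doc in .meridian/docs/ -- Meridian's real discovery logic may differ.
def parse_doc_meta(text: str) -> dict:
    """Extract key: value pairs from a leading '---' frontmatter block."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

doc = """---
summary: Payment provider integration notes
read_when: working on billing or webhooks
---
# Billing notes
"""
print(parse_doc_meta(doc)["read_when"])  # → working on billing or webhooks
```

A hook can then match the `read_when` hint against the current task and inject only relevant docs.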
Hooks — Enforce Workflow
Hooks are Python scripts triggered at Claude Code lifecycle events. They can inject context, block actions, or modify behavior.
| Hook | Trigger | What it does |
|---|---|---|
| context-injector.py | SessionStart | Injects workspace, tasks, CODE_GUIDE into context |
| plan-review.py | PreToolUse (ExitPlanMode) | Requires plan-reviewer before implementation |
| action-counter.py | PostToolUse | Tracks actions for stop hook threshold |
| plan-approval-reminder.py | PostToolUse (ExitPlanMode) | Reminds to create Pebble issues (if enabled) |
| stop-checklist.py | Stop | Requires context updates and code review |
| plan-mode-tracker.py | UserPromptSubmit | Prompts planning skill when entering Plan mode |
| session-cleanup.py | SessionEnd | Cleans up session state files |
Hooks are managed by the plugin system and share utilities from lib/meridian_config.py.
Skills — Structured Workflows
Skills are reusable instruction sets that activate when invoked.
Guides Claude through comprehensive planning so plans don't break during implementation:
- Requirements Interview — Up to 40 questions across multiple rounds to deeply understand the task
- Deep Discovery — Use direct tools (Glob, Grep, Read) to research the codebase; Explore agents only for conceptual questions
- Design — Choose approach, define target state, verify assumptions against actual code
- Decomposition — Break into subtasks with clear dependencies
- Integration — Explicitly plan how modules connect (mandatory for multi-module plans)
- Documentation — Each phase must include CLAUDE.md and human docs steps (mandatory)
Plans describe what and why, not how. The plan-reviewer agent validates plans against the actual codebase before implementation begins.
General-purpose guidance for writing effective prompts for any AI system:
- Remove redundancy — Merge overlapping content, deduplicate examples
- Remove noise — Cut excessive dividers, wrapper tags, verbose explanations
- Sharpen instructions — Make them direct and actionable
- Keep load-bearing content — Workflow steps, quality criteria, rules that matter
Works for Claude Code artifacts (skills, agents, hooks) and any other AI prompts.
Agents — Quality Gates
Agents are specialized subagents that validate work. All reviewers use an issue-based system — no scores, just issues or no issues. Loop until all issues are resolved.
Validates plans before implementation:
- Verifies file paths and API assumptions against codebase
- Checks for missing steps, dependencies, integration plan, documentation steps
- Trusts plan claims about packages/versions (user may have private access)
- Returns score (must reach 9+ to proceed) + findings
Deep code review that finds real bugs:
- Loads project context (workspace, CODE_GUIDE, active plan)
- Gets changes via git diff
- For each changed file: reads full file, traces data flow, checks callers/imports
- Classifies issues: p0 (crashes/security), p1 (bugs), p2 (minor)
- Returns structured findings — the main agent handles issue tracking
Focuses on issues that actually matter, not checklist items or style preferences.
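A structured finding of this kind might look like the sketch below. The field names are assumptions for illustration, not the agent's actual output schema:

```python
from dataclasses import dataclass

# Hypothetical shape of a reviewer finding; the real code-reviewer
# agent's output format may differ.
@dataclass
class Finding:
    severity: str  # "p0" crashes/security, "p1" bugs, "p2" minor
    file: str
    line: int
    summary: str

findings = [
    Finding("p0", "auth.ts", 42, "token logged in plaintext"),
    Finding("p2", "utils.ts", 7, "unused helper"),
]
# p0/p1 issues block completion; p2 issues can be deferred.
blocking = [f for f in findings if f.severity in ("p0", "p1")]
print(len(blocking))  # → 1
```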
Researches external tools, APIs, and products:
- Builds comprehensive knowledge docs in .meridian/api-docs/
- Covers current versions, API operations, rate limits, best practices, gotchas
- Uses relevant MCPs or skills if available for web research
- Run before using any external library not already documented
Executes detailed implementation specs autonomously:
- Takes a specific spec and implements it precisely
- Reports ambiguity instead of asking questions (non-blocking for parallel execution)
- Runs typecheck/tests and fixes failures up to 3 iterations
- Spawn multiple in parallel for independent tasks
Creates Pebble issue hierarchy from plans (when Pebble enabled):
- Creates epic for overall plan
- Creates task per phase as children of epic
- Adds dependencies between sequential phases
- Invoked automatically after plan approval
Configuration
```yaml
# Plan review behavior
plan_review_min_actions: 20   # Skip plan review if < N actions (default: 20)

# Pebble issue tracking
pebble_enabled: false

# Stop hook behavior
stop_hook_min_actions: 15   # Skip stop hook if < N actions since last user input

# Session learner
session_learner_mode: "project"   # "project" (default) or "assistant"
```

- Baseline (CODE_GUIDE.md) — Default standards for Next.js/React + Node/TS
- Hackathon Addon — Relaxes rules for fast prototypes
- Production Addon — Tightens rules for production systems
Precedence: Baseline → Project Type Addon
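The min-action thresholds in the config above amount to a simple gate, sketched here with illustrative names (not Meridian's internals):

```python
# Illustrative sketch of how a min-action threshold gates a hook;
# the function name and logic are assumptions for clarity only.
def should_run_stop_checks(actions_since_user_input: int,
                           stop_hook_min_actions: int = 15) -> bool:
    """Skip heavyweight stop checks after short interactions."""
    return actions_since_user_input >= stop_hook_min_actions

print(should_run_stop_checks(3))   # → False (quick edit, stay out of the way)
print(should_run_stop_checks(40))  # → True (real work, enforce the checklist)
```

This is why simple tasks see almost no overhead: below the threshold, the gates never fire.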
File Structure
```
your-project/
├── .meridian/
│   ├── config.yaml        # Project configuration
│   ├── WORKSPACE.md       # Agent's living knowledge base (always injected)
│   ├── workspace/         # Workspace sub-pages (linked from WORKSPACE.md)
│   ├── CODE_GUIDE.md      # Coding standards
│   ├── SOUL.md            # Agent identity
│   ├── api-docs/          # External API documentation
│   ├── docs/              # Project knowledge docs (auto-discovered)
│   ├── prompts/           # Injected prompts
│   │   └── agent-operating-manual.md
│   ├── scripts/           # User-facing utilities
│   │   ├── state-dir.sh          # Resolve state directory
│   │   ├── setup-work-until.sh   # Work-until loop setup
│   │   └── learner-log.py        # Session learner log viewer
│   └── plans/             # Archived implementation plans
│   # Note: session state lives in ~/.meridian/state/ (not inside .meridian/)
│
│   # Plugin files (managed by /plugin — don't edit directly):
│   # hooks/hooks.json, scripts/*.py, agents/*.md, commands/*.md, skills/*/
└── your-code/
```
The /work-until command creates an iterative loop where Claude keeps working on a task until a completion condition is met.
```shell
# Basic: work until phrase is true
/work-until Fix all failing tests --completion-phrase "All tests pass"

# With iteration limit
/work-until Implement auth feature --completion-phrase "Feature complete" --max-iterations 10

# Just iteration limit (no phrase)
/work-until Refactor the API layer --max-iterations 5
```

- Start: `/work-until` creates a loop state file with your task
- Work: Claude works on the task normally
- Stop blocked: When Claude tries to stop, the hook intercepts
- Check completion: Hook looks for `<complete>PHRASE</complete>` in output
- Continue or exit:
  - If phrase found (and TRUE) → loop ends
  - If max iterations reached → loop ends
  - Otherwise → task is resent, Claude continues
- Workspace preserves history: Between iterations, Claude writes to its workspace, so it knows what was tried
- Normal stop checks still run: Workspace updates, tests/lint/build — all enforced each iteration
- Completion phrase must be TRUE: Claude cannot lie to escape the loop
- Monitor progress: `cat $(.meridian/scripts/state-dir.sh)/loop-state` shows current iteration
```
You: /work-until Fix the auth bug --completion-phrase "All auth tests pass" --max-iterations 5
Claude: [Works on fix, runs tests, some fail]
Claude: [Tries to stop]
→ Hook blocks: "Iteration 2 of 5 — continue working"
Claude: [Reads workspace, sees what was tried]
Claude: [Fixes another issue, runs tests, all pass]
Claude: <complete>All auth tests pass</complete>
→ Hook allows stop: "✅ Completion phrase detected"
```
Since v0.4.0, Meridian stores session state (counters, flags, locks) in ~/.meridian/state/ instead of inside .meridian/. This means you can symlink the entire .meridian/ directory across worktrees — shared docs, plans, and workspace with isolated session state.
```shell
# Create a worktree as usual
git worktree add ../my-feature feature-branch

# Symlink .meridian/ from the main worktree
ln -s /absolute/path/to/main/.meridian ../my-feature/.meridian
```

That's it. Each worktree gets its own session state automatically (based on its absolute path), while sharing all docs, plans, workspace, and config.
| Content | Shared? | Why |
|---|---|---|
| docs/ | Yes | Reference material should be visible everywhere |
| plans/ | Yes | Plans are project-wide |
| workspace/ | Yes | Accumulated knowledge benefits all sessions |
| WORKSPACE.md | Yes | Project knowledge base |
| config.yaml | Yes | Project settings |
| CODE_GUIDE.md | Yes | Coding standards |
| Session state | No | Counters, flags, locks are per-session (in ~/.meridian/state/) |
- The session learner updates the shared WORKSPACE.md — lessons from any worktree flow to all others
- Concurrent workspace updates from parallel sessions are rare; Claude Code's file conflict detection handles the edge case
- The plugin handles hooks, agents, and skills automatically — no need to symlink .claude/ across worktrees
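The per-worktree isolation can be sketched as deriving a state directory from the worktree's absolute path. The hashing scheme below is an assumption for illustration; `.meridian/scripts/state-dir.sh` is the source of truth:

```python
import hashlib
from pathlib import Path

# Assumed scheme, for illustration only: hash the worktree's absolute
# path into a short key under ~/.meridian/state/.
def state_dir_for(worktree_path: str) -> Path:
    digest = hashlib.sha256(worktree_path.encode()).hexdigest()[:12]
    return Path.home() / ".meridian" / "state" / digest

# Two worktrees sharing one symlinked .meridian/ still get distinct state:
print(state_dir_for("/repos/app") != state_dir_for("/repos/app-feature"))  # → True
```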
Who is Meridian for?
Anyone using Claude Code for complex, multi-session work. Solo developers and teams alike benefit from enforced workflows and persistent context.
Does Meridian change how I interact with Claude?
No. You talk to Claude the same way. Meridian works behind the scenes through hooks.
What happens on simple tasks?
Nothing. Hooks fire but don't block anything meaningful. The overhead is minimal.
Can I customize the CODE_GUIDE?
Yes. Edit .meridian/CODE_GUIDE.md to add project-specific rules. It's injected every session.
Can I disable features?
Yes. In .meridian/config.yaml:
```yaml
pebble_enabled: false             # Disable Pebble issue tracking integration
stop_hook_min_actions: 15         # Skip stop hook if < N actions (default: 15)
plan_review_min_actions: 20       # Skip plan review if < N actions (default: 20)
session_learner_mode: "assistant" # "project" (default) or "assistant"
```

How is this different from subagents?
Subagents don't share live context, re-read docs (token waste), and can't be resumed after interrupts. Meridian keeps Claude as the primary agent and injects context directly.
PRs and issues welcome at github.com/markmdev/meridian
License: MIT
If Meridian improves your Claude Code sessions:
- Star this repo so others can find it
- Share your experience on X (@markmdev) or LinkedIn
