Cache-aware orchestration for LLM agents. Fork helpers that share cached prefixes, detect cache breaks, and cut token costs by 38%+.
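The core idea behind prefix-sharing forks can be sketched in a few lines. This is a minimal illustration, not this repo's actual API: the `Agent`, `fork`, and `shared_prefix_len` names are hypothetical. Each fork copies the parent's message list verbatim up to the fork point, so every fork's request begins with an identical prefix that a provider-side prompt cache can serve; comparing two forks' leading messages acts as a simple cache-break detector.

```python
# Hypothetical sketch of prefix-sharing forks (illustrative names only,
# not the repo's real API). Forks copy the parent's messages verbatim,
# so their requests share a byte-identical cached prefix.

from dataclasses import dataclass, field

@dataclass
class Agent:
    messages: list = field(default_factory=list)

    def fork(self, task: str) -> "Agent":
        # Copy the exact prefix (same content, same order) so each
        # fork's request can hit the cached prefix; only the trailing
        # task message differs between forks.
        child = Agent(messages=list(self.messages))
        child.messages.append({"role": "user", "content": task})
        return child

def shared_prefix_len(a: Agent, b: Agent) -> int:
    """Count the leading messages two forks have in common. A drop in
    this number between runs signals a cache break: the shared prefix
    diverged and cached tokens will be re-billed."""
    n = 0
    for m1, m2 in zip(a.messages, b.messages):
        if m1 != m2:
            break
        n += 1
    return n

parent = Agent()
parent.messages.append({"role": "system", "content": "You are a research agent."})
parent.messages.append({"role": "user", "content": "Shared context..."})

f1 = parent.fork("Summarize source A")
f2 = parent.fork("Summarize source B")
assert shared_prefix_len(f1, f2) == 2  # both forks reuse the 2-message prefix
```

In a real orchestrator the same principle applies to the serialized request body: any edit to an earlier message (reordered tools, injected timestamps) shifts the prefix and invalidates the cache for every fork downstream.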
python agent research deep orchestration multi-agent openai swarm agents ai-agents cache-optimization llm prompt-caching anthropic litellm task-dag
Updated Apr 2, 2026 - Python