- 🤖 LLM usage: $0.6000 (4 commits)
- 👤 Human dev: ~$375 (3.7h @ $100/h, 30min dedup)

Generated on 2026-04-06 using openrouter/qwen/qwen3-coder-next
metrun doesn't just show you data: it tells you what the problem is and how to fix it.

metrun is a Python performance-analysis library that turns raw profiling data into an intelligible execution report: bottleneck scores, dependency graphs, critical-path highlighting, and actionable fix suggestions, all in one tool.

❌ traditional profilers → "here is your data"
✅ metrun → "here is your problem and why it exists"
| Feature | Description |
|---|---|
| 🧠 Bottleneck Engine | Builds an execution graph, computes score = time + calls + nested amplification, ranks hotspots |
| 📊 Human Report Generator | Emoji-annotated report with time %, call count, score, and diagnosis per function |
| 🧨 Critical Path | Finds the hottest nested call chain, root → leaf |
| 💡 Fix Suggestion Engine | Library-specific advice per diagnosis: lru_cache, asyncio, numba, viztracer, scalene, … |
| 🔥 ASCII Flamegraph | Terminal-friendly proportional bar chart, zero extra dependencies |
| 🖼️ SVG Flamegraph | Interactive SVG via flameprof |
| 🔗 cProfile Bridge | Use stdlib cProfile as the profiling backend; feed results into the Bottleneck Engine |
| ⌨️ CLI | `metrun profile`, `metrun inspect`, `metrun flame` commands |
```bash
pip install metrun               # core (click included)
pip install metrun[flamegraph]   # + SVG flamegraph support (flameprof)
```

```python
from metrun import trace, get_records, analyse, print_report

@trace
def slow_query(n):
    return sum(i * i for i in range(n))

@trace
def handler(items):
    return [slow_query(i) for i in items]

handler(list(range(100)))

bottlenecks = analyse(get_records())
print_report(bottlenecks)
```

```python
from metrun import section, get_records, analyse, print_report

with section("data_load"):
    data = load_from_db()

with section("transform"):
    result = process(data)

print_report(analyse(get_records()))
```

```python
from metrun import analyse, get_records, print_report

records = get_records()
bottlenecks = analyse(records)
print_report(
    bottlenecks,
    show_graph=True,          # dependency graph
    show_critical_path=True,  # hottest call chain
    records=records,
    show_suggestions=True,    # fix advice
)
```

```
🔥 METRUN PERFORMANCE REPORT
=============================
🔴 slow_query
   ├ time: 0.8200s (78.2%)
   ├ calls: 12,430
   ├ score: 12.9
   └ diagnosis: 🔥 loop hotspot

── Critical Path ─────────────────────────────
🧨 Critical Path (depth=2, hottest leaf: 0.8200s)
handler [1.0500s, 1 calls]
└─ slow_query [0.8200s, 12430 calls]  ← 🔥 hottest leaf (0.8200s)

── Fix Suggestions ───────────────────────────
💡 Fix suggestions for: slow_query
  1. Cache repeated results with lru_cache [functools]
     from functools import lru_cache
     @lru_cache(maxsize=None)
     def slow_query(x): ...
  2. Vectorise the loop with NumPy [numpy]
     import numpy as np
     result = np.sum(arr ** 2)
```
| Label | Trigger |
|---|---|
| 🔥 loop hotspot | calls ≥ 1,000 |
| 🌲 dependency bottleneck | ≥ 3 direct children in the execution graph |
| 🐢 slow execution | ≥ 30% of total wall time (time_pct ≥ 0.30), low calls |
| ✅ nominal | below all thresholds |

Score formula:

```
score = (total_time / max_time) × 10 + log10(calls + 1) + n_children × 0.5
```
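Taken together, the formula and the threshold table can be sketched in a few lines of Python. This is a reimplementation for illustration, not metrun's actual source, and the precedence between the diagnosis rules is an assumption:

```python
import math

def score(total_time: float, max_time: float, calls: int, n_children: int) -> float:
    # Documented formula: relative time weight, log-scaled call count,
    # plus a small bump per direct child in the execution graph.
    return (total_time / max_time) * 10 + math.log10(calls + 1) + n_children * 0.5

def diagnose(calls: int, n_children: int, time_pct: float) -> str:
    # Thresholds from the table above; rule ordering is assumed.
    if calls >= 1_000:
        return "loop hotspot"
    if n_children >= 3:
        return "dependency bottleneck"
    if time_pct >= 0.30:
        return "slow execution"
    return "nominal"

print(round(score(0.5, 1.0, 1_000, 2), 1))              # 9.0
print(diagnose(calls=12_430, n_children=0, time_pct=0.782))  # loop hotspot
```

Note how the log-scaled call count keeps a function with millions of calls from drowning out a single slow call that dominates wall time.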
```python
from metrun import render_ascii, print_ascii

print_ascii(bottlenecks, title="My App Flamegraph")
```

```
🔥 My App Flamegraph
────────────────────────────────────────────────────────
slow_query ███████████████████████████████░░░░░░░░░  78.2% score=12.9
handler    ████████████████████████████████████████ 100.0% score=9.4
serialize  ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░   5.1% score=2.1
────────────────────────────────────────────────────────
```
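A proportional terminal bar like the rows above needs nothing beyond string arithmetic, which is why this view carries zero extra dependencies. A minimal sketch, not metrun's actual renderer:

```python
def ascii_bar(pct: float, width: int = 40) -> str:
    # Fill the bar proportionally to pct (0-100) and pad the remainder
    # with light-shade blocks so every row has the same width.
    filled = round(pct / 100 * width)
    return "█" * filled + "░" * (width - filled)

for name, pct in [("slow_query", 78.2), ("handler", 100.0), ("serialize", 5.1)]:
    print(f"{name:<11}{ascii_bar(pct)} {pct:5.1f}%")
```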
```python
from metrun.cprofile_bridge import CProfileBridge
from metrun import render_svg

bridge = CProfileBridge()
with bridge.profile_block():
    my_function()

render_svg(bridge.get_stats(), "flame.svg")
# Open flame.svg in a browser for the interactive flamegraph
```

Integrate with stdlib cProfile or any existing .prof dump:
```python
from metrun.cprofile_bridge import CProfileBridge
from metrun import analyse, print_report

bridge = CProfileBridge()

@bridge.profile_func
def my_function():
    ...

my_function()

# Analyse with the Bottleneck Engine
bottlenecks = analyse(bridge.to_records())
print_report(bottlenecks)

# Save for snakeviz / flameprof CLI
bridge.save("profile.prof")
```

Compatible with these popular tools (no code changes needed):
| Tool | Command |
|---|---|
| snakeviz – interactive web viewer | `snakeviz profile.prof` |
| flameprof – SVG flamegraph | `flameprof profile.prof > flame.svg` |
| py-spy – sampling profiler | `py-spy record -o flame.svg -- python script.py` |
| viztracer – full trace + HTML flamegraph | see below |
| scalene – line-level CPU+memory | `python -m scalene script.py` |
```python
# pip install viztracer
from viztracer import VizTracer

with VizTracer(output_file="trace.json"):
    my_function()
# vizviewer trace.json → opens an interactive HTML flamegraph
```

```python
from metrun import find_critical_path, print_critical_path, get_records

path = find_critical_path(get_records())
print_critical_path(path)
```

```
🧨 Critical Path (depth=3, hottest leaf: 0.4200s)
handler [0.9100s, 1 calls]
└─ db_query [0.6300s, 50 calls]
   └─ serialize [0.4200s, 50 calls]  ← 🔥 hottest leaf (0.4200s)
```
```python
from metrun import analyse, get_records, suggest, format_suggestions

for b in analyse(get_records()):
    tips = suggest(b)
    print(format_suggestions(b.name, tips))
```

Suggestion catalogue per diagnosis:
| Diagnosis | Suggestions |
|---|---|
| 🔥 loop hotspot | functools.lru_cache, numpy vectorisation, numba @jit |
| 🌲 dependency bottleneck | concurrent.futures, asyncio.gather, batching |
| 🐢 slow execution | cProfile + snakeviz, algorithmic review, joblib.Memory |
| Score ≥ 8 (any) | scalene, viztracer |
```bash
# Profile a script → bottleneck report
metrun profile my_script.py

# Profile + ASCII flamegraph in terminal
metrun profile my_script.py --ascii-flame

# Profile + save SVG flamegraph
metrun profile my_script.py --flame flame.svg

# Full enhanced report: bottlenecks + critical path + suggestions
metrun inspect my_script.py

# Convert an existing .prof dump to SVG
metrun flame profile.prof -o flame.svg
```

```
@trace / section()          cProfile.Profile
        │                         │
        ▼                         ▼
 ExecutionTracer            CProfileBridge
 (FunctionRecord)            .to_records()
        │                         │
        └────────────┬────────────┘
                     ▼
        BottleneckEngine.analyse()
         score + diagnosis + rank
                     │
         ┌───────────┼───────────────┐
         ▼           ▼               ▼
  print_report   find_critical    suggest()
  (report.py)    _path()          (suggestions.py)

  ASCII/SVG flamegraph ← flamegraph.py
```
The two tracing backends (`ExecutionTracer` for the decorator/section API and `CProfileBridge` for the cProfile API) both produce the same `Dict[str, FunctionRecord]` structure consumed by the engine.
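That shared structure might look roughly like the dataclass below. The field names are inferred from the report output (time, calls, children), not metrun's actual definition:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FunctionRecord:
    # Hypothetical shape of the record both backends emit;
    # field names are assumptions inferred from the report output.
    name: str
    total_time: float = 0.0
    calls: int = 0
    children: List[str] = field(default_factory=list)  # names of direct callees

records: Dict[str, FunctionRecord] = {
    "handler": FunctionRecord("handler", total_time=1.05, calls=1,
                              children=["slow_query"]),
    "slow_query": FunctionRecord("slow_query", total_time=0.82, calls=12_430),
}
print(records["slow_query"].calls)  # 12430
```

Keeping one record shape is what lets every downstream stage (scoring, reporting, flamegraphs) stay backend-agnostic.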
```
metrun/
├── profiler.py          # ExecutionTracer – decorator + context-manager tracing
├── bottleneck.py        # BottleneckEngine – score, diagnosis, ranking
├── report.py            # Human Report Generator
├── critical_path.py     # Critical path analysis (DFS on call graph)
├── suggestions.py       # Fix Suggestion Engine
├── flamegraph.py        # ASCII + SVG (flameprof) flamegraphs
├── cprofile_bridge.py   # cProfile → metrun bridge
└── cli.py               # Click CLI entry point
```
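The critical-path search in critical_path.py can be approximated as a greedy descent over the call graph: start at each root and follow the most expensive child until a leaf. This is a sketch over a simplified record shape, not metrun's implementation:

```python
def find_hottest_path(records):
    """Greedy critical-path sketch. `records` maps name ->
    {"time": float, "children": [names]} -- a simplified stand-in
    for metrun's record structure."""
    # Roots are functions that never appear as anyone's callee.
    callees = {c for r in records.values() for c in r["children"]}
    roots = [name for name in records if name not in callees]

    def descend(name):
        # Follow the most expensive child at every level until a leaf.
        path = [name]
        children = records[name]["children"]
        while children:
            hottest = max(children, key=lambda c: records[c]["time"])
            path.append(hottest)
            children = records[hottest]["children"]
        return path

    # Return the root chain that accumulates the most time.
    return max((descend(r) for r in roots),
               key=lambda p: sum(records[n]["time"] for n in p))

records = {
    "handler":   {"time": 0.91, "children": ["db_query"]},
    "db_query":  {"time": 0.63, "children": ["serialize"]},
    "serialize": {"time": 0.42, "children": []},
}
print(find_hottest_path(records))  # ['handler', 'db_query', 'serialize']
```

The greedy choice matches the "hottest leaf" framing in the sample output; a full DFS that compares every root-to-leaf chain is a straightforward extension.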
| Limitation | Detail |
|---|---|
| Name collisions in cProfile mode | `CProfileBridge.to_records()` uses the function name only as the key (no file:lineno), so functions with the same name in different modules are merged |
| Decorator tracing is opt-in | Only functions decorated with `@trace` or wrapped in `section()` appear in `get_records()`, not the full call tree |
| Thread-local call stack | Each thread has an independent call stack; cross-thread parent→child links are not recorded |
| No async support | asyncio coroutines are not automatically traced by the decorator backend |
Licensed under Apache-2.0.