metrun - Execution Intelligence Tool

AI Cost Tracking

  • 🤖 LLM usage: $0.6000 (4 commits)
  • 👤 Human dev: ~$375 (3.7h @ $100/h, 30min dedup)

Generated on 2026-04-06 using openrouter/qwen/qwen3-coder-next


metrun doesn't just show you data - it tells you what the problem is and how to fix it.

What is metrun?

metrun is a Python performance analysis library that turns raw profiling data into an intelligible execution report: bottleneck scores, dependency graphs, critical path highlighting, and actionable fix suggestions, all in one tool.

❌ traditional profilers → "here is your data"
✅ metrun               → "here is your problem and why it exists"

Features

Feature | Description
🧠 Bottleneck Engine | Builds an execution graph, computes score = time + calls + nested amplification, ranks hotspots
📊 Human Report Generator | Emoji-annotated report with time %, call count, score and diagnosis per function
🧨 Critical Path | Finds the hottest nested call chain root → leaf
💡 Fix Suggestion Engine | Library-specific advice per diagnosis: lru_cache, asyncio, numba, viztracer, scalene …
🔥 ASCII Flamegraph | Terminal-friendly proportional bar chart, zero extra dependencies
🖼️ SVG Flamegraph | Interactive SVG via flameprof
🔌 cProfile Bridge | Use stdlib cProfile as the profiling backend; feed results into the Bottleneck Engine
⌨️ CLI | metrun profile, metrun inspect, metrun flame commands

Installation

pip install metrun                # core (click included)
pip install "metrun[flamegraph]"  # + SVG flamegraph support (flameprof)

Quick Start

Decorator tracing

from metrun import trace, get_records, analyse, print_report

@trace
def slow_query(n):
    return sum(i * i for i in range(n))

@trace
def handler(items):
    return [slow_query(i) for i in items]

handler(list(range(100)))

bottlenecks = analyse(get_records())
print_report(bottlenecks)

Context-manager tracing

from metrun import section, get_records, analyse, print_report

with section("data_load"):
    data = load_from_db()

with section("transform"):
    result = process(data)

print_report(analyse(get_records()))
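A section()-style API can be approximated with contextlib in a few lines; the `timings` dict here is an illustrative stand-in for metrun's record store:

```python
import time
from contextlib import contextmanager

timings = {}  # illustrative; metrun keeps richer FunctionRecord objects

@contextmanager
def section(name):
    """Time a named block of code, accumulating across re-entries."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + (time.perf_counter() - start)

with section("data_load"):
    data = list(range(10_000))

print(f"data_load took {timings['data_load']:.4f}s")
```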

Full enhanced report

from metrun import analyse, get_records, print_report

records = get_records()
bottlenecks = analyse(records)

print_report(
    bottlenecks,
    show_graph=True,           # dependency graph
    show_critical_path=True,   # hottest call chain
    records=records,
    show_suggestions=True,     # fix advice
)

Example output

🔥 METRUN PERFORMANCE REPORT
=============================

🔴 slow_query
   → time:      0.8200s  (78.2%)
   → calls:     12,430
   → score:     12.9
   → diagnosis: 🔥 loop hotspot

── Critical Path ─────────────────────────────
🧨 Critical Path  (depth=2, hottest leaf: 0.8200s)

  handler  [1.0500s, 1 calls]
    └─ slow_query  [0.8200s, 12430 calls]   ← 🔥 hottest leaf (0.8200s)

── Fix Suggestions ───────────────────────────
  💡 Fix suggestions for: slow_query
     1. Cache repeated results with lru_cache [functools]
           from functools import lru_cache

           @lru_cache(maxsize=None)
           def slow_query(x): ...

     2. Vectorise the loop with NumPy [numpy]
           import numpy as np
           result = np.sum(arr ** 2)

Auto-diagnosis labels

Label | Trigger
🔥 loop hotspot | calls ≥ 1 000
🌲 dependency bottleneck | ≥ 3 direct children in the execution graph
🐢 slow execution | ≥ 30 % of total wall time (time_pct ≥ 0.30), low calls
✅ nominal | below all thresholds

Score formula:

score = (total_time / max_time) × 10  +  log10(calls + 1)  +  n_children × 0.5
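The formula and the label thresholds above translate directly into a few lines of Python. This is an illustrative sketch, not metrun's implementation; in particular, the order in which the diagnosis checks are applied is an assumption.

```python
import math

def score(total_time, max_time, calls, n_children):
    # Mirrors the score formula: max_time is the hottest function's total time.
    return (total_time / max_time) * 10 + math.log10(calls + 1) + n_children * 0.5

def diagnose(calls, n_children, time_pct):
    # Thresholds from the label table above; check order is an assumption.
    if calls >= 1_000:
        return "loop hotspot"
    if n_children >= 3:
        return "dependency bottleneck"
    if time_pct >= 0.30:
        return "slow execution"
    return "nominal"

print(round(score(0.82, 0.82, 12_430, 1), 1))  # 14.6
print(diagnose(12_430, 1, 0.782))              # loop hotspot
```

Note that the score is relative: the hottest function always contributes the full 10 points from the time term, so scores are comparable only within one run.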

ASCII Flamegraph

from metrun import render_ascii, print_ascii

print_ascii(bottlenecks, title="My App Flamegraph")

🔥 My App Flamegraph
────────────────────────────────────────────────────────
  slow_query    ███████████████████████████████           78.2%  score=12.9
  handler       ████████████████████████████████████████ 100.0%  score=9.4
  serialize     ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    5.1%  score=2.1
────────────────────────────────────────────────────────

SVG Flamegraph (via flameprof)

from metrun.cprofile_bridge import CProfileBridge
from metrun import render_svg

bridge = CProfileBridge()
with bridge.profile_block():
    my_function()

render_svg(bridge.get_stats(), "flame.svg")
# Open flame.svg in a browser for the interactive flamegraph

cProfile Bridge

Integrate with stdlib cProfile or any existing .prof dump:

from metrun.cprofile_bridge import CProfileBridge
from metrun import analyse, print_report

bridge = CProfileBridge()

@bridge.profile_func
def my_function():
    ...

my_function()

# Analyse with the Bottleneck Engine
bottlenecks = analyse(bridge.to_records())
print_report(bottlenecks)

# Save for snakeviz / flameprof CLI
bridge.save("profile.prof")

Compatible with these popular tools (no code changes needed):

Tool | Command
snakeviz (interactive web viewer) | snakeviz profile.prof
flameprof (SVG flamegraph) | flameprof profile.prof > flame.svg
py-spy (sampling profiler) | py-spy record -o flame.svg -- python script.py
viztracer (full trace + HTML flamegraph) | see below
scalene (line-level CPU+memory) | python -m scalene script.py

VizTracer integration

# pip install viztracer
from viztracer import VizTracer

with VizTracer(output_file="trace.json"):
    my_function()

# vizviewer trace.json  →  opens interactive HTML flamegraph

Critical Path

from metrun import find_critical_path, print_critical_path, get_records

path = find_critical_path(get_records())
print_critical_path(path)

🧨 Critical Path  (depth=3, hottest leaf: 0.4200s)

  handler  [0.9100s, 1 calls]
    └─ db_query  [0.6300s, 50 calls]
      └─ serialize  [0.4200s, 50 calls]   ← 🔥 hottest leaf (0.4200s)
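One way to realise "hottest nested call chain" is a greedy descent from the root, following the child with the most total time at each level. This sketch uses plain dicts instead of metrun's record objects and is only an approximation of the library's DFS:

```python
def find_hottest_path(times, children, root):
    """Walk root → leaf, always descending into the child with
    the largest total time. times: name -> seconds; children: name -> [names]."""
    path = [root]
    node = root
    while children.get(node):
        node = max(children[node], key=lambda c: times[c])
        path.append(node)
    return path

times = {"handler": 0.91, "db_query": 0.63, "serialize": 0.42, "log": 0.02}
children = {"handler": ["db_query", "log"], "db_query": ["serialize"]}
print(find_hottest_path(times, children, "handler"))
# ['handler', 'db_query', 'serialize']
```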

Fix Suggestion Engine

from metrun import analyse, get_records, suggest, format_suggestions

for b in analyse(get_records()):
    tips = suggest(b)
    print(format_suggestions(b.name, tips))

Suggestion catalogue per diagnosis:

Diagnosis | Suggestions
🔥 loop hotspot | functools.lru_cache, numpy vectorisation, numba @jit
🌲 dependency bottleneck | concurrent.futures, asyncio.gather, batching
🐢 slow execution | cProfile + snakeviz, algorithmic review, joblib.Memory
Score ≥ 8 (any) | scalene, viztracer

CLI

# Profile a script - bottleneck report
metrun profile my_script.py

# Profile + ASCII flamegraph in terminal
metrun profile my_script.py --ascii-flame

# Profile + save SVG flamegraph
metrun profile my_script.py --flame flame.svg

# Full enhanced report: bottlenecks + critical path + suggestions
metrun inspect my_script.py

# Convert existing .prof dump to SVG
metrun flame profile.prof -o flame.svg

Architecture

  @trace / section()          cProfile.Profile
       │                            │
       ▼                            ▼
 ExecutionTracer              CProfileBridge
  (FunctionRecord)             .to_records()
       │                            │
       └──────────┬─────────────────┘
                  ▼
         BottleneckEngine.analyse()
          score + diagnosis + rank
                  │
       ┌──────────┼──────────────┐
       ▼          ▼              ▼
  print_report  find_critical  suggest()
  (report.py)    _path()      (suggestions.py)

  ASCII/SVG flamegraph ← flamegraph.py

The two tracing backends (ExecutionTracer for decorator/section API and CProfileBridge for cProfile API) both produce the same Dict[str, FunctionRecord] structure consumed by the engine.
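That shared record structure might look roughly like the dataclass below. The field names are inferred from what the report prints (time, calls, children) and may well differ from metrun's actual FunctionRecord:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FunctionRecord:
    # Hypothetical fields, inferred from the report output.
    name: str
    total_time: float = 0.0
    calls: int = 0
    children: List[str] = field(default_factory=list)

records: Dict[str, FunctionRecord] = {
    "handler": FunctionRecord("handler", 1.05, 1, ["slow_query"]),
    "slow_query": FunctionRecord("slow_query", 0.82, 12_430),
}
print(records["slow_query"].calls)  # 12430
```

Because both backends emit the same mapping, everything downstream of the engine (report, critical path, suggestions, flamegraphs) is backend-agnostic.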

Module overview

metrun/
├── profiler.py        # ExecutionTracer: decorator + context-manager tracing
├── bottleneck.py      # BottleneckEngine: score, diagnosis, ranking
├── report.py          # Human Report Generator
├── critical_path.py   # Critical path analysis (DFS on call graph)
├── suggestions.py     # Fix Suggestion Engine
├── flamegraph.py      # ASCII + SVG (flameprof) flamegraphs
├── cprofile_bridge.py # cProfile ↔ metrun bridge
└── cli.py             # Click CLI entry-point

Known limitations

Limitation | Detail
Name collisions in cProfile mode | CProfileBridge.to_records() uses the function name only as key (no file:lineno); functions with the same name in different modules are merged
Decorator tracing is opt-in | Only functions decorated with @trace or wrapped in section() appear in get_records(), not the full call tree
Thread-local call stack | Each thread has an independent call stack; cross-thread parent→child links are not recorded
No async support | asyncio coroutines are not automatically traced by the decorator backend
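The name-collision limitation can be worked around in user code, since cProfile itself keys statistics on (filename, lineno, funcname) tuples, exposed via pstats. This standalone sketch (not part of metrun) builds qualified keys from a raw Profile:

```python
import cProfile
import os
import pstats

def qualified_records(profile):
    """Key records by file:lineno:name so same-named functions stay distinct.
    pstats.Stats keeps a `stats` dict keyed on (filename, lineno, funcname),
    with values (primitive calls, total calls, tottime, cumtime, callers)."""
    out = {}
    for (filename, lineno, funcname), (cc, nc, tt, ct, callers) in pstats.Stats(profile).stats.items():
        key = f"{os.path.basename(filename)}:{lineno}:{funcname}"
        out[key] = {"calls": nc, "total_time": ct}
    return out

def work():
    return sum(range(100_000))

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

recs = qualified_records(pr)
print(any(k.endswith(":work") for k in recs))  # True
```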

License

Licensed under Apache-2.0.
