# README.md

OpenTelemetry-native run-level cost attribution for AI workflows.

Botanu adds **runs** on top of distributed tracing. A run represents a single business transaction that may span multiple LLM calls, database queries, and services. By correlating all operations to a stable `run_id`, you get accurate cost attribution without sampling artifacts.
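
The attribution itself is conceptually just a group-by over spans that share a `run_id`. A minimal sketch (span shape and field names here are illustrative, not Botanu's internal schema):

```python
# Sketch: cost attribution as a group-by on run_id.
# Every span carries the run it belongs to, so per-run cost is a sum,
# with no sampling heuristics involved.
from collections import defaultdict

spans = [
    {"run_id": "run-1", "name": "llm.call", "cost_usd": 0.012},
    {"run_id": "run-1", "name": "db.query", "cost_usd": 0.000},
    {"run_id": "run-2", "name": "llm.call", "cost_usd": 0.034},
]

cost_by_run = defaultdict(float)
for span in spans:
    cost_by_run[span["run_id"]] += span["cost_usd"]

print(dict(cost_by_run))  # {'run-1': 0.012, 'run-2': 0.034}
```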

**Key features:**
- **Run-level attribution** — Link all costs to business outcomes
- **Cross-service correlation** — W3C Baggage propagation
- **OTel-native** — Works with any OpenTelemetry-compatible backend
- **GenAI support** — OpenAI, Anthropic, Vertex AI, and more
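
The cross-service correlation above travels as a W3C Baggage header. A rough sketch of that wire format (the `run_id` key is illustrative; Botanu and OpenTelemetry handle the real encoding, including percent-escaping, automatically):

```python
# Sketch: the W3C Baggage header is a comma-separated list of key=value
# pairs, so a run_id set upstream is readable by every downstream service.
# Values must be percent-encoded per the spec; these examples are already safe.

def build_baggage_header(entries: dict[str, str]) -> str:
    return ",".join(f"{k}={v}" for k, v in entries.items())

def parse_baggage_header(header: str) -> dict[str, str]:
    pairs = (item.split("=", 1) for item in header.split(",") if "=" in item)
    return {k.strip(): v.strip() for k, v in pairs}

header = build_baggage_header({"run_id": "run-123", "tenant": "acme"})
print(header)                                   # run_id=run-123,tenant=acme
print(parse_baggage_header(header)["run_id"])   # run-123
```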

## Quick Start

```python
from botanu import enable, botanu_use_case, emit_outcome

enable(service_name="my-app")

@botanu_use_case(name="Customer Support")
async def handle_ticket(ticket_id: str):
    # All LLM calls, DB queries, and HTTP requests inside
    # are automatically instrumented and linked to this run
    context = await fetch_context(ticket_id)
    response = await generate_response(context)
    emit_outcome("success", value_type="tickets_resolved", value_amount=1)
    return response
```

That's it. All operations within the use case are automatically tracked.

## Installation

```bash
pip install botanu # Core SDK
pip install "botanu[sdk]" # With OTel SDK + OTLP exporter
pip install "botanu[genai]" # With GenAI instrumentation
pip install "botanu[all]" # Everything included
```

| Extra | Description |
|-------|-------------|
| `sdk` | OpenTelemetry SDK + OTLP exporter |
| `instruments` | Auto-instrumentation for HTTP clients and databases |
| `genai` | Auto-instrumentation for LLM providers |
| `carriers` | Cross-service propagation (Celery, Kafka) |
| `all` | All of the above (recommended) |

## What Gets Auto-Instrumented

When you install `botanu[all]`, the following are automatically tracked:

- **LLM Providers** — OpenAI, Anthropic, Vertex AI, Bedrock, Azure OpenAI
- **Databases** — PostgreSQL, MySQL, SQLite, MongoDB, Redis
- **HTTP** — requests, httpx, urllib3, aiohttp
- **Frameworks** — FastAPI, Flask, Django, Starlette
- **Messaging** — Celery, Kafka

No manual instrumentation required.

## Manual LLM Tracking

For calls the auto-instrumentation does not cover, LLM usage can be tracked explicitly:

```python
from botanu.tracking.llm import track_llm_call

with track_llm_call(provider="openai", model="gpt-4") as tracker:
    response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}]
    )
    tracker.set_tokens(
        input_tokens=response.usage.prompt_tokens,
        output_tokens=response.usage.completion_tokens,
    )
```

## Manual Data Tracking

```python
from botanu.tracking.data import track_db_operation, track_storage_operation

with track_db_operation(system="postgresql", operation="SELECT") as db:
    result = await cursor.execute(query)
    db.set_result(rows_returned=len(result))

with track_storage_operation(system="s3", operation="PUT") as storage:
    await s3.put_object(Bucket="bucket", Key="key", Body=data)
    storage.set_result(bytes_written=len(data))
```

## Kubernetes Deployment

For large-scale deployments, use zero-code instrumentation via the OTel Operator:

```yaml
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-python: "true"
```

See [Kubernetes Deployment Guide](./docs/integration/kubernetes.md) for details.
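
A minimal Deployment sketch showing where the annotation lives (the name, labels, and image are placeholders; see the linked guide for a complete setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: support-agent                # placeholder name
spec:
  selector:
    matchLabels:
      app: support-agent
  template:
    metadata:
      labels:
        app: support-agent
      annotations:
        # The OTel Operator injects the Python auto-instrumentation
        # into pods carrying this annotation.
        instrumentation.opentelemetry.io/inject-python: "true"
    spec:
      containers:
        - name: app
          image: registry.example.com/support-agent:latest   # placeholder image
```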

## Documentation

- [Getting Started](./docs/getting-started/)
- [Concepts](./docs/concepts/)
- [Tracking Guides](./docs/tracking/)
- [Integration](./docs/integration/)
- [API Reference](./docs/api/)

# docs/index.md

Botanu introduces **run-level attribution**: a unique `run_id` that follows your workflow across services.
### Integration

- [Auto-Instrumentation](integration/auto-instrumentation.md) - Automatic instrumentation for common libraries
- [Kubernetes Deployment](integration/kubernetes.md) - Zero-code instrumentation at scale
- [Existing OTel Setup](integration/existing-otel.md) - Integrate with existing OpenTelemetry deployments
- [Collector Configuration](integration/collector.md) - Configure the OpenTelemetry Collector

## Quick Example

```python
from botanu import enable, botanu_use_case, emit_outcome

# Initialize once at startup
enable(service_name="support-agent")

@botanu_use_case("Customer Support")
async def handle_ticket(ticket_id: str):
    # All LLM calls, DB queries, and HTTP requests are auto-instrumented
    context = await fetch_context(ticket_id)
    response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": context}]
    )
    emit_outcome("success", value_type="tickets_resolved", value_amount=1)
    return response
```
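
Downstream, the run's LLM cost is just token usage times price. A toy sketch of the computation a cost backend might perform (the prices are placeholders, not real provider rates):

```python
# Sketch: converting token usage into dollars for a run.
# Prices are illustrative placeholders, expressed in USD per 1M tokens.
PRICES = {"gpt-4": {"input": 30.0, "output": 60.0}}

def llm_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(llm_cost_usd("gpt-4", 1_000, 500))  # 0.06
```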
