File: docs/examples/README.md
# Mellea Examples

This directory contains comprehensive examples demonstrating Mellea's features and capabilities. Examples are organized by topic and complexity level.

## 🚀 Getting Started

**New to Mellea?** Start here:
1. [tutorial/simple_email.py](tutorial/simple_email.py) - Your first Mellea program
2. [instruct_validate_repair/](instruct_validate_repair/) - Core paradigm
3. [generative_slots/](generative_slots/) - Type-safe LLM functions
4. [notebooks/](notebooks/) - Interactive tutorials

## 📚 Example Categories

### Core Concepts

**[instruct_validate_repair/](instruct_validate_repair/)**
Learn Mellea's core instruct-validate-repair paradigm for reliable LLM outputs.
- Basic instruction without requirements
- Adding validation constraints
- Automatic repair on validation failure
- Custom validation functions
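
The shape of the paradigm can be sketched in plain Python. Everything below is a hypothetical stand-in — `fake_llm` replaces a real backend, and the requirement is just a callable — but the generate → validate → repair loop mirrors what the examples in this directory do with real models:

```python
# Minimal sketch of instruct-validate-repair with a stubbed backend.
def fake_llm(prompt: str) -> str:
    # Pretend the model ignores the length constraint on the first try.
    if "shorter" in prompt:
        return "Hi team, meeting at 3pm."
    return ("Hello everyone, I would like to invite you all to a meeting "
            "at 3pm today in the main conference room.")

def instruct_validate_repair(prompt: str, requirements, loop_budget: int = 3) -> str:
    output = fake_llm(prompt)
    for _ in range(loop_budget):
        failed = [r for r in requirements if not r(output)]
        if not failed:
            return output                      # all requirements satisfied
        # Repair: re-prompt with feedback about the failed requirement.
        output = fake_llm(prompt + " Make it shorter.")
    return output

requirements = [lambda text: len(text) < 50]   # e.g. "keep it under 50 chars"
result = instruct_validate_repair("Write a meeting invite.", requirements)
print(result)
```

The real API wraps this loop in sampling strategies (see [context/](context/) for `RejectionSamplingStrategy`), but the control flow is the same.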

**[generative_slots/](generative_slots/)**
Type-safe, composable LLM functions using the `@generative` decorator.
- Sentiment classification
- Text summarization
- Function composition
- Type-constrained outputs
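
A toy version of the idea, with a hypothetical `stub_llm` in place of a backend: the decorator turns a typed function signature plus docstring into a prompt, and coerces the reply into the annotated return type:

```python
# Sketch of a type-safe "generative slot" decorator (not the real @generative).
from typing import Callable, get_type_hints

def stub_llm(prompt: str) -> str:
    # Stand-in for a model call.
    return "positive" if "love" in prompt.lower() else "negative"

def generative(func: Callable) -> Callable:
    return_type = get_type_hints(func).get("return", str)

    def wrapper(*args, **kwargs):
        prompt = f"{func.__doc__}\nInput: {args} {kwargs}"
        raw = stub_llm(prompt)
        return return_type(raw)      # coerce into the declared return type

    return wrapper

@generative
def classify_sentiment(text: str) -> str:
    """Classify the sentiment of the text as happy or unhappy."""

print(classify_sentiment("I love this library!"))  # positive
```

Because the slot has a normal Python signature, it composes with other functions and type checkers like any other callable.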

**[context/](context/)**
Understanding and working with Mellea's context system.
- Context inspection
- Sampling with contexts
- Context trees and navigation

**[sessions/](sessions/)**
Creating and customizing Mellea sessions.
- Session configuration
- Custom session types
- Backend selection

### Advanced Features

**[aLora/](aLora/)**
Adaptive Low-Rank Adaptation for fast constraint checking.
- Training custom aLoRA adapters
- Performance optimization
- Constraint validation speedup

**[intrinsics/](intrinsics/)**
Specialized model capabilities through adapters.
- Answer relevance checking
- Hallucination detection
- Citation validation
- Context relevance assessment

**[sofai/](sofai/)**
Two-tier sampling with fast and slow models.
- Cost optimization
- Iterative refinement with fast models
- Escalation to slow models
- Constraint satisfaction problems
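
The escalation policy itself is simple, as this self-contained sketch shows (both "models" are hypothetical stubs; cost accounting and real constraints are left out):

```python
# Two-tier sampling: try a cheap fast model a few times, escalate to an
# expensive slow model only if no attempt satisfies the constraint.
def fast_model(prompt: str, attempt: int) -> str:
    return f"draft {attempt}"        # cheap, but rarely satisfies the constraint

def slow_model(prompt: str) -> str:
    return "FINAL ANSWER"            # expensive, but reliable

def two_tier_sample(prompt: str, constraint, fast_budget: int = 3) -> str:
    for attempt in range(fast_budget):
        candidate = fast_model(prompt, attempt)
        if constraint(candidate):    # fast tier succeeded: no escalation needed
            return candidate
    return slow_model(prompt)        # escalate to the slow tier

print(two_tier_sample("Solve the puzzle.", lambda text: text.isupper()))
```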

### Data & Documents

**[information_extraction/](information_extraction/)**
Extracting structured information from unstructured text.
- Named entity recognition
- Type-safe extraction
- Structured output generation

**[mobject/](mobject/)**
Working with structured data types (tables, documents).
- Table queries and transformations
- Document processing
- Structured data operations

**[mify/](mify/)**
Making custom Python objects work with LLMs.
- Object integration with `@mify`
- Custom string representations
- Template integration
- Tool generation from methods

**[rag/](rag/)**
Retrieval-Augmented Generation pipelines.
- Vector search with FAISS
- Relevance filtering
- Grounded answer generation
- Multi-stage RAG pipelines

### Agents & Tools

**[agents/](agents/)**
Implementing agent patterns (ReACT).
- Reasoning and acting loops
- Tool selection and execution
- Multi-turn agent workflows

**[tools/](tools/)**
Tool calling and code execution.
- Code interpreter integration
- Custom tool definition
- Tool argument validation
- Safe code execution

### Safety & Validation

**[safety/](safety/)**
Content safety with Granite Guardian models.
- Harm detection
- Jailbreak prevention
- Bias checking
- Groundedness validation
- Function call hallucination detection

### Integration & Deployment

**[m_serve/](m_serve/)**
Deploying Mellea programs as REST APIs.
- API service creation
- Production deployment patterns
- Client integration

**[library_interop/](library_interop/)**
Integrating with other LLM libraries.
- LangChain message conversion
- OpenAI format compatibility
- Cross-library workflows

**[mcp/](mcp/)**
Model Context Protocol integration.
- MCP tool creation
- Claude Desktop integration
- Langflow integration

### Multimodal

**[image_text_models/](image_text_models/)**
Working with vision-language models.
- Image understanding
- Multimodal prompting
- Vision model backends

### Complete Applications

**[mini_researcher/](mini_researcher/)**
Full-featured research assistant with RAG and validation.
- Multi-model architecture
- Document retrieval
- Safety checks
- Custom validation pipeline

### Interactive Learning

**[notebooks/](notebooks/)**
Jupyter notebooks for interactive exploration.
- Step-by-step tutorials
- Immediate feedback
- Visualization of results

**[tutorial/](tutorial/)**
Python script versions of tutorials.
- Non-interactive examples
- Easy to run and modify
- Version control friendly

### Experimental

**[melp/](melp/)**
⚠️ Experimental lazy evaluation system.
- Lazy computation
- Thunks and deferred execution
- Advanced control flow

### Utilities

**[helper/](helper/)**
Utility functions used across examples.
- Text formatting helpers
- Common utilities

## 🎯 Examples by Use Case

### Text Generation
- [instruct_validate_repair/](instruct_validate_repair/) - Email generation
- [generative_slots/](generative_slots/) - Summarization
- [tutorial/sentiment_classifier.py](tutorial/) - Classification

### Data Processing
- [information_extraction/](information_extraction/) - Entity extraction
- [mobject/](mobject/) - Table operations
- [rag/](rag/) - Document retrieval

### Agent Systems
- [agents/](agents/) - ReACT agents
- [tools/](tools/) - Tool-using agents
- [mini_researcher/](mini_researcher/) - Research assistant

### Production Deployment
- [m_serve/](m_serve/) - API services
- [safety/](safety/) - Content moderation
- [library_interop/](library_interop/) - Integration

### Performance Optimization
- [aLora/](aLora/) - Fast validation
- [sofai/](sofai/) - Cost optimization
- [intrinsics/](intrinsics/) - Specialized tasks

## 📖 Documentation

- **Main README**: [../../README.md](../../README.md)
- **Agent Guidelines**: [../../AGENTS.md](../../AGENTS.md)
- **Dev Docs**: [../dev/](../dev/)

## 🏃 Running Examples

```bash
# Run any Python example
python docs/examples/tutorial/simple_email.py

# Or with uv
uv run docs/examples/tutorial/simple_email.py

# Run notebooks
jupyter notebook docs/examples/notebooks/

# Run tests
uv run pytest test/
```

## 💡 Tips

- Start with [tutorial/](tutorial/) for basics
- Check [notebooks/](notebooks/) for interactive learning
- See [mini_researcher/](mini_researcher/) for complete application patterns
- Refer to individual README.md files in each directory for details

## 🤝 Contributing

Found a bug or have an improvement? See [../../AGENTS.md](../../AGENTS.md) for contribution guidelines.
48 changes: 48 additions & 0 deletions docs/examples/aLora/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,48 @@
# aLoRA Examples

This directory contains examples demonstrating Adaptive Low-Rank Adaptation (aLoRA) for efficient constraint checking and requirement validation.

## Files

### 101_example.py
A comprehensive example showing how to use aLoRA adapters for fast constraint checking with Granite models.

**Key Features:**
- Loading and using custom aLoRA adapters for constraint checking
- Comparing validation speed with and without aLoRA
- Using `ALoraRequirement` for efficient requirement validation
- Demonstrates significant speedup when using aLoRA adapters

**Usage:**
```bash
python docs/examples/aLora/101_example.py
```

### Supporting Files

- **prompt_config.json**: Configuration for training aLoRA adapters
- **stembolt_failure_dataset.jsonl**: Training dataset for the failure mode constraint
- **checkpoints/alora_adapter/**: Pre-trained aLoRA adapter checkpoint

## Concepts Demonstrated

- **aLoRA Adapters**: Using specialized adapters for constraint checking
- **Constraint Validation**: Fast requirement checking with aLoRA
- **Performance Optimization**: Comparing validation times with/without aLoRA
- **Custom Requirements**: Creating domain-specific validation requirements
- **Backend Integration**: Adding aLoRA adapters to HuggingFace backends

## Training Your Own aLoRA

To train custom aLoRA adapters for your constraints:

```bash
m alora train --config docs/examples/aLora/prompt_config.json
```

See `cli/alora/` for more details on training aLoRA adapters.

## Related Documentation

- See `docs/dev/requirement_aLoRA_rerouting.md` for aLoRA architecture details
- See `mellea/backends/adapters/` for adapter implementation
40 changes: 40 additions & 0 deletions docs/examples/agents/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,40 @@
# Agent Examples

This directory contains examples of implementing agent patterns with Mellea, specifically the ReACT (Reasoning and Acting) pattern.

## Files

### react.py
A complete implementation of the ReACT agent pattern that combines reasoning and action in an iterative loop. The agent:
- Thinks about what to do next
- Selects an appropriate tool/action
- Generates arguments for the tool
- Observes the result
- Determines if the goal is achieved

**Key Features:**
- Custom `ReactTool` and `ReactToolbox` classes for tool management
- Dynamic tool selection using Pydantic schemas
- Iterative thought-action-observation loop
- Example with weather lookup tools

**Usage:**
```python
python docs/examples/agents/react.py
```

### react_instruct.py
An alternative implementation of the ReACT pattern using Mellea's instruct-validate-repair paradigm.

## Concepts Demonstrated

- **Tool Management**: Creating and organizing tools for agent use
- **Dynamic Prompting**: Building system prompts with tool descriptions
- **Chat Context**: Using `ChatContext` for multi-turn conversations
- **Structured Output**: Using Pydantic models for type-safe responses
- **Iterative Reasoning**: Implementing thought-action-observation loops

## Related Documentation

- See `docs/dev/tool_calling.md` for more on tool integration
- See `mellea/stdlib/requirements/tool_reqs.py` for tool requirements
55 changes: 55 additions & 0 deletions docs/examples/context/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,55 @@
# Context Examples

This directory contains examples demonstrating how to work with Mellea's context system, particularly when using sampling strategies and validation.

## Files

### contexts_with_sampling.py
Shows how to retrieve and inspect context information when using sampling strategies and validation.

**Key Features:**
- Using `RejectionSamplingStrategy` with requirements
- Accessing `SamplingResult` objects to inspect generation attempts
- Retrieving context for different generation attempts
- Examining validation contexts for each requirement
- Understanding the context tree structure

**Usage:**
```bash
python docs/examples/context/contexts_with_sampling.py
```

## Concepts Demonstrated

- **Sampling Results**: Working with `SamplingResult` objects
- **Context Inspection**: Accessing generation and validation contexts
- **Multiple Attempts**: Examining different generation attempts
- **Context Trees**: Understanding how contexts link together
- **Validation Context**: Inspecting how requirements were evaluated

## Key APIs

```python
# Get sampling result with full context information
res = m.instruct(
"Write a sentence.",
requirements=[...],
strategy=RejectionSamplingStrategy(loop_budget=3),
return_sampling_results=True
)

# Access different generation attempts
res.sample_generations[index]
res.sample_contexts[index]
res.sample_validations[index]

# Navigate context tree
gen_ctx.previous_node.node_data
val_ctx.node_data
```

## Related Documentation

- See `mellea/stdlib/context.py` for context implementation
- See `mellea/stdlib/sampling/` for sampling strategies
- See `docs/dev/spans.md` for context architecture details
Loading