This guide explains how to build, integrate, and contribute new agents to ValueCell's multi-agent financial platform.
Want to quickly create a new agent? You can use an AI coding assistant like GitHub Copilot, Cursor, or another agentic coding tool to bootstrap your agent automatically!
Simply share this guide with your AI assistant and ask:
"Please create a HelloAgent following this guide."
The AI will read through this documentation and generate all necessary files:
- Agent module (`core.py`, `__main__.py`, `__init__.py`)
- Configuration files (YAML and JSON)
- Agent card registration (JSON)
This is the fastest way to get started and learn the agent structure hands-on!
- Architecture Overview
- Create a New Agent
- Add an Agent Configuration
- Run Your Agent
- Use Models and Tools
- Event System
- Launch Backend
- Debugging Agent Behavior
Understanding the system architecture is crucial for building agents:
- API backend: `valuecell.server` (FastAPI/uvicorn). Entry: `valuecell.server.main`
- Agents: located under `valuecell.agents.<agent_name>` with a `__main__.py` for `python -m` execution
- Core contracts: `valuecell.core.types` defines response events and data shapes
- Streaming helpers: `valuecell.core.agent.stream` for emitting events
For more details, see
Creating a new agent involves three core steps:
- Implement the Agent Module - Create the Python module with your agent's logic
- Add an Agent Card - Define the agent's metadata
- Add an Agent Configuration - Configure model parameters
Let's walk through each step in detail.
Create a new directory for your agent under `python/valuecell/agents/`:
```bash
mkdir -p python/valuecell/agents/hello_agent
touch python/valuecell/agents/hello_agent/__init__.py
touch python/valuecell/agents/hello_agent/__main__.py
touch python/valuecell/agents/hello_agent/core.py
```

In `core.py`, subclass `BaseAgent` and implement the `stream()` method:
```python
# file: valuecell/agents/hello_agent/core.py
from typing import AsyncGenerator, Dict, Optional

from valuecell.core.types import BaseAgent, StreamResponse
from valuecell.core.agent import streaming


class HelloAgent(BaseAgent):
    async def stream(
        self,
        query: str,  # User query content
        conversation_id: str,  # Conversation ID
        task_id: str,  # Task ID
        dependencies: Optional[Dict] = None,  # Optional context (language, timezone, etc.)
    ) -> AsyncGenerator[StreamResponse, None]:
        """
        Process user queries and return streaming responses.

        Args:
            query: User query content
            conversation_id: Unique identifier for the conversation
            task_id: Unique identifier for the task
            dependencies: Optional dependencies containing language, timezone, and other context

        Yields:
            StreamResponse: Stream response containing content and completion status
        """
        # Send a few chunks, then finish
        yield streaming.message_chunk("Thinking…")
        yield streaming.message_chunk(f"You said: {query}")
        yield streaming.done()
```

Agent Processing Flow Essentials:

- Return text content: use `streaming.message_chunk()` to return text responses. You can send complete messages or split them into smaller chunks for better streaming UX.
- Signal completion: always end with `streaming.done()` to indicate the agent has finished processing.
This simple flow enables real-time communication with the UI, displaying responses as they're generated.
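The chunk-then-done flow can be sketched with plain `asyncio`. This is a stdlib-only illustration of the pattern; the `DONE` sentinel and `hello_stream`/`collect` names are illustrative, not the actual ValueCell streaming API:

```python
import asyncio
from typing import AsyncGenerator, List

DONE = object()  # sentinel playing the role of streaming.done()


async def hello_stream(query: str) -> AsyncGenerator[object, None]:
    """Yield response chunks, then a completion sentinel."""
    yield "Thinking…"
    yield f"You said: {query}"
    yield DONE


async def collect(query: str) -> List[str]:
    """Consume the stream the way a UI would, chunk by chunk."""
    chunks: List[str] = []
    async for item in hello_stream(query):
        if item is DONE:
            break  # the stream has signaled completion
        chunks.append(item)
    return chunks


print(asyncio.run(collect("hi")))  # → ['Thinking…', 'You said: hi']
```

Because each chunk is yielded as soon as it is ready, a consumer can render partial output immediately instead of waiting for the full response.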
In `__main__.py`, wrap your agent for standalone execution. This file enables launching your agent with `uv run -m`:
```python
# file: valuecell/agents/hello_agent/__main__.py
import asyncio

from valuecell.core.agent import create_wrapped_agent

from .core import HelloAgent

if __name__ == "__main__":
    agent = create_wrapped_agent(HelloAgent)
    asyncio.run(agent.serve())
```

Important

Always place the wrap and serve logic in `__main__.py`. This pattern enables:

- Consistent agent launching via `uv run -m valuecell.agents.your_agent`
- Automatic discovery by the ValueCell backend server
- Standardized transport and event emission
Run your agent:
```bash
cd python
uv run -m valuecell.agents.hello_agent
```

Tip
The wrapper standardizes transport and event emission so your agent integrates with the UI and logs consistently.
Agent configurations define how your agent uses models, embeddings, and runtime parameters. Create a YAML file in `python/configs/agents/`.

Create `python/configs/agents/hello_agent.yaml`:
```yaml
name: "Hello Agent"
enabled: true

# Model configuration
models:
  # Primary model
  primary:
    model_id: "anthropic/claude-haiku-4.5"
    provider: "openrouter"

# Environment variable overrides
env_overrides:
  HELLO_AGENT_MODEL_ID: "models.primary.model_id"
  HELLO_AGENT_PROVIDER: "models.primary.provider"
```

Tip
The YAML filename should match your agent's module name (e.g., `hello_agent.yaml` for the `hello_agent` module). This naming convention helps maintain consistency across the codebase.
For detailed configuration options including embedding models, fallback providers, and advanced patterns, see CONFIGURATION_GUIDE.
Load your agent's configuration using the config manager. The agent name passed to `get_model_for_agent()` must match the YAML filename (without the `.yaml` extension):
```python
from valuecell.utils.model import get_model_for_agent


class HelloAgent(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Automatically loads configuration from hello_agent.yaml
        # Agent name "hello_agent" must match the YAML filename
        self.model = get_model_for_agent("hello_agent")

    async def stream(self, query, conversation_id, task_id, dependencies=None):
        # Use your configured model
        response = await self.model.generate(query)
        yield streaming.message_chunk(response)
        yield streaming.done()
```

You can override configuration via environment variables:
```bash
# Override model at runtime
export HELLO_AGENT_MODEL_ID="anthropic/claude-3.5-sonnet"
export HELLO_AGENT_TEMPERATURE="0.9"

# Run your agent with overrides
uv run -m valuecell.agents.hello_agent
```

Tip
For detailed configuration options including embedding models, fallback providers, and advanced patterns, see CONFIGURATION_GUIDE.
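Conceptually, each `env_overrides` entry maps an environment variable onto a dotted path in the config tree. Here is a stdlib-only sketch of that resolution; the real config manager's behavior may differ:

```python
import os


def apply_env_overrides(config: dict, env_overrides: dict) -> dict:
    """Overwrite dotted config paths with values from the environment."""
    for env_var, dotted_path in env_overrides.items():
        value = os.environ.get(env_var)
        if value is None:
            continue  # no override set; keep the YAML value
        *parents, leaf = dotted_path.split(".")
        node = config
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value
    return config


config = {"models": {"primary": {"model_id": "anthropic/claude-haiku-4.5",
                                 "provider": "openrouter"}}}
os.environ["HELLO_AGENT_MODEL_ID"] = "anthropic/claude-3.5-sonnet"
apply_env_overrides(config, {
    "HELLO_AGENT_MODEL_ID": "models.primary.model_id",
    "HELLO_AGENT_PROVIDER": "models.primary.provider",
})
print(config["models"]["primary"]["model_id"])  # → anthropic/claude-3.5-sonnet
```

Note that `HELLO_AGENT_PROVIDER` is left unset above, so `provider` keeps its YAML value.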
Agent Cards declare how your agent is discovered and served. Place a JSON file under:

`python/configs/agent_cards/`

The `name` must match your agent class name (e.g., `HelloAgent`). The `url` determines the host/port your wrapped agent will bind to.
```json
{
  "name": "HelloAgent",
  "url": "http://localhost:10010",
  "description": "A minimal example agent that echoes input.",
  "capabilities": { "streaming": true, "push_notifications": false },
  "default_input_modes": ["text"],
  "default_output_modes": ["text"],
  "version": "1.0.0",
  "skills": [
    {
      "id": "echo",
      "name": "Echo",
      "description": "Echo user input back as streaming chunks.",
      "tags": ["example", "echo"]
    }
  ]
}
```

Tip
- The filename can be anything (e.g., `hello_agent.json`), but `name` must equal your agent class name (it is used by `create_wrapped_agent`)
- An optional `enabled: false` will disable loading. Extra fields like `display_name` or `metadata` are ignored
- Change the `url` port if it's occupied. The wrapper reads the host/port from this URL when serving
- If you see "No agent configuration found … in agent cards", check the `name` and the JSON file location
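As a sanity check, you can see how the host and port fall out of the card's `url` with nothing but the standard library. This is a sketch of the idea, not the wrapper's actual code:

```python
import json
from urllib.parse import urlparse

# A trimmed-down agent card, as it might appear in python/configs/agent_cards/
card_json = """
{
  "name": "HelloAgent",
  "url": "http://localhost:10010",
  "capabilities": {"streaming": true, "push_notifications": false}
}
"""

card = json.loads(card_json)
assert card["name"] == "HelloAgent"  # must equal the agent class name

parsed = urlparse(card["url"])
host, port = parsed.hostname, parsed.port
print(host, port)  # → localhost 10010
```

If two cards point at the same port, the second agent will fail to bind, which is why the tips above suggest changing the `url` port when it's occupied.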
For local web development, simply start the backend server which will automatically load all agents:
```bash
# Start the full stack (frontend + backend with all agents)
bash start.sh

# Or start backend only
bash start.sh --no-frontend
```

The backend will automatically discover and initialize your agent based on the agent card configuration.
You can also run your agent directly using Python module syntax:
```bash
cd python
uv run python -m valuecell.agents.hello_agent
```

For the packaged client application (Tauri):
- The agent will be automatically included in the build
- No additional registration is required
- Test using workflow builds: `.github/workflows/mac_build.yml`
Tip
Environment variables are loaded from the system application directory:

- macOS: `~/Library/Application Support/ValueCell/.env`
- Linux: `~/.config/valuecell/.env`
- Windows: `%APPDATA%\ValueCell\.env`
The `.env` file will be auto-created from `.env.example` on first run if it doesn't exist.
Both local development and the packaged client use the same location.
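The per-OS lookup and first-run copy described above might look like the following stdlib sketch. `env_file_path` and `ensure_env_file` are hypothetical helpers written for illustration, not ValueCell functions:

```python
import os
import shutil
import sys
from pathlib import Path


def env_file_path(platform: str = sys.platform) -> Path:
    """Return the platform-specific .env location described above."""
    home = Path.home()
    if platform == "darwin":  # macOS
        return home / "Library" / "Application Support" / "ValueCell" / ".env"
    if platform.startswith("win"):  # Windows
        return Path(os.environ.get("APPDATA", str(home))) / "ValueCell" / ".env"
    return home / ".config" / "valuecell" / ".env"  # Linux and others


def ensure_env_file(env_path: Path, example: Path) -> Path:
    """Auto-create .env from .env.example on first run if it doesn't exist."""
    if not env_path.exists():
        env_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(example, env_path)
    return env_path
```

Keeping one shared location means a key you configure for local development is also picked up by the packaged client.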
Agents can use tools to extend their capabilities. Tools are Python functions that the agent can call during execution.
```python
from agno.agent import Agent
from agno.db.in_memory import InMemoryDb

from valuecell.core.types import BaseAgent
from valuecell.utils.model import get_model_for_agent


def search_stock_info(ticker: str) -> str:
    """
    Search for stock information by ticker symbol.

    Args:
        ticker: Stock ticker symbol (e.g., "AAPL", "GOOGL")

    Returns:
        Stock information as a string
    """
    # Your tool implementation here
    return f"Stock info for {ticker}"


def calculate_metrics(data: dict) -> dict:
    """
    Calculate financial metrics from stock data.

    Args:
        data: Dictionary containing financial data

    Returns:
        Dictionary with calculated metrics
    """
    # Your calculation logic here
    return {"pe_ratio": 25.5, "market_cap": "2.5T"}


class MyAgent(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.inner = Agent(
            # ... model, db, and other configuration ...
            tools=[search_stock_info, calculate_metrics],  # Register your tools
        )
```

- Clear docstrings: Tools should have descriptive docstrings that explain their purpose and parameters
- Type hints: Use type hints for all parameters and return values
- Error handling: Implement proper error handling within tools
- Focused functionality: Each tool should do one thing well
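Putting those guidelines together, a complete tool might look like this. `fetch_price` and its in-memory data are hypothetical, for illustration only:

```python
def fetch_price(ticker: str) -> str:
    """
    Look up the latest price for a ticker symbol.

    Args:
        ticker: Stock ticker symbol (e.g., "AAPL")

    Returns:
        A human-readable price string, or an error message the
        agent can relay to the user.
    """
    # Hypothetical in-memory source standing in for a real data provider
    prices = {"AAPL": 150.5, "GOOGL": 2800.3}

    # Validate input instead of raising, so the agent gets a usable message
    if not ticker or not ticker.isalpha():
        return "Error: ticker must be a non-empty alphabetic symbol"

    price = prices.get(ticker.upper())
    if price is None:
        return f"Error: no data for {ticker!r}"
    return f"{ticker.upper()}: {price:.2f} USD"
```

The docstring and type hints double as the tool's interface description for the model, and the error strings keep failures inside the conversation rather than crashing the agent.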
Tip
For more information, refer to Tools - Agno.
The event system enables real-time communication between agents and the UI. All events are defined in `valuecell.core.types`.
Events for streaming agent responses:
- `MESSAGE_CHUNK` - A chunk of the agent's response message
- `TOOL_CALL_STARTED` - The agent begins executing a tool
- `TOOL_CALL_COMPLETED` - Tool execution finished
- `COMPONENT_GENERATOR` - Rich-format components (charts, tables, reports, etc.)
- `DONE` - Indicates the stream is finished
The `COMPONENT_GENERATOR` event allows agents to send rich UI components beyond plain text. This enables interactive visualizations, structured data displays, and custom widgets.
Supported Component Types:
- `report` - Research reports with formatted content
- `profile` - Company or stock profiles
- `filtered_line_chart` - Interactive line charts with data filtering
- `filtered_card_push_notification` - Notification cards with filter options
- `scheduled_task_controller` - UI for managing scheduled tasks
- `scheduled_task_result` - Display of scheduled task results
Example: Emitting a Component
```python
from valuecell.core.agent import streaming

# Create a line chart component
yield streaming.component_generator(
    component_type="filtered_line_chart",
    content={
        "title": "Stock Price Trends",
        "data": [
            ["Date", "AAPL", "GOOGL", "MSFT"],
            ["2025-01-01", 150.5, 2800.3, 380.2],
            ["2025-01-02", 152.1, 2815.7, 382.5],
        ],
        "create_time": "2025-01-15 10:30:00",
    },
)

# Create a report component
yield streaming.component_generator(
    component_type="report",
    content={
        "title": "Q4 2024 Financial Analysis",
        "data": "## Executive Summary\n\nRevenue increased by 15%...",
        "url": "https://example.com/reports/q4-2024",
        "create_time": "2025-01-15 10:30:00",
    },
)
```

Tip
Component data structures are defined in `valuecell.core.types`. See `ReportComponentData`, `FilteredLineChartComponentData`, and the other component payload classes for required fields.
Use the `streaming.*` helpers to emit events. Here's a practical example based on the Research Agent implementation:
```python
from agno.agent import Agent

from valuecell.core.agent import streaming
from valuecell.utils.model import get_model_for_agent


class MyAgent(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.inner = Agent(
            model=get_model_for_agent("my_agent"),
            tools=[...],  # your tool functions
            # ... other configuration
        )

    async def stream(self, query, conversation_id, task_id, dependencies=None):
        # Stream responses from the inner agent
        response_stream = self.inner.arun(
            query,
            stream=True,
            stream_intermediate_steps=True,
            session_id=conversation_id,
        )

        # Process and forward events from the inner agent
        async for event in response_stream:
            if event.event == "RunContent":
                # Emit message chunks as they arrive
                yield streaming.message_chunk(event.content)
            elif event.event == "ToolCallStarted":
                # Notify UI that a tool is being called
                yield streaming.tool_call_started(
                    event.tool.tool_call_id,
                    event.tool.tool_name,
                )
            elif event.event == "ToolCallCompleted":
                # Send tool results back to UI
                yield streaming.tool_call_completed(
                    event.tool.result,
                    event.tool.tool_call_id,
                    event.tool.tool_name,
                )

        # Signal completion
        yield streaming.done()
```

Tip
Refer to Running Agents - Agno for details.
Tip
The UI automatically renders different event types appropriately - messages as text, tool calls with icons, etc. See the complete Research Agent implementation in `python/valuecell/agents/research_agent/core.py`.
From the `python/` folder:

```bash
cd python
python -m valuecell.server.main
```

Run the Hello Agent as a standalone service:

```bash
cd python
python -m valuecell.agents.hello_agent
```

Tip
Set your environment first. At minimum, configure `SILICONFLOW_API_KEY` (and `OPENROUTER_API_KEY`) and `SEC_EMAIL`. See CONFIGURATION_GUIDE.
Optional: set `AGENT_DEBUG_MODE=true` to trace model behavior locally.
Use `AGENT_DEBUG_MODE` to enable verbose traces from agents and planners:
- Logs prompts, tool calls, intermediate steps, and provider response metadata
- Helpful to investigate planning decisions and tool routing during development
Enable it in your `.env`:

```bash
AGENT_DEBUG_MODE=true
```

Caution

Debug mode can log sensitive inputs/outputs and increases log volume and latency. Enable it only in local/dev environments; keep it off in production.
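For reference, a boolean flag like this is typically interpreted along the following lines. This is a generic sketch; the exact parsing ValueCell uses may differ:

```python
import logging
import os


def debug_mode_enabled(env=os.environ) -> bool:
    """Interpret AGENT_DEBUG_MODE as a boolean flag."""
    return str(env.get("AGENT_DEBUG_MODE", "")).strip().lower() in {"1", "true", "yes"}


# Pick a log level based on the flag (DEBUG when tracing, INFO otherwise)
level = logging.DEBUG if debug_mode_enabled({"AGENT_DEBUG_MODE": "true"}) else logging.INFO
print(logging.getLevelName(level))  # → DEBUG
```

Unset or unrecognized values fall through to the quieter default, so forgetting the variable never accidentally enables debug output.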
If you have questions:
- 💬 Join our Discord
- 📧 Email us at public@valuecell.ai
- 🐛 Open an issue for bug reports
Thank you for contributing to ValueCell! 🚀🚀🚀