# Mesh Agent Routing Standard

*A decentralized capability discovery network for autonomous agents*
How does an AI agent find another agent that can review code, generate images, or search the web — without a central registry?
mars-protocol is a Kademlia DHT over QUIC that lets machines publish, discover, and verify capabilities across a global peer-to-peer mesh. No API keys. No platform lock-in. No single point of failure.
The MARS mesh is live with 65+ services. Connect to any hub:
| Hub | Address | Location |
|---|---|---|
| us-east | 5.161.53.251:4433 | Ashburn, VA |
| us-west | 5.78.197.92:4433 | Hillsboro, OR |
| eu-central | 46.225.55.16:4433 | Nuremberg, DE |
| ap-southeast | 5.223.69.128:4433 | Singapore |
**Python**

```bash
pip install mesh-protocol
```

```python
from mesh_protocol import MeshClient

# Connect via a local gateway
with MeshClient("http://localhost:3000") as client:
    # Discover AI search providers
    providers = client.discover("data/search")
    for p in providers:
        print(f"{p.type} -> {p.endpoint}")

    # Discover LLM inference endpoints
    llms = client.discover("compute/inference/text-generation")

    # Publish your own capability
    client.publish(
        "compute/analysis/code-review",
        endpoint="https://my-agent.example.com/review",
        params={"languages": ["rust", "python"]},
    )
```

**TypeScript**

```bash
npm install mars-protocol
```

```typescript
import { MeshClient } from "mars-protocol";

const client = new MeshClient("http://localhost:3000");
const providers = await client.discover("compute/inference");
await client.publish("compute/analysis/code-review", {
  endpoint: "https://my-agent.example.com/review",
});
```

**Rust**

```bash
cargo add mars-client
```

```rust
let mut client = MeshClient::new(keypair, bind_addr).await?;
client.bootstrap(&[NodeAddr::quic("5.161.53.251:4433")]).await?;
client
    .publish_capability(
        "compute/inference/text-generation",
        "https://my-agent.example.com/generate",
        None,
        &seed,
    )
    .await?;
let results = client.discover(&routing_key("compute/inference")).await?;
```

**HTTP gateway**

```bash
# Start the gateway
./mesh-gateway --seed 5.161.53.251:4433 --listen 0.0.0.0:3000

# Publish
curl -X POST http://localhost:3000/v1/publish \
  -H "Content-Type: application/json" \
  -d '{"type":"compute/inference/text-generation","endpoint":"https://..."}'

# Discover (quote the URL so the shell doesn't interpret the query string)
curl "http://localhost:3000/v1/discover?type=compute/inference"
```

Turn any machine with a GPU into a mesh inference provider:
```bash
# Install Ollama + pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama4

# Interactive setup (first time)
pip install httpx
python tools/gpu-provider/provider.py

# Make it permanent — survives reboot, auto-restarts on crash
python tools/gpu-provider/provider.py --install
```

Other agents discover your GPU instantly:
```python
providers = client.discover("compute/inference/text-generation")
# → "llama4:latest (NVIDIA GeForce RTX 3090)" — free, us-east
```

See the GPU Provider docs for pricing, regions, and service management.
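Once several GPUs are on the mesh, an agent has to choose one. A minimal selection sketch follows; the `price` and `region` fields on the provider records are assumptions for illustration, not a documented schema:

```python
# Hypothetical discovery results; real records come from client.discover(...).
providers = [
    {"endpoint": "https://gpu-a.example.com", "price": 0.00, "region": "us-east"},
    {"endpoint": "https://gpu-b.example.com", "price": 0.02, "region": "eu-central"},
]

def pick(providers, region=None):
    # Prefer providers in the requested region, then take the cheapest offer.
    pool = [p for p in providers if region is None or p["region"] == region]
    return min(pool, key=lambda p: p["price"], default=None)

best = pick(providers, region="us-east")
```

Because discovery returns plain records, any policy (latency, price, model name) is just a sort key on the client side.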
Connect existing ecosystems to the mesh:
| Bridge | Install | What it does |
|---|---|---|
| MCP Bridge | `pip install mesh-mcp-bridge` | Publish MCP server tools → mesh. Discover mesh capabilities as MCP tools. |
| OpenAPI Bridge | `pip install httpx pyyaml` | Point at any Swagger/OpenAPI spec → register all endpoints on mesh. |
```bash
# Publish an MCP server's tools to the mesh
mesh-mcp-bridge publish --gateway http://localhost:3000 --mcp-server "python my_server.py"

# Register an entire REST API from its OpenAPI spec
python bridges/openapi/openapi_bridge.py --gateway http://localhost:3000 \
  --spec https://petstore.swagger.io/v2/swagger.json
```

Every AI agent framework reinvents service discovery. MCP servers need manual configuration. Tool registries are centralized bottlenecks. When you have thousands of agents across multiple organizations, "just add it to the config file" doesn't scale.
mars-protocol makes capability discovery a network primitive:
```text
┌──────────────┐          ┌──────────────┐          ┌──────────────┐
│   Agent A    │          │   Mesh Hub   │          │   Agent B    │
│              │  STORE   │              │   FIND   │              │
│  "I can do   │─────────▶│   Kademlia   │◀─────────│  "Who can    │
│   code       │          │   DHT over   │          │   review     │
│   review"    │          │   QUIC       │          │   code?"     │
└──────────────┘          └──────────────┘          └──────────────┘
```
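Both arrows in the diagram follow standard Kademlia mechanics: node IDs and routing keys share one ID space, a STORE lands on the nodes closest to the key by XOR distance, and a FIND walks toward those same nodes. A generic sketch of that metric (not mars-protocol's actual wire code):

```python
def xor_distance(a: bytes, b: bytes) -> int:
    # Kademlia's distance metric: XOR the two IDs, compare as integers.
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def k_closest(node_ids: list[bytes], key: bytes, k: int = 3) -> list[bytes]:
    # STORE replicates a descriptor to the k nodes nearest the routing key;
    # FIND converges on the same k nodes, so publisher and seeker meet.
    return sorted(node_ids, key=lambda n: xor_distance(n, key))[:k]

# Toy 32-byte IDs to show the ordering (real IDs are hashes).
nodes = [bytes([i]) * 32 for i in (0x10, 0x42, 0x80, 0xF0)]
key = bytes([0x45]) * 32
closest = k_closest(nodes, key)
```

Because closeness is deterministic, no node needs a global view: each hop only has to know peers closer to the key than itself.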
| Package | Language | Install | Docs |
|---|---|---|---|
| mars-client | Rust | `cargo add mars-client` | crates.io |
| mesh-protocol | Python | `pip install mesh-protocol` | PyPI |
| mars-protocol | TypeScript | `npm install mars-protocol` | npm |
| mesh-mcp-bridge | Python | `pip install mesh-mcp-bridge` | PyPI |
| openapi-bridge | Python | `pip install httpx pyyaml` | README |
A descriptor is a signed, content-addressed record that says "I can do X, reach me at Y":
```text
Descriptor {
  publisher:    did:mesh:z6Mkt...                  # Ed25519 identity (self-certifying)
  schema_hash:  blake3(core/capability)
  topic:        "compute/inference/text-generation"
  routing_keys: [blake3("compute"), blake3("compute/inference"),
                 blake3("compute/inference/text-generation")]
  payload:      { endpoint: "https://...", model: "glm-5", max_tokens: 4096 }
  signature:    Ed25519(publisher, canonical_cbor(descriptor))
  ttl:          3600
  sequence:     1                                   # monotonic — newer replaces older
}
```
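The `sequence` and `ttl` fields drive replacement and expiry: a node overwrites a stored record only with a strictly newer sequence from the same publisher, and drops records past their TTL. A minimal sketch of that rule (the `Descriptor` shape here is illustrative, not the wire format):

```python
import time
from dataclasses import dataclass

@dataclass
class Descriptor:
    publisher: str    # did:mesh:... identity string
    topic: str        # e.g. "compute/inference/text-generation"
    sequence: int     # monotonic publish counter
    ttl: int          # lifetime of the record
    stored_at: float  # local time the node received it

def should_replace(stored: Descriptor, incoming: Descriptor) -> bool:
    # Same (publisher, topic) record: accept only a strictly newer sequence,
    # so a replayed old descriptor can never clobber a fresh one.
    return (stored.publisher == incoming.publisher
            and stored.topic == incoming.topic
            and incoming.sequence > stored.sequence)

def is_expired(d: Descriptor, now=None) -> bool:
    # Expired records are dropped instead of served to FIND queries.
    return ((now if now is not None else time.time()) - d.stored_at) >= d.ttl
```

This is why publishers re-announce before the TTL lapses: liveness is proven by continued publication, not by deletion messages.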
Routing keys are hierarchical — searching for compute/inference finds all inference providers (text, image, speech, embeddings), while compute/inference/text-generation narrows to exactly that.
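The prefix expansion can be sketched as follows. Note that this sketch substitutes BLAKE2b from Python's standard library for the protocol's BLAKE3 (which `hashlib` lacks); the structure, hashing every prefix of the topic path, is the point:

```python
import hashlib

def routing_keys(topic: str) -> list[str]:
    # Hash every prefix of the topic path: "a/b/c" yields keys for
    # "a", "a/b", and "a/b/c", so broad queries still match specific providers.
    parts = topic.split("/")
    prefixes = ["/".join(parts[:i + 1]) for i in range(len(parts))]
    return [hashlib.blake2b(p.encode(), digest_size=32).hexdigest()
            for p in prefixes]

keys = routing_keys("compute/inference/text-generation")
# keys[1] equals the key a "compute/inference" query hashes to, so this
# provider is found by both the broad and the exact search.
```

Publishing one descriptor under all of its prefix keys trades a little extra DHT storage for queries that never need wildcard support.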
```bash
# Generate hub identity
./mesh-hub --generate-keypair

# Edit config (see examples/hub.toml)
cp examples/hub.toml mesh-hub.toml
$EDITOR mesh-hub.toml

# Run
./mesh-hub --config mesh-hub.toml
```

Hub features: disk-backed storage (redb), multi-tenant with MU metering, admin API, hub-to-hub peering via gossip, Prometheus metrics, rate limiting, SSRF protection.
See the Operator Guide for full configuration reference. Multi-region deployment scripts in deploy/.
Use cases:

- An AI agent needs code review. It queries the mesh for …
- GPU owners share their hardware on the mesh. Agents discover the cheapest/fastest provider for their workload — no marketplace middleman.
- Publish MCP server tools to the mesh. Agents discover tools dynamically — no config files, no central registry, no single point of failure.
```bash
cargo test --workspace        # Run the full test suite (230+ tests)
cargo test --package mesh-hub # Hub hardening tests only
```

Dual-licensed under MIT or Apache-2.0 at your option.