
Announce cluster nodes via query param instead#923

Merged
kixelated merged 10 commits into main from cluster-primary
Feb 6, 2026

Conversation

@kixelated
Collaborator

@kixelated kixelated commented Feb 6, 2026

The node announcing itself via a broadcast didn't actually work in practice... unless the wrong token was being used.

The root node would get announcements but it wouldn't forward them to other nodes. So leaf1 wouldn't know about leaf2.

@kixelated kixelated mentioned this pull request Feb 6, 2026
@kixelated
Collaborator Author

@jhiatt-verkada compare notes?

@kixelated kixelated marked this pull request as ready for review February 6, 2026 21:09
@kixelated kixelated changed the title from "(untested) Announce cluster nodes via query param instead" to "Announce cluster nodes via query param instead" Feb 6, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 6, 2026

Walkthrough

Introduces a public AuthParams struct (path: String, jwt: Option<String>, register: Option<String>) with constructors and changes Auth::verify to accept &AuthParams. AuthToken.cluster becomes Option<String> and AuthToken.register was added. AuthError gains ExpectedCluster. Verification derives root from params.path, computes register from params.register and token claims, and narrows permissions based on matched suffix. Adds ClusterRegistration and Cluster::register(). Call sites (cluster, connection, web, tests, and justfile) updated to use AuthParams and to manage registration lifetimes.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 57.14% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title directly describes the main technical change: shifting from broadcast-based node announcements to query parameter-based announcements.
Description check ✅ Passed The description explains the problem with the broadcast approach and indicates the solution involves query parameters, aligning with the changeset modifications to auth, cluster, connection, and web modules.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
rs/moq-relay/src/auth.rs (1)

318-816: 🛠️ Refactor suggestion | 🟠 Major

Missing test coverage for the new cluster/node authentication logic.

The verify() method gained a significant new code path at lines 302-307 handling the four (params.node, claims.cluster) combinations, but none of the existing tests exercise it. Consider adding tests for:

  1. claims.cluster=true + node=Some(...) → token.cluster == Some(node)
  2. claims.cluster=true + node=None → Err(ExpectedNode)
  3. claims.cluster=false + node=Some(...) → Err(ExpectedCluster)
  4. claims.cluster=false + node=None → token.cluster == None

This is especially important given the PR title says "(untested)".

As per coding guidelines, "Rust tests should be integrated within source files using inline test modules".
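A standalone sketch of what those four tests could assert, with `derive_cluster` as a hypothetical extraction of the match in `verify()` (the enum variants mirror the PR's `AuthError`, but the function name and signature are illustrative, not the actual API):

```rust
// Hypothetical, minimal re-statement of the (params.node, claims.cluster)
// match so the four outcomes can be asserted in isolation.
#[derive(Debug, PartialEq)]
enum AuthError {
    ExpectedNode,
    ExpectedCluster,
}

fn derive_cluster(node: Option<&str>, cluster: bool) -> Result<Option<String>, AuthError> {
    match (node, cluster) {
        (Some(n), true) => Ok(Some(n.to_owned())),
        (None, true) => Err(AuthError::ExpectedNode),
        (Some(_), false) => Err(AuthError::ExpectedCluster),
        (None, false) => Ok(None),
    }
}

fn main() {
    assert_eq!(derive_cluster(Some("leaf2"), true), Ok(Some("leaf2".to_owned())));
    assert_eq!(derive_cluster(None, true), Err(AuthError::ExpectedNode));
    assert_eq!(derive_cluster(Some("leaf2"), false), Err(AuthError::ExpectedCluster));
    assert_eq!(derive_cluster(None, false), Ok(None));
}
```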

rs/moq-relay/src/web.rs (1)

329-329: ⚠️ Potential issue | 🟡 Minor

Typo: "containts" → "contains".

-	// The path containts a broadcast/track
+	// The path contains a broadcast/track
🤖 Fix all issues with AI agents
In `@rs/moq-relay/src/auth.rs`:
- Around line 302-307: The current branch in auth.rs trusts the client-supplied
params.node when claims.cluster==true, allowing spoofing; update the logic that
builds `cluster` (the match on `params.node.as_deref()` and `claims.cluster`) to
validate `params.node` before accepting it: either require a node identity
embedded in the JWT claims (e.g., add `claims.node` and compare equality to
`params.node`) or check `params.node` against your server-side
allowlist/registry of valid node IDs, and return a new AuthError (e.g.,
AuthError::InvalidNode) if validation fails; ensure the code paths that return
AuthError::ExpectedNode / ExpectedCluster remain but replace the verbatim
acceptance of `Some(node)` with this validation step so downstream code (e.g.,
cluster registration) cannot be hijacked.

In `@rs/moq-relay/src/cluster.rs`:
- Around line 108-124: The current guard in register() uses
self.config.node.is_some() which wrongly treats a node that sets config.node
equal to config.root as non-root; change the check to detect whether this
process is actually a leaf vs the real root: compute is_root by comparing
config.node (Option<String>) to config.root (String) — treat missing config.node
as root and treat Some(n) equal to config.root as root; return None only when
this process is a leaf (i.e., config.node is Some and != config.root). Update
the early-return in register() accordingly so only non-root nodes are skipped;
keep the rest of register() (creating Broadcast, path, publish_broadcast,
ClusterRegistration::new) unchanged.
🧹 Nitpick comments (3)
rs/moq-relay/src/cluster.rs (2)

250-254: local node name is interpolated into the URL without percent-encoding.

If local or token contain characters like &, =, or spaces, the URL will be malformed. While hostnames typically won't contain these, a misconfigured cluster-node value could cause silent breakage.

Proposed fix using url::Url's query-pair builder
-		let url = match token.is_empty() {
-			true => Url::parse(&format!("https://{remote}/?node={local}"))?,
-			false => Url::parse(&format!("https://{remote}/?jwt={token}&node={local}"))?,
-		};
+		let mut url = Url::parse(&format!("https://{remote}/"))?;
+		{
+			let mut q = url.query_pairs_mut();
+			if !token.is_empty() {
+				q.append_pair("jwt", &token);
+			}
+			q.append_pair("node", local);
+		}

160-163: Noop broadcast keeps run_remote alive but leaks intent.

The comment on line 162 says the dummy broadcast prevents run_remote from closing, because run_remote breaks out of its loop when origin.closed() resolves. This works but couples run_remote's lifecycle to an unrelated empty broadcast. A clearer approach might be to pass a tokio::sync::watch or CancellationToken for shutdown signaling instead.

This is fine for now given the "(untested)" status—just flagging for future cleanup.

rs/moq-relay/src/web.rs (1)

232-237: Consider a From impl to reduce AuthParams construction boilerplate.

The same AuthParams { path, jwt: query.jwt, node: query.node } pattern appears three times (lines 232, 302, 339). A small From<(String, AuthQuery)> for AuthParams (or a method on AuthQuery) would DRY this up.

Example helper
impl AuthQuery {
    fn into_params(self, path: String) -> AuthParams {
        AuthParams {
            path,
            jwt: self.jwt,
            node: self.node,
        }
    }
}

Also applies to: 302-307, 339-344

Comment on lines 302 to 307
let cluster = match (params.node.as_deref(), claims.cluster) {
	(Some(node), true) => Some(node.to_owned()),
	(None, true) => return Err(AuthError::ExpectedNode),
	(Some(_), false) => return Err(AuthError::ExpectedCluster),
	(None, false) => None,
};

@coderabbitai coderabbitai bot Feb 6, 2026


⚠️ Potential issue | 🟠 Major

Security consideration: the node parameter is fully client-supplied.

A client with a valid cluster JWT (claims.cluster == true) can set node to any arbitrary string, and that string is trusted verbatim as the cluster node identity. This means a compromised or malicious cluster leaf can impersonate another node's identity by passing node=<victim-node>. The root node will then register a broadcast under that victim's path (see cluster.rs Line 120-121), potentially hijacking or shadowing its announcements.

Consider validating node against a known allowlist, or embedding the expected node identity inside the JWT claims themselves so it can't be forged.
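A sketch of the claims-bound variant (hypothetical: `claims_node` stands in for a `claims.node` field that does not exist in this PR, and `InvalidNode` is the suggested new error variant):

```rust
#[derive(Debug, PartialEq)]
enum NodeError {
    ExpectedNode,
    InvalidNode,
}

// Accept the query-param node only when it matches the identity signed
// into the JWT, so a compromised cluster leaf cannot impersonate another node.
fn validate_node(param: Option<&str>, claims_node: Option<&str>) -> Result<String, NodeError> {
    match (param, claims_node) {
        (Some(p), Some(c)) if p == c => Ok(p.to_owned()),
        (Some(_), _) => Err(NodeError::InvalidNode),
        (None, _) => Err(NodeError::ExpectedNode),
    }
}

fn main() {
    assert_eq!(validate_node(Some("leaf1"), Some("leaf1")), Ok("leaf1".to_owned()));
    assert_eq!(validate_node(Some("victim"), Some("leaf1")), Err(NodeError::InvalidNode));
    assert_eq!(validate_node(None, Some("leaf1")), Err(NodeError::ExpectedNode));
}
```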


Collaborator Author


cluster == true is only for cluster nodes, so they have extra permissions. But yes.

Contributor


@kixelated, thanks for confirming! You're right that cluster == true grants elevated permissions specifically for cluster nodes. The concern is that even with that elevated trust, a compromised cluster node could still impersonate another cluster node by supplying any node value in the query parameter.

If you'd like to address this, I can help you:

  • Add a node field to the JWT claims so the expected node identity is cryptographically bound
  • Or implement validation against a server-side allowlist of known node IDs
  • Or create a follow-up issue to track this for a future PR

Let me know if you'd like any assistance, or if you prefer to handle this separately! 🔒

Comment on lines 108 to 124
// Register a cluster node's presence.
//
// Returns a [ClusterRegistration] that should be kept alive for the duration of the session.
pub fn register(&self, token: &AuthToken) -> Option<ClusterRegistration> {
	if self.config.node.is_some() {
		// Only the root node can register other nodes.
		return None;
	}

	let node = token.cluster.clone()?;
	let broadcast = Broadcast::produce();

	let path = moq_lite::Path::new(&self.config.prefix).join(&node);
	self.primary.publish_broadcast(path, broadcast.consume());

	Some(ClusterRegistration::new(node, broadcast))
}
Contributor


⚠️ Potential issue | 🔴 Critical

Root node with config.node set will silently skip all cluster registrations.

The guard if self.config.node.is_some() intends to restrict registration to the root node only. However, a root node can also have config.node set (when config.root == config.node, per the filter at line 139). In that case, register() always returns None, so no leaf node can ever register with the root.

The condition should check whether this node is actually a leaf (i.e., has a different root to connect to), not simply whether config.node is present.

Proposed fix
 	pub fn register(&self, token: &AuthToken) -> Option<ClusterRegistration> {
-		if self.config.node.is_some() {
-			// Only the root node can register other nodes.
+		// Only the root node can register other nodes.
+		// A leaf node has a root that differs from its own node name.
+		if self.config.root.as_ref().is_some_and(|root| self.config.node.as_ref() != Some(root)) {
 			return None;
 		}
Verification script:
#!/bin/bash
# Verify how config.node and config.root interact to determine root vs leaf
# Check if there are integration tests or other callers that set both config.root and config.node
rg -n 'cluster.*node|cluster.*root|ClusterConfig' --type=rust -C3
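The root/leaf decision can be stated as a pure function (a sketch only; the real check lives on `Cluster`, and the exact `Option` shapes of `config.node` and `config.root` may differ):

```rust
// A process is the root when it has no node name configured, or when its
// node name equals the configured root. Only a true leaf (node set and
// different from root) should be skipped by register().
fn is_root(node: Option<&str>, root: Option<&str>) -> bool {
    match (node, root) {
        (None, _) => true,
        (Some(_), None) => true,
        (Some(n), Some(r)) => n == r,
    }
}

fn main() {
    assert!(is_root(None, Some("root.example")));
    assert!(is_root(Some("root.example"), Some("root.example")));
    assert!(!is_root(Some("leaf1.example"), Some("root.example")));
}
```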

Use url::Url's query_pairs_mut() instead of string interpolation
to properly encode node names and tokens in cluster URLs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
rs/moq-relay/src/cluster.rs (1)

209-218: ⚠️ Potential issue | 🟠 Major

Potential panic: cancellation of a skipped node hits unwrap() on missing key.

When node == local at line 210, we continue without inserting into active. If the origin later emits a cancellation (node, None) for that same local node, line 217's .unwrap() panics because the key doesn't exist in the map.

Proposed fix
 			let Some(origin) = origin else {
 				tracing::info!(%node, "origin cancelled");
-				active.remove(node.as_str()).unwrap().abort();
+				if let Some(handle) = active.remove(node.as_str()) {
+					handle.abort();
+				}
 				continue;
 			};
🤖 Fix all issues with AI agents
In `@rs/moq-relay/src/cluster.rs`:
- Around line 252-259: The code appends the JWT into the URL query (via
Url::parse and query_pairs_mut) and then logs the full Url
(tracing::info!(%url,...)), leaking the token; fix by never logging the query
containing "jwt" — after building the Url and before any tracing::info! call,
create a local display_url (clone the Url or build from host/path) and call
display_url.set_query(None) or remove only the "jwt" pair, then log that
sanitized display_url (or log url.host()/url.path() explicitly) instead of the
original Url to ensure the token is not emitted.
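A std-only sketch of the idea (the actual fix would clone and sanitize a `url::Url`; `display_url` here is illustrative):

```rust
// Drop everything from the first '?' onward so a jwt in the query
// string never reaches the logs.
fn display_url(url: &str) -> &str {
    url.split('?').next().unwrap_or(url)
}

fn main() {
    assert_eq!(
        display_url("https://root.example/?jwt=secret&node=leaf1"),
        "https://root.example/"
    );
    assert_eq!(display_url("https://root.example/"), "https://root.example/");
}
```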


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
rs/moq-relay/src/cluster.rs (1)

212-221: ⚠️ Potential issue | 🔴 Critical

unwrap() on active.remove() can panic for skipped nodes.

If a node that was skipped (e.g., local at line 213) later gets cancelled, origins.announced() will yield (node, None). The code reaches line 220 and calls active.remove(...).unwrap(), but since the node was never inserted into active, this panics.

Proposed fix
 let Some(origin) = origin else {
     tracing::info!(%node, "origin cancelled");
-    active.remove(node.as_str()).unwrap().abort();
+    if let Some(handle) = active.remove(node.as_str()) {
+        handle.abort();
+    }
     continue;
 };
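The fix reduces to the standard non-panicking removal pattern; a minimal sketch (with a `u64` standing in for the real task handle):

```rust
use std::collections::HashMap;

// Remove and "abort" a tracked task if it exists; a node that was
// skipped (never inserted) is silently ignored instead of panicking.
fn cancel(active: &mut HashMap<String, u64>, node: &str) -> bool {
    if let Some(_handle) = active.remove(node) {
        // in the relay this is where _handle.abort() would run
        true
    } else {
        false
    }
}

fn main() {
    let mut active = HashMap::new();
    active.insert("leaf2".to_owned(), 1u64);
    assert!(cancel(&mut active, "leaf2")); // tracked node: removed
    assert!(!cancel(&mut active, "local")); // skipped node: no panic
}
```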
🧹 Nitpick comments (2)
rs/moq-relay/src/cluster.rs (2)

164-167: Consider documenting why noop must outlive the select!.

The noop producer keeps the consumer alive so run_remote doesn't exit on origin.closed(). This works because noop lives in the run() stack frame alongside the select!, but the coupling is implicit. A brief doc-comment or naming like _keepalive would make the intent more resilient to future refactoring.


263-285: Backoff grows monotonically and never resets on success.

If run_remote_once succeeds for a while then fails, the backoff resumes from its last value rather than resetting. The TODO on line 280 acknowledges this. Just noting that this can cause premature bail-out after transient failures following a long-lived connection.

Would you like me to draft a backoff-reset implementation (e.g., reset when the connection survives longer than the current backoff period)?


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@dev/leaf1.toml`:
- Line 25: Fix the typo in the comment string in dev/leaf1.toml: replace the
misspelled word "identifical" with "identical" in the comment "# The root node
is either missing the "root" field below or it's identifical to the "node"
field." so the sentence reads correctly; update that comment text in the file
(search for the exact comment line to locate it).
- Line 7: The inline comment currently says "Listen for QUIC connections on
UDP:4444" but the configuration's listen directive (the listen = ... line) binds
to port 4445; update the comment to "Listen for QUIC connections on UDP:4445" so
it matches the listen directive, or if the intended port is 4444 change the
listen directive instead—ensure the comment and the listen directive are
consistent (refer to the listen directive to locate the setting).
🧹 Nitpick comments (3)
rs/moq-relay/src/web.rs (1)

292-307: serve_announced passes register but never creates a registration.

The register field from the query is passed into AuthParams and verified, but unlike serve_ws, no ClusterRegistration is created. If a client sends ?register=... to the /announced endpoint, verify() will succeed and populate token.register, but nothing acts on it. This is likely fine since /announced is a short-lived HTTP request, but it means a non-cluster client could probe the endpoint with a register param without error (as long as they have a cluster JWT).

Consider whether register should be ignored/stripped for non-WebSocket endpoints to keep the API surface clean.

rs/moq-relay/src/cluster.rs (1)

258-283: Backoff is never reset after a successful connection.

If run_remote_once succeeds (connection established then cleanly closed), backoff remains at whatever value it reached during previous errors. The TODO on line 275 acknowledges this, but it could cause premature bail-out: a transient burst of failures raises backoff, then a brief success doesn't reset it, and the next failure immediately exceeds the 300s cap.

Consider resetting backoff = 1 when run_remote_once returns Ok.

Proposed fix
 		let res = tokio::select! {
 			biased;
 			_ = origin.closed() => break,
 			res = self.run_remote_once(&url) => res,
 		};

-		if let Err(err) = res {
+		if let Err(err) = &res {
 			backoff *= 2;
 			tracing::error!(%err, "remote error");
+		} else {
+			backoff = 1;
 		}
rs/moq-relay/src/auth.rs (1)

300-304: No test coverage for the register/cluster interaction.

Lines 300-304 introduce three distinct code paths (register + cluster → Some, register + no cluster → error, no register → None), but none of the tests exercise them. Consider adding tests for:

  1. A cluster JWT with ?register=node1 → token.register == Some("node1")
  2. A non-cluster JWT with ?register=node1 → Err(ExpectedCluster)
  3. A cluster JWT without ?register → token.register == None

Would you like me to generate the test cases?
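The three paths can be pinned down with a small sketch mirroring the match described above (`derive_register` is a hypothetical extraction, not the PR's API):

```rust
#[derive(Debug, PartialEq)]
enum AuthError {
    ExpectedCluster,
}

// register is only honored for cluster JWTs; a plain JWT supplying it
// is rejected, and omitting it is always fine.
fn derive_register(register: Option<&str>, cluster: bool) -> Result<Option<String>, AuthError> {
    match (register, cluster) {
        (Some(r), true) => Ok(Some(r.to_owned())),
        (Some(_), false) => Err(AuthError::ExpectedCluster),
        (None, _) => Ok(None),
    }
}

fn main() {
    assert_eq!(derive_register(Some("node1"), true), Ok(Some("node1".to_owned())));
    assert_eq!(derive_register(Some("node1"), false), Err(AuthError::ExpectedCluster));
    assert_eq!(derive_register(None, true), Ok(None));
}
```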

@kixelated kixelated enabled auto-merge (squash) February 6, 2026 23:00

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@dev/leaf1.toml`:
- Around line 1-69: Fix the two typos in the TOML comments: in the [client]
comment replace "indiciated" with "indicated" (the sentence describing
certificate domain matching, e.g. "ex. a cert for *.moq.dev will only be allowed
if the root/node configuration below matches") and in the [auth] comment replace
"publically" with "publicly" (the sentence explaining anonymous
publishing/subscribing). Leave all other text and formatting unchanged.
🧹 Nitpick comments (1)
rs/moq-relay/src/cluster.rs (1)

158-165: The noop producer's lifetime is implicitly tied to the select! scope — consider adding a brief comment.

The noop producer keeps the root connection's origin.closed() from firing, which is the intent. However, if this function is refactored later and noop is accidentally moved or dropped early, the root connection would silently break. A let _noop = ... binding or a slightly more explicit comment would make the intent clearer.

Suggested clarification
-		// Create a dummy broadcast that we don't close so run_remote doesn't close.
-		let noop = Broadcast::produce();
+		// Create a dummy broadcast whose consumer is passed to run_remote for the root connection.
+		// The producer must be kept alive for the duration of this scope to prevent the connection from closing.
+		let _noop_keepalive = Broadcast::produce();
+		let noop_consumer = _noop_keepalive.consume();

Then pass noop_consumer instead of noop.consume() on line 165.

Comment on lines +1 to +69
[log]
# Enable debug logging for development.
# The RUST_LOG environment variable will take precedence.
level = "debug"

[server]
# Listen for QUIC connections on UDP:4445
# Sometimes IPv6 causes issues; try 127.0.0.1:4445 instead.
listen = "[::]:4445"

# Generate a self-signed certificate for the server.
# You should use a real certificate in production.
tls.generate = ["localhost"]

[web.http]
# Listen for HTTP and WebSocket (TCP) connections on the given address.
listen = "[::]:4445"

# This clustering scheme is very very simple for now.
#
# There is a root node that is used to connect leaf nodes together.
# Announcements flow from leaf -> root -> leaf, but any subscriptions are leaf -> leaf.
# The root node can serve (user) subscriptions too.
#
# The root node is either missing the "root" field below or it's identical to the "node" field.
# This node acts a server only, accepting incoming connections from leaf nodes and users alike.
#
# There can be any number of leaf nodes.
# These nodes will connect to the specified root address and announce themselves via MoQ as a "broadcast".
# All nodes will discover these broadcasts and connect to other nodes.
#
# This forms an NxN mesh of nodes.
# Broadcasts are announced between all nodes with no collision detection, so duplicates are possible.
# Subscriptions will be relayed from leaf to leaf, so at most you can have:
# user -> leaf -> leaf -> user
[cluster]
# Connect to this hostname in order to discover other nodes.
connect = "localhost:4443"

# Use the token in this file when connecting to other nodes.
# `just auth-token` will populate this file.
token = "dev/root.jwt"

# My hostname, which must be accessible from other nodes.
node = "localhost:4445"

# Each leaf node will connect to the root node and other nodes, using this configuration.
[client]
# QUIC uses TLS to have the client verify the server's identity.
# However if you're not worried about man-in-the-middle attacks, you can disable verification:
tls.disable_verify = true

# A better approach is to generate a server certificate and configure the client to accept it.
#
# If the server has a certificate generated by a public root CA (ex. Let's Encrypt), then that will work if the client connects to the indiciated domain.
# ex. a cert for *.moq.dev will only be allowed if the root/node configuration below matches.
#
# Alternatively, you can generate a self-signed root CA and configure the client to accept it.
# tls.root = ["/path/to/root.pem"]
#
# This can be much more secure because the server doesn't need to be publically accessible.
# ex. You could host the root at a private `.internal` domain and generate a matching certificate.

[auth]
# `just auth-key` will populate this file.
key = "dev/root.jwk"

# Allow anonymous publishing and subscribing for this prefix.
public = "anon"
Contributor


⚠️ Potential issue | 🟡 Minor

LGTM — configuration is well-structured and thoroughly documented.

The leaf node configuration looks correct and aligns with the new cluster registration approach. Two minor typos in the comments:

  • Line 55: "indiciated" → "indicated"
  • Line 61: "publically" → "publicly"
Proposed fix
-# If the server has a certificate generated by a public root CA (ex. Let's Encrypt), then that will work if the client connects to the indiciated domain.
+# If the server has a certificate generated by a public root CA (ex. Let's Encrypt), then that will work if the client connects to the indicated domain.
-# This can be much more secure because the server doesn't need to be publically accessible.
+# This can be much more secure because the server doesn't need to be publicly accessible.

@kixelated kixelated merged commit d76ca42 into main Feb 6, 2026
1 check passed
@kixelated kixelated deleted the cluster-primary branch February 6, 2026 23:05
@moq-bot moq-bot bot mentioned this pull request Feb 6, 2026