Announce cluster nodes via query param instead #923
Conversation

@jhiatt-verkada compare notes?
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
rs/moq-relay/src/auth.rs (1)
318-816: 🛠️ Refactor suggestion | 🟠 Major

Missing test coverage for the new cluster/node authentication logic.

The `verify()` method gained a significant new code path at lines 302-307 handling the four `(params.node, claims.cluster)` combinations, but none of the existing tests exercise it. Consider adding tests for:

- `claims.cluster=true` + `node=Some(...)` → `token.cluster == Some(node)`
- `claims.cluster=true` + `node=None` → `Err(ExpectedNode)`
- `claims.cluster=false` + `node=Some(...)` → `Err(ExpectedCluster)`
- `claims.cluster=false` + `node=None` → `token.cluster == None`

This is especially important given the PR title says "(untested)".
As per coding guidelines, "Rust tests should be integrated within source files using inline test modules".
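Below is a minimal, self-contained sketch of what such an inline test module could look like. `resolve_cluster` and the local `AuthError` enum are stand-ins mirroring the match quoted below, not the crate's real `verify()` API:

```rust
// Stand-ins mirroring the four-way match in auth.rs; not the crate's real API.
#[derive(Debug, PartialEq)]
enum AuthError {
    ExpectedNode,
    ExpectedCluster,
}

// Mirrors the (params.node, claims.cluster) match from verify().
fn resolve_cluster(node: Option<&str>, cluster_claim: bool) -> Result<Option<String>, AuthError> {
    match (node, cluster_claim) {
        (Some(node), true) => Ok(Some(node.to_owned())),
        (None, true) => Err(AuthError::ExpectedNode),
        (Some(_), false) => Err(AuthError::ExpectedCluster),
        (None, false) => Ok(None),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn cluster_claim_and_node_param() {
        assert_eq!(resolve_cluster(Some("node-a"), true), Ok(Some("node-a".into())));
    }

    #[test]
    fn cluster_claim_without_node_param() {
        assert_eq!(resolve_cluster(None, true), Err(AuthError::ExpectedNode));
    }

    #[test]
    fn node_param_without_cluster_claim() {
        assert_eq!(resolve_cluster(Some("node-a"), false), Err(AuthError::ExpectedCluster));
    }

    #[test]
    fn neither_claim_nor_param() {
        assert_eq!(resolve_cluster(None, false), Ok(None));
    }
}
```

The real tests would construct a signed JWT and call `verify()` directly rather than a local helper.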
rs/moq-relay/src/web.rs (1)
329-329: ⚠️ Potential issue | 🟡 Minor

Typo: "containts" → "contains".

```diff
- // The path containts a broadcast/track
+ // The path contains a broadcast/track
```
🤖 Fix all issues with AI agents
In `@rs/moq-relay/src/auth.rs`:
- Around line 302-307: The current branch in auth.rs trusts the client-supplied
params.node when claims.cluster==true, allowing spoofing; update the logic that
builds `cluster` (the match on `params.node.as_deref()` and `claims.cluster`) to
validate `params.node` before accepting it: either require a node identity
embedded in the JWT claims (e.g., add `claims.node` and compare equality to
`params.node`) or check `params.node` against your server-side
allowlist/registry of valid node IDs, and return a new AuthError (e.g.,
AuthError::InvalidNode) if validation fails; ensure the code paths that return
AuthError::ExpectedNode / ExpectedCluster remain but replace the verbatim
acceptance of `Some(node)` with this validation step so downstream code (e.g.,
cluster registration) cannot be hijacked.
In `@rs/moq-relay/src/cluster.rs`:
- Around line 108-124: The current guard in register() uses
self.config.node.is_some() which wrongly treats a node that sets config.node
equal to config.root as non-root; change the check to detect whether this
process is actually a leaf vs the real root: compute is_root by comparing
config.node (Option<String>) to config.root (String) — treat missing config.node
as root and treat Some(n) equal to config.root as root; return None only when
this process is a leaf (i.e., config.node is Some and != config.root). Update
the early-return in register() accordingly so only non-root nodes are skipped;
keep the rest of register() (creating Broadcast, path, publish_broadcast,
ClusterRegistration::new) unchanged.
🧹 Nitpick comments (3)
rs/moq-relay/src/cluster.rs (2)
250-254: `local` node name is interpolated into the URL without percent-encoding.

If `local` or `token` contain characters like `&`, `=`, or spaces, the URL will be malformed. While hostnames typically won't contain these, a misconfigured `cluster-node` value could cause silent breakage.

Proposed fix using `url::Url`'s query-pair builder:

```diff
- let url = match token.is_empty() {
-     true => Url::parse(&format!("https://{remote}/?node={local}"))?,
-     false => Url::parse(&format!("https://{remote}/?jwt={token}&node={local}"))?,
- };
+ let mut url = Url::parse(&format!("https://{remote}/"))?;
+ {
+     let mut q = url.query_pairs_mut();
+     if !token.is_empty() {
+         q.append_pair("jwt", &token);
+     }
+     q.append_pair("node", local);
+ }
```
160-163: Noop broadcast keeps `run_remote` alive but leaks intent.

The comment on line 162 says the dummy broadcast prevents `run_remote` from closing, because `run_remote` breaks out of its loop when `origin.closed()` resolves. This works but couples `run_remote`'s lifecycle to an unrelated empty broadcast. A clearer approach might be to pass a `tokio::sync::watch` or `CancellationToken` for shutdown signaling instead; a rough sketch follows.

This is fine for now given the "(untested)" status; just flagging for future cleanup.
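For illustration, a sketch of the `CancellationToken` alternative. This assumes a `tokio-util` dependency, which the crate may not have today, and the sleep is a placeholder for the real announcement-processing future:

```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;

// Stand-in for the real per-remote loop.
async fn run_remote_with_shutdown(shutdown: CancellationToken) {
    loop {
        tokio::select! {
            biased;
            // Exit on an explicit shutdown signal instead of relying on an
            // unrelated dummy broadcast staying open.
            _ = shutdown.cancelled() => break,
            _ = tokio::time::sleep(Duration::from_secs(1)) => {}
        }
    }
}

#[tokio::main]
async fn main() {
    let shutdown = CancellationToken::new();
    let task = tokio::spawn(run_remote_with_shutdown(shutdown.child_token()));
    shutdown.cancel(); // signals all child tokens
    task.await.unwrap();
}
```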
rs/moq-relay/src/web.rs (1)
232-237: Consider a `From` impl to reduce `AuthParams` construction boilerplate.

The same `AuthParams { path, jwt: query.jwt, node: query.node }` pattern appears three times (lines 232, 302, 339). A small `From<(String, AuthQuery)> for AuthParams` (or a method on `AuthQuery`) would DRY this up.

Example helper:

```rust
impl AuthQuery {
    fn into_params(self, path: String) -> AuthParams {
        AuthParams {
            path,
            jwt: self.jwt,
            node: self.node,
        }
    }
}
```

Also applies to: 302-307, 339-344
rs/moq-relay/src/auth.rs
Outdated
```rust
let cluster = match (params.node.as_deref(), claims.cluster) {
    (Some(node), true) => Some(node.to_owned()),
    (None, true) => return Err(AuthError::ExpectedNode),
    (Some(_), false) => return Err(AuthError::ExpectedCluster),
    (None, false) => None,
};
```
Security consideration: the node parameter is fully client-supplied.
A client with a valid cluster JWT (claims.cluster == true) can set node to any arbitrary string, and that string is trusted verbatim as the cluster node identity. This means a compromised or malicious cluster leaf can impersonate another node's identity by passing node=<victim-node>. The root node will then register a broadcast under that victim's path (see cluster.rs Line 120-121), potentially hijacking or shadowing its announcements.
Consider validating node against a known allowlist, or embedding the expected node identity inside the JWT claims themselves so it can't be forged.
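For example, the claims-bound variant could look roughly like the following. Note that `claims.node` and `AuthError::InvalidNode` are hypothetical additions, not existing fields in this crate:

```rust
let cluster = match (params.node.as_deref(), claims.cluster) {
    (Some(node), true) => {
        // Reject any node identity that isn't bound into the signed token.
        // `claims.node` and `AuthError::InvalidNode` are hypothetical additions.
        if claims.node.as_deref() != Some(node) {
            return Err(AuthError::InvalidNode);
        }
        Some(node.to_owned())
    }
    (None, true) => return Err(AuthError::ExpectedNode),
    (Some(_), false) => return Err(AuthError::ExpectedCluster),
    (None, false) => None,
};
```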
cluster == true is only for cluster nodes, so they have extra permissions. But yes.
@kixelated, thanks for confirming! You're right that cluster == true grants elevated permissions specifically for cluster nodes. The concern is that even with that elevated trust, a compromised cluster node could still impersonate another cluster node by supplying any node value in the query parameter.
If you'd like to address this, I can help you:
- Add a `node` field to the JWT claims so the expected node identity is cryptographically bound
- Or implement validation against a server-side allowlist of known node IDs
- Or create a follow-up issue to track this for a future PR
Let me know if you'd like any assistance, or if you prefer to handle this separately! 🔒
```rust
// Register a cluster node's presence.
//
// Returns a [ClusterRegistration] that should be kept alive for the duration of the session.
pub fn register(&self, token: &AuthToken) -> Option<ClusterRegistration> {
    if self.config.node.is_some() {
        // Only the root node can register other nodes.
        return None;
    }

    let node = token.cluster.clone()?;
    let broadcast = Broadcast::produce();

    let path = moq_lite::Path::new(&self.config.prefix).join(&node);
    self.primary.publish_broadcast(path, broadcast.consume());

    Some(ClusterRegistration::new(node, broadcast))
}
```
Root node with config.node set will silently skip all cluster registrations.
The guard if self.config.node.is_some() intends to restrict registration to the root node only. However, a root node can also have config.node set (when config.root == config.node, per the filter at line 139). In that case, register() always returns None, so no leaf node can ever register with the root.
The condition should check whether this node is actually a leaf (i.e., has a different root to connect to), not simply whether config.node is present.
Proposed fix:

```diff
 pub fn register(&self, token: &AuthToken) -> Option<ClusterRegistration> {
-    if self.config.node.is_some() {
-        // Only the root node can register other nodes.
+    // Only the root node can register other nodes.
+    // A leaf node has a root that differs from its own node name.
+    if self.config.root.as_ref().is_some_and(|root| self.config.node.as_ref() != Some(root)) {
         return None;
     }
```

```bash
#!/bin/bash
# Verify how config.node and config.root interact to determine root vs leaf
# Check if there are integration tests or other callers that set both config.root and config.node
rg -n 'cluster.*node|cluster.*root|ClusterConfig' --type=rust -C3
```
Use url::Url's query_pairs_mut() instead of string interpolation to properly encode node names and tokens in cluster URLs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
rs/moq-relay/src/cluster.rs (1)
209-218: ⚠️ Potential issue | 🟠 Major

Potential panic: cancellation of a skipped node hits `unwrap()` on a missing key.

When `node == local` at line 210, we `continue` without inserting into `active`. If the origin later emits a cancellation `(node, None)` for that same local node, line 217's `.unwrap()` panics because the key doesn't exist in the map.

Proposed fix:

```diff
 let Some(origin) = origin else {
     tracing::info!(%node, "origin cancelled");
-    active.remove(node.as_str()).unwrap().abort();
+    if let Some(handle) = active.remove(node.as_str()) {
+        handle.abort();
+    }
     continue;
 };
```
🤖 Fix all issues with AI agents
In `@rs/moq-relay/src/cluster.rs`:
- Around line 252-259: The code appends the JWT into the URL query (via
Url::parse and query_pairs_mut) and then logs the full Url
(tracing::info!(%url,...)), leaking the token; fix by never logging the query
containing "jwt" — after building the Url and before any tracing::info! call,
create a local display_url (clone the Url or build from host/path) and call
display_url.set_query(None) or remove only the "jwt" pair, then log that
sanitized display_url (or log url.host()/url.path() explicitly) instead of the
original Url to ensure the token is not emitted.
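As a sketch of that sanitization step, assuming the existing `url::Url` value and the `tracing` macros already in use here:

```rust
// Build the URL as before, then log a sanitized copy so the jwt never reaches logs.
let mut display_url = url.clone();
display_url.set_query(None); // strips jwt (and node); log host/path only
tracing::info!(url = %display_url, "connecting to remote");
```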
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
rs/moq-relay/src/cluster.rs (1)
212-221: ⚠️ Potential issue | 🔴 Critical

`unwrap()` on `active.remove()` can panic for skipped nodes.

If a node that was skipped (e.g., `local` at line 213) later gets cancelled, `origins.announced()` will yield `(node, None)`. The code reaches line 220 and calls `active.remove(...).unwrap()`, but since the node was never inserted into `active`, this panics.

Proposed fix:

```diff
 let Some(origin) = origin else {
     tracing::info!(%node, "origin cancelled");
-    active.remove(node.as_str()).unwrap().abort();
+    if let Some(handle) = active.remove(node.as_str()) {
+        handle.abort();
+    }
     continue;
 };
```
🧹 Nitpick comments (2)
rs/moq-relay/src/cluster.rs (2)
164-167: Consider documenting why `noop` must outlive the `select!`.

The `noop` producer keeps the consumer alive so `run_remote` doesn't exit on `origin.closed()`. This works because `noop` lives in the `run()` stack frame alongside the `select!`, but the coupling is implicit. A brief doc-comment or naming like `_keepalive` would make the intent more resilient to future refactoring.
263-285: Backoff grows monotonically and never resets on success.

If `run_remote_once` succeeds for a while then fails, the backoff resumes from its last value rather than resetting. The TODO on line 280 acknowledges this. Just noting that this can cause premature bail-out after transient failures following a long-lived connection.

Would you like me to draft a backoff-reset implementation (e.g., reset when the connection survives longer than the current backoff period)? A rough sketch follows.
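A self-contained sketch of one possible reset rule (reset when the connection outlived the current backoff window). `connect` is a stand-in for the real `run_remote_once`, and the 1s/300s constants are illustrative:

```rust
use std::time::{Duration, Instant};

// Hypothetical reconnect loop: double the backoff on quick failures,
// but reset to 1s whenever a connection survived longer than the
// current backoff window.
async fn run_with_backoff<F, Fut>(mut connect: F)
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<(), Box<dyn std::error::Error>>>,
{
    let mut backoff = 1u64;
    loop {
        let started = Instant::now();
        let res = connect().await;

        // The connection lived long enough; treat the next failure as fresh.
        if started.elapsed() > Duration::from_secs(backoff) {
            backoff = 1;
        } else {
            backoff = (backoff * 2).min(300);
        }

        if res.is_ok() {
            break; // clean shutdown
        }
        tokio::time::sleep(Duration::from_secs(backoff)).await;
    }
}
```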
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@dev/leaf1.toml`:
- Line 25: Fix the typo in the comment string in dev/leaf1.toml: replace the
misspelled word "identifical" with "identical" in the comment "# The root node
is either missing the "root" field below or it's identifical to the "node"
field." so the sentence reads correctly; update that comment text in the file
(search for the exact comment line to locate it).
- Line 7: The inline comment currently says "Listen for QUIC connections on
UDP:4444" but the configuration's listen directive (the listen = ... line) binds
to port 4445; update the comment to "Listen for QUIC connections on UDP:4445" so
it matches the listen directive, or if the intended port is 4444 change the
listen directive instead—ensure the comment and the listen directive are
consistent (refer to the listen directive to locate the setting).
🧹 Nitpick comments (3)
rs/moq-relay/src/web.rs (1)
292-307: `serve_announced` passes `register` but never creates a registration.

The `register` field from the query is passed into `AuthParams` and verified, but unlike `serve_ws`, no `ClusterRegistration` is created. If a client sends `?register=...` to the `/announced` endpoint, `verify()` will succeed and populate `token.register`, but nothing acts on it. This is likely fine since `/announced` is a short-lived HTTP request, but it means a non-cluster client could probe the endpoint with a `register` param without error (as long as they have a cluster JWT).

Consider whether `register` should be ignored/stripped for non-WebSocket endpoints to keep the API surface clean.

rs/moq-relay/src/cluster.rs (1)
258-283: Backoff is never reset after a successful connection.

If `run_remote_once` succeeds (connection established then cleanly closed), `backoff` remains at whatever value it reached during previous errors. The TODO on line 275 acknowledges this, but it could cause premature bail-out: a transient burst of failures raises backoff, then a brief success doesn't reset it, and the next failure immediately exceeds the 300s cap.

Consider resetting `backoff = 1` when `run_remote_once` returns `Ok`.

Proposed fix:

```diff
 let res = tokio::select! {
     biased;
     _ = origin.closed() => break,
     res = self.run_remote_once(&url) => res,
 };

-if let Err(err) = res {
+if let Err(err) = &res {
     backoff *= 2;
     tracing::error!(%err, "remote error");
+} else {
+    backoff = 1;
 }
```

rs/moq-relay/src/auth.rs (1)
300-304: No test coverage for the `register`/`cluster` interaction.

Lines 300-304 introduce three distinct code paths (register + cluster → `Some`, register + no cluster → error, no register → `None`), but none of the tests exercise them. Consider adding tests for:

- A cluster JWT with `?register=node1` → `token.register == Some("node1")`
- A non-cluster JWT with `?register=node1` → `Err(ExpectedCluster)`
- A cluster JWT without `?register` → `token.register == None`

Would you like me to generate the test cases? A self-contained sketch follows.
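A minimal, self-contained sketch of those three paths. `resolve_register` and the local `AuthError` are stand-ins mirroring the behavior described above, not the crate's real `verify()`:

```rust
// Stand-ins mirroring the register/cluster paths; not the crate's real API.
#[derive(Debug, PartialEq)]
enum AuthError {
    ExpectedCluster,
}

fn resolve_register(register: Option<&str>, cluster_claim: bool) -> Result<Option<String>, AuthError> {
    match (register, cluster_claim) {
        (Some(node), true) => Ok(Some(node.to_owned())),
        (Some(_), false) => Err(AuthError::ExpectedCluster),
        (None, _) => Ok(None),
    }
}

#[cfg(test)]
mod register_tests {
    use super::*;

    #[test]
    fn register_paths() {
        assert_eq!(resolve_register(Some("node1"), true), Ok(Some("node1".into())));
        assert_eq!(resolve_register(Some("node1"), false), Err(AuthError::ExpectedCluster));
        assert_eq!(resolve_register(None, true), Ok(None));
    }
}
```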
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@dev/leaf1.toml`:
- Around line 1-69: Fix the two typos in the TOML comments: in the [client]
comment replace "indiciated" with "indicated" (the sentence describing
certificate domain matching, e.g. "ex. a cert for *.moq.dev will only be allowed
if the root/node configuration below matches") and in the [auth] comment replace
"publically" with "publicly" (the sentence explaining anonymous
publishing/subscribing). Leave all other text and formatting unchanged.
🧹 Nitpick comments (1)
rs/moq-relay/src/cluster.rs (1)
158-165: The `noop` producer's lifetime is implicitly tied to the `select!` scope; consider adding a brief comment.

The `noop` producer keeps the root connection's `origin.closed()` from firing, which is the intent. However, if this function is refactored later and `noop` is accidentally moved or dropped early, the root connection would silently break. A `let _noop = ...` binding or a slightly more explicit comment would make the intent clearer.

Suggested clarification:

```diff
-// Create a dummy broadcast that we don't close so run_remote doesn't close.
-let noop = Broadcast::produce();
+// Create a dummy broadcast whose consumer is passed to run_remote for the root connection.
+// The producer must be kept alive for the duration of this scope to prevent the connection from closing.
+let _noop_keepalive = Broadcast::produce();
+let noop_consumer = _noop_keepalive.consume();
```

Then pass `noop_consumer` instead of `noop.consume()` on line 165.
```toml
[log]
# Enable debug logging for development.
# The RUST_LOG environment variable will take precedence.
level = "debug"

[server]
# Listen for QUIC connections on UDP:4445
# Sometimes IPv6 causes issues; try 127.0.0.1:4445 instead.
listen = "[::]:4445"

# Generate a self-signed certificate for the server.
# You should use a real certificate in production.
tls.generate = ["localhost"]

[web.http]
# Listen for HTTP and WebSocket (TCP) connections on the given address.
listen = "[::]:4445"

# This clustering scheme is very very simple for now.
#
# There is a root node that is used to connect leaf nodes together.
# Announcements flow from leaf -> root -> leaf, but any subscriptions are leaf -> leaf.
# The root node can serve (user) subscriptions too.
#
# The root node is either missing the "root" field below or it's identical to the "node" field.
# This node acts a server only, accepting incoming connections from leaf nodes and users alike.
#
# There can be any number of leaf nodes.
# These nodes will connect to the specified root address and announce themselves via MoQ as a "broadcast".
# All nodes will discover these broadcasts and connect to other nodes.
#
# This forms an NxN mesh of nodes.
# Broadcasts are announced between all nodes with no collision detection, so duplicates are possible.
# Subscriptions will be relayed from leaf to leaf, so at most you can have:
# user -> leaf -> leaf -> user
[cluster]
# Connect to this hostname in order to discover other nodes.
connect = "localhost:4443"

# Use the token in this file when connecting to other nodes.
# `just auth-token` will populate this file.
token = "dev/root.jwt"

# My hostname, which must be accessible from other nodes.
node = "localhost:4445"

# Each leaf node will connect to the root node and other nodes, using this configuration.
[client]
# QUIC uses TLS to have the client verify the server's identity.
# However if you're not worried about man-in-the-middle attacks, you can disable verification:
tls.disable_verify = true

# A better approach is to generate a server certificate and configure the client to accept it.
#
# If the server has a certificate generated by a public root CA (ex. Let's Encrypt), then that will work if the client connects to the indiciated domain.
# ex. a cert for *.moq.dev will only be allowed if the root/node configuration below matches.
#
# Alternatively, you can generate a self-signed root CA and configure the client to accept it.
# tls.root = ["/path/to/root.pem"]
#
# This can be much more secure because the server doesn't need to be publically accessible.
# ex. You could host the root at a private `.internal` domain and generate a matching certificate.

[auth]
# `just auth-key` will populate this file.
key = "dev/root.jwk"

# Allow anonymous publishing and subscribing for this prefix.
public = "anon"
```
LGTM — configuration is well-structured and thoroughly documented.
The leaf node configuration looks correct and aligns with the new cluster registration approach. Two minor typos in the comments:
- Line 55: "indiciated" → "indicated"
- Line 61: "publically" → "publicly"
Proposed fix:

```diff
-# If the server has a certificate generated by a public root CA (ex. Let's Encrypt), then that will work if the client connects to the indiciated domain.
+# If the server has a certificate generated by a public root CA (ex. Let's Encrypt), then that will work if the client connects to the indicated domain.

-# This can be much more secure because the server doesn't need to be publically accessible.
+# This can be much more secure because the server doesn't need to be publicly accessible.
```
The node announcing itself via a broadcast didn't actually work in practice... unless the wrong token was being used.
The root node would get announcements but it wouldn't forward them to other nodes. So leaf1 wouldn't know about leaf2.