Conversation
📝 Walkthrough

This PR upgrades the go-ethereum dependency across multiple modules to a newer commit and refactors fork-handling terminology from MPT to Jade throughout the node core logic. It also introduces L2NodeV2 interface methods for block data retrieval and application in sequencer mode, and updates transaction type handling in blob decoding.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (1)
node/core/executor.go (1)
347-353: Inconsistent terminology: error message still references "MPT".

The error message at line 351 references "MPT-compatible geth" while the rest of this PR refactors to "Jade fork" terminology. Similar issue at lines 390-391.
Consider updating for consistency, or clarify if "MPT" is intentionally retained for user-facing messages.
♻️ Suggested terminology update
```diff
 e.logger.Error("!!! CRITICAL: Geth is behind - node BLOCKED !!!",
     "consensus_block", wrappedBlock.Number,
     "geth_height", height,
-    "action", "Switch to MPT-compatible geth IMMEDIATELY")
+    "action", "Switch to Jade-compatible geth IMMEDIATELY")
```

And at lines 390-391:
```diff
-e.logger.Error("HINT: If this occurs after MPT upgrade, your geth node may not support MPT blocks. " +
-    "Please ensure you are running an MPT-compatible geth node.")
+e.logger.Error("HINT: If this occurs after Jade upgrade, your geth node may not support Jade blocks. " +
+    "Please ensure you are running a Jade-compatible geth node.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/core/executor.go` around lines 347 - 353, Update the user-facing error strings to use the new "Jade fork" terminology instead of "MPT" for consistency: modify the logger.Error calls that reference "MPT-compatible geth" (the call that logs "!!! CRITICAL: Geth is behind - node BLOCKED !!!" using wrappedBlock.Number, height and returns types.ErrWrongBlockNumber) and the similar messages around the block checks at the later error paths (the logger.Error usage around lines ~390-391) so they mention "Jade fork" (or the agreed new phrase) consistently; keep the same context parameters (consensus_block, geth_height, action) and return types.ErrWrongBlockNumber as before.
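Beyond string-by-string edits, one way to prevent this class of terminology drift is to derive every user-facing hint from a single fork-name constant. The sketch below is hypothetical (these helpers and the `forkName` constant are not from the morph codebase), but it illustrates the idea:

```go
package main

import "fmt"

// Illustrative sketch only: centralize the fork name so user-facing
// messages cannot drift between "MPT" and "Jade" terminology.
const forkName = "Jade"

// switchHint builds the "action" field used when geth falls behind.
func switchHint() string {
	return fmt.Sprintf("Switch to %s-compatible geth IMMEDIATELY", forkName)
}

// upgradeHint builds the post-upgrade troubleshooting message.
func upgradeHint() string {
	return fmt.Sprintf("HINT: If this occurs after %s upgrade, your geth node may not support %s blocks. "+
		"Please ensure you are running a %s-compatible geth node.", forkName, forkName, forkName)
}

func main() {
	fmt.Println(switchHint())
	fmt.Println(upgradeHint())
}
```

Renaming the fork then becomes a one-line change instead of a grep across log strings.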
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Makefile`:
- Around line 2-3: The Makefile currently sets ETHEREUM_SUBMODULE_COMMIT_OR_TAG
to a non-existent tag `test_3_13`, which causes git checkout failures; update
the variable ETHEREUM_SUBMODULE_COMMIT_OR_TAG to a valid go-ethereum tag or
commit hash (and optionally align ETHEREUM_TARGET_VERSION if it must match) by
replacing `test_3_13` with an actual tag/commit present in the go-ethereum repo
and verify by running the same git checkout command used in the Makefile to
ensure it succeeds.
In `@node/types/blob.go`:
- Around line 156-160: The code references non-existent symbols MorphTxType and
MorphTx (seen where you read firstByte and set innerTx = new(eth.MorphTx)),
which will fail to compile; replace these with the correct go-ethereum custom tx
symbols AltFeeTxType and AltFeeTx (or update the go-ethereum dependency if you
intend to use Morph types), and ensure the switch case uses eth.AltFeeTxType and
innerTx = new(eth.AltFeeTx) so binary.Read and subsequent handling use the
proper types.
---
Nitpick comments:
In `@node/core/executor.go`:
- Around line 347-353: Update the user-facing error strings to use the new "Jade
fork" terminology instead of "MPT" for consistency: modify the logger.Error
calls that reference "MPT-compatible geth" (the call that logs "!!! CRITICAL:
Geth is behind - node BLOCKED !!!" using wrappedBlock.Number, height and returns
types.ErrWrongBlockNumber) and the similar messages around the block checks at
the later error paths (the logger.Error usage around lines ~390-391) so they
mention "Jade fork" (or the agreed new phrase) consistently; keep the same
context parameters (consensus_block, geth_height, action) and return
types.ErrWrongBlockNumber as before.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (9)
bindings/go.sum is excluded by !**/*.sum
contracts/go.sum is excluded by !**/*.sum
go.work.sum is excluded by !**/*.sum
node/go.sum is excluded by !**/*.sum
ops/l2-genesis/go.sum is excluded by !**/*.sum
ops/tools/go.sum is excluded by !**/*.sum
oracle/go.sum is excluded by !**/*.sum
token-price-oracle/go.sum is excluded by !**/*.sum
tx-submitter/go.sum is excluded by !**/*.sum
📒 Files selected for processing (14)
Makefile
bindings/go.mod
contracts/go.mod
go-ethereum
node/core/batch.go
node/core/executor.go
node/go.mod
node/types/blob.go
node/types/retryable_client.go
ops/l2-genesis/go.mod
ops/tools/go.mod
oracle/go.mod
token-price-oracle/go.mod
tx-submitter/go.mod
Makefile (Outdated)

```makefile
ETHEREUM_SUBMODULE_COMMIT_OR_TAG := test_3_13
ETHEREUM_TARGET_VERSION := v1.10.14-0.20260227040115-e6c501a8d680
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, explore the repository structure
echo "=== Repository structure ==="
ls -la | head -20
echo ""
echo "=== Check for go-ethereum directory ==="
ls -d go-ethereum 2>/dev/null && echo "go-ethereum directory exists" || echo "go-ethereum directory not found"
echo ""
echo "=== Check if this is a git repo ==="
git rev-parse --git-dir 2>/dev/null && echo "This is a git repository" || echo "Not a git repository"
echo ""
echo "=== List Makefile ==="
cat Makefile | head -20

Repository: morph-l2/morph
Length of output: 2265
🏁 Script executed:
#!/bin/bash
# Check git log for Makefile changes to understand context
echo "=== Recent commits touching Makefile ==="
git log --oneline -10 -- Makefile 2>/dev/null || echo "Could not retrieve git log"
echo ""
echo "=== Current branch and status ==="
git status 2>/dev/null || echo "Could not retrieve git status"
echo ""
echo "=== Check if test_3_13 is referenced elsewhere ==="
rg "test_3_13" --type-list | head -20
rg "test_3_13" 2>/dev/null || echo "Pattern not found in codebase"

Repository: morph-l2/morph
Length of output: 736
🏁 Script executed:
#!/bin/bash
# Check if the previous tag or any related context exists
echo "=== Search for semantic version tags or similar patterns ==="
rg "morph-v|v[0-9]+\.[0-9]+\.[0-9]+" --type make 2>/dev/null | head -20
echo ""
echo "=== Check entire Makefile for version patterns ==="
cat Makefile 2>/dev/null | grep -E "VERSION|COMMIT|TAG|=.*[0-9]" || echo "Could not read Makefile"

Repository: morph-l2/morph
Length of output: 1309
🏁 Script executed:
#!/bin/bash
# Check the go-ethereum submodule for the tag
echo "=== Check git submodule status ==="
git submodule status
echo ""
echo "=== Navigate to go-ethereum and check tags ==="
cd go-ethereum 2>/dev/null && {
echo "Successfully changed to go-ethereum directory"
echo ""
echo "=== Check if test_3_13 tag exists ==="
git tag | grep "test_3_13" || echo "Tag test_3_13 not found"
echo ""
echo "=== List all tags (last 15) ==="
git tag --sort=-version:refname | head -15
echo ""
echo "=== Check current HEAD in submodule ==="
git rev-parse HEAD
git log --oneline -5
} || echo "Could not access go-ethereum directory or not a git repo"

Repository: morph-l2/morph
Length of output: 441
🏁 Script executed:
#!/bin/bash
# Check git history of Makefile for the specific change and previous values
echo "=== Git history of Makefile for ETHEREUM_SUBMODULE_COMMIT_OR_TAG ==="
git log -p -- Makefile | grep -A 3 -B 3 "ETHEREUM_SUBMODULE_COMMIT_OR_TAG" | head -50
echo ""
echo "=== Specific commit that changed the tag ==="
git log --all --oneline -- Makefile | head -10
echo ""
echo "=== Show the recent commit details ==="
git show --stat bb760d0 2>/dev/null || git log -1 --stat

Repository: morph-l2/morph
Length of output: 50370
🏁 Script executed:
#!/bin/bash
# Check what git operations use this variable and if it would fail
echo "=== Check how ETHEREUM_SUBMODULE_COMMIT_OR_TAG is used ==="
grep -n "ETHEREUM_SUBMODULE_COMMIT_OR_TAG" Makefile
echo ""
echo "=== Try to understand the checkout target ==="
cd go-ethereum 2>/dev/null && {
echo "Attempting to checkout test_3_13..."
git checkout test_3_13 2>&1 || echo "FAILED: Could not checkout test_3_13"
echo ""
echo "Current HEAD state:"
git rev-parse --short HEAD
git log --oneline -3
} || echo "Could not navigate to go-ethereum"

Repository: morph-l2/morph
Length of output: 554
test_3_13 tag does not exist in the go-ethereum submodule and will cause build failures.
The ETHEREUM_SUBMODULE_COMMIT_OR_TAG is set to test_3_13, but this tag does not exist in the go-ethereum repository. Attempting to git checkout test_3_13 (as specified in line 48 of the Makefile) fails with "pathspec 'test_3_13' did not match any file(s) known to git". Replace test_3_13 with a valid commit hash or tag that exists in the go-ethereum repository.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Makefile` around lines 2 - 3, The Makefile currently sets
ETHEREUM_SUBMODULE_COMMIT_OR_TAG to a non-existent tag `test_3_13`, which causes
git checkout failures; update the variable ETHEREUM_SUBMODULE_COMMIT_OR_TAG to a
valid go-ethereum tag or commit hash (and optionally align
ETHEREUM_TARGET_VERSION if it must match) by replacing `test_3_13` with an
actual tag/commit present in the go-ethereum repo and verify by running the same
git checkout command used in the Makefile to ensure it succeeds.
```diff
 case eth.MorphTxType:
     if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
         return nil, err
     }
-    innerTx = new(eth.AltFeeTx)
+    innerTx = new(eth.MorphTx)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
echo "=== Searching for MorphTxType definition ==="
rg -n "MorphTxType" --type go | head -20
echo ""
echo "=== Searching for MorphTx struct definition ==="
rg -n "type MorphTx struct" --type go
echo ""
echo "=== Checking the blob.go file context ==="
sed -n '150,165p' node/types/blob.go
echo ""
echo "=== Searching for any remaining AltFeeTx references ==="
rg -n "AltFeeTx" --type go
echo ""
echo "=== Searching for MorphTx usage in blob.go ==="
rg -n "MorphTx" node/types/blob.go

Repository: morph-l2/morph
Length of output: 954
🏁 Script executed:
#!/bin/bash
echo "=== Checking imports in blob.go ==="
head -30 node/types/blob.go
echo ""
echo "=== Checking if go.mod exists and contains go-ethereum dependency ==="
if [ -f go.mod ]; then
echo "Found go.mod:"
rg "morph-l2/go-ethereum|ethereum" go.mod
fi
echo ""
echo "=== Searching for eth alias definitions ==="
rg -B5 "case eth\." node/types/blob.go | head -20

Repository: morph-l2/morph
Length of output: 1559
🏁 Script executed:
#!/bin/bash
echo "=== Finding and examining go.mod ==="
find . -name "go.mod" -type f | head -5
echo ""
echo "=== Checking go.mod content for go-ethereum ==="
if [ -f ./go.mod ]; then
rg "go-ethereum" ./go.mod -A 1
fi
echo ""
echo "=== Verifying other transaction types exist (AccessListTx, DynamicFeeTx, SetCodeTx) ==="
rg "AccessListTxType|DynamicFeeTxType|SetCodeTxType" node/types/blob.go

Repository: morph-l2/morph
Length of output: 569
🏁 Script executed:
#!/bin/bash
echo "=== Examining node/go.mod for go-ethereum dependency ==="
cat node/go.mod | grep -A 2 "go-ethereum"
echo ""
echo "=== Looking for any morph-l2/go-ethereum references in node/go.mod ==="
rg "morph-l2/go-ethereum" node/go.mod
echo ""
echo "=== Checking if MorphTxType is referenced elsewhere in the codebase ==="
rg -n "MorphTxType|MorphTx" --type go | grep -v "Binary"
echo ""
echo "=== Checking git info to understand the dependency version ==="
if [ -f node/go.mod ]; then
tail -50 node/go.mod | head -30
fi

Repository: morph-l2/morph
Length of output: 2473
🌐 Web query:
morph-l2/go-ethereum MorphTxType MorphTx core/types
💡 Result:
In morph-l2/go-ethereum there isn’t a MorphTxType / MorphTx in core/types.
Morph-specific transaction types are defined in core/types/transaction.go as: L1MessageTxType = 0x7E and AltFeeTxType = 0x7F (alongside the standard geth tx types). The typed-tx decoder (decodeTyped) instantiates L1MessageTx or AltFeeTx for these type bytes. [1]
Implementations live in:
- core/types/l1_message_tx.go: type L1MessageTx struct { ... } with txType() byte { return L1MessageTxType }. [2]
- core/types/alt_fee_tx.go: type AltFeeTx struct { ... } with txType() byte { return AltFeeTxType }. [3]
If you saw MorphTxType in older discussions/patches, it’s likely an older/renamed name for one of these custom typed transactions (most commonly AltFeeTx). [1][3]
Sources: [1] [2] [3]
MorphTxType and MorphTx do not exist in the go-ethereum dependency and will cause a compilation error.
The morph-l2/go-ethereum library (v1.10.14-0.20260227040115-e6c501a8d680) defines custom transaction types as AltFeeTxType (0x7F) with corresponding AltFeeTx struct in core/types/alt_fee_tx.go, not MorphTxType/MorphTx. Either update the go-ethereum dependency to include these new types, or revert to the original AltFeeTxType/AltFeeTx references.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@node/types/blob.go` around lines 156 - 160, The code references non-existent
symbols MorphTxType and MorphTx (seen where you read firstByte and set innerTx =
new(eth.MorphTx)), which will fail to compile; replace these with the correct
go-ethereum custom tx symbols AltFeeTxType and AltFeeTx (or update the
go-ethereum dependency if you intend to use Morph types), and ensure the switch
case uses eth.AltFeeTxType and innerTx = new(eth.AltFeeTx) so binary.Read and
subsequent handling use the proper types.
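The finding above hinges on dispatching on a typed-transaction envelope's first byte. A minimal, self-contained sketch of that dispatch is below; the type-byte values mirror what the web query reported for morph-l2/go-ethereum (L1MessageTxType = 0x7E, AltFeeTxType = 0x7F), but the `innerTxFor` function and its string results are illustrative, not the library's actual decoder:

```go
package main

import (
	"errors"
	"fmt"
)

// Type-byte values as reported for morph-l2/go-ethereum's custom typed
// transactions; the surrounding decoder is an illustrative sketch only.
const (
	DynamicFeeTxType = 0x02
	L1MessageTxType  = 0x7E
	AltFeeTxType     = 0x7F
)

// innerTxFor maps a typed-transaction envelope's first byte to the name of
// the payload container to allocate, mirroring the switch discussed for
// node/types/blob.go. Referencing a type the library doesn't export
// (MorphTx) would simply fail to compile here, which is the review's point.
func innerTxFor(firstByte byte) (string, error) {
	switch firstByte {
	case DynamicFeeTxType:
		return "DynamicFeeTx", nil
	case L1MessageTxType:
		return "L1MessageTx", nil
	case AltFeeTxType:
		return "AltFeeTx", nil
	default:
		return "", errors.New("unsupported tx type byte")
	}
}

func main() {
	name, _ := innerTxFor(0x7F)
	fmt.Println(name) // AltFeeTx
}
```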
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
node/types/retryable_client.go (1)
445-461: ⚠️ Potential issue | 🟠 Major

Bug: BlockByNumber bypasses client switching logic.
BlockByNumber uses rc.ethClient directly instead of rc.eClient(). This bypasses the client switching mechanism, which could cause the method to query the wrong client after a switch has occurred.

🐛 Proposed fix
```diff
 func (rc *RetryableClient) BlockByNumber(ctx context.Context, blockNumber *big.Int) (ret *eth.Block, err error) {
     if retryErr := backoff.Retry(func() error {
-        resp, respErr := rc.ethClient.BlockByNumber(ctx, blockNumber)
+        resp, respErr := rc.eClient().BlockByNumber(ctx, blockNumber)
         if respErr != nil {
             rc.logger.Info("failed to call BlockByNumber", "error", respErr)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/types/retryable_client.go` around lines 445 - 461, BlockByNumber currently calls rc.ethClient directly, bypassing the client-switching logic in rc.eClient(); change the call to use rc.eClient().BlockByNumber so the method always uses the active client returned by eClient(), preserve the existing retry/backoff flow and error handling (including retryableError checks and setting err/ret), and ensure ret is only assigned when resp is non-nil; update references inside the anonymous retry func to use eClient() instead of ethClient.
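The bug above is the classic "cached field vs. accessor" mistake: one method reads the stored client field while the rest of the type routes through an accessor that honors switching. The sketch below demonstrates the pattern with hypothetical names (`retryableClient`, `eClient`, `fakeClient` are illustrative, not morph's actual types):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// blockFetcher stands in for the subset of ethclient methods a retryable
// wrapper would use; illustrative only.
type blockFetcher interface {
	BlockByNumber(n int64) (string, error)
}

type fakeClient struct{ name string }

func (f *fakeClient) BlockByNumber(n int64) (string, error) {
	return fmt.Sprintf("%s:block-%d", f.name, n), nil
}

// retryableClient routes every call through eClient() so a switch to the
// backup client is picked up by all methods. Reading the primary field
// directly (the bug in the review) would keep querying the old client.
type retryableClient struct {
	primary, backup blockFetcher
	useBackup       atomic.Bool
}

func (rc *retryableClient) eClient() blockFetcher {
	if rc.useBackup.Load() {
		return rc.backup
	}
	return rc.primary
}

func (rc *retryableClient) BlockByNumber(n int64) (string, error) {
	return rc.eClient().BlockByNumber(n) // not rc.primary: respect switching
}

func main() {
	rc := &retryableClient{primary: &fakeClient{"primary"}, backup: &fakeClient{"backup"}}
	before, _ := rc.BlockByNumber(1)
	rc.useBackup.Store(true)
	after, _ := rc.BlockByNumber(1)
	fmt.Println(before, after)
}
```

A grep for the stored field name outside the accessor is a cheap way to catch any other method with the same bypass.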
🧹 Nitpick comments (16)
node/sync/bridge_client.go (1)
27-30: Optional: Consider returning a read-only interface instead of a concrete *ethclient.Client.

While the current callers only read from the returned client and no .Close() calls are present in the codebase, returning a concrete type exposes unnecessary mutability. The existing comment documents the intent ("for sharing with other services"), but narrowing the API surface to a read-only interface would make ownership boundaries clearer and prevent accidental misuse by future callers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/sync/bridge_client.go` around lines 27 - 30, Change BridgeClient.L1Client to return a narrow read-only interface instead of *ethclient.Client: declare a new interface (e.g., ReadOnlyEthClient or L1ClientReader) in the same package that lists only the methods callers actually use, update the method signature func (c *BridgeClient) L1Client() ReadOnlyEthClient, and return c.l1Client (ethclient.Client implements the interface). This preserves behavior but prevents callers from calling mutating methods like Close(); ensure the interface includes all methods currently invoked on the returned client so the code continues to compile.

ops/docker-sequencer-test/run-test.sh (4)
471-471: Fix inconsistent indentation.

Line 471 has an extra leading space before echo.

♻️ Proposed fix
```diff
- echo "Node 0 (seq-0): Block $(get_block_number http://127.0.0.1:8545)"
+echo "Node 0 (seq-0): Block $(get_block_number http://127.0.0.1:8545)"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/run-test.sh` at line 471, The echo line has an extra leading space causing inconsistent indentation; remove the extra leading space so the line reads starting at column 1 (i.e., align `echo "Node 0 (seq-0): Block $(get_block_number http://127.0.0.1:8545)"` with surrounding lines), keeping the same content and using the existing `get_block_number` call.
274-274: Document the well-known test private key.

This is the Anvil/Hardhat account #0 private key (a widely-used development key). Add a comment to clarify this is intentional and test-only.

📝 Proposed documentation
```diff
+# Anvil/Hardhat account #0 - well-known test key, DO NOT use in production
 export SEQUENCER_PRIVATE_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
```
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/run-test.sh` at line 274, The exported SEQUENCER_PRIVATE_KEY value in run-test.sh is the well-known Anvil/Hardhat account `#0` private key used for local/dev testing; add an inline comment next to the export of SEQUENCER_PRIVATE_KEY clarifying that this is the intentional, widely-used development/test-only key (not for production) so readers understand it's safe and expected in tests.
68-68: Shellcheck SC2155: Declare and assign separately.

Multiple lines combine a local declaration with command substitution, which masks return values. While this is low-risk for a test script, separating declaration and assignment improves robustness.

♻️ Example fix pattern
```diff
-local current=$(get_block_number "$rpc_url")
+local current
+current=$(get_block_number "$rpc_url")
```

Also applies to: 305-305, 313-314, 333-333, 342-342, 345-346, 360-360, 378-378
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/run-test.sh` at line 68, Several local variables are declared with command substitution in one step (e.g., local current=$(get_block_number "$rpc_url")), which triggers ShellCheck SC2155; change each to declare the variable first and assign separately to avoid masking return values. For each occurrence (functions like get_block_number and variables such as current, target, next, etc.), replace the single-line "local var=$(...)" with two lines: "local var" followed by "var=$(...)" ensuring you preserve any quoting and command arguments (e.g., "$rpc_url"); apply this pattern to all listed occurrences.
142-240: Embedded Python script is complex but functional.

The inline Python for contract deployment and genesis generation is extensive. Consider extracting this to a separate Python script file for better maintainability and testability. However, keeping it inline does ensure the script is self-contained.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/run-test.sh` around lines 142 - 240, Extract the large inline Python payload into a separate script (e.g., generate_genesis.py) and replace the embedded python3 -c "..." block in run-test.sh with a call to that script; move all top-level symbols and logic (sys.path.insert(0, '$DEVNET_DIR'), imports devnet, functions/variables like run_command, read_json, write_json, log, devnet_cfg_orig, temp_deploy_config, genesis_batch_header, addresses, etc.) into the new file, accept required environment values (DEVNET_DIR, MORPH_ROOT, OPS_DIR, CONTRACTS_DIR or pass them as args/env), add a proper shebang and make it executable, and ensure the shell script invokes python3 generate_genesis.py with the same environment so return codes and side-effects (files written to devnet_dir, deployment_dir, .env updates) are preserved.

ops/docker-sequencer-test/docker-compose.override.yml (2)
3-3: version field is obsolete in Docker Compose v2+.

The version key is ignored by modern Docker Compose and produces a warning. Consider removing it.

♻️ Proposed fix
```diff
 # Override file to use test images
 # Copy this to ops/docker/docker-compose.override.yml before running test
-version: '3.8'

 services:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/docker-compose.override.yml` at line 3, Remove the obsolete top-level "version:" key from docker-compose.override.yml (the "version: '3.8'" line) because Compose v2+ ignores it and emits a warning; simply delete that key and its value so the file uses the newer schema without the version field.
27-27: Document that these are test-only private keys.

Gitleaks flagged these as potential API key leaks. While these appear to be pre-generated test keys (similar to well-known Anvil/Hardhat development keys), adding a comment will prevent future confusion and silence security scanners.
📝 Proposed documentation
```diff
 node-0:
   image: morph-node-test:latest
   build:
     context: ../..
     dockerfile: ops/docker-sequencer-test/Dockerfile.l2-node-test
   environment:
+    # Test-only sequencer key - DO NOT use in production
     - MORPH_NODE_SEQUENCER_PRIVATE_KEY=0xd99870855d97327d20c666abc78588f1449b1fac76ed0c86c1afb9ce2db85f32
 node-1:
   image: morph-node-test:latest
   environment:
+    # Test-only sequencer key - DO NOT use in production
     - MORPH_NODE_SEQUENCER_PRIVATE_KEY=0x0890c388c3bf5e04fee1d8f3c117e5f44f435ced7baf7bfd66c10e1f3a3f4b10
```

Also applies to: 36-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/docker-compose.override.yml` at line 27, Add an inline comment next to the MORPH_NODE_SEQUENCER_PRIVATE_KEY environment variable to clearly state that this value is a pre-generated test/development private key (not for production) so scanners and future readers know it's safe to commit; update the same comment for the other identical key occurrence (the second MORPH_NODE_SEQUENCER_PRIVATE_KEY entry) to match.

ops/docker-sequencer-test/Dockerfile.l2-geth-test (1)
12-17: Consider using a minimal runtime image.

The runtime stage uses the full go-ubuntu-builder image, which includes the Go toolchain and build dependencies. For a smaller image, consider using a minimal base like ubuntu:22.04 or debian:bookworm-slim, since only the compiled binary is needed at runtime.

💡 Example using minimal base
```diff
 # Runtime stage
-FROM ghcr.io/morph-l2/go-ubuntu-builder:go-1.24-ubuntu
+FROM ubuntu:22.04
 RUN apt-get -qq update && apt-get -qq install -y --no-install-recommends \
     ca-certificates bash curl \
     && rm -rf /var/lib/apt/lists/*
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/Dockerfile.l2-geth-test` around lines 12 - 17, The runtime stage currently uses the full builder image (ghcr.io/morph-l2/go-ubuntu-builder:go-1.24-ubuntu); replace it with a minimal runtime base such as ubuntu:22.04 or debian:bookworm-slim, install only the runtime packages (ca-certificates, bash, curl) as already present, and adjust any subsequent COPY/USER instructions that expect the built binary (ensure the compiled binary built in the builder stage is copied into the new minimal stage and ownership/permissions are set appropriately). Keep the existing RUN apt-get lines but run them against the new minimal base and remove any leftover build-tool assumptions from the runtime stage.ops/docker-sequencer-test/Dockerfile.l2-node-test (1)
37-47: Consider using a minimal runtime image (consistent with geth Dockerfile).

Same as the geth test Dockerfile, the runtime stage uses the full builder image. For consistency and smaller images, consider using a minimal base since only the compiled binaries are needed at runtime.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/Dockerfile.l2-node-test` around lines 37 - 47, The final stage currently uses the full builder image (FROM ghcr.io/morph-l2/go-ubuntu-builder:go-1.24-ubuntu) even though only the compiled binaries are needed; change the final stage to a minimal runtime base (e.g., a distroless or slim image) and re-add any required runtime packages (ca-certificates or libc) if needed, then keep the COPY lines (COPY --from=builder /bitget/morph/node/build/bin/tendermint /usr/local/bin/ and COPY --from=builder /bitget/morph/node/build/bin/morphnode /usr/local/bin/) and the CMD (["morphnode","--home","/data"]) so only the two binaries are included in the smaller image.

ops/docker-sequencer-test/scripts/tx-generator.sh (1)
31-36: Code duplication with run-test.sh.

This get_block_number function is nearly identical to the one in run-test.sh (lines 53-59). Consider extracting shared utilities into a common script that both can source to avoid maintenance drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/scripts/tx-generator.sh` around lines 31 - 36, The get_block_number function is duplicated between tx-generator.sh and run-test.sh; extract this function into a shared utility script (e.g., a sourced common util script) and replace the duplicate definitions by sourcing that shared script from both tx-generator.sh and run-test.sh, ensuring the function name get_block_number remains unchanged so callers keep working and update any shebangs/execute permissions as needed.

node/l1sequencer/tracker.go (1)
54-68: Consider an immediate check on startup.

The loop only triggers on ticker ticks, so the first sync lag check happens 1 minute after start. Consider adding an immediate check before entering the loop to detect issues earlier.
🔧 Suggested improvement
```diff
 func (t *L1Tracker) loop() {
     defer close(t.stop)

+    // Immediate check on startup
+    t.checkL1SyncLag()
+
     ticker := time.NewTicker(1 * time.Minute)
     defer ticker.Stop()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/l1sequencer/tracker.go` around lines 54 - 68, The loop in L1Tracker.loop delays the first sync check by up to one minute; call t.checkL1SyncLag() once immediately before entering the for/select loop so the first check runs on startup (ensure you still respect t.ctx.Done() by returning if context is canceled), then keep the existing ticker-driven checks; reference the L1Tracker.loop function and the checkL1SyncLag method to locate where to add the immediate invocation.

node/flags/flags.go (1)
3-6: Import order should follow Go conventions.

Standard library packages should be grouped before third-party packages. Consider reordering:
```diff
 import (
-    "github.com/urfave/cli"
     "time"
+
+    "github.com/urfave/cli"
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/flags/flags.go` around lines 3 - 6, The import block in flags.go violates Go convention by placing the third-party package github.com/urfave/cli before the standard library package time; reorder the imports so standard library packages (time) appear first, then a blank line, then third-party packages (github.com/urfave/cli) to follow Go import grouping conventions.

contracts/contracts/l1/L1Sequencer.sol (1)
9-13: Consider adding a storage gap for future upgrades.

For upgradeable contracts, it's a best practice to reserve storage slots for future state variables. Without a gap, adding new variables in future upgrades could corrupt the existing storage layout.
🛡️ Suggested addition
```diff
 contract L1Sequencer is OwnableUpgradeable {
     // ============ Storage ============

     /// @notice Current sequencer address
     address public sequencer;
+
+    /// @dev Reserved storage gap for future upgrades
+    uint256[49] private __gap;

     // ============ Events ============
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/contracts/l1/L1Sequencer.sol` around lines 9 - 13, The contract L1Sequencer (which inherits OwnableUpgradeable) lacks a reserved storage gap and so adding new state vars in upgrades could corrupt the storage layout; add a private reserved storage array (e.g., uint256[n] __gap) after the declared state variable `address public sequencer;` (inside the storage section of L1Sequencer) to reserve future slots and preserve compatibility with OwnableUpgradeable and future upgrades.

node/core/executor.go (1)
662-683: Consider consistent nil Transactions handling in executableL2DataToBlockV2.
blockV2ToExecutableL2Data (lines 639-643) ensures Transactions is never nil, but executableL2DataToBlockV2 doesn't apply the same treatment. While this may be fine if the caller always provides non-nil data, it's inconsistent.

♻️ Suggested consistency improvement
```diff
 func executableL2DataToBlockV2(data *catalyst.ExecutableL2Data) *l2node.BlockV2 {
     if data == nil {
         return nil
     }
+    txs := data.Transactions
+    if txs == nil {
+        txs = make([][]byte, 0)
+    }
     return &l2node.BlockV2{
         ParentHash: data.ParentHash,
         Miner:      data.Miner,
         Number:     data.Number,
         GasLimit:   data.GasLimit,
         BaseFee:    data.BaseFee,
         Timestamp:  data.Timestamp,
-        Transactions: data.Transactions,
+        Transactions: txs,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/core/executor.go` around lines 662 - 683, executableL2DataToBlockV2 currently copies Transactions directly and can leave BlockV2.Transactions nil, making it inconsistent with blockV2ToExecutableL2Data which ensures Transactions is never nil; update executableL2DataToBlockV2 to defensive-copy or set Transactions to an empty slice when data.Transactions is nil (i.e., ensure BlockV2.Transactions = data.Transactions or make([]<txnType>, 0) if nil) so both conversion functions guarantee a non-nil Transactions slice.

node/l1sequencer/verifier.go (1)
59-83: IsSequencer logic has potential for double RPC call.

The current flow can make two L1 RPC calls in a single IsSequencer invocation:
- Line 67: Refresh if cache expired
- Line 78: Force refresh on cache miss
While this ensures correctness (catching recent sequencer changes), be aware of the RPC overhead. With a 10-second TTL, this is manageable, but with a longer TTL the double-call on miss becomes more relevant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/l1sequencer/verifier.go` around lines 59 - 83, IsSequencer can perform two RPC calls by calling flushCache twice (once for expired cache and again on cache miss); modify IsSequencer to avoid a redundant second refresh by tracking whether a refresh already occurred in this invocation—call flushCache once if cache is expired, set a local flag (e.g., refreshed), check the cached sequencer (c.sequencer) and only call flushCache again on miss if refreshed is false; keep the existing mutex locking and return semantics and reference functions/fields IsSequencer, flushCache, c.cacheExpiry, c.sequencer, and c.mutex.

node/l1sequencer/signer.go (1)
65-71: Consider whether a nil verifier should be an error or an expected state.
The IsActiveSequencer method returns an error when verifier is nil. However, NewLocalSigner accepts a nil verifier without error. This creates an asymmetry where construction succeeds but a key method fails.
If a nil verifier is a valid configuration (e.g., verifier disabled), consider returning (false, nil) instead. If a verifier is always required for proper operation, validate this in the constructor.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/l1sequencer/signer.go` around lines 65 - 71, The constructor/usage mismatch: NewLocalSigner currently accepts a nil verifier but IsActiveSequencer treats nil verifier as an error; pick one of two fixes and implement it consistently: (A) If verifier is optional, change LocalSigner.IsActiveSequencer to return (false, nil) when s.verifier == nil (remove the fmt.Errorf path) so caller sees "not active" without error; or (B) If verifier is required, validate in NewLocalSigner (e.g., return an error) and ensure NewLocalSigner does not construct a LocalSigner with a nil verifier. Refer to the LocalSigner type, the IsActiveSequencer method, and NewLocalSigner to locate where to apply the change.
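Option (A) from the comment above can be sketched like this; the types are illustrative, not the real LocalSigner:

```go
package main

import "fmt"

type verifier struct{ sequencer string }

type localSigner struct {
	addr     string
	verifier *verifier
}

func (s *localSigner) IsActiveSequencer() (bool, error) {
	if s.verifier == nil {
		return false, nil // verifier disabled: simply "not active", no error
	}
	return s.verifier.sequencer == s.addr, nil
}

func main() {
	s := &localSigner{addr: "0xabc"} // constructed with a nil verifier
	active, err := s.IsActiveSequencer()
	fmt.Println(active, err) // false <nil>
}
```

This keeps construction and usage symmetric: a nil verifier is a supported configuration rather than a latent runtime error.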
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindings/bindings/l1sequencer.go`:
- Around line 32-36: The L1SequencerMetaData ABI is out of date: its initialize
signature only has initialize(address _owner) but the Solidity contract's
initialize requires initialize(address _owner, address _initialSequencer);
regenerate the Go bindings so L1SequencerMetaData.ABI includes the two-argument
initialize(address,address). Locate the binding struct L1SequencerMetaData and
re-run your binding generation tool (e.g., abigen / your build script) against
the updated L1Sequencer.sol contract so the generated ABI and Bin reflect the
new initialize(_owner,_initialSequencer) signature.
In `@contracts/deploy/022-SequencerInit.ts`:
- Around line 48-51: The call to IL1SequencerProxy.upgradeToAndCall (with
L1SequencerFactory.interface.encodeFunctionData('initialize', [owner,
initialSequencer])) returns a ContractTransaction that must be awaited for
mining; change the code to capture the returned tx, call await tx.wait() and
handle errors (e.g., try/catch or rethrow) before running awaitCondition so the
script can detect revert/failure early and avoid extra polling for an unmined
transaction.
- Around line 53-61: The call to awaitCondition(...) is passing rate and
attempts swapped (currently 3000, 1000) which yields an excessive wait; update
the call where IL1SequencerProxy.implementation() is compared to
L1SequencerImplAddress to use a reasonable retry strategy such as
awaitCondition(cond, 1000, 10) (rate = 1000ms, attempts = 10) or, if you
intended a very small number of retries, use attempts = 3; keep the condition
function as-is and only change the numeric arguments.
In `@node/l1sequencer/verifier.go`:
- Around line 16-20: The CacheTTL constant in verifier.go is set to 10 seconds
(CacheTTL) which is a test value; restore it to a production-safe default (30 *
time.Minute) or make it configurable during initialization and use that
configurable value where the cache is constructed so IsSequencer()'s
double-fetch won't cause excessive L1 RPCs; update the CacheTTL constant or add
a constructor parameter/field (e.g., sequencer verifier init/config) and ensure
IsSequencer() reads the configured TTL instead of the hard-coded 10s.
In `@node/types/retryable_client.go`:
- Around line 532-548: AssembleL2BlockV2 is calling rc.authClient directly which
bypasses the client-switching logic; change the call to use the accessor method
rc.aClient() (or the equivalent client-returning helper used elsewhere) when
invoking AssembleL2BlockV2 so the retry logic uses the current active client;
update the line that calls rc.authClient.AssembleL2BlockV2(...) to call
rc.aClient().AssembleL2BlockV2(...) and ensure error handling and variable names
(resp, respErr, retryableError, ret, err) remain consistent with the surrounding
backoff.Retry block.
In `@ops/docker-sequencer-test/run-test.sh`:
- Around line 62-76: The wait_for_block function currently loops forever; modify
wait_for_block to accept or derive a timeout (like wait_for_rpc) and track start
time, then inside the loop compute elapsed time and break with failure if
elapsed >= timeout; update log messages and the return code so it returns
non-zero on timeout and logs an error (use the same style as wait_for_rpc),
referencing the wait_for_block function, the target_height/local current
variable checks, and ensure sleep and echo behavior remain but are guarded by
the timeout check.
In `@ops/docker-sequencer-test/scripts/tx-generator.sh`:
- Line 9: The script defines PRIVATE_KEY but never uses it; either remove the
PRIVATE_KEY variable or change the transaction flow to sign and send a raw
transaction instead of calling eth_sendTransaction. If you choose to use it,
update the send step that currently calls eth_sendTransaction (around the
eth_sendTransaction usage) to create and sign a transaction with PRIVATE_KEY and
submit via eth_sendRawTransaction (i.e., build the tx, sign with the private
key, and send the resulting raw hex). If you choose to remove it, delete the
PRIVATE_KEY declaration and any related unused references so the script
consistently relies on the node's unlocked account.
- Around line 42-44: The script currently sets NONCE_HEX=$(get_nonce) and then
does arithmetic expansion NONCE=$((NONCE_HEX)) which will treat empty or non-hex
as 0; add validation after calling get_nonce to ensure NONCE_HEX is non-empty
and matches a hex regexp (e.g. ^[0-9a-fA-F]+$) before converting; if invalid,
either retry get_nonce or exit with an error to avoid nonce collisions; when
valid, convert explicitly (use base-16 conversion semantics) and then assign
NONCE and continue (references: get_nonce, NONCE_HEX, NONCE).
---
Outside diff comments:
In `@node/types/retryable_client.go`:
- Around line 445-461: BlockByNumber currently calls rc.ethClient directly,
bypassing the client-switching logic in rc.eClient(); change the call to use
rc.eClient().BlockByNumber so the method always uses the active client returned
by eClient(), preserve the existing retry/backoff flow and error handling
(including retryableError checks and setting err/ret), and ensure ret is only
assigned when resp is non-nil; update references inside the anonymous retry func
to use eClient() instead of ethClient.
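The reason the accessor matters can be shown with a toy model: a field read taken before a client switch goes stale, while the accessor always resolves to the active client. Names here (retryable, eClient) mirror the review comment but are simplified stand-ins:

```go
package main

import "fmt"

type client struct{ name string }

type retryable struct {
	primary, backup *client
	useBackup       bool
}

// eClient returns whichever client is currently active.
func (rc *retryable) eClient() *client {
	if rc.useBackup {
		return rc.backup
	}
	return rc.primary
}

func main() {
	rc := &retryable{primary: &client{"primary"}, backup: &client{"backup"}}
	cached := rc.primary // snapshot taken before the switch
	rc.useBackup = true  // client switch happens mid-retry
	fmt.Println(cached.name)       // primary (stale)
	fmt.Println(rc.eClient().name) // backup (follows the switch)
}
```

Inside a backoff.Retry closure, calling the accessor on every attempt means a switch triggered by an earlier failed attempt is picked up by the next one.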
---
Nitpick comments:
In `@contracts/contracts/l1/L1Sequencer.sol`:
- Around line 9-13: The contract L1Sequencer (which inherits OwnableUpgradeable)
lacks a reserved storage gap and so adding new state vars in upgrades could
corrupt the storage layout; add a private reserved storage array (e.g.,
uint256[n] __gap) after the declared state variable `address public sequencer;`
(inside the storage section of L1Sequencer) to reserve future slots and preserve
compatibility with OwnableUpgradeable and future upgrades.
In `@node/core/executor.go`:
- Around line 662-683: executableL2DataToBlockV2 currently copies Transactions
directly and can leave BlockV2.Transactions nil, making it inconsistent with
blockV2ToExecutableL2Data which ensures Transactions is never nil; update
executableL2DataToBlockV2 to defensive-copy or set Transactions to an empty
slice when data.Transactions is nil (i.e., ensure BlockV2.Transactions =
data.Transactions or make([]<txnType>, 0) if nil) so both conversion functions
guarantee a non-nil Transactions slice.
In `@node/flags/flags.go`:
- Around line 3-6: The import block in flags.go violates Go convention by
placing the third-party package github.com/urfave/cli before the standard
library package time; reorder the imports so standard library packages (time)
appear first, then a blank line, then third-party packages
(github.com/urfave/cli) to follow Go import grouping conventions.
In `@node/l1sequencer/signer.go`:
- Around line 65-71: The constructor/usage mismatch: NewLocalSigner currently
accepts a nil verifier but IsActiveSequencer treats nil verifier as an error;
pick one of two fixes and implement it consistently: (A) If verifier is
optional, change LocalSigner.IsActiveSequencer to return (false, nil) when
s.verifier == nil (remove the fmt.Errorf path) so caller sees "not active"
without error; or (B) If verifier is required, validate in NewLocalSigner (e.g.,
return an error) and ensure NewLocalSigner does not construct a LocalSigner with
a nil verifier. Refer to the LocalSigner type, the IsActiveSequencer method, and
NewLocalSigner to locate where to apply the change.
In `@node/l1sequencer/tracker.go`:
- Around line 54-68: The loop in L1Tracker.loop delays the first sync check by
up to one minute; call t.checkL1SyncLag() once immediately before entering the
for/select loop so the first check runs on startup (ensure you still respect
t.ctx.Done() by returning if context is canceled), then keep the existing
ticker-driven checks; reference the L1Tracker.loop function and the
checkL1SyncLag method to locate where to add the immediate invocation.
In `@node/l1sequencer/verifier.go`:
- Around line 59-83: IsSequencer can perform two RPC calls by calling flushCache
twice (once for expired cache and again on cache miss); modify IsSequencer to
avoid a redundant second refresh by tracking whether a refresh already occurred
in this invocation—call flushCache once if cache is expired, set a local flag
(e.g., refreshed), check the cached sequencer (c.sequencer) and only call
flushCache again on miss if refreshed is false; keep the existing mutex locking
and return semantics and reference functions/fields IsSequencer, flushCache,
c.cacheExpiry, c.sequencer, and c.mutex.
In `@node/sync/bridge_client.go`:
- Around line 27-30: Change BridgeClient.L1Client to return a narrow read-only
interface instead of *ethclient.Client: declare a new interface (e.g.,
ReadOnlyEthClient or L1ClientReader) in the same package that lists only the
methods callers actually use, update the method signature func (c *BridgeClient)
L1Client() ReadOnlyEthClient, and return c.l1Client (ethclient.Client implements
the interface). This preserves behavior but prevents callers from calling
mutating methods like Close(); ensure the interface includes all methods
currently invoked on the returned client so the code continues to compile.
In `@ops/docker-sequencer-test/docker-compose.override.yml`:
- Line 3: Remove the obsolete top-level "version:" key from
docker-compose.override.yml (the "version: '3.8'" line) because Compose v2+
ignores it and emits a warning; simply delete that key and its value so the file
uses the newer schema without the version field.
- Line 27: Add an inline comment next to the MORPH_NODE_SEQUENCER_PRIVATE_KEY
environment variable to clearly state that this value is a pre-generated
test/development private key (not for production) so scanners and future readers
know it's safe to commit; update the same comment for the other identical key
occurrence (the second MORPH_NODE_SEQUENCER_PRIVATE_KEY entry) to match.
In `@ops/docker-sequencer-test/Dockerfile.l2-geth-test`:
- Around line 12-17: The runtime stage currently uses the full builder image
(ghcr.io/morph-l2/go-ubuntu-builder:go-1.24-ubuntu); replace it with a minimal
runtime base such as ubuntu:22.04 or debian:bookworm-slim, install only the
runtime packages (ca-certificates, bash, curl) as already present, and adjust
any subsequent COPY/USER instructions that expect the built binary (ensure the
compiled binary built in the builder stage is copied into the new minimal stage
and ownership/permissions are set appropriately). Keep the existing RUN apt-get
lines but run them against the new minimal base and remove any leftover
build-tool assumptions from the runtime stage.
In `@ops/docker-sequencer-test/Dockerfile.l2-node-test`:
- Around line 37-47: The final stage currently uses the full builder image (FROM
ghcr.io/morph-l2/go-ubuntu-builder:go-1.24-ubuntu) even though only the compiled
binaries are needed; change the final stage to a minimal runtime base (e.g., a
distroless or slim image) and re-add any required runtime packages
(ca-certificates or libc) if needed, then keep the COPY lines (COPY
--from=builder /bitget/morph/node/build/bin/tendermint /usr/local/bin/ and COPY
--from=builder /bitget/morph/node/build/bin/morphnode /usr/local/bin/) and the
CMD (["morphnode","--home","/data"]) so only the two binaries are included in
the smaller image.
In `@ops/docker-sequencer-test/run-test.sh`:
- Line 471: The echo line has an extra leading space causing inconsistent
indentation; remove the extra leading space so the line reads starting at column
1 (i.e., align `echo "Node 0 (seq-0): Block $(get_block_number
http://127.0.0.1:8545)"` with surrounding lines), keeping the same content and
using the existing `get_block_number` call.
- Line 274: The exported SEQUENCER_PRIVATE_KEY value in run-test.sh is the
well-known Anvil/Hardhat account `#0` private key used for local/dev testing; add
an inline comment next to the export of SEQUENCER_PRIVATE_KEY clarifying that
this is the intentional, widely-used development/test-only key (not for
production) so readers understand it's safe and expected in tests.
- Line 68: Several local variables are declared with command substitution in one
step (e.g., local current=$(get_block_number "$rpc_url")), which triggers
ShellCheck SC2155; change each to declare the variable first and assign
separately to avoid masking return values. For each occurrence (functions like
get_block_number and variables such as current, target, next, etc.), replace the
single-line "local var=$(...)" with two lines: "local var" followed by
"var=$(...)" ensuring you preserve any quoting and command arguments (e.g.,
"$rpc_url"); apply this pattern to all listed occurrences.
- Around line 142-240: Extract the large inline Python payload into a separate
script (e.g., generate_genesis.py) and replace the embedded python3 -c "..."
block in run-test.sh with a call to that script; move all top-level symbols and
logic (sys.path.insert(0, '$DEVNET_DIR'), imports devnet, functions/variables
like run_command, read_json, write_json, log, devnet_cfg_orig,
temp_deploy_config, genesis_batch_header, addresses, etc.) into the new file,
accept required environment values (DEVNET_DIR, MORPH_ROOT, OPS_DIR,
CONTRACTS_DIR or pass them as args/env), add a proper shebang and make it
executable, and ensure the shell script invokes python3 generate_genesis.py with
the same environment so return codes and side-effects (files written to
devnet_dir, deployment_dir, .env updates) are preserved.
In `@ops/docker-sequencer-test/scripts/tx-generator.sh`:
- Around line 31-36: The get_block_number function is duplicated between
tx-generator.sh and run-test.sh; extract this function into a shared utility
script (e.g., a sourced common util script) and replace the duplicate
definitions by sourcing that shared script from both tx-generator.sh and
run-test.sh, ensuring the function name get_block_number remains unchanged so
callers keep working and update any shebangs/execute permissions as needed.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.work is excluded by !**/*.work
📒 Files selected for processing (30)
- bindings/bindings/l1sequencer.go
- contracts/contracts/l1/L1Sequencer.sol
- contracts/deploy/013-DeployProxys.ts
- contracts/deploy/014-DeployImpls.ts
- contracts/deploy/019-AdminTransfer.ts
- contracts/deploy/022-SequencerInit.ts
- contracts/deploy/index.ts
- contracts/src/types.ts
- contracts/tasks/deploy.ts
- node/blocktag/config.go
- node/blocktag/service.go
- node/cmd/node/main.go
- node/core/executor.go
- node/flags/flags.go
- node/go.mod
- node/l1sequencer/signer.go
- node/l1sequencer/tracker.go
- node/l1sequencer/verifier.go
- node/sequencer/tm_node.go
- node/sync/bridge_client.go
- node/sync/syncer.go
- node/types/retryable_client.go
- ops/devnet-morph/devnet/__init__.py
- ops/docker-sequencer-test/Dockerfile.l2-geth-test
- ops/docker-sequencer-test/Dockerfile.l2-node-test
- ops/docker-sequencer-test/README.md
- ops/docker-sequencer-test/docker-compose.override.yml
- ops/docker-sequencer-test/entrypoint-l2.sh
- ops/docker-sequencer-test/run-test.sh
- ops/docker-sequencer-test/scripts/tx-generator.sh
💤 Files with no reviewable changes (1)
- node/blocktag/config.go
✅ Files skipped from review due to trivial changes (1)
- ops/docker-sequencer-test/README.md
🚧 Files skipped from review as they are similar to previous changes (1)
- node/go.mod
bindings/bindings/l1sequencer.go
// L1SequencerMetaData contains all meta data concerning the L1Sequencer contract.
var L1SequencerMetaData = &bind.MetaData{
	ABI: "[{\"type\":\"function\",\"name\":\"getSequencer\",\"inputs\":[],\"outputs\":[{\"name\":\"\",\"type\":\"address\",\"internalType\":\"address\"}],\"stateMutability\":\"view\"},{\"type\":\"function\",\"name\":\"initialize\",\"inputs\":[{\"name\":\"_owner\",\"type\":\"address\",\"internalType\":\"address\"}],\"outputs\":[],\"stateMutability\":\"nonpayable\"},{\"type\":\"function\",\"name\":\"owner\",\"inputs\":[],\"outputs\":[{\"name\":\"\",\"type\":\"address\",\"internalType\":\"address\"}],\"stateMutability\":\"view\"},{\"type\":\"function\",\"name\":\"renounceOwnership\",\"inputs\":[],\"outputs\":[],\"stateMutability\":\"nonpayable\"},{\"type\":\"function\",\"name\":\"sequencer\",\"inputs\":[],\"outputs\":[{\"name\":\"\",\"type\":\"address\",\"internalType\":\"address\"}],\"stateMutability\":\"view\"},{\"type\":\"function\",\"name\":\"transferOwnership\",\"inputs\":[{\"name\":\"newOwner\",\"type\":\"address\",\"internalType\":\"address\"}],\"outputs\":[],\"stateMutability\":\"nonpayable\"},{\"type\":\"function\",\"name\":\"updateSequencer\",\"inputs\":[{\"name\":\"newSequencer\",\"type\":\"address\",\"internalType\":\"address\"}],\"outputs\":[],\"stateMutability\":\"nonpayable\"},{\"type\":\"event\",\"name\":\"Initialized\",\"inputs\":[{\"name\":\"version\",\"type\":\"uint8\",\"indexed\":false,\"internalType\":\"uint8\"}],\"anonymous\":false},{\"type\":\"event\",\"name\":\"OwnershipTransferred\",\"inputs\":[{\"name\":\"previousOwner\",\"type\":\"address\",\"indexed\":true,\"internalType\":\"address\"},{\"name\":\"newOwner\",\"type\":\"address\",\"indexed\":true,\"internalType\":\"address\"}],\"anonymous\":false},{\"type\":\"event\",\"name\":\"SequencerUpdated\",\"inputs\":[{\"name\":\"oldSequencer\",\"type\":\"address\",\"indexed\":true,\"internalType\":\"address\"},{\"name\":\"newSequencer\",\"type\":\"address\",\"indexed\":true,\"internalType\":\"address\"}],\"anonymous\":false}]",
	Bin: "0x608060405234801561000f575f80fd5b5061081a8061001d5f395ff3fe608060405234801561000f575f80fd5b506004361061007a575f3560e01c8063715018a611610058578063715018a6146100f65780638da5cb5b146100fe578063c4d66de81461011c578063f2fde38b1461012f575f80fd5b806343ae20a31461007e5780634d96a90a146100935780635c1bba38146100d6575b5f80fd5b61009161008c3660046107d3565b610142565b005b60655473ffffffffffffffffffffffffffffffffffffffff165b60405173ffffffffffffffffffffffffffffffffffffffff909116815260200160405180910390f35b6065546100ad9073ffffffffffffffffffffffffffffffffffffffff1681565b6100916102c7565b60335473ffffffffffffffffffffffffffffffffffffffff166100ad565b61009161012a3660046107d3565b6102da565b61009161013d3660046107d3565b6104ed565b61014a6105a4565b73ffffffffffffffffffffffffffffffffffffffff81166101cc576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152601160248201527f696e76616c69642073657175656e63657200000000000000000000000000000060448201526064015b60405180910390fd5b60655473ffffffffffffffffffffffffffffffffffffffff90811690821603610251576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152600e60248201527f73616d652073657175656e63657200000000000000000000000000000000000060448201526064016101c3565b6065805473ffffffffffffffffffffffffffffffffffffffff8381167fffffffffffffffffffffffff0000000000000000000000000000000000000000831681179093556040519116919082907fcd58b762453bd126b48db83f2cecd464f5281dd7e5e6824b528c09d0482984d6905f90a35050565b6102cf6105a4565b6102d85f610625565b565b5f54610100900460ff16158080156102f857505f54600160ff909116105b806103115750303b15801561031157505f5460ff166001145b61039d576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602e60248201527f496e697469616c697a61626c653a20636f6e747261637420697320616c72656160448201527f647920696e697469616c697a656400000000000000000000000000000000000060648201526084016101c3565b5f80547fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0016600117905580156103f9575f80547fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00ff166101001790555b73ffffffffffffffffffffffffffffffffffffffff8216610476576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152600d60248201527f696e76616c6964206f776e65720000000000000000000000000000000000000060448201526064016101c3565b61047e61069b565b61048782610625565b80156104e9575f80547fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00ff169055604051600181527f7f26b83ff96e1f2b6a682f133852f6798a09c465da95921460cefb38474024989060200160405180910390a15b5050565b6104f56105a4565b73ffffffffffffffffffffffffffffffffffffffff8116610598576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602660248201527f4f776e61626c653a206e6577206f776e657220697320746865207a65726f206160448201527f646472657373000000000000000000000000000000000000000000000000000060648201526084016101c3565b6105a181610625565b50565b60335473ffffffffffffffffffffffffffffffffffffffff1633146102d8576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820181905260248201527f4f776e61626c653a2063616c6c6572206973206e6f7420746865206f776e657260448201526064016101c3565b6033805473ffffffffffffffffffffffffffffffffffffffff8381167fffffffffffffffffffffffff0000000000000000000000000000000000000000831681179093556040519116919082907f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0905f90a35050565b5f54610100900460ff16610731576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602b60248201527f496e697469616c697a61626c653a20636f6e7472616374206973206e6f74206960448201527f6e697469616c697a696e6700000000000000000000000000000000000000000060648201526084016101c3565b6102d85f54610100900460ff166107ca576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602b60248201527f496e697469616c697a61626c653a20636f6e7472616374206973206e6f74206960448201527f6e697469616c697a696e6700000000000000000000000000000000000000000060648201526084016101c3565b6102d833610625565b5f602082840312156107e3575f80fd5b813573ffffffffffffffffffffffffffffffffffffffff81168114610806575f80fd5b939250505056fea164736f6c6343000818000a",
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the initialize function signature in the Solidity contract
echo "=== Checking Solidity contract ==="
find . -name "L1Sequencer.sol" -type f 2>/dev/null | head -5
Repository: morph-l2/morph
Length of output: 133
🏁 Script executed:
#!/bin/bash
# Check if the file exists and examine the initialize function
if [ -f "contracts/contracts/l1/L1Sequencer.sol" ]; then
echo "=== Initialize function in Solidity contract ==="
rg -A5 'function initialize' contracts/contracts/l1/L1Sequencer.sol
else
echo "File not found at contracts/contracts/l1/L1Sequencer.sol"
find . -name "*L1Sequencer*" -type f 2>/dev/null
fi
Repository: morph-l2/morph
Length of output: 316
🏁 Script executed:
#!/bin/bash
# Extract and parse the ABI from the Go binding
if [ -f "bindings/bindings/l1sequencer.go" ]; then
echo "=== ABI in Go binding (initialize section) ==="
# Extract the ABI string and format it
rg 'ABI:.*' bindings/bindings/l1sequencer.go | sed 's/.*ABI: "//' | sed 's/",$//' | python3 << 'PYEOF'
import sys
import json
abi_str = sys.stdin.read().strip()
try:
abi = json.loads(abi_str)
# Find initialize function
for item in abi:
if item.get('type') == 'function' and item.get('name') == 'initialize':
print("Initialize function in ABI:")
print(json.dumps(item, indent=2))
break
except Exception as e:
print(f"Error parsing ABI: {e}")
PYEOF
else
echo "File not found at bindings/bindings/l1sequencer.go"
fi
Repository: morph-l2/morph
Length of output: 165
🏁 Script executed:
#!/bin/bash
# Extract the ABI more carefully from the Go binding
if [ -f "bindings/bindings/l1sequencer.go" ]; then
# Read the file and extract just the ABI value between the quotes
sed -n '/ABI:/,/Bin:/p' bindings/bindings/l1sequencer.go | head -1 | sed 's/.*ABI: "//' | sed 's/",$//' > /tmp/abi.json
# Try to validate and parse it
python3 << 'PYEOF'
import json
import sys
try:
with open('/tmp/abi.json', 'r') as f:
abi_str = f.read().strip()
# Parse the ABI
abi = json.loads(abi_str)
# Find initialize function
for item in abi:
if item.get('type') == 'function' and item.get('name') == 'initialize':
print("Initialize function in Go binding ABI:")
print(f" Name: {item.get('name')}")
print(f" Number of inputs: {len(item.get('inputs', []))}")
print(f" Inputs: {json.dumps(item.get('inputs', []), indent=4)}")
break
except Exception as e:
print(f"Error: {e}")
print("Attempting alternative extraction...")
PYEOF
else
echo "File not found"
fi
Repository: morph-l2/morph
Length of output: 177
🏁 Script executed:
#!/bin/bash
# Read the Go file directly to see the ABI structure
if [ -f "bindings/bindings/l1sequencer.go" ]; then
echo "=== File content around ABI definition ==="
sed -n '30,40p' bindings/bindings/l1sequencer.go
fi
Repository: morph-l2/morph
Length of output: 6539
🏁 Script executed:
#!/bin/bash
# Try to extract just the initialize portion of the ABI
cat bindings/bindings/l1sequencer.go | grep -o '"name":"initialize"[^}]*}' | head -1
Repository: morph-l2/morph
Length of output: 40
Critical: ABI mismatch with Solidity contract.
The generated ABI shows initialize takes only one parameter (address _owner), but the Solidity contract at contracts/contracts/l1/L1Sequencer.sol defines initialize(address _owner, address _initialSequencer) with two parameters. This mismatch will cause the Go binding to call the wrong function or fail.
Regenerate the bindings from the updated Solidity contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindings/bindings/l1sequencer.go` around lines 32 - 36, The
L1SequencerMetaData ABI is out of date: its initialize signature only has
initialize(address _owner) but the Solidity contract's initialize requires
initialize(address _owner, address _initialSequencer); regenerate the Go
bindings so L1SequencerMetaData.ABI includes the two-argument
initialize(address,address). Locate the binding struct L1SequencerMetaData and
re-run your binding generation tool (e.g., abigen / your build script) against
the updated L1Sequencer.sol contract so the generated ABI and Bin reflect the
new initialize(_owner,_initialSequencer) signature.
await IL1SequencerProxy.upgradeToAndCall(
  L1SequencerImplAddress,
  L1SequencerFactory.interface.encodeFunctionData('initialize', [owner, initialSequencer])
)
Missing .wait() on upgradeToAndCall transaction.
The upgradeToAndCall call returns a ContractTransaction (ethers v5). Without calling .wait(), the code proceeds before the transaction is mined and confirmed. While awaitCondition below eventually catches up, this approach:
- Cannot detect transaction failures/reverts early
- Wastes polling cycles waiting for an unmined transaction
🔧 Proposed fix
// Upgrade and initialize the proxy with owner and initial sequencer
// Note: We set sequencer in initialize() to avoid TransparentUpgradeableProxy admin restriction
- await IL1SequencerProxy.upgradeToAndCall(
+ const tx = await IL1SequencerProxy.upgradeToAndCall(
L1SequencerImplAddress,
L1SequencerFactory.interface.encodeFunctionData('initialize', [owner, initialSequencer])
)
+ await tx.wait()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/deploy/022-SequencerInit.ts` around lines 48 - 51, The call to
IL1SequencerProxy.upgradeToAndCall (with
L1SequencerFactory.interface.encodeFunctionData('initialize', [owner,
initialSequencer])) returns a ContractTransaction that must be awaited for
mining; change the code to capture the returned tx, call await tx.wait() and
handle errors (e.g., try/catch or rethrow) before running awaitCondition so the
script can detect revert/failure early and avoid extra polling for an unmined
transaction.
await awaitCondition(
  async () => {
    return (
      (await IL1SequencerProxy.implementation()).toLocaleLowerCase() === L1SequencerImplAddress.toLocaleLowerCase()
    )
  },
  3000,
  1000
)
awaitCondition parameters appear excessive or swapped.
Based on the awaitCondition signature in deploy-utils.ts: awaitCondition(cond, rate = 1000, attempts = 10), the current call passes rate=3000 and attempts=1000, resulting in a maximum wait time of ~50 minutes (3000ms × 1000 attempts).
This seems excessive for waiting on a single transaction's confirmation. If the intent was 3 attempts with 1000ms intervals, the parameters are swapped. Otherwise, consider reducing attempts to a reasonable value (e.g., 10-20).
🔧 Suggested fix (if intent is reasonable timeout)
await awaitCondition(
async () => {
return (
(await IL1SequencerProxy.implementation()).toLocaleLowerCase() === L1SequencerImplAddress.toLocaleLowerCase()
)
},
- 3000,
- 1000
+ 3000,
+ 20
)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/deploy/022-SequencerInit.ts` around lines 53 - 61, The call to
awaitCondition(...) is passing rate and attempts swapped (currently 3000, 1000)
which yields an excessive wait; update the call where
IL1SequencerProxy.implementation() is compared to L1SequencerImplAddress to use
a reasonable retry strategy such as awaitCondition(cond, 1000, 10) (rate =
1000ms, attempts = 10) or, if you intended a very small number of retries, use
attempts = 3; keep the condition function as-is and only change the numeric
arguments.
node/l1sequencer/verifier.go
const (
	// CacheTTL is the time-to-live for the sequencer verifier cache
	//CacheTTL = 30 * time.Minute
	CacheTTL = 10 * time.Second
)
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's verify the current state of the file
head -30 node/l1sequencer/verifier.go
```

Repository: morph-l2/morph
Length of output: 750

🏁 Script executed:

```shell
# Check if there are any configuration files that might override CacheTTL
find . -type f \( -name "*.go" -o -name "*.json" -o -name "*.yaml" -o -name "*.toml" \) | xargs grep -l "CacheTTL" 2>/dev/null | head -20
```

Repository: morph-l2/morph
Length of output: 88

🏁 Script executed:

```shell
# Look for other cache TTLs in the codebase to understand patterns
rg "TTL.*time\.(Second|Minute|Hour)" --type go -B 2 -A 2 | head -60
```

Repository: morph-l2/morph
Length of output: 368

🏁 Script executed:

```shell
# Check git history or comments around this change
git log --oneline -n 20 -- node/l1sequencer/verifier.go 2>/dev/null || echo "Git history not available"
```

Repository: morph-l2/morph
Length of output: 103

🏁 Script executed:

```shell
# Let's see the full implementation of SequencerVerifier and how CacheTTL is used
sed -n '16,100p' node/l1sequencer/verifier.go
```

Repository: morph-l2/morph
Length of output: 2275

🏁 Script executed:

```shell
# Continue reading the file to see IsSequencer and other methods
sed -n '100,200p' node/l1sequencer/verifier.go
```

Repository: morph-l2/morph
Length of output: 40

🏁 Script executed:

```shell
# Check if there are any related config/setup files that might indicate testing vs production
find . -path ./vendor -prune -o -type f -name "*sequencer*" -print | grep -E "\.(go|toml|yaml|json)$" | head -20
```

Repository: morph-l2/morph
Length of output: 224

🏁 Script executed:

```shell
# Look for any initialization or configuration that might override the TTL
rg "SequencerVerifier|NewSequencerVerifier" -A 5 --type go | head -60
```

Repository: morph-l2/morph
Length of output: 3418

🏁 Script executed:

```shell
# Check if there are test files that might indicate this is temporary
find . -type f -name "*sequencer*test*.go" -o -name "*test*sequencer*.go" | head -10
```

Repository: morph-l2/morph
Length of output: 40

🏁 Script executed:

```shell
# Look for any config structures that might allow TTL override
rg "type.*Config" -A 10 node/l1sequencer/ --type go
```

Repository: morph-l2/morph
Length of output: 40

🏁 Script executed:

```shell
# Check the full git history of this file to understand the change
git log --oneline -- node/l1sequencer/verifier.go 2>/dev/null | head -10 || echo "No git history available"
```

Repository: morph-l2/morph
Length of output: 103

🏁 Script executed:

```shell
# Look at where SequencerVerifier is instantiated to see if TTL can be configured
rg "NewSequencerVerifier" -B 5 -A 5 --type go
```

Repository: morph-l2/morph
Length of output: 1403
Fix CacheTTL value before production deployment.
The CacheTTL constant is set to 10 seconds with the 30-minute value commented out. This appears to be test code from the test_3_13 branch that was merged into production. The 10-second TTL will trigger frequent L1 RPC calls, especially problematic with the double-fetch pattern in IsSequencer() where the cache can be refreshed twice per call (on expiry + on cache miss). Either restore the 30-minute TTL or make the TTL configurable at initialization time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@node/l1sequencer/verifier.go` around lines 16 - 20, The CacheTTL constant in
verifier.go is set to 10 seconds (CacheTTL) which is a test value; restore it to
a production-safe default (30 * time.Minute) or make it configurable during
initialization and use that configurable value where the cache is constructed so
IsSequencer()'s double-fetch won't cause excessive L1 RPCs; update the CacheTTL
constant or add a constructor parameter/field (e.g., sequencer verifier
init/config) and ensure IsSequencer() reads the configured TTL instead of the
hard-coded 10s.
node/types/retryable_client.go
Outdated

```go
// AssembleL2BlockV2 assembles a L2 block based on parent hash.
func (rc *RetryableClient) AssembleL2BlockV2(ctx context.Context, parentHash common.Hash, transactions eth.Transactions) (ret *catalyst.ExecutableL2Data, err error) {
	if retryErr := backoff.Retry(func() error {
		resp, respErr := rc.authClient.AssembleL2BlockV2(ctx, parentHash, transactions)
		if respErr != nil {
			rc.logger.Info("failed to AssembleL2BlockV2", "error", respErr)
			if retryableError(respErr) {
				return respErr
			}
			err = respErr
		}
		ret = resp
		return nil
	}, rc.b); retryErr != nil {
		return nil, retryErr
	}
	return
```
Bug: AssembleL2BlockV2 bypasses client switching logic.
AssembleL2BlockV2 uses rc.authClient directly instead of rc.aClient(). This bypasses the client switching mechanism and could cause block assembly to use the wrong client after a switch.
🐛 Proposed fix

```diff
 func (rc *RetryableClient) AssembleL2BlockV2(ctx context.Context, parentHash common.Hash, transactions eth.Transactions) (ret *catalyst.ExecutableL2Data, err error) {
 	if retryErr := backoff.Retry(func() error {
-		resp, respErr := rc.authClient.AssembleL2BlockV2(ctx, parentHash, transactions)
+		resp, respErr := rc.aClient().AssembleL2BlockV2(ctx, parentHash, transactions)
 		if respErr != nil {
 			rc.logger.Info("failed to AssembleL2BlockV2", "error", respErr)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@node/types/retryable_client.go` around lines 532 - 548, AssembleL2BlockV2 is
calling rc.authClient directly which bypasses the client-switching logic; change
the call to use the accessor method rc.aClient() (or the equivalent
client-returning helper used elsewhere) when invoking AssembleL2BlockV2 so the
retry logic uses the current active client; update the line that calls
rc.authClient.AssembleL2BlockV2(...) to call rc.aClient().AssembleL2BlockV2(...)
and ensure error handling and variable names (resp, respErr, retryableError,
ret, err) remain consistent with the surrounding backoff.Retry block.
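The accessor pattern the fix depends on can be sketched as follows. This is a guess at the shape of the switching logic, not the real RetryableClient; only the `aClient()`-versus-direct-field distinction is taken from the review, and the field and method names here are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// client is a stand-in for the engine-API client interface.
type client struct{ name string }

// retryableClient keeps a primary and a backup client; aClient returns
// whichever is currently active. A method that reads rc.authClient directly
// pins every request to the primary and defeats the switch.
type retryableClient struct {
	mu         sync.RWMutex
	authClient *client
	backup     *client
	useBackup  bool
}

func (rc *retryableClient) aClient() *client {
	rc.mu.RLock()
	defer rc.mu.RUnlock()
	if rc.useBackup {
		return rc.backup
	}
	return rc.authClient
}

func (rc *retryableClient) switchToBackup() {
	rc.mu.Lock()
	defer rc.mu.Unlock()
	rc.useBackup = true
}

func main() {
	rc := &retryableClient{authClient: &client{"primary"}, backup: &client{"backup"}}
	fmt.Println(rc.aClient().name) // primary
	rc.switchToBackup()
	fmt.Println(rc.aClient().name) // backup
	// Direct field access still returns the primary — the bug the review flags.
	fmt.Println(rc.authClient.name) // primary
}
```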
```shell
wait_for_block() {
    local target_height=$1
    local rpc_url="${2:-$L2_RPC}"

    log_info "Waiting for block $target_height..."
    while true; do
        local current=$(get_block_number "$rpc_url")
        if [ "$current" -ge "$target_height" ]; then
            log_success "Reached block $current"
            return 0
        fi
        echo -ne "\r  Current block: $current / $target_height"
        sleep 2
    done
}
```
Add timeout to wait_for_block to prevent indefinite hangs.
The while true loop will run forever if the target block is never reached (e.g., due to network issues or node crash). Add a timeout mechanism similar to wait_for_rpc.
🛡️ Proposed fix

```diff
 wait_for_block() {
     local target_height=$1
     local rpc_url="${2:-$L2_RPC}"
+    local max_wait=${3:-600}  # 10 minute default
+    local waited=0

     log_info "Waiting for block $target_height..."
-    while true; do
-        local current=$(get_block_number "$rpc_url")
+    while [ $waited -lt $max_wait ]; do
+        local current
+        current=$(get_block_number "$rpc_url")
         if [ "$current" -ge "$target_height" ]; then
             log_success "Reached block $current"
             return 0
         fi
         echo -ne "\r  Current block: $current / $target_height"
         sleep 2
+        waited=$((waited + 2))
     done
+    log_error "Timeout waiting for block $target_height"
+    return 1
 }
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 68-68: Declare and assign separately to avoid masking return values.
(SC2155)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ops/docker-sequencer-test/run-test.sh` around lines 62 - 76, The
wait_for_block function currently loops forever; modify wait_for_block to accept
or derive a timeout (like wait_for_rpc) and track start time, then inside the
loop compute elapsed time and break with failure if elapsed >= timeout; update
log messages and the return code so it returns non-zero on timeout and logs an
error (use the same style as wait_for_rpc), referencing the wait_for_block
function, the target_height/local current variable checks, and ensure sleep and
echo behavior remain but are guarded by the timeout check.
```shell
L2_RPC="${L2_RPC:-http://morph-geth-0:8545}"
INTERVAL="${TX_INTERVAL:-5}" # seconds between txs
PRIVATE_KEY="${PRIVATE_KEY:-0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80}"
```
Unused PRIVATE_KEY variable.
PRIVATE_KEY is defined but never used. The transaction at line 56 uses eth_sendTransaction with only the from address, relying on the node's unlocked account rather than signing with this key. Either remove this variable or use eth_sendRawTransaction with a signed transaction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ops/docker-sequencer-test/scripts/tx-generator.sh` at line 9, The script
defines PRIVATE_KEY but never uses it; either remove the PRIVATE_KEY variable or
change the transaction flow to sign and send a raw transaction instead of
calling eth_sendTransaction. If you choose to use it, update the send step that
currently calls eth_sendTransaction (around the eth_sendTransaction usage) to
create and sign a transaction with PRIVATE_KEY and submit via
eth_sendRawTransaction (i.e., build the tx, sign with the private key, and send
the resulting raw hex). If you choose to remove it, delete the PRIVATE_KEY
declaration and any related unused references so the script consistently relies
on the node's unlocked account.
```shell
NONCE_HEX=$(get_nonce)
NONCE=$((NONCE_HEX))
TX_COUNT=0
```
Add validation for empty/invalid nonce response.
If get_nonce returns an empty string or non-hex value, the arithmetic expansion $((NONCE_HEX)) will silently evaluate to 0, potentially causing nonce collisions. Add a check or default.
🛡️ Proposed fix

```diff
 NONCE_HEX=$(get_nonce)
+if [ -z "$NONCE_HEX" ]; then
+    echo "Failed to get initial nonce, defaulting to 0"
+    NONCE_HEX="0x0"
+fi
 NONCE=$((NONCE_HEX))
 TX_COUNT=0
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ops/docker-sequencer-test/scripts/tx-generator.sh` around lines 42 - 44, The
script currently sets NONCE_HEX=$(get_nonce) and then does arithmetic expansion
NONCE=$((NONCE_HEX)) which will treat empty or non-hex as 0; add validation
after calling get_nonce to ensure NONCE_HEX is non-empty and matches a hex
regexp (e.g. ^[0-9a-fA-F]+$) before converting; if invalid, either retry
get_nonce or exit with an error to avoid nonce collisions; when valid, convert
explicitly (use base-16 conversion semantics) and then assign NONCE and continue
(references: get_nonce, NONCE_HEX, NONCE).
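The validation the prompt describes, sketched in Go for clarity. The real script is shell; `parseNonce` is an illustrative helper, not part of the repository. The point is that an empty or non-hex quantity should be an explicit error rather than silently becoming zero.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// parseNonce validates an eth_getTransactionCount-style quantity ("0x1a")
// and converts it to a uint64, instead of letting an empty or garbage value
// collapse to zero the way shell arithmetic expansion does.
func parseNonce(s string) (uint64, error) {
	s = strings.TrimSpace(s)
	if !strings.HasPrefix(s, "0x") || len(s) == 2 {
		return 0, errors.New("nonce is not a 0x-prefixed hex quantity")
	}
	n, err := strconv.ParseUint(s[2:], 16, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid nonce %q: %w", s, err)
	}
	return n, nil
}

func main() {
	n, err := parseNonce("0x1a")
	fmt.Println(n, err) // 26 <nil>
	_, err = parseNonce("")
	fmt.Println(err != nil) // true
}
```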
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@contracts/go.mod`:
- Line 9: Update the contracts module to account for go-ethereum commit
a805e930447c breaking changes: change any receipt encoding/decoding or storage
logic that expects Receipt.Reference as *common.Reference to handle []byte and
treat Version, Reference, Memo as optional (nil/empty) when
serializing/deserializing; update reorg handlers (any functions named like
handleReorg, processReorg, or index cleanup routines) to assume reference index
entries may be deleted for reorged transactions and avoid relying on stale
indexes; modify code using MorphTx.copy() (and any callers that access
MorphTx.FeeLimit) to handle FeeLimit possibly being nil after copy (add nil
checks or initialize FeeLimit where needed); update imports/usages of EVM trace
loggers and EIP-712 types to their new packages and rename opcode constants
(replace SHA3→KECCAK256, SUICIDE→SELFDESTRUCT) wherever referenced. Ensure all
changes touch the symbols Receipt, Reference, MorphTx.copy / MorphTx.FeeLimit,
and reorg/index handler functions.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (9)

- `bindings/go.sum` is excluded by `!**/*.sum`
- `contracts/go.sum` is excluded by `!**/*.sum`
- `go.work.sum` is excluded by `!**/*.sum`
- `node/go.sum` is excluded by `!**/*.sum`
- `ops/l2-genesis/go.sum` is excluded by `!**/*.sum`
- `ops/tools/go.sum` is excluded by `!**/*.sum`
- `oracle/go.sum` is excluded by `!**/*.sum`
- `token-price-oracle/go.sum` is excluded by `!**/*.sum`
- `tx-submitter/go.sum` is excluded by `!**/*.sum`

📒 Files selected for processing (10)

- `Makefile`
- `bindings/go.mod`
- `contracts/go.mod`
- `go-ethereum`
- `node/go.mod`
- `ops/l2-genesis/go.mod`
- `ops/tools/go.mod`
- `oracle/go.mod`
- `token-price-oracle/go.mod`
- `tx-submitter/go.mod`
✅ Files skipped from review due to trivial changes (1)
- go-ethereum
🚧 Files skipped from review as they are similar to previous changes (6)
- ops/l2-genesis/go.mod
- Makefile
- oracle/go.mod
- token-price-oracle/go.mod
- ops/tools/go.mod
- bindings/go.mod
```diff
 require (
 	github.com/iden3/go-iden3-crypto v0.0.16
-	github.com/morph-l2/go-ethereum v1.10.14-0.20260211074551-4f0f6e6bd141
+	github.com/morph-l2/go-ethereum v1.10.14-0.20260227035937-a805e930447c
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify the go-ethereum commit exists and check for breaking changes
echo "Checking if commit a805e930447c exists in morph-l2/go-ethereum..."
gh api repos/morph-l2/go-ethereum/commits/a805e930447c --jq '.sha, .commit.message' 2>&1 || echo "Commit check failed or not found"
echo -e "\n--- Checking commit details ---"
gh api repos/morph-l2/go-ethereum/commits/a805e930447c --jq '.commit.author.date, .commit.message' 2>&1 || echo "Unable to retrieve commit details"
```

Repository: morph-l2/morph
Length of output: 509
🌐 Web query:
morph-l2/go-ethereum commit a805e930447c breaking changes
💡 Result:
Commit a805e930447c (“fix sth”) in morph-l2/go-ethereum introduces these potentially breaking changes (mainly for downstream code and persisted-data assumptions):
- Receipt RLP schema change (Morph-specific fields): In `storedReceiptRLP` (used for receipt storage/encoding), `Version`, `Reference`, and `Memo` were changed to `rlp:"optional"`, and `Reference`'s type changed from `*common.Reference` to `[]byte` to avoid decode issues when empty. Any code depending on the old field types / always-present fields (or decoding receipts with the old exact struct layout) may break. [1]
- Reorg behavior change (index cleanup): During chain reorgs, the code now additionally deletes reference index entries for "truly deleted" `MorphTx` transactions by iterating the old chain to get block context (timestamp, tx index). If tooling relied on the previous (leakier) index behavior, results can change. [1]
- `MorphTx.copy()` semantics for `FeeLimit`: The copy logic no longer pre-allocates `FeeLimit`; instead it allocates it only when non-nil (`cpy.FeeLimit = new(big.Int).Set(tx.FeeLimit)`). Code that assumed `FeeLimit` is always non-nil after copying may break. [1]
🌐 Web query:
go-ethereum v1.10.14 breaking changes migration
💡 Result:
go-ethereum (geth) v1.10.14: breaking changes / migration notes

- "Fast sync" is removed
  - Impact: nodes/configs using `--syncmode fast` will fail to start or must be changed.
  - Migration: switch to snap sync (default in the 1.10.x line) or full sync (e.g., `--syncmode=snap` or `--syncmode=full`). [2]
  - This removal is also called out by downstream operators as a startup-breaking config change. [3]
- Go library / internal API moves that can break builds. If you embed geth as a library or compile tools against its packages, v1.10.14 includes package/layout changes:
  - Built-in EVM trace loggers moved from `core/vm` to a dedicated package (update imports/usages accordingly). [1]
  - EIP-712 typed-data structs moved from `signer/core` to `signer/core/apitypes` (update imports). [1]
- Opcode name changes (source-compat implications for tooling)
  - Internal opcode names were modernized (e.g., SHA3 → KECCAK256, SUICIDE → SELFDESTRUCT). If your tooling matches on these names (debuggers/tracers/log parsers), update expectations. [1]

Primary reference: the v1.10.14 release notes ("Hourglass Nebula"). [1]
Update contracts module for go-ethereum breaking changes in commit a805e930447c.
This Morph-specific commit introduces several breaking changes that will require code updates:
- Receipt RLP schema: `Reference` field type changed from `*common.Reference` to `[]byte`, and `Version`, `Reference`, `Memo` are now optional. Update any receipt encoding/decoding or storage logic.
- Chain reorg behavior: Reference index cleanup now deletes index entries for reorg'd transactions. Update reorg handlers if they depend on index state.
- `MorphTx.copy()` semantics: `FeeLimit` is no longer pre-allocated during copy. Update code that assumes `FeeLimit` is always non-nil after copying.
- go-ethereum v1.10.14 changes: Fast sync is removed (migrate to snap sync), EVM trace loggers and EIP-712 types have moved packages (update imports), opcode names changed (SHA3 → KECCAK256, SUICIDE → SELFDESTRUCT).
Review and update the contracts module to handle these changes before merging.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/go.mod` at line 9, Update the contracts module to account for
go-ethereum commit a805e930447c breaking changes: change any receipt
encoding/decoding or storage logic that expects Receipt.Reference as
*common.Reference to handle []byte and treat Version, Reference, Memo as
optional (nil/empty) when serializing/deserializing; update reorg handlers (any
functions named like handleReorg, processReorg, or index cleanup routines) to
assume reference index entries may be deleted for reorged transactions and avoid
relying on stale indexes; modify code using MorphTx.copy() (and any callers that
access MorphTx.FeeLimit) to handle FeeLimit possibly being nil after copy (add
nil checks or initialize FeeLimit where needed); update imports/usages of EVM
trace loggers and EIP-712 types to their new packages and rename opcode
constants (replace SHA3→KECCAK256, SUICIDE→SELFDESTRUCT) wherever referenced.
Ensure all changes touch the symbols Receipt, Reference, MorphTx.copy /
MorphTx.FeeLimit, and reorg/index handler functions.
Summary by CodeRabbit
New Features
Chores