
Ensure fork suppressor only triggers for includable seals#8470

Merged
jordanschalm merged 5 commits into master from jord/execution-fork-suppressor-fix
Mar 4, 2026

Conversation


@jordanschalm jordanschalm commented Feb 26, 2026

Context

In this recent incident, sealing halted because a consensus safety mechanism, intended to prevent sealing inconsistent results, was triggered. It fired for a forked result that had only one EN and one VN attesting to it.

However, we require two ENs attesting to a result in order to seal it. Conceptually, the safety mechanism should only trigger if we have two inconsistent includable seals; in this case it triggered with one includable seal (the correct result, which 4/5 ENs attested to) and one almost-includable seal (the fork result, which 1/5 ENs and one VN attested to).

The "includable" property is enforced by the IncorporatedResultSeal mempool: it only returns seals for which there are two execution receipts.

The reason for this behaviour is that the two read methods of the ExecForkSuppressor behave differently:

  • All only checks seals returned from the underlying IncorporatedResultSeal mempool.
  • Get compares the requested seal (from the underlying IncorporatedResultSeal mempool) to an internal cache stored locally in the ExecForkSuppressor. This internal cache is necessary because Get queries a single seal by ID, but we need to compare that seal to all the other seals. However, the internal cache includes all seals ever added to the mempool, including those with fewer than two execution receipts. As a result, Get behaves differently from All and triggers the consensus safety mechanism more liberally.

The codepath that triggered the consensus safety mechanism during the incident used Get, which is why it triggered with only one EN producing a forked result/receipt.
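
To make the asymmetry concrete, here is a minimal, self-contained Go sketch. All names and types (seal, wrappedPool, suppressor, getTriggersFork) are hypothetical simplifications for illustration, not the actual flow-go code; the only point is that the pre-fix Get path compares against an internal cache that still contains seals the wrapped mempool would refuse to return.

package main

import "fmt"

// seal is a hypothetical, simplified stand-in for flow.IncorporatedResultSeal.
type seal struct {
    id       string
    blockID  string
    resultID string
    receipts int // number of execution receipts attesting to the result
}

// wrappedPool mimics the IncorporatedResultSeal mempool: it stores every added
// seal, but only returns seals backed by at least two execution receipts.
type wrappedPool struct{ byID map[string]seal }

func (p *wrappedPool) Get(id string) (seal, bool) {
    s, ok := p.byID[id]
    if !ok || s.receipts < 2 {
        return seal{}, false // non-includable seals stay invisible to readers
    }
    return s, true
}

// suppressor mimics the pre-fix ExecForkSuppressor read path for Get: the
// requested seal is compared against every cached seal for the same block,
// including seals the wrapped pool would never return.
type suppressor struct {
    pool  *wrappedPool
    cache map[string][]seal // every seal ever added, keyed by block ID
}

func (s *suppressor) getTriggersFork(id string) bool {
    requested, ok := s.pool.Get(id)
    if !ok {
        return false
    }
    for _, other := range s.cache[requested.blockID] {
        if other.id != requested.id && other.resultID != requested.resultID {
            return true // pre-fix: fires even when `other` has only one receipt
        }
    }
    return false
}

func main() {
    correct := seal{id: "s1", blockID: "B", resultID: "r-correct", receipts: 4}
    forked := seal{id: "s2", blockID: "B", resultID: "r-fork", receipts: 1}
    sup := &suppressor{
        pool:  &wrappedPool{byID: map[string]seal{"s1": correct, "s2": forked}},
        cache: map[string][]seal{"B": {correct, forked}},
    }
    fmt.Println(sup.getTriggersFork("s1")) // true: a single forked receipt trips the mechanism
}

Running this prints true: the cached fork seal, backed by a single receipt, is enough to trip the mechanism, which mirrors what happened in the incident.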

Changes

This PR aims to make minimal changes to the existing logic so that ExecForkSuppressor only triggers the safety mechanism for includable seals:

  • No change to All (already works as desired)
  • Get now filters the internal cache by checking whether the underlying IncorporatedResultSeal mempool would return the seal. If the mempool returns the seal by ID, that indicates there are two execution receipts for the seal (see the sketch below).
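
A rough sketch of the fixed read path, reusing the same hypothetical simplified types as the sketch above (again, not the actual flow-go implementation): the secondary index now holds only candidate IDs, and each candidate is resolved through the wrapped mempool before it can participate in conflict detection.

package sketch

// Same hypothetical simplifications as the earlier sketch.
type seal struct {
    id, blockID, resultID string
    receipts              int
}

type wrappedPool struct{ byID map[string]seal }

// Get only returns includable seals, i.e. seals with at least two receipts.
func (p *wrappedPool) Get(id string) (seal, bool) {
    s, ok := p.byID[id]
    if !ok || s.receipts < 2 {
        return seal{}, false
    }
    return s, true
}

// suppressor sketches the post-fix read path: the secondary index stores only
// candidate IDs per block, and each candidate is re-resolved through the
// wrapped pool at query time before conflict detection runs.
type suppressor struct {
    pool          *wrappedPool
    sealsForBlock map[string][]string // block ID -> candidate seal IDs
}

func (s *suppressor) getTriggersFork(id string) bool {
    requested, ok := s.pool.Get(id)
    if !ok {
        return false
    }
    for _, candidateID := range s.sealsForBlock[requested.blockID] {
        candidate, includable := s.pool.Get(candidateID)
        if !includable {
            continue // the fix: non-includable candidates cannot trigger the mechanism
        }
        if candidate.id != requested.id && candidate.resultID != requested.resultID {
            return true // two includable seals with conflicting results for one block
        }
    }
    return false
}

With the same example as before (a correct result backed by 4 receipts and a forked result backed by 1), getTriggersFork now returns false, matching the intended behaviour of only triggering on two includable, conflicting seals.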

Summary by CodeRabbit

  • Bug Fixes
    • Reworked fork-suppression and conflict-detection to consider only actually retrievable results, with stricter query-time fork detection and height-based pruning to reduce stale entries and false positives.
  • Documentation
    • Clarified seal vs. incorporated-result semantics and explained deferred fork-detection behavior.
  • Tests
    • Updated tests to reflect additional retrieval checks during conflict filtering.

@jordanschalm jordanschalm requested a review from a team as a code owner February 26, 2026 20:08

github-actions Bot commented Feb 26, 2026

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Scanned Files

None


coderabbitai Bot commented Feb 26, 2026

📝 Walkthrough


ExecForkSuppressor replaces per-block seal sets with per-block sets of IncorporatedResult IDs, defers includability checks to the wrapped mempool at query time, adds height-based pruning, and reworks conflict detection to translate IDs to includable seals, persist fork evidence, and invoke a callback when forks are detected.
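
For orientation, a rough sketch of the reworked internal state, using the field names mentioned in this walkthrough but with hypothetical simplified types (the real declarations use flow.Identifier and the project's mempool interfaces):

package sketch

// identifier is a hypothetical stand-in for flow.Identifier.
type identifier string

// potentiallySealableResults holds IDs of IncorporatedResults, not seal values,
// so nothing is cached that could be read without re-checking the wrapped pool.
type potentiallySealableResults map[identifier]struct{}

// includableSeals is a stand-in for the wrapped IncorporatedResultSeal mempool,
// which only returns seals backed by at least two execution receipts.
type includableSeals interface {
    Get(id identifier) (any, bool)
}

type execForkSuppressor struct {
    seals includableSeals // single source of truth for includability

    // secondary index: block ID -> IncorporatedResult IDs seen for that block;
    // candidates are translated back to seals via `seals` at query time
    sealsForBlock map[identifier]potentiallySealableResults

    // height-based pruning support
    byHeight     map[uint64]map[identifier]struct{} // height -> block IDs
    lowestHeight uint64
}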

Changes

  • ExecForkSuppressor core (module/mempool/consensus/exec_fork_suppressor.go): Replaced per-block sealSet with potentiallySealableResults (set of IncorporatedResult IDs); added byHeight and lowestHeight; updated constructor and Clear; modified Add/Get/All to record IDs, defer includability checks to the wrapped mempool, and translate IDs to seals at query time; updated pruning logic. Reworked conflict detection and fork-handling flow (detect, clear, persist evidence, callback).
  • Unit tests (module/mempool/consensus/exec_fork_suppressor_test.go): Adjusted expectations to account for extra Get calls to the wrapped mempool when filtering candidates by IncorporatedResult ID during query-time includability resolution.
  • Mempool seal comments (module/mempool/incorporated_result_seals.go, module/mempool/stdmap/incorporated_result_seals.go, module/mempool/consensus/incorporated_result_seals.go): Expanded and clarified doc-comments (Add semantics, receipt wording). No behavior changes.
  • Go module (go.mod): Minor manifest edits.

Sequence Diagram(s)

sequenceDiagram
    participant Suppressor as ExecForkSuppressor
    participant Pool as WrappedMempool
    participant Store as Persistence
    participant CB as ForkCallback

    Suppressor->>Pool: Add(seal) — forward to wrapped pool
    Pool-->>Suppressor: Accept/Reject
    Suppressor->>Suppressor: record IncorporatedResult ID in sealsForBlock and byHeight

    Suppressor->>Pool: Get(query for block) — request candidate seals
    alt multiple IncorporatedResult IDs present
      note right of Suppressor: translate IDs -> candidate seals
      Suppressor->>Pool: Get(seal by ID) per-ID to check includability
      Pool-->>Suppressor: return includable seals
    end
    Suppressor->>Suppressor: filterConflictingSeals(includable seals)
    alt conflict detected
      Suppressor->>Store: persist fork evidence
      Store-->>Suppressor: ack
      Suppressor->>CB: execForkDetected callback
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰
I hopped through maps and ID-lined trails,
I asked the pool which seals set sails,
I pruned by height and checked each clue,
Then thumped a drum when forks came through —
A carrot for every race that nails!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 75.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately and concisely describes the main change: the PR ensures the fork suppressor only triggers for includable seals, which is the central bug fix addressing the root cause of sealing halts.





@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
module/mempool/consensus/exec_fork_suppressor_test.go (1)

237-241: Add a regression case for “non-includable conflicting seal” on Get.

These updates verify extra Get invocations, but they don’t explicitly assert the core fix path. Please add a case where a conflicting seal exists in sealsForBlock but wrappedMempool.Get(conflictingID) returns false, and verify OnExecFork is not triggered.

Also applies to: 278-281

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor_test.go` around lines 237 -
241, Add a regression sub-case that simulates a non-includable conflicting seal
by stubbing wrappedMempool.Get to return (nil, false) for the conflicting seal
ID and ensuring OnExecFork is not invoked; specifically, in the test around the
existing wrappedMempool.On("Get", conflictingSeal.IncorporatedResultID()) calls,
add an alternative expectation where wrappedMempool.On("Get",
conflictingSeal.IncorporatedResultID()).Return(nil, false).Once() (or use the
mock's Once/Times appropriate) for the path where conflictingSeal appears in
sealsForBlock but is not present in the mempool, then assert that the
ExecForkSuppressor (or the mock handling OnExecFork) does not receive an
OnExecFork call for that block/height. Ensure you add the same case for the
second location noted (lines ~278-281) so both test paths cover the regression.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6263ac9 and 5c64fe2.

📒 Files selected for processing (2)
  • module/mempool/consensus/exec_fork_suppressor.go
  • module/mempool/consensus/exec_fork_suppressor_test.go


@AlexHentschel AlexHentschel left a comment


I agree with the direction of your fix. However, I find that the change adds too much subtle complexity, which can easily lead to further bugs being introduced during future maintenance.

I think a conceptual subtlety with the current code is the following:

  • // Internally indexes every added seal by blockID. Expects that underlying mempool never eject items.
  • // STEP 2: add newSeal to the wrapped mempool
    added, err := s.seals.Add(newSeal) // internally de-duplicates
    if err != nil {
        return added, fmt.Errorf("failed to add seal to wrapped mempool: %w", err)
    }
    if !added { // if underlying mempool did not accept the seal => nothing to do anymore
        return false, nil
    }
    // STEP 3: add newSeal to secondary index of this wrapper
    // CAUTION: We expect that underlying mempool NEVER ejects seals because it breaks liveness.
    blockSeals, found := s.sealsForBlock[blockID]

This documentation is ambiguous at best and misleading at worst, given the current implementation:

  • When I read that the "underlying mempool NEVER ejects seals" it is very intuitive to assume: If I put a seal into the mempool, then I can retrieve it from the mempool.
  • And while we do not eject seals from the mempool in the strict sense, I feel we are practically behaving in an unintuitive way:
    • if a seal is hidden because it has only one EN committing to it, this is functionally indistinguishable from the seal being ejected (unless a second EN commits to the result). And this is exactly the detail that most people will likely intuitively get wrong.

We also didn't understand the source of the error while firefighting. So to me, this indicates that there is more to this issue than just fixing the edge case (potentially increasing the complexity further).

Atm, I am not clear what the best approach is. On the one hand, I'd prefer to have more intuitive code with lower maintenance risk. Likely, verification and sealing (incl. extensive checking mode) is not going to be done anytime soon, so we'll be running with those disaster prevention mechanisms for quite some time. On the other hand, they are just substitutes for a mature solution that will eventually hopefully no longer be needed, and we are really tight on dev time right now. So this might be tech debt worth taking on.

Will experiment a couple hours, at least expanding on the documentation.

Update

@jordanschalm PR (largely documentation) and some additional guardrails: #8472 (merged)


codecov-commenter commented Feb 26, 2026

Codecov Report

❌ Patch coverage is 90.90909% with 1 line in your changes missing coverage. Please review.

Files with missing lines: module/mempool/consensus/exec_fork_suppressor.go (patch 90.90%; 0 lines missing, 1 partial ⚠️)


AlexHentschel and others added 2 commits February 26, 2026 23:18
  • the index `sealSet` stores ID of incorporated results only, not seal objects, so there is no cached seal value to accidentally read without re-validating through the wrapped pool; the type name and its doc block make the superset semantics explicit.

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
module/mempool/consensus/exec_fork_suppressor.go (2)

320-352: ⚠️ Potential issue | 🟡 Minor

lowestHeight not updated after pruning non-empty state.

When sealsForBlock is non-empty, the method prunes entries but doesn't update lowestHeight. This causes:

  1. Suboptimal early rejection in Add() (line 145) - seals below the pruned threshold still pass through to the wrapped pool
  2. Incorrect range calculation for the optimization at line 339 in subsequent prune calls

The wrapped pool enforces correctness, but the local state becomes inconsistent.

Proposed fix
 	} else {
 		for h := s.lowestHeight; h < height; h++ {
 			s.removeByHeight(h)
 		}
 	}
+	s.lowestHeight = height

 	return nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor.go` around lines 320 - 352,
PruneUpToHeight currently prunes entries when s.sealsForBlock is non-empty but
never updates s.lowestHeight, leaving local state inconsistent; after acquiring
the lock and performing the removals via s.removeByHeight (the branches
iterating s.byHeight or from s.lowestHeight), set s.lowestHeight = height
(ensure this happens while the mutex is held) so subsequent Add() and future
PruneUpToHeight() calls use the updated lowestHeight for correct early rejection
and range calculations.

311-318: ⚠️ Potential issue | 🟡 Minor

byHeight index should be cleared for consistency.

The Clear() method resets sealsForBlock but doesn't clear byHeight. While this may not cause immediate issues when called during fork detection (since execForkDetected prevents further Add() calls), the interface contract promises to remove all entities. For completeness and to prevent stale state if Clear() is called in other contexts, byHeight should also be reset.

Proposed fix
 func (s *ExecForkSuppressor) Clear() {
 	s.mutex.Lock()
 	defer s.mutex.Unlock()
 	s.sealsForBlock = make(map[flow.Identifier]potentiallySealableResults)
+	s.byHeight = make(map[uint64]map[flow.Identifier]struct{})
 	s.seals.Clear()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor.go` around lines 311 - 318,
Clear() currently resets s.sealsForBlock and calls s.seals.Clear() but leaves
the byHeight index populated; update ExecForkSuppressor.Clear to also reset the
byHeight map (e.g., s.byHeight = make(map[<keyType>]<valueType>) or the existing
zeroing pattern used elsewhere) while holding s.mutex so all internal state
(sealsForBlock, byHeight, and s.seals) is cleared consistently to honor the
“remove all entities” contract.
🧹 Nitpick comments (1)
module/mempool/consensus/exec_fork_suppressor.go (1)

336-337: Minor documentation typo.

"range to prune" appears twice in the comment.

Proposed fix
-	// Optimization: if there are less height in the index than the height range to prune,
-	// range to prune, then just go through each seal.
+	// Optimization: if there are fewer heights in the index than the height range to prune,
+	// then just iterate through each indexed height.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor.go` around lines 336 - 337, Fix
the duplicated phrase in the comment in exec_fork_suppressor.go: replace the
line "// Optimization: if there are less height in the index than the height
range to prune, // range to prune, then just go through each seal." with a
single corrected sentence (also fix grammar) such as "// Optimization: if there
are fewer heights in the index than the height range to prune, then just go
through each seal." so the duplicate "range to prune" is removed and wording is
clearer.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5c64fe2 and f24cb27.

📒 Files selected for processing (4)
  • module/mempool/consensus/exec_fork_suppressor.go
  • module/mempool/consensus/incorporated_result_seals.go
  • module/mempool/incorporated_result_seals.go
  • module/mempool/stdmap/incorporated_result_seals.go
✅ Files skipped from review due to trivial changes (2)
  • module/mempool/stdmap/incorporated_result_seals.go
  • module/mempool/consensus/incorporated_result_seals.go

@AlexHentschel

Thanks for figuring out what the problem was. Great work 🙇


@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
module/mempool/consensus/exec_fork_suppressor.go (2)

309-341: ⚠️ Potential issue | 🟠 Major

lowestHeight is not updated after pruning in the normal case.

When sealsForBlock is non-empty, the pruning loop executes but lowestHeight is never updated to height. This causes:

  1. Future PruneUpToHeight calls to iterate over already-pruned heights unnecessarily
  2. The Add method's height check (line 150) uses a stale lowestHeight, potentially allowing seals at heights that should have been rejected
Proposed fix to update lowestHeight after pruning
 	} else {
 		for h := s.lowestHeight; h < height; h++ {
 			s.removeByHeight(h)
 		}
 	}
+	s.lowestHeight = height

 	return nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor.go` around lines 309 - 341, The
PruneUpToHeight method (ExecForkSuppressor) never updates s.lowestHeight when
s.sealsForBlock is non-empty, leaving a stale lowestHeight after you remove
heights; after the pruning loop(s) (both the uint64(len(s.byHeight)) branch and
the for h := s.lowestHeight branch) set s.lowestHeight = height while still
holding s.mutex so future PruneUpToHeight and Add calls use the correct
lowestHeight; ensure this assignment happens before the function returns.

300-307: ⚠️ Potential issue | 🟡 Minor

byHeight is not cleared, leading to inconsistent state.

Clear() resets sealsForBlock but leaves byHeight intact. This creates an inconsistency: byHeight maps heights to block IDs that no longer exist in sealsForBlock. While this may not cause immediate failures (the system halts on fork detection), it could cause issues if the state is later inspected or if the component is reused.

Proposed fix to reset all internal state
 func (s *ExecForkSuppressor) Clear() {
 	s.mutex.Lock()
 	defer s.mutex.Unlock()
 	s.sealsForBlock = make(map[flow.Identifier]potentiallySealableResults)
+	s.byHeight = make(map[uint64]map[flow.Identifier]struct{})
+	s.lowestHeight = 0
 	s.seals.Clear()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor.go` around lines 300 - 307,
Clear() currently resets s.sealsForBlock and s.seals but leaves s.byHeight
intact, causing stale mappings; update ExecForkSuppressor.Clear to also reset
s.byHeight to an empty map (same concrete type as declared on the struct) while
holding the mutex so all internal state is consistently cleared (i.e., add
s.byHeight = make(<the byHeight map type>) before unlocking).
🧹 Nitpick comments (1)
module/mempool/consensus/exec_fork_suppressor.go (1)

237-242: Consider defensive check for missing sealsForBlock entry.

Line 237 assumes sealsForBlock[seal.Seal.BlockID] will always exist when the underlying pool returns a seal. While this is the expected invariant (per comments in Remove), if irIDs is nil, len(irIDs) == 0 silently falls through to the expensive multi-seal path instead of detecting the inconsistency.

For consistency with the fatal check in Remove() (lines 272-275), consider adding a similar guard:

Proposed defensive check
 	irIDs := s.sealsForBlock[seal.Seal.BlockID]
+	if irIDs == nil {
+		s.mutex.RUnlock()
+		s.log.Fatal().Msg("inconsistent state detected: seal not in secondary index")
+	}
 	if len(irIDs) == 1 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/mempool/consensus/exec_fork_suppressor.go` around lines 237 - 242, Add
a defensive existence check for s.sealsForBlock[seal.Seal.BlockID] before
relying on its length: in the function that reads irIDs (where the snippet uses
irIDs := s.sealsForBlock[seal.Seal.BlockID]), verify the map lookup succeeded
(e.g., via the comma-ok form) and if not, release the lock (s.mutex.RUnlock())
and fail consistently with the same fatal/panic behavior used in Remove() to
surface the invariant breach; ensure this uses the same log/panic style and
mentions the BlockID so the inconsistent state is clearly reported.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f24cb27 and bfaf758.

📒 Files selected for processing (1)
  • module/mempool/consensus/exec_fork_suppressor.go


@durkmurder durkmurder left a comment


Great job, really like the documentation changes.

@jordanschalm jordanschalm added this pull request to the merge queue Mar 4, 2026
Merged via the queue into master with commit b55ff64 Mar 4, 2026
61 checks passed
@jordanschalm jordanschalm deleted the jord/execution-fork-suppressor-fix branch March 4, 2026 17:42