Conversation
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
📝 Walkthrough
Migrates many internal counters/flags to typed atomic types, updates tests to use test-scoped contexts (t.Context()/b.Context()), adjusts some concurrency patterns (wg.Go and a loop-capture change), adds a stopCh to WorkerWatcher for shutdown handling, updates CI Go/PHP versions, and removes the CLAUDE.md documentation file.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Test as Test/Caller
participant WW as WorkerWatcher
participant Alloc as Allocator
participant Worker as Worker
participant Log as Logger
Test->>WW: Start (NewSyncWorkerWatcher)
WW->>Alloc: Request allocate loop (ticker)
loop allocation cycle
Alloc->>WW: Allocate worker request
WW->>Worker: Spawn worker
Worker-->>WW: Worker ready / failure
WW->>Log: increment/adjust counts
end
Test->>WW: Destroy()
WW->>WW: send on stopCh
WW->>Alloc: stop allocation ticker / return WatcherStopped
Alloc-->>WW: allocation stopped
WW->>Log: final cleanup
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ❌ 3 failed checks (1 warning, 2 inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 OpenGrep (1.16.2): pool/static_pool/dyn_allocator_test.go
✔ Opengrep OSS: Loading rules from local config...
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
Pull request overview
This PR is a stability-focused cleanup that modernizes the codebase toward newer Go/tooling, tightens shutdown behavior in the worker watcher, and updates concurrency/state tracking to newer atomic types.
Changes:
- Add shutdown signaling to the WorkerWatcher allocation flow and adjust worker-replacement accounting.
- Modernize concurrency/state primitives (e.g., atomic.Bool, atomic.Uint64, atomic.Int64) across pool/watcher/FSM.
- Update tests/benchmarks to use per-test contexts and newer goroutine/WaitGroup patterns (a short illustrative sketch follows).
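For readers skimming the summary, here is a minimal sketch (hypothetical types and test, not code from this PR) of the primitives those bullets refer to: atomic.Bool/atomic.Uint64 state, a test-scoped t.Context(), and the Go 1.25 sync.WaitGroup.Go helper.

```go
package example

import (
	"sync"
	"sync/atomic"
	"testing"
)

// A flag that used to be a mutex-guarded bool becomes an atomic.Bool;
// counters become atomic.Uint64/atomic.Int64 with Load/Store/Add.
type state struct {
	destroyed atomic.Bool
	queued    atomic.Uint64
}

func (s *state) stop()         { s.destroyed.Store(true) }
func (s *state) stopped() bool { return s.destroyed.Load() }

// A test-scoped context plus the Go 1.25 sync.WaitGroup.Go helper replaces
// manual context plumbing and wg.Add(1)/defer wg.Done() bookkeeping.
func TestAtomicState(t *testing.T) {
	s := &state{}
	ctx := t.Context() // canceled automatically when the test finishes

	var wg sync.WaitGroup
	for range 8 {
		wg.Go(func() {
			select {
			case <-ctx.Done():
			default:
				s.queued.Add(1)
			}
		})
	}
	wg.Wait()

	s.stop()
	if !s.stopped() {
		t.Fatal("expected the destroyed flag to be set")
	}
	if got := s.queued.Load(); got != 8 {
		t.Fatalf("expected 8 queued requests, got %d", got)
	}
}
```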
Reviewed changes
Copilot reviewed 23 out of 24 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| worker_watcher/worker_watcher.go | Adds stopCh for stopping allocations during shutdown; adjusts allocation failure accounting and logs. |
| worker_watcher/container/channel/vec.go | Migrates destroy/reset flags to atomic.Bool. |
| worker_watcher/container/channel/vec_test.go | Uses t.Context() in tests. |
| worker/worker.go | Removes a gosec suppression around UnixNano() conversion. |
| worker/worker_test.go | Uses t.Context() in test execution. |
| pool/static_pool/pool.go | Migrates queue counters to typed atomics; tweaks gosec suppression around stream goroutine. |
| pool/static_pool/options.go | Updates queue-size option to use atomic.Uint64.Store. |
| pool/static_pool/debug.go | Mirrors gosec suppression change for stream goroutine. |
| pool/static_pool/supervisor_test.go | Updates contexts and teardown to use t.Context() / t.Cleanup. |
| pool/static_pool/pool_test.go | Updates contexts and teardown; refactors goroutine spawning and some assertions. |
| pool/static_pool/fuzz_test.go | Updates fuzz harness to use f.Context() and cleanup. |
| pool/static_pool/dyn_allocator.go | Refactors dynamic allocator loop style and last-allocation timestamp handling. |
| pool/static_pool/dyn_allocator_test.go | Updates contexts and goroutine spawning patterns. |
| pool/allocator.go | Removes loop-variable copies in parallel allocation and cleanup loops. |
| pool/ratelimiter/ratelimiter_test.go | Refactors goroutine spawning to a newer WaitGroup pattern. |
| ipc/socket/socket_test.go | Updates tests/benchmarks to use t.Context() / b.Context() and new goroutine patterns. |
| ipc/socket/socket_spawn_test.go | Updates contexts and goroutine spawning patterns. |
| ipc/pipe/pipe_test.go | Updates tests/benchmarks to use t.Context() / b.Context() and new goroutine patterns. |
| ipc/pipe/pipe_spawn_test.go | Updates contexts and goroutine spawning patterns. |
| fsm/fsm.go | Migrates FSM state/execution counters to typed atomics; adjusts initialization and string default handling. |
| go.mod | Bumps module Go version to 1.26. |
| go.sum | Cleans up sums after dependency/version updates. |
| CLAUDE.md | Removes repository guidance file. |
Codecov Report
❌ Patch coverage is
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master      #42      +/-   ##
==========================================
+ Coverage   70.04%   70.48%   +0.43%
==========================================
  Files          19       19
  Lines        1489     1494       +5
==========================================
+ Hits         1043     1053      +10
+ Misses        391      387       -4
+ Partials       55       54       -1
```
☔ View full report in Codecov by Sentry.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
worker_watcher/container/channel/vec.go (1)
52-75: ⚠️ Potential issue | 🟠 Major
Destroy doesn't wake goroutines already blocked in Pop.
Pop only checks v.destroy before entering the blocking select. If Destroy() runs after a caller is already waiting on <-v.workers, that caller stays blocked until ctx.Done() instead of returning WatcherStopped, so shutdown becomes dependent on request timeouts rather than the watcher stop signal.
Also applies to: 87-89
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@worker_watcher/container/channel/vec.go` around lines 52 - 75, Pop can block on receiving from v.workers even after Destroy() is called; update shutdown so blocked Pop callers return WatcherStopped immediately by making Destroy() close (or otherwise broadcast on) the v.workers channel or set a dedicated stop channel, and then update Vec.Pop to handle a closed channel (check the receive ok flag or select on the stop channel) and return errors.E(errors.WatcherStopped) when the channel is closed; ensure both places that select on v.workers (the current receive and the later select at the other block mentioned) are adjusted and preserve proper RLock/RUnlock behavior while avoiding reads from a closed channel.
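A minimal sketch of the dedicated-stop-channel option suggested in this comment, on a heavily reduced Vec (element type collapsed to any, real mutex/length bookkeeping omitted), assuming the errors.E/errors.WatcherStopped helpers the comment already references:

```go
package channel

import (
	"context"
	"sync"

	"github.com/roadrunner-server/errors"
)

// Reduced sketch, not the actual Vec type.
type Vec struct {
	workers chan any
	stop    chan struct{} // closed exactly once by Destroy
	once    sync.Once
}

func NewVec(n int) *Vec {
	return &Vec{
		workers: make(chan any, n),
		stop:    make(chan struct{}),
	}
}

// Destroy broadcasts shutdown: closing stop wakes every blocked Pop at once,
// and sync.Once keeps repeated Destroy calls from panicking on a double close.
func (v *Vec) Destroy() {
	v.once.Do(func() { close(v.stop) })
}

func (v *Vec) Pop(ctx context.Context) (any, error) {
	select {
	case w := <-v.workers:
		return w, nil
	case <-v.stop: // woken even if Destroy runs after we started waiting
		return nil, errors.E(errors.WatcherStopped)
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```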
fsm/fsm.go (1)
38-45: ⚠️ Potential issue | 🟠 Major
Refactor Transition to use a CAS (compare-and-swap) loop for atomic validation and state update.
The current implementation validates the transition and updates the state in separate steps. The recognizer loads currentState multiple times to check validity, but the Store then executes independently. Another goroutine can change the state between those operations, allowing invalid sequences like Ready → Destroyed → Working despite the atomic field types. This calls for a CAS loop that atomically validates against the observed from state and updates only if it is unchanged. The recognizer must be refactored to accept the from state as a parameter instead of loading it internally, enabling proper atomic validation with CompareAndSwap.
Also applies to lines 108-157 (the recognizer function).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fsm/fsm.go` around lines 38 - 45, The Transition method currently calls recognizer() which reloads currentState internally and then calls Store(), allowing races; refactor recognizer to accept the observed from state (change signature to recognizer(from int64, to int64) error and remove internal loads) and implement a CAS loop in Transition that: 1) loads from via s.currentState.Load(), 2) calls s.recognizer(from, to) to validate the transition, 3) if valid attempts s.currentState.CompareAndSwap(from, to) and repeats the load/validate/CAS until CAS succeeds or recognizer returns an error, avoiding a final independent Store; update any other callers of recognizer accordingly and remove the separate Store() call in Transition.
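For illustration, the CAS-loop shape being requested, sketched on a hypothetical minimal FSM; the state codes and the validation rule below are placeholders, not the ones in fsm.go:

```go
package fsm

import (
	"fmt"
	"sync/atomic"
)

// Minimal illustration only; the repository's FSM has more states and fields.
type FSM struct {
	currentState atomic.Int64
}

// recognizer validates a transition against the state the caller observed,
// instead of re-loading currentState itself.
func (s *FSM) recognizer(from, to int64) error {
	// Placeholder rule: forbid leaving a terminal state (code 7 is arbitrary here).
	if from == 7 {
		return fmt.Errorf("invalid transition %d -> %d", from, to)
	}
	return nil
}

func (s *FSM) Transition(to int64) error {
	for {
		from := s.currentState.Load()
		if err := s.recognizer(from, to); err != nil {
			return err
		}
		// Only commit if nobody changed the state since we validated it.
		if s.currentState.CompareAndSwap(from, to) {
			return nil
		}
		// Lost the race: re-load, re-validate, retry.
	}
}
```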
pool/static_pool/dyn_allocator_test.go (1)
135-155: ⚠️ Potential issue | 🟠 Major
Move require.* calls out of the wg.Go callbacks to the main test goroutine.
Testify's require package calls t.FailNow(), which can only terminate the goroutine it's called from. When invoked inside a spawned goroutine, it won't stop the test; it will only exit that goroutine, causing confusing behavior and test hangs. Instead, either use assert.* for non-blocking checks within goroutines, or collect errors and assert them in the main test goroutine after wg.Wait().
This pattern appears in all four test sections.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` around lines 135 - 155, The tests call require.NoError and require.Error inside the wg.Go anonymous goroutines (the callbacks passed to wg.Go that call p.Exec), which can call t.FailNow in the spawned goroutine and not stop the main test; move these require.* assertions out of the goroutines by having the goroutines send their error/results back on channels (or use assert.* inside the goroutine) and then perform require.NoError/require.Error checks in the main test goroutine after wg.Wait (or read from the result channels) so assertions like require.NoError/require.Error around p.Exec are executed from the main test goroutine rather than inside the wg.Go callbacks.
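One way the suggested restructuring can look, sketched with a hypothetical doExec standing in for p.Exec:

```go
package static_pool_test

import (
	"context"
	"sync"
	"testing"

	"github.com/stretchr/testify/require"
)

// doExec is a hypothetical stand-in for p.Exec in the real test.
func doExec(ctx context.Context) error { return ctx.Err() }

func TestParallelExec(t *testing.T) {
	const n = 4
	errCh := make(chan error, n)

	var wg sync.WaitGroup
	for range n {
		wg.Go(func() {
			// Report failures instead of calling require.* off the test goroutine.
			if err := doExec(t.Context()); err != nil {
				errCh <- err
			}
		})
	}
	wg.Wait()
	close(errCh)

	for err := range errCh {
		require.NoError(t, err) // FailNow is safe here: main test goroutine
	}
}
```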
pool/static_pool/supervisor_test.go (1)
200-226: ⚠️ Potential issue | 🟡 Minor
Same cleanup issue applies to Test_SupervisedPoolReset.
Additionally, Reset (line 217) also derives a timeout context from the passed context (see pool.go:402). If a cancelled context is passed, the reset operation will also be affected. While the inline Reset call uses t.Context(), which is valid during test execution, be aware the cleanup still has the issue noted above.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/supervisor_test.go` around lines 200 - 226, The test Test_SupervisedPoolReset registers cleanup with t.Cleanup using t.Context(), which can be canceled by the testing framework before cleanup runs; change the cleanup to call p.Destroy with a non-cancelled context (e.g., context.Background() or a context with timeout) so Destroy reliably completes (update the t.Cleanup closure to call p.Destroy(context.Background()) or use context.WithTimeout(context.Background(), <reasonableDuration>)). Also note Reset(ctx) on Pool derives a timeout from the provided ctx (Reset), so keep using t.Context() for the Reset call but ensure cleanup uses an independent context.
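The cleanup pattern recommended here and in the related comments, sketched as a helper against a hypothetical destroyer interface; the 5-second bound is illustrative, not the pool's actual DestroyTimeout:

```go
package static_pool_test

import (
	"context"
	"testing"
	"time"
)

// destroyer is a stand-in for the pool type; only Destroy matters here.
type destroyer interface{ Destroy(ctx context.Context) }

func registerPoolCleanup(t *testing.T, p destroyer) {
	t.Cleanup(func() {
		// t.Context() is already canceled by the time cleanups run, so build
		// a fresh context for teardown.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		p.Destroy(ctx)
	})
}
```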
pool/static_pool/pool_test.go (1)
1046-1072: ⚠️ Potential issue | 🟡 Minor
Missing cleanup in Benchmark_Pool_Echo.
Unlike Benchmark_Pool_Echo_Batched and Benchmark_Pool_Echo_Replaced, this benchmark does not have a b.Cleanup to destroy the pool.
🔧 Proposed fix
```diff
 func Benchmark_Pool_Echo(b *testing.B) {
 	p, err := NewPool(
 		b.Context(),
 		func(cmd []string) *exec.Cmd { return exec.Command("php", "../../tests/client.php", "echo", "pipes") },
 		pipe.NewPipeFactory(log()),
 		testCfg,
 		log(),
 	)
 	if err != nil {
 		b.Fatal(err)
 	}
+	b.Cleanup(func() {
+		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
+		defer cancel()
+		p.Destroy(ctx)
+	})
 	bd := make([]byte, 1024)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/pool_test.go` around lines 1046 - 1072, The benchmark Benchmark_Pool_Echo creates a pool with NewPool but never destroys it; add a cleanup to call the pool's destroy method after creation by inserting b.Cleanup(func() { _ = p.Destroy() }) (or p.Close() if your pool uses Close) immediately after the NewPool success check so the pool is torn down when the benchmark finishes; reference NewPool, Benchmark_Pool_Echo and p.Exec when locating the change.
🧹 Nitpick comments (1)
.github/workflows/tests.yml (1)
12-15: Expand the matrix beyond Linux.
This PR touches a lot of test infrastructure, but this workflow still runs only on ubuntu-latest. That leaves Windows/macOS-specific regressions unexercised.
Suggested change
```diff
 matrix:
   php: ["8.5"]
   go: [stable]
-  os: ["ubuntu-latest"]
+  os: ["ubuntu-latest", "macos-latest", "windows-latest"]
```
Based on learnings: Ensure tests run successfully on multiple OS platforms including Linux, macOS, and Windows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/tests.yml around lines 12 - 15, Update the GitHub Actions matrix to run tests on multiple OSes by expanding the matrix.os entries from just "ubuntu-latest" to include "macos-latest" and "windows-latest" (keep "ubuntu-latest" too), ensure the job uses matrix.os for runs-on, and verify any OS-specific steps in the workflow handle platform differences (shell/command syntax, path separators, tool installation) so the existing php and go matrix combinations (php, go) are executed across all three platforms.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ipc/pipe/pipe_spawn_test.go`:
- Line 162: The tests/benchmarks are discarding the error from
SpawnWorkerWithContext (w, _ := f.SpawnWorkerWithContext(b.Context(), cmd)) and
then using w, which hides startup errors; change each occurrence to capture the
error (w, err := f.SpawnWorkerWithContext(...)) and fail fast: in tests call
t.Fatalf("SpawnWorkerWithContext failed: %v", err) (or require.NoError if you
prefer the test helper), and in benchmarks call b.Fatalf(...) so the
test/benchmark stops immediately on worker bootstrap failure; apply this pattern
to the other SpawnWorkerWithContext call sites that follow the same w, _ pattern
in this file.
In `@ipc/pipe/pipe_test.go`:
- Line 198: The tests are ignoring the error returned by SpawnWorkerWithContext
which leads to nil derefs when startup/handshake fails; update each site using
"w, _ := f.SpawnWorkerWithContext(...)" to check the returned error (e.g., "w,
err := f.SpawnWorkerWithContext(...)" and call t.Fatalf/b.Fatalf or
require.NoError depending on the test helper) and return immediately on error;
apply this fail-fast pattern for the occurrences around SpawnWorkerWithContext
referenced in the comment (including the uses at the same file near lines where
w is assigned: 198, 215, 296, 318, 339, 382, 409) so tests surface the actual
startup error instead of panicking later.
In `@pool/allocator.go`:
- Around line 67-74: The rollback loop in AllocateParallel that spawns
goroutines to Wait() and then calls Kill() on non-nil entries can drop Kill
(and/or Wait) errors, leaving stray workers; modify the rollback to capture
errors from workers[j].Kill() (and any Wait() error) — aggregate them (e.g., via
a slice or multierror) and append or wrap them with the original allocation
error before returning from AllocateParallel so callers see both the allocation
failure and any rollback failures; reference the workers slice,
AllocateParallel, Kill(), and Wait() when locating and updating the rollback
logic.
In `@pool/static_pool/dyn_allocator_test.go`:
- Line 50: The cleanup callbacks call np.Destroy(t.Context()) where t.Context()
is canceled when cleanup runs; change each cleanup to create a fresh context
(e.g. context.WithTimeout(context.Background(), ...) or context.Background())
inside the cleanup func and pass that to np.Destroy so Destroy's internal
timeout/cancellation works as intended; update all occurrences that use
t.Context() in cleanup callbacks (including the call sites around np.Destroy in
dyn_allocator_test.go) to use a newly constructed context and cancel/cleanup it
appropriately.
- Around line 79-92: The test spawns the loop that calls wg.Go inside a separate
goroutine which allows wg.Wait() to observe a zero counter and return early; fix
by registering all 1,000 tasks before waiting—either call wg.Add(1000) on the
sync.WaitGroup before starting the goroutine that invokes wg.Go, or move the for
loop that calls wg.Go out of the separate goroutine so wg.Go is executed
synchronously prior to calling wg.Wait; ensure this change applies to the wg,
wg.Go calls and the np.Exec payload invocations.
In `@pool/static_pool/fuzz_test.go`:
- Line 41: The cleanup currently calls p.Destroy(f.Context()), but
testing.F.Context() is cancelled before cleanup callbacks so Destroy's internal
timeout (context.WithTimeout) is defeated; change the cleanup to call p.Destroy
with a fresh non-cancelled context (e.g., context.Background() or
context.WithTimeout(context.Background(), DestroyTimeout) inside the cleanup)
instead of f.Context() so Destroy's timeout logic can run as intended; update
the f.Cleanup callback that references p.Destroy to use context.Background() (or
derive a timeout from your DestroyTimeout constant) rather than f.Context().
In `@pool/static_pool/pool_test.go`:
- Line 52: The cleanup uses t.Context() which may be cancelled before the
cleanup runs; change the t.Cleanup call to create a fresh background context
with an explicit timeout (as used elsewhere) and pass that to p.Destroy;
specifically locate the t.Cleanup(...) invoking p.Destroy and replace using
t.Context() with a context created from
context.WithTimeout(context.Background(), <reasonable duration>) and ensure the
cancel function is deferred/used appropriately inside the cleanup so
p.Destroy(ctx) runs with a valid context.
In `@pool/static_pool/supervisor_test.go`:
- Line 58: The cleanup passes t.Context() (which is already canceled when
t.Cleanup runs) into p.Destroy, causing context.WithTimeout in Destroy (uses
sp.cfg.DestroyTimeout) to create an immediately-canceled context and force the
urgent shutdown path in worker_watcher.Destroy; update each t.Cleanup call
(e.g., the one invoking p.Destroy) to call Destroy with an explicit
background-based timeout context instead of t.Context(): create a ctx :=
context.Background() (or context.WithTimeout(context.Background(),
sp.cfg.DestroyTimeout) and cancel it) and pass that ctx to p.Destroy so
Destroy's context.WithTimeout behaves correctly and triggers graceful draining
rather than urgent shutdown.
In `@worker_watcher/worker_watcher.go`:
- Around line 39-40: Guard closing of stopCh to prevent panics on repeated
Destroy calls by adding an idempotent mechanism (e.g., a sync.Once field or an
atomic/locked boolean) to the WorkerWatcher type and invoking it from Destroy so
close(ww.stopCh) only runs once; update the Destroy method (and any equivalent
shutdown paths referenced around the other occurrences) to use that once-guard
(or check-and-nil the channel under a lock) instead of directly closing stopCh,
ensuring callers can call ww.Destroy(ctx) multiple times safely.
---
Outside diff comments:
In `@fsm/fsm.go`:
- Around line 38-45: The Transition method currently calls recognizer() which
reloads currentState internally and then calls Store(), allowing races; refactor
recognizer to accept the observed from state (change signature to
recognizer(from int64, to int64) error and remove internal loads) and implement
a CAS loop in Transition that: 1) loads from via s.currentState.Load(), 2) calls
s.recognizer(from, to) to validate the transition, 3) if valid attempts
s.currentState.CompareAndSwap(from, to) and repeats the load/validate/CAS until
CAS succeeds or recognizer returns an error, avoiding a final independent Store;
update any other callers of recognizer accordingly and remove the separate
Store() call in Transition.
In `@pool/static_pool/dyn_allocator_test.go`:
- Around line 135-155: The tests call require.NoError and require.Error inside
the wg.Go anonymous goroutines (the callbacks passed to wg.Go that call p.Exec),
which can call t.FailNow in the spawned goroutine and not stop the main test;
move these require.* assertions out of the goroutines by having the goroutines
send their error/results back on channels (or use assert.* inside the goroutine)
and then perform require.NoError/require.Error checks in the main test goroutine
after wg.Wait (or read from the result channels) so assertions like
require.NoError/require.Error around p.Exec are executed from the main test
goroutine rather than inside the wg.Go callbacks.
In `@pool/static_pool/pool_test.go`:
- Around line 1046-1072: The benchmark Benchmark_Pool_Echo creates a pool with
NewPool but never destroys it; add a cleanup to call the pool's destroy method
after creation by inserting b.Cleanup(func() { _ = p.Destroy() }) (or p.Close()
if your pool uses Close) immediately after the NewPool success check so the pool
is torn down when the benchmark finishes; reference NewPool, Benchmark_Pool_Echo
and p.Exec when locating the change.
In `@pool/static_pool/supervisor_test.go`:
- Around line 200-226: The test Test_SupervisedPoolReset registers cleanup with
t.Cleanup using t.Context(), which can be canceled by the testing framework
before cleanup runs; change the cleanup to call p.Destroy with a non-cancelled
context (e.g., context.Background() or a context with timeout) so Destroy
reliably completes (update the t.Cleanup closure to call
p.Destroy(context.Background()) or use context.WithTimeout(context.Background(),
<reasonableDuration>)). Also note Reset(ctx) on Pool derives a timeout from the
provided ctx (Reset), so keep using t.Context() for the Reset call but ensure
cleanup uses an independent context.
In `@worker_watcher/container/channel/vec.go`:
- Around line 52-75: Pop can block on receiving from v.workers even after
Destroy() is called; update shutdown so blocked Pop callers return
WatcherStopped immediately by making Destroy() close (or otherwise broadcast on)
the v.workers channel or set a dedicated stop channel, and then update Vec.Pop
to handle a closed channel (check the receive ok flag or select on the stop
channel) and return errors.E(errors.WatcherStopped) when the channel is closed;
ensure both places that select on v.workers (the current receive and the later
select at the other block mentioned) are adjusted and preserve proper
RLock/RUnlock behavior while avoiding reads from a closed channel.
---
Nitpick comments:
In @.github/workflows/tests.yml:
- Around line 12-15: Update the GitHub Actions matrix to run tests on multiple
OSes by expanding the matrix.os entries from just "ubuntu-latest" to include
"macos-latest" and "windows-latest" (keep "ubuntu-latest" too), ensure the job
uses matrix.os for runs-on, and verify any OS-specific steps in the workflow
handle platform differences (shell/command syntax, path separators, tool
installation) so the existing php and go matrix combinations (php, go) are
executed across all three platforms.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5aa5521d-3ccd-4dd0-89b6-1c83320dacbf
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (23)
- .github/workflows/tests.yml
- CLAUDE.md
- fsm/fsm.go
- go.mod
- ipc/pipe/pipe_spawn_test.go
- ipc/pipe/pipe_test.go
- ipc/socket/socket_spawn_test.go
- ipc/socket/socket_test.go
- pool/allocator.go
- pool/ratelimiter/ratelimiter_test.go
- pool/static_pool/debug.go
- pool/static_pool/dyn_allocator.go
- pool/static_pool/dyn_allocator_test.go
- pool/static_pool/fuzz_test.go
- pool/static_pool/options.go
- pool/static_pool/pool.go
- pool/static_pool/pool_test.go
- pool/static_pool/supervisor_test.go
- worker/worker.go
- worker/worker_test.go
- worker_watcher/container/channel/vec.go
- worker_watcher/container/channel/vec_test.go
- worker_watcher/worker_watcher.go
💤 Files with no reviewable changes (1)
- CLAUDE.md
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@worker_watcher/worker_watcher.go`:
- Around line 39-40: The stopCh signaling uses a size-1 buffered non-blocking
send (select { case stopCh <- struct{}{}: default: }) which only wakes a single
waiter; replace this with a broadcast close pattern: change stopCh to an
unbuffered chan struct{} (or keep chan struct{} but treat it as close-only), add
a sync.Once field (e.g., destroyOnce) on the same struct, initialize stopCh, and
in Destroy() call destroyOnce.Do(func(){ close(w.stopCh) }) so all goroutines
waiting on <-w.stopCh are awakened; remove any code that attempts to send into
stopCh and ensure Allocate retry loops and other waiters use <-w.stopCh to
detect shutdown.
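A sketch of the broadcast-close pattern described above, on a trimmed-down WorkerWatcher: only stopCh and the once-guard are shown, and errStopped is an illustrative sentinel rather than the repository's errors.WatcherStopped.

```go
package worker_watcher

import (
	"context"
	"errors"
	"sync"
)

var errStopped = errors.New("worker watcher stopped") // illustrative sentinel

// Trimmed illustration; the real WorkerWatcher has many more fields.
type WorkerWatcher struct {
	stopCh      chan struct{}
	destroyOnce sync.Once
}

func NewWorkerWatcher() *WorkerWatcher {
	return &WorkerWatcher{stopCh: make(chan struct{})}
}

// Destroy is idempotent: the channel is closed exactly once, and the close
// wakes every goroutine blocked on <-ww.stopCh, unlike a single buffered send
// that can only release one waiter.
func (ww *WorkerWatcher) Destroy(context.Context) {
	ww.destroyOnce.Do(func() { close(ww.stopCh) })
}

// waitOrStop is the shape an allocation retry loop can use to observe shutdown.
func (ww *WorkerWatcher) waitOrStop(ctx context.Context) error {
	select {
	case <-ww.stopCh:
		return errStopped
	case <-ctx.Done():
		return ctx.Err()
	}
}
```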
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fb55c996-e4e6-412e-9a4e-9866dc2b0993
📒 Files selected for processing (1)
worker_watcher/worker_watcher.go
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pool/static_pool/dyn_allocator_test.go (1)
135-155: ⚠️ Potential issue | 🟠 Major
Replace require.* assertions with assert.* inside wg.Go() goroutine bodies.
require calls t.FailNow(), which terminates only the calling goroutine (via runtime.Goexit()), not the test. Per Go's testing package documentation, FailNow must be invoked only from the main test goroutine. Use assert instead, or collect errors and call require after wg.Wait().
Problematic instances found at:
- Lines 138, 154
- Lines 168, 181
- Lines 228, 241
- Line 296
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` around lines 135 - 155, Inside the goroutines passed to wg.Go replace calls to require.* (e.g., require.NoError and require.Error) with the corresponding assert.* (assert.NoError, assert.Error) so you don't call t.FailNow()/runtime.Goexit from non-main test goroutines; locate these calls in the closures used with wg.Go (the two anonymous functions shown around the "sending request 1" and "sending request 2" logs) and change the require assertions to assert assertions, or alternatively capture errors into a channel/slice and call require from the main test goroutine after wg.Wait().
♻️ Duplicate comments (2)
pool/static_pool/dyn_allocator_test.go (2)
50-50: ⚠️ Potential issue | 🟠 Major
Use a fresh context inside these cleanup callbacks.
testing.T.Context() is canceled before t.Cleanup runs, so each Destroy(t.Context()) here reaches Pool.Destroy with an already-canceled parent context. That makes shutdown flaky and bypasses the normal timeout path.
When does Go's testing.T.Context() get canceled relative to t.Cleanup callbacks, and does context.WithTimeout(t.Context(), ...) inherit that cancellation?
Also applies to: 106-106, 196-198, 248-250, 304-304
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` at line 50, The cleanup callbacks call np.Destroy(t.Context()) which uses testing.T.Context() that is already canceled when t.Cleanup runs; replace those calls to create a fresh parent context (e.g., ctx := context.Background() or ctx, cancel := context.WithTimeout(context.Background(), <timeout>); defer cancel()) and pass that ctx to np.Destroy in the t.Cleanup closures so Pool.Destroy (np.Destroy) runs with a live context and honors the intended timeout; update each t.Cleanup that currently references t.Context() (the closure invoking np.Destroy) to construct and use a fresh context instead.
80-92: ⚠️ Potential issue | 🟠 Major
Register the wg.Go work before calling wg.Wait().
wg.Wait() can observe a zero counter and return before the outer goroutine has queued any of the 1,000 tasks, which makes the rest of the test race the workload.
Suggested fix
```diff
-	go func() {
-		for range 1000 {
-			wg.Go(func() {
-				r, erre := np.Exec(t.Context(), &payload.Payload{Body: []byte("hello"), Context: nil}, make(chan struct{}))
-				if erre != nil {
-					t.Log("failed request: ", erre.Error())
-					return
-				}
-				resp := <-r
-				assert.Equal(t, []byte("hello"), resp.Body())
-			})
-		}
-	}()
+	for range 1000 {
+		wg.Go(func() {
+			r, erre := np.Exec(t.Context(), &payload.Payload{Body: []byte("hello"), Context: nil}, make(chan struct{}))
+			if erre != nil {
+				t.Log("failed request: ", erre.Error())
+				return
+			}
+			resp := <-r
+			assert.Equal(t, []byte("hello"), resp.Body())
+		})
+	}
```
In Go 1.25/1.26, what do the sync.WaitGroup.Go and Wait docs say about calling Go after Wait has started when the counter is initially zero?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` around lines 80 - 92, The test races because wg.Wait() can observe a zero counter before the outer goroutine has enqueued wg.Go tasks; fix by registering the work before waiting—call wg.Add(1000) (or otherwise increment the waitgroup counter) before starting the goroutine that calls wg.Go, or move the wg.Add calls inside the same goroutine before any wg.Go invocation so that wg.Go/wg.Wait ordering is safe; update the test around wg.Go, wg.Wait and the outer anonymous goroutine that calls np.Exec/ payload.Payload to ensure the waitgroup counter is set prior to Wait.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pool/static_pool/dyn_allocator_test.go`:
- Around line 321-330: After NewPool(...) returns successfully, immediately
register test teardown to always destroy the pool (e.g., using t.Cleanup or
defer with p.Destroy) so that any subsequent require.* failures still clean up
PHP workers; locate the NewPool call and the surrounding test body (symbols:
NewPool, p, require.NoError) and add a teardown registration right after
require.NoError(t, err) to call the pool destroy/close method (p.Destroy or
equivalent) with the appropriate context.
---
Outside diff comments:
In `@pool/static_pool/dyn_allocator_test.go`:
- Around line 135-155: Inside the goroutines passed to wg.Go replace calls to
require.* (e.g., require.NoError and require.Error) with the corresponding
assert.* (assert.NoError, assert.Error) so you don't call
t.FailNow()/runtime.Goexit from non-main test goroutines; locate these calls in
the closures used with wg.Go (the two anonymous functions shown around the
"sending request 1" and "sending request 2" logs) and change the require
assertions to assert assertions, or alternatively capture errors into a
channel/slice and call require from the main test goroutine after wg.Wait().
---
Duplicate comments:
In `@pool/static_pool/dyn_allocator_test.go`:
- Line 50: The cleanup callbacks call np.Destroy(t.Context()) which uses
testing.T.Context() that is already canceled when t.Cleanup runs; replace those
calls to create a fresh parent context (e.g., ctx := context.Background() or
ctx, cancel := context.WithTimeout(context.Background(), <timeout>); defer
cancel()) and pass that ctx to np.Destroy in the t.Cleanup closures so
Pool.Destroy (np.Destroy) runs with a live context and honors the intended
timeout; update each t.Cleanup that currently references t.Context() (the
closure invoking np.Destroy) to construct and use a fresh context instead.
- Around line 80-92: The test races because wg.Wait() can observe a zero counter
before the outer goroutine has enqueued wg.Go tasks; fix by registering the work
before waiting—call wg.Add(1000) (or otherwise increment the waitgroup counter)
before starting the goroutine that calls wg.Go, or move the wg.Add calls inside
the same goroutine before any wg.Go invocation so that wg.Go/wg.Wait ordering is
safe; update the test around wg.Go, wg.Wait and the outer anonymous goroutine
that calls np.Exec/ payload.Payload to ensure the waitgroup counter is set prior
to Wait.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e22aafb7-475e-4875-9700-c44000e6f161
📒 Files selected for processing (1)
pool/static_pool/dyn_allocator_test.go
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pool/static_pool/dyn_allocator_test.go (2)
34-42: ⚠️ Potential issue | 🟡 Minor
Use require for NewPool setup.
assert.NoError/assert.NotNil lets these tests keep running after construction fails, so the later Exec, Workers, and goroutine code can dereference a nil pool and hide the real setup failure.
Also applies to: 67-77, 121-131, 213-223, 266-277
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` around lines 34 - 42, The test uses assert.NoError and assert.NotNil when calling NewPool which allows the test to continue with a nil np; change those to require.NoError(t, err) and require.NotNil(t, np) for the NewPool setup so the test aborts immediately on construction failure. Update the same pattern wherever NewPool is constructed (the other blocks around the mentioned ranges) so NewPool, np and err are checked with require instead of assert before using np in Exec/Workers/goroutines.
135-155: ⚠️ Potential issue | 🟠 Major
Don't call require from wg.Go goroutines.
require.* delegates to FailNow, which is documented to work only from the goroutine executing the test; it cannot safely be called from spawned goroutines. In these cases, use assert or return errors to the main test goroutine and call require after wg.Wait().
Lines affected: 138, 154, 168, 181, 228, 241, 296
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` around lines 135 - 155, The tests are calling require.* inside wg.Go goroutines (e.g., inside the wg.Go closures that call p.Exec), which is unsafe because require.FailNow must run in the main test goroutine; change those require calls to non-fatal assertions or propagate errors back to the main goroutine and assert there. Specifically, update the anonymous functions passed to wg.Go (the closures that call p.Exec and inspect r) to use assert.NoError/assert.Error (from the testing package used) and collect any failure state (errors or booleans) into a channel or slice, or return an error to the main test goroutine before calling wg.Wait(); after wg.Wait() use require.* to fail the test definitively if any goroutine reported an error. Ensure you update all occurrences where require is used inside wg.Go (closures around calls to p.Exec and response handling).
♻️ Duplicate comments (2)
pool/static_pool/dyn_allocator_test.go (2)
50-50: ⚠️ Potential issue | 🟠 Major
Use a fresh context inside each cleanup callback.
testing.T.Context() is canceled just before cleanup callbacks run, so every Destroy(t.Context()) here hands teardown an already-canceled context. Create a new cleanup context inside the callback instead, ideally with the pool's destroy timeout. (pkg.go.dev)
Also applies to: 106-106, 196-198, 248-250, 304-304, 332-332, 397-397
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` at line 50, The cleanup callbacks call np.Destroy(t.Context()) which uses a context that has been canceled before cleanup runs; change each cleanup to create a fresh context (e.g., context.WithTimeout or context.WithCancel using the pool's destroy timeout) inside the callback and pass that new context to np.Destroy (replace calls like np.Destroy(t.Context()) in tests around np.Destroy, e.g., at the cleanup in dyn_allocator_test.go and the other occurrences) and ensure the created context is cancelled after Destroy returns.
79-92: ⚠️ Potential issue | 🟠 Major
Register the 1,000 wg.Go tasks before calling wg.Wait().
The outer go func() lets wg.Wait() race the initial task registration and return while the counter is still zero. WaitGroup.Go has the same ordering rule as Add: when the group is empty, Go must happen before Wait. (pkg.go.dev)
Suggested fix
```diff
 	wg := &sync.WaitGroup{}
-	go func() {
-		for range 1000 {
-			wg.Go(func() {
-				r, erre := np.Exec(t.Context(), &payload.Payload{Body: []byte("hello"), Context: nil}, make(chan struct{}))
-				if erre != nil {
-					t.Log("failed request: ", erre.Error())
-					return
-				}
-				resp := <-r
-				assert.Equal(t, []byte("hello"), resp.Body())
-			})
-		}
-	}()
+	for range 1000 {
+		wg.Go(func() {
+			r, erre := np.Exec(t.Context(), &payload.Payload{Body: []byte("hello"), Context: nil}, make(chan struct{}))
+			if erre != nil {
+				t.Log("failed request: ", erre.Error())
+				return
+			}
+			resp := <-r
+			assert.Equal(t, []byte("hello"), resp.Body())
+		})
+	}
```
Also applies to: 101-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pool/static_pool/dyn_allocator_test.go` around lines 79 - 92, The test registers 1,000 wg.Go tasks inside an outer goroutine which can let wg.Wait race and return early; change the test so all 1,000 calls to wg.Go happen before calling wg.Wait (e.g., remove the outer go func() and run the for loop synchronously, or add a registration barrier that signals when registration is complete), ensuring the WaitGroup.Go registrations (wg.Go) occur before wg.Wait; keep the existing request body/response logic (np.Exec and reading resp) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@pool/static_pool/dyn_allocator_test.go`:
- Around line 34-42: The test uses assert.NoError and assert.NotNil when calling
NewPool which allows the test to continue with a nil np; change those to
require.NoError(t, err) and require.NotNil(t, np) for the NewPool setup so the
test aborts immediately on construction failure. Update the same pattern
wherever NewPool is constructed (the other blocks around the mentioned ranges)
so NewPool, np and err are checked with require instead of assert before using
np in Exec/Workers/goroutines.
- Around line 135-155: The tests are calling require.* inside wg.Go goroutines
(e.g., inside the wg.Go closures that call p.Exec), which is unsafe because
require.FailNow must run in the main test goroutine; change those require calls
to non-fatal assertions or propagate errors back to the main goroutine and
assert there. Specifically, update the anonymous functions passed to wg.Go (the
closures that call p.Exec and inspect r) to use assert.NoError/assert.Error
(from the testing package used) and collect any failure state (errors or
booleans) into a channel or slice, or return an error to the main test goroutine
before calling wg.Wait(); after wg.Wait() use require.* to fail the test
definitively if any goroutine reported an error. Ensure you update all
occurrences where require is used inside wg.Go (closures around calls to p.Exec
and response handling).
---
Duplicate comments:
In `@pool/static_pool/dyn_allocator_test.go`:
- Line 50: The cleanup callbacks call np.Destroy(t.Context()) which uses a
context that has been canceled before cleanup runs; change each cleanup to
create a fresh context (e.g., context.WithTimeout or context.WithCancel using
the pool's destroy timeout) inside the callback and pass that new context to
np.Destroy (replace calls like np.Destroy(t.Context()) in tests around
np.Destroy, e.g., at the cleanup in dyn_allocator_test.go and the other
occurrences) and ensure the created context is cancelled after Destroy returns.
- Around line 79-92: The test registers 1,000 wg.Go tasks inside an outer
goroutine which can let wg.Wait race and return early; change the test so all
1,000 calls to wg.Go happen before calling wg.Wait (e.g., remove the outer go
func() and run the for loop synchronously, or add a registration barrier that
signals when registration is complete), ensuring the WaitGroup.Go registrations
(wg.Go) occur before wg.Wait; keep the existing request body/response logic
(np.Exec and reading resp) unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1d39b74f-8f73-4ef4-8154-fd77efcd6165
📒 Files selected for processing (1)
pool/static_pool/dyn_allocator_test.go
Reason for This PR
Description of Changes
License Acceptance
By submitting this pull request, I confirm that my contribution is made under
the terms of the MIT license.
PR Checklist
[Author TODO: Meet these criteria.] [Reviewer TODO: Verify that these criteria are met. Request changes if not.]
- Commits are signed off (git commit -s).
- CHANGELOG.md is updated.
Summary by CodeRabbit
Chores
Refactor
Tests
Documentation