Network mode host #6
📝 Walkthrough

This PR implements a generic agent-provided session summary mechanism (replacing Vibe Kanban–specific hardcoded logic), adds a complete Vibe Kanban web-UI agent module with port discovery, introduces configurable host vs bridge container networking modes, improves Docker image lifecycle management with dependency-ordered purge, and updates comprehensive design and requirement documentation.

Changes

- Agent Summary Info and Vibe Kanban Implementation
- Docker Networking and Image Lifecycle
- Testing Infrastructure
- Constants and Documentation
🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/cmd/root.go (1)
677-700: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Run agent health checks on the reconnect path too.
This early-return path skips the health-check loop on Lines 750-755. For an already-running container, the command can still print summary info/URLs without warning that an agent is unhealthy, so reconnect behavior is less reliable than fresh-start behavior.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/cmd/root.go` around lines 677 - 700, The reconnect branch currently skips the agent health-check loop and returns early, so change it to run the same agent health checks before printing the session summary: reuse/extract the health-check logic that iterates over enabledAgents (the loop used later that detects unhealthy agents and emits warnings) into a helper (e.g., runAgentHealthChecks(ctx, c, containerName, enabledAgents) returning agentInfo and any health errors) and call it here before printSessionSummary(containerName, sshPort, enabledIDs, agentInfo); ensure warnings for unhealthy agents are emitted the same way as on fresh start and only then return.
🧹 Nitpick comments (2)
internal/docker/export_test.go (1)
1-1: ⚡ Quick win

Empty test file serves no purpose.
This file contains only a package declaration with no exports, functions, or test code. Consider either removing it if it was added accidentally, or clarifying its intended purpose if it's a placeholder for future test utilities.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/docker/export_test.go` at line 1, The file export_test.go currently only contains "package docker" and should either be deleted if accidental or turned into a real test placeholder: either remove export_test.go to avoid empty test files, or replace its contents with a minimal test scaffold (e.g., a TestExportPlaceholder function in package docker that calls t.Skip or add a clear TODO comment explaining intended tests) so it no longer is empty and causes confusion; locate export_test.go and update or remove it accordingly.

internal/cmd/purge_test.go (1)
145-167: ⚡ Quick win

Assert prune execution in the new ordering test.
The new test verifies image-removal order but doesn’t verify that `ImagesPrune` actually ran; a regression could skip prune and still pass.

✅ Suggested test hardening

```diff
 err := cmd.RunPurgeWith(mock)
 require.NoError(t, err)
 require.Len(t, mock.removedImageIDs, 3)
+require.True(t, mock.pruned, "purge should prune dangling images between instance and base removal")
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/cmd/purge_test.go` around lines 145 - 167, The test currently checks removal order but not that ImagesPrune ran; update the test around the RunPurgeWith(mock) call to assert the mock's prune-call indicator (e.g., mock.pruneCalled, mock.ImagesPruneCalled, or mock.pruneCount) is set/greater than zero after cmd.RunPurgeWith returns; keep the new assertion near the top (immediately after require.NoError(t, err)) and reference the mock and ImagesPrune-related field/method names so a regression that skips ImagesPrune will fail.
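The shape of the hardened assertion can be sketched with a toy mock; the field names here (`pruned`, `removedImageIDs`) and the two-image flow are illustrative and simplified relative to the real mock in purge_test.go.

```go
package main

import "fmt"

// mockDockerAPI is a hypothetical stand-in for the test mock: it records
// which images were removed and whether a prune ran.
type mockDockerAPI struct {
	removedImageIDs []string
	pruned          bool
}

func (m *mockDockerAPI) ImageRemove(id string) {
	m.removedImageIDs = append(m.removedImageIDs, id)
}

func (m *mockDockerAPI) ImagesPrune() { m.pruned = true }

// runPurge sketches the dependency-ordered purge: instance image first,
// then a dangling-image prune, then the base image.
func runPurge(m *mockDockerAPI) {
	m.ImageRemove("bac-instance:latest")
	m.ImagesPrune()
	m.ImageRemove("bac-base:latest")
}

func main() {
	m := &mockDockerAPI{}
	runPurge(m)
	// A regression that skips ImagesPrune would leave m.pruned false,
	// which the hardened test catches even if removal order is intact.
	fmt.Println(m.pruned, len(m.removedImageIDs)) // → true 2
}
```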
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.kiro/specs/bootstrap-ai-coding/design-vibekanban.md:
- Line 698: Two Dockerfile example code fences lack a language tag which
triggers markdownlint MD040; locate the two Dockerfile examples in the
design-vibekanban.md content (the fenced blocks used for Dockerfile snippets)
and add the language identifier by changing the opening fences to use
```dockerfile (or ```Dockerfile) so the code blocks declare the language and
satisfy MD040.
- Around line 137-179: Update the design doc to remove the stale core-coupled
requirements: delete or stop prescribing SessionSummary.VibeKanbanURL, the
discoverVibeKanbanPort() flow in root.go, and any direct core checks against
constants.VibeKanbanAgentName; likewise stop asserting bridge mode is not
host-accessible. Instead document that each agent implements
AgentInfo/SummaryInfo to expose its own summary (including any UI URL), and that
the renderer (FormatSessionSummary / root.go) should be core-agnostic and only
display what SummaryInfo provides; replace references to container port
discovery with the generic internal/docker/DiscoverListeningPort helper (or note
agent-side responsibility) and update the other listed sections (450-530,
592-597, 613-614) to reflect this agent-owned SummaryInfo pattern.
In `@internal/agents/vibekanban/integration_test.go`:
- Around line 67-72: The temp project directory created in setupSharedContainer
(variable projectDir / dirName) is never cleaned up; modify setupSharedContainer
to persist the full projectDir (e.g., store it on the test fixture struct or
return it) so teardownSharedContainer can access it, and in
teardownSharedContainer call os.RemoveAll(projectDir) (with error
handling/logging) to delete the temp dir; apply the same change for the other
creation sites referenced (lines ~194-209) so every MkdirTemp has a matching
RemoveAll in teardown.
In `@internal/agents/vibekanban/vibekanban.go`:
- Around line 164-170: The health check currently uses docker.ExecInContainer
with pgrep -f "vibe-kanban", which can match supervisor scripts like
"vibe-kanban-supervisor.sh" and produce false positives; update the pgrep
invocation in the same docker.ExecInContainer call to use a stricter match
(e.g., use pgrep -x "vibe-kanban" or pgrep -f with a word-boundary/anchor) so it
only matches the actual service process name, keep the existing exitCode
handling (exitCode == 0 means running) and error wrapping in the fmt.Errorf
return.
- Around line 87-99: The generated bash script uses "date +%%%%s" in the
fmt.Sprintf format string which becomes "date +%%s" in the output (causing
literal "%s" in bash); change the format token to "date +%%s" (i.e., use two
percent signs in the Go format string) where the restart timestamps are
appended/checked (the places that build the script around RESTART_TIMES, the
loop that prunes by WINDOW_SECONDS and the line that appends a new timestamp) so
the produced script contains "date +%s" and returns epoch seconds as intended.
In `@internal/cmd/purge.go`:
- Around line 83-87: The dangling-images prune call currently removes all host
dangling images because danglingFilter only sets "dangling=true" and ImagesPrune
is called directly; restrict pruning to BAC-owned artifacts by adding the BAC
label filter (e.g. "label", "bac.managed=true") to the filter set (or
merge/replace danglingFilter with the existing purgeFilter() usage) before
calling api.ImagesPrune so the prune only targets images matching both
dangling=true and bac.managed=true; update the code that builds danglingFilter
(symbol: danglingFilter) or call purgeFilter() and pass that filter into
ImagesPrune (symbol: ImagesPrune) to scope the operation.
In `@internal/cmd/root_test.go`:
- Around line 482-529: The test is falsely matching any line containing the key
and value; change the assertions to look for exact agent-info lines produced by
FormatSessionSummary instead of substring matches: for each agent.KeyValue in
agentInfo build the exact expectedLine (e.g. fmt.Sprintf("%s: %s", kv.Key,
kv.Value)) and assert that this expectedLine appears in lines at an index
greater than the detected "Enabled agents:" index (use equality or anchored
regex match against lines slice), replacing the current contains-based checks in
the loops that reference agentInfo and FormatSessionSummary.
In `@internal/cmd/root.go`:
- Around line 394-400: The current call to c.ImagesPrune with only
danglingFilter (dangling=true) is daemon-wide and can remove unrelated images;
change this so the purge is scoped to BAC-owned artifacts only: either add a
BAC-specific filter (e.g.,
danglingFilter.Add("label","<BAC-ownership-label>=true") or
danglingFilter.Add("reference","<bac-repo-or-name-pattern>") before calling
c.ImagesPrune, or replace the ImagesPrune call with a targeted flow that lists
images (c.ImageList / ImagesList) filtered for BAC-owned images and then
prunes/removes those specific image IDs; update the code around danglingFilter
and c.ImagesPrune to use that BAC-scoped filter or targeted deletion so --purge
only affects BAC resources.
In `@internal/docker/integration_test.go`:
- Around line 653-654: Replace the hardcoded SSH port assignments (e.g., the
sshPort := 22222 declarations) with dynamically allocated ports by calling the
existing helper findFreePort(); specifically, locate the sshPort variables in
integration_test.go (the sshPort := 22222 at the top and the similar
declarations around lines referenced) and change them to sshPort :=
findFreePort() (or the equivalent call pattern used elsewhere in this file),
ensuring any dependent uses continue to reference sshPort so tests bind to an
available port.
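A common way to implement such a `findFreePort` helper (the actual helper's signature in integration_test.go may differ, and the name is taken from the comment above) is to bind to port 0 so the OS picks an available port, then release the listener:

```go
package main

import (
	"fmt"
	"net"
)

// findFreePort asks the OS for an ephemeral port by listening on port 0,
// then closes the listener and returns the port number for reuse.
// Note: there is a small TOCTOU window before the caller rebinds the port.
func findFreePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := findFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Println(port > 0 && port < 65536) // true
}
```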
In `@internal/docker/runner_network_test.go`:
- Line 39: The handler currently checks the endpoint by slicing
r.URL.Path[len(r.URL.Path)-18:] which panics for short paths; replace that
fixed-length slice with strings.HasSuffix(r.URL.Path, "/containers/create") and
ensure strings is imported; keep the rest of the condition (r.Method ==
http.MethodPost) unchanged so the branch still triggers only for POST to the
create endpoint.
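The behavioral difference is easy to see in isolation: a fixed-length slice like `path[len(path)-18:]` panics when the path is shorter than 18 bytes, while `strings.HasSuffix` is safe for any length. The `isCreateEndpoint` name below is illustrative, not the handler's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// isCreateEndpoint performs the panic-free suffix check suggested for the
// test server's handler.
func isCreateEndpoint(path string) bool {
	return strings.HasSuffix(path, "/containers/create")
}

func main() {
	fmt.Println(isCreateEndpoint("/v1.47/containers/create")) // true
	// "/ping" is only 5 bytes: slicing it at len-18 would panic,
	// but HasSuffix simply returns false.
	fmt.Println(isCreateEndpoint("/ping")) // false
}
```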
---
Outside diff comments:
In `@internal/cmd/root.go`:
- Around line 677-700: The reconnect branch currently skips the agent
health-check loop and returns early, so change it to run the same agent health
checks before printing the session summary: reuse/extract the health-check logic
that iterates over enabledAgents (the loop used later that detects unhealthy
agents and emits warnings) into a helper (e.g., runAgentHealthChecks(ctx, c,
containerName, enabledAgents) returning agentInfo and any health errors) and
call it here before printSessionSummary(containerName, sshPort, enabledIDs,
agentInfo); ensure warnings for unhealthy agents are emitted the same way as on
fresh start and only then return.
---
Nitpick comments:
In `@internal/cmd/purge_test.go`:
- Around line 145-167: The test currently checks removal order but not that
ImagesPrune ran; update the test around the RunPurgeWith(mock) call to assert
the mock's prune-call indicator (e.g., mock.pruneCalled, mock.ImagesPruneCalled,
or mock.pruneCount) is set/greater than zero after cmd.RunPurgeWith returns;
keep the new assertion near the top (immediately after require.NoError(t, err))
and reference the mock and ImagesPrune-related field/method names so a
regression that skips ImagesPrune will fail.
In `@internal/docker/export_test.go`:
- Line 1: The file export_test.go currently only contains "package docker" and
should either be deleted if accidental or turned into a real test placeholder:
either remove export_test.go to avoid empty test files, or replace its contents
with a minimal test scaffold (e.g., a TestExportPlaceholder function in package
docker that calls t.Skip or add a clear TODO comment explaining intended tests)
so it no longer is empty and causes confusion; locate export_test.go and update
or remove it accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 078b8525-f03f-4e85-a078-18d5b8198db9
📒 Files selected for processing (51)
- .kiro/specs/bootstrap-ai-coding/design-agent-summary-info.md
- .kiro/specs/bootstrap-ai-coding/design-build-resources.md
- .kiro/specs/bootstrap-ai-coding/design-components.md
- .kiro/specs/bootstrap-ai-coding/design-data-models.md
- .kiro/specs/bootstrap-ai-coding/design-docker.md
- .kiro/specs/bootstrap-ai-coding/design-properties.md
- .kiro/specs/bootstrap-ai-coding/design-vibekanban.md
- .kiro/specs/bootstrap-ai-coding/design.md
- .kiro/specs/bootstrap-ai-coding/requirements-agent-summary-info.md
- .kiro/specs/bootstrap-ai-coding/requirements-agents.md
- .kiro/specs/bootstrap-ai-coding/requirements-cli-combinations.md
- .kiro/specs/bootstrap-ai-coding/requirements-core.md
- .kiro/specs/bootstrap-ai-coding/requirements-two-layer-image.md
- .kiro/specs/bootstrap-ai-coding/requirements.md
- .kiro/specs/bootstrap-ai-coding/tasks.md
- .kiro/steering/agent-module.md
- .kiro/steering/cli-flags.md
- .kiro/steering/constants.md
- .kiro/steering/product.md
- .kiro/steering/structure.md
- internal/agent/agent.go
- internal/agent/registry_test.go
- internal/agents/augment/augment.go
- internal/agents/augment/augment_test.go
- internal/agents/augment/integration_test.go
- internal/agents/buildresources/buildresources.go
- internal/agents/buildresources/buildresources_test.go
- internal/agents/buildresources/integration_test.go
- internal/agents/claude/claude.go
- internal/agents/claude/claude_test.go
- internal/agents/claude/integration_test.go
- internal/agents/vibekanban/integration_test.go
- internal/agents/vibekanban/vibekanban.go
- internal/agents/vibekanban/vibekanban_test.go
- internal/cmd/purge.go
- internal/cmd/purge_test.go
- internal/cmd/root.go
- internal/cmd/root_test.go
- internal/cmd/stop.go
- internal/constants/constants.go
- internal/datadir/datadir.go
- internal/datadir/datadir_test.go
- internal/docker/builder.go
- internal/docker/builder_test.go
- internal/docker/client.go
- internal/docker/export_test.go
- internal/docker/integration_test.go
- internal/docker/runner.go
- internal/docker/runner_network_test.go
- internal/docker/testing_helper.go
- main.go
The `SessionSummary` struct in `cmd/root.go` needs a new field:

```go
type SessionSummary struct {
	DataDir       string
	ProjectDir    string
	SSHPort       int
	SSHConnect    string
	EnabledAgents []string
	VibeKanbanURL string // empty if not discovered or not enabled
}
```

`FormatSessionSummary` conditionally includes the Vibe Kanban line:

```go
func FormatSessionSummary(s SessionSummary) string {
	var sb strings.Builder
	fmt.Fprintf(&sb, "Data directory: %s\n", s.DataDir)
	fmt.Fprintf(&sb, "Project directory: %s\n", s.ProjectDir)
	fmt.Fprintf(&sb, "SSH port: %d\n", s.SSHPort)
	fmt.Fprintf(&sb, "SSH connect: %s\n", s.SSHConnect)
	fmt.Fprintf(&sb, "Enabled agents: %s\n", strings.Join(s.EnabledAgents, ", "))
	if s.VibeKanbanURL != "" {
		fmt.Fprintf(&sb, "Vibe Kanban: %s\n", s.VibeKanbanURL)
	}
	return sb.String()
}
```

### Port Discovery Function

A new exported function in `internal/docker/` for discovering a process's listening port:

```go
// DiscoverListeningPort executes `ss -tlnp` inside the container and returns
// the first port where the given process name is listening. Returns 0 if not found.
func DiscoverListeningPort(ctx context.Context, c *Client, containerID string, processName string) (int, error) {
	// Uses ExecInContainerWithOutput (new helper) to capture stdout
	// Parses ss output for lines containing processName
	// Extracts port from the Local Address:Port column
}
```
Stale core-coupled design sections conflict with current architecture.
These sections still prescribe SessionSummary.VibeKanbanURL, discoverVibeKanbanPort() in root.go, and direct core checks against constants.VibeKanbanAgentName, plus they claim bridge mode isn’t host-accessible. That contradicts the generic AgentInfo/SummaryInfo design and current bridge port-mapping behavior.
Please update this doc to consistently describe agent-owned SummaryInfo() and core-agnostic rendering only.
Also applies to: 450-530, 592-597, 613-614
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.kiro/specs/bootstrap-ai-coding/design-vibekanban.md around lines 137 - 179,
Update the design doc to remove the stale core-coupled requirements: delete or
stop prescribing SessionSummary.VibeKanbanURL, the discoverVibeKanbanPort() flow
in root.go, and any direct core checks against constants.VibeKanbanAgentName;
likewise stop asserting bridge mode is not host-accessible. Instead document
that each agent implements AgentInfo/SummaryInfo to expose its own summary
(including any UI URL), and that the renderer (FormatSessionSummary / root.go)
should be core-agnostic and only display what SummaryInfo provides; replace
references to container port discovery with the generic
internal/docker/DiscoverListeningPort helper (or note agent-side responsibility)
and update the other listed sections (450-530, 592-597, 613-614) to reflect this
agent-owned SummaryInfo pattern.
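The agent-owned SummaryInfo pattern the prompt describes can be sketched as follows. The type and function names here (`KeyValue`, `summaryProvider`, `renderSummary`) only loosely mirror the real `internal/agent` types and are illustrative assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// KeyValue is a simple key/value pair an agent exposes for the summary.
type KeyValue struct {
	Key, Value string
}

// summaryProvider is what the core renders: each agent decides for itself
// what to publish, including any UI URL.
type summaryProvider interface {
	SummaryInfo() []KeyValue
}

type vibeKanban struct{ url string }

// The Vibe Kanban agent owns its summary line; the core never checks for it
// by name or discovers its port directly.
func (v vibeKanban) SummaryInfo() []KeyValue {
	return []KeyValue{{Key: "Vibe Kanban", Value: v.url}}
}

// renderSummary is core-agnostic: it prints whatever the agents provide,
// with no knowledge of any specific agent.
func renderSummary(agents []summaryProvider) string {
	var sb strings.Builder
	for _, a := range agents {
		for _, kv := range a.SummaryInfo() {
			fmt.Fprintf(&sb, "%s: %s\n", kv.Key, kv.Value)
		}
	}
	return sb.String()
}

func main() {
	out := renderSummary([]summaryProvider{vibeKanban{url: "http://localhost:57431"}})
	fmt.Print(out) // Vibe Kanban: http://localhost:57431
}
```

With this shape, removing the agent removes its summary line automatically, and no core constant or port-discovery call is coupled to any one agent.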
When all default agents are enabled, the Vibe Kanban layers appear in the base image:

**Base_Image (`bac-base:latest`):**
Add fenced code block languages to satisfy markdownlint.
The two Dockerfile examples should declare a language (for example, ```dockerfile) to clear MD040.
Also applies to: 720-720
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 698-698: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.kiro/specs/bootstrap-ai-coding/design-vibekanban.md at line 698, Two
Dockerfile example code fences lack a language tag which triggers markdownlint
MD040; locate the two Dockerfile examples in the design-vibekanban.md content
(the fenced blocks used for Dockerfile snippets) and add the language identifier
by changing the opening fences to use ```dockerfile (or ```Dockerfile) so the
code blocks declare the language and satisfy MD040.
```go
projectDir, err := os.MkdirTemp("", "bac-vibekanban-integration-*")
if err != nil {
	return fmt.Errorf("creating temp dir: %w", err)
}
dirName := filepath.Base(projectDir)
```
Temporary project directory is never cleaned up.
The directory created in setupSharedContainer() is not removed in teardown, which leaves /tmp residue across integration runs. Persist and remove it in teardownSharedContainer().
Also applies to: 194-209
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@internal/agents/vibekanban/integration_test.go` around lines 67 - 72, The
temp project directory created in setupSharedContainer (variable projectDir /
dirName) is never cleaned up; modify setupSharedContainer to persist the full
projectDir (e.g., store it on the test fixture struct or return it) so
teardownSharedContainer can access it, and in teardownSharedContainer call
os.RemoveAll(projectDir) (with error handling/logging) to delete the temp dir;
apply the same change for the other creation sites referenced (lines ~194-209)
so every MkdirTemp has a matching RemoveAll in teardown.
```bash
NOW=$(date +%%%%s)
PRUNED=()
for ts in "${RESTART_TIMES[@]}"; do
  if (( NOW - ts < WINDOW_SECONDS )); then
    PRUNED+=("$ts")
  fi
done
RESTART_TIMES=("${PRUNED[@]}")
if (( ${#RESTART_TIMES[@]} >= MAX_RESTARTS )); then
  echo "vibe-kanban-supervisor: exceeded $MAX_RESTARTS restarts in ${WINDOW_SECONDS}s, giving up" >&2
  exit 1
fi
RESTART_TIMES+=("$(date +%%%%s)")
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
rg -n 'date \+\%\%\%\%s|date \+\%\%s' internal/agents/vibekanban/vibekanban.go
sed -n '84,102p' internal/agents/vibekanban/vibekanban.go
```

Repository: koudis/bootstrap-ai-coding
Length of output: 173

🏁 Script executed:

```bash
cat -n internal/agents/vibekanban/vibekanban.go | sed -n '80,105p'
```

Repository: koudis/bootstrap-ai-coding
Length of output: 1061

🏁 Script executed:

```bash
rg -n "NOW=\|date.*s" internal/agents/vibekanban/vibekanban.go -A 2 -B 2
```

Repository: koudis/bootstrap-ai-coding
Length of output: 52

🏁 Script executed:

```bash
rg -n "vibe-kanban-supervisor|NOW=\|date \+" internal/agents/vibekanban/vibekanban.go -B 5 -A 5
```

Repository: koudis/bootstrap-ai-coding
Length of output: 1521

🏁 Script executed:

```bash
rg -n "func buildSupervisorScript" internal/agents/vibekanban/vibekanban.go -A 50
```

Repository: koudis/bootstrap-ai-coding
Length of output: 1776
Restart-window timestamps are incorrectly escaped in the generated script.
On lines 87 and 99, `date +%%%%s` in the fmt.Sprintf format string generates `date +%%s` in the bash script, which outputs a literal `%s` instead of epoch seconds. This breaks the 60-second restart-window pruning logic.
💡 Proposed fix

```diff
- NOW=$(date +%%%%s)
+ NOW=$(date +%%s)
@@
- RESTART_TIMES+=("$(date +%%%%s)")
+ RESTART_TIMES+=("$(date +%%s)")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
NOW=$(date +%%s)
PRUNED=()
for ts in "${RESTART_TIMES[@]}"; do
  if (( NOW - ts < WINDOW_SECONDS )); then
    PRUNED+=("$ts")
  fi
done
RESTART_TIMES=("${PRUNED[@]}")
if (( ${#RESTART_TIMES[@]} >= MAX_RESTARTS )); then
  echo "vibe-kanban-supervisor: exceeded $MAX_RESTARTS restarts in ${WINDOW_SECONDS}s, giving up" >&2
  exit 1
fi
RESTART_TIMES+=("$(date +%%s)")
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@internal/agents/vibekanban/vibekanban.go` around lines 87 - 99, The generated
bash script uses "date +%%%%s" in the fmt.Sprintf format string which becomes
"date +%%s" in the output (causing literal "%s" in bash); change the format
token to "date +%%s" (i.e., use two percent signs in the Go format string) where
the restart timestamps are appended/checked (the places that build the script
around RESTART_TIMES, the loop that prunes by WINDOW_SECONDS and the line that
appends a new timestamp) so the produced script contains "date +%s" and returns
epoch seconds as intended.
```go
exitCode, err = docker.ExecInContainer(ctx, c, containerID, []string{"pgrep", "-f", "vibe-kanban"})
if err != nil {
	return fmt.Errorf("vibe-kanban health check failed (process check): %w", err)
}
if exitCode == 0 {
	return nil // Process is running
}
```
Process health check can false-positive on the supervisor process.
pgrep -f "vibe-kanban" may match vibe-kanban-supervisor.sh, so health can pass even if the actual web server is down. Use a stricter match for the service process.
💡 Proposed fix

```diff
-exitCode, err = docker.ExecInContainer(ctx, c, containerID, []string{"pgrep", "-f", "vibe-kanban"})
+exitCode, err = docker.ExecInContainer(ctx, c, containerID, []string{"pgrep", "-x", "vibe-kanban"})
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
exitCode, err = docker.ExecInContainer(ctx, c, containerID, []string{"pgrep", "-x", "vibe-kanban"})
if err != nil {
	return fmt.Errorf("vibe-kanban health check failed (process check): %w", err)
}
if exitCode == 0 {
	return nil // Process is running
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@internal/agents/vibekanban/vibekanban.go` around lines 164 - 170, The health
check currently uses docker.ExecInContainer with pgrep -f "vibe-kanban", which
can match supervisor scripts like "vibe-kanban-supervisor.sh" and produce false
positives; update the pgrep invocation in the same docker.ExecInContainer call
to use a stricter match (e.g., use pgrep -x "vibe-kanban" or pgrep -f with a
word-boundary/anchor) so it only matches the actual service process name, keep
the existing exitCode handling (exitCode == 0 means running) and error wrapping
in the fmt.Errorf return.
```go
danglingFilter := filters.NewArgs()
danglingFilter.Add("dangling", "true")
if _, err := api.ImagesPrune(ctx, danglingFilter); err != nil {
	fmt.Printf("warning: pruning dangling images: %v\n", err)
}
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify prune-filter scope in purge flow
rg -n -C3 'danglingFilter|ImagesPrune' internal/cmd/purge.go
```

Repository: koudis/bootstrap-ai-coding
Length of output: 911

🏁 Script executed:

```bash
# Check for any label usage or BAC-managed tag patterns
rg -i 'bac\.managed|label.*bac|managed.*label' --type go
```

Repository: koudis/bootstrap-ai-coding
Length of output: 5277

🏁 Script executed:

```bash
# Look at full RunPurgeWith function to understand design and intent
sed -n '40,100p' internal/cmd/purge.go
```

Repository: koudis/bootstrap-ai-coding
Length of output: 2084

🏁 Script executed:

```bash
# Check if images are created with labels anywhere in the codebase
rg -i 'label.*=|Add.*label' --type go | head -20
```

Repository: koudis/bootstrap-ai-coding
Length of output: 623
Scope dangling-image pruning to BAC-owned artifacts.
Line 85 prunes all dangling images on the host, not just BAC-related ones. The function first filters to BAC-managed images using purgeFilter(), but the ImagesPrune call bypasses that scope and operates globally. This can delete unrelated local Docker cache/images outside this tool's ownership.
Add the bac.managed=true label filter to match the pattern used elsewhere in the codebase (stop.go, runner.go):
🔧 Safer baseline change

```diff
 danglingFilter := filters.NewArgs()
 danglingFilter.Add("dangling", "true")
+danglingFilter.Add("label", "bac.managed=true")
 if _, err := api.ImagesPrune(ctx, danglingFilter); err != nil {
 	fmt.Printf("warning: pruning dangling images: %v\n", err)
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
danglingFilter := filters.NewArgs()
danglingFilter.Add("dangling", "true")
danglingFilter.Add("label", "bac.managed=true")
if _, err := api.ImagesPrune(ctx, danglingFilter); err != nil {
	fmt.Printf("warning: pruning dangling images: %v\n", err)
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@internal/cmd/purge.go` around lines 83 - 87, The dangling-images prune call
currently removes all host dangling images because danglingFilter only sets
"dangling=true" and ImagesPrune is called directly; restrict pruning to
BAC-owned artifacts by adding the BAC label filter (e.g. "label",
"bac.managed=true") to the filter set (or merge/replace danglingFilter with the
existing purgeFilter() usage) before calling api.ImagesPrune so the prune only
targets images matching both dangling=true and bac.managed=true; update the code
that builds danglingFilter (symbol: danglingFilter) or call purgeFilter() and
pass that filter into ImagesPrune (symbol: ImagesPrune) to scope the operation.
```go
// Generate random AgentInfo (0–5 entries).
infoCount := rapid.IntRange(0, 5).Draw(t, "infoCount")
var agentInfo []agent.KeyValue
for i := 0; i < infoCount; i++ {
	key := rapid.StringMatching(`[A-Za-z][A-Za-z0-9 ]*`).Draw(t, fmt.Sprintf("key%d", i))
	value := rapid.StringMatching(`[a-zA-Z0-9:/._-]+`).Draw(t, fmt.Sprintf("value%d", i))
	agentInfo = append(agentInfo, agent.KeyValue{Key: key, Value: value})
}

summary := cmd.SessionSummary{
	DataDir:       dataDir,
	ProjectDir:    projectDir,
	SSHPort:       sshPort,
	SSHConnect:    sshConnect,
	EnabledAgents: agents,
	AgentInfo:     agentInfo,
}

output := cmd.FormatSessionSummary(summary)
lines := strings.Split(output, "\n")

// (a) Every KeyValue.Key and KeyValue.Value appears in the output.
for _, kv := range agentInfo {
	require.Contains(t, output, kv.Key+":",
		"output must contain key %q with colon", kv.Key)
	require.Contains(t, output, kv.Value,
		"output must contain value %q", kv.Value)
}

// (b) All agent info lines appear after the "Enabled agents" line.
enabledAgentsLineIdx := -1
for i, line := range lines {
	if strings.HasPrefix(line, "Enabled agents:") {
		enabledAgentsLineIdx = i
		break
	}
}
require.NotEqual(t, -1, enabledAgentsLineIdx,
	"output must contain 'Enabled agents:' line")

for _, kv := range agentInfo {
	for i, line := range lines {
		if strings.Contains(line, kv.Key+":") && strings.Contains(line, kv.Value) {
			require.Greater(t, i, enabledAgentsLineIdx,
				"agent info line for key %q must appear after 'Enabled agents:' line", kv.Key)
		}
	}
}
```
Match exact agent-info lines in this property test.
The current scan treats any line containing the key and value as a hit. Generated keys like SSH port or Data directory can collide with the five standard summary lines and make this property fail even when FormatSessionSummary is correct.
Suggested tightening

```diff
-	for _, kv := range agentInfo {
-		for i, line := range lines {
-			if strings.Contains(line, kv.Key+":") && strings.Contains(line, kv.Value) {
-				require.Greater(t, i, enabledAgentsLineIdx,
-					"agent info line for key %q must appear after 'Enabled agents:' line", kv.Key)
-			}
-		}
-	}
+	for _, kv := range agentInfo {
+		expectedLine := fmt.Sprintf("%-17s%s", kv.Key+":", kv.Value)
+		foundIdx := -1
+		for i, line := range lines {
+			if line == expectedLine {
+				foundIdx = i
+				break
+			}
+		}
+		require.NotEqual(t, -1, foundIdx,
+			"output must contain the exact agent info line for key %q", kv.Key)
+		require.Greater(t, foundIdx, enabledAgentsLineIdx,
+			"agent info line for key %q must appear after 'Enabled agents:' line", kv.Key)
+	}
```

🤖 Prompt for AI Agents

```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `internal/cmd/root_test.go` around lines 482-529: the test falsely matches
any line containing both the key and the value. For each agent.KeyValue in
agentInfo, build the exact agent-info line that FormatSessionSummary emits
(e.g. fmt.Sprintf("%-17s%s", kv.Key+":", kv.Value)), assert that this exact
line appears in lines, and that its index is greater than the detected
"Enabled agents:" index, replacing the current contains-based checks.
```
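As a standalone illustration of the exact-line approach: the 17-column pad width below is taken from the suggested diff, not verified against the actual `FormatSessionSummary` implementation, so treat it as an assumption to check against the real formatter.

```go
package main

import "fmt"

// buildInfoLine mirrors the fixed-width format assumed in the suggested
// tightening: the key plus a colon, left-aligned in a 17-column field,
// followed immediately by the value.
func buildInfoLine(key, value string) string {
	return fmt.Sprintf("%-17s%s", key+":", value)
}

func main() {
	line := buildInfoLine("Vibe Kanban", "http://localhost:3000")
	fmt.Printf("%q\n", line)

	// Exact-line matching: only the full formatted line counts as a hit,
	// so a generated key or value that merely appears inside another
	// summary line (e.g. "SSH port:") cannot produce a false positive.
	summary := []string{"Enabled agents:  vibe-kanban", line}
	for i, l := range summary {
		if l == line {
			fmt.Println("found at index", i)
		}
	}
}
```

Comparing whole lines rather than substrings is what removes the collision class the review describes.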
```go
// Prune dangling images (untagged intermediate build layers that may still
// reference bac-base as their parent).
danglingFilter := filters.NewArgs()
danglingFilter.Add("dangling", "true")
if _, err := c.ImagesPrune(ctx, danglingFilter); err != nil {
	fmt.Fprintf(os.Stderr, "warning: pruning dangling images: %v\n", err)
}
```
Keep `--purge` scoped to BAC-owned artifacts.
`ImagesPrune` with only `dangling=true` is daemon-wide, so this path can delete unrelated dangling layers and build cache from other projects. That makes a BAC-scoped purge destructive outside the tool's own resources.
🤖 Prompt for AI Agents

```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `internal/cmd/root.go` around lines 394-400: the call to c.ImagesPrune with
only danglingFilter (dangling=true) is daemon-wide and can remove unrelated
images. Scope the purge to BAC-owned artifacts: either add a BAC-specific
filter before calling c.ImagesPrune (e.g. danglingFilter.Add("label",
"<BAC-ownership-label>=true") or danglingFilter.Add("reference",
"<bac-repo-or-name-pattern>")), or replace the ImagesPrune call with a
targeted flow that lists images via c.ImageList filtered for BAC-owned images
and then removes those specific image IDs, so --purge only affects BAC
resources.
```
```go
sshPort := 22222
```
Use dynamically allocated ports in these integration tests.
Lines 653, 765, and 877 hardcode SSH ports. If any of them is already in use on the host, these tests become flaky. Reuse `findFreePort()` here, as done elsewhere in this file.
💡 Proposed fix

```diff
-	// Use a high port to avoid conflicts
-	sshPort := 22222
+	sshPort, err := findFreePort()
+	require.NoError(t, err, "finding free port")
@@
-	// Use a high port for SSH to avoid conflicts
-	sshPort := 22224
+	sshPort, err := findFreePort()
+	require.NoError(t, err, "finding free port")
@@
-	// Use a different high port to avoid conflicts with host network test
-	sshPort := 22223
+	sshPort, err := findFreePort()
+	require.NoError(t, err, "finding free port")
```

Also applies to: 765-766, 877-878
🤖 Prompt for AI Agents

```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `internal/docker/integration_test.go` around lines 653-654 (and the similar
declarations near lines 765 and 877): replace the hardcoded SSH port
assignments (sshPort := 22222, 22223, 22224) with dynamically allocated ports
via the existing helper, following the pattern used elsewhere in this file
(sshPort, err := findFreePort() followed by require.NoError(t, err)), and
ensure dependent uses continue to reference sshPort so each test binds to an
available port.
```
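The repo's `findFreePort()` helper is referenced but not quoted in this thread; a minimal stdlib sketch of such a helper (the actual signature in the repository may differ) looks like:

```go
package main

import (
	"fmt"
	"net"
)

// findFreePort asks the kernel for an ephemeral TCP port by listening on
// port 0, then closes the listener and returns the port it was assigned.
func findFreePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := findFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated port in range:", port > 0 && port < 65536)
}
```

Note the inherent race: another process can grab the port between `Close()` and the test's own bind, so this reduces flakiness rather than eliminating it.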
```go
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	// Match the containers/create endpoint (path includes version prefix)
	if r.Method == http.MethodPost && r.URL.Path[len(r.URL.Path)-18:] == "/containers/create" {
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify the unsafe fixed-length path slicing is present.
rg -n 'r\.URL\.Path\[len\(r\.URL\.Path\)-18:\]' internal/docker/runner_network_test.go
```

🏁 Script executed:

```bash
cat -n internal/docker/runner_network_test.go | head -60
```

🏁 Script executed:

```bash
rg -n "http.MethodPost|POST" internal/docker/runner_network_test.go
```

🏁 Script executed:

```bash
wc -l internal/docker/runner_network_test.go && tail -100 internal/docker/runner_network_test.go
```
Use safe suffix matching for `/containers/create` detection.
At Line 39, fixed-length slicing on `r.URL.Path` panics for any path shorter than 18 characters. Use `strings.HasSuffix()` instead to safely check the endpoint.
💡 Proposed fix

```diff
 import (
 	"context"
 	"encoding/json"
 	"fmt"
 	"net/http"
 	"net/http/httptest"
+	"strings"
 	"testing"
@@
-	if r.Method == http.MethodPost && r.URL.Path[len(r.URL.Path)-18:] == "/containers/create" {
+	if r.Method == http.MethodPost && strings.HasSuffix(r.URL.Path, "/containers/create") {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// ... handler function ...
if r.Method == http.MethodPost && strings.HasSuffix(r.URL.Path, "/containers/create") {
```
🤖 Prompt for AI Agents

```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `internal/docker/runner_network_test.go` at line 39: the handler checks the
endpoint by slicing r.URL.Path[len(r.URL.Path)-18:], which panics for short
paths. Replace the fixed-length slice with strings.HasSuffix(r.URL.Path,
"/containers/create") and ensure strings is imported; keep the rest of the
condition (r.Method == http.MethodPost) unchanged so the branch still triggers
only for POST to the create endpoint.
```
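The panic the review describes is easy to reproduce in isolation. The sketch below contrasts the original fixed-length slice with the suggested `strings.HasSuffix` check (the path values are illustrative, not taken from the test file):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesCreateUnsafe reproduces the original check: it slices a fixed 18
// bytes off the end of the path, which panics when the path is shorter.
func matchesCreateUnsafe(path string) (ok bool, panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	ok = path[len(path)-18:] == "/containers/create"
	return ok, false
}

// matchesCreateSafe is the suggested replacement: HasSuffix never panics,
// regardless of path length.
func matchesCreateSafe(path string) bool {
	return strings.HasSuffix(path, "/containers/create")
}

func main() {
	_, panicked := matchesCreateUnsafe("/_ping") // shorter than 18 bytes
	fmt.Println(panicked)                        // the slice expression panics
	fmt.Println(matchesCreateSafe("/v1.47/containers/create"))
	fmt.Println(matchesCreateSafe("/_ping"))
}
```

In a mock Docker API server any stray short request (a health check, a truncated probe) would crash the handler goroutine under the slice version, which is why the suffix check is the safer test double.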
Summary by CodeRabbit
New Features
`--host-network-off` flag to switch container networking from host mode to bridge mode.