HYPERFLEET-573 - test: add test cases for maestro#32
tzhou5 wants to merge 3 commits into openshift-hyperfleet:main from
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: (none yet). The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the corresponding setting, and manage reviews with the usual CodeRabbit commands.
Walkthrough
Adds a new comprehensive test-design document at test-design/testcases/maestro.md for the Maestro Transportation Layer adapter. The document defines end-to-end and targeted test scenarios (ManifestWork creation and status feedback, resource-generation tracking, transport mode comparison, NestedDiscovery status bridging, multi-target routing, Maestro unavailability, invalid targetCluster, and TLS/mTLS validation), lists environment and setup prerequisites, provides step-by-step actions, expected outcomes, status/metadata checks, idempotency notes, concrete commands, and cleanup procedures.

Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Tester as Test Runner
    participant MGMT as Management Cluster API
    participant Maestro as Maestro Transport
    participant Target as Target Cluster Agent / NestedDiscovery
    participant K8s as Target Kubernetes API
    Tester->>MGMT: Create ManifestWork (resources, targetCluster)
    MGMT->>Maestro: Persisted ManifestWork / Notify transport
    Maestro->>Target: Deliver resources (Create/Skip, Apply)
    Target->>K8s: Apply resources
    K8s-->>Target: Pod/CR status
    Target-->>Maestro: Resource apply status & discovery status
    Maestro-->>MGMT: Update ManifestWork status (conditions, per-resource status)
    MGMT-->>Tester: Status available (kubectl / API)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 3
🧹 Nitpick comments (1)
test-design/testcases/maestro.md (1)
163-165: Selecting `.items[0]` makes status assertions order-dependent and flaky.
Several checks assume the first status entry is the relevant adapter result. If multiple adapters/reporters exist, ordering can differ. Filter by adapter name (or latest observed_time) before assertions.
Suggested jq pattern
```diff
-curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}/statuses | jq '.items[0]'
+curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}/statuses \
+  | jq --arg adapter "${ADAPTER_NAME}" '
+      .items
+      | map(select(.adapter == $adapter))
+      | sort_by(.observed_time)
+      | last
+    '
```
Also applies to: 370-372, 421-423, 557-559, 777-779, 897-899
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 163 - 165, The test uses a brittle selector `.items[0]` when calling the cluster statuses API; update the jq selection in the curl pipeline so it filters the .items array for the relevant adapter/report entry (e.g., match by adapter name field or pick the item with the latest observed_time) instead of taking index 0; replace each occurrence of `.items[0]` in the test file with a jq filter that finds the adapter by name or sorts/selects by observed_time before performing assertions so the checks are order-independent (apply the same change for the other occurrences noted).
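As a runnable illustration of the order-independent selection this comment recommends, the sketch below applies the same jq filter to a small invented payload. The `adapter` and `observed_time` field names come from the suggested pattern; the sample entries and the `maestro-adapter` name are made up for the demo.

```shell
#!/bin/sh
# Sample status payload with multiple adapter entries, deliberately out of order.
cat > /tmp/statuses.json <<'EOF'
{
  "items": [
    {"adapter": "other-adapter",   "observed_time": "2024-01-01T00:00:00Z", "state": "Failed"},
    {"adapter": "maestro-adapter", "observed_time": "2024-01-01T00:05:00Z", "state": "Ready"},
    {"adapter": "maestro-adapter", "observed_time": "2024-01-01T00:01:00Z", "state": "Pending"}
  ]
}
EOF

ADAPTER_NAME="maestro-adapter"

# Filter by adapter name, then take the latest entry by observed_time,
# instead of relying on the order of .items.
STATE=$(jq -r --arg adapter "${ADAPTER_NAME}" '
  .items
  | map(select(.adapter == $adapter))
  | sort_by(.observed_time)
  | last
  | .state
' /tmp/statuses.json)

echo "latest state for ${ADAPTER_NAME}: ${STATE}"
```

ISO-8601 timestamps sort correctly as plain strings, so `sort_by(.observed_time) | last` picks the newest matching entry regardless of where it sits in the array.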
#### Step 8: Cleanup
**Action:**
```bash
# Delete the ManifestWork
kubectl delete manifestwork -n ${CONSUMER_NAME} ${MANIFESTWORK_NAME} --ignore-not-found

# Delete the cluster (via namespace cleanup)
kubectl delete ns ${CLUSTER_ID} --ignore-not-found
```
Cleanup should delete Cluster objects via API, not only namespaces/ManifestWorks.
These cleanup steps remove infra artifacts but can leave HyperFleet cluster records behind, which risks data pollution and false positives in later tests. Prefer explicit API delete for the created cluster IDs in each scenario (Line 176, Line 446, Line 906) and keep namespace/ManifestWork deletion as secondary cleanup.
Suggested cleanup pattern
```diff
-# Delete the cluster (via namespace cleanup)
-kubectl delete ns ${CLUSTER_ID} --ignore-not-found
+# Delete cluster via HyperFleet API (source of truth)
+curl -s -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}
+
+# Optional: delete residual namespace/workload artifacts if needed
+kubectl delete ns ${CLUSTER_ID} --ignore-not-found
```
Also applies to: 446-459, 906-910
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-design/testcases/maestro.md` around lines 176 - 184, The cleanup
currently only deletes ManifestWork and the namespace (variables
MANIFESTWORK_NAME, CONSUMER_NAME, CLUSTER_ID) which leaves HyperFleet Cluster
records; update the cleanup to explicitly call the Cluster API delete for the
created cluster ID(s) (the Cluster resource corresponding to CLUSTER_ID) before
or alongside the existing kubectl namespace/manifestwork removal, ensuring each
scenario (the steps around the occurrences of CLUSTER_ID at lines referenced)
makes an API delete request for the created cluster record as primary cleanup
and keeps the kubectl delete of ManifestWork/namespace as secondary.
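The primary/secondary ordering in this cleanup pattern can be sketched offline by shadowing `curl` and `kubectl` with stub shell functions; the endpoint, cluster ID, and log variable below are all illustrative stand-ins, not real infrastructure.

```shell
#!/bin/sh
CLUSTER_ID="test-cluster-123"
API_URL="http://localhost:8000"   # placeholder; never contacted because of the stubs below
CLEANUP_LOG=""

# Stub out the real commands so the ordering can be demonstrated without a cluster.
curl()    { CLEANUP_LOG="${CLEANUP_LOG}curl;";    echo "stub curl $*"; }
kubectl() { CLEANUP_LOG="${CLEANUP_LOG}kubectl;"; echo "stub kubectl $*"; }

cleanup_cluster() {
  # Primary: remove the cluster record via the HyperFleet API (source of truth).
  curl -s -X DELETE "${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}"
  # Secondary: remove residual namespace/workload artifacts.
  kubectl delete ns "${CLUSTER_ID}" --ignore-not-found
}

cleanup_cluster
echo "order: ${CLEANUP_LOG}"
```

Shell functions take precedence over binaries of the same name, so the real test steps keep their shape while the stubs record that the API delete runs before the kubectl cleanup.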
test-design/testcases/maestro.md
Outdated
4. Adapter task config includes nestedDiscoveries with both `byName` and `bySelectors` patterns:
```yaml
nestedDiscoveries:
  - name: "mgmtNamespace"
    discovery:
      bySelectors:
        labelSelector:
          hyperfleet.io/resource-type: "namespace"
  - name: "mgmtConfigMap"
    discovery:
      byName: "{{ .clusterId }}"
```
mgmtConfigMap discovery key does not match the documented ConfigMap name.
In Line 504, byName: "{{ .clusterId }}" conflicts with the expected ConfigMap name "cluster-config" (Line 144). This likely prevents nested discovery from resolving the ConfigMap in this test case.
Suggested fix
```diff
 - name: "mgmtConfigMap"
   discovery:
-    byName: "{{ .clusterId }}"
+    byName: "cluster-config"
```
Also applies to: 551-553
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-design/testcases/maestro.md` around lines 494 - 505, The nested
discovery for the mgmtConfigMap is using byName: "{{ .clusterId }}" which does
not match the expected ConfigMap name "cluster-config", so update the
nestedDiscoveries entry for name "mgmtConfigMap" to use byName: "cluster-config"
(replace the template value) so the discovery resolves correctly; ensure the
change is applied wherever the mgmtConfigMap nested discovery appears (also
update the similar occurrence around the other instance noted) so the byName
value matches the documented ConfigMap name.
test-design/testcases/maestro.md
Outdated
- Adapter logs show Maestro connection error (MaestroError code 16)
- Adapter does NOT crash (pod remains Running, no CrashLoopBackOff)
🧩 Analysis chain
🌐 Web query: gRPC status codes UNAVAILABLE UNAUTHENTICATED canonical numbers
💡 Result:
- UNAVAILABLE → canonical numeric code 14 (grpc.io)
- UNAUTHENTICATED → canonical numeric code 16 (grpc.io)
Citations:
- https://grpc.io/docs/guides/status-codes/?utm_source=openai
🏁 Scripts executed against repository openshift-hyperfleet/hyperfleet-e2e: `cat -n test-design/testcases/maestro.md | sed -n '760,790p'` and `sed -n '720,775p'`
Use code 14 (UNAVAILABLE) instead of code 16 for unreachable server scenarios.
This test simulates Maestro server unavailability (Step 2 scales deployment to 0 replicas). When a gRPC endpoint is unreachable, the canonical status code is UNAVAILABLE (code 14), not code 16. Code 16 (UNAUTHENTICATED) applies to authentication failures, not server unavailability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-design/testcases/maestro.md` around lines 770 - 771, Update the test
expectation that currently references MaestroError code 16 to use code 14
(UNAVAILABLE) for server-unreachable scenarios: change the line "Adapter logs
show Maestro connection error (MaestroError code 16)" to indicate "MaestroError
code 14 (UNAVAILABLE)" so the test reflects that an unreachable gRPC endpoint
should report UNAVAILABLE (code 14) rather than UNAUTHENTICATED (code 16); keep
the second expectation "Adapter does NOT crash" unchanged.
Actionable comments posted: 2
♻️ Duplicate comments (1)
test-design/testcases/maestro.md (1)
201-210: ⚠️ Potential issue | 🟠 Major
Cleanup should remove HyperFleet cluster records, not only k8s artifacts.
Current cleanup removes namespaces/resource bundles but leaves created cluster objects in HyperFleet, which can pollute later runs and bias status-based assertions.
Also applies to: 301-307, 406-411, 512-516, 775-781
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 201 - 210, The cleanup section currently deletes only the Kubernetes namespace and Maestro resource bundle but misses removing the HyperFleet cluster records; update the cleanup steps in maestro.md (the blocks that delete "<CLUSTER_ID>-adapter2-namespace" and the resource-bundle delete curl) to also call the HyperFleet API (or appropriate CLI) to delete the cluster record for <CLUSTER_ID> and any related HyperFleet objects so runs are not polluted—add a command to remove the HyperFleet cluster by ID and ensure the same change is applied to the other listed cleanup blocks (lines around 301-307, 406-411, 512-516, 775-781).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test-design/testcases/maestro.md`:
- Around line 567-576: The test steps reference temp files that are never
created; create and populate /tmp/adapter2-task-cluster2.yaml,
/tmp/adapter2-task-modified.yaml, and /tmp/adapter2-config-tls.yaml before
calling kubectl apply. Update the procedure around the hyperfleet-adapter2-task
configmap change to: read the original task-config.yaml, modify
placementClusterName from "cluster1" to "cluster2" and change the task config
expression value to "\"cluster2\"" (e.g., using a YAML-safe edit tool or sed/yq)
to write /tmp/adapter2-task-cluster2.yaml, likewise generate
/tmp/adapter2-task-modified.yaml and /tmp/adapter2-config-tls.yaml from their
originals with the intended edits, then run the kubectl create configmap ...
--from-file=task-config.yaml=/tmp/adapter2-task-cluster2.yaml ... and similar
commands; ensure you reference the same configmap names
(hyperfleet-adapter2-task) and file names used in the kubectl commands so they
exist at apply time.
#### Step 2: Create a cluster via HyperFleet API
**Action:**
```bash
curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
  -H "Content-Type: application/json" \
  -d '{
    "kind": "Cluster",
    "name": "maestro-e2e-test-'$(date +%Y%m%d-%H%M%S)'",
    "spec": {
      "platform": {
        "type": "gcp",
        "gcp": {"projectID": "test-project", "region": "us-central1"}
      },
      "release": {"version": "4.14.0"}
    }
  }' | jq '{id: .id, name: .name, generation: .generation}'
```
Persist the created cluster ID immediately after each create call.
Several flows use <CLUSTER_ID> in later steps, but the create commands only print JSON and never store the ID. This makes the test steps non-executable as written (for example, Line 190, Line 391, Line 722, Line 888).
Suggested fix
```diff
-curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
+CLUSTER_ID=$(curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
   -H "Content-Type: application/json" \
   -d '{
   ...
-  }' | jq '{id: .id, name: .name, generation: .generation}'
+  }' | tee /tmp/cluster-create-response.json | jq -r '.id')
+
+jq '{id: .id, name: .name, generation: .generation}' /tmp/cluster-create-response.json
+echo "Using CLUSTER_ID=${CLUSTER_ID}"
```
Also applies to: 243-256, 351-364, 447-460, 586-599, 688-701, 841-854
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-design/testcases/maestro.md` around lines 112 - 128, The create-cluster
curl invocation (curl -s -X POST
http://localhost:8000/api/hyperfleet/v1/clusters) prints JSON but doesn't
persist the returned id; modify the create steps to capture the created cluster
id (jq .id) into a shell variable (e.g., CLUSTER_ID or a uniquely named variable
per scenario) immediately after each POST and replace later occurrences of the
literal <CLUSTER_ID> with that variable; apply this change to all identical
create blocks referenced (the POST at the shown snippet and the other
create-call groups noted: 243-256, 351-364, 447-460, 586-599, 688-701, 841-854)
so downstream steps use the stored id.
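A minimal offline sketch of the capture-then-reuse pattern described above, with a canned JSON file standing in for the real create response (the `c-42` id and the file contents are invented for illustration):

```shell
#!/bin/sh
# Stand-in for the JSON the create call would return.
cat > /tmp/cluster-create-response.json <<'EOF'
{"id": "c-42", "name": "maestro-e2e-test-20240101-000000", "generation": 1}
EOF

# Capture the id once, immediately after creation...
CLUSTER_ID=$(jq -r '.id' /tmp/cluster-create-response.json)

# ...then reuse the variable in later steps instead of a <CLUSTER_ID> placeholder.
echo "GET /api/hyperfleet/v1/clusters/${CLUSTER_ID}"
echo "kubectl delete ns ${CLUSTER_ID}-adapter2-namespace --ignore-not-found"
```

Saving the full response with `tee` (as in the suggested fix) keeps the pretty-printed summary available while `jq -r '.id'` feeds the variable every downstream step consumes.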
Actionable comments posted: 1
♻️ Duplicate comments (16)
test-design/testcases/maestro.md (16)
498-502: ⚠️ Potential issue | 🟠 Major
Cleanup incomplete in Test 4.
Missing HyperFleet API delete call. Apply the cleanup pattern from Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 498 - 502, Test 4's cleanup (Step 6) only runs the kubectl namespace deletion and misses the HyperFleet API deletion; replicate the cleanup pattern used in Test 1 by adding the same HyperFleet API delete call to Test 4's cleanup section (Step 6) so the adapter is removed from HyperFleet as well—locate the Test 4 cleanup block around "Step 6: Cleanup" and the Test 1 cleanup block to copy the exact HyperFleet delete request/sequence and insert it alongside the existing kubectl delete namespace command.
103-119: ⚠️ Potential issue | 🟠 Major
Cluster ID is not captured for use in subsequent steps.
The curl command pipes JSON to jq but never stores the returned `id` in a shell variable. Later steps (lines 168, 171, 181, 196) reference `<CLUSTER_ID>` as a placeholder, making the test non-executable as written. This issue was previously flagged and remains unaddressed.
💾 Suggested fix to capture the cluster ID
```diff
-curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
+CLUSTER_ID=$(curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
   -H "Content-Type: application/json" \
   -d '{
     "kind": "Cluster",
     "name": "maestro-e2e-test-'$(date +%Y%m%d-%H%M%S)'",
     "spec": {
       "platform": {
         "type": "gcp",
         "gcp": {"projectID": "test-project", "region": "us-central1"}
       },
       "release": {"version": "4.14.0"}
     }
-  }' | jq '{id: .id, name: .name, generation: .generation}'
+  }' | tee /tmp/cluster-response.json | jq -r '.id')
+
+# Display cluster details
+jq '{id: .id, name: .name, generation: .generation}' /tmp/cluster-response.json
+echo "Created cluster with ID: ${CLUSTER_ID}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 103 - 119, The cluster creation curl currently pipes JSON to jq but doesn't save the returned id; modify the curl pipeline in the "Create a cluster via HyperFleet API" step so it captures the cluster id into a shell variable (e.g., CLUSTER_ID) by extracting the id with jq -r .id, validate it's non-empty, and then reuse that CLUSTER_ID in subsequent steps that reference <CLUSTER_ID> (the existing curl invocation that produces id/name/generation is the place to change). Ensure the script echoes or exports CLUSTER_ID so later commands can consume it.
575-585: ⚠️ Potential issue | 🟠 Major
Cluster ID not captured in Test 5.
Same variable capture issue. Apply the fix pattern from Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 575 - 585, The cluster creation curl in Test 5 fails to capture the created Cluster ID; replicate the fix from Test 1 by assigning the response id to a shell variable (e.g., CLUSTER_ID) using jq -r '.id' and then use that variable in subsequent test steps; update the POST block that creates "multi-consumer-test-..." to capture and export CLUSTER_ID so later steps reference it.
345-355: ⚠️ Potential issue | 🟠 Major
Cluster ID not captured in Test 3.
Same variable capture issue. Apply the fix pattern from Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 345 - 355, The curl in Test 3 doesn't capture the created Cluster's ID; change the command to follow Test 1's pattern by assigning the response to a shell variable (e.g., cluster_id) and extracting the id with jq -r .id instead of only printing '{id: .id, name: .name}'; update any subsequent references to use that cluster_id (the existing name pattern "transport-compare-...'$(date +%Y%m%d-%H%M%S)'" and the jq extraction are the locations to modify).
438-448: ⚠️ Potential issue | 🟠 Major
Cluster ID not captured in Test 4.
Same variable capture issue. Apply the fix pattern from Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 438 - 448, The Test 4 cluster-creation curl invocation doesn't capture the created cluster's ID into a shell variable; update the POST in Test 4 (the curl that creates "nested-discovery-test-...") to assign the response id to a variable (e.g., CLUSTER_ID) by piping the curl output to jq -r '.id' and storing it (similar to Test 1), and if needed also capture the cluster name similarly; ensure subsequent test steps use this CLUSTER_ID variable.
940-956: ⚠️ Potential issue | 🟠 Major
Missing command to create the modified TLS config file.
The test extracts the original config (line 945) and provides modification instructions in comments (lines 947-948), but never generates `/tmp/adapter2-config-tls.yaml`. Line 951 attempts to apply this non-existent file. This issue was previously flagged and remains unaddressed.
🔧 Suggested fix
```diff
 # Backup original config
 kubectl get configmap hyperfleet-adapter2-config -n hyperfleet \
   -o jsonpath='{.data.adapter-config\.yaml}' > /tmp/adapter2-config-original.yaml

-# Modify insecure: true → insecure: false
-# (keep httpServerAddress as http:// - no certs provided)
+# Generate modified config: change insecure from true to false
+sed 's/insecure: true/insecure: false/' \
+  /tmp/adapter2-config-original.yaml > /tmp/adapter2-config-tls.yaml
+
+# Verify the change
+echo "Modified insecure setting:"
+grep -A2 "insecure:" /tmp/adapter2-config-tls.yaml

 kubectl create configmap hyperfleet-adapter2-config -n hyperfleet \
   --from-file=adapter-config.yaml=/tmp/adapter2-config-tls.yaml \
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 940 - 956, The test is missing the step that generates /tmp/adapter2-config-tls.yaml from the backed-up /tmp/adapter2-config-original.yaml before applying the ConfigMap; add an explicit step that reads /tmp/adapter2-config-original.yaml, updates the insecure field from true to false (keeping httpServerAddress as http:// and not adding certs), writes the result to /tmp/adapter2-config-tls.yaml, and then run the existing kubectl create configmap ... --from-file=adapter-config.yaml=/tmp/adapter2-config-tls.yaml and kubectl rollout restart deployment/hyperfleet-adapter2 -n hyperfleet to apply the change.
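The one-line `sed` rewrite suggested above can be exercised offline against a small stand-in config (the YAML structure below is assumed for illustration, not taken from the real adapter config):

```shell
#!/bin/sh
# Stand-in for the extracted adapter config.
cat > /tmp/adapter2-config-original.yaml <<'EOF'
maestro:
  httpServerAddress: http://maestro.maestro.svc:8000
  grpc:
    insecure: true
EOF

# Generate the modified config: flip insecure from true to false.
sed 's/insecure: true/insecure: false/' \
  /tmp/adapter2-config-original.yaml > /tmp/adapter2-config-tls.yaml

# Show the changed line; the rest of the file passes through untouched.
grep 'insecure:' /tmp/adapter2-config-tls.yaml
```

Writing the result to a new file keeps `/tmp/adapter2-config-original.yaml` intact for the restore step at the end of the test.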
759-765: ⚠️ Potential issue | 🟠 Major
Cleanup incomplete in Test 6.
Missing HyperFleet API delete call and using placeholder instead of variable.
🧹 Suggested cleanup
```diff
+# Delete cluster via API
+curl -s -X DELETE http://localhost:8000/api/hyperfleet/v1/clusters/${CLUSTER_ID}
+
 # Delete the namespace created by Maestro agent
-kubectl delete ns <CLUSTER_ID>-adapter2-namespace --ignore-not-found
+kubectl delete ns ${CLUSTER_ID}-adapter2-namespace --ignore-not-found
+
 # Ensure Maestro is fully restored
 kubectl get pods -n maestro --no-headers
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 759 - 765, In Test 6's Step 9 cleanup replace the literal placeholder "<CLUSTER_ID>-adapter2-namespace" with the actual test variable (e.g., CLUSTER_ID) and add the missing HyperFleet API DELETE call to fully remove the cluster from HyperFleet; locate the cleanup block referenced by "Step 9: Cleanup" / the kubectl delete line and insert an HTTP DELETE (or call to the existing hyperfleet client method used elsewhere in tests) that targets the cluster by CLUSTER_ID so both the Kubernetes namespace and the HyperFleet resource are removed.
237-247: ⚠️ Potential issue | 🟠 Major
Cluster ID not captured in Test 2.
Same issue as Test 1: the cluster creation command doesn't capture the ID into a variable, but later steps use the `<CLUSTER_ID>` placeholder. Apply the same fix pattern as suggested for Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 237 - 247, The cluster creation step in Test 2 runs the curl that returns JSON but doesn't capture the created cluster's id into a variable, yet later steps reference <CLUSTER_ID>; modify the curl invocation that posts the Cluster (the same block that currently pipes to jq '{id: .id, name: .name, generation: .generation}') to extract just the id and assign it to CLUSTER_ID (e.g., use jq -r .id) so subsequent steps use that CLUSTER_ID variable; ensure the rest of the test uses the CLUSTER_ID placeholder consistently.
545-567: ⚠️ Potential issue | 🟠 Major
Missing command to create the modified config file.
The test extracts the original config (line 551) and provides comments about what to modify (lines 553-557), but never actually generates `/tmp/adapter2-task-cluster2.yaml`. Line 561 attempts to apply this non-existent file, causing the test to fail. This issue was previously flagged and remains unaddressed.
🔧 Suggested fix to generate the modified config
```diff
 # Extract current task config
 kubectl get configmap hyperfleet-adapter2-task -n hyperfleet \
   -o jsonpath='{.data.task-config\.yaml}' > /tmp/adapter2-task-original.yaml

-# Modify placementClusterName from "cluster1" to "cluster2"
-# In the task config, change:
-#   expression: "\"cluster1\""
-# To:
-#   expression: "\"cluster2\""
+# Generate modified config: change placementClusterName from "cluster1" to "cluster2"
+sed 's/expression: "\\"cluster1\\""/expression: "\\"cluster2\\""/' \
+  /tmp/adapter2-task-original.yaml > /tmp/adapter2-task-cluster2.yaml
+
+# Verify the change
+echo "Modified placementClusterName to cluster2:"
+grep -A1 placementClusterName /tmp/adapter2-task-cluster2.yaml

 # Apply the modified config
 kubectl create configmap hyperfleet-adapter2-task -n hyperfleet \
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 545 - 567, The script extracts /tmp/adapter2-task-original.yaml but never creates /tmp/adapter2-task-cluster2.yaml, so kubectl apply fails; update the steps around the hyperfleet-adapter2-task configmap to generate the modified file (e.g., read /tmp/adapter2-task-original.yaml and replace the placementClusterName expression value "cluster1" -> "cluster2") and write the result to /tmp/adapter2-task-cluster2.yaml before the kubectl create configmap ... --from-file=task-config.yaml=/tmp/adapter2-task-cluster2.yaml command; target the configmap name hyperfleet-adapter2-task and the extracted file path /tmp/adapter2-task-original.yaml when implementing the replace (using sed, yq, awk, or similar) so the subsequent kubectl apply and rollout restart/deployment/hyperfleet-adapter2 steps operate on the actual modified file.
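Because the target value contains escaped quotes, the sed pattern uses `\\` to match each literal backslash in the file. A self-contained check against an invented stand-in file (the YAML shape mirrors the snippet quoted in the comment):

```shell
#!/bin/sh
# Stand-in for the extracted task config; the quoting matches the review's example.
cat > /tmp/adapter2-task-original.yaml <<'EOF'
placementClusterName:
  expression: "\"cluster1\""
EOF

# Rewrite the escaped string literal "cluster1" to "cluster2".
# In the sed pattern, \\ matches one literal backslash inside the YAML value.
sed 's/expression: "\\"cluster1\\""/expression: "\\"cluster2\\""/' \
  /tmp/adapter2-task-original.yaml > /tmp/adapter2-task-cluster2.yaml

grep cluster2 /tmp/adapter2-task-cluster2.yaml
```

Verifying the rewrite with `grep` before the `kubectl create configmap` step catches a silently non-matching pattern, which is the usual failure mode with this much escaping.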
675-685: ⚠️ Potential issue | 🟠 Major
Cluster ID not captured in Test 6.
Same variable capture issue. Apply the fix pattern from Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 675 - 685, The Test 6 curl invocation currently prints JSON via jq ('jq \'{id: .id, name: .name}\'') but does not capture the new cluster's ID into a shell variable, causing downstream steps to lose the Cluster ID; change the invocation to follow Test 1's pattern by assigning the created cluster's id to a variable (e.g., CLUSTER_ID) by extracting .id from the curl output (using jq -r .id or equivalent) and reuse that variable for later steps—locate the curl block that creates "maestro-unavail-test-..." and replace the current jq-only print with the variable capture approach used in Test 1.
875-887: ⚠️ Potential issue | 🟠 Major
Cleanup incomplete in Test 7.
Missing HyperFleet API delete call. Should delete cluster via API before restoring adapter config.
🧹 Suggested cleanup
```diff
+# Delete cluster via API (before restoring config)
+curl -s -X DELETE http://localhost:8000/api/hyperfleet/v1/clusters/${CLUSTER_ID}
+
 # Restore original config
 kubectl create configmap hyperfleet-adapter2-task -n hyperfleet \
   --from-file=task-config.yaml=/tmp/adapter2-task-original.yaml \
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 875 - 887, The cleanup in Test 7's "Step 6: Restore and cleanup" is incomplete: before running the configmap restore and rolling the hyperfleet-adapter2 deployment (the kubectl create configmap ... and kubectl rollout restart deployment/hyperfleet-adapter2 commands), add a call to the HyperFleet API to delete the test cluster (e.g., invoke the DELETE cluster endpoint for the created cluster ID) so the cluster is removed via API prior to restoring adapter config; locate the Test 7 teardown logic surrounding the configmap restore/rollout and insert the API delete call there, ensuring it uses the same cluster identifier produced earlier in the test and handles errors before proceeding with the kubectl restore and rollout commands.
192-201: ⚠️ Potential issue | 🟠 Major
Cleanup should delete the Cluster via the HyperFleet API.
The cleanup steps delete infrastructure artifacts (namespace, resource bundle) but leave the HyperFleet Cluster record in the database. This can cause data pollution and false positives in subsequent tests.
This issue was previously flagged and remains unaddressed.
🧹 Suggested cleanup pattern
```diff
+# Delete cluster via HyperFleet API (primary cleanup)
+curl -s -X DELETE http://localhost:8000/api/hyperfleet/v1/clusters/${CLUSTER_ID}
+
 # Delete the namespace created by Maestro agent
-kubectl delete ns <CLUSTER_ID>-adapter2-namespace --ignore-not-found
+kubectl delete ns ${CLUSTER_ID}-adapter2-namespace --ignore-not-found

 # Delete the resource bundle on Maestro (via Maestro API)
 kubectl exec -n maestro deployment/maestro -- \
-  curl -s -X DELETE http://localhost:8000/api/maestro/v1/resource-bundles/<RESOURCE_BUNDLE_ID>
+  curl -s -X DELETE http://localhost:8000/api/maestro/v1/resource-bundles/${RESOURCE_BUNDLE_ID}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 192 - 201, Update Step 8 cleanup to also remove the HyperFleet Cluster record by issuing a DELETE to the HyperFleet API for the Cluster identified by <CLUSTER_ID> (after or alongside the resource-bundle deletion), ensuring the request uses the same auth/context as other test API calls and treats missing/404 responses as non-fatal; modify the test text around "kubectl exec ... resource-bundles/<RESOURCE_BUNDLE_ID>" to add the HyperFleet DELETE for the Cluster so the DB record is removed and subsequent tests are not polluted.
**292-298:** ⚠️ Potential issue | 🟠 Major — Cleanup incomplete in Test 2.
Same cleanup issue as Test 1: missing HyperFleet API delete call and using placeholders instead of variables. Apply the same cleanup pattern as suggested for Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 292 - 298, Step 6's cleanup is missing the HyperFleet API delete and still uses literal placeholders; update the cleanup sequence to mirror Test 1 by invoking the HyperFleet API resource deletion (same call pattern used in Test 1) and replace hardcoded placeholders like <CLUSTER_ID> and <RESOURCE_BUNDLE_ID> with the corresponding test variables used elsewhere in the doc (use the same variable names/templating as Test 1), and ensure the kubectl namespace delete and the kubectl exec curl DELETE lines follow the same ordering and error-safe flags as Test 1.
**396-401:** ⚠️ Potential issue | 🟠 Major — Cleanup incomplete in Test 3.
Missing HyperFleet API delete call and variable usage. The cleanup should delete the cluster via API before removing artifacts.
🧹 Suggested cleanup

```diff
+# Delete cluster via API
+curl -s -X DELETE http://localhost:8000/api/hyperfleet/v1/clusters/${CLUSTER_ID}
+
 # Cleanup adapter1 resources (K8s transport)
-kubectl delete configmap <CLUSTER_ID>-adapter1-configmap -n hyperfleet --ignore-not-found
+kubectl delete configmap ${CLUSTER_ID}-adapter1-configmap -n hyperfleet --ignore-not-found
+
 # Cleanup adapter2 resources (Maestro transport)
-kubectl delete ns <CLUSTER_ID>-adapter2-namespace --ignore-not-found
+kubectl delete ns ${CLUSTER_ID}-adapter2-namespace --ignore-not-found
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 396 - 401, Update Test 3's "Step 6: Cleanup" to first call the HyperFleet API to delete the cluster using the same cluster identifier variable used elsewhere (e.g., CLUSTER_ID or clusterId) before running the kubectl cleanup commands; ensure the API delete call is included and uses the existing variable rather than a hardcoded value, then keep the two kubectl lines (configmap and namespace deletes) as follow-up cleanup steps.
**799-820:** ⚠️ Potential issue | 🟠 Major — Missing command to create the modified config file.
Similar to Test 5, the test extracts the original config (line 804) and provides modification instructions in comments (lines 806-810), but never generates `/tmp/adapter2-task-modified.yaml`. Line 814 attempts to apply this non-existent file.
This issue was previously flagged and remains unaddressed.
🔧 Suggested fix

```diff
 # Backup original config
 kubectl get configmap hyperfleet-adapter2-task -n hyperfleet \
   -o jsonpath='{.data.task-config\.yaml}' > /tmp/adapter2-task-original.yaml

-# Modify: change placementClusterName from "cluster1" to "non-existent-cluster"
-# In the task config, change:
-#   expression: "\"cluster1\""
-# To:
-#   expression: "\"non-existent-cluster\""
+# Generate modified config: change placementClusterName to non-existent consumer
+sed 's/expression: "\\"cluster1\\""/expression: "\\"non-existent-cluster\\""/' \
+  /tmp/adapter2-task-original.yaml > /tmp/adapter2-task-modified.yaml
+
+# Verify the change
+echo "Modified placementClusterName to non-existent-cluster:"
+grep -A1 placementClusterName /tmp/adapter2-task-modified.yaml

 # Apply modified config
 kubectl create configmap hyperfleet-adapter2-task -n hyperfleet \
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 799 - 820, The test fails to actually create /tmp/adapter2-task-modified.yaml before applying it; after backing up the original config (kubectl get ... > /tmp/adapter2-task-original.yaml) you must generate the modified file by editing the extracted task-config.yaml to change placementClusterName's expression to "\"non-existent-cluster\"" (e.g., use sed/yq/jq to replace the expression value in /tmp/adapter2-task-original.yaml and write the result to /tmp/adapter2-task-modified.yaml), then proceed with kubectl create configmap hyperfleet-adapter2-task ... --from-file=task-config.yaml=/tmp/adapter2-task-modified.yaml and the subsequent rollout restart/status calls.
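The sed replacement from the suggested fix can be exercised against a stand-in config file. The YAML fragment below (a doubly-quoted `expression` value under `placementClusterName`) is assumed from the review comments, not copied from the real task config:

```shell
# Stand-in for /tmp/adapter2-task-original.yaml extracted from the configmap;
# only the placementClusterName fragment is reproduced here.
cat > /tmp/adapter2-task-original.yaml <<'EOF'
placementClusterName:
  expression: "\"cluster1\""
EOF

# Generate the modified config that the test later applies via
# `kubectl create configmap ... --from-file=task-config.yaml=...`
sed 's/expression: "\\"cluster1\\""/expression: "\\"non-existent-cluster\\""/' \
  /tmp/adapter2-task-original.yaml > /tmp/adapter2-task-modified.yaml

# Show the changed line and its context
grep -A1 placementClusterName /tmp/adapter2-task-modified.yaml
```

The doubled backslashes in the sed expression are needed because the YAML value itself contains escaped quotes (`"\"cluster1\""`).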
**828-838:** ⚠️ Potential issue | 🟠 Major — Cluster ID not captured in Test 7.
Same variable capture issue. Apply the fix pattern from Test 1.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-design/testcases/maestro.md` around lines 828 - 838, The curl in Test 7 that creates "invalid-consumer-test-$(date...)" currently pipes directly to jq and doesn't capture the created Cluster ID; mirror the fix from Test 1 by saving the full curl response into a shell variable (e.g., resp) and then extract CLUSTER_ID via jq -r '.id' into another variable for later use; replace the direct pipe to jq '{id: .id, name: .name}' with this two-step capture so the CLUSTER_ID from the Cluster creation is available for subsequent steps.
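The two-step capture the prompt describes looks like the sketch below. The response body here is a mock with an assumed field shape; in the real test, `resp` would come from the cluster-creation `curl`:

```shell
# Mock of the HyperFleet cluster-creation response (field names assumed).
resp='{"id":"abc-123","name":"invalid-consumer-test-20240101","generation":1}'
# In the real test this would instead be:
#   resp=$(curl -s -X POST "${API_URL}/api/hyperfleet/v1/clusters" ... )

# Step 1: keep the human-readable summary the test already prints
echo "$resp" | jq '{id: .id, name: .name}'

# Step 2: extract the raw id for later steps (-r strips the JSON quotes)
CLUSTER_ID=$(echo "$resp" | jq -r '.id')
echo "CLUSTER_ID=${CLUSTER_ID}"
```

Saving the full response first means the same body can feed both the display filter and the ID extraction without calling the API twice.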
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test-design/testcases/maestro.md`:
- Around line 138-161: The test currently queries the Maestro resource-bundles
API but never captures the returned resource bundle ID, yet Step 5 uses the
placeholder <RESOURCE_BUNDLE_ID>; update Step 4 to parse the JSON from the
resource-bundles API call (e.g., using jq) to extract the desired bundle's .id
(filtering by consumer_name or cluster target like "cluster1" if needed) and
store it in a shell variable (e.g., RESOURCE_BUNDLE_ID) so Step 5 can substitute
that variable into the subsequent curl to
/api/maestro/v1/resource-bundles/<RESOURCE_BUNDLE_ID>; ensure the extraction
uses a raw string output option (jq -r) or equivalent so the ID is usable
directly in the shell command.
---
Duplicate comments:
In `@test-design/testcases/maestro.md`:
- Around line 498-502: Test 4's cleanup (Step 6) only runs the kubectl namespace
deletion and misses the HyperFleet API deletion; replicate the cleanup pattern
used in Test 1 by adding the same HyperFleet API delete call to Test 4's cleanup
section (Step 6) so the adapter is removed from HyperFleet as well—locate the
Test 4 cleanup block around "Step 6: Cleanup" and the Test 1 cleanup block to
copy the exact HyperFleet delete request/sequence and insert it alongside the
existing kubectl delete namespace command.
- Around line 103-119: The cluster creation curl currently pipes JSON to jq but
doesn't save the returned id; modify the curl pipeline in the "Create a cluster
via HyperFleet API" step so it captures the cluster id into a shell variable
(e.g., CLUSTER_ID) by extracting the id with jq -r .id, validate it's non-empty,
and then reuse that CLUSTER_ID in subsequent steps that reference <CLUSTER_ID>
(the existing curl invocation that produces id/name/generation is the place to
change). Ensure the script echoes or exports CLUSTER_ID so later commands can
consume it.
- Around line 575-585: The cluster creation curl in Test 5 fails to capture the
created Cluster ID; replicate the fix from Test 1 by assigning the response id
to a shell variable (e.g., CLUSTER_ID) using jq -r '.id' and then use that
variable in subsequent test steps; update the POST block that creates
"multi-consumer-test-..." to capture and export CLUSTER_ID so later steps
reference it.
- Around line 345-355: The curl in Test 3 doesn't capture the created Cluster's
ID; change the command to follow Test 1's pattern by assigning the response to a
shell variable (e.g., cluster_id) and extracting the id with jq -r .id instead
of only printing '{id: .id, name: .name}'; update any subsequent references to
use that cluster_id (the existing name pattern "transport-compare-...'$(date
+%Y%m%d-%H%M%S)'" and the jq extraction are the locations to modify).
- Around line 438-448: The Test 4 cluster-creation curl invocation doesn't
capture the created cluster's ID into a shell variable; update the POST in Test
4 (the curl that creates "nested-discovery-test-...") to assign the response id
to a variable (e.g., CLUSTER_ID) by piping the curl output to jq -r '.id' and
storing it (similar to Test 1), and if needed also capture the cluster name
similarly; ensure subsequent test steps use this CLUSTER_ID variable.
- Around line 940-956: The test is missing the step that generates
/tmp/adapter2-config-tls.yaml from the backed-up
/tmp/adapter2-config-original.yaml before applying the ConfigMap; add an
explicit step that reads /tmp/adapter2-config-original.yaml, updates the
insecure field from true to false (keeping httpServerAddress as http:// and not
adding certs), writes the result to /tmp/adapter2-config-tls.yaml, and then run
the existing kubectl create configmap ...
--from-file=adapter-config.yaml=/tmp/adapter2-config-tls.yaml and kubectl
rollout restart deployment/hyperfleet-adapter2 -n hyperfleet to apply the
change.
- Around line 759-765: In Test 6's Step 9 cleanup replace the literal
placeholder "<CLUSTER_ID>-adapter2-namespace" with the actual test variable
(e.g., CLUSTER_ID) and add the missing HyperFleet API DELETE call to fully
remove the cluster from HyperFleet; locate the cleanup block referenced by "Step
9: Cleanup" / the kubectl delete line and insert an HTTP DELETE (or call to the
existing hyperfleet client method used elsewhere in tests) that targets the
cluster by CLUSTER_ID so both the Kubernetes namespace and the HyperFleet
resource are removed.
- Around line 237-247: The cluster creation step in Test 2 runs the curl that
returns JSON but doesn't capture the created cluster's id into a variable, yet
later steps reference <CLUSTER_ID>; modify the curl invocation that posts the
Cluster (the same block that currently pipes to jq '{id: .id, name: .name,
generation: .generation}') to extract just the id and assign it to CLUSTER_ID
(e.g., use jq -r .id) so subsequent steps use that CLUSTER_ID variable; ensure
the rest of the test uses the CLUSTER_ID placeholder consistently.
- Around line 545-567: The script extracts /tmp/adapter2-task-original.yaml but
never creates /tmp/adapter2-task-cluster2.yaml, so kubectl apply fails; update
the steps around the hyperfleet-adapter2-task configmap to generate the modified
file (e.g., read /tmp/adapter2-task-original.yaml and replace the
placementClusterName expression value "cluster1" -> "cluster2") and write the
result to /tmp/adapter2-task-cluster2.yaml before the kubectl create configmap
... --from-file=task-config.yaml=/tmp/adapter2-task-cluster2.yaml command;
target the configmap name hyperfleet-adapter2-task and the extracted file path
/tmp/adapter2-task-original.yaml when implementing the replace (using sed, yq,
awk, or similar) so the subsequent kubectl apply and rollout
restart/deployment/hyperfleet-adapter2 steps operate on the actual modified
file.
- Around line 675-685: The Test 6 curl invocation currently prints JSON via jq
('jq \'{id: .id, name: .name}\'') but does not capture the new cluster's ID into
a shell variable, causing downstream steps to lose the Cluster ID; change the
invocation to follow Test 1's pattern by assigning the created cluster's id to a
variable (e.g., CLUSTER_ID) by extracting .id from the curl output (using jq -r
.id or equivalent) and reuse that variable for later steps—locate the curl block
that creates "maestro-unavail-test-..." and replace the current jq-only print
with the variable capture approach used in Test 1.
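The missing TLS-config generation step (flipping `insecure` from true to false) can be sketched with a plain sed edit. The field names come from the review comment above; the sample server address and the exact file layout are assumptions:

```shell
# Stand-in for the backed-up adapter config; in the real test this file comes
# from extracting the adapter config out of its ConfigMap with kubectl.
cat > /tmp/adapter2-config-original.yaml <<'EOF'
httpServerAddress: http://maestro.maestro.svc.cluster.local:8000
insecure: true
EOF

# Flip only the insecure flag; keep httpServerAddress as http:// and add no
# certs, so the adapter is expected to fail TLS validation in the test.
sed 's/^insecure: true$/insecure: false/' \
  /tmp/adapter2-config-original.yaml > /tmp/adapter2-config-tls.yaml

cat /tmp/adapter2-config-tls.yaml
```

The generated `/tmp/adapter2-config-tls.yaml` is then what the existing `kubectl create configmap ... --from-file=adapter-config.yaml=/tmp/adapter2-config-tls.yaml` and rollout-restart commands consume.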
#### Step 4: Verify ManifestWork was created on Maestro (via Maestro HTTP API)
**Action:**
- Query the Maestro resource-bundles API from inside the maestro pod:
```bash
kubectl exec -n maestro deployment/maestro -- \
  curl -s http://localhost:8000/api/maestro/v1/resource-bundles \
  | jq '.items[] | {id: .id, consumer_name: .consumer_name, version: .version,
      manifest_names: [.manifests[].metadata.name]}'
```

**Expected Result:**
- A resource bundle exists targeting `cluster1`
- Contains 2 manifests (Namespace and ConfigMap)

#### Step 5: Verify ManifestWork metadata (labels and annotations)
**Action:**
```bash
kubectl exec -n maestro deployment/maestro -- \
  curl -s http://localhost:8000/api/maestro/v1/resource-bundles/<RESOURCE_BUNDLE_ID> \
  | jq '.metadata | {labels, annotations}'
```

**Expected Result:**
- Labels include `hyperfleet.io/cluster-id`, `hyperfleet.io/generation`, `hyperfleet.io/adapter`
Resource bundle ID must be captured from the query response.
Line 156 references <RESOURCE_BUNDLE_ID> as a placeholder, but the query at lines 142-146 never captures the ID from the response. The test should extract and store the resource bundle ID for use in subsequent steps.
💾 Suggested fix to capture the resource bundle ID

```diff
 # Query the Maestro resource-bundles API from inside the maestro pod:
+RESOURCE_BUNDLE_ID=$(kubectl exec -n maestro deployment/maestro -- \
+  curl -s http://localhost:8000/api/maestro/v1/resource-bundles \
+  | jq -r --arg cid "${CLUSTER_ID}" \
+      '.items[] | select(.metadata.labels["hyperfleet.io/cluster-id"] == $cid) | .id')
+
+echo "Found resource bundle ID: ${RESOURCE_BUNDLE_ID}"
+
+# Display resource bundle details:
 kubectl exec -n maestro deployment/maestro -- \
   curl -s http://localhost:8000/api/maestro/v1/resource-bundles \
   | jq '.items[] | {id: .id, consumer_name: .consumer_name, version: .version,
       manifest_names: [.manifests[].metadata.name]}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-design/testcases/maestro.md` around lines 138 - 161, The test currently
queries the Maestro resource-bundles API but never captures the returned
resource bundle ID, yet Step 5 uses the placeholder <RESOURCE_BUNDLE_ID>; update
Step 4 to parse the JSON from the resource-bundles API call (e.g., using jq) to
extract the desired bundle's .id (filtering by consumer_name or cluster target
like "cluster1" if needed) and store it in a shell variable (e.g.,
RESOURCE_BUNDLE_ID) so Step 5 can substitute that variable into the subsequent
curl to /api/maestro/v1/resource-bundles/<RESOURCE_BUNDLE_ID>; ensure the
extraction uses a raw string output option (jq -r) or equivalent so the ID is
usable directly in the shell command.
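The label-based `select` filter can be exercised against a mock list payload shaped after the suggested fix above; the field layout (`.items[].metadata.labels`) is an assumption about the Maestro API response:

```shell
# Mock resource-bundles list response (structure assumed from the suggested fix).
bundles='{"items":[
  {"id":"rb-1","metadata":{"labels":{"hyperfleet.io/cluster-id":"abc-123"}}},
  {"id":"rb-2","metadata":{"labels":{"hyperfleet.io/cluster-id":"other"}}}]}'
CLUSTER_ID="abc-123"

# Pick the bundle whose cluster-id label matches, and emit its raw id (-r).
RESOURCE_BUNDLE_ID=$(echo "$bundles" | jq -r --arg cid "${CLUSTER_ID}" \
  '.items[] | select(.metadata.labels["hyperfleet.io/cluster-id"] == $cid) | .id')
echo "RESOURCE_BUNDLE_ID=${RESOURCE_BUNDLE_ID}"
```

Filtering by the `hyperfleet.io/cluster-id` label rather than by `consumer_name` keeps the extraction correct even when multiple tests target the same consumer.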
`test-design/testcases/maestro.md` (Outdated)
#### Step 2: Create a cluster via HyperFleet API
**Action:**
```bash
curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
```
You can use a placeholder to replace `http://localhost:8000/`.
You are right. Replaced all HyperFleet API URLs with the `${API_URL}` placeholder.
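That replacement can be as simple as defining the placeholder once at the top of the test doc; the default value shown here is an assumption for local runs:

```shell
# Define once; every later curl uses ${API_URL} instead of a hardcoded host.
API_URL="${API_URL:-http://localhost:8000}"

echo "POST ${API_URL}/api/hyperfleet/v1/clusters"
```

The `:-` expansion lets CI override the endpoint via the environment while local runs keep the default.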