openshift/velero:oadp-dev rebase + Adapt ToSystemAffinity calls to new Velero API signature#2096
Conversation
**Walkthrough:** Updated Go module dependencies; controller code changed to aggregate per-entry LoadAffinity NodeSelectorTerms using the revised ToSystemAffinity signature; CRD manifests add a new `caCertRef` field.
**Changes**
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant Controller as Controller
    participant KubeUtil as ToSystemAffinity()
    participant K8sObj as PodTemplate (Deployment/DaemonSet)
    Controller->>KubeUtil: ToSystemAffinity(LoadAffinity#1, nil)
    KubeUtil-->>Controller: NodeSelectorTerms#1
    Controller->>KubeUtil: ToSystemAffinity(LoadAffinity#N, nil)
    KubeUtil-->>Controller: NodeSelectorTerms#N
    Controller->>Controller: Aggregate NodeSelectorTerms[]
    alt terms exist
        Controller->>K8sObj: set .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
    else no terms
        Controller-->>K8sObj: leave affinity unset
    end
```
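The control flow in the sequence diagram can be sketched as follows. This is a minimal, self-contained illustration, not the operator's actual code: the types here are simplified stand-ins for the real `k8s.io/api/core/v1` and Velero `kube` types, and `toSystemAffinity` only mimics the per-entry call shape.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real Kubernetes/Velero types.
type NodeSelectorTerm struct{ Exprs []string }
type Affinity struct{ Terms []NodeSelectorTerm }
type LoadAffinity struct{ Exprs []string }

// toSystemAffinity mimics one per-entry call from the diagram:
// a single LoadAffinity in, its NodeSelectorTerms out.
func toSystemAffinity(a *LoadAffinity) []NodeSelectorTerm {
	if a == nil || len(a.Exprs) == 0 {
		return nil
	}
	return []NodeSelectorTerm{{Exprs: a.Exprs}}
}

// applyAffinity aggregates the per-entry terms; when no terms were
// produced it returns nil, matching the "leave affinity unset" branch.
func applyAffinity(configs []*LoadAffinity) *Affinity {
	var terms []NodeSelectorTerm
	for _, c := range configs {
		terms = append(terms, toSystemAffinity(c)...)
	}
	if len(terms) == 0 {
		return nil
	}
	return &Affinity{Terms: terms}
}

func main() {
	aff := applyAffinity([]*LoadAffinity{
		{Exprs: []string{"zone In a"}},
		{Exprs: []string{"zone In b"}},
	})
	fmt.Println(len(aff.Terms)) // two separate terms survive aggregation
}
```

Because each entry contributes its own `NodeSelectorTerm`, the scheduler treats the entries as ORed alternatives rather than one ANDed term.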
**Estimated code review effort:** 🎯 4 (Complex), ⏱️ ~45 minutes
**Pre-merge checks:** ✅ 4 passed, ❌ 1 failed (1 warning)
Pull request overview
This PR rebases on the openshift/velero:oadp-dev branch and adapts code to match a breaking change in the upstream Velero API, where kube.ToSystemAffinity now accepts (*LoadAffinity, *NodeSelector) instead of []*LoadAffinity.
Changes:
- Updated go.mod/go.sum to use a newer version of openshift/velero (0.10.2-0.20260219202759-5cf28471570d) that includes the API signature change
- Modified affinity handling logic in nodeagent.go and velero.go to merge multiple LoadAffinityConfig entries into a single LoadAffinity structure
- Updated numerous indirect dependency versions in go.mod and go.sum
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| internal/controller/velero.go | Merges multiple LoadAffinityConfig entries into a single LoadAffinity struct before calling ToSystemAffinity with the new signature |
| internal/controller/nodeagent.go | Applies the same LoadAffinity merging logic for node agent configuration |
| go.mod | Updates the openshift/velero replace directive to the newer version and bumps various direct dependencies |
| go.sum | Updates checksums for all updated dependencies |
Actionable comments posted: 1
🧹 Nitpick comments (1)
internal/controller/nodeagent.go (1)
276-289: **Code duplication: identical affinity merging logic exists in velero.go.** The affinity merging logic (lines 277-287) is duplicated verbatim in `internal/controller/velero.go` (lines 283-293). Consider extracting this into a shared helper function to:
- Ensure consistent behavior across both controllers
- Reduce maintenance burden when the merging logic needs updates
- Make testing easier
♻️ Proposed helper function

Add to a shared location (e.g., `pkg/common/affinity.go`) or within the controller package:

```go
// MergeLoadAffinityConfigs consolidates multiple LoadAffinityConfig entries
// into a single kube.LoadAffinity by merging MatchLabels and concatenating
// MatchExpressions.
func MergeLoadAffinityConfigs(configs []*oadpv1alpha1.LoadAffinity) *kube.LoadAffinity {
	if len(configs) == 0 {
		return nil
	}
	merged := &kube.LoadAffinity{}
	merged.NodeSelector.MatchLabels = make(map[string]string)
	for _, aff := range configs {
		for k, v := range aff.NodeSelector.MatchLabels {
			merged.NodeSelector.MatchLabels[k] = v
		}
		merged.NodeSelector.MatchExpressions = append(
			merged.NodeSelector.MatchExpressions,
			aff.NodeSelector.MatchExpressions...,
		)
	}
	return merged
}
```

Then use in both files:

```diff
-mergedAffinity := &kube.LoadAffinity{}
-for _, aff := range dpa.Spec.Configuration.NodeAgent.NodeAgentConfigMapSettings.LoadAffinityConfig {
-	if mergedAffinity.NodeSelector.MatchLabels == nil {
-		mergedAffinity.NodeSelector.MatchLabels = make(map[string]string)
-	}
-	for k, v := range aff.NodeSelector.MatchLabels {
-		mergedAffinity.NodeSelector.MatchLabels[k] = v
-	}
-	mergedAffinity.NodeSelector.MatchExpressions = append(mergedAffinity.NodeSelector.MatchExpressions, aff.NodeSelector.MatchExpressions...)
-}
+mergedAffinity := MergeLoadAffinityConfigs(dpa.Spec.Configuration.NodeAgent.NodeAgentConfigMapSettings.LoadAffinityConfig)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/controller/nodeagent.go` around lines 276 - 289, The affinity merging block duplicated in nodeagent.go (the loop building mergedAffinity from dpa.Spec.Configuration.NodeAgent.NodeAgentConfigMapSettings.LoadAffinityConfig and calling kube.ToSystemAffinity to set ds.Spec.Template.Spec.Affinity) should be extracted into a shared helper (e.g., MergeLoadAffinityConfigs) in a common package; implement MergeLoadAffinityConfigs to accept the slice of LoadAffinity configs, merge MatchLabels into a single map and append MatchExpressions, return the resulting *kube.LoadAffinity, then replace the duplicated code in both nodeagent.go and velero.go to call the helper and pass its result into kube.ToSystemAffinity before assigning ds.Spec.Template.Spec.Affinity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/controller/velero.go`:
- Around line 282-295: The code currently merges all
dpa.Spec.Configuration.Velero.LoadAffinityConfig entries into one mergedAffinity
(variable mergedAffinity) and calls kube.ToSystemAffinity, producing a single
AND-term; instead, for each LoadAffinityConfig entry you must create a separate
NodeSelectorTerm so they become ORed. Update the Velero affinity construction to
iterate over dpa.Spec.Configuration.Velero.LoadAffinityConfig, convert each
aff.NodeSelector.MatchLabels into equivalent MatchExpressions (In operator) and
include aff.NodeSelector.MatchExpressions, build a slice of
corev1.NodeSelectorTerm and set it on
veleroDeployment.Spec.Template.Spec.Affinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms
(via kube.ToSystemAffinity or by constructing the NodeAffinity directly) instead
of merging into mergedAffinity; apply the same change in nodeagent.go for the
NodeAgent affinity logic (replace the mergedAffinity pattern with per-entry
NodeSelectorTerms).
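The per-entry conversion the comment above prescribes — fold each entry's `MatchLabels` into equivalent `In` expressions and keep it as its own term — can be sketched as below. This is a self-contained illustration with hypothetical stand-in structs, not the real `k8s.io/api/core/v1` types.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real Kubernetes types.
type NodeSelectorRequirement struct {
	Key      string
	Operator string // e.g. "In"
	Values   []string
}

type NodeSelectorTerm struct {
	MatchExpressions []NodeSelectorRequirement
}

// One LoadAffinityConfig entry, reduced to its node-selector part.
type LoadAffinity struct {
	MatchLabels      map[string]string
	MatchExpressions []NodeSelectorRequirement
}

// termsFromConfigs builds one NodeSelectorTerm per entry, so the terms
// are ORed by the scheduler; expressions inside a term remain ANDed.
// MatchLabels are folded into equivalent "In" expressions.
func termsFromConfigs(configs []LoadAffinity) []NodeSelectorTerm {
	terms := make([]NodeSelectorTerm, 0, len(configs))
	for _, c := range configs {
		var term NodeSelectorTerm
		for k, v := range c.MatchLabels {
			term.MatchExpressions = append(term.MatchExpressions,
				NodeSelectorRequirement{Key: k, Operator: "In", Values: []string{v}})
		}
		term.MatchExpressions = append(term.MatchExpressions, c.MatchExpressions...)
		terms = append(terms, term)
	}
	return terms
}

func main() {
	terms := termsFromConfigs([]LoadAffinity{
		{MatchLabels: map[string]string{"zone": "a"}},
		{MatchLabels: map[string]string{"zone": "b"}},
	})
	fmt.Println(len(terms)) // one term per entry: zone=a OR zone=b
}
```

Merging both entries into a single term would instead require `zone=a` AND `zone=b`, which no node can satisfy — the distinction the review is flagging.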
---
Nitpick comments:
In `@internal/controller/nodeagent.go`:
- Around line 276-289: The affinity merging block duplicated in nodeagent.go
(the loop building mergedAffinity from
dpa.Spec.Configuration.NodeAgent.NodeAgentConfigMapSettings.LoadAffinityConfig
and calling kube.ToSystemAffinity to set ds.Spec.Template.Spec.Affinity) should
be extracted into a shared helper (e.g., MergeLoadAffinityConfigs) in a common
package; implement MergeLoadAffinityConfigs to accept the slice of LoadAffinity
configs, merge MatchLabels into a single map and append MatchExpressions, return
the resulting *kube.LoadAffinity, then replace the duplicated code in both
nodeagent.go and velero.go to call the helper and pass its result into
kube.ToSystemAffinity before assigning ds.Spec.Template.Spec.Affinity.
/test 4.21-images
Force-pushed 2218743 to 2658c9a (Compare)
The upstream kube.ToSystemAffinity function changed from accepting []*LoadAffinity to (*LoadAffinity, *NodeSelector). Update both call sites in nodeagent.go and velero.go to call ToSystemAffinity per LoadAffinityConfig entry and collect NodeSelectorTerms, preserving the existing OR semantics across multiple affinity entries. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 2658c9a to 5502e59 (Compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml`:
- Around line 179-208: The manifest's caCertRef field is not implemented by the
controller; either remove caCertRef from the CRD to match the existing
controller that only uses CACert, or implement full support: add a CACertRef
field to the ObjectStorageLocation Go type (matching the CRD shape: name, key,
optional), update the BSL controller logic (bsl controller) to resolve the
referenced Secret (respecting name/key/optional), load its contents and populate
the CACert byte slice used for TLS verification, and keep the CRD validation
(required:key) in sync with the Go type.
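The Secret-resolution step described above (resolve `caCertRef` respecting name/key/optional, then populate the CA bytes used for TLS verification) can be sketched as follows. This is a hedged, self-contained sketch: `SecretRef`, `Secret`, and `resolveCACert` are hypothetical simplifications standing in for `corev1.SecretKeySelector` and a real client-go Secret lookup.

```go
package main

import "fmt"

// SecretRef mirrors the CRD shape of caCertRef: name, key, optional.
// (Hypothetical stand-in for corev1.SecretKeySelector.)
type SecretRef struct {
	Name     string
	Key      string
	Optional bool
}

// Secret is a minimal stand-in for corev1.Secret.
type Secret struct {
	Data map[string][]byte
}

// resolveCACert looks up the referenced Secret and key. Optional refs
// yield (nil, nil) when missing; required refs yield an error, which is
// the behavior a BSL controller would need when building TLS config.
func resolveCACert(secrets map[string]Secret, ref SecretRef) ([]byte, error) {
	sec, ok := secrets[ref.Name]
	if !ok {
		if ref.Optional {
			return nil, nil
		}
		return nil, fmt.Errorf("secret %q not found", ref.Name)
	}
	data, ok := sec.Data[ref.Key]
	if !ok {
		if ref.Optional {
			return nil, nil
		}
		return nil, fmt.Errorf("key %q not found in secret %q", ref.Key, ref.Name)
	}
	return data, nil
}

func main() {
	secrets := map[string]Secret{
		"ca-bundle": {Data: map[string][]byte{"ca.crt": []byte("-----BEGIN CERTIFICATE-----")}},
	}
	ca, err := resolveCACert(secrets, SecretRef{Name: "ca-bundle", Key: "ca.crt"})
	fmt.Println(len(ca) > 0, err)
}
```

In the real controller the lookup would go through the reconciler's client against the DPA namespace; the optional/required branching above is the part the CRD validation (`required: key`) must stay in sync with.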
@coderabbitai full review

✅ Actions performed: Full review triggered.
🧹 Nitpick comments (1)
config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml (1)
178-183: **Add schema-level deprecation flag for `caCert`.** The description marks the field deprecated, but schema consumers won't see it unless `deprecated: true` is set.

♻️ Suggested diff

```diff
 caCert:
   description: |-
     CACert defines a CA bundle to use when verifying TLS
     connections to the provider.
     Deprecated: Use CACertRef instead.
+  deprecated: true
   format: byte
   type: string
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml` around lines 178 - 183, The caCert field is only marked deprecated in its description but lacks the OpenAPI schema deprecation flag; update the CRD schema for the caCert property so it includes deprecated: true alongside the existing type/format/description (i.e., add deprecated: true to the caCert property definition that currently references CACert/CACertRef) so schema consumers will recognize it as deprecated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml`:
- Around line 178-208: The CRD exposes caCertRef/CACertRef but the Go API types
and controllers may not handle it; either implement full support or remove the
field from the CRD. Search for the API struct (e.g., BackupStorageLocationSpec
or similar) and controller code that builds provider TLS config for occurrences
of caCertRef/CACertRef; if missing, add a field to the Go type (use
corev1.SecretKeySelector or equivalent, JSON tag matching the CRD), update
deepcopy/gen annotations and client generation, then update the controller
reconciliation logic (where provider config / TLS is constructed) to read
spec.CACertRef, fetch the referenced Secret/key, and use its value for TLS
verification; run codegen and regenerate the CRD. If you cannot or do not want
to implement this, remove caCertRef and its schema entry from the CRD manifest
so the API surface matches implemented Go types/controllers.
---
Nitpick comments:
In `@config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml`:
- Around line 178-183: The caCert field is only marked deprecated in its
description but lacks the OpenAPI schema deprecation flag; update the CRD schema
for the caCert property so it includes deprecated: true alongside the existing
type/format/description (i.e., add deprecated: true to the caCert property
definition that currently references CACert/CACertRef) so schema consumers will
recognize it as deprecated.
@coderabbitai regarding
Summarizing 1 Failure:

/retest
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: kaovilai, mpryc, weshayutin. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing

@kaovilai: all tests passed! Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
The upstream `kube.ToSystemAffinity` function changed from accepting `[]*LoadAffinity` to `(*LoadAffinity, *NodeSelector)`. Update both call sites in `nodeagent.go` and `velero.go` to call `ToSystemAffinity` per `LoadAffinityConfig` entry and collect `NodeSelectorTerms`, preserving the existing OR semantics across multiple affinity entries.

**Why the changes were made**

The Velero dependency update changed the `kube.ToSystemAffinity` API from slice-based to single-entry. Without this fix, the operator fails to compile against the updated dependency.

**How to test the changes made**

`make test` — all 54 controller unit tests pass, including the complex multi-entry loadAffinity test case that verifies OR semantics (separate `NodeSelectorTerms`) are preserved.

**Note**
Responses generated with Claude
Summary by CodeRabbit
Chores
New Features
Refactor