
chore: Describe RBAC rules, remove unnecessary rules #380

Draft
NickLarsenNZ wants to merge 2 commits into `main` from `chore/rbac-review`

Conversation

@NickLarsenNZ
Member

Part of stackabletech/issues#798

Note

This was initially generated by a coding assistant to see how well it can inspect code and review the RBAC rules. The changes will be properly checked before reviews are requested.

  • Document each rule
  • Check the docs make sense. Rewrite where necessary
  • Remove unnecessary permissions
  • Attach explanations to PR description
  • Run all tests
  • Split operator and product roles into separate files

Removed

| Resource | Verbs removed | Reason |
| --- | --- | --- |
| `""` `events` (entire resource) | `get`, `list`, `watch`, `create`, `delete`, `patch` | The kube-rs `Recorder` uses `events.k8s.io/v1` (confirmed in kube-rs@fe69cc4 `kube-runtime/src/events.rs`). The csi-provisioner v5.3.0 sidecar also uses `events.k8s.io`. Core v1 events are never written. |
| `listeners.stackable.tech` `listeners` | `update` | No `client.update()` calls exist anywhere in the codebase. All mutation uses `apply_patch` (SSA → `patch` verb) or `merge_patch` (`patch` verb). |
| `listeners.stackable.tech` `listeners/finalizers` (entire resource) | `patch`, `create`, `delete`, `update` | No finalizer management exists in the operator code. The kube-rs `Controller` (confirmed in kube-rs@fe69cc4 `kube-runtime/src/controller/mod.rs`) does not add finalizers automatically. Added in commit b2559c3 but never used. |
| `listeners.stackable.tech` `listeners/status` | `create`, `delete`, `update` | Status subresources only need `patch` (via `client.apply_patch_status()`). `create`, `delete`, and `update` on a subresource are not meaningful here. |
| `listeners.stackable.tech` `podlisteners` | `delete`, `update` | `PodListeners` is not in `delete_orphaned_resources` (confirmed in operator-rs@7486017 `src/cluster_resources.rs`). There are no `client.delete::<PodListeners>()` calls. `update` is unused for the same reason as above. |
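Applied to the chart, the trimmed `listeners.stackable.tech` rules would reduce to something like the sketch below (the manifest layout is illustrative, not copied from the chart; the verbs reflect what the operator still exercises after the removals above):

```yaml
# Illustrative sketch of the trimmed listeners.stackable.tech rules.
- apiGroups: ["listeners.stackable.tech"]
  resources: ["listeners"]
  # update removed: all mutation goes through apply_patch / merge_patch
  verbs: ["get", "list", "watch", "create", "patch", "delete"]
- apiGroups: ["listeners.stackable.tech"]
  resources: ["listeners/status"]
  # create/delete/update removed: status is only written via SSA patch
  verbs: ["patch"]
- apiGroups: ["listeners.stackable.tech"]
  resources: ["podlisteners"]
  # delete/update removed: PodListeners are never orphan-cleaned
  verbs: ["create", "patch"]
# The listeners/finalizers rule is dropped entirely.
```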

Fixed

| What | Why |
| --- | --- |
| `create` on `listenerclasses` made unconditional (moved outside `{{- if .Values.maintenance.customResourceDefinitions.maintain }}`) | `main.rs` calls `client.create_if_missing()` on preset `ListenerClass` objects unconditionally at startup. With the old conditional, `create` would be absent when `maintain: false`, silently breaking preset `ListenerClass` deployment. |
| Misleading comment on `services` `delete` corrected | The old comment read "Needed to set an ownerRef on already existing Services". `delete` is actually needed for `ClusterResources` orphan cleanup (commit c1f49eb). |
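The conditional move could look like this in the role template (a sketch only; the surrounding template structure is assumed, not copied from the chart):

```yaml
- apiGroups: ["listeners.stackable.tech"]
  resources: ["listenerclasses"]
  verbs:
    - get
    - list
    - watch
    # create is now unconditional: main.rs calls client.create_if_missing()
    # for preset ListenerClasses at every startup
    - create
    {{- if .Values.maintenance.customResourceDefinitions.maintain }}
    - patch
    {{- end }}
```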

Kept (with justification)

| Resource | Verbs | Justification |
| --- | --- | --- |
| `""` `services` | `get`, `list`, `watch`, `create`, `patch`, `delete` | Applied via SSA (`create`+`patch`); `get` for `ReconciliationPaused` strategy; watched by controller (`list`+`watch`); orphan-cleaned by `ClusterResources` (`delete`). |
| `""` `persistentvolumes` | `get`, `list`, `watch`, `create`, `patch`, `delete` | `get`: CSI node driver fetches PVs on volume publish. `list`+`watch`: controller watches PVs for reconciliation triggers; reconcile lists PVs by label selector. `patch`+`create`: CSI node driver patches PV labels via SSA. `create`+`delete`: external-provisioner sidecar creates/deletes PVs for PVC lifecycle. |
| `""` `nodes` | `get`, `list`, `watch` | `get`: operator fetches specific nodes to resolve NodePort addresses. `list`+`watch`: external-provisioner sidecar requires these for topology-aware provisioning (`--feature-gates=Topology=true`). |
| `""` `persistentvolumeclaims` | `get`, `list`, `watch` | `get`: CSI controller and node driver fetch PVCs to read Listener selector annotations. `list`+`watch`: external-provisioner sidecar monitors PVCs to trigger provisioning. |
| `""` `endpoints` | `get`, `list`, `watch` | Controller watches Endpoints to retrigger Listener reconciliation when pod readiness changes. `get_opt` is called in `node_names_for_nodeport_listener` as a fallback for older volumes. |
| `""` `nodes/proxy` | `get` | Used by `KubeletConfig::fetch` in operator-rs@7486017 `src/utils/kubelet.rs` to read the kubelet's `configz` API for automatic cluster domain detection. |
| `storage.k8s.io` `csinodes`, `storageclasses` | `get`, `list`, `watch` | Required by the external-provisioner sidecar to discover driver topology keys (CSINodes) and determine volume binding mode (StorageClasses). |
| `""` `pods` | `get`, `patch` | CSI node driver fetches Pods to read container ports and node assignment (`get`), and merge-patches them to add a Listener membership label used as a Service selector (`patch`). |
| `events.k8s.io` `events` | `create`, `patch` | The kube-rs `Recorder` creates new `events.k8s.io/v1` Event objects and merge-patches existing ones to increment the repeat count. |
| `listeners.stackable.tech` `listenerclasses` | `get`, `list`, `watch`, `create` (+ conditional `patch`) | Watched by controller (`list`+`watch`+`get`). `create`: `main.rs` calls `client.create_if_missing()` for preset ListenerClasses unconditionally at startup. Conditional `patch` retained for potential CRD maintenance field-manager ownership. |
| `listeners.stackable.tech` `listeners` | `get`, `list`, `watch`, `create`, `patch`, `delete` | Primary reconciled resource (`list`+`watch`+`get`). CSI node driver applies Listeners via SSA for class-based volumes (`create`+`patch`). Orphaned Listeners are removed by `ClusterResources` (`delete`). |
| `listeners.stackable.tech` `listeners/status` | `patch` | `client.apply_patch_status()` in `listener_controller.rs` writes ingress address status via SSA. |
| `listeners.stackable.tech` `podlisteners` | `create`, `patch` | CSI node driver creates a `PodListeners` object on first volume mount (`create`), then merge-patches it to add entries for additional volumes (`patch`). |
| `security.openshift.io` `securitycontextconstraints` (`listener-scc`) | `use` | Allows the operator Pod to run under the `listener-scc` SCC on OpenShift (conditional on OpenShift detection). |
| `apiextensions.k8s.io` `customresourcedefinitions` | `create`, `patch`, `list`, `watch` | Required when `maintenance.customResourceDefinitions.maintain: true` (the default). The operator patches CRDs to inject the conversion webhook certificate, and lists/watches CRDs for the startup readiness condition. |
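For example, the event-recording and OpenShift entries above would translate to rules along these lines (illustrative sketch; not the literal chart output):

```yaml
- apiGroups: ["events.k8s.io"]
  resources: ["events"]
  # kube-rs Recorder: create new Event objects, patch to bump repeat counts
  verbs: ["create", "patch"]
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["listener-scc"]   # scoped to the single SCC the Pod uses
  verbs: ["use"]                    # only needed on OpenShift
```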
