Feature Description
Problem Statement:
I'm working with a ResourceGraphDefinition that includes two mutually exclusive resources (local and remote), where only one is created based on input conditions. I'm trying to expose the status of whichever resource exists through the custom resource's status fields, but I'm encountering a type-checking error when using the has() macro in status expressions.
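Condensed, the relevant pieces look like this (the full ResourceGraphDefinition is included under Additional Context below; the templates are omitted here for brevity):

spec:
  resources:
  - id: local
    includeWhen:
    - ${!(has(schema.spec.baseCluster) && has(schema.spec.baseCluster.kubeconfigRef))}
    template: {}   # Cluster API Cluster for the local variant (fields omitted here)
  - id: remote
    includeWhen:
    - ${has(schema.spec.baseCluster) && has(schema.spec.baseCluster.kubeconfigRef)}
    template: {}   # Cluster API Cluster pointing at a remote base cluster (fields omitted here)
  schema:
    status:
      clusterStatus:
        infrastructureReady: '${has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady}'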
The error I'm seeing is:
failed to build resourcegraphdefinition 'k0smotrondockercluster.cluster.example.com':
failed to build OpenAPI schema for instance status:
failed to type-check status expression "has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady"
at path "clusterStatus.infrastructureReady":
ERROR: <input>:1:5: invalid argument to has() macro
| has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady
| ....^
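As far as I can tell, this comes from CEL itself rather than anything kro-specific: the has() macro only accepts a field-selection argument of the form has(x.y), so a bare identifier like has(local) is rejected during parsing/macro expansion, regardless of what local would resolve to. Roughly:

has(schema.spec.baseCluster)                 // field selection – accepted
has(schema.spec.baseCluster.kubeconfigRef)   // field selection – accepted
has(local)                                   // bare identifier – "invalid argument to has() macro"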
This matches what I see elsewhere: the has() macro works fine in includeWhen conditions, where the argument is a field selection:
includeWhen:
- ${!(has(schema.spec.baseCluster) && has(schema.spec.baseCluster.kubeconfigRef))}

However, when I try to use it in status field expressions, the type-checker rejects it:
status:
  clusterStatus:
    infrastructureReady: '${has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady}'

Proposed Solution:
Would it be possible to support the has() macro in status field expressions? This would allow conditional status field selection based on which resource exists at runtime, similar to how includeWhen conditions work.
The use case: when a ResourceGraph contains mutually exclusive resources, being able to conditionally reference their status fields gives the custom resource a consistent status interface, regardless of which variant was created.
Alternatives Considered:
I've considered a few alternatives, but none seem as clean:
- Creating both resources and having empty status fields for the non-existent one (doesn't work due to includeWhen)
- Duplicating the entire ResourceGraphDefinition for each variant (maintenance burden)
- Using a different status structure that doesn't directly map to the underlying resource (breaks the natural abstraction)
I'm curious if there's a recommended pattern I'm missing for this scenario.
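If my reading of the has() macro above is right, one pattern that might work is testing a field on the resource rather than the resource identifier itself, e.g. has(local.status). I haven't verified whether kro's type-checker and runtime accept references to a resource that may not have been included, so this is only a sketch of the idea, not something I've gotten working:

status:
  clusterStatus:
    infrastructureReady: '${has(local.status) ? local.status.infrastructureReady : remote.status.infrastructureReady}'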
Additional Context:
Here's my complete ResourceGraphDefinition for reference:
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  annotations:
    projectsveltos.io/hash: sha256:53cc9b9a88bc0416c80aa24c240de699896da8d8e73eb0ffc8375766d81debd7
    projectsveltos.io/owner-kind: Profile
    projectsveltos.io/owner-name: k0smotrondockercluster-rgd-installation
    projectsveltos.io/owner-tier: "100"
    projectsveltos.io/reference-kind: ConfigMap
    projectsveltos.io/reference-name: k0smotrondockercluster-rgd-template
    projectsveltos.io/reference-namespace: mgmt
    projectsveltos.io/reference-tier: "100"
  creationTimestamp: "2025-12-16T05:59:50Z"
  finalizers:
  - kro.run/finalizer
  generation: 14
  labels:
    projectsveltos.io/reason: Resources
  name: k0smotrondockercluster.cluster.example.com
  resourceVersion: "1025013"
  uid: 4571adde-01b6-45df-8243-e83064caf3fc
spec:
  resources:
  - externalRef:
      apiVersion: cluster.x-k8s.io/v1beta1
      kind: ClusterClass
      metadata:
        name: ${schema.spec.clusterclassName}
        namespace: ${schema.metadata.namespace}
    id: clusterClass
    readyWhen:
    - ${clusterClass.status.conditions.all(x, x.status == "True")}
  - id: local
    includeWhen:
    - ${!(has(schema.spec.baseCluster) && has(schema.spec.baseCluster.kubeconfigRef))}
    template:
      apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      metadata:
        annotations: ${schema.spec.template.metadata.annotations}
        labels: ${schema.spec.template.metadata.labels}
        name: ${schema.metadata.name}
        namespace: ${schema.metadata.namespace}
      spec:
        clusterNetwork:
          pods:
            cidrBlocks:
            - ${schema.spec.podCIDR}
          serviceDomain: cluster.local
          services:
            cidrBlocks:
            - ${schema.spec.serviceCIDR}
        topology:
          class: ${clusterClass.metadata.name}
          controlPlane:
            replicas: ${schema.spec.controlPlaneReplicas}
          version: ${schema.spec.k0sVersion}
          workers:
            machineDeployments:
            - class: default-worker
              name: md-0
              replicas: ${schema.spec.machineDeploymentReplicas}
  - id: remote
    includeWhen:
    - ${has(schema.spec.baseCluster) && has(schema.spec.baseCluster.kubeconfigRef)}
    template:
      apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      metadata:
        annotations: ${schema.spec.template.metadata.annotations}
        labels: ${schema.spec.template.metadata.labels}
        name: ${schema.metadata.name}
        namespace: ${schema.metadata.namespace}
      spec:
        clusterNetwork:
          pods:
            cidrBlocks:
            - ${schema.spec.podCIDR}
          serviceDomain: cluster.local
          services:
            cidrBlocks:
            - ${schema.spec.serviceCIDR}
        topology:
          class: ${clusterClass.metadata.name}
          controlPlane:
            replicas: ${schema.spec.controlPlaneReplicas}
          variables:
          - name: kubeconfigRef
            value:
              key: value
              name: ${schema.spec.baseCluster.kubeconfigRef.name}
              namespace: ${schema.spec.baseCluster.kubeconfigRef.namespace}
          version: ${schema.spec.k0sVersion}
          workers:
            machineDeployments:
            - class: default-worker
              name: md-0
              replicas: ${schema.spec.machineDeploymentReplicas}
  schema:
    apiVersion: v1alpha1
    group: cluster.example.com
    kind: K0smotronDockerCluster
    spec:
      baseCluster:
        kubeconfigRef:
          name: string | required=true
          namespace: string | required=true
      clusterclassName: string | required=true
      controlPlaneReplicas: integer | required=true
      k0sVersion: string | required=true pattern="^v[0-9[]+\.[0-9[]+\.[0-9[]+\+k0s\.[0-9]+$"
      machineDeploymentReplicas: integer | required=true
      podCIDR: string | default="192.168.0.0/16" pattern="^([0-9[]{1,3}\\.){3}[0-9[]{1,3}/[0-9]{1,2}$"
      serviceCIDR: string | default="10.128.0.0/12" pattern="^([0-9[]{1,3}\\.){3}[0-9[]{1,3}/[0-9]{1,2}$"
      template:
        metadata:
          annotations: map[string]string | default={}
          labels: map[string]string | required=true
    status:
      clusterStatus:
        conditions: '${has(local) ? local.status.conditions : remote.status.conditions}'
        controlPlaneReady: '${has(local) ? local.status.controlPlaneReady : remote.status.controlPlaneReady}'
        infrastructureReady: '${has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady}'
        observedGeneration: '${has(local) ? local.status.observedGeneration : remote.status.observedGeneration}'
        phase: '${has(local) ? local.status.phase : remote.status.phase}'
        v1beta2:
          conditions: '${has(local) ? local.status.v1beta2.conditions : remote.status.v1beta2.conditions}'
status:
  conditions:
  - lastTransitionTime: "2025-12-16T06:03:21Z"
    message: kind K0smotronDockerCluster has been accepted and ready
    observedGeneration: 12
    reason: Ready
    status: "True"
    type: KindReady
  - lastTransitionTime: "2025-12-16T06:03:21Z"
    message: controller is running
    observedGeneration: 12
    reason: Running
    status: "True"
    type: ControllerReady
  - lastTransitionTime: "2025-12-18T02:20:04Z"
    message: |-
      failed to build resourcegraphdefinition 'k0smotrondockercluster.cluster.example.com': failed to build OpenAPI schema for instance status: failed to type-check status expression "has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady" at path "clusterStatus.infrastructureReady": ERROR: <input>:1:5: invalid argument to has() macro
       | has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady
       | ....^
    observedGeneration: 14
    reason: InvalidResourceGraph
    status: "False"
    type: ResourceGraphAccepted
  - lastTransitionTime: "2025-12-18T02:20:04Z"
    message: |-
      failed to build resourcegraphdefinition 'k0smotrondockercluster.cluster.example.com': failed to build OpenAPI schema for instance status: failed to type-check status expression "has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady" at path "clusterStatus.infrastructureReady": ERROR: <input>:1:5: invalid argument to has() macro
       | has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady
       | ....^
    observedGeneration: 14
    reason: InvalidResourceGraph
    status: "False"
    type: Ready
  state: Inactive

The ResourceGraph defines:
- Two mutually exclusive resources (local and remote) that create Cluster API Cluster resources
- local is created when baseCluster.kubeconfigRef is not specified
- remote is created when baseCluster.kubeconfigRef is specified
- Both resources have the same status structure, and I'd like to expose whichever one exists through the custom resource's status
Current status showing the error:
status:
  conditions:
  - lastTransitionTime: "2025-12-18T02:20:04Z"
    message: |-
      failed to build resourcegraphdefinition 'k0smotrondockercluster.cluster.example.com': failed to build OpenAPI schema for instance status: failed to type-check status expression "has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady" at path "clusterStatus.infrastructureReady": ERROR: <input>:1:5: invalid argument to has() macro
       | has(local) ? local.status.infrastructureReady : remote.status.infrastructureReady
       | ....^
    observedGeneration: 14
    reason: InvalidResourceGraph
    status: "False"
    type: ResourceGraphAccepted
  state: Inactive

I appreciate any guidance on whether this is something that could be supported, or if there's a different pattern I should be using for this use case. Thank you for your time and consideration!
- Please vote on this issue by adding a 👍 reaction to the original issue
- If you are interested in working on this feature, please leave a comment