
CONSOLE-4946: Add Configuration tab to Node view #16124

Open
jeff-phillips-18 wants to merge 1 commit into openshift:main from jeff-phillips-18:node-configuration

Conversation

Member

@jeff-phillips-18 jeff-phillips-18 commented Mar 9, 2026

Closes CONSOLE-4946
Closes CONSOLE-4948
Closes CONSOLE-4949

Description

Add Configuration tab to Node details page with extensible sub-navigation

Summary

This PR adds a new "Configuration" tab to the Node details page that displays machine and storage information using an extensible sub-navigation pattern. It also introduces a new plugin extension type (console.tab/nodeSubNavTab) that allows dynamic plugins to contribute additional sub-tabs to Node detail pages.

Key changes:

  • New plugin extension: Added console.tab/nodeSubNavTab extension type to allow plugins to add custom sub-tabs to Node detail pages (configuration or workload parent tabs)
  • Configuration tab: New tab on Node details page with vertical sub-navigation
  • Storage sub-tab (priority 70): Displays local disks (from BareMetalHost) and mounted persistent volumes with associated PVCs and Pods
  • Machine sub-tab (priority 50): Shows Machine details and MachineConfigPool information
  • Extensibility: External plugins can add their own sub-tabs using the new extension point
  • Permissions-aware: Uses new useAccessibleResources hook to handle admin vs non-admin access appropriately
  • Code organization: Refactored Node utilities into utils/ folder for better structure
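
The permissions-aware idea behind the new hook can be sketched as a small pure helper: admins get a single cluster-scoped list watch, while non-admins get one watch per namespace they can access. This is a hypothetical illustration, not the actual useAccessibleResources implementation; the type, function name, and parameters are all assumptions:

```typescript
// Hypothetical sketch of the permissions-aware watch selection:
// admins get one cluster-scoped list watch; non-admins get one watch
// per accessible namespace. All names here are illustrative.
type WatchSpec = {
  isList: boolean;
  namespace?: string; // omitted for a cluster-scoped watch
};

const buildWatchSpecs = (
  canListAtClusterScope: boolean,
  accessibleNamespaces: string[],
): WatchSpec[] =>
  canListAtClusterScope
    ? [{ isList: true }] // single cluster-wide list for admins
    : accessibleNamespaces.map((namespace) => ({ isList: true, namespace }));
```

A hook built on this shape could feed the admin branch to a single resource watch and the per-namespace branch to an aggregated multi-resource watch, which matches the admin vs non-admin split described above.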

Extension API

The new console.tab/nodeSubNavTab extension allows plugins to contribute sub-tabs:

{
  type: 'console.tab/nodeSubNavTab',
  properties: {
    parentTab: 'configuration' | 'workload',
    page: {
      tabId: 'my-tab',
      name: 'My Tab',
      priority: 60
    },
    component: { $codeRef: 'MyComponent' }
  }
}

Tabs are displayed in priority order (highest to lowest). Current priorities:

  • Configuration: Storage (70), Machine (50)
  • Workload: Pods (30)
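
The descending-priority ordering can be expressed as a small comparator. A minimal sketch assuming the extension's page shape shown above; the name tie-break is an assumption, not documented behavior:

```typescript
// Sort sub-nav tabs highest priority first. `SubNavTabPage` mirrors the
// extension's `page` property above; the name tie-break is an assumption.
type SubNavTabPage = { tabId: string; name: string; priority: number };

const sortTabs = (tabs: SubNavTabPage[]): SubNavTabPage[] =>
  [...tabs].sort((a, b) => b.priority - a.priority || a.name.localeCompare(b.name));
```

With the priorities listed above, Storage (70) sorts before Machine (50) under Configuration, and a plugin tab with priority 60 would slot between them.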

Test plan

  • Unit tests added for all new components (NodeConfiguration, NodeMachine, NodeStorage, LocalDisks, PersistentVolumes)
  • Extension integration tested with mock extensions
  • Priority-based sorting verified
  • Parent tab filtering verified
  • Manual testing: Navigate to Compute → Nodes → [any node] → Configuration tab
  • Verify Storage sub-tab displays local disks for bare metal nodes
  • Verify Storage sub-tab displays mounted PVs
  • Verify Machine sub-tab displays machine and MCP details
  • Verify permissions work correctly for non-admin users
  • Verify loading and error states display correctly

Screenshots

Node Configuration - Storage

[screenshot]

Node Configuration - Machine

[screenshots]

🤖 Generated with https://claude.com/claude-code

Summary by CodeRabbit

Release Notes

  • New Features

    • Added Configuration tab to Node Details page with Storage and Machine sub-sections
    • Added local disk information display showing disk type, model, serial number, and vendor details
    • Added persistent volumes display for mounted volumes on nodes
    • Added machine and machine config pool details viewing in node configuration
    • Enabled plugin extensions to add custom sub-tabs to node configuration
  • Documentation

    • Added documentation for node sub-navigation tab extensions

/cc @kybaker

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 9, 2026
Contributor

openshift-ci-robot commented Mar 9, 2026

@jeff-phillips-18: This pull request references CONSOLE-4946 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from TheRealJon and spadgett March 9, 2026 20:59
@openshift-ci openshift-ci bot added component/core Related to console core functionality component/sdk Related to console-plugin-sdk kind/i18n Indicates issue or PR relates to internationalization or has content that needs to be translated plugin-api-changed Categorizes a PR as containing plugin API changes labels Mar 9, 2026

Contributor

coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

This pull request adds a Configuration tab to the Node Details page with two standard sections: Storage and Machine. The Storage section displays local disk and persistent volume information for the node. The Machine section renders machine resource details and machine config pool configuration. The implementation includes a plugin extension mechanism (NodeSubNavTab) enabling additional configuration sub-tabs via dynamic extensions. A new useAccessibleResources hook provides role-aware resource fetching. Existing utilities are reorganized into a utils/ directory structure. Comprehensive unit tests cover all new components and edge cases including loading, error, and empty states.

🚥 Pre-merge checks: 3 passed
  • Title check ✅ Passed: The PR title clearly and concisely describes the main change: adding a Configuration tab to the Node view. It directly corresponds to the primary feature introduced across the changeset.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeVmUtils.spec.ts (1)

299-353: ⚠️ Potential issue | 🟡 Minor

Cover the new non-admin watch path too.

These cases drive useFlag with one mockReturnValue, so KUBEVIRT_DYNAMIC and FLAGS.CAN_LIST_NS always resolve to the same boolean. That means the new useK8sWatchResources aggregation branch in useAccessibleResources never runs here, and the permissions-aware behavior can regress without this suite noticing. Please add a case that returns different values per flag and asserts the per-namespace watch path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeVmUtils.spec.ts`
around lines 299 - 353, Add a test for useWatchVirtualMachineInstances that
covers the non-admin/per-namespace watch path by making useFlagMock return
different values for the two flags instead of a single mockReturnValue;
specifically, use mockImplementation or mockReturnValueOnce so KUBEVIRT_DYNAMIC
is true but FLAGS.CAN_LIST_NS is false (or vice versa) to trigger the
useK8sWatchResources branch in useAccessibleResources, then assert that
useK8sWatchResourcesMock was called with the per-namespace resources aggregation
and that the hook returns the expected filtered VMIs for the node; locate the
logic around useWatchVirtualMachineInstances, useFlagMock,
useK8sWatchResourcesMock, and useAccessibleResources to add this case.
frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeBareMetalUtils.spec.ts (1)

194-205: ⚠️ Potential issue | 🟡 Minor

Assert the machine watch as well.

This case only proves that the BareMetalHost request is issued. If the new Machine watch disappears, the test still passes even though useWatchBareMetalHost can no longer resolve the host for a node. Please add an expectation for the Machine groupVersionKind request too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeBareMetalUtils.spec.ts`
around lines 194 - 205, Test currently only asserts the BareMetalHost watch;
update the test for useWatchBareMetalHost to also assert that
useK8sWatchResourceMock was called for the Machine resource by adding an
expectation that a call was made with isList: true and groupVersionKind:
MachineGroupVersionKind (or the exact Machine GVK constant used in the code) and
namespaced: true. Locate the test for useWatchBareMetalHost, keep the existing
BareMetalHost assertion (BareMetalHostGroupVersionKind) and add a second
expect(...) to verify the Machine watch via useK8sWatchResourceMock so the test
fails if the Machine watch is removed.
🧹 Nitpick comments (12)
frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/NodeStorage.spec.tsx (1)

1-42: LGTM! Solid unit tests for the composition component.

The tests correctly verify that NodeStorage passes the node object to its child components. The mock setup is clean and beforeEach properly clears mocks between tests.

Consider adding a test that verifies both child components render together in a single assertion, which would catch any conditional rendering logic:

it('should render both LocalDisks and PersistentVolumes components', () => {
  render(<NodeStorage obj={mockNode} />);

  expect(screen.getByText('LocalDisks for test-node')).toBeInTheDocument();
  expect(screen.getByText('PersistentVolumes for test-node')).toBeInTheDocument();
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/NodeStorage.spec.tsx`
around lines 1 - 42, Add a new test to NodeStorage.spec.tsx that renders the
NodeStorage component and asserts both child mocks are present in a single test;
specifically, create a test (e.g., "should render both LocalDisks and
PersistentVolumes components") that calls render(<NodeStorage obj={mockNode} />)
and then verifies screen.getByText('LocalDisks for test-node') and
screen.getByText('PersistentVolumes for test-node') are in the document to catch
any conditional rendering issues.
frontend/packages/console-app/src/components/nodes/configuration/node-storage/LocalDisks.tsx (2)

80-90: Consider adding a unique key fallback.

Using disk.name as the React key is reasonable, but if two disks could theoretically share a name (e.g., multi-path devices), consider using an index fallback or combining multiple fields:

💡 Optional: More robust key
-                disks.map((disk) => <LocalDiskRow key={disk.name} obj={disk} />)
+                disks.map((disk, idx) => (
+                  <LocalDiskRow key={disk.serialNumber || disk.name || idx} obj={disk} />
+                ))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/LocalDisks.tsx`
around lines 80 - 90, The keys for the mapped rows use disk.name which may not
be unique; update the mapping in LocalDisks.tsx where disks.map(...) renders
<LocalDiskRow key={disk.name} obj={disk} /> to provide a stable unique fallback
(e.g., combine disk.name with another unique field like disk.path or disk.uuid,
or use the map index as a last-resort fallback) so each LocalDiskRow has a truly
unique key and avoids React key collisions.

67-92: Accessibility: Consider adding scope to table headers.

For better screen reader support, adding scope="col" to the <th> elements helps assistive technologies understand the table structure.

♿ Accessibility improvement
-                <th className="pf-v6-c-table__th">{t('console-app~Name')}</th>
+                <th className="pf-v6-c-table__th" scope="col">{t('console-app~Name')}</th>

Apply similarly to all <th> elements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/LocalDisks.tsx`
around lines 67 - 92, The table headers in the LocalDisks component lack column
scope attributes; update each <th> in the header row inside the LocalDisks JSX
(the header rendering that currently lists Name, Size, Type, Model, Serial
number, Vendor, HCTL) to include scope="col" so screen readers can correctly
interpret the table columns; no other logic changes needed.
frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/LocalDisks.spec.tsx (2)

107-119: Good coverage, but consider adding a test for undefined sizeBytes.

The LocalDisks component falls back to DASH when sizeBytes is undefined. Adding a test case where one disk has sizeBytes: undefined would verify this edge case renders correctly.

💡 Suggested additional test case
it('should display dash when disk size is undefined', () => {
  const hostWithUndefinedSize = {
    ...mockBareMetalHost,
    status: {
      hardware: {
        storage: [
          {
            name: '/dev/sdc',
            rotational: false,
            model: 'Unknown',
          },
        ],
      },
    },
  };
  useWatchBareMetalHostMock.mockReturnValue([hostWithUndefinedSize, true, undefined]);

  render(<LocalDisks node={mockNode} />);

  const rows = screen.getAllByRole('row');
  expect(rows[1]).toHaveTextContent('-'); // DASH for missing size
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/LocalDisks.spec.tsx`
around lines 107 - 119, Add a test that verifies LocalDisks renders a DASH when
a disk's sizeBytes is undefined: create a host variant (e.g.,
hostWithUndefinedSize) by cloning mockBareMetalHost and setting
status.hardware.storage to include an entry for '/dev/sdc' without sizeBytes,
call useWatchBareMetalHostMock.mockReturnValue([hostWithUndefinedSize, true,
undefined]), render <LocalDisks node={mockNode} />, and assert the rendered
table row for that disk contains the DASH (e.g., via screen.getAllByRole('row')
or screen.getByText('-')) to confirm the fallback behavior in LocalDisks.

75-81: Consider using a data-test attribute instead of class selector.

Querying by .loading-skeleton--table class couples the test to CSS implementation. A data-test or data-testid attribute would be more resilient to styling changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/LocalDisks.spec.tsx`
around lines 75 - 81, The test couples to CSS by querying
'.loading-skeleton--table'; update the component and test: add a stable data
attribute (e.g. data-testid="loading-skeleton-table") to the loading skeleton
element in the LocalDisks component, then change the test in LocalDisks.spec.tsx
to render(<LocalDisks node={mockNode} />) and assert using
screen.getByTestId('loading-skeleton-table') (or
container.querySelector('[data-testid="loading-skeleton-table"]')) after mocking
useWatchBareMetalHostMock; reference LocalDisks and useWatchBareMetalHostMock
when making these changes.
frontend/packages/console-app/locales/en/console-app.json (1)

365-365: Verify "None" key usage scope.

The generic key "None": "None" could potentially be reused across different contexts. Ensure this is intentional and that it doesn't conflict with any existing or future usage that might require different translations in other locales (e.g., "None" for a missing value vs. "None" for a selection option).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/packages/console-app/locales/en/console-app.json` at line 365, The
"None" translation key is too generic and may conflict across contexts; either
confirm its intended global reuse or replace it with context-specific keys
(e.g., "no_value" or "selection_none") in
frontend/packages/console-app/locales/en/console-app.json and update all code
references that use the "None" key (search for string/lookup usages that
reference "None") to the new key names; ensure corresponding entries are added
to other locale files if needed and run a quick translation lookup test to
validate no breaks.
frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx (4)

315-325: Table accessibility: Consider adding scope="col" to header cells.

For screen readers, explicit scope attributes improve table navigation:

♻️ Proposed improvement
             <thead className="pf-v6-c-table__thead">
               <tr className="pf-v6-c-table__tr">
-                <th className="pf-v6-c-table__th">{t('console-app~Name')}</th>
-                <th className="pf-v6-c-table__th">{t('console-app~PVC')}</th>
+                <th className="pf-v6-c-table__th" scope="col">{t('console-app~Name')}</th>
+                <th className="pf-v6-c-table__th" scope="col">{t('console-app~PVC')}</th>
                 ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx`
around lines 315 - 325, The table header in the PersistentVolumes component
renders plain <th> elements without scope attributes which hurts screen reader
navigation; update the header row in PersistentVolumes.tsx (the table within the
PersistentVolumes component) to add scope="col" to each <th> (Name, PVC,
Namespace, Pod, StorageClass, Capacity) so each header cell explicitly indicates
it is a column header for accessibility.

267-280: PVs with hostname label but no bound PVC are silently dropped.

In nodePVCs, if a PV has the kubernetes.io/hostname label but no matching PVC (or spec.claimRef is unset), it's filtered out. This could hide unbound local PVs from the user.

Consider whether showing unbound PVs is valuable—if so, push them to acc with persistentVolumeClaim as undefined.

♻️ Proposed fix to include unbound PVs
       return (
         nodePVs?.reduce<NodePersistentVolumeData[]>((acc, persistentVolume) => {
           const persistentVolumeClaim = pvcs.find(
             (pvc) =>
               pvc.metadata.name === persistentVolume.spec?.claimRef?.name &&
               pvc.metadata.namespace === persistentVolume.spec?.claimRef?.namespace,
           );
-          if (persistentVolumeClaim) {
-            acc.push({
-              persistentVolume,
-              persistentVolumeClaim,
-            });
-          }
+          acc.push({
+            persistentVolume,
+            persistentVolumeClaim, // may be undefined for unbound PVs
+          });
           return acc;
         }, []) ?? []
       );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx`
around lines 267 - 280, The reduce that builds nodePVCs currently drops PVs that
have the kubernetes.io/hostname label but lack a matching PVC or a
spec.claimRef; update the logic in the nodePVs?.reduce callback so that when a
persistentVolume has the hostname label (check
persistentVolume.metadata?.labels?.['kubernetes.io/hostname']) but no matching
pvc is found (or spec?.claimRef is unset), you still acc.push an entry of type
NodePersistentVolumeData with persistentVolumeClaim set to undefined; keep
existing behavior of pushing paired PV+PVC when PVC exists (the
persistentVolume, pvcs.find(...) branch) and ensure you use optional chaining on
spec.claimRef to avoid exceptions.

133-176: Performance consideration: Five simultaneous watch/list operations.

This component establishes watches for PVs, PVCs, DataVolumes, VMIs, and Pods. For clusters with many resources, this could cause:

  • Significant memory pressure from caching lists
  • Network overhead from concurrent watch connections

Consider whether PVs and PVCs could be fetched with label/field selectors to reduce payload, or if lazy loading would help.

That said, for the Node Configuration tab use case where users explicitly navigate here, the current approach is reasonable. Just flag it for monitoring if performance issues arise in large clusters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx`
around lines 133 - 176, PersistentVolumes currently opens five concurrent
watches (useK8sWatchResource for persistentVolumes/pvcs, useAccessibleResources
for dataVolumes/pods, and useWatchVirtualMachineInstances for vms) which can be
heavy; modify PersistentVolumes to reduce watch scope by adding
fieldSelector/labelSelector where possible (e.g., filter PVs/PVCs by
node-related labels or PVC namespace) or convert some watches to on-demand
fetches (useK8sGet or fetch in effect) and/or only start watches when the Node
Configuration tab/component is visible; update usages of useK8sWatchResource,
useAccessibleResources, and useWatchVirtualMachineInstances accordingly so heavy
lists are either selector-filtered or lazily loaded to lower memory/network
overhead.
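
The selector suggestion above could take the shape of a pure builder like the following. This is a hypothetical sketch: the object shape only loosely mirrors the dynamic plugin SDK's watch-resource type, the field names are illustrative, and scoping by the kubernetes.io/hostname label would only match local PVs that actually carry that label (the same label the component already filters on):

```typescript
// Hypothetical: build a watch spec scoped to one node's local PVs via the
// kubernetes.io/hostname label, instead of listing every PV in the cluster.
// Field names loosely mirror a watch-resource shape and are illustrative.
type PVWatchSpec = {
  groupVersionKind: { version: string; kind: string };
  isList: boolean;
  selector?: { matchLabels: Record<string, string> };
};

const nodeScopedPVWatch = (nodeName: string): PVWatchSpec => ({
  groupVersionKind: { version: 'v1', kind: 'PersistentVolume' },
  isList: true,
  selector: { matchLabels: { 'kubernetes.io/hostname': nodeName } },
});
```

Whether the label-based filtering now done client-side can be pushed into the watch this way depends on the PVs carrying the label, so it complements rather than replaces the existing node filtering.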

178-183: Error state conflates multiple distinct failures.

loadError is a union of five possible errors. When displayed (line 312), users see a generic message without knowing which resource failed. For debugging, consider logging the specific error or showing which resource couldn't load.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx`
around lines 178 - 183, The current loadError combines multiple distinct errors
(persistentVolumesLoadError, pvcsLoadError, dataVolumesLoadError, vmsLoadError,
podsLoadError) into a single value which hides which resource failed; update the
logic that computes loadError (in the PersistentVolumes component) to produce an
error object or tuple that includes the failing resource name and its error
(e.g., { resource: 'PVCs', error: pvcsLoadError }) or a map of resource->error,
and then use that structured value where the error is displayed to show the
resource name and the specific error message and also log the full error
(console.error or app logger) for debugging; replace references to loadError
with the new loadErrorInfo (or similar) and update the UI render path that
currently shows the generic message to include loadErrorInfo.resource and
loadErrorInfo.error.message.
frontend/packages/console-app/src/components/nodes/configuration/__tests__/NodeConfiguration.spec.tsx (1)

68-83: Test uses render but extension tests use renderWithProviders—ensure consistency.

Lines 72, 89, 101, 113, 123 use bare render(), while extension tests use renderWithProviders. If the component tree requires Redux/plugin store context for any path, inconsistent wrapper usage could mask integration issues. Consider standardizing on renderWithProviders for all tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/__tests__/NodeConfiguration.spec.tsx`
around lines 68 - 83, Tests in NodeConfiguration.spec.tsx use the bare render()
helper instead of the standardized renderWithProviders, which may miss required
Redux/plugin context; replace each render(<NodeConfiguration obj={mockNode} />)
call with renderWithProviders(<NodeConfiguration obj={mockNode} />) (including
in the "should render Storage tab by default" test and the other tests that
currently call render), and ensure renderWithProviders is imported into the test
file; keep the rest of the assertions unchanged.
frontend/packages/console-app/src/components/nodes/configuration/NodeMachine.tsx (1)

86-96: Consider using Alert components for error states to improve accessibility and UX consistency.

The plain <div> elements for error messages (lines 87, 92, 98, 117) lack semantic meaning and visual prominence. PatternFly Alert with variant="danger" provides consistent styling, proper ARIA roles, and screen reader support.

♻️ Proposed refactor using Alert
+import { Alert } from '@patternfly/react-core';
...
       {machineLoadError ? (
-        <div>{t('console-app~Error loading machine')}</div>
+        <Alert variant="danger" isInline title={t('console-app~Error loading machine')} />
       ) : machineLoaded ? (
         machine ? (
           <MachineDetails obj={machine} hideConditions />
         ) : (
-          <div>{t('console-app~Machine not found')}</div>
+          <Alert variant="info" isInline title={t('console-app~Machine not found')} />
         )
       ) : (

Also applies to: 97-121

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@frontend/packages/console-app/src/components/nodes/configuration/NodeMachine.tsx`
around lines 86 - 96, Replace the plain <div> messages in the NodeMachine render
with PatternFly Alert components to improve accessibility and UX: import Alert
from '@patternfly/react-core' and replace the machineLoadError branch (currently
rendering <div>{t('...Error loading machine')}</div>) with <Alert
variant="danger" isInline title={t('...Error loading machine')}> (or similar
inline Alert with the translated string); replace the "Machine not found" <div>
with an Alert (variant="warning" or "info") using t('...Machine not found') as
the title/content; apply the same Alert substitution for the other plain-message
<div>s in this component (lines 97-121) while keeping the existing
MachineDetails and SkeletonDetails usage intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/PersistentVolumes.spec.tsx`:
- Around line 242-258: Rename the test in PersistentVolumes.spec.tsx so its
description matches the assertion: update the it(...) title from 'should display
"No claim" when PVC is not found' to something like 'should display "No
persistent volumes found" when PV has no claimRef' so it clearly reflects the
mocked pvNoClaim setup and the expectation in the PersistentVolumes test block.

In `@frontend/packages/console-app/src/components/nodes/NodeSubNavPage.tsx`:
- Around line 57-58: activePage can be null when pages is empty, causing a
TypeError on reading activePage.component; update the code that sets Component
to safely handle null (e.g., use optional chaining or a fallback) and ensure the
render logic handles a null Component. Specifically, change the assignment
around activePage/component (referencing pages, activeTabKey, activePage,
Component) so Component = activePage?.component ?? null (or return early if no
activePage) and adjust downstream rendering to skip or render a safe placeholder
when Component is null.

In
`@frontend/packages/console-app/src/components/nodes/utils/useAccessibleResources.ts`:
- Around line 61-76: The hook currently collapses to an empty result when any
namespace has a loadError; update useAccessibleResources to treat per-namespace
failures as partial results by: 1) computing allResources by flat-mapping
namespacedResources[key].data only for namespaces that have data (so successful
watches are merged even if others errored), 2) computing loaded as
projectsLoaded && Object.keys(namespacedResources).every(key =>
namespacedResources[key].loaded || namespacedResources[key].loadError) (so
“loaded” reflects that all watches settled, either success or error), and 3)
keeping loadError as projectsLoadError ||
Object.values(namespacedResources).find(res => res.loadError)?.loadError but do
NOT return [] when loadError is present—return [allResources, loaded, loadError]
instead; update references to loaded, loadError, namespacedResources and the
return path in useAccessibleResources accordingly.

In `@frontend/packages/console-dynamic-plugin-sdk/docs/console-extensions.md`:
- Line 61: The markdown link fragment uses mixed case and won't match the
generated lowercase anchor; update the link target for the
console.tab/nodeSubNavTab entry so the fragment is all lowercase (e.g., change
the current "(`#consoletabnodeSubNavTab`)" to "(`#consoletabnodesubnavtab`)")
ensuring the link fragment matches the header-generated anchor for
console.tab/nodeSubNavTab.

---

Outside diff comments:
In
`@frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeBareMetalUtils.spec.ts`:
- Around line 194-205: Test currently only asserts the BareMetalHost watch;
update the test for useWatchBareMetalHost to also assert that
useK8sWatchResourceMock was called for the Machine resource by adding an
expectation that a call was made with isList: true and groupVersionKind:
MachineGroupVersionKind (or the exact Machine GVK constant used in the code) and
namespaced: true. Locate the test for useWatchBareMetalHost, keep the existing
BareMetalHost assertion (BareMetalHostGroupVersionKind) and add a second
expect(...) to verify the Machine watch via useK8sWatchResourceMock so the test
fails if the Machine watch is removed.

In
`@frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeVmUtils.spec.ts`:
- Around line 299-353: Add a test for useWatchVirtualMachineInstances that
covers the non-admin/per-namespace watch path by making useFlagMock return
different values for the two flags instead of a single mockReturnValue;
specifically, use mockImplementation or mockReturnValueOnce so KUBEVIRT_DYNAMIC
is true but FLAGS.CAN_LIST_NS is false (or vice versa) to trigger the
useK8sWatchResources branch in useAccessibleResources, then assert that
useK8sWatchResourcesMock was called with the per-namespace resources aggregation
and that the hook returns the expected filtered VMIs for the node; locate the
logic around useWatchVirtualMachineInstances, useFlagMock,
useK8sWatchResourcesMock, and useAccessibleResources to add this case.

---

Nitpick comments:
In `@frontend/packages/console-app/locales/en/console-app.json`:
- Line 365: The "None" translation key is too generic and may conflict across
contexts; either confirm its intended global reuse or replace it with
context-specific keys (e.g., "no_value" or "selection_none") in
frontend/packages/console-app/locales/en/console-app.json and update all code
references that use the "None" key (search for string/lookup usages that
reference "None") to the new key names; ensure corresponding entries are added
to other locale files if needed and run a quick translation lookup test to
validate no breaks.


In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/LocalDisks.spec.tsx`:
- Around line 107-119: Add a test that verifies LocalDisks renders a DASH when a
disk's sizeBytes is undefined: create a host variant (e.g.,
hostWithUndefinedSize) by cloning mockBareMetalHost and setting
status.hardware.storage to include an entry for '/dev/sdc' without sizeBytes,
call useWatchBareMetalHostMock.mockReturnValue([hostWithUndefinedSize, true,
undefined]), render <LocalDisks node={mockNode} />, and assert the rendered
table row for that disk contains the DASH (e.g., via screen.getAllByRole('row')
or screen.getByText('-')) to confirm the fallback behavior in LocalDisks.
- Around line 75-81: The test couples to CSS by querying
'.loading-skeleton--table'; update the component and test: add a stable data
attribute (e.g. data-testid="loading-skeleton-table") to the loading skeleton
element in the LocalDisks component, then change the test in LocalDisks.spec.tsx
to render(<LocalDisks node={mockNode} />) and assert using
screen.getByTestId('loading-skeleton-table') (or
container.querySelector('[data-testid="loading-skeleton-table"]')) after mocking
useWatchBareMetalHostMock; reference LocalDisks and useWatchBareMetalHostMock
when making these changes.
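The size-fallback behavior that the suggested test would pin down can be sketched as a pure formatter. This is an illustrative sketch only: the `DASH` constant and the inline GiB math stand in for the console's actual humanize helpers.

```typescript
// Illustrative fallback for the LocalDisks size column: render a dash when
// sizeBytes is missing. DASH and the GiB math are assumptions for this
// sketch, not the real console utilities.
const DASH = '-';

function formatDiskSize(sizeBytes?: number): string {
  if (sizeBytes === undefined) {
    return DASH;
  }
  // Minimal GiB formatting; the real component would use humanize helpers.
  return `${(sizeBytes / 1024 ** 3).toFixed(1)} GiB`;
}

const withSize = formatDiskSize(53687091200); // a 50 GiB disk
const withoutSize = formatDiskSize(undefined); // -> DASH
```

A test asserting on the dash then stays stable even if the humanized format changes.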

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/NodeStorage.spec.tsx`:
- Around line 1-42: Add a new test to NodeStorage.spec.tsx that renders the
NodeStorage component and asserts both child mocks are present in a single test;
specifically, create a test (e.g., "should render both LocalDisks and
PersistentVolumes components") that calls render(<NodeStorage obj={mockNode} />)
and then verifies screen.getByText('LocalDisks for test-node') and
screen.getByText('PersistentVolumes for test-node') are in the document to catch
any conditional rendering issues.

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/LocalDisks.tsx`:
- Around line 80-90: The keys for the mapped rows use disk.name which may not be
unique; update the mapping in LocalDisks.tsx where disks.map(...) renders
<LocalDiskRow key={disk.name} obj={disk} /> to provide a stable unique fallback
(e.g., combine disk.name with another unique field like disk.path or disk.uuid,
or use the map index as a last-resort fallback) so each LocalDiskRow has a truly
unique key and avoids React key collisions.
- Around line 67-92: The table headers in the LocalDisks component lack column
scope attributes; update each <th> in the header row inside the LocalDisks JSX
(the header rendering that currently lists Name, Size, Type, Model, Serial
number, Vendor, HCTL) to include scope="col" so screen readers can correctly
interpret the table columns; no other logic changes needed.
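The key-collision concern above can be sketched as a small helper. The `Disk` shape, the `path` field, and the helper name are assumptions for illustration, not the component's real types.

```typescript
// Hypothetical disk shape; `path` and the map index are fallbacks for when
// `name` alone may collide across rows.
type Disk = { name: string; path?: string };

const diskRowKey = (disk: Disk, index: number): string =>
  disk.path ? `${disk.name}-${disk.path}` : `${disk.name}-${index}`;

const disks: Disk[] = [
  { name: 'sda', path: '/dev/sda' },
  { name: 'sda' }, // duplicate name, no path -> index keeps the key unique
];
const keys = disks.map(diskRowKey);
```

Combining a stable field with an index-only last resort avoids React key collisions without churning keys on every render.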

In
`@frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx`:
- Around line 315-325: The table header in the PersistentVolumes component
renders plain <th> elements without scope attributes which hurts screen reader
navigation; update the header row in PersistentVolumes.tsx (the table within the
PersistentVolumes component) to add scope="col" to each <th> (Name, PVC,
Namespace, Pod, StorageClass, Capacity) so each header cell explicitly indicates
it is a column header for accessibility.
- Around line 267-280: The reduce that builds nodePVCs currently drops PVs that
have the kubernetes.io/hostname label but lack a matching PVC or a
spec.claimRef; update the logic in the nodePVs?.reduce callback so that when a
persistentVolume has the hostname label (check
persistentVolume.metadata?.labels?.['kubernetes.io/hostname']) but no matching
pvc is found (or spec?.claimRef is unset), you still acc.push an entry of type
NodePersistentVolumeData with persistentVolumeClaim set to undefined; keep
existing behavior of pushing paired PV+PVC when PVC exists (the
persistentVolume, pvcs.find(...) branch) and ensure you use optional chaining on
spec.claimRef to avoid exceptions.
- Around line 133-176: PersistentVolumes currently opens five concurrent watches
(useK8sWatchResource for persistentVolumes/pvcs, useAccessibleResources for
dataVolumes/pods, and useWatchVirtualMachineInstances for vms) which can be
heavy; modify PersistentVolumes to reduce watch scope by adding
fieldSelector/labelSelector where possible (e.g., filter PVs/PVCs by
node-related labels or PVC namespace) or convert some watches to on-demand
fetches (useK8sGet or fetch in effect) and/or only start watches when the Node
Configuration tab/component is visible; update usages of useK8sWatchResource,
useAccessibleResources, and useWatchVirtualMachineInstances accordingly so heavy
lists are either selector-filtered or lazily loaded to lower memory/network
overhead.
- Around line 178-183: The current loadError combines multiple distinct errors
(persistentVolumesLoadError, pvcsLoadError, dataVolumesLoadError, vmsLoadError,
podsLoadError) into a single value which hides which resource failed; update the
logic that computes loadError (in the PersistentVolumes component) to produce an
error object or tuple that includes the failing resource name and its error
(e.g., { resource: 'PVCs', error: pvcsLoadError }) or a map of resource->error,
and then use that structured value where the error is displayed to show the
resource name and the specific error message and also log the full error
(console.error or app logger) for debugging; replace references to loadError
with the new loadErrorInfo (or similar) and update the UI render path that
currently shows the generic message to include loadErrorInfo.resource and
loadErrorInfo.error.message.
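The suggested reduce over node-local PVs can be sketched as pure logic: keep a row even when no PVC matches, and use optional chaining so an unset `claimRef` never throws. The type shapes and helper name below are illustrative, not the component's real code.

```typescript
// Sketch of the suggested reduce: keep node-local PVs even when unbound.
// Types and the helper name are assumptions for this sketch.
type PV = {
  metadata?: { name?: string; labels?: Record<string, string> };
  spec?: { claimRef?: { name?: string; namespace?: string } };
};
type PVC = { metadata: { name: string; namespace: string } };
type Row = { pv: PV; pvc?: PVC };

function nodePVRows(pvs: PV[], pvcs: PVC[], nodeName: string): Row[] {
  return pvs.reduce<Row[]>((acc, pv) => {
    if (pv.metadata?.labels?.['kubernetes.io/hostname'] !== nodeName) {
      return acc;
    }
    // Optional chaining avoids exceptions when claimRef is unset.
    const pvc = pvcs.find(
      (c) =>
        c.metadata.name === pv.spec?.claimRef?.name &&
        c.metadata.namespace === pv.spec?.claimRef?.namespace,
    );
    // Push the row either way; pvc stays undefined for unbound PVs.
    acc.push({ pv, pvc });
    return acc;
  }, []);
}

const rows = nodePVRows(
  [
    {
      metadata: { name: 'pv-bound', labels: { 'kubernetes.io/hostname': 'node-1' } },
      spec: { claimRef: { name: 'my-claim', namespace: 'ns' } },
    },
    { metadata: { name: 'pv-unbound', labels: { 'kubernetes.io/hostname': 'node-1' } } },
  ],
  [{ metadata: { name: 'my-claim', namespace: 'ns' } }],
  'node-1',
);
```

The unbound PV survives in the output with `pvc: undefined`, so the UI can render it with a "No claim" style placeholder instead of silently dropping it.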


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 633fcc13-3a0a-4fcf-b5f7-0bd1f115d660

📥 Commits

Reviewing files that changed from the base of the PR and between 874c37e and 7989685.

📒 Files selected for processing (28)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodeDetailsPage.tsx
  • frontend/packages/console-app/src/components/nodes/NodeSubNavPage.tsx
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/NodeConfiguration.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/NodeMachine.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/__tests__/NodeConfiguration.spec.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/__tests__/NodeMachine.spec.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/node-storage/LocalDisks.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/node-storage/NodeStorage.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/node-storage/PersistentVolumes.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/LocalDisks.spec.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/NodeStorage.spec.tsx
  • frontend/packages/console-app/src/components/nodes/configuration/node-storage/__tests__/PersistentVolumes.spec.tsx
  • frontend/packages/console-app/src/components/nodes/node-dashboard/BareMetalInventoryItems.tsx
  • frontend/packages/console-app/src/components/nodes/node-dashboard/InventoryCard.tsx
  • frontend/packages/console-app/src/components/nodes/node-dashboard/__tests__/BareMetalInventoryItems.spec.tsx
  • frontend/packages/console-app/src/components/nodes/utils/NodeBareMetalUtils.ts
  • frontend/packages/console-app/src/components/nodes/utils/NodeVmUtils.ts
  • frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeBareMetalUtils.spec.ts
  • frontend/packages/console-app/src/components/nodes/utils/__tests__/NodeVmUtils.spec.ts
  • frontend/packages/console-app/src/components/nodes/utils/useAccessibleResources.ts
  • frontend/packages/console-dynamic-plugin-sdk/docs/console-extensions.md
  • frontend/packages/console-dynamic-plugin-sdk/src/extensions/index.ts
  • frontend/packages/console-dynamic-plugin-sdk/src/extensions/node-subnav-tabs.ts
  • frontend/packages/console-dynamic-plugin-sdk/src/schema/console-extensions.ts
  • frontend/public/components/machine-config-pool.tsx
  • frontend/public/components/machine.tsx

Comment on lines +242 to +258
it('should display "No claim" when PVC is not found', () => {
const pvNoClaim = {
...mockPV,
spec: {
...mockPV.spec,
claimRef: undefined,
},
};

useK8sWatchResourceMock
.mockReturnValueOnce([[pvNoClaim], true, undefined])
.mockReturnValueOnce([[], true, undefined]);

render(<PersistentVolumes node={mockNode} />);

expect(screen.getByText('No persistent volumes found')).toBeInTheDocument();
});

⚠️ Potential issue | 🟡 Minor

Test name doesn't match the assertion.

The test is named 'should display "No claim" when PVC is not found' but the assertion checks for 'No persistent volumes found'. This indicates either:

  1. The test name is misleading, or
  2. The assertion is incorrect and should check for "No claim" text.

Based on the mock setup where the PV has no claimRef, if the component filters out unbound PVs, then "No persistent volumes found" is expected. Please rename the test to accurately reflect what it validates.

📝 Suggested fix
-  it('should display "No claim" when PVC is not found', () => {
+  it('should filter out persistent volumes without a claim', () => {
     const pvNoClaim = {
       ...mockPV,
       spec: {
         ...mockPV.spec,
         claimRef: undefined,
       },
     };

Comment on lines +57 to +58
const activePage = pages.find((page) => page.tabId === activeTabKey) ?? (pages[0] || null);
const Component = activePage.component;

⚠️ Potential issue | 🟠 Major

Potential null pointer dereference when pages is empty.

If standardPages is empty and no extensions match, pages[0] is undefined, making activePage null. Line 58 then accesses activePage.component, which throws a TypeError.

🐛 Proposed fix with guard
-  const activePage = pages.find((page) => page.tabId === activeTabKey) ?? (pages[0] || null);
-  const Component = activePage.component;
+  const activePage = pages.find((page) => page.tabId === activeTabKey) ?? pages[0];
+  const Component = activePage?.component;

   return (
     <Flex
...
-          {Component ? (
+          {activePage && Component ? (
             <FlexItem flex={{ default: 'flex_1' }} className="pf-v6-u-h-100">
               <Component obj={obj} />
             </FlexItem>
           ) : null}
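The null-safe guard described above can be sketched as pure logic, independent of React. The `SubNavPage` shape and helper name are assumptions mirroring the tab model in this PR, not the actual component code.

```typescript
// Hypothetical shapes mirroring the NodeSubNavPage tab model (names assumed).
type SubNavPage = { tabId: string; name: string; component: unknown };

// Returns the page matching the active tab, falling back to the first page,
// or undefined when no pages exist -- callers must handle the undefined case.
function resolveActivePage(
  pages: SubNavPage[],
  activeTabKey: string,
): SubNavPage | undefined {
  return pages.find((page) => page.tabId === activeTabKey) ?? pages[0];
}

const pages: SubNavPage[] = [
  { tabId: 'storage', name: 'Storage', component: 'StorageComponent' },
  { tabId: 'machine', name: 'Machine', component: 'MachineComponent' },
];

// Known tab resolves directly; unknown tab falls back to the first page.
const known = resolveActivePage(pages, 'machine');
const fallback = resolveActivePage(pages, 'missing');
// Empty page list yields undefined instead of throwing on `.component`.
const empty = resolveActivePage([], 'storage');
const component = empty?.component ?? null;
```

With the `undefined` return type, TypeScript forces every caller to handle the empty-pages case rather than dereferencing `.component` unconditionally.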

Comment on lines +61 to +76
const loaded =
projectsLoaded &&
Object.keys(namespacedResources).every((key) => namespacedResources[key].loaded);
const loadError =
projectsLoadError ||
Object.values(namespacedResources).find((res) => res.loadError)?.loadError;

if (!loaded || loadError) {
return [[], loaded, loadError];
}

const allResources = Object.keys(namespacedResources)
.map((key) => namespacedResources[key].data)
.flat();

return [allResources, loaded, loadError];

⚠️ Potential issue | 🟠 Major

Don't fail the whole aggregate on a single namespace error.

For non-admin users, the first failing namespace becomes loadError and this hook returns [] for everything. Since this now feeds both the VMI and bare-metal node views, one forbidden or missing namespace can blank the entire feature even when other namespaces loaded fine. Please treat namespace failures as partial results and merge the successful watches instead of collapsing the whole response.

Possible direction
-    const loaded =
-      projectsLoaded &&
-      Object.keys(namespacedResources).every((key) => namespacedResources[key].loaded);
-    const loadError =
-      projectsLoadError ||
-      Object.values(namespacedResources).find((res) => res.loadError)?.loadError;
-
-    if (!loaded || loadError) {
-      return [[], loaded, loadError];
-    }
-
-    const allResources = Object.keys(namespacedResources)
-      .map((key) => namespacedResources[key].data)
-      .flat();
-
-    return [allResources, loaded, loadError];
+    const namespaceResults = Object.values(namespacedResources);
+    const settled =
+      projectsLoaded && namespaceResults.every((res) => res.loaded || res.loadError);
+
+    if (!settled) {
+      return [[], false, undefined];
+    }
+
+    const successfulResults = namespaceResults.filter((res) => !res.loadError);
+    const allResources = successfulResults.flatMap((res) => res.data ?? []);
+    const loadError =
+      projectsLoadError ||
+      (successfulResults.length === 0
+        ? namespaceResults.find((res) => res.loadError)?.loadError
+        : undefined);
+
+    return [allResources, true, loadError];
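The partial-result merge described above can be sketched as a pure aggregation function. The `WatchResult` shape and function name are assumptions for the sketch; the real hook wraps `useK8sWatchResources` state rather than plain objects.

```typescript
// Sketch of merging per-namespace watch results into a partial aggregate
// (shapes assumed from the review thread; not the actual hook).
type WatchResult<T> = { data: T[]; loaded: boolean; loadError?: Error };

function aggregate<T>(
  namespaced: Record<string, WatchResult<T>>,
  projectsLoaded: boolean,
): [T[], boolean, Error | undefined] {
  const results = Object.values(namespaced);
  // A watch has "settled" once it either loaded or failed.
  const settled = projectsLoaded && results.every((r) => r.loaded || !!r.loadError);
  if (!settled) {
    return [[], false, undefined];
  }
  const successful = results.filter((r) => !r.loadError);
  const all = successful.flatMap((r) => r.data ?? []);
  // Surface an error only when nothing loaded at all.
  const loadError =
    successful.length === 0 ? results.find((r) => r.loadError)?.loadError : undefined;
  return [all, true, loadError];
}

const [vms, loaded, err] = aggregate(
  {
    'ns-a': { data: ['vm-1', 'vm-2'], loaded: true },
    'ns-b': { data: [], loaded: false, loadError: new Error('forbidden') },
  },
  true,
);
// One forbidden namespace no longer blanks the whole result set.
```

Whether a single-namespace failure should still be surfaced (e.g. as a warning) when other namespaces succeed is a product decision; the sketch only errors when every namespace failed.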

@openshift-ci-robot

openshift-ci-robot commented Mar 10, 2026

@jeff-phillips-18: This pull request references CONSOLE-4946 which is a valid jira issue.

Details

In response to this:

Description

Add Configuration tab to Node details page with extensible sub-navigation

Summary

This PR adds a new "Configuration" tab to the Node details page that displays machine and storage information using an extensible sub-navigation pattern. It also introduces a new plugin extension type (console.tab/nodeSubNavTab) that allows dynamic plugins to contribute additional sub-tabs to Node detail pages.

Key changes:

  • New plugin extension: Added console.tab/nodeSubNavTab extension type to allow plugins to add custom sub-tabs to Node detail pages (configuration or workload parent tabs)
  • Configuration tab: New tab on Node details page with vertical sub-navigation
  • Storage sub-tab (priority 70): Displays local disks (from BareMetalHost) and mounted persistent volumes with associated PVCs and Pods
  • Machine sub-tab (priority 50): Shows Machine details and MachineConfigPool information
  • Extensibility: External plugins can add their own sub-tabs using the new extension point
  • Permissions-aware: Uses new useAccessibleResources hook to handle admin vs non-admin access appropriately
  • Code organization: Refactored Node utilities into utils/ folder for better structure

Extension API

The new console.tab/nodeSubNavTab extension allows plugins to contribute sub-tabs:

{
  type: 'console.tab/nodeSubNavTab',
  properties: {
    parentTab: 'configuration' | 'workload',
    page: {
      tabId: 'my-tab',
      name: 'My Tab',
      priority: 60
    },
    component: { $codeRef: 'MyComponent' }
  }
}

Tabs are displayed in priority order (highest to lowest). Current priorities:

  • Configuration: Storage (70), Machine (50)
  • Workload: Pods (30)

Test plan

  • Unit tests added for all new components (NodeConfiguration, NodeMachine, NodeStorage, LocalDisks, PersistentVolumes)
  • Extension integration tested with mock extensions
  • Priority-based sorting verified
  • Parent tab filtering verified
  • Manual testing: Navigate to Compute → Nodes → [any node] → Configuration tab
  • Verify Storage sub-tab displays local disks for bare metal nodes
  • Verify Storage sub-tab displays mounted PVs
  • Verify Machine sub-tab displays machine and MCP details
  • Verify permissions work correctly for non-admin users
  • Verify loading and error states display correctly

Screenshots

Node Configuration - Storage

image

Node Configuration - Machine

image image

🤖 Generated with https://claude.com/claude-code

Summary by CodeRabbit

Release Notes

  • New Features

  • Added Configuration tab to Node Details page with Storage and Machine sub-sections

  • Added local disk information display showing disk type, model, serial number, and vendor details

  • Added persistent volumes display for mounted volumes on nodes

  • Added machine and machine config pool details viewing in node configuration

  • Enabled plugin extensions to add custom sub-tabs to node configuration

  • Documentation

  • Added documentation for node sub-navigation tab extensions

/cc @kybaker

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@kybaker

kybaker commented Mar 10, 2026

Looks good Jeff, send it.

@openshift-ci-robot

openshift-ci-robot commented Mar 10, 2026

@jeff-phillips-18: This pull request references CONSOLE-4946 which is a valid jira issue.

Details

In response to this:

Closes CONSOLE-4946
Closes CONSOLE-4948
Closes CONSOLE-4949


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci

openshift-ci bot commented Mar 10, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jeff-phillips-18
Once this PR has been reviewed and has the lgtm label, please ask for approval from jhadvig. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jeff-phillips-18 jeff-phillips-18 added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 10, 2026
@openshift-ci

openshift-ci bot commented Mar 10, 2026

@jeff-phillips-18: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-gcp-console | bc86e8f | link | true | /test e2e-gcp-console |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

  • component/core: Related to console core functionality
  • component/sdk: Related to console-plugin-sdk
  • do-not-merge/hold: Indicates that a PR should not merge because someone has issued a /hold command.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • kind/i18n: Indicates issue or PR relates to internationalization or has content that needs to be translated
  • plugin-api-changed: Categorizes a PR as containing plugin API changes
