
test(hooks): add coverage for hook plugin system#1034

Open
mnadzam wants to merge 1 commit into python-wheel-build:main from mnadzam:test_hooks

Conversation


@mnadzam mnadzam commented Apr 7, 2026

Closes #1033

@mergify mergify bot added the ci label Apr 7, 2026

coderabbitai bot commented Apr 7, 2026

📝 Walkthrough

Walkthrough

Adds a new test module tests/test_hooks.py that exercises fromager.hooks. Introduces an autouse fixture to clear hooks._mgrs before and after each test. Tests verify _die_on_plugin_load_failure raises RuntimeError naming the failing entry point and preserving the original exception as __cause__. _get_hooks is tested for creating a HookManager with namespace="fromager.hooks", correct parameters, and caching behavior. The three public runners are tested with mocked managers to assert exception propagation and exact keyword arguments (ctx, req, dist_name, dist_version, plus filename params). log_hooks is tested by mocking _get_dist_info and ExtensionManager and asserting expected calls.
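The `__cause__` preservation mentioned in the walkthrough comes from Python's `raise ... from` exception chaining. A minimal stand-in (the function below is illustrative, not fromager's actual `_die_on_plugin_load_failure`):

```python
# Stand-in for a stevedore on_load_failure callback: wrap the original
# error in a RuntimeError that names the entry point, chaining with
# `raise ... from` so the original exception survives as __cause__.
def die_on_plugin_load_failure(entry_point_name: str, err: Exception) -> None:
    raise RuntimeError(f"failed loading hook {entry_point_name!r}") from err

try:
    die_on_plugin_load_failure("post_build", ValueError("import exploded"))
except RuntimeError as exc:
    caught = exc

assert "post_build" in str(caught)             # names the failing entry point
assert isinstance(caught.__cause__, ValueError)  # original error preserved
```

This is exactly the shape the tests assert: `pytest.raises(RuntimeError)` plus a check on `exc_info.value.__cause__`.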

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
- Title check — ✅ Passed: The title clearly and concisely summarizes the main change: adding test coverage for the hook plugin system.
- Description check — ✅ Passed: The description directly references the linked issue #1033, which relates to the changeset adding test coverage for hooks.
- Linked Issues check — ✅ Passed: The PR addresses all acceptance criteria from #1033: tests for _get_hooks manager creation/caching, run_post_build_hooks with arguments, exception propagation, and log_hooks functionality.
- Out of Scope Changes check — ✅ Passed: All changes are strictly within scope: the new test file exercises only the hooks.py module functions outlined in the linked issue requirements.





@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/test_hooks.py (1)

78-78: Consider using a lambda for __iter__ to avoid one-shot iterator brittleness.

Mock(return_value=iter([...])) returns the same iterator object on every call. Once exhausted, subsequent iterations yield nothing. If the implementation ever iterates twice, this mock silently returns an empty sequence on the second pass.

Using __iter__ = lambda self: iter([...]) creates a fresh iterator per call, matching real manager behavior. (Note that when a plain function is assigned to a mock's magic method, mock calls it with the instance, so the lambda needs a self parameter.)

Suggested pattern (applies to lines 78, 101, 135, 167)

```diff
-    fake_mgr.__iter__ = Mock(return_value=iter([_make_fake_ext(bad_plugin)]))
+    fake_mgr.__iter__ = lambda self: iter([_make_fake_ext(bad_plugin)])
```
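The one-shot behavior is easy to demonstrate with mock's magic-method configuration. A small sketch (generic, not tied to fromager): configuring `return_value` hands back the same iterator object every time, while `side_effect` builds a fresh one per call. The `*args` on the lambda absorbs whatever arguments mock passes through, sidestepping the self-parameter question.

```python
from unittest.mock import MagicMock

# One-shot: the same iterator object is returned on every __iter__ call,
# so a second iteration silently sees it already exhausted.
one_shot = MagicMock()
one_shot.__iter__.return_value = iter([1, 2, 3])
assert list(one_shot) == [1, 2, 3]
assert list(one_shot) == []  # exhausted on the second pass

# Fresh: side_effect constructs a new iterator per call, like a real
# container (or a real stevedore manager) would.
fresh = MagicMock()
fresh.__iter__.side_effect = lambda *args: iter([1, 2, 3])
assert list(fresh) == [1, 2, 3]
assert list(fresh) == [1, 2, 3]  # still populated on repeat iteration
```

If the code under test only ever iterates once, both forms pass; the fresh-iterator form just fails loudly instead of silently if that assumption changes.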
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_hooks.py` at line 78, The test uses Mock(return_value=iter([...]))
for fake_mgr.__iter__ which returns a single, one-shot iterator that becomes
empty after the first iteration; update each occurrence (fake_mgr.__iter__
assignments at the spots using _make_fake_ext, good_plugin/bad_plugin fixtures)
to provide a fresh iterator per call by assigning a callable that returns a new
iterator (e.g., set fake_mgr.__iter__ to a lambda that returns iter([...])) so
iterations behave like a real manager across multiple passes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/test_hooks.py`:
- Line 78: The test uses Mock(return_value=iter([...])) for fake_mgr.__iter__
which returns a single, one-shot iterator that becomes empty after the first
iteration; update each occurrence (fake_mgr.__iter__ assignments at the spots
using _make_fake_ext, good_plugin/bad_plugin fixtures) to provide a fresh
iterator per call by assigning a callable that returns a new iterator (e.g., set
fake_mgr.__iter__ to a lambda that returns iter([...])) so iterations behave
like a real manager across multiple passes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c86be25d-d112-4e48-af6c-a3e1875f1488

📥 Commits

Reviewing files that changed from the base of the PR and between d86f938 and c2cdc3b.

📒 Files selected for processing (1)
  • tests/test_hooks.py

@mnadzam mnadzam marked this pull request as ready for review April 7, 2026 16:57
@mnadzam mnadzam requested a review from a team as a code owner April 7, 2026 16:57

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/test_hooks.py (1)

126-155: Consider adding exception propagation test for run_post_bootstrap_hooks.

The PR objectives mention verifying exception propagation behavior across hook runners. Currently only run_post_build_hooks has an exception propagation test. For complete coverage, consider adding similar tests for run_post_bootstrap_hooks and run_prebuilt_wheel_hooks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_hooks.py` around lines 126 - 155, Add tests that verify exceptions
raised by hook plugins propagate from run_post_bootstrap_hooks and
run_prebuilt_wheel_hooks the same way as run_post_build_hooks: create a fake
plugin that raises a specific exception, register it via the _get_hooks mock
(similar to test_run_post_bootstrap_hooks_calls_plugin using _make_fake_ext),
call run_post_bootstrap_hooks and run_prebuilt_wheel_hooks with appropriate
minimal args (ctx, req, dist_name, dist_version, sdist_filename/wheel_filename
as needed), and assert the exception is raised (use pytest.raises or
equivalent). Ensure the tests reference the run_post_bootstrap_hooks and
run_prebuilt_wheel_hooks functions and mimic the pattern used in the existing
run_post_build_hooks exception test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/test_hooks.py`:
- Around line 126-155: Add tests that verify exceptions raised by hook plugins
propagate from run_post_bootstrap_hooks and run_prebuilt_wheel_hooks the same
way as run_post_build_hooks: create a fake plugin that raises a specific
exception, register it via the _get_hooks mock (similar to
test_run_post_bootstrap_hooks_calls_plugin using _make_fake_ext), call
run_post_bootstrap_hooks and run_prebuilt_wheel_hooks with appropriate minimal
args (ctx, req, dist_name, dist_version, sdist_filename/wheel_filename as
needed), and assert the exception is raised (use pytest.raises or equivalent).
Ensure the tests reference the run_post_bootstrap_hooks and
run_prebuilt_wheel_hooks functions and mimic the pattern used in the existing
run_post_build_hooks exception test.
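In isolation, the propagation pattern these prompts describe looks like the sketch below. The runner and plugin are stand-ins (fromager's real runner signatures are not reproduced here); in the actual test module the `try`/`except` would be `pytest.raises`.

```python
from unittest.mock import MagicMock

def run_hooks(extensions, **kwargs):
    # Stand-in runner: invoke each extension's plugin; any exception
    # a plugin raises escapes to the caller unwrapped.
    for ext in extensions:
        ext.plugin(**kwargs)

failing = MagicMock()
failing.plugin.side_effect = RuntimeError("plugin blew up")

raised = None
try:
    run_hooks([failing], dist_name="mypkg", dist_version="1.0.0")
except RuntimeError as exc:
    raised = exc

assert raised is not None and "plugin blew up" in str(raised)
# The plugin was still invoked with the exact keyword arguments.
failing.plugin.assert_called_once_with(dist_name="mypkg", dist_version="1.0.0")
```

Duplicating this shape for each runner is what gives "exception propagation" coverage across all three public entry points.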

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 59f3bb97-1d43-467c-ae2a-ac3696c5997b

📥 Commits

Reviewing files that changed from the base of the PR and between c2cdc3b and 67dc1c9.

📒 Files selected for processing (1)
  • tests/test_hooks.py

Contributor

@rd4398 rd4398 left a comment


Overall looks good! I have left a couple of comments, after which this should be ready to merge.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/test_hooks.py`:
- Around line 93-124: The test currently checks plugin kwargs but doesn't verify
hook routing; update the test for run_post_build_hooks to assert that the hooks
discovery function (_get_hooks / mock_get) was invoked with the "post_build"
hook name. Specifically, after calling hooks.run_post_build_hooks(...) add an
assertion like mock_get.assert_called_with("post_build") (or the equivalent call
check matching how _get_hooks is mocked) so the test ensures correct routing to
the post_build hook.
- Around line 231-256: Update the test_log_hooks_logs_each_extension test to
assert emitted log lines from hooks.log_hooks(): use pytest's caplog (or
patching the logger used in fromager.hooks) to capture logs around the
hooks.log_hooks() call and assert that the captured records include messages
referencing ext_a.module_name ("my_plugins.hooks") and ext_b.module_name
("other_plugins.hooks") (and optionally the distribution info returned by
_get_dist_info, i.e., "mypkg" and "1.0.0"); keep the existing mocks
(ExtensionManager and _get_dist_info) and existing assertions about their calls,
but add explicit assertions that the logger produced one log entry per extension
with the expected text.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 86ac5682-3acc-4c87-992e-0fa8c0958582

📥 Commits

Reviewing files that changed from the base of the PR and between 67dc1c9 and abfbd10.

📒 Files selected for processing (1)
  • tests/test_hooks.py

Comment on lines +231 to +256
```python
@patch("fromager.hooks.overrides._get_dist_info", return_value=("mypkg", "1.0.0"))
@patch("fromager.hooks.extension.ExtensionManager")
def test_log_hooks_logs_each_extension(
    mock_em_cls: Mock,
    mock_dist_info: Mock,
) -> None:
    ext_a = Mock()
    ext_a.name = "post_build"
    ext_a.module_name = "my_plugins.hooks"

    ext_b = Mock()
    ext_b.name = "post_bootstrap"
    ext_b.module_name = "other_plugins.hooks"

    mock_em_cls.return_value = [ext_a, ext_b]

    hooks.log_hooks()

    mock_em_cls.assert_called_once_with(
        namespace="fromager.hooks",
        invoke_on_load=False,
        on_load_failure_callback=hooks._die_on_plugin_load_failure,
    )
    assert mock_dist_info.call_count == 2
    mock_dist_info.assert_any_call("my_plugins.hooks")
    mock_dist_info.assert_any_call("other_plugins.hooks")
```

@coderabbitai coderabbitai bot Apr 9, 2026


⚠️ Potential issue | 🟠 Major

test_log_hooks_logs_each_extension does not verify log output.

log_hooks()’s core behavior is emitting per-extension log lines; the current assertions only check collaborator calls.

Suggested assertion of emitted logs
```diff
@@
 def test_log_hooks_logs_each_extension(
     mock_em_cls: Mock,
     mock_dist_info: Mock,
+    caplog: pytest.LogCaptureFixture,
 ) -> None:
@@
-    hooks.log_hooks()
+    with caplog.at_level("DEBUG", logger=hooks.logger.name):
+        hooks.log_hooks()
@@
     assert mock_dist_info.call_count == 2
     mock_dist_info.assert_any_call("my_plugins.hooks")
     mock_dist_info.assert_any_call("other_plugins.hooks")
+    messages = [r.getMessage() for r in caplog.records if r.name == hooks.logger.name]
+    assert any("loaded hook 'post_build': from my_plugins.hooks" in m for m in messages)
+    assert any("loaded hook 'post_bootstrap': from other_plugins.hooks" in m for m in messages)
```
As per coding guidelines: `tests/**: Verify test actually tests the intended behavior.`
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (the same test with the caplog-based assertions applied):

```python
@patch("fromager.hooks.overrides._get_dist_info", return_value=("mypkg", "1.0.0"))
@patch("fromager.hooks.extension.ExtensionManager")
def test_log_hooks_logs_each_extension(
    mock_em_cls: Mock,
    mock_dist_info: Mock,
    caplog: pytest.LogCaptureFixture,
) -> None:
    ext_a = Mock()
    ext_a.name = "post_build"
    ext_a.module_name = "my_plugins.hooks"

    ext_b = Mock()
    ext_b.name = "post_bootstrap"
    ext_b.module_name = "other_plugins.hooks"

    mock_em_cls.return_value = [ext_a, ext_b]

    with caplog.at_level("DEBUG", logger=hooks.logger.name):
        hooks.log_hooks()

    mock_em_cls.assert_called_once_with(
        namespace="fromager.hooks",
        invoke_on_load=False,
        on_load_failure_callback=hooks._die_on_plugin_load_failure,
    )
    assert mock_dist_info.call_count == 2
    mock_dist_info.assert_any_call("my_plugins.hooks")
    mock_dist_info.assert_any_call("other_plugins.hooks")
    messages = [r.getMessage() for r in caplog.records if r.name == hooks.logger.name]
    assert any("loaded hook 'post_build': from my_plugins.hooks" in m for m in messages)
    assert any("loaded hook 'post_bootstrap': from other_plugins.hooks" in m for m in messages)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_hooks.py` around lines 231 - 256, Update the
test_log_hooks_logs_each_extension test to assert emitted log lines from
hooks.log_hooks(): use pytest's caplog (or patching the logger used in
fromager.hooks) to capture logs around the hooks.log_hooks() call and assert
that the captured records include messages referencing ext_a.module_name
("my_plugins.hooks") and ext_b.module_name ("other_plugins.hooks") (and
optionally the distribution info returned by _get_dist_info, i.e., "mypkg" and
"1.0.0"); keep the existing mocks (ExtensionManager and _get_dist_info) and
existing assertions about their calls, but add explicit assertions that the
logger produced one log entry per extension with the expected text.

Author


I did not do this because I did not want to test brittle log output strings. The test instead just checks that _get_dist_info is called for each extension



@mnadzam, fair point — log message strings are implementation details that can change independently of behavior, and asserting on them adds maintenance cost for little value. Verifying _get_dist_info is called per extension is a reasonable proxy that log_hooks() actually iterated and processed each one. I'll drop this suggestion.


✏️ Learnings added
Learnt from: mnadzam
Repo: python-wheel-build/fromager PR: 1034
File: tests/test_hooks.py:234-259
Timestamp: 2026-04-09T08:58:42.152Z
Learning: In fromager (python-wheel-build/fromager), the maintainer (mnadzam) prefers not to assert on log output strings in tests (e.g., tests/test_hooks.py) because they are considered brittle implementation details. Instead, tests should verify functional behavior such as checking that collaborator functions like `_get_dist_info` are called the expected number of times with the expected arguments.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: smoparth
Repo: python-wheel-build/fromager PR: 1030
File: tests/test_suggest_collection.py:5-5
Timestamp: 2026-04-08T22:13:20.914Z
Learning: In fromager CLI test files (e.g., tests/test_suggest_collection.py and other tests/test_*.py), it’s acceptable to extract JSON arrays from mixed stdout using the existing helper `_extract_json_from_output(output)` (implemented with `re.search`). This is an intentional project pattern driven by the shared `cli_runner` fixture using Click’s default `mix_stderr=True`, which can inline log/warning messages into `result.output`/`result.stdout`. Don’t flag this approach as fragile as long as the helper is used consistently and the extraction targets the expected JSON array format.

Closes python-wheel-build#1033

Signed-off-by: Marcel Nadzam <mnadzam@redhat.com>

Co-Authored-By: Cursor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
tests/test_hooks.py (1)

236-259: ⚠️ Potential issue | 🟠 Major

test_log_hooks_logs_each_extension still doesn’t assert emitted log lines.

This test currently validates collaborators only; it does not verify the behavior log_hooks() is meant to provide (per-extension logging), so logging regressions can slip through.

Proposed test hardening
```diff
 @patch("fromager.hooks.overrides._get_dist_info", return_value=("mypkg", "1.0.0"))
 @patch("fromager.hooks.extension.ExtensionManager")
 def test_log_hooks_logs_each_extension(
     mock_em_cls: Mock,
     mock_dist_info: Mock,
+    caplog: pytest.LogCaptureFixture,
 ) -> None:
@@
-    hooks.log_hooks()
+    with caplog.at_level("DEBUG", logger=hooks.logger.name):
+        hooks.log_hooks()
@@
     assert mock_dist_info.call_count == 2
     mock_dist_info.assert_any_call("my_plugins.hooks")
     mock_dist_info.assert_any_call("other_plugins.hooks")
+    messages = [r.getMessage() for r in caplog.records if r.name == hooks.logger.name]
+    assert any("post_build" in m and "my_plugins.hooks" in m for m in messages)
+    assert any("post_bootstrap" in m and "other_plugins.hooks" in m for m in messages)
```

As per coding guidelines: tests/**: Verify test actually tests the intended behavior. Check for missing edge cases. Flag overly brittle mocks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_hooks.py` around lines 236 - 259, The test currently only asserts
collaborators were called; update test_log_hooks_logs_each_extension to assert
the actual log output from hooks.log_hooks() by capturing emitted logs (e.g.,
using caplog or a logging.Mock) and verifying that a log entry exists for each
extension (ext_a and ext_b) including their names/module_name; locate the test
setup using mocks ext_a/ext_b and the call to hooks.log_hooks() and after
invoking it assert that the captured logs contain messages referencing
"post_build" / "my_plugins.hooks" and "post_bootstrap" / "other_plugins.hooks"
so the test verifies per-extension logging behavior rather than just call
interactions with mock_em_cls and mock_dist_info.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/test_hooks.py`:
- Around line 236-259: The test currently only asserts collaborators were
called; update test_log_hooks_logs_each_extension to assert the actual log
output from hooks.log_hooks() by capturing emitted logs (e.g., using caplog or a
logging.Mock) and verifying that a log entry exists for each extension (ext_a
and ext_b) including their names/module_name; locate the test setup using mocks
ext_a/ext_b and the call to hooks.log_hooks() and after invoking it assert that
the captured logs contain messages referencing "post_build" / "my_plugins.hooks"
and "post_bootstrap" / "other_plugins.hooks" so the test verifies per-extension
logging behavior rather than just call interactions with mock_em_cls and
mock_dist_info.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b96525ed-041d-4005-b8f0-2ff813eb11bd

📥 Commits

Reviewing files that changed from the base of the PR and between abfbd10 and 0b016cd.

📒 Files selected for processing (1)
  • tests/test_hooks.py

@mnadzam mnadzam requested a review from rd4398 April 9, 2026 12:06
Contributor

@rd4398 rd4398 left a comment


I think this is good to go. I will wait for others to review as well



Development

Successfully merging this pull request may close these issues.

test: add coverage for hook plugin system (hooks.py)

2 participants