Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Adjusts a wrapper accessor to guard missing vehicle IDs, adds transit-time matrix support in routing, restructures many Python routing tests (removals and additions), adds API coverage docs, and extends C++ routing tests/utilities for transit times and break validation.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ❌ 3 failed checks (2 warnings, 1 inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Comment
Actionable comments posted: 7
🧹 Nitpick comments (1)
python/cuopt/cuopt/tests/routing/test_solver.py (1)
60-253: Add at least one degenerate-case solver scenario in this new test module.

These additions are strong happy-path cases, but this file still misses explicit infeasible/empty/singleton coverage in solver-level assertions.

As per coding guidelines, `**/*test*.{cpp,cu,py}` should include degenerate cases (infeasible, unbounded, empty, singleton) and validate optimization correctness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 60 - 253, Add a new test (e.g., test_degenerate_cases) alongside test_pdptw/test_prize_collection that exercises solver degenerate scenarios: (1) empty problem — construct a routing.DataModel with zero orders (or zero vehicles) and call routing.Solve with routing.SolverSettings to assert solution.get_status() is a valid code (no crash), solution.get_vehicle_count()==0 and accepted/infeasible getters return empty cudf.Series; (2) infeasible problem — create a DataModel where demand exceeds all vehicle capacities or time windows are impossible, call routing.Solve and assert solution.get_status() indicates infeasible and solution.get_infeasible_orders() lists the problematic orders; and (3) singleton feasible problem — one order and one vehicle, assert status==0, vehicle_count==1 and objective equals expected cost from the cost matrix; reuse routing.SolverSettings, routing.Solve, DataModel methods (add_cost_matrix, add_transit_time_matrix, add_capacity_dimension, set_order_time_windows, etc.) and validate getters like get_accepted_solutions(), get_infeasible_orders(), get_total_objective() and get_vehicle_count().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx`:
- Around line 657-661: get_order_service_times currently returns an empty
cudf.Series when vehicle_id is missing, skipping any default mapping stored
under key -1; update the function (get_order_service_times) to first check
self.order_service_times for vehicle_id, then fallback to
self.order_service_times.get(-1) (or check "-1" as appropriate for the dict
keys) and return that default if present, and only return cudf.Series([]) when
neither vehicle_id nor the -1 default exists.
In `@python/cuopt/cuopt/tests/routing/API_COVERAGE.md`:
- Around line 98-99: The coverage table incorrectly marks set_verbose_mode() as
"No" while the summary states all SolverSettings setters are covered; update the
table row for `set_verbose_mode()` to reflect coverage (change "No" to "Yes" and
reference the corresponding test name if applicable) or adjust the summary to
match the table; ensure consistency between the table entries for
`set_verbose_mode()` and `set_error_logging_mode()` and the summary sentence
about `SolverSettings` setters so the API_COVERAGE.md accurately reflects
current test coverage.
In `@python/cuopt/cuopt/tests/routing/test_initial_solutions.py`:
- Around line 128-134: The test currently only checks lengths of
get_initial_solutions() results; change it to assert value-level equality for
each component returned by d.get_initial_solutions() against the originals
(vehicle_ids, routes, types, sol_offsets). Locate the call to
d.get_initial_solutions() and the ret_initial variable, unpack or iterate the
returned items and compare the actual vehicle id lists, route contents, node
type lists, and solution offset lists/arrays to the original variables
(vehicle_ids, routes, types, sol_offsets) so ordering and values are validated
rather than just sizes.
In `@python/cuopt/cuopt/tests/routing/test_solver_settings.py`:
- Around line 51-71: The test test_dump_config uses a fixed filename
"solver_cfg.yaml" which can collide across runs; change the test to use a unique
temporary path (via pytest tmp_path or tmp_path_factory) and pass that path to
SolverSettings.dump_config_file and to utils.create_data_model_from_yaml so the
file is written and read from an isolated location; update references in the
test for s.dump_config_file(...), s.get_config_file_name(), and
utils.create_data_model_from_yaml(...) to use the tmp path.
In `@python/cuopt/cuopt/tests/routing/test_solver.py`:
- Around line 138-140: The test uses exact float equality on
solution.get_total_objective(), which is brittle; change those assertions to
tolerant comparisons (e.g., use pytest.approx or math.isclose) so small numeric
differences won't fail the test. Update the assertion(s) that currently read
assert total_cost == 13.0 and the similar assertions later in the file (the
block checking total objective around the second solution) to assert total_cost
== pytest.approx(13.0) or equivalent with a defined rel/abs tolerance, keeping
the same expected value but allowing a small tolerance.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py`:
- Around line 496-497: The test currently only checks
routing_solution.get_status() == 0 and misses validating that break constraints
are respected; update the test to iterate the returned routes/trips from the
solution (use the existing routing_solution or solution accessor methods) and
assert for each scheduled break that its start time lies within the specified
break window and that its duration matches the expected break duration (and that
breaks do not overlap work segments); use the route/vehicle identifiers and
break objects already present in the test fixtures (e.g., the route/trip list,
break.window.start/end, and break.duration fields) to locate and validate each
break, and add the same assertions for the other test instance that currently
omits them to ensure numerical correctness of break adherence.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py`:
- Around line 25-36: The assertions in test_warnings_exceptions.py rely on exact
positions w[0]/w[1] which is flaky; update the checks to search the captured
warnings list for messages containing the expected substrings instead of
indexing. For example, after calling dm.add_cost_matrix,
dm.set_order_time_windows, and dm.set_order_service_times, replace uses of
w[0].message and w[1].message with assertions using any("Casting cost_matrix
from int64 to float32" in str(m.message) for m in w) and any("Casting
service_times from float64 to int32" in str(m.message) for m in w) so the test
passes regardless of warning order.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 885022f2-de43-4f90-bae5-2d699c0381b0
📒 Files selected for processing (26)
- python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx
- python/cuopt/cuopt/tests/routing/API_COVERAGE.md
- python/cuopt/cuopt/tests/routing/save_restore_test.py
- python/cuopt/cuopt/tests/routing/solver_test.py
- python/cuopt/cuopt/tests/routing/test_cuopt_exception.py
- python/cuopt/cuopt/tests/routing/test_cvrptw.py
- python/cuopt/cuopt/tests/routing/test_data_model.py
- python/cuopt/cuopt/tests/routing/test_error_logging.py
- python/cuopt/cuopt/tests/routing/test_initial_solutions.py
- python/cuopt/cuopt/tests/routing/test_pickup_delivery.py
- python/cuopt/cuopt/tests/routing/test_prize_collection.py
- python/cuopt/cuopt/tests/routing/test_solver.py
- python/cuopt/cuopt/tests/routing/test_solver_settings.py
- python/cuopt/cuopt/tests/routing/test_tsp.py
- python/cuopt/cuopt/tests/routing/test_validation.py
- python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py
- python/cuopt/cuopt/tests/routing/test_vehicle_dependent_service_times.py
- python/cuopt/cuopt/tests/routing/test_vehicle_fixed_costs.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_cost.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_time.py
- python/cuopt/cuopt/tests/routing/test_vehicle_order_match.py
- python/cuopt/cuopt/tests/routing/test_vehicle_properties.py
- python/cuopt/cuopt/tests/routing/test_vehicle_routing.py
- python/cuopt/cuopt/tests/routing/test_vehicle_types.py
- python/cuopt/cuopt/tests/routing/test_warnings.py
- python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py
💤 Files with no reviewable changes (18)
- python/cuopt/cuopt/tests/routing/test_pickup_delivery.py
- python/cuopt/cuopt/tests/routing/test_vehicle_types.py
- python/cuopt/cuopt/tests/routing/test_error_logging.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_time.py
- python/cuopt/cuopt/tests/routing/test_prize_collection.py
- python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py
- python/cuopt/cuopt/tests/routing/test_vehicle_routing.py
- python/cuopt/cuopt/tests/routing/test_vehicle_dependent_service_times.py
- python/cuopt/cuopt/tests/routing/test_warnings.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_cost.py
- python/cuopt/cuopt/tests/routing/test_vehicle_fixed_costs.py
- python/cuopt/cuopt/tests/routing/test_cvrptw.py
- python/cuopt/cuopt/tests/routing/test_validation.py
- python/cuopt/cuopt/tests/routing/solver_test.py
- python/cuopt/cuopt/tests/routing/test_cuopt_exception.py
- python/cuopt/cuopt/tests/routing/test_vehicle_order_match.py
- python/cuopt/cuopt/tests/routing/test_tsp.py
- python/cuopt/cuopt/tests/routing/save_restore_test.py
```diff
 def get_order_service_times(self, vehicle_id):
-    return self.order_service_times[vehicle_id]
+    if vehicle_id in self.order_service_times:
+        return self.order_service_times[vehicle_id]
+    else:
+        return cudf.Series([])
```
Missing fallback to default service-times entry (vehicle_id=-1).
On Line 658-661, a missing vehicle-specific key returns an empty series immediately. This skips the default service-time mapping (if set under -1) and can produce incorrect getter behavior.
🔧 Proposed fix

```diff
 def get_order_service_times(self, vehicle_id):
-    if vehicle_id in self.order_service_times:
-        return self.order_service_times[vehicle_id]
-    else:
-        return cudf.Series([])
+    if vehicle_id in self.order_service_times:
+        return self.order_service_times[vehicle_id]
+    if -1 in self.order_service_times:
+        return self.order_service_times[-1]
+    return cudf.Series([], dtype=np.int32)
```
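The fallback chain in the proposed fix can be sketched standalone; a plain dict and lists stand in for the wrapper's cudf-backed mapping, and the data is purely illustrative:

```python
# Sketch of the vehicle-specific -> default (-1) -> empty fallback chain.
def get_order_service_times(order_service_times, vehicle_id):
    # Vehicle-specific entry wins.
    if vehicle_id in order_service_times:
        return order_service_times[vehicle_id]
    # Otherwise fall back to the default mapping stored under key -1.
    if -1 in order_service_times:
        return order_service_times[-1]
    # Nothing configured at all: empty result.
    return []

service_times = {-1: [10, 10, 10], 2: [5, 5, 5]}
print(get_order_service_times(service_times, 2))  # → [5, 5, 5]
print(get_order_service_times(service_times, 7))  # → [10, 10, 10]
print(get_order_service_times({}, 7))             # → []
```

The buggy version returned the empty container as soon as the vehicle-specific key was missing, which is exactly the second case above.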
```python
ret_initial = d.get_initial_solutions()
assert len(ret_initial) == 4
ret_sizes = sorted(len(x) for x in ret_initial)
expected_sizes = sorted([
    len(vehicle_ids), len(routes), len(types), len(sol_offsets)
])
assert ret_sizes == expected_sizes
```
Initial-solution round-trip check is too weak (size-only).
The new assertions verify only lengths, so a reordered/corrupted payload could still pass. This should assert value-level equality for vehicle ids, routes, node types, and offsets.
✅ Suggested stronger assertions

```diff
-ret_initial = d.get_initial_solutions()
-assert len(ret_initial) == 4
-ret_sizes = sorted(len(x) for x in ret_initial)
-expected_sizes = sorted([
-    len(vehicle_ids), len(routes), len(types), len(sol_offsets)
-])
-assert ret_sizes == expected_sizes
+ret_vehicle_ids, ret_routes, ret_types, ret_offsets = d.get_initial_solutions()
+assert (ret_vehicle_ids == vehicle_ids).all()
+assert (ret_routes == routes).all()
+assert (ret_types == types).all()
+assert (ret_offsets == sol_offsets).all()
```

As per coding guidelines, tests should verify transformation/index consistency across representations, not just structural size checks.
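To see why the size-only check is weak, a toy round-trip with plain lists (standing in for the cudf columns; names mirror the test's variables but the data is made up) shows a corrupted payload passing the length comparison while failing value equality:

```python
# Toy initial-solution payload.
vehicle_ids = [0, 0, 1, 1]
routes = [0, 1, 2, 0]
types = ["Depot", "Delivery", "Delivery", "Depot"]
sol_offsets = [0, 4]

# Simulate a round-trip that silently reversed the routes column.
ret_initial = (vehicle_ids, routes[::-1], types, sol_offsets)

# The size-only assertions still pass...
assert len(ret_initial) == 4
ret_sizes = sorted(len(x) for x in ret_initial)
expected_sizes = sorted([len(vehicle_ids), len(routes), len(types), len(sol_offsets)])
assert ret_sizes == expected_sizes

# ...while value-level comparison catches the corruption.
ret_vehicle_ids, ret_routes, ret_types, ret_offsets = ret_initial
assert ret_routes != routes  # reordered payload detected
```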
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_initial_solutions.py` around lines 128
- 134, The test currently only checks lengths of get_initial_solutions()
results; change it to assert value-level equality for each component returned by
d.get_initial_solutions() against the originals (vehicle_ids, routes, types,
sol_offsets). Locate the call to d.get_initial_solutions() and the ret_initial
variable, unpack or iterate the returned items and compare the actual vehicle id
lists, route contents, node type lists, and solution offset lists/arrays to the
original variables (vehicle_ids, routes, types, sol_offsets) so ordering and
values are validated rather than just sizes.
```python
def test_dump_config():
    """Test SolverSettings solve with config file"""
    s = routing.SolverSettings()
    config_file = "solver_cfg.yaml"
    s.dump_config_file(config_file)
    assert s.get_config_file_name() == config_file

    # Small example data model: 3 locations, 1 vehicle
    cost = cudf.DataFrame(
        [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]],
        dtype=np.float32,
    )
    dm = routing.DataModel(3, 1)
    dm.add_cost_matrix(cost)
    s.set_time_limit(2)
    routing_solution = routing.Solve(dm, s)
    assert routing_solution.get_status() == 0

    # Load from written solver_cfg.yaml and solve again
    dm_from_yaml, s_from_yaml = utils.create_data_model_from_yaml(config_file)
    solution_from_yaml = routing.Solve(dm_from_yaml, s_from_yaml)
```
Use an isolated temp path instead of a fixed config filename.
Using "solver_cfg.yaml" in CWD can collide across parallel or repeated runs. Prefer tmp_path to keep this test hermetic.
🧪 Proposed fix

```diff
-def test_dump_config():
+def test_dump_config(tmp_path):
     """Test SolverSettings solve with config file"""
     s = routing.SolverSettings()
-    config_file = "solver_cfg.yaml"
+    config_file = str(tmp_path / "solver_cfg.yaml")
     s.dump_config_file(config_file)
     assert s.get_config_file_name() == config_file
```

As per coding guidelines, test isolation should prevent file/global state leakage between test cases.
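A minimal sketch of the tmp_path pattern, with a hypothetical stand-in writer in place of SolverSettings.dump_config_file (the writer and the config contents are assumptions for illustration):

```python
import tempfile
from pathlib import Path

def dump_config_file(path):
    # Hypothetical stand-in for SolverSettings.dump_config_file.
    Path(path).write_text("time_limit: 2\n")

def test_dump_config(tmp_path):
    # tmp_path is pytest's per-test unique directory, so parallel or
    # repeated runs never collide on a shared "solver_cfg.yaml".
    config_file = str(tmp_path / "solver_cfg.yaml")
    dump_config_file(config_file)
    assert Path(config_file).read_text() == "time_limit: 2\n"

# Outside pytest, a TemporaryDirectory plays the role of tmp_path:
with tempfile.TemporaryDirectory() as d:
    test_dump_config(Path(d))
    config_exists = (Path(d) / "solver_cfg.yaml").exists()
```

Because every invocation writes under a fresh directory, repeated or concurrent runs cannot clobber each other's config files.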
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_solver_settings.py` around lines 51 -
71, The test test_dump_config uses a fixed filename "solver_cfg.yaml" which can
collide across runs; change the test to use a unique temporary path (via pytest
tmp_path or tmp_path_factory) and pass that path to
SolverSettings.dump_config_file and to utils.create_data_model_from_yaml so the
file is written and read from an isolated location; update references in the
test for s.dump_config_file(...), s.get_config_file_name(), and
utils.create_data_model_from_yaml(...) to use the tmp path.
```python
# Optional: basic objective check
total_cost = solution.get_total_objective()
assert total_cost == 13.0
```
Use tolerant comparisons for floating objectives.
Line 140 and Line 248-250 use exact float equality; these can be flaky with solver/runtime numeric variance.
🔧 Suggested change

```diff
-assert total_cost == 13.0
+assert math.isclose(total_cost, 13.0, rel_tol=1e-6, abs_tol=1e-6)
```

```diff
-assert sol.get_total_objective() == -13.0
-assert objectives[routing.Objective.PRIZE] == -26.0
-assert objectives[routing.Objective.COST] == 13.0
+assert math.isclose(sol.get_total_objective(), -13.0, rel_tol=1e-6, abs_tol=1e-6)
+assert math.isclose(objectives[routing.Objective.PRIZE], -26.0, rel_tol=1e-6, abs_tol=1e-6)
+assert math.isclose(objectives[routing.Objective.COST], 13.0, rel_tol=1e-6, abs_tol=1e-6)
```

Also applies to: 247-250
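The difference between the two comparison styles can be demonstrated with a small amount of numeric noise (the 13.0 mirrors the expected objective in the test; the noise value is illustrative):

```python
import math

total_cost = 13.0 + 1e-9  # tiny solver/runtime noise on the objective

# Exact equality is brittle under floating-point noise...
assert total_cost != 13.0

# ...while a tolerant comparison accepts the same value.
assert math.isclose(total_cost, 13.0, rel_tol=1e-6, abs_tol=1e-6)
```

pytest.approx behaves analogously (`total_cost == pytest.approx(13.0)`) and produces clearer failure messages inside pytest.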
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 138 - 140, The
test uses exact float equality on solution.get_total_objective(), which is
brittle; change those assertions to tolerant comparisons (e.g., use
pytest.approx or math.isclose) so small numeric differences won't fail the test.
Update the assertion(s) that currently read assert total_cost == 13.0 and the
similar assertions later in the file (the block checking total objective around
the second solution) to assert total_cost == pytest.approx(13.0) or equivalent
with a defined rel/abs tolerance, keeping the same expected value but allowing a
small tolerance.
```python
# TO DO: Check if breaks are adhered to
assert routing_solution.get_status() == 0
```
Break tests are missing constraint-validation assertions.
Line 496 and Line 564 explicitly leave break adherence unverified, so these tests currently only check “solver runs” and not whether computed routes satisfy break windows/durations.
✅ Suggested assertion pattern

```diff
-# TO DO: Check if breaks are adhered to
-assert routing_solution.get_status() == 0
+assert routing_solution.get_status() == 0
+routes = routing_solution.get_route().to_pandas()
+break_rows = routes[routes["type"] == "Break"]
+assert not break_rows.empty
+# Validate break arrival stamps are within configured windows
+# (adapt indexing by break dimension as needed for this route schema).
```

As per coding guidelines, `**/*test*.{cpp,cu,py}` must validate numerical correctness of optimization results, not just successful execution.

Also applies to: 564-565
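The suggested pattern can be sketched on a toy route table; plain dicts stand in for the cudf/pandas route frame, and the column names ("truck_id", "type", "arrival_stamp") are assumptions about the actual route schema:

```python
# Toy route rows for one vehicle; values are illustrative.
route_rows = [
    {"truck_id": 0, "type": "Depot",    "arrival_stamp": 0.0},
    {"truck_id": 0, "type": "Break",    "arrival_stamp": 12.0},
    {"truck_id": 0, "type": "Delivery", "arrival_stamp": 20.0},
]
break_earliest, break_latest = 10.0, 15.0  # configured break window

break_rows = [r for r in route_rows if r["type"] == "Break"]
assert break_rows, "expected at least one scheduled break"
for row in break_rows:
    # Each scheduled break must start inside its configured window.
    assert break_earliest <= row["arrival_stamp"] <= break_latest
```

The same loop generalizes to multiple break dimensions by keying the window lookup on the break's dimension and vehicle id.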
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py` around lines 496
- 497, The test currently only checks routing_solution.get_status() == 0 and
misses validating that break constraints are respected; update the test to
iterate the returned routes/trips from the solution (use the existing
routing_solution or solution accessor methods) and assert for each scheduled
break that its start time lies within the specified break window and that its
duration matches the expected break duration (and that breaks do not overlap
work segments); use the route/vehicle identifiers and break objects already
present in the test fixtures (e.g., the route/trip list, break.window.start/end,
and break.duration fields) to locate and validate each break, and add the same
assertions for the other test instance that currently omits them to ensure
numerical correctness of break adherence.
```python
with warnings.catch_warnings(record=True) as w:
    dm.add_cost_matrix(cost_matrix)
    assert "Casting cost_matrix from int64 to float32" in str(w[0].message)

    dm.set_order_time_windows(
        constraints["earliest"], constraints["latest"]
    )

    dm.set_order_service_times(constraints["service"])
    assert "Casting service_times from float64 to int32" in str(
        w[1].message
    )
```
Warning assertions are order-dependent and flaky.
This test assumes exact warning positions (w[0], w[1]), but additional cast warnings can shift ordering and break the test intermittently.
🛠️ Proposed stabilization

```diff
 with warnings.catch_warnings(record=True) as w:
+    warnings.simplefilter("always")
     dm.add_cost_matrix(cost_matrix)
-    assert "Casting cost_matrix from int64 to float32" in str(w[0].message)
+    msgs = [str(item.message) for item in w]
+    assert any("Casting cost_matrix from int64 to float32" in m for m in msgs)
     dm.set_order_time_windows(
         constraints["earliest"], constraints["latest"]
     )
     dm.set_order_service_times(constraints["service"])
-    assert "Casting service_times from float64 to int32" in str(
-        w[1].message
-    )
+    msgs = [str(item.message) for item in w]
+    assert any("Casting service_times from float64 to int32" in m for m in msgs)
```
+ assert any("Casting service_times from float64 to int32" in m for m in msgs)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py` around lines 25
- 36, The assertions in test_warnings_exceptions.py rely on exact positions
w[0]/w[1] which is flaky; update the checks to search the captured warnings list
for messages containing the expected substrings instead of indexing. For
example, after calling dm.add_cost_matrix, dm.set_order_time_windows, and
dm.set_order_service_times, replace uses of w[0].message and w[1].message with
assertions using any("Casting cost_matrix from int64 to float32" in
str(m.message) for m in w) and any("Casting service_times from float64 to int32"
in str(m.message) for m in w) so the test passes regardless of warning order.
/ok to test 4a6e0f5
Actionable comments posted: 1
🧹 Nitpick comments (2)
python/cuopt/cuopt/tests/routing/API_COVERAGE.md (1)
13-17: Prefer concrete test references over "Most"/"All"/"etc.".

This file is acting as an audit trail, so entries like "Most routing tests", "All solve tests", and repeated "etc." are hard to verify and will drift quickly. Either list a small set of canonical test names for each API or explicitly mark the column as non-exhaustive examples.

Also applies to: 33-34, 40-56, 113-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/routing/API_COVERAGE.md` around lines 13 - 17, The table entries for API functions like get_vehicle_count(), get_total_objective(), get_objective_values(), get_route(), and get_status() use vague references ("Most routing tests", "All solve tests", "etc.")—replace these with either a short canonical list of concrete test names that exercise each API (e.g., cite test_solver, test_vehicle_properties, test_solver_settings, test_batch_solve, test_initial_solutions) or annotate the column header as "examples (non-exhaustive)"; update the rows you touched (and the similar ranges noted: 33-34, 40-56, 113-124) so every cell contains either specific test identifiers or the explicit "non-exhaustive examples" label to avoid ambiguous references.

cpp/tests/routing/utilities/check_constraints.cu (1)
171-171: Remove commented-out debug statement.

The commented-out debug print statement should be removed to keep the test code clean.

🧹 Proposed fix

```diff
 if (use_uniform && break_dim < static_cast<int>(uniform_break_earliest_h.size())) {
-  //std::cout<<"VEHID: "<<id<<" ARRIVAL_REAL: "<<arrival<<" EARLIEST: "<<uniform_break_earliest_h[break_dim][id]<<" LATEST: "<<uniform_break_latest_h[break_dim][id]<<"\n";
   ASSERT_GE(arrival,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/tests/routing/utilities/check_constraints.cu` at line 171, Remove the commented-out debug cout statement containing std::cout<<"VEHID: "<<id<<" ARRIVAL_REAL: "<<arrival<<" EARLIEST: "<<uniform_break_earliest_h[break_dim][id]<<" LATEST: "<<uniform_break_latest_h[break_dim][id] from the test source (check_constraints.cu) to clean up the test code; simply delete that entire commented line so no leftover debug prints remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt/cuopt/tests/routing/API_COVERAGE.md`:
- Line 3: The documentation references the old module name vehicle_routing.py;
update all occurrences to routing.py in the API_COVERAGE.md file so the coverage
inventory points to the actual module under python/cuopt/cuopt/routing.py —
search for and replace "vehicle_routing.py" with "routing.py" in the sections
around the top summary (line ~3), the block around line ~27, and the section
spanning lines ~93-109 to ensure all mentions are corrected.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c4c984c8-a685-4a49-97ec-1a94272e6802
📒 Files selected for processing (6)
- cpp/tests/routing/unit_tests/breaks.cu
- cpp/tests/routing/unit_tests/heterogenous_breaks.cu
- cpp/tests/routing/utilities/check_constraints.cu
- cpp/tests/routing/utilities/test_utilities.hpp
- python/cuopt/cuopt/tests/routing/API_COVERAGE.md
- python/cuopt/cuopt/tests/routing/test_vehicle_properties.py
🚧 Files skipped from review as they are similar to previous changes (1)
- python/cuopt/cuopt/tests/routing/test_vehicle_properties.py
```diff
@@ -0,0 +1,124 @@
+# Routing Python Tests – API Coverage
+
+Summary of which APIs from `assignment.py` and `vehicle_routing.py` are exercised by tests under `cuopt/tests/routing/`.
```
Update the module name to routing.py.
These sections point readers to vehicle_routing.py, but the routing API covered by this PR lives under python/cuopt/cuopt/routing.py. Leaving the old filename here makes the coverage inventory misleading.
Also applies to: 27-27, 93-109
```python
SOLOMON_DATASETS_PATH = os.path.join(utils.RAPIDS_DATASET_ROOT_DIR, "solomon/In/")

"""def test_solomon():
    SOLOMON_DATASET = "r107.txt"
```
removed the comment block, not sure why it still shows up in the snippet as commented
it's our single e2e large test so it needed to be uncommented instead of deleted. Thanks
Actionable comments posted: 3
🧹 Nitpick comments (1)
python/cuopt/cuopt/tests/routing/test_solver.py (1)
60-64: Add an infeasible PDPTW case alongside this happy-path test.

The new pickup/delivery + transit-time path only exercises a feasible instance. An impossible-window or over-capacity case would catch validation regressions that this test currently misses.
As per coding guidelines, "test degenerate cases (infeasible, unbounded, empty, singleton problems)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 60 - 64, Add a second unit that asserts an infeasible PDPTW alongside the existing test_pdptw: create a new test (e.g., test_pdptw_infeasible) that reuses the same 5-location topology but modifies either time windows (make a delivery window end before its pickup's earliest time or set transit times so arrival must be late) or vehicle capacity (set vehicle capacity lower than total pickup demand) to guarantee infeasibility, call the same solver path used in test_pdptw, and assert the solver returns an infeasible/failure status (or raises the expected validation exception) instead of a feasible route; reference the existing test_pdptw and the solver invocation used there to mirror setup and validation logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt/cuopt/tests/routing/test_solver.py`:
- Around line 53-57: The test currently only validates numeric correctness when
vehicle_size == 11, leaving vehicle_size == 10 or 12 unchecked; update the
assertions in test_solver.py to assert final_cost is within an acceptable
tolerance for all returned vehicle counts (inspect and use vehicle_size,
final_cost, ref_cost, and cu_status to implement). Replace the single
conditional with explicit checks for each possible vehicle_size (e.g., if
vehicle_size == 10, elif == 11, elif == 12) or a table-driven mapping of
expected ref_cost/tolerance per vehicle_size and assert math.fabs((final_cost -
expected_ref) / expected_ref) < tolerance, keeping the cu_status and
vehicle_size upper-bound assertions intact.
- Around line 120-136: The test currently checks pickup-before-delivery ordering
but never verifies both endpoints of pairs are served by the same vehicle;
update the checks using solution.get_infeasible_orders() and
solution.get_route() so that before asserting idx_1 < idx_3 (and idx_2 < idx_4)
you first confirm both order IDs are not listed as infeasible and both appear on
the same truck (compare their truck_id values in the dataframe filtered from
get_route()); only perform the positional index assertion when both endpoints
are present and assigned to the same truck to avoid false passes when an
endpoint is dropped or split across vehicles.
- Around line 18-23: The test currently calls
utils.convert_solomon_inp_file_to_yaml(SOLOMON_DATASETS_PATH + SOLOMON_DATASET)
which writes r107.yaml into the shared dataset tree; change the test to generate
the YAML into a test-local temp directory instead of the shared dataset. Call
convert_solomon_inp_file_to_yaml with a temp output path (use the test's
tmp_path/tmpdir fixture) and then pass that temp YAML path into
utils.create_from_yaml_file (rather than SOLOMON_YAML), or update
convert_solomon_inp_file_to_yaml to accept an explicit output file/directory and
use that in this test; ensure references to SOLOMON_DATASETS_PATH,
SOLOMON_DATASET, SOLOMON_YAML are replaced with the temp path so no files are
written into the shared dataset.
---
Nitpick comments:
In `@python/cuopt/cuopt/tests/routing/test_solver.py`:
- Around line 60-64: Add a second unit that asserts an infeasible PDPTW
alongside the existing test_pdptw: create a new test (e.g.,
test_pdptw_infeasible) that reuses the same 5-location topology but modifies
either time windows (make a delivery window end before its pickup's earliest
time or set transit times so arrival must be late) or vehicle capacity (set
vehicle capacity lower than total pickup demand) to guarantee infeasibility,
call the same solver path used in test_pdptw, and assert the solver returns an
infeasible/failure status (or raises the expected validation exception) instead
of a feasible route; reference the existing test_pdptw and the solver invocation
used there to mirror setup and validation logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 47439409-c808-41f7-95e6-7e9bb42984d1
📒 Files selected for processing (1)
python/cuopt/cuopt/tests/routing/test_solver.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/tests/routing/unit_tests/breaks.cu`:
- Around line 406-417: The loop creates temporary device copies v_e, v_l, v_d
and passes their raw pointers to data_model.add_break_dimension, but those
temporaries are destroyed each iteration leaving break_dimensions_ with dangling
device pointers; fix by hoisting storage so the device vectors outlive solve —
declare a container (e.g., std::vector<cuopt::device_vector<int>>
device_storage) before the for-loop, move or emplace each v_e, v_l, v_d into
device_storage and pass the .data() from the stored objects to
add_break_dimension, ensuring device_storage remains alive until after calling
solve so break_dimensions_ pointers remain valid.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fca5cdba-aa38-4d3a-a11a-6c562cd15609
📒 Files selected for processing (1)
cpp/tests/routing/unit_tests/breaks.cu
```cpp
for (int b = 0; b < num_breaks; ++b) {
  std::vector<int> e(vehicle_num), l(vehicle_num), d(vehicle_num);
  for (int v = 0; v < vehicle_num; ++v) {
    e[v] = break_earliest[v * num_breaks + b];
    l[v] = break_latest[v * num_breaks + b];
    d[v] = break_duration[v * num_breaks + b];
  }
  auto v_e = cuopt::device_copy(e, stream);
  auto v_l = cuopt::device_copy(l, stream);
  auto v_d = cuopt::device_copy(d, stream);
  data_model.add_break_dimension(v_e.data(), v_l.data(), v_d.data());
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check how add_break_dimension stores data (pointer vs copy)
rg -n -A10 "add_break_dimension" --type-add 'cuda:*.{cu,cuh}' --type-add 'cpp:*.{cpp,hpp,h}' -t cuda -t cpp | head -100
# Check device_copy return type
rg -n -A5 "device_copy" --type-add 'cuda:*.{cu,cuh}' --type-add 'cpp:*.{cpp,hpp,h}' -t cuda -t cpp | grep -E "(rmm|device_uvector|return)" | head -20
```

Repository: NVIDIA/cuopt
Length of output: 10389
🏁 Script executed:
```shell
#!/bin/bash
# Get full implementation of add_break_dimension
cat -n cpp/src/routing/data_model_view.cu | sed -n '113,160p'
# Check if there's any storage of pointers in data_model_view_t
rg -n "break_earliest|break_latest|break_duration" cpp/src/routing/data_model_view.cu | head -30
```

Repository: NVIDIA/cuopt
Length of output: 4201
🏁 Script executed:
```shell
#!/bin/bash
# Check how break_dimensions_ is used during solve
rg -n "break_dimensions_" cpp/src/routing/data_model_view.cu | head -20
# Check the definition of break_dimensions_ to understand the data structure
rg -n "break_dimensions_" cpp/include/cuopt/routing/data_model_view.hpp | head -20
```

Repository: NVIDIA/cuopt
Length of output: 316
🏁 Script executed:
```shell
#!/bin/bash
# Find break_dimension_t definition
rg -n "struct break_dimension_t|class break_dimension_t" cpp/include cpp/src -A 10 | head -40
```

Repository: NVIDIA/cuopt
Length of output: 1023
🏁 Script executed:
```shell
#!/bin/bash
# Check the full non_uniform_breaks test to see solve call
sed -n '329,425p' cpp/tests/routing/unit_tests/breaks.cu | cat -n
```

Repository: NVIDIA/cuopt
Length of output: 4815
Critical use-after-free: device memory freed before solve accesses stored pointers.
Device copies v_e, v_l, v_d are destroyed at the end of each loop iteration, but add_break_dimension stores raw pointers to this memory in break_dimensions_ (line 146 of data_model_view.cu). When solve is called at line 94, these pointers become dangling references to freed device memory.
All other tests in this file keep device memory alive until after solve completes (e.g., uniform_breaks at lines 42–47).
🐛 Proposed fix: keep device memory alive until after solve

```diff
+  std::vector<rmm::device_uvector<int>> break_device_data;
+  break_device_data.reserve(num_breaks * 3);
+
   for (int b = 0; b < num_breaks; ++b) {
     std::vector<int> e(vehicle_num), l(vehicle_num), d(vehicle_num);
     for (int v = 0; v < vehicle_num; ++v) {
       e[v] = break_earliest[v * num_breaks + b];
       l[v] = break_latest[v * num_breaks + b];
       d[v] = break_duration[v * num_breaks + b];
     }
-    auto v_e = cuopt::device_copy(e, stream);
-    auto v_l = cuopt::device_copy(l, stream);
-    auto v_d = cuopt::device_copy(d, stream);
-    data_model.add_break_dimension(v_e.data(), v_l.data(), v_d.data());
+    break_device_data.push_back(cuopt::device_copy(e, stream));
+    break_device_data.push_back(cuopt::device_copy(l, stream));
+    break_device_data.push_back(cuopt::device_copy(d, stream));
+    data_model.add_break_dimension(
+      break_device_data[b * 3].data(),
+      break_device_data[b * 3 + 1].data(),
+      break_device_data[b * 3 + 2].data());
   }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
std::vector<rmm::device_uvector<int>> break_device_data;
break_device_data.reserve(num_breaks * 3);
for (int b = 0; b < num_breaks; ++b) {
  std::vector<int> e(vehicle_num), l(vehicle_num), d(vehicle_num);
  for (int v = 0; v < vehicle_num; ++v) {
    e[v] = break_earliest[v * num_breaks + b];
    l[v] = break_latest[v * num_breaks + b];
    d[v] = break_duration[v * num_breaks + b];
  }
  break_device_data.push_back(cuopt::device_copy(e, stream));
  break_device_data.push_back(cuopt::device_copy(l, stream));
  break_device_data.push_back(cuopt::device_copy(d, stream));
  data_model.add_break_dimension(
    break_device_data[b * 3].data(),
    break_device_data[b * 3 + 1].data(),
    break_device_data[b * 3 + 2].data());
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/tests/routing/unit_tests/breaks.cu` around lines 406 - 417, The loop
creates temporary device copies v_e, v_l, v_d and passes their raw pointers to
data_model.add_break_dimension, but those temporaries are destroyed each
iteration leaving break_dimensions_ with dangling device pointers; fix by
hoisting storage so the device vectors outlive solve — declare a container
(e.g., std::vector<cuopt::device_vector<int>> device_storage) before the
for-loop, move or emplace each v_e, v_l, v_d into device_storage and pass the
.data() from the stored objects to add_break_dimension, ensuring device_storage
remains alive until after calling solve so break_dimensions_ pointers remain
valid.
|
/ok to test 0826f7d

/ok to test 8b5914e
Actionable comments posted: 6
♻️ Duplicate comments (3)
python/cuopt/cuopt/tests/routing/test_vehicle_properties.py (1)
**499-521**: ⚠️ Potential issue | 🟠 Major — Break duration is still not validated.
The new assertions cover break count, locations, and window membership, but a solution with the wrong break length would still pass. Please also verify each break consumes the configured duration, e.g. from route timestamps or the break-specific fields exposed by the solution schema.
As per coding guidelines, `**/*test*.{cpp,cu,py}` must validate numerical correctness of optimization results (not just "runs without error").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py` around lines 499 - 521, The test currently checks break count, location and time window but not break length; update the loop that inspects routing_solution.get_route().to_pandas() (the routes DataFrame used with break_locations_1_list, break_times_1/2 and num_v_type_1) to also validate each Break row's duration: for each row where routes["type"] == "Break" compute the break duration from available timestamps (e.g. routes["departure_stamp"] - routes["arrival_stamp"] or routes["end_stamp"] - routes["arrival_stamp"]) or from any break-specific field exposed by the schema (e.g. routes["service_time"] / routes["break_duration"]) and assert it equals the expected configured duration for that break_dim (use break_durations_1[break_dim] for truck_id < num_v_type_1, else break_durations_2[break_dim]); fail the test if the duration does not match.

cpp/tests/routing/unit_tests/breaks.cu (1)
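The duration check described above can be sketched with plain pandas. The frame is a hypothetical stand-in for `routing_solution.get_route().to_pandas()`; the column names mirror the route schema the tests use, and the 15-unit duration is an assumed configured value, not one taken from the repository:

```python
import pandas as pd

# Hypothetical stand-in for routing_solution.get_route().to_pandas()
routes = pd.DataFrame({
    "truck_id":        [0, 0, 0],
    "type":            ["Depot", "Break", "Depot"],
    "arrival_stamp":   [0.0, 10.0, 40.0],
    "departure_stamp": [0.0, 25.0, 40.0],
})
expected_break_duration = 15.0  # assumed configured duration for this break

# Duration observed in the solution: departure minus arrival for Break rows
break_rows = routes[routes["type"] == "Break"]
observed = (break_rows["departure_stamp"] - break_rows["arrival_stamp"]).tolist()
assert observed == [expected_break_duration]
```

In the real test the expected value would come from the configured `break_durations_1`/`break_durations_2` arrays rather than a literal.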
**410-421**: ⚠️ Potential issue | 🔴 Critical — Use-after-free: device memory freed before solve accesses stored pointers.
The device copies `v_e`, `v_l`, `v_d` are destroyed at the end of each loop iteration, but `add_break_dimension` stores raw pointers to this memory. When `solve` is called at line 426, these pointers are dangling references to freed device memory.
Compare with the `uniform_breaks` test (lines 308-318), which correctly keeps device memory alive until after `solve` completes.

🐛 Proposed fix: keep device memory alive until after solve

```diff
+  std::vector<rmm::device_uvector<int>> break_device_data;
+  break_device_data.reserve(num_breaks * 3);
+
   for (int b = 0; b < num_breaks; ++b) {
     std::vector<int> e(vehicle_num), l(vehicle_num), d(vehicle_num);
     for (int v = 0; v < vehicle_num; ++v) {
       e[v] = break_earliest[v * num_breaks + b];
       l[v] = break_latest[v * num_breaks + b];
       d[v] = break_duration[v * num_breaks + b];
     }
-    auto v_e = cuopt::device_copy(e, stream);
-    auto v_l = cuopt::device_copy(l, stream);
-    auto v_d = cuopt::device_copy(d, stream);
-    data_model.add_break_dimension(v_e.data(), v_l.data(), v_d.data());
+    break_device_data.push_back(cuopt::device_copy(e, stream));
+    break_device_data.push_back(cuopt::device_copy(l, stream));
+    break_device_data.push_back(cuopt::device_copy(d, stream));
+    data_model.add_break_dimension(
+      break_device_data[b * 3].data(),
+      break_device_data[b * 3 + 1].data(),
+      break_device_data[b * 3 + 2].data());
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/tests/routing/unit_tests/breaks.cu` around lines 410 - 421, The test leaks dangling device pointers because v_e, v_l, v_d (returned from cuopt::device_copy inside the for-loop) are destroyed at loop end while data_model.add_break_dimension stores their raw .data() pointers; keep the device buffers alive until after the call to solve by allocating a container outside the loop (e.g., vectors or a tuple list) and push_back each v_e, v_l, v_d into that container instead of allowing them to go out of scope, then call solve, and finally let the container be destroyed (or clear it) after solve completes; refer to device_copy, v_e/v_l/v_d, add_break_dimension and the subsequent solve call when applying the fix.

python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py (1)
**25-36**: ⚠️ Potential issue | 🟡 Minor — Warning assertions are still order-dependent.
This is still indexing `w[0]`/`w[1]`, so an extra warning or a filter change can make the test flaky. Search the captured warnings by message and force the filter to `"always"` inside the context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py` around lines 25 - 36, Inside the warnings.catch_warnings context used around calls to dm.add_cost_matrix, dm.set_order_time_windows, and dm.set_order_service_times, set the warnings filter to "always" and stop indexing w by position; instead iterate or search the captured warnings list `w` for messages containing "Casting cost_matrix from int64 to float32" and "Casting service_times from float64 to int32" to assert presence, so the assertions no longer rely on w[0]/w[1] ordering.
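The order-independent pattern this prompt asks for can be sketched with the stdlib alone; `emit_casting_warnings` is a hypothetical stand-in for the `DataModel` setters that warn on dtype casts:

```python
import warnings

def emit_casting_warnings():
    # Hypothetical stand-in for the DataModel setters that warn on casts
    warnings.warn("Casting cost_matrix from int64 to float32", UserWarning)
    warnings.warn("Casting service_times from float64 to int32", UserWarning)

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")  # record every warning, however many fire
    emit_casting_warnings()

messages = [str(item.message) for item in w]
# Membership checks replace the brittle positional w[0]/w[1] indexing
assert any("Casting cost_matrix from int64 to float32" in m for m in messages)
assert any("Casting service_times from float64 to int32" in m for m in messages)
```

With `simplefilter("always")` the assertions keep passing even if an unrelated warning is emitted first or the two casts fire in a different order.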
🧹 Nitpick comments (3)
cpp/tests/routing/utilities/check_constraints.cu (3)
**171-173**: Remove or formalize commented-out debug output.
Consider removing this debug statement or using a proper logging/debug macro if it's useful for troubleshooting test failures.

```diff
-  if (use_uniform && break_dim < static_cast<int>(uniform_break_earliest_h.size())) {
-    // std::cout<<"VEHID: "<<id<<" ARRIVAL_REAL: "<<arrival<<" EARLIEST:
-    // "<<uniform_break_earliest_h[break_dim][id]<<" LATEST:
-    // "<<uniform_break_latest_h[break_dim][id]<<"\n";
+  if (use_uniform && break_dim < static_cast<int>(uniform_break_earliest_h.size())) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/tests/routing/utilities/check_constraints.cu` around lines 171 - 173, Remove or formalize the commented-out debug print: either delete the std::cout line entirely or replace it with a proper test/debug logging mechanism (e.g., a LOG_DEBUG or ASSERT_MESSAGE) that uses the same variables (id, arrival, break_dim, uniform_break_earliest_h, uniform_break_latest_h) so diagnostic output is available when tests fail; ensure the replacement follows project logging conventions and is gated by a debug/log level flag.
**114-118**: Unused `d_ptr` variable from structured binding.
The structured binding captures `d_ptr` (likely break duration) but it's never used. If break duration validation is intentionally omitted, mark the binding as unused explicitly or add a comment to clarify; note that `std::ignore` cannot appear inside a structured binding, so a `(void)` cast is the portable way to silence the warning. Otherwise, this may indicate missing validation logic.

```diff
   auto [e_ptr, l_ptr, d_ptr] = dim.get_breaks();
+  (void)d_ptr;  // break duration intentionally not validated here
```

Alternatively, if duration should be validated:

```cpp
// TODO: Add break duration validation if needed
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/tests/routing/utilities/check_constraints.cu` around lines 114 - 118, The structured binding from dim.get_breaks() binds e_ptr, l_ptr and d_ptr but d_ptr is never used; either ignore it explicitly or handle/validate it: update the binding at the dim.get_breaks() call to use std::ignore for the third element (or cast/use (void)d_ptr) if break duration is intentionally not needed, or add validation logic for d_ptr (e.g., check duration constraints) and then incorporate it where uniform_break_earliest_h/uniform_break_latest_h are populated so d_ptr is no longer unused.
**193-200**: Device-to-host copy inside loop may cause synchronization overhead.
`cuopt::host_copy(b.locations_, stream)` is called inside the break iteration loop. While acceptable for test code, this triggers a device-to-host synchronization per break with location constraints. Consider caching non-uniform break locations before the loop if vehicles have many breaks.

♻️ Optional: Pre-cache locations outside the inner loop

```cpp
// Before the loop over nodes, pre-copy all non-uniform break locations for this vehicle
std::vector<std::vector<i_t>> non_uniform_locs_cache;
if (use_non_uniform) {
  auto const& breaks = non_uniform.at(id);
  for (auto const& b : breaks) {
    if (b.locations_.size() > 0) {
      non_uniform_locs_cache.push_back(cuopt::host_copy(b.locations_, stream));
    } else {
      non_uniform_locs_cache.push_back({});
    }
  }
}
// Then use non_uniform_locs_cache[break_dim] inside the loop
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/tests/routing/utilities/check_constraints.cu` around lines 193 - 200, The cuopt::host_copy(b.locations_, stream) call inside the break iteration causes repeated device-to-host syncs; pre-copy non-uniform break locations for each vehicle before iterating breaks (e.g. iterate non_uniform.at(id) and cache cuopt::host_copy results for each b.locations_ into a vector), then inside the loop use the cached vector by index (use_non_uniform/non_uniform, break_dim, id, b.locations_) instead of calling cuopt::host_copy repeatedly to eliminate per-break synchronization overhead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt/cuopt/tests/routing/test_initial_solutions.py`:
- Around line 80-85: The test uses integer Series for boolean flags; change the
inputs to explicit boolean Series (e.g., cudf.Series([True, True, True, True,
True], dtype=bool)) when calling d.set_skip_first_trips and
d.set_drop_return_trips, keep the value equality checks (e.g.,
(d.get_skip_first_trips() == skip_first).all()), and add dtype assertions to
verify the API contract (e.g., assert d.get_skip_first_trips().dtype == bool and
assert d.get_drop_return_trips().dtype == bool) so both setters and getters are
validated for boolean dtype.
In `@python/cuopt/cuopt/tests/routing/test_solver.py`:
- Around line 4-5: The test fails due to a missing import for the math module
used by math.fabs; add an import for the math module alongside the existing
imports (near the top of python/cuopt/cuopt/tests/routing/test_solver.py where
numpy is imported) so references to math.fabs() resolve at runtime.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py`:
- Around line 728-735: The test currently iterates only over truck_ids present
in solution_cudf and misses vehicles that should be in the fleet but have empty
routes; update the test to first assert that sol.get_route() contains entries
for all expected vehicle IDs (or explicitly represents empty routes) by
comparing solution_cudf["truck_id"].unique().to_arrow().to_pylist() to the
expected fleet ID list (or checking that missing IDs are represented as
empty-route records), then proceed to the per-vehicle loop that checks route
lengths using solution_cudf and sol.get_route(); reference the symbols
solution_cudf, sol.get_route(), and "truck_id" when making these assertions so
empty-route degenerate cases are validated before the existing route_len
assertions.
- Around line 360-365: The test currently only checks solve success and vehicle
count; extend it to assert the actual assignment for vehicle 0 so the single
truck serves all three orders. After routing.Solve(d, s) and obtaining
routing_solution, query the solution's assignment/route API on routing_solution
(e.g., get_assignments(), get_route(vehicle_id), or equivalent) for vehicle 0
and assert that the returned stops include all configured order IDs/locations
(count == 3 and membership equals the test orders). Keep existing checks
(get_vehicle_count(), get_status()) and add the explicit assertions against
routing_solution's route/assignments for vehicle 0.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py`:
- Around line 51-70: Replace the broad pytest.raises(Exception) with the
concrete validation exception that routing.DataModel.add_cost_matrix actually
raises (for example pytest.raises(ValueError) or the specific ValidationError
class used by the module) in the three assertions around the first block and the
similar assertions later (the other block noted at 93-123); update all
occurrences so tests assert the exact exception type from
DataModel.add_cost_matrix instead of Exception.
- Around line 113-123: Add a new assertion in test_range to cover the exact
upper-bound boundary case: after the existing case with value 4, add another
with order_locations containing the value 3 (e.g., cudf.Series([0,1,2,3,1])) and
assert that dm.set_order_locations(...) raises the same Exception and message;
this ensures DataModel.set_order_locations enforces indices < num_locations
(DataModel, set_order_locations, test_range).
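The boundary case this entry wants covered can be sketched with a hypothetical `validate_order_locations` helper standing in for the real `DataModel.set_order_locations` bounds check (the error message is an assumption):

```python
def validate_order_locations(order_locations, num_locations):
    # Hypothetical bounds check mirroring what set_order_locations should enforce
    for idx in order_locations:
        if not (0 <= idx < num_locations):
            raise ValueError(
                f"order location {idx} out of range [0, {num_locations})"
            )

num_locations = 3
validate_order_locations([0, 1, 2, 1], num_locations)  # all indices in range

boundary_error = None
try:
    # 3 == num_locations sits exactly on the upper bound and must be rejected
    validate_order_locations([0, 1, 2, 3, 1], num_locations)
except ValueError as exc:
    boundary_error = str(exc)

assert boundary_error == "order location 3 out of range [0, 3)"
```

Testing the exact bound (`idx == num_locations`) catches off-by-one mistakes that a far-out-of-range value like 4 would not.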
---
Duplicate comments:
In `@cpp/tests/routing/unit_tests/breaks.cu`:
- Around line 410-421: The test leaks dangling device pointers because v_e, v_l,
v_d (returned from cuopt::device_copy inside the for-loop) are destroyed at loop
end while data_model.add_break_dimension stores their raw .data() pointers; keep
the device buffers alive until after the call to solve by allocating a container
outside the loop (e.g., vectors or a tuple list) and push_back each v_e, v_l,
v_d into that container instead of allowing them to go out of scope, then call
solve, and finally let the container be destroyed (or clear it) after solve
completes; refer to device_copy, v_e/v_l/v_d, add_break_dimension and the
subsequent solve call when applying the fix.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py`:
- Around line 499-521: The test currently checks break count, location and time
window but not break length; update the loop that inspects
routing_solution.get_route().to_pandas() (the routes DataFrame used with
break_locations_1_list, break_times_1/2 and num_v_type_1) to also validate each
Break row's duration: for each row where routes["type"] == "Break" compute the
break duration from available timestamps (e.g. routes["departure_stamp"] -
routes["arrival_stamp"] or routes["end_stamp"] - routes["arrival_stamp"]) or
from any break-specific field exposed by the schema (e.g. routes["service_time"]
/ routes["break_duration"]) and assert it equals the expected configured
duration for that break_dim (use break_durations_1[break_dim] for truck_id <
num_v_type_1, else break_durations_2[break_dim]); fail the test if the duration
does not match.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py`:
- Around line 25-36: Inside the warnings.catch_warnings context used around
calls to dm.add_cost_matrix, dm.set_order_time_windows, and
dm.set_order_service_times, set the warnings filter to "always" and stop
indexing w by position; instead iterate or search the captured warnings list `w`
for messages containing "Casting cost_matrix from int64 to float32" and "Casting
service_times from float64 to int32" to assert presence, so the assertions no
longer rely on w[0]/w[1] ordering.
---
Nitpick comments:
In `@cpp/tests/routing/utilities/check_constraints.cu`:
- Around line 171-173: Remove or formalize the commented-out debug print: either
delete the std::cout line entirely or replace it with a proper test/debug
logging mechanism (e.g., a LOG_DEBUG or ASSERT_MESSAGE) that uses the same
variables (id, arrival, break_dim, uniform_break_earliest_h,
uniform_break_latest_h) so diagnostic output is available when tests fail;
ensure the replacement follows project logging conventions and is gated by a
debug/log level flag.
- Around line 114-118: The structured binding from dim.get_breaks() binds e_ptr,
l_ptr and d_ptr but d_ptr is never used; either ignore it explicitly or
handle/validate it: update the binding at the dim.get_breaks() call to use
std::ignore for the third element (or cast/use (void)d_ptr) if break duration is
intentionally not needed, or add validation logic for d_ptr (e.g., check
duration constraints) and then incorporate it where
uniform_break_earliest_h/uniform_break_latest_h are populated so d_ptr is no
longer unused.
- Around line 193-200: The cuopt::host_copy(b.locations_, stream) call inside
the break iteration causes repeated device-to-host syncs; pre-copy non-uniform
break locations for each vehicle before iterating breaks (e.g. iterate
non_uniform.at(id) and cache cuopt::host_copy results for each b.locations_ into
a vector), then inside the loop use the cached vector by index
(use_non_uniform/non_uniform, break_dim, id, b.locations_) instead of calling
cuopt::host_copy repeatedly to eliminate per-break synchronization overhead.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: cfc3dd8f-a31d-4c6e-9bee-b2e9129d9303
📒 Files selected for processing (9)
cpp/tests/routing/unit_tests/breaks.cu
cpp/tests/routing/unit_tests/heterogenous_breaks.cu
cpp/tests/routing/utilities/check_constraints.cu
python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx
python/cuopt/cuopt/tests/routing/test_data_model.py
python/cuopt/cuopt/tests/routing/test_initial_solutions.py
python/cuopt/cuopt/tests/routing/test_solver.py
python/cuopt/cuopt/tests/routing/test_vehicle_properties.py
python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py
🚧 Files skipped from review as they are similar to previous changes (1)
- python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx
```python
skip_first = cudf.Series([1, 1, 1, 1, 1])
drop_return = cudf.Series([1, 1, 1, 1, 1])
d.set_skip_first_trips(skip_first)
d.set_drop_return_trips(drop_return)
assert (d.get_skip_first_trips() == skip_first).all()
assert (d.get_drop_return_trips() == drop_return).all()
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify the documented dtype contract for these routing accessors.
rg -n -C3 'def set_skip_first_trips|def set_drop_return_trips|def get_skip_first_trips|def get_drop_return_trips|dtype - bool|bool type' python/cuopt/cuopt/routing/vehicle_routing.py
```

Repository: NVIDIA/cuopt
Length of output: 1598
Use boolean series for skip/drop flags; add dtype assertions.
The `set_skip_first_trips()` and `set_drop_return_trips()` methods are documented to accept a `cudf.Series` of dtype `bool`, and their getters are documented to return `bool` type. However, this test passes `cudf.Series([1, 1, 1, 1, 1])` (integer dtype). Since `True == 1`, the equality checks pass even if the API returns or accepts the wrong dtype, leaving the contract unverified.

Fix

```diff
-    skip_first = cudf.Series([1, 1, 1, 1, 1])
-    drop_return = cudf.Series([1, 1, 1, 1, 1])
+    skip_first = cudf.Series([True] * vehicle_num, dtype="bool")
+    drop_return = cudf.Series([True] * vehicle_num, dtype="bool")
     d.set_skip_first_trips(skip_first)
     d.set_drop_return_trips(drop_return)
     assert (d.get_skip_first_trips() == skip_first).all()
     assert (d.get_drop_return_trips() == drop_return).all()
+    assert str(d.get_skip_first_trips().dtype) == "bool"
+    assert str(d.get_drop_return_trips().dtype) == "bool"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_initial_solutions.py` around lines 80 -
85, The test uses integer Series for boolean flags; change the inputs to
explicit boolean Series (e.g., cudf.Series([True, True, True, True, True],
dtype=bool)) when calling d.set_skip_first_trips and d.set_drop_return_trips,
keep the value equality checks (e.g., (d.get_skip_first_trips() ==
skip_first).all()), and add dtype assertions to verify the API contract (e.g.,
assert d.get_skip_first_trips().dtype == bool and assert
d.get_drop_return_trips().dtype == bool) so both setters and getters are
validated for boolean dtype.
```python
routing_solution = routing.Solve(d, s)
vehicle_count = routing_solution.get_vehicle_count()
cu_status = routing_solution.get_status()

assert cu_status == 0
assert vehicle_count == 1
```
Assert the singleton assignment itself.
This corner-case test still only proves the solve succeeded. It would pass even if the only vehicle dropped one of the three orders or ignored the match constraint. Please assert that truck 0 actually serves all configured order locations.
As per coding guidelines, **/*test*.{cpp,cu,py} must validate numerical correctness of optimization results (not just 'runs without error') and test degenerate cases, including singleton problems.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py` around lines 360
- 365, The test currently only checks solve success and vehicle count; extend it
to assert the actual assignment for vehicle 0 so the single truck serves all
three orders. After routing.Solve(d, s) and obtaining routing_solution, query
the solution's assignment/route API on routing_solution (e.g.,
get_assignments(), get_route(vehicle_id), or equivalent) for vehicle 0 and
assert that the returned stops include all configured order IDs/locations (count
== 3 and membership equals the test orders). Keep existing checks
(get_vehicle_count(), get_status()) and add the explicit assertions against
routing_solution's route/assignments for vehicle 0.
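The membership assertion described above can be sketched against a hypothetical single-vehicle route frame in the `truck_id`/`route` schema the tests use; pandas stands in for cuDF, and the stop values are illustrative:

```python
import pandas as pd

# Hypothetical single-vehicle solution in the truck_id/route schema
routes = pd.DataFrame({
    "truck_id": [0, 0, 0, 0, 0],
    "route":    [0, 1, 2, 3, 0],  # depot, three orders, back to depot
})
expected_orders = {1, 2, 3}

vehicle_0_stops = routes.loc[routes["truck_id"] == 0, "route"].tolist()
served = set(vehicle_0_stops) - {0}  # drop the depot visits
assert served == expected_orders
```

Unlike a bare status check, this fails if the single truck drops any of the three orders.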
```python
solution_cudf = sol.get_route()
for i, assign in enumerate(
    solution_cudf["truck_id"].unique().to_arrow().to_pylist()
):
    solution_vehicle_x = solution_cudf[solution_cudf["truck_id"] == assign]
    h_route = solution_vehicle_x["route"].to_arrow().to_pylist()
    route_len = len(h_route)
    assert route_len > 3
```
This does not actually check the empty-route behavior.
The loop only inspects truck_ids already present in sol.get_route(). If the regression is that vehicles with empty routes disappear from the returned solution entirely, this test still passes. Add a fleet-wide assertion on the expected vehicle IDs / empty-route representation before checking per-route length.
As per coding guidelines, **/*test*.{cpp,cu,py} must validate numerical correctness of optimization results and test degenerate cases, including empty problems.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py` around lines 728
- 735, The test currently iterates only over truck_ids present in solution_cudf
and misses vehicles that should be in the fleet but have empty routes; update
the test to first assert that sol.get_route() contains entries for all expected
vehicle IDs (or explicitly represents empty routes) by comparing
solution_cudf["truck_id"].unique().to_arrow().to_pylist() to the expected fleet
ID list (or checking that missing IDs are represented as empty-route records),
then proceed to the per-vehicle loop that checks route lengths using
solution_cudf and sol.get_route(); reference the symbols solution_cudf,
sol.get_route(), and "truck_id" when making these assertions so empty-route
degenerate cases are validated before the existing route_len assertions.
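One way to express the fleet-wide check this comment asks for is sketched below. It is a self-contained illustration only: the `solution_rows` list of dicts stands in for the cudf frame returned by `sol.get_route()`, and `check_fleet_coverage` is a hypothetical helper, not part of the cuopt API.

```python
# Hypothetical sketch of the fleet-wide assertion suggested above.
# `solution_rows` stands in for sol.get_route(); plain Python structures
# replace cudf so the logic is self-contained and runnable.

def check_fleet_coverage(solution_rows, expected_fleet_ids):
    """Assert every vehicle in the fleet appears in the solution, so a
    vehicle with an empty route cannot silently disappear from it."""
    observed_ids = {row["truck_id"] for row in solution_rows}
    missing = set(expected_fleet_ids) - observed_ids
    assert not missing, f"vehicles absent from solution: {sorted(missing)}"
    # Only after the fleet-wide check do we inspect per-vehicle routes.
    for vid in observed_ids:
        route = [r["route"] for r in solution_rows if r["truck_id"] == vid]
        assert len(route) > 0

# A solution covering all three vehicles passes:
rows = [
    {"truck_id": 0, "route": 0}, {"truck_id": 0, "route": 2},
    {"truck_id": 1, "route": 1}, {"truck_id": 2, "route": 3},
]
check_fleet_coverage(rows, expected_fleet_ids=[0, 1, 2])

# A solution that drops vehicle 2 now fails loudly instead of passing:
try:
    check_fleet_coverage(rows[:3], expected_fleet_ids=[0, 1, 2])
    raised = False
except AssertionError:
    raised = True
assert raised
```

The point is ordering: the per-vehicle `route_len` loop only runs after the observed truck IDs have been compared against the full expected fleet.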
```python
with pytest.raises(Exception) as exc_info:
    dm = routing.DataModel(3, 3)
    dm.add_cost_matrix(cost_matrix)
assert (
    str(exc_info.value)
    == "Number of locations doesn't match number of locations in matrix"
)
with pytest.raises(Exception) as exc_info:
    dm = routing.DataModel(cost_matrix.shape[0], 3)
    dm.add_cost_matrix(cost_matrix[:3])
assert (
    str(exc_info.value) == "cost matrix is expected to be a square matrix"
)
with pytest.raises(Exception) as exc_info:
    dm = routing.DataModel(cost_matrix.shape[0], 3)
    dm.add_cost_matrix(cost_matrix)
assert (
    str(exc_info.value)
    == "All values in cost matrix must be greater than or equal to zero"
)
```
Assert the concrete validation exception type.
`pytest.raises(Exception)` will also pass on unrelated failures in the validation path. Please pin these to the specific exception the API is expected to raise so the tests fail when the wrong error bubbles out.
Also applies to: 93-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py` around lines 51
- 70, Replace the broad pytest.raises(Exception) with the concrete validation
exception that routing.DataModel.add_cost_matrix actually raises (for example
pytest.raises(ValueError) or the specific ValidationError class used by the
module) in the three assertions around the first block and the similar
assertions later (the other block noted at 93-123); update all occurrences so
tests assert the exact exception type from DataModel.add_cost_matrix instead of
Exception.
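The weakness of the broad pattern can be shown without cuopt at all. In the sketch below, `add_cost_matrix_stub` is a hypothetical stand-in that raises the *wrong* exception type, which a broad `except Exception` happily accepts while a pinned type does not.

```python
# Illustration of the review point, no cuopt required.
# `add_cost_matrix_stub` is a hypothetical stand-in that raises the
# wrong error type, simulating an unrelated failure in the validation path.

def add_cost_matrix_stub(matrix):
    # Suppose a refactor introduced an unrelated failure here:
    raise TypeError("unexpected internal failure")

# Broad pattern: any Exception is accepted, so the wrong error slips by.
broad_caught = False
try:
    add_cost_matrix_stub([[0.0]])
except Exception:
    broad_caught = True
assert broad_caught  # the test would "pass" despite the wrong error

# Pinned pattern: only the documented ValueError is accepted, so the
# TypeError escapes and the test fails, as desired.
pinned_caught = False
wrong_type_escaped = False
try:
    try:
        add_cost_matrix_stub([[0.0]])
    except ValueError:
        pinned_caught = True
except TypeError:
    wrong_type_escaped = True
assert not pinned_caught and wrong_type_escaped
```

In pytest terms this corresponds to replacing `pytest.raises(Exception)` with `pytest.raises(ValueError, match=...)` (or whichever concrete type the cuopt validation layer documents).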
```python
def test_range():
    cost_matrix = cudf.DataFrame([[0, 5.0, 5.0], [5.0, 0, 5.0], [5.0, 5.0, 0]])
    dm = routing.DataModel(cost_matrix.shape[0], 3, 5)
    dm.add_cost_matrix(cost_matrix)
    with pytest.raises(Exception) as exc_info:
        order_locations = cudf.Series([0, 1, 2, 4, 1])
        dm.set_order_locations(order_locations)
    assert (
        str(exc_info.value)
        == "All values in order locations must be less than or equal to 3"
    )
```
Add the location_id == num_locations boundary case.
This failure case uses 4, so it would still pass if set_order_locations() incorrectly accepted 3 for a 3-location matrix. Please add an assertion for the exact upper boundary to lock down the off-by-one behavior here. As per coding guidelines, "Test with free variables, singleton problems, and extreme problem dimensions near resource limits to validate edge case handling."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py` around lines
113 - 123, Add a new assertion in test_range to cover the exact upper-bound
boundary case: after the existing case with value 4, add another with
order_locations containing the value 3 (e.g., cudf.Series([0,1,2,3,1])) and
assert that dm.set_order_locations(...) raises the same Exception and message;
this ensures DataModel.set_order_locations enforces indices < num_locations
(DataModel, set_order_locations, test_range).
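The off-by-one risk is easiest to see with a tiny stand-alone validator. Here `set_order_locations_stub` is a hypothetical mirror of the rule "all order locations must be less than `num_locations`"; only the exact boundary value exposes a `>`-instead-of-`>=` bug.

```python
# Sketch of the boundary case the comment asks to lock down.
# `set_order_locations_stub` is a hypothetical validator mirroring the
# rule that every order location index must be < num_locations.

def set_order_locations_stub(order_locations, num_locations):
    for loc in order_locations:
        if loc >= num_locations:  # a buggy version might use `>` here
            raise ValueError(
                f"order location {loc} out of range for {num_locations} locations"
            )

NUM_LOCATIONS = 3

# Existing case: 4 is clearly out of range and is rejected.
rejected_4 = False
try:
    set_order_locations_stub([0, 1, 2, 4, 1], NUM_LOCATIONS)
except ValueError:
    rejected_4 = True
assert rejected_4

# Boundary case: 3 == num_locations must also be rejected. A `>` bug
# would accept it, and only this exact value exposes that.
rejected_3 = False
try:
    set_order_locations_stub([0, 1, 2, 3, 1], NUM_LOCATIONS)
except ValueError:
    rejected_3 = True
assert rejected_3
```

Adding the `[0, 1, 2, 3, 1]` case to `test_range` alongside the existing `4` case is what pins the `< num_locations` semantics.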
/ok to test 4f6b141

/ok to test ad9efbc
Description
Issue
Checklist