Conversation
📝 Walkthrough

Adds processed-state tracking across the stack: DB annotations and filters to mark images as processed (including null/empty detections), ML pipeline changes to emit null detections, API exposure of per-image and per-collection processed metrics, tests, and UI surfaces to display those metrics.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Pipeline as ML Pipeline
    participant SaveResults as save_results()
    participant DB as Database/Models
    participant API as API Layer
    participant UI as Frontend
    Pipeline->>SaveResults: Return detection results for images
    SaveResults->>SaveResults: create_null_detections_for_undetected_images()
    SaveResults->>DB: Persist detections (including bbox=None)
    DB->>DB: Annotate SourceImage.was_processed via Exists(Detection)
    DB->>DB: Compute collection processed counts (with_source_images_processed_count)
    API->>DB: Query annotated images/collections (with was_processed, processed counts)
    DB-->>API: Return annotated rows
    API->>UI: JSON includes was_processed and source_images_processed_count
    UI->>UI: Render timeline, activity plot, and capture-sets column
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ami/main/api/views.py (1)
378-439: ⚠️ Potential issue | 🟠 Major

Fix `was_processed` for empty intervals (and avoid stale values).

Line 439 uses `image` after the loop; for intervals with no captures (or events with zero captures) this either reuses the prior interval's image or can crash. Accumulate during the loop and keep False when no captures.

🐛 Suggested fix

```diff
@@
     while image_index < len(source_images) and source_images[image_index]["timestamp"] <= interval_end:
         image = source_images[image_index]
@@
         if image["detections_count"] >= max(interval_data["detection_counts"]):
             interval_data["top_capture"] = SourceImage(pk=image["id"])
+        interval_data["was_processed"] = interval_data["was_processed"] or image["was_processed"]
         image_index += 1
@@
-    interval_data["was_processed"] = image["was_processed"]
+    # was_processed already accumulated; remains False when no captures
```
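The accumulate-in-the-loop fix can be sketched outside the view code; the dict shapes below are hypothetical stand-ins for the real interval/capture data, not the project's actual models:

```python
def summarize_interval(captures: list) -> dict:
    """Accumulate was_processed across all captures in an interval.

    Starting at False means intervals with no captures stay False,
    instead of reusing the last image seen in a previous interval.
    """
    interval = {"was_processed": False, "detections_count": 0}
    for image in captures:
        interval["detections_count"] += image["detections_count"]
        # OR-accumulate: a single processed capture marks the interval processed
        interval["was_processed"] = interval["was_processed"] or image["was_processed"]
    return interval
```

An empty interval now reports `was_processed=False` deterministically instead of leaking state from the previous iteration.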
🤖 Fix all issues with AI agents
In `@ami/main/models.py`:
- Around line 1839-1841: Remove the redundant Q(bbox=None) in
get_detections_count() (keep Q(bbox__isnull=True) and the empty-list check
Q(bbox=[])) and then make update_detection_counts() use the same exclusion
filter as get_detections_count() so the cached detections_count is computed
consistently; specifically, update the counting logic in
update_detection_counts() to apply .exclude(Q(bbox__isnull=True) | Q(bbox=[]))
on self.detections before calling .count() so both methods return the same
value.
In `@ami/ml/models/pipeline.py`:
- Around line 882-888: The loop that assigns detector_algorithm_reference over
algorithms_known currently overwrites when multiple detection algorithms exist;
update the logic in the section that iterates algorithms_known to make selection
deterministic or to emit a null detection per detector: either pick one
deterministic algorithm (e.g., lowest key or sorted by name) and log/warn when
multiple AlgorithmTaskType.DETECTION entries are present, or instead create an
AlgorithmReference for each known_algorithm with task_type DETECTION (rather
than a single detector_algorithm_reference) so downstream checks use
per-detector null detections; modify the code around
detector_algorithm_reference, algorithms_known and AlgorithmReference
accordingly and add a warning using the existing logger when multiple detectors
are found.
- Around line 896-902: The appended null detection currently uses a naive
timestamp via datetime.datetime.now(); update the DetectionResponse creation
inside the null_detections_to_add.append call to use the timezone-aware now()
function (already imported and used elsewhere) instead of
datetime.datetime.now(), ensuring the timestamp field for DetectionResponse is
timezone-aware and consistent with other parts of the pipeline.
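The naive-vs-aware distinction that comment calls out can be demonstrated with the standard library alone; Django's `timezone.now()` returns an aware UTC datetime much like the second form here:

```python
from datetime import datetime, timezone

naive = datetime.now()               # tzinfo is None: ambiguous across servers
aware = datetime.now(timezone.utc)   # tzinfo set: safe to store and compare

def is_aware(dt: datetime) -> bool:
    # Mirrors the check Django uses for aware datetimes
    return dt.tzinfo is not None and dt.tzinfo.utcoffset(dt) is not None
```

Mixing the two raises `TypeError` on comparison, which is why a pipeline should emit only one kind, and aware is the safer default.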
In `@ui/src/data-services/models/collection.ts`:
- Around line 93-101: Rename the getter numImagesProccessed to
numImagesProcessedLabel in the Collection model so the name is spelled correctly
and follows the "*Label" convention; update the getter implementation (currently
computing numProcessed, pct and returning the formatted string) to the new name,
then find and replace all usages (e.g., any occurrences of
item.numImagesProccessed such as in collection-columns.tsx) to
item.numImagesProcessedLabel, and update any type declarations/exports or tests
that reference the old symbol to the new one to ensure no unresolved references
remain.
In `@ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx`:
- Around line 71-96: The "Was processed" trace currently maps wasProcessed→0/1
onto yaxis 'y2' (shared with detectionsMaxCount) so it becomes invisible when
detectionsMaxCount is large and the mapping is inverted; fix by either (A)
giving this trace its own y-axis (e.g., set yaxis: 'y3' and add a yaxis config
with fixed range [0,1]) and keep y as wasProcessed ? 0 : 1 but rename the trace
to "Not processed" (or add a clarifying comment about the inversion), or (B)
scale the y values to the detections axis by mapping y: timeline.map(t =>
t.numCaptures>0 ? (t.wasProcessed ? 0 : detectionsMaxCount) : 0) and keep the
name "Not processed" (or invert the boolean mapping and rename to "Was
processed" accordingly); update hover/customdata text to match the chosen
semantics.
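Option (B) above, scaling the boolean onto the detections axis, amounts to a simple mapping; the tick dicts here are hypothetical stand-ins for the real TimelineTick model:

```python
def not_processed_trace(timeline: list, detections_max: int) -> list:
    """Map 'not processed' ticks to the top of the detections axis.

    Ticks with zero captures render at 0 so empty gaps stay flat
    instead of being drawn as unprocessed.
    """
    return [
        detections_max if (t["num_captures"] > 0 and not t["was_processed"]) else 0
        for t in timeline
    ]
```

Pinning the trace to `detections_max` keeps it visible regardless of how large the detection counts on the shared axis become.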
🧹 Nitpick comments (3)
ui/src/data-services/models/timeline-tick.ts (1)
35-37: Consider adding a nullish fallback for `wasProcessed`, consistent with other getters.

Other getters in this class (e.g., `numDetections`, `numCaptures`) use `?? 0` / `?? false` to guard against missing server data. If `was_processed` is absent from the API response (e.g., during rollout or with an older API version), this will return `undefined` instead of `false`.

Suggested fix

```diff
 get wasProcessed(): boolean {
-  return this._timelineTick.was_processed
+  return this._timelineTick.was_processed ?? false
 }
```

ami/main/admin.py (1)
256-269: Reduce admin list N+1 by annotating `was_processed`.

`get_was_processed` in list_display can trigger a query per row. Consider annotating in `get_queryset` and reading the annotation in an admin method.

♻️ Suggested adjustment

```diff
@@
 def get_queryset(self, request: HttpRequest) -> QuerySet[Any]:
-    return super().get_queryset(request).select_related("event", "deployment", "deployment__data_source")
+    return (
+        super()
+        .get_queryset(request)
+        .select_related("event", "deployment", "deployment__data_source")
+        .with_was_processed()
+    )
+
+@admin.display(boolean=True, ordering="was_processed", description="Was processed")
+def get_was_processed(self, obj: SourceImage) -> bool:
+    return getattr(obj, "was_processed", obj.get_was_processed())
```
3725-3734: Same `bbox=None` / `bbox__isnull=True` redundancy as in `get_detections_count`.

`~models.Q(images__detections__bbox__isnull=True)` and `~models.Q(images__detections__bbox=None)` produce identical SQL. One can be removed for clarity.

Simplify the filter

```diff
 source_images_with_detections_count=models.Count(
     "images",
     filter=(
         models.Q(images__detections__isnull=False)
         & ~models.Q(images__detections__bbox__isnull=True)
-        & ~models.Q(images__detections__bbox=None)
         & ~models.Q(images__detections__bbox=[])
     ),
     distinct=True,
 )
```
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ami/ml/models/pipeline.py (1)
939-944: ⚠️ Potential issue | 🟠 Major

Null detections incorrectly create empty `Occurrence` records.

`create_and_update_occurrences_for_detections` receives the full `detections` list, which includes null Detection objects (where `bbox=None`). For each null detection without a pre-existing occurrence, it bulk-creates an `Occurrence` with event, deployment, and project set but no taxon or determination. These phantom occurrences are semantically wrong — a null detection means "no animal found" — and persist in the `Occurrence` table. Some API endpoints filter out these empty occurrences (checking `determination__isnull=False`), but the core queryset `.valid()` only excludes occurrences with zero detections, allowing these to inflate counts in certain views. Filter out null detections before creating occurrences:

🛠️ Proposed fix in `save_results`

```diff
+ real_detections = [d for d in detections if d.bbox is not None]
  create_and_update_occurrences_for_detections(
-     detections=detections,
+     detections=real_detections,
      logger=job_logger,
  )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 939 - 944, The code passes the full detections list (including null detections with bbox=None) into create_and_update_occurrences_for_detections which creates empty Occurrence records; filter the detections before calling that function (e.g., in save_results) by keeping only detections where bbox is not None (or detection is truthy) and then call create_and_update_occurrences_for_detections with that filtered list so null detections are ignored and no phantom Occurrence rows are created.
🧹 Nitpick comments (1)
ui/src/data-services/models/capture-set.ts (1)
82-101: `numImagesWithDetectionsLabel` can render `"undefined (0%)"` — worth aligning with the new pattern.

The new `numImagesProcessedLabel` correctly defaults `numProcessed` to `0` via `?? 0`, so it never emits the string `"undefined"`. The adjacent, pre-existing `numImagesWithDetectionsLabel` uses `this.numImagesWithDetections?.toLocaleString()` without a fallback; when `numImagesWithDetections` is `undefined`, the template literal produces the literal string `"undefined (0%)"` in the rendered cell.

Since the new getter sets the correct precedent, consider aligning `numImagesWithDetectionsLabel` for consistency:

♻️ Proposed alignment for `numImagesWithDetectionsLabel`

```diff
 get numImagesWithDetectionsLabel(): string {
+  const numWithDetections = this.numImagesWithDetections ?? 0
   const pct =
-    this.numImagesWithDetections && this.numImages
-      ? (this.numImagesWithDetections / this.numImages) * 100
+    this.numImages && this.numImages > 0
+      ? (numWithDetections / this.numImages) * 100
       : 0
-  return `${this.numImagesWithDetections?.toLocaleString()} (${pct.toFixed(
-    0
-  )}%)`
+  return `${numWithDetections.toLocaleString()} (${pct.toFixed(0)}%)`
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/src/data-services/models/capture-set.ts` around lines 82 - 101, The getter numImagesWithDetectionsLabel can produce "undefined (0%)" because it calls this.numImagesWithDetections?.toLocaleString() without a fallback; update it to mirror numImagesProcessedLabel by defaulting numImagesWithDetections to 0 (use the nullish coalescing operator, e.g. const numWithDetections = this.numImagesWithDetections ?? 0), compute pct from numWithDetections and this.numImages, and use numWithDetections.toLocaleString() in the returned template so the label never renders "undefined".
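A Python equivalent of the null-safe label pattern recommended above; the function name and signature are illustrative, not the project's code:

```python
def count_label(count, total) -> str:
    """Format 'count (pct%)' without ever rendering a missing value as text."""
    n = count or 0  # treat a missing count as zero, like the ?? 0 fallback
    pct = (n / total) * 100 if total else 0  # guard against zero/missing total
    return f"{n:,} ({pct:.0f}%)"
```

The guard on `total` also avoids the division-by-zero path the TypeScript version has to handle with the ternary.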
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ami/main/models.py`:
- Around line 3741-3744: Remove the redundant null-checks in the queryset: drop
the negated Q(models.Q(images__detections__bbox=None)) since
Q(...__bbox__isnull=True) and Q(...__bbox=None) are equivalent for JSONField,
and also remove the now-unnecessary models.Q(images__detections__isnull=False)
because excluding bbox IS NULL already guarantees a detection row; update the
queryset fragment that currently contains
models.Q(images__detections__isnull=False) &
~models.Q(images__detections__bbox__isnull=True) &
~models.Q(images__detections__bbox=None) &
~models.Q(images__detections__bbox=[]) to only keep the meaningful predicates
(e.g., ~models.Q(images__detections__bbox__isnull=True) and
~models.Q(images__detections__bbox=[])).
In `@ami/ml/models/pipeline.py`:
- Around line 917-924: filter_processed_images is incorrectly treating the null
placeholder detections (created by create_null_detections_for_undetected_images,
which use detections with bbox=None) as "unprocessed" because they have no
classifications; add a branch in filter_processed_images before the existing
classifications__isnull check that detects when existing_detections consists
only of null detections (e.g., all detections have bbox==None or an equivalent
null marker) and treat that case as "processed - no animal found" so the image
is skipped on subsequent runs; update any comments/tests to reflect that
null-only detection sets should not cause re-queuing.
- Around line 830-834: The int() casts on detection.source_image_id and
source_img.id can raise ValueError for non-numeric IDs; change the membership
check to use string comparison instead of int conversion by building
source_images_with_detections from str(detection.source_image_id) (iterating
results.detections) and comparing against str(source_img.id) when iterating
results.source_images (or otherwise guard the cast with a try/except and fall
back to string comparison) so comparisons are robust for UUIDs and other
non-integer IDs.
In `@ui/src/pages/project/capture-sets/capture-set-columns.tsx`:
- Around line 113-114: The renderCell parameter is typed as Collection but that
type isn't imported—replace the incorrect type with CaptureSet (i.e., change the
renderCell signature from item: Collection to item: CaptureSet) to match the
other renderCell entries and ensure CaptureSet is imported at the top of the
file so the BasicTableCell usage of item.numImagesProcessedLabel compiles.
---
Outside diff comments:
In `@ami/ml/models/pipeline.py`:
- Around line 939-944: The code passes the full detections list (including null
detections with bbox=None) into create_and_update_occurrences_for_detections
which creates empty Occurrence records; filter the detections before calling
that function (e.g., in save_results) by keeping only detections where bbox is
not None (or detection is truthy) and then call
create_and_update_occurrences_for_detections with that filtered list so null
detections are ignored and no phantom Occurrence rows are created.
---
Duplicate comments:
In `@ami/ml/models/pipeline.py`:
- Around line 835-840: The loop over algorithms_known silently lets the last
detection algorithm win; modify the logic in the block that sets
detector_algorithm_reference so that if a second AlgorithmTaskType.DETECTION is
encountered (i.e., detector_algorithm_reference is already set) you surface a
clear failure (raise a ValueError/RuntimeError or log+raise) instead of
overwriting; reference the symbols algorithms_known,
AlgorithmTaskType.DETECTION, detector_algorithm_reference, and
AlgorithmReference to implement the check and error with a descriptive message
listing conflicting algorithm names/keys.
---
Nitpick comments:
In `@ui/src/data-services/models/capture-set.ts`:
- Around line 82-101: The getter numImagesWithDetectionsLabel can produce
"undefined (0%)" because it calls this.numImagesWithDetections?.toLocaleString()
without a fallback; update it to mirror numImagesProcessedLabel by defaulting
numImagesWithDetections to 0 (use the nullish coalescing operator, e.g. const
numWithDetections = this.numImagesWithDetections ?? 0), compute pct from
numWithDetections and this.numImages, and use numWithDetections.toLocaleString()
in the returned template so the label never renders "undefined".
♻️ Duplicate comments (2)
ami/ml/models/pipeline.py (2)
911-918: ⚠️ Potential issue | 🟠 Major

Null detections will cause images to be re-queued on subsequent pipeline runs.

The `filter_processed_images` function (lines 81-115) yields images when `existing_detections.filter(classifications__isnull=True).exists()` is True. Since null detections (bbox=None) have no classifications, this condition will always be True for them, defeating the deduplication logic.

`filter_processed_images` needs an additional branch to recognize null-only detections as "processed, no animal found — skip" before the `classifications__isnull` check.

🛠️ Proposed fix in `filter_processed_images`

```diff
 for image in images:
     existing_detections = image.detections.filter(detection_algorithm__in=pipeline_algorithms)
+    real_detections = existing_detections.filter(bbox__isnull=False)
+    null_only = existing_detections.exists() and not real_detections.exists()
+
     if not existing_detections.exists():
         task_logger.debug(f"Image {image} needs processing: has no existing detections from pipeline's detector")
         yield image
-    elif existing_detections.filter(classifications__isnull=True).exists():
+    elif null_only:
+        task_logger.debug(f"Image {image} was processed with no detections (null detection exists), skipping")
+        continue
+    elif real_detections.filter(classifications__isnull=True).exists():
         task_logger.debug(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 911 - 918, filter_processed_images is treating any detection with classifications__isnull as unprocessed, but your create_null_detections_for_undetected_images adds detections with bbox=None which also have no classifications and thus get re-queued; update filter_processed_images to first check for "null-only" detections (e.g., detections where bbox is None or bbox==NULL and there are no other non-null bboxes) and consider those images as processed and skip them before falling back to the existing classifications__isnull check; locate the logic in filter_processed_images and use the same detection attributes/naming as in create_null_detections_for_undetected_images and results.detections to identify null-only detections and exclude those images from being yielded.
830-835: ⚠️ Potential issue | 🟡 Minor

Use string comparison instead of `int()` cast to avoid potential `ValueError`.

The `source_image_id` field is typed as `str` in `DetectionResponse`. Casting to `int()` could raise `ValueError` if UUIDs or other non-numeric IDs are used in the future.

🛠️ Proposed fix

```diff
- source_images_with_detections = {int(detection.source_image_id) for detection in results.detections}
+ source_images_with_detections = {str(detection.source_image_id) for detection in results.detections}
  null_detections_to_add = []
  detection_algorithm_reference = AlgorithmReference(name=detection_algorithm.name, key=detection_algorithm.key)
  for source_img in results.source_images:
-     if int(source_img.id) not in source_images_with_detections:
+     if str(source_img.id) not in source_images_with_detections:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 830 - 835, The code currently casts detection.source_image_id to int which can raise ValueError for non-numeric IDs; change the set and membership check to use string values instead: build source_images_with_detections = {detection.source_image_id for detection in results.detections} (or explicitly str(detection.source_image_id)) and compare using if str(source_img.id) not in source_images_with_detections so all checks around results.detections, source_images_with_detections, and the loop over results.source_images use string comparisons rather than int() casts (leave AlgorithmReference(name=detection_algorithm.name, key=detection_algorithm.key) unchanged).
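Why string normalization is the more robust membership check can be shown with plain sets; the IDs below are made up, and the real objects are the schema's response types:

```python
def normalize_ids(ids) -> set:
    """Build a set of string IDs so numeric and UUID-style IDs compare uniformly.

    int("9b2d6a1c-...") would raise ValueError; str() accepts both kinds,
    and "42" == str(42) so numeric IDs still match.
    """
    return {str(i) for i in ids}
```

Normalizing both sides of the comparison to `str` makes the set membership test indifferent to the underlying ID type.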
🧹 Nitpick comments (3)
ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx (1)
11-11: Consider using a more muted green for visual consistency.

The bright green `#00ff1a` stands out significantly against the muted palette used for other traces (`#4E4F57` captures, `#5F8AC6` detections). Consider a softer green like `#4CAF50` or `#66BB6A` for better visual harmony.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx` at line 11, Replace the overly bright green constant lineColorProcessed (currently '#00ff1a') with a more muted green to match the rest of the palette; update the value of lineColorProcessed to a softer hex such as '#4CAF50' or '#66BB6A' (or another muted green your design system uses) so processed traces visually harmonize with captures ('#4E4F57') and detections ('#5F8AC6').

ami/ml/models/pipeline.py (2)
817-821: Remove unused `logger` parameter or use it for logging.

The `logger` parameter is declared but never used in the function body. Either add logging statements or remove the parameter to avoid confusion.

♻️ Proposed fix

Option 1: Remove unused parameter:

```diff
 def create_null_detections_for_undetected_images(
     results: PipelineResultsResponse,
     detection_algorithm: Algorithm,
-    logger: logging.Logger = logger,
 ) -> list[DetectionResponse]:
```

Option 2: Add logging:

```diff
 for source_img in results.source_images:
     if int(source_img.id) not in source_images_with_detections:
+        logger.debug(f"Creating null detection for source image {source_img.id}")
         null_detections_to_add.append(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 817 - 821, The function create_null_detections_for_undetected_images currently declares a logger parameter that is unused; either remove the logger parameter from the signature and all callers (to keep the API minimal) or use it inside the function to emit useful diagnostics (e.g., log how many null detections were created, which detection_algorithm was used, and any skipped images). Update references to PipelineResultsResponse, DetectionResponse and detection_algorithm within the function accordingly and ensure callers are adjusted if you remove the parameter.
895-900: Consider chaining exceptions for better debugging.

Per static analysis hint (B904), re-raising with `from err` or `from None` improves error diagnostics by preserving the exception chain.

♻️ Proposed fix

```diff
 try:
     detection_algorithm = pipeline.algorithms.get(task_type=AlgorithmTaskType.DETECTION)
-except Algorithm.DoesNotExist:
-    raise ValueError("Pipeline does not have a detection algorithm")
-except Algorithm.MultipleObjectsReturned:
-    raise NotImplementedError("Multiple detection algorithms per pipeline are not supported")
+except Algorithm.DoesNotExist as err:
+    raise ValueError("Pipeline does not have a detection algorithm") from err
+except Algorithm.MultipleObjectsReturned as err:
+    raise NotImplementedError("Multiple detection algorithms per pipeline are not supported") from err
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 895 - 900, The exception handlers around the pipeline.algorithms.get call should chain the original exceptions when re-raising so the traceback is preserved; update the except blocks that catch Algorithm.DoesNotExist and Algorithm.MultipleObjectsReturned to raise the new ValueError and NotImplementedError using "from err" (e.g., raise ValueError("...") from err) where "err" is the caught exception variable, keeping the same messages and referencing the detection_algorithm retrieval and Algorithm.DoesNotExist / Algorithm.MultipleObjectsReturned symbols.
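The effect of `from err` is observable on any exception; a minimal standalone demonstration with a made-up lookup, not the ORM call:

```python
def lookup(config: dict) -> str:
    """Re-raise a domain error while preserving the original cause."""
    try:
        return config["detector"]
    except KeyError as err:
        # 'from err' stores the original KeyError on __cause__,
        # so the traceback shows both exceptions instead of just the last one
        raise ValueError("Pipeline does not have a detection algorithm") from err
```

Without the `from err` clause, Python would still show the `KeyError` as "During handling of the above exception..." context, but explicit chaining signals intent and survives `raise ... from None` suppression patterns.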
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@ami/ml/models/pipeline.py`:
- Around line 911-918: filter_processed_images is treating any detection with
classifications__isnull as unprocessed, but your
create_null_detections_for_undetected_images adds detections with bbox=None
which also have no classifications and thus get re-queued; update
filter_processed_images to first check for "null-only" detections (e.g.,
detections where bbox is None or bbox==NULL and there are no other non-null
bboxes) and consider those images as processed and skip them before falling back
to the existing classifications__isnull check; locate the logic in
filter_processed_images and use the same detection attributes/naming as in
create_null_detections_for_undetected_images and results.detections to identify
null-only detections and exclude those images from being yielded.
- Around line 830-835: The code currently casts detection.source_image_id to int
which can raise ValueError for non-numeric IDs; change the set and membership
check to use string values instead: build source_images_with_detections =
{detection.source_image_id for detection in results.detections} (or explicitly
str(detection.source_image_id)) and compare using if str(source_img.id) not in
source_images_with_detections so all checks around results.detections,
source_images_with_detections, and the loop over results.source_images use
string comparisons rather than int() casts (leave
AlgorithmReference(name=detection_algorithm.name, key=detection_algorithm.key)
unchanged).
---
Nitpick comments:
In `@ami/ml/models/pipeline.py`:
- Around line 817-821: The function create_null_detections_for_undetected_images
currently declares a logger parameter that is unused; either remove the logger
parameter from the signature and all callers (to keep the API minimal) or use it
inside the function to emit useful diagnostics (e.g., log how many null
detections were created, which detection_algorithm was used, and any skipped
images). Update references to PipelineResultsResponse, DetectionResponse and
detection_algorithm within the function accordingly and ensure callers are
adjusted if you remove the parameter.
- Around line 895-900: The exception handlers around the pipeline.algorithms.get
call should chain the original exceptions when re-raising so the traceback is
preserved; update the except blocks that catch Algorithm.DoesNotExist and
Algorithm.MultipleObjectsReturned to raise the new ValueError and
NotImplementedError using "from err" (e.g., raise ValueError("...") from err)
where "err" is the caught exception variable, keeping the same messages and
referencing the detection_algorithm retrieval and Algorithm.DoesNotExist /
Algorithm.MultipleObjectsReturned symbols.
In `@ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx`:
- Line 11: Replace the overly bright green constant lineColorProcessed
(currently '#00ff1a') with a more muted green to match the rest of the palette;
update the value of lineColorProcessed to a softer hex such as '#4CAF50' or
'#66BB6A' (or another muted green your design system uses) so processed traces
visually harmonize with captures ('#4E4F57') and detections ('#5F8AC6').
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- ami/ml/models/pipeline.py
- ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx
Pull request overview
Adds explicit “processed” tracking for captures by introducing “null” detections when a pipeline succeeds but returns no detections, and surfaces this in timeline + capture set statistics in the UI/API.
Changes:
- Backend: create/save “null” detections and adjust detection-counting logic to exclude null/empty bboxes while still tracking “processed”.
- API/UI: expose and visualize `was_processed` in the session timeline; add "Total Processed Captures" to capture sets.
- Tests: add coverage for saving results with null detections.
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| ui/src/utils/language.ts | Adds label string for “Total Processed Captures”. |
| ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx | Adds “Was processed” series to activity plot. |
| ui/src/pages/project/capture-sets/capture-sets.tsx | Enables the new processed-captures column. |
| ui/src/pages/project/capture-sets/capture-set-columns.tsx | Adds processed-captures column rendering. |
| ui/src/data-services/models/timeline-tick.ts | Adds wasProcessed mapping from API field. |
| ui/src/data-services/models/capture-set.ts | Adds processed capture count/label derivations. |
| ami/ml/tests.py | Adds test for pipeline results producing null detections. |
| ami/ml/schemas.py | Allows bbox to be optional in detection request/response schemas. |
| ami/ml/models/pipeline.py | Creates null detections for undetected images; updates bbox serialization handling. |
| ami/main/models.py | Adds queryset annotation for was_processed; excludes null/empty bbox from detection counts. |
| ami/main/api/views.py | Adds was_processed to timeline payload; adds processed count annotation for collections; filters detections endpoint. |
| ami/main/api/serializers.py | Adds was_processed to timeline interval serializer; includes processed count field for collections. |
| ami/main/admin.py | Shows get_was_processed in SourceImage admin list display. |
Actionable comments posted: 1
♻️ Duplicate comments (1)
ami/ml/models/pipeline.py (1)
87-90: ⚠️ Potential issue | 🟠 Major

Null-detection skip is too broad and can mask reprocessing needs.
Line 87 currently skips when any null detection exists. If a capture has both null and real detections, it will skip before checking real detections/classifications.
Suggested fix
```diff
-    elif existing_detections.null_detections().exists():  # type: ignore
-        task_logger.debug(f"Image {image} has a null detection from pipeline {pipeline}, skipping! ")
-        continue
-    elif existing_detections.filter(classifications__isnull=True).exists():
+    real_detections = existing_detections.exclude(models.Q(bbox__isnull=True) | models.Q(bbox=[]))
+    null_only = existing_detections.exists() and not real_detections.exists()
+    if null_only:
+        task_logger.debug(f"Image {image} has only null detections from pipeline {pipeline}, skipping!")
+        continue
+    elif real_detections.filter(classifications__isnull=True).exists():
         # Check if there are detections with no classifications
         task_logger.debug(
             f"Image {image} needs processing: has existing detections with no classifications "
             "from pipeline {pipeline}"
         )
         yield image
     else:
         # If there are existing detections with classifications,
         # Compare their classification algorithms to the current pipeline's algorithms
         pipeline_algorithm_ids = set(classification_algorithms.values_list("id", flat=True))
-        detection_algorithm_ids = set(existing_detections.values_list("classifications__algorithm_id", flat=True))
+        detection_algorithm_ids = set(real_detections.values_list("classifications__algorithm_id", flat=True))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 87 - 90, The current check uses existing_detections.null_detections().exists() which skips the image if any null detection exists and can mask cases where real detections are also present; change the logic so the pipeline only skips when ALL detections are null (i.e., there are no non-null/classified detections). Concretely, replace the current condition with a check that no classified/non-null detections exist (for example, use existing_detections.filter(classifications__isnull=False).exists() to detect real detections and only skip when that returns False), and keep the task_logger.debug message referencing image and pipeline.
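The decision table the reviewers converge on (null-only detections count as processed) can be sketched without the ORM; plain dicts stand in for Detection rows, and the field names mirror the ones discussed above:

```python
def needs_processing(detections: list) -> bool:
    """Decide whether an image should be re-queued by the pipeline.

    - no detections at all             -> process (never seen)
    - only null detections             -> skip (processed, nothing found)
    - real detections, none classified -> process (classification missing)
    - real detections, all classified  -> skip
    """
    if not detections:
        return True
    real = [d for d in detections if d["bbox"] not in (None, [])]
    if not real:
        # Null-only: the pipeline ran and found nothing, so the image is done
        return False
    return any(not d["classifications"] for d in real)
```

The key ordering point from both reviews: the null-only check must come before the missing-classification check, otherwise null detections (which never have classifications) re-queue the image forever.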
🧹 Nitpick comments (1)
ami/ml/models/pipeline.py (1)
899-903: Use the same detection-task criteria here as in `filter_processed_images`. Line 899 uses only `AlgorithmTaskType.DETECTION`, but earlier logic uses `Algorithm.detection_task_types`. Aligning these avoids inconsistent behavior across pipeline paths.

Suggested fix
```diff
-    try:
-        detection_algorithm = pipeline.algorithms.get(task_type=AlgorithmTaskType.DETECTION)
-    except Algorithm.DoesNotExist:
+    detection_algorithms = pipeline.algorithms.filter(task_type__in=Algorithm.detection_task_types)
+    if not detection_algorithms.exists():
         raise ValueError("Pipeline does not have a detection algorithm")
-    except Algorithm.MultipleObjectsReturned:
+    if detection_algorithms.count() > 1:
         raise NotImplementedError("Multiple detection algorithms per pipeline are not supported")
+    detection_algorithm = detection_algorithms.get()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ami/ml/models/pipeline.py` around lines 899 - 903, The detection lookup uses only AlgorithmTaskType.DETECTION but should use the same criteria as filter_processed_images; update the lookup for detection_algorithm (the pipeline.algorithms.get call) to use Algorithm.detection_task_types (e.g., task_type in/IN the detection_task_types set) instead of a single AlgorithmTaskType.DETECTION and keep the existing exception handling (Algorithm.DoesNotExist and Algorithm.MultipleObjectsReturned) so behavior matches filter_processed_images.
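The lookup the suggestion describes — match on a set of detection task types and enforce exactly one — can be sketched without the ORM. `DETECTION_TASK_TYPES` and the algorithm dicts below are illustrative stand-ins for `Algorithm.detection_task_types` and the Django queryset:

```python
# Sketch of the suggested lookup: select detection algorithms by a set of
# task types rather than a single type, and require exactly one match.
# The set contents are assumed values, not the project's actual constants.

DETECTION_TASK_TYPES = {"detection", "localization"}  # assumed values

def get_single_detection_algorithm(algorithms: list[dict]) -> dict:
    matches = [a for a in algorithms if a["task_type"] in DETECTION_TASK_TYPES]
    if not matches:
        raise ValueError("Pipeline does not have a detection algorithm")
    if len(matches) > 1:
        raise NotImplementedError(
            "Multiple detection algorithms per pipeline are not supported"
        )
    return matches[0]
```

This mirrors the `DoesNotExist` / `MultipleObjectsReturned` handling in the original code while widening the match criteria.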
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ami/main/api/views.py`:
- Line 904: The queryset assignment is using inverted logic: change the
exclude(~NULL_DETECTIONS_FILTER) call on Detection to
exclude(NULL_DETECTIONS_FILTER) so it filters out null detections and returns
valid ones; update the line that sets queryset (the
Detection.objects.exclude(...) expression that also calls
select_related("source_image", "detection_algorithm")) to use
NULL_DETECTIONS_FILTER without the bitwise negation so behavior matches other
uses in the codebase.
---
Duplicate comments:
In `@ami/ml/models/pipeline.py`:
- Around line 87-90: The current check uses
existing_detections.null_detections().exists() which skips the image if any null
detection exists and can mask cases where real detections are also present;
change the logic so the pipeline only skips when ALL detections are null (i.e.,
there are no non-null/classified detections). Concretely, replace the current
condition with a check that no classified/non-null detections exist (for
example, use existing_detections.filter(classifications__isnull=False).exists()
to detect real detections and only skip when that returns False), and keep the
task_logger.debug message referencing image and pipeline.
---
Nitpick comments:
In `@ami/ml/models/pipeline.py`:
- Around line 899-903: The detection lookup uses only
AlgorithmTaskType.DETECTION but should use the same criteria as
filter_processed_images; update the lookup for detection_algorithm (the
pipeline.algorithms.get call) to use Algorithm.detection_task_types (e.g.,
task_type in/IN the detection_task_types set) instead of a single
AlgorithmTaskType.DETECTION and keep the existing exception handling
(Algorithm.DoesNotExist and Algorithm.MultipleObjectsReturned) so behavior
matches filter_processed_images.
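The inverted-filter bug flagged in the inline comment above (`exclude(~NULL_DETECTIONS_FILTER)` versus `exclude(NULL_DETECTIONS_FILTER)`) can be shown with a list analogue of `QuerySet.exclude()`. `is_null` stands in for `NULL_DETECTIONS_FILTER`:

```python
# Pure-Python sketch of the inverted-filter bug: exclude(~NULL) keeps only
# the null detections, while exclude(NULL) keeps only the valid ones.

def is_null(det: dict) -> bool:
    return det.get("bbox") in (None, [])

def exclude(dets: list[dict], predicate) -> list[dict]:
    """List analogue of QuerySet.exclude(): drop rows matching predicate."""
    return [d for d in dets if not predicate(d)]

dets = [{"id": 1, "bbox": None}, {"id": 2, "bbox": [0, 0, 5, 5]}]
buggy = exclude(dets, lambda d: not is_null(d))  # exclude(~NULL): keeps null rows
fixed = exclude(dets, is_null)                   # exclude(NULL): keeps real rows
```

The double negation in the buggy form silently returns the opposite set, which is why the fix is just to drop the `~`.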
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- ami/main/api/views.py
- ami/main/models.py
- ami/ml/models/pipeline.py
- ui/src/pages/project/capture-sets/capture-set-columns.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- ui/src/pages/project/capture-sets/capture-set-columns.tsx

Summary
If a pipeline doesn't return detections but still successfully processed the image, it will have a "null" detection.
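The core idea above — emit a placeholder "null" detection (`bbox=None`) for every processed image that produced no real detections, so "processed" can be derived from detection existence — can be sketched as a plain function. Function and field names are illustrative, not the project's actual API:

```python
# Sketch of creating null detections for undetected-but-processed images.
# Given the set of image ids a pipeline run processed and the detections it
# returned, produce one placeholder record per image with no detection.

def create_null_detections(processed_image_ids: set[int],
                           detections: list[dict]) -> list[dict]:
    detected_ids = {d["source_image_id"] for d in detections}
    return [
        {"source_image_id": image_id, "bbox": None}
        for image_id in sorted(processed_image_ids - detected_ids)
    ]
```

After these placeholders are saved, an image counts as processed whenever any detection row exists for it, null or not.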
List of Changes
Related Issues
Closes #484
How to Test the Changes
```shell
docker compose run --rm django python manage.py test -k test_image_with_null_detection --failfast --pdb
```

Screenshots
Example of non-null detection
Example of null detection
Example of no processing/detections
Deployment Notes
Include instructions if this PR requires specific steps for its deployment (database migrations, config changes, etc.)
Checklist
Summary by CodeRabbit
New Features
Bug Fixes
Tests