
Missing kTfLiteInt16 in ComparisonsPrepare #3391

Draft
veblush wants to merge 1 commit into tensorflow:main from veblush:comp-int16

Conversation

@veblush (Collaborator) commented Jan 31, 2026

BUG=n/a

@veblush veblush added the ci:full Triggers the comprehensive cross-platform test suite. label Jan 31, 2026
@TFLM-bot TFLM-bot added the ci:ready Triggers the basic TFLM test suite. label Jan 31, 2026
The one-line change widens the INT8-only type check in ComparisonsPrepare so that INT16 inputs are handled as well:

   TF_LITE_ENSURE(context, input2 != nullptr);

-  if (input1->type == kTfLiteInt8) {
+  if (input1->type == kTfLiteInt8 || input1->type == kTfLiteInt16) {
A reviewer (Member) commented:

@veblush There should also be a check that the tensor is in fact quantized (unless the TfLite converter always produces quantized INT16?). I always assumed un-quantized INT16 was just as valid as un-quantized INT32. Also, add an is_quantized bool to the OpData?

A reviewer (Member) commented:

@veblush The actual comparison Eval methods should also have INT16 support added. Not all LiteRT comparisons have INT16 support, and it is confusing how some LiteRT comparisons assume (or don't) that the INT16 inputs are not quantized.



3 participants

Comments