
perf: speedup flat fts #6054

Open
westonpace wants to merge 2 commits into lance-format:main from westonpace:perf/speedup-flat-fts

Conversation

@westonpace
Member

This adds various performance improvements to the flat FTS search. The most significant improvement is that it parallelizes the search.

This does have some impact on accuracy. Calculating bm25 typically requires two passes through the data: the first to count token frequencies and the second to compute the scores. The current implementation avoids this by using the "token frequency so far" when calculating the bm25 score. This is generally accurate when there is a lot of indexed data and a small amount of unindexed data, because the "token frequency so far" is bootstrapped by the frequencies from the index and so the effect of the unindexed frequencies is minimal.

However, this can be more significant when there is no index or the unindexed data makes up a significant portion of the data. In that case the "token frequency so far" can be quite inaccurate for the first few documents.
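To make the inaccuracy concrete, here is an illustrative sketch (not Lance's actual implementation) of the standard BM25 IDF term, showing how an under-counted document frequency, as you would see in the first few documents with no index to bootstrap from, inflates a token's weight:

```rust
// Illustrative sketch: standard BM25 IDF. An under-counted document
// frequency (the "frequency so far" early in the stream) yields an
// inflated IDF relative to the true corpus-wide value.
fn bm25_idf(num_docs: u64, docs_with_token: u64) -> f32 {
    let n = num_docs as f32;
    let df = docs_with_token as f32;
    ((n - df + 0.5) / (df + 0.5) + 1.0).ln()
}

fn main() {
    // True document frequency: 100 of 1000 docs contain the token.
    let exact = bm25_idf(1000, 100);
    // "Frequency so far": an early document has only seen 2 so far.
    let early = bm25_idf(1000, 2);
    // The early estimate over-weights the token.
    assert!(early > exact);
    println!("exact idf = {exact:.3}, early idf = {early:.3}");
}
```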

Parallelizing this search makes the problem worse, since each thread calculates its own independent "token frequency so far" and each one takes longer to converge on an accurate result.

The most accurate approach would probably be to just accumulate all data in memory, tokenize (in parallel), count token frequencies (back to serial), then calculate scores (in parallel). This does run the risk of accumulating too much data however.

Another alternative could be to accumulate up to some amount (e.g. 100MB), calculate initial token frequencies, and then parallelize the rest of the search using those initial token frequencies. I'm open to suggestions. In the meantime we could probably proceed with this PR as-is.
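The bounded-accumulation alternative could be sketched roughly as follows. This is hypothetical (the `Bootstrap` type and byte-budget heuristic are illustration only, not code from this PR): buffer tokenized documents until a size budget is exceeded, then freeze the initial frequency table and stream the remainder against it.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the "accumulate up to a limit" idea: buffer
// tokenized docs until a byte budget is exceeded, then freeze the
// initial token-frequency table and score the rest in parallel using
// it as a starting point. Counts here are token occurrences; a real
// implementation would dedupe per document.
struct Bootstrap {
    budget: usize,
    buffered: usize,
    token_docs: HashMap<String, u64>,
}

impl Bootstrap {
    fn new(budget: usize) -> Self {
        Self { budget, buffered: 0, token_docs: HashMap::new() }
    }

    /// Absorb one document's tokens; returns false once the budget is
    /// exhausted and the frequency table should be frozen.
    fn absorb(&mut self, doc_tokens: &[&str]) -> bool {
        for tok in doc_tokens {
            self.buffered += tok.len();
            *self.token_docs.entry(tok.to_string()).or_insert(0) += 1;
        }
        self.buffered < self.budget
    }
}

fn main() {
    let mut boot = Bootstrap::new(16); // tiny budget for demonstration
    assert!(boot.absorb(&["lance", "fts"]));        // 8 bytes buffered
    assert!(!boot.absorb(&["parallel", "search"])); // crosses the budget
    assert_eq!(boot.token_docs["lance"], 1);
}
```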

In addition to the parallelization, this PR makes various changes to the algorithm itself to avoid string copies. This cuts CPU time by 5x on my system.

@github-actions
Contributor

PR Review

P0: Bug — MemBM25Scorer::update silently drops tokens when index is None

In the None (no index) path of flat_bm25_search_stream, the scorer is initialized with an empty HashMap:

None => MemBM25Scorer::new(0, 0, HashMap::new()),

But MemBM25Scorer::update was changed to only update existing keys:

if let Some(old_count) = self.token_docs.get_mut(token) {
    *old_count += *count;
} else {
    log::warn!("Token {} not found in token_docs", token);
}

The old code used .entry(token.clone()).or_insert(0) which would create new entries. The new code never populates token_docs, so num_docs_containing_token() always returns 0, producing incorrect (inflated) IDF values for all queries when there is no index.

The fix applied to the index_bm25_scorer.num_docs() == 0 branch (pre-populating token_docs from query tokens) should also be applied to the None branch.
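A minimal sketch of that fix (hedged, standing in for the actual `MemBM25Scorer::update` method): restore the `entry` API so the no-index path can populate `token_docs` from scratch.

```rust
use std::collections::HashMap;

// Sketch of the suggested fix: let `update` create entries so the
// None (no index) path, which starts from an empty map, still
// accumulates per-token document counts.
fn update(token_docs: &mut HashMap<String, u64>, token: &str, count: u64) {
    *token_docs.entry(token.to_string()).or_insert(0) += count;
}

fn main() {
    let mut token_docs = HashMap::new(); // the None path starts empty
    update(&mut token_docs, "lance", 2);
    update(&mut token_docs, "lance", 1);
    assert_eq!(token_docs["lance"], 3); // no longer silently dropped
}
```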

P1: doc_norm uses matching-token count instead of total document length

let doc_norm = K1 * (1.0 - B + B * num_matching_tokens as f32 / scorer.avg_doc_length());

In BM25, dl should be the total number of tokens in the document (num_tokens), not just the tokens matching the query (num_matching_tokens). avg_doc_length() correctly uses all tokens, so the numerator should too. This was also wrong in the old code (used doc_tokens.len() after filtering), but since the code is being rewritten here, this would be a good time to fix it.
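The corrected normalization could look like this sketch (constants are the common BM25 defaults, not necessarily Lance's; the point is only that `dl` is the full document length):

```rust
// Sketch of the corrected length normalization: `dl` is the document's
// total token count (num_tokens), not just the query-matching tokens.
// K1 and B here are the common BM25 defaults, shown for illustration.
const K1: f32 = 1.2;
const B: f32 = 0.75;

fn doc_norm(num_tokens: usize, avg_doc_length: f32) -> f32 {
    K1 * (1.0 - B + B * num_tokens as f32 / avg_doc_length)
}

fn main() {
    // A 200-token document in a corpus averaging 100 tokens is
    // penalized relative to an average-length one.
    assert!(doc_norm(200, 100.0) > doc_norm(100, 100.0));
    // An average-length document normalizes to exactly K1.
    assert!((doc_norm(100, 100.0) - K1).abs() < 1e-6);
}
```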

Minor

  • ESTIMATED_MAX_TOKENS_PER_ROW is defined but never used — should be removed or used.
  • num_docs_containing_token(&self, token: &String) changed from &str to &String — prefer keeping &str as it's more general.
  • There's a review question left as a code comment in do_flat_full_text_search (// What is this assertion for? ...). This should be resolved or removed before merge.

@wjones127
Contributor

The most accurate approach would probably be to just accumulate all data in memory, tokenize (in parallel), count token frequencies (back to serial), then calculate scores (in parallel). This does run the risk of accumulating too much data however.

I feel like a very efficient way would be to tokenize in parallel, and then collect all tokenized data and compute in the end.

IIRC tokenization is usually the bottleneck, and I'd imagine the tokenized data is smaller than the original text, especially if you are able to filter out tokens that aren't relevant to the query.

@codecov

codecov bot commented Feb 28, 2026

Codecov Report

❌ Patch coverage is 77.16535% with 29 lines in your changes missing coverage. Please review.

Files with missing lines                       | Patch % | Lines
rust/lance-index/src/scalar/inverted/index.rs  | 81.44%  | 13 Missing and 5 partials ⚠️
rust/lance-index/src/scalar/inverted/query.rs  | 50.00%  | 11 Missing ⚠️


@westonpace
Member Author

I feel like a very efficient way would be to tokenize in parallel, and then collect all tokenized data and compute in the end.

IIRC tokenization is usually the bottleneck, and I'd imagine the tokenized data is smaller than the original text, especially if you are able to filter out tokens that aren't relevant to the query.

This should be doable! We can tokenize and discard all tokens that aren't in the query and then accumulate. We might also need to accumulate token counts for each row but that should be small too. Maybe for a first pass, if we exceed some limit (e.g. 1GB) then we log a warning and just keep going (eventually OOM). The warning could be something like...

Accumulated more than 1GB of tokenized flat search data.  This means there is a lot of unindexed text data being searched.  The search is going to be very RAM intensive and may eventually OOM.  Create (or update) an index on the text column to avoid this issue.
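The tokenize-then-discard step described above could be sketched as follows (hypothetical helper, not code from this PR): per row, keep counts only for tokens that appear in the query, so the accumulated state stays small.

```rust
use std::collections::{HashMap, HashSet};

// Hedged sketch of the plan above: tokenize each row (parallelizable),
// keep only tokens that appear in the query, and accumulate the small
// filtered counts for a final scoring pass.
fn filter_tokens<'a>(
    doc_tokens: &[&'a str],
    query: &HashSet<&'a str>,
) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::new();
    for tok in doc_tokens {
        if query.contains(tok) {
            *counts.entry(*tok).or_insert(0) += 1;
        }
    }
    counts
}

fn main() {
    let query: HashSet<&str> = ["flat", "search"].into_iter().collect();
    let doc = ["speedup", "flat", "fts", "flat", "search"];
    let counts = filter_tokens(&doc, &query);
    assert_eq!(counts["flat"], 2);
    assert_eq!(counts["search"], 1);
    assert!(!counts.contains_key("fts")); // irrelevant tokens discarded
}
```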

@westonpace
Member Author

Both of Claude's suggestions are valid. I will revisit this weekend / Monday when I have time to add some regression tests for these cases.

