
Share stackTable, frameTable, funcTable, resourceTable and nativeSymbols between threads #5482

Open

mstange wants to merge 9 commits into firefox-devtools:main from mstange:share-everything

Conversation


@mstange mstange commented Jun 4, 2025


Sharing the string table happened in #5481.
This PR makes us share the rest, and completes issue #3918.

Only samples (including allocation samples) and markers are still per-thread. I don't expect this to change.

To do:

  • Fix type check and test failures
  • Upgrade the URL when upgrading the profile, to remap function indexes etc. in the URL
  • When compacting during upload, apply translation tables to redux state (e.g. update focus function indexes in URL)
  • Address the perf impact on computing the derived thread: transforms and call node table computation happen per thread, and they take longer now that the tables contain information for all threads. We should be able to reuse more of the computed outputs across the thread derivation pipeline.
  • Memoize the computation of the implementation filtered thread across threads
  • Adjust indexes in transforms when merging two different profiles
  • Add a paragraph to CHANGELOG-formats.md for the new version.
  • Add a test for merging profiles with applied transforms where we actually check the filtered call tree

Potential follow-ups:

  • Add a test checking that IndexIntoSourceTable indexes in the redux state are updated after profile sanitizing
  • Add a test checking that collapse-resource transforms aren't broken if profile sanitizing adds elements to the funcTable
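
The "apply translation tables to redux state" item above could be sketched roughly like this. This is a hypothetical illustration, not the PR's actual API: `remapFuncIndex` and `funcTableIndexMap` are made-up names standing in for whatever the compaction code produces.

```typescript
// Hypothetical sketch (not the PR's actual API): after compacting drops
// unused funcTable entries, any func index stored in redux/URL state
// (e.g. a focus-function transform) must be remapped through a
// translation table from old indexes to new indexes.

// funcTableIndexMap[oldIndex] = newIndex, or -1 if the entry was dropped.
function remapFuncIndex(
  oldIndex: number,
  funcTableIndexMap: Int32Array
): number | null {
  if (oldIndex < 0 || oldIndex >= funcTableIndexMap.length) {
    return null;
  }
  const newIndex = funcTableIndexMap[oldIndex];
  return newIndex === -1 ? null : newIndex;
}

// Example: compaction kept funcs 0, 2 and 3, renumbering them 0, 1, 2.
const funcMap = Int32Array.from([0, -1, 1, 2]);
```

A transform referencing a dropped func would map to `null` and presumably have to be discarded rather than remapped.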


vlasakm commented Aug 10, 2025

This is really great! Sharing this data across threads works especially well for applications where there is a large number of threads, but just a few "thread pools" whose threads are all executing the same code.

Using this allowed me to employ better caching when creating a Firefox Profiler profile, save on profile size, etc.

It also made me greedy, and I attempted to visualize a rather large profile (1602 threads, 577108 stacks, 29760 frames). Previously my experience was that Firefox Profiler would choke on more than 300 or so threads, and I hoped this change with sharing could improve the situation. Now it loads, and the UI is still quite responsive, but loading the profile takes a full 2 minutes 15 seconds (and Firefox prompts to kill the script, etc.).

90% of the load time is spent in computeCallNodeTable, which I believe may be called repeatedly for each thread (and the merged thread which includes all of them), even though all these calls would process the same stack table and frame table, now that they are shared.

I know very little about React/Redux, but I see that there's some concept of memoization and thread selectors, which is still used for call nodes even after this change to share the stack and frame tables. Very naively, would it perhaps be possible to just compute and memoize the call node information once, by not using the thread selector mechanism here? But there's a great chance I am missing all the needed context.

FWIW, I have been using this branch for quite some time now. I am not sure if I am able to contribute much (I even failed to rebase it on top of more recent changes), but it would be great to see this land, perhaps after all the big work-in-progress changes (like the TypeScript migration).


mstange commented Aug 10, 2025

Yes indeed, sharing the computed call node table across threads while properly handling per thread transforms is the next thing I want to work on when I get back to this task. Thanks for the encouragement and it's great to hear that you're seeing real improvements from it!

@mstange mstange force-pushed the share-everything branch 4 times, most recently from 24a4604 to 8d957f0 on January 29, 2026 23:25

codecov bot commented Jan 29, 2026

Codecov Report

❌ Patch coverage is 85.86777% with 171 lines in your changes missing coverage. Please review.
✅ Project coverage is 85.37%. Comparing base (806adbc) to head (31327b6).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
src/app-logic/url-handling.ts 11.76% 60 Missing ⚠️
src/profile-logic/transforms.ts 29.41% 48 Missing ⚠️
src/profile-logic/profile-compacting.ts 87.96% 25 Missing and 1 partial ⚠️
src/profile-logic/sanitize.ts 88.60% 9 Missing ⚠️
src/reducers/code.ts 50.00% 6 Missing ⚠️
src/profile-logic/merge-compare.ts 96.66% 5 Missing ⚠️
src/profile-logic/profile-data.ts 94.73% 5 Missing ⚠️
src/reducers/url-state.ts 80.00% 5 Missing ⚠️
src/profile-logic/processed-profile-versioning.ts 97.87% 3 Missing ⚠️
src/profile-logic/index-translation.ts 90.47% 2 Missing ⚠️
... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5482      +/-   ##
==========================================
- Coverage   85.56%   85.37%   -0.20%     
==========================================
  Files         319      320       +1     
  Lines       31499    31908     +409     
  Branches     8705     8687      -18     
==========================================
+ Hits        26953    27241     +288     
- Misses       4115     4236     +121     
  Partials      431      431              


@mstange mstange force-pushed the share-everything branch 4 times, most recently from a5b1248 to 4b18e81 on January 30, 2026 18:40
canova added a commit that referenced this pull request Feb 2, 2026
Small things I noticed while working on #5482.
@mstange mstange marked this pull request as ready for review February 2, 2026 20:00

mstange commented Feb 2, 2026

This is now ready for review.

@mstange mstange force-pushed the share-everything branch 2 times, most recently from ce546d0 to 0d1cc61 on February 2, 2026 21:18
mstange added a commit that referenced this pull request Feb 3, 2026
More small stuff that makes #5482 a tiny bit easier.
@mstange mstange requested a review from canova February 3, 2026 15:35
@mstange mstange force-pushed the share-everything branch 3 times, most recently from d1916d3 to cf780f4 on February 3, 2026 17:04
@canova canova left a comment

I promised myself that I would be done with reviewing this PR today, but I underestimated the amount of changes here 😅 I'm done-ish with the first commit, which is the main beast. So pushing the review comments now even though I still need to review the others.

  profile: Profile | null,
- browserConnection: BrowserConnection | null
+ browserConnection: BrowserConnection | null,
+ profileUpgradeInfo?: ProfileUpgradeInfo
Member

Oh, I'm a bit surprised with the profileUpgradeInfo, but seems fine.

Contributor Author

Yeah I'll go ahead and add some comments about the profileUpgradeInfo business.

Contributor Author

@mstange mstange Feb 28, 2026

Oh interesting, it looks like I was using it as an outparameter here. I think I will add a commit to refactor this a bit.

Comment on lines +1233 to +1239
const resourceIndex = funcIndex - threadMappings.funcTableIndexMap.length;
if (resourceIndex in threadMappings.resourceTableIndexMap) {
  return (
    newFuncCount + threadMappings.resourceTableIndexMap[resourceIndex]
  );
}
return funcIndex;
Member

Hmm, I'm confused by this part of this function. How do we compute the resource index?

Edit: I think I understand more now. I missed that the first check, if (funcIndex in threadMappings.funcTableIndexMap) {, was returning early when the funcIndex is in the map, and that this part is a special case for the collapse-resource transform. I think it would be great to have a comment here explaining the behavior.
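
For illustration, the early-return behavior described above could be paraphrased as follows. This is a hypothetical standalone version of the snippet, with `newFuncCount` and the two maps passed in explicitly rather than taken from `threadMappings`.

```typescript
// Hypothetical paraphrase of the snippet above, for illustration only.
// Func indexes below funcTableIndexMap.length refer to ordinary functions.
// Indexes at or above it encode "collapsed resource" pseudo-funcs created
// by the collapse-resource transform, stored as funcCount + resourceIndex.

function mapFuncIndexSketch(
  funcIndex: number,
  newFuncCount: number,
  funcTableIndexMap: number[],
  resourceTableIndexMap: number[]
): number {
  if (funcIndex in funcTableIndexMap) {
    // Ordinary function: remap directly and return early.
    return funcTableIndexMap[funcIndex];
  }
  // Special case for collapse-resource: recover the resource index and
  // remap it into the new pseudo-func range past the new func table.
  const resourceIndex = funcIndex - funcTableIndexMap.length;
  if (resourceIndex in resourceTableIndexMap) {
    return newFuncCount + resourceTableIndexMap[resourceIndex];
  }
  return funcIndex;
}
```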

@canova canova left a comment

I'm done reviewing the rest of the PR! (I saw GitHub's unicorn error message quite a few times while doing so 😄) It was a lot easier to review the other commits. It looks good to me once the issues I raised yesterday are fixed. I would still prefer to take one more look at the PR after that, but I assume that review will be a lot quicker.


function _gatherSourceReferencesInProfile(profile: Profile): Uint8Array {
  const referencedSources = new Uint8Array(profile.shared.sources.length);
  _gatherReferencesInStackTable(shared.stackTable, referencedSharedData);
Member

Would it make sense to mention that the order of these calls is important? For example, _gatherReferencesInSources requires referencedSources to have been populated by _gatherReferencesInFuncTable.
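
The ordering constraint being pointed out could be reduced to something like this. A hypothetical sketch, not the real functions: the funcTable pass writes into the referenced-sources array, and the sources pass only makes sense after that array has been populated.

```typescript
// Hypothetical reduction of the ordering dependency: the funcTable pass
// writes into referencedSources, and the sources pass reads from it, so
// the two calls must run in that order.

function gatherReferencesInFuncTableSketch(
  funcSourceIndexes: Array<number | null>,
  referencedSources: Uint8Array
): void {
  for (const sourceIndex of funcSourceIndexes) {
    if (sourceIndex !== null) {
      referencedSources[sourceIndex] = 1; // mark this source as referenced
    }
  }
}

function gatherReferencesInSourcesSketch(
  referencedSources: Uint8Array
): number[] {
  // Depends on referencedSources having been populated by the pass above.
  const kept: number[] = [];
  for (let i = 0; i < referencedSources.length; i++) {
    if (referencedSources[i] === 1) {
      kept.push(i);
    }
  }
  return kept;
}
```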

type TranslationMapForStacks = Int32Array;
type TranslationMapForLibs = Int32Array;
type TranslationMapForStrings = Int32Array;
type TranslationMapForSources = Int32Array;
Member

Nice! It's a lot better to use Int32Array instead of maps.

Can we also add a comment at the top of this block mentioning the oldXToNewXPlusOne behavior as it might not be obvious to everyone at first.
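
Presumably the oldXToNewXPlusOne convention works along these lines. This is a hedged sketch, not the PR's actual helpers: an Int32Array is zero-filled on creation, so 0 can mean "not mapped" without a separate fill pass, and a stored value v encodes new index v - 1.

```typescript
// Hypothetical sketch of an "old index -> new index plus one" translation
// map. Int32Array starts zero-filled, so 0 means "no mapping" for free,
// and a stored value v means the new index is v - 1.

function makeTranslationMap(size: number): Int32Array {
  return new Int32Array(size); // all zeros: nothing mapped yet
}

function setMapping(map: Int32Array, oldIndex: number, newIndex: number): void {
  map[oldIndex] = newIndex + 1;
}

function getMapping(map: Int32Array, oldIndex: number): number | null {
  const v = map[oldIndex];
  return v === 0 ? null : v - 1;
}
```

The payoff over a Map is a flat, allocation-free lookup with no hashing, at the cost of the slightly surprising +1 encoding, which is exactly why a comment at the top of the block helps.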


canova commented Feb 27, 2026

Ah another question, I see that there are 2 items unchecked in the list. Are they done? I think we can also do those as follow-ups as they don't look critical.

…Symbols across threads.

This makes func indexes etc valid globally instead of scoped per threads.
It also allows for smaller profiles because data doesn't have to be duplicated.
Symbolication is probably faster.

Selecting multiple threads, i.e. displaying a "merged thread", is now easier because
we no longer need to compute merged funcTables etc for the merged thread; the global
tables are already "merged".
This also means that func index mapping during symbolication now works even if
multiple threads are selected. Fixes firefox-devtools#5703.

This profile version comes both with a profile upgrader and with a URL upgrader,
because we have to adjust any func and resource indexes for transforms in the URL.

mstange commented Feb 28, 2026

> Ah another question, I see that there are 2 items unchecked in the list. Are they done? I think we can also do those as follow-ups as they don't look critical.

Ah yes, I took a brief look at this and it didn't look straightforward to add those tests, so I'll leave those two items for potential follow-ups. Edited the PR description.

We also need to adjust any indexes in the redux state after compacting,
for example funcTable or resourceTable indexes used by transforms.

This patch also makes us adjust the source index in the URL after compaction.
Strictly speaking this fixes an existing bug, because the source table
was already being compacted.

It also fixes the bug where we weren't keeping track of the oldFuncCount properly
when splitting functions into private-window and non-private-window instances,
and weren't updating collapse-resource transforms correctly. Fixes firefox-devtools#5793.
We compute the call node table based on the filtered thread, per thread.
But now that every thread contains the full stackTable with entries used
by all threads, this computation has become much more expensive.
But most of the time we are passing the same tables from the shared profile
info, so we can just do the work once and share it across threads.

This only fixes the case where none of the threads have a transform applied.
They'll end up passing the tables from the shared data and hit the memoization
cache.

If a thread has a transform, it'll pass different tables and not hit the cache.

We don't do anything to share work across two threads that happen to have the
same transforms.

This also doesn't fix the case where there's an implementation filter applied.

Here's how loading the profile https://share.firefox.dev/4bxDqu8 compares:

- Before sharing the tables: https://share.firefox.dev/4r0imRO (1.1 seconds)
- After sharing the tables, no global memoization: https://share.firefox.dev/4qNGp6m (1.8 seconds)
- After sharing the tables, with global memoization: https://share.firefox.dev/4rwkwso (1.3 seconds)
…threads.

Here's how loading the profile https://share.firefox.dev/4kbkjrU compares:

- Before sharing the tables: https://share.firefox.dev/4kh5i84 (1.1 seconds)
- After sharing the tables, no global memoization: https://share.firefox.dev/3NLn1IK (1.6 seconds)
- After sharing the tables, with global memoization: https://share.firefox.dev/3LNNuot (1.3 seconds)
Here's how loading the profile https://share.firefox.dev/4a0b88Y compares:

- Before sharing the tables: https://share.firefox.dev/4qeiVGu (1.1 seconds)
- After sharing the tables, no global memoization: https://share.firefox.dev/3NUEi27 (2.0 seconds)
- After sharing the tables, with global memoization: https://share.firefox.dev/4qcDQty (1.3 seconds)