chore: sync toolNames.json with vscode-copilot-chat

This commit automatically updates `src/toolNames.json` with new tool identifiers from the microsoft/vscode-copilot-chat repository. Source: microsoft/vscode-copilot-chat, file `src/extension/tools/common/toolNames.ts`.
This pull request enhances the token tracking and reporting features in `src/extension.ts` by adding support for actual token counts from LLM API usage data, alongside the existing text-based estimates. It introduces new fields and interfaces to capture this data, updates the aggregation logic to use the most accurate token values available, and improves session log parsing to extract detailed token usage when possible.

Token tracking improvements:

- Added `estimatedTokens` and `actualTokens` fields to `PeriodStats`, the session cache, and usage summaries to distinguish between text-based estimates and actual LLM-API-reported token counts.
- Aggregation logic now prefers actual token counts when available, falling back to estimates otherwise, for improved accuracy in usage statistics.
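A minimal sketch of the prefer-actual aggregation described above. Only `PeriodStats`, `estimatedTokens`, and `actualTokens` are named in the PR; the field shapes and the helper functions below are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical shape -- the PR names PeriodStats and its two token fields,
// but the exact structure is an assumption here.
interface PeriodStats {
  estimatedTokens: number; // text-based estimate
  actualTokens?: number;   // LLM-API-reported count, when available
}

// Prefer the actual token count when the API reported one,
// otherwise fall back to the text-based estimate.
function effectiveTokens(stats: PeriodStats): number {
  return stats.actualTokens ?? stats.estimatedTokens;
}

// Aggregate a set of periods using the most accurate value per entry.
function aggregateTokens(periods: PeriodStats[]): number {
  return periods.reduce((sum, p) => sum + effectiveTokens(p), 0);
}
```

The `??` fallback keeps the aggregation well-defined even for entries where the API never reported usage, which matches the "falling back to estimates otherwise" behavior described above.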
Session log parsing enhancements:

- Added `PromptTokenDetail` and `ActualUsage` interfaces to capture detailed prompt token breakdowns and actual usage data from LLM API responses.

Chat turn details:

- Updated the `ChatTurn` interface and log viewer logic to include actual usage data for each turn, enabling more precise diagnostics and insights.

Cache management:
These changes collectively improve the accuracy and granularity of token usage reporting, supporting both estimated and actual values for better diagnostics and analytics.
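The session-log-parsing side might be sketched as follows. The PR names the `PromptTokenDetail`, `ActualUsage`, and `ChatTurn` interfaces but not their fields, so every field and the parsing helper below are illustrative assumptions:

```typescript
// Hypothetical field shapes; only the interface names come from the PR.
interface PromptTokenDetail {
  cachedTokens?: number; // e.g. prompt tokens served from a prompt cache
}

interface ActualUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
  promptTokenDetails?: PromptTokenDetail;
}

interface ChatTurn {
  request: string;
  response: string;
  actualUsage?: ActualUsage; // present when the session log carried API usage data
}

// Extract usage from one session-log line, tolerating lines that are
// not JSON or that carry no usage payload.
function parseTurnUsage(line: string): ActualUsage | undefined {
  try {
    const entry = JSON.parse(line);
    return entry.usage
      ? {
          promptTokens: entry.usage.prompt_tokens,
          completionTokens: entry.usage.completion_tokens,
          totalTokens: entry.usage.total_tokens,
        }
      : undefined;
  } catch {
    return undefined;
  }
}
```

Making `actualUsage` optional keeps older logs (which lack API usage data) parseable, so the viewer can fall back to estimates exactly as the aggregation logic does.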