Conversation

@lzhangzz (Collaborator)

No description provided.

Copilot AI (Contributor) left a comment

Pull request overview

This PR performs a major refactoring of the turbomind engine architecture with the following key changes:

  • Replaces LlamaTritonModel with a new TurboMind class providing a cleaner API
  • Removes the old batch processing implementation (LlamaBatch, LlamaV2)
  • Introduces new model abstractions: LanguageModel, InputProcessor, and OutputProcessor to better separate concerns
  • Updates RequestMetrics fields to use atomic operations for thread-safe access
  • Consolidates model-related code into a unified models CMake target

Reviewed changes

Copilot reviewed 102 out of 102 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/turbomind/utils/metrics.h: changed metric fields to atomic types and fixed a typo in a field name
  • src/turbomind/turbomind.h/cc: new TurboMind class interface replacing LlamaTritonModel
  • src/turbomind/triton_backend/llama/*: removed old Triton backend files
  • src/turbomind/python/bind.cpp: updated Python bindings to use the new TurboMind class
  • src/turbomind/models/language_model.*: new LanguageModel abstraction for inference
  • src/turbomind/models/input_processor.*: new component for handling input processing
  • src/turbomind/models/output_processor.*: new component for handling output processing
  • src/turbomind/models/llama/unified_decoder.*: updated to work with the new architecture
  • src/turbomind/models/llama/unified_attention_layer.*: refactored attention layer implementation
  • src/turbomind/models/llama/llama_utils.cu: changed isTuning() from thread_local to static
  • src/turbomind/layers/sampling_layers/*: removed old sampling layer files
  • src/turbomind/kernels/sampling_kernels.h: changed sampled_indexes/nums types from uint32_t to int


  struct RequestMetrics {
-     int64_t enque_time{};      // when a request is enqued
-     int64_t scheduled_time{};  // when a request is scheduled for inference
+     std::atomic<int64_t> enqueue_time{};  // when a request is enqued

Copilot AI Dec 26, 2025


Spelling error in comment: "enqued" should be "enqueued".

      os << "RequestMetrics { ";
-     os << "enque_time=" << m.enque_time;
-     os << ", scheduled_time=" << m.scheduled_time;
+     os << "enque_time=" << m.enqueue_time.load(std::memory_order_relaxed);

Copilot AI Dec 26, 2025


Inconsistent naming: the output stream operator still uses the old field name "enque_time" in the message, but it should be "enqueue_time" to match the renamed field.

  bool& isTuning()
  {
-     thread_local bool value{};
+     static bool value{};

Copilot AI Dec 26, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The isTuning() function changed from thread_local to static, which removes thread safety. If this function is accessed from multiple threads simultaneously in the new architecture, this could cause race conditions. Consider whether thread-local storage is still needed or if synchronization is required.
