fix: sync XCCL LoRA registry names in OpenAIServingModels #1025
stablegenius49 wants to merge 1 commit into inclusionAI:main
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances LoRA management in the OpenAI serving layer by introducing a mechanism to correctly synchronize LoRA registry names. It ensures that after an XCCL LoRA weight update, the system's internal registry (`openai_serving_models.lora_requests`) reflects the new versioned LoRA name.
Code Review
This pull request introduces a mechanism to synchronize LoRA registry names in OpenAIServingModels during XCCL-based weight updates. The changes involve caching LoRA metadata when it's set and then applying this cached information to re-key the LoRA requests in the serving layer after the update is complete. A new module, areal/engine/vllm_ext/lora_registry.py, encapsulates this logic cleanly. The implementation appears correct and addresses the described issue. Additionally, a new test file provides good coverage for the new functionality. I have one suggestion to improve the maintainability of the new test file by simplifying how the module under test is imported.
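Since the diff excerpt below only shows the test-side import shim, here is a minimal sketch of what a cache-then-apply pair like the one described above could look like. This is an illustration under stated assumptions, not the PR's actual implementation: only the two public function names come from the diff, while the `PendingUpdate` dataclass, the module-level `_pending` slot, and the dict-shaped registry are hypothetical.

```python
# Illustrative sketch only. PendingUpdate, _pending, and the dict-shaped
# registry are assumptions; only the two function names come from the PR.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class PendingUpdate:
    old_name: str  # name currently keyed in the serving registry
    new_name: str  # versioned name produced by the XCCL weight update


_pending: Optional[PendingUpdate] = None


def cache_pending_lora_registry_update(old_name: str, new_name: str) -> None:
    """Record the rename when the weight-update metadata is set."""
    global _pending
    _pending = PendingUpdate(old_name, new_name)


def apply_pending_lora_registry_update(lora_requests: Dict[str, object]) -> None:
    """Re-key the serving-layer registry once the XCCL update succeeds."""
    global _pending
    if _pending is None:
        return  # no rename was cached for this update cycle
    entry = lora_requests.pop(_pending.old_name, None)
    if entry is not None:
        lora_requests[_pending.new_name] = entry
    _pending = None
```

Splitting the operation into a cache step and an apply step keeps the registry untouched if the XCCL transfer fails partway through; the rename only becomes visible after the update is confirmed.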
```python
# Imports added for completeness; the original diff hunk starts below them.
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path

_MODULE_PATH = (
    Path(__file__).resolve().parents[1]
    / "areal"
    / "engine"
    / "vllm_ext"
    / "lora_registry.py"
)
_SPEC = spec_from_file_location("test_lora_registry", _MODULE_PATH)
assert _SPEC is not None and _SPEC.loader is not None
_lora_registry = module_from_spec(_SPEC)
_SPEC.loader.exec_module(_lora_registry)

apply_pending_lora_registry_update = (
    _lora_registry.apply_pending_lora_registry_update
)
cache_pending_lora_registry_update = (
    _lora_registry.cache_pending_lora_registry_update
)
```
The method used to import the lora_registry module is unnecessarily complex and can be brittle. Dynamically loading a module from a file path makes the test suite harder to maintain, for example, if file paths change.
Assuming pytest is run from the project root (a standard practice), you can use a more idiomatic and robust absolute import. This would simplify the test setup significantly.
```python
from areal.engine.vllm_ext.lora_registry import (
    apply_pending_lora_registry_update,
    cache_pending_lora_registry_update,
)
```
Hi @stablegenius49,

Thank you for your contribution! We sincerely apologize for merging #1039 before noticing this PR. From what I can tell, #1039 seems to include a similar change. Is my understanding correct? If this PR introduces additional functionality, could you please resolve the conflicts and run …

Sorry for the inconvenience, and thank you for your understanding!
This pull request has been automatically marked as stale because it has not had recent activity within the last 14 days. Please add a comment or push new commits to keep it active. Thank you for your contribution!
Summary
- Cache the pending LoRA registry update when `/areal_set_update_weight_meta_lora` succeeds
- Re-key `openai_serving_models.lora_requests` after `/areal_update_weights_lora_xccl` succeeds, so the new versioned LoRA name is routable immediately

Testing
- `python3 -m pytest tests/test_vllm_lora_registry.py`
- `python3 -m compileall areal/engine/vllm_ext/lora_registry.py areal/engine/vllm_ext/areal_vllm_server.py tests/test_vllm_lora_registry.py`

Closes #1020
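For reviewers who want a feel for the behavior under test, a test could look roughly like the sketch below. The module path and function names come from this PR, but the call signatures and the dict-shaped registry are assumptions carried over from the illustrative sketch earlier in this thread, not the PR's actual test file.

```python
# Rough pytest sketch. The import path is real; the signatures and the
# dict-based registry are assumptions, not the PR's actual tests.
from areal.engine.vllm_ext.lora_registry import (
    apply_pending_lora_registry_update,
    cache_pending_lora_registry_update,
)


def test_new_versioned_name_is_routable_after_update():
    registry = {"my-adapter": object()}  # stand-in for lora_requests
    cache_pending_lora_registry_update("my-adapter", "my-adapter-v2")
    apply_pending_lora_registry_update(registry)
    # After the XCCL update is applied, only the versioned name resolves.
    assert "my-adapter-v2" in registry
    assert "my-adapter" not in registry
```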