
[Cherry-Pick][BugFix]Add lock to avoid generating nan (#7046)#7047

Open
juncaipeng wants to merge 1 commit into PaddlePaddle:release/2.5 from juncaipeng:release/2.5-add-lock-1

Conversation

@juncaipeng
Collaborator

Motivation

Add lock to avoid generating nan when using storage cache

💡 If this PR is a Cherry Pick, the PR title needs to follow the format: add the [Cherry-Pick] label at the very beginning and append the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

fastdeploy/cache_manager/prefix_cache_manager.py

Usage or Command

none

Accuracy Tests

none

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please state the reason in this PR.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings March 27, 2026 07:53
@paddle-bot

paddle-bot bot commented Mar 27, 2026

Thanks for your contribution!

Contributor

Copilot AI left a comment


Pull request overview

This PR adds a GPU KVCache mutex lock on the read/write paths that use the storage cache, to avoid NaN problems caused by concurrent access with workers, and also adjusts the warmup behavior of MooncakeStore.

Changes:

  • Add a KVCache lock to the storage prefetch / write-back task dispatch flow in PrefixCacheManager (restricted to synchronous mode).
  • Adjust the warmup write size of MooncakeStore.warmup() and change the cleanup behavior of the warmup key.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File | Description
fastdeploy/cache_manager/prefix_cache_manager.py | Add a KVCache mutex lock to the storage read/write task dispatch and wait flow, reducing the risk of NaN from concurrent access
fastdeploy/cache_manager/transfer_factory/mooncake_store/mooncake_store.py | Adjust the warmup data size and cleanup strategy of MooncakeStore initialization
Comments suppressed due to low confidence (1)

fastdeploy/cache_manager/prefix_cache_manager.py:1152

  • After gpu_cache_lock is acquired here, a keys/block_ids length mismatch raises ValueError without releasing the lock, which may block subsequent worker/transfer processes forever. Suggestion: move the logic after acquire into a try/finally block so that _release_kvcache_lock() is guaranteed to execute in finally.
        assert is_sync, "Only support is_sync=True for now."
        self._acquire_kvcache_lock()

        if len(task.keys) != len(task.gpu_block_ids):
            err_msg = (
                f"write_back_storage error: hash_keys({len(task.keys)}) != gpu_block_ids({len(task.gpu_block_ids)})"
            )
            logger.error(err_msg)
            raise ValueError(err_msg)
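The fix the reviewer asks for can be sketched as follows. This is a minimal, self-contained illustration, not the actual PrefixCacheManager: the `_acquire_kvcache_lock`/`_release_kvcache_lock` names mirror the methods in this PR, while the class structure and the threading.Lock stand-in are assumptions.

```python
import threading


class CacheManagerSketch:
    """Illustrative stand-in for PrefixCacheManager's lock handling."""

    def __init__(self):
        self.gpu_cache_lock = threading.Lock()

    def _acquire_kvcache_lock(self):
        self.gpu_cache_lock.acquire()

    def _release_kvcache_lock(self):
        self.gpu_cache_lock.release()

    def write_back_storage(self, keys, gpu_block_ids):
        self._acquire_kvcache_lock()
        try:
            # Validation that may raise must run inside try/finally,
            # otherwise the error path leaves the lock held forever.
            if len(keys) != len(gpu_block_ids):
                raise ValueError(
                    f"write_back_storage error: hash_keys({len(keys)}) "
                    f"!= gpu_block_ids({len(gpu_block_ids)})"
                )
            # ... issue the GPU -> storage transfer here ...
        finally:
            self._release_kvcache_lock()
```

With this pattern, a failed call still leaves the lock available to the next caller instead of deadlocking workers waiting on it.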

Comment on lines +1176 to +1186
        assert is_sync, "Only support is_sync=True for now."
        self._acquire_kvcache_lock()

        storage_block_ids = []
        self.task_prefetch_event[task.task_id] = Event()
        # issue task to cache_transfer_manager
        self.cache_task_queue.put_transfer_task((CacheStatus.STORAGE2GPU, task))
        if is_sync:
            storage_block_ids = self.wait_prefetch_storage_task(task.task_id)

        self._release_kvcache_lock()

Copilot AI Mar 27, 2026


issue_prefetch_storage_task has no try/finally protection between acquiring and releasing the lock; if put_transfer_task, wait_prefetch_storage_task, or anything else raises, the lock is never released and the system deadlocks. Suggestion: wrap the body in try/finally and release in finally.
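A minimal sketch of the protected prefetch path, under stated assumptions: `PrefetchSketch` and its `_put_transfer_task` stub are hypothetical stand-ins for the real cache_task_queue / wait_prefetch_storage_task machinery, used only to show the try/finally shape.

```python
import threading


class PrefetchSketch:
    """Hypothetical stand-in for the prefetch dispatch path."""

    def __init__(self, queue_raises=False):
        self.gpu_cache_lock = threading.Lock()
        self.queue_raises = queue_raises

    def _put_transfer_task(self, task):
        # Stub for cache_task_queue.put_transfer_task; can simulate failure.
        if self.queue_raises:
            raise RuntimeError("transfer queue unavailable")

    def issue_prefetch_storage_task(self, task):
        self.gpu_cache_lock.acquire()
        try:
            self._put_transfer_task(task)
            # ... wait for the prefetch result here ...
            return []
        finally:
            # Runs even when the queue raises, so workers cannot
            # block forever waiting on the KVCache lock.
            self.gpu_cache_lock.release()
```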

Comment on lines +1144 to +1146
        assert is_sync, "Only support is_sync=True for now."
        self._acquire_kvcache_lock()


Copilot AI Mar 27, 2026


The new KVCache lock logic currently has no unit-test coverage, so exception paths (e.g. the length-mismatch raise, or a transfer-queue error) can easily regress to "lock never released, causing deadlock". Suggestion: add unit tests that mock gpu_cache_lock, assert that acquire/release appear in pairs, and verify that release also happens on exception branches.
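The suggested test could be sketched like this. `LockedManager` is a hypothetical minimal stand-in; a real test would construct or patch PrefixCacheManager itself and mock its gpu_cache_lock attribute.

```python
from unittest import mock


class LockedManager:
    """Hypothetical minimal manager used only to illustrate the test."""

    def __init__(self, lock):
        self.gpu_cache_lock = lock

    def write_back_storage(self, keys, block_ids):
        self.gpu_cache_lock.acquire()
        try:
            if len(keys) != len(block_ids):
                raise ValueError("hash_keys/gpu_block_ids length mismatch")
        finally:
            self.gpu_cache_lock.release()


def test_lock_released_on_length_mismatch():
    lock = mock.MagicMock()
    manager = LockedManager(lock)
    try:
        manager.write_back_storage(["k"], [])
    except ValueError:
        pass
    # acquire/release must be paired even on the error branch
    assert lock.acquire.call_count == 1
    assert lock.release.call_count == 1
```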

Copilot generated this review using guidance from repository custom instructions.
@codecov-commenter

Codecov Report

❌ Patch coverage is 0% with 22 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (release/2.5@56964c7). Learn more about missing BASE report.

Files with missing lines Patch % Lines
fastdeploy/cache_manager/prefix_cache_manager.py 0.00% 21 Missing ⚠️
.../transfer_factory/mooncake_store/mooncake_store.py 0.00% 1 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff               @@
##             release/2.5    #7047   +/-   ##
==============================================
  Coverage               ?   68.34%           
==============================================
  Files                  ?      390           
  Lines                  ?    54078           
  Branches               ?     8519           
==============================================
  Hits                   ?    36958           
  Misses                 ?    14437           
  Partials               ?     2683           
Flag Coverage Δ
GPU 68.34% <0.00%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


