fix: pre-download sage_attention kernel before applying backend, remove pinned fa3 kernel version #578
Conversation
The submodule-level set_attention_backend in diffusers does not trigger the kernel download, leaving kernel_fn as None and causing a TypeError. This adds an explicit _maybe_download_kernel_for_backend call.
This PR has been inactive for 10 days and is now marked as stale.
johannaSommer left a comment:
LGTM but please wait for Begüm's review regarding the version
    Dict[str, Any]
        The algorithm packages.
    """
    flash_attention_3 = get_kernel("kernels-community/flash-attn3", version="<0.1.0")
I know that this was an important fix at some point, so I'm not sure about removing it. Please wait for @begumcig's review on this; she tackled this back then.
We absolutely don't need it now; in fact, it's breaking the algorithm for newer PyTorch versions, which Marius found out!
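For reference, a minimal sketch of the unpinned call, assuming `get_kernel` is imported from the `kernels` package as in the diff above:

```python
from kernels import get_kernel

# Previously pinned to pre-0.1.0 builds, which break on newer PyTorch:
# flash_attention_3 = get_kernel("kernels-community/flash-attn3", version="<0.1.0")

# Unpinned: the hub resolves a kernel build matching the installed torch/CUDA.
flash_attention_3 = get_kernel("kernels-community/flash-attn3")
```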
            enable_gqa=enable_gqa,
        )
    else:
        out, _, *_ = torch.ops.flash_attn_pruna._flash_attn_forward(
We might need to keep this flexible depending on the fa3 version we encounter - can we check whether the returned output is a tuple or a tensor and handle it accordingly?
Nevermind, I get what you did now, this is great!
Description
Currently, there is a bug in the sageattn algorithm. Diffusers has two set_attention_backend methods: one for the whole model and one for submodules. The submodule-level set_attention_backend does not trigger the kernel download, leaving kernel_fn as None and causing a TypeError. This PR adds an explicit _maybe_download_kernel_for_backend call before applying the backend.
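A minimal sketch of the fix, assuming `_maybe_download_kernel_for_backend` lives in diffusers' attention dispatch module (the import path and call site are assumptions and may differ across versions):

```python
# Sketch only: the helper name comes from the description above; its module
# path is an assumption and may vary between diffusers versions.
from diffusers.models.attention_dispatch import _maybe_download_kernel_for_backend


def set_submodule_backend(submodule, backend: str) -> None:
    # The submodule-level set_attention_backend does not download the kernel
    # itself; fetch it explicitly first so kernel_fn is not left as None.
    _maybe_download_kernel_for_backend(backend)
    submodule.set_attention_backend(backend)
```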
Further, the pinned version of the fa3 kernel is removed so that fa3 now works with torch 2.10. Note that some kernel builds return (out, lse) while others return just out, depending on the torch and CUDA versions, so this must be handled in the registered torch-op function.
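A minimal sketch of that handling (the wrapper name is hypothetical; `torch.ops.flash_attn_pruna._flash_attn_forward` is taken from the diff above):

```python
import torch


def _flash_attn_forward_compat(q, k, v, *args, **kwargs):
    # Hypothetical compatibility wrapper: depending on the torch/CUDA version,
    # the fa3 kernel build returns either just `out` or a tuple `(out, lse, ...)`.
    result = torch.ops.flash_attn_pruna._flash_attn_forward(q, k, v, *args, **kwargs)
    if isinstance(result, torch.Tensor):
        return result
    out, *_ = result  # drop lse and any trailing extras
    return out
```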
Related Issue
/
Type of Change
How Has This Been Tested?
Run in notebook
Checklist
Additional Notes
/