Add k-bit blockwise quantization (K=2-5) with warp-level CUDA kernels #1858
Open
TimDettmers wants to merge 11 commits into main from feature/kbit-quantization
Commits (11), all by TimDettmers:
c39f791  Add k-bit quantization kernels (K=2-5, blocksize=32) -- WIP
fb649f1  Fix RDC device linking: move kernels to ops.cu, all 157 tests pass
2825890  Complete k-bit quantization: Stages 6-8, Python API, 218 tests pass
4b17a2f  Remove implementation progress report
2973bf5  Add vectorized dequant kernel and E4M4 uint8 absmax support
03415e1  Remove scalar dequant kernel, fp32 absmax, and Stage 1-3 scaffolding
8a2817e  Template dequant kernel on output type, add bf16/fp32 native output
f52b572  Fix lint and formatting issues from CI pre-commit checks
f95a7f2  Fix analytical error bound for K=5 with E4M4 absmax
d1f3d75  Add out parameter to dequantize_kbit for CUDA graph compatibility
10cf922  docs: Add kbit design docs, remove spec.md
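For orientation, here is a minimal reference sketch of what blockwise k-bit quantization with blocksize 32 means in general. It uses a uniform symmetric integer grid and plain fp32 absmax for clarity; the PR's kernels use their own codebooks, bit-packing, and E4M4 uint8 absmax encoding (commit 2973bf5), and the function names below are illustrative, not this PR's API.

```python
import torch

def quantize_kbit_reference(x: torch.Tensor, k: int, blocksize: int = 32):
    """Reference-only blockwise k-bit quantization on a uniform symmetric grid.

    The PR's kernels use their own codebooks, bit-packing, and E4M4 uint8
    absmax encoding; none of that is reproduced here.
    """
    assert x.numel() % blocksize == 0
    blocks = x.reshape(-1, blocksize).float()
    # One scale per 32-element block; clamp to avoid division by zero.
    absmax = blocks.abs().amax(dim=1, keepdim=True).clamp_min(1e-12)
    levels = 2 ** (k - 1) - 1            # e.g. K=3 -> integer grid [-3, 3]
    codes = torch.round(blocks / absmax * levels).to(torch.int8)
    return codes.reshape(x.shape), absmax.squeeze(1)

def dequantize_kbit_reference(codes: torch.Tensor, absmax: torch.Tensor,
                              k: int, blocksize: int = 32) -> torch.Tensor:
    # Inverse of the sketch above: rescale integer codes by the block absmax.
    levels = 2 ** (k - 1) - 1
    blocks = codes.reshape(-1, blocksize).float()
    return (blocks * absmax.unsqueeze(1) / levels).reshape(codes.shape)
```

Round-tripping a tensor through these two functions bounds the per-element error by absmax / (2 * levels) within each block; commit f95a7f2 adjusts the analogous analytical bound for K=5 once absmax itself is stored as E4M4 rather than fp32.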
Review comment:
When the user passes fp32 absmax, the dequant dispatch silently encodes it to E4M4 before calling the kernel. This is a lossy conversion the caller may not expect — they passed fp32 precision but get E4M4 precision. Consider either warning or documenting this behavior.
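A minimal sketch of the warning option, under stated assumptions: `dequantize_kbit_dispatch` and `_encode_absmax_e4m4` are hypothetical names standing in for the PR's dispatch path and E4M4 encoder, and `torch.float8_e4m3fn` (PyTorch 2.1+) is used only as a stand-in 8-bit lossy format, since the PR's actual E4M4 layout is not reproduced here.

```python
import warnings
import torch

def _encode_absmax_e4m4(absmax: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the PR's E4M4 uint8 encoder. Round-tripping
    # through float8_e4m3fn only models "an 8-bit lossy encoding"; the PR's
    # actual E4M4 bit layout differs.
    return absmax.to(torch.float8_e4m3fn).to(torch.float32)

def dequantize_kbit_dispatch(packed, absmax, k, out=None):
    # Sketch of the suggested fix: make the fp32 -> E4M4 narrowing loud
    # instead of silent before handing absmax to the kernel.
    if absmax.dtype == torch.float32:
        warnings.warn(
            "fp32 absmax is re-encoded to E4M4 before dequantization; "
            "pass pre-encoded uint8 absmax to avoid the silent rounding.",
            UserWarning,
            stacklevel=2,
        )
        absmax = _encode_absmax_e4m4(absmax)
    # ... kernel launch elided; this sketch covers only the dispatch check.
    return out
```

The documentation-only alternative is cheaper but easier to miss; a one-time warning at the dispatch boundary makes the precision change visible exactly where the caller's fp32 input stops being fp32.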