Atomic llama.cpp


Manifesto / ggml / ops

LLM inference in C/C++

Recent API changes

Hot topics


Quick start

Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:

Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.

Example command:

# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF

Description

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.

  • Plain C/C++ implementation without any dependencies
  • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
  • AVX, AVX2, AVX512 and AMX support for x86 architectures
  • RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
  • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
  • Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
  • Vulkan and SYCL backend support
  • CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity (see the example below)
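
For instance, hybrid inference only requires offloading part of the layers; a minimal sketch (the layer count is illustrative and depends on available VRAM):

# Keep 20 transformer layers on the GPU; the remaining layers run on the CPU.
llama-cli -m model.gguf -ngl 20 -p "Hello"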

The llama.cpp project is the main playground for developing new features for the ggml library.

Gemma 4 MTP — speculative decoding

This fork ships a first-class implementation of Multi-Token Prediction (MTP) speculative decoding for Gemma 4 targets paired with the official gemma4_assistant drafter head. Unlike a classical draft-model setup, the assistant is loaded into the target context (no second llama_context, no second tokenizer, no separate KV cache) and runs on a dedicated scheduler so MTP draft compute overlaps target verification.

Highlights:

  • +30-50 % short-prompt throughput on Gemma 4 26B-A4B / 31B in the matrix bench (f16 KV); accept rate ~85-88 % on dense targets.
  • Async pipeline (depth-2) with llama_decode_mtp_async / llama_decode_mtp_wait so MTP work overlaps server post-accept bookkeeping.
  • In-graph argmax — host transfers 4 bytes per draft step instead of the full F32 [n_vocab] row.
  • Centroid LM head for Edge variants (E2B / E4B); dense tied head for 26B-A4B / 31B.

Pre-built assistant GGUFs

Recommended quantization is Q4_K_M (throughput is identical to F16 on this assistant size — bandwidth, not weight precision, dominates — while footprint is ~4× lower). Also published: Q4_K_S, Q5_K_M, Q8_0, F16.
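
If you prefer to quantize the F16 assistant yourself, a standard llama-quantize invocation produces the Q4_K_M variant (file names are placeholders):

llama-quantize gemma-4-assistant-f16.gguf gemma-4-assistant-Q4_K_M.gguf Q4_K_M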

AtomicChat / Gemma 4 Assistant GGUF collection

Target model    | Assistant (MTP head) GGUF
Gemma 4 E2B     | AtomicChat/gemma-4-E2B-it-assistant-GGUF
Gemma 4 E4B     | AtomicChat/gemma-4-E4B-it-assistant-GGUF
Gemma 4 26B-A4B | AtomicChat/gemma-4-26B-A4B-it-assistant-GGUF
Gemma 4 31B     | AtomicChat/gemma-4-31B-it-assistant-GGUF

Quick start

# Manual invocation — works for any of the four targets above.
llama-server \
  -m /path/to/gemma-4-target.gguf \
  --mtp-head /path/to/gemma-4-assistant-Q4_K_M.gguf \
  --spec-type mtp \
  --draft-block-size 3 \
  -c 16384 \
  -ngl 99 -ngld 99 \
  -fa on \
  --host 127.0.0.1 --port 8080
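
Once the server is up it exposes the OpenAI-compatible chat API; a minimal smoke test against the endpoint above (the request body is a sketch):

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}'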

Repo helper scripts pick the right defaults per target (and prefer a quantized assistant under .scratch/ when one exists):

# Dense targets.
scripts/run-gemma4-mtp-server.sh         # 26B-A4B
scripts/run-gemma4-31b-mtp-server.sh     # 31B

# Edge / centroid-head targets — MTP_PRESET=throughput|lift|balanced|quality.
MTP_PRESET=throughput scripts/run-gemma4-e4b-mtp-server.sh
MTP_PRESET=throughput scripts/run-gemma4-e2b-mtp-server.sh

Bench snapshot (MacBook Pro M4 Max, 40-core GPU, 48 GB, Metal, single slot)

Median tps over 3 runs with Q4_K_M assistant heads. Dense scripts default to --draft-block-size 3; E4B uses MTP_PRESET=throughput (B = 2, --draft-max 6). See .scratch/bench-logs/gemma-matrix-fullrun-20260512-224705.md.

model     | mode       | n=128 tps | n=512 tps | accept@128 | accept@512
gemma-E4B | f16-base   |      90.3 |      89.0 |            |
gemma-E4B | f16-mtp    |      94.3 |      86.0 |     80.0 % |     64.5 %
gemma-E4B | turbo3-mtp |      67.8 |      64.5 |     82.6 % |     72.3 %
gemma-26B | f16-base   |      83.6 |      82.7 |            |
gemma-26B | f16-mtp    |     110.8 |      75.7 |     84.0 % |     67.9 %
gemma-26B | turbo3-mtp |      80.5 |      69.2 |     84.9 % |     66.1 %
gemma-31B | f16-base   |      19.4 |      17.5 |            |
gemma-31B | f16-mtp    |      21.2 |      18.5 |     88.0 % |     74.4 %
gemma-31B | turbo3-mtp |      19.4 |      16.3 |     88.0 % |     70.7 %

Knobs

  • --draft-block-size B — head emits B - 1 tokens per round (default 4; bench used 3).
  • --mtp-head <path> (preferred) / -md <path> (back-compat alias).
  • LLAMA_MTP_SKIP_STREAK_THRESHOLD=N — adaptive skip after N consecutive zero-accept batches (off by default).
  • LLAMA_PIPELINE_DEPTH2=0 — disable depth-2 overlap (A/B against sync).
  • LLAMA_MTP_ACC_TRACE=1|<path> — NDJSON tracer for per-iteration draft / accept events. A combined example follows this list.
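
A combined sketch of these knobs layered onto the manual invocation above (values and paths are illustrative):

LLAMA_MTP_SKIP_STREAK_THRESHOLD=4 LLAMA_MTP_ACC_TRACE=/tmp/mtp-trace.ndjson \
  llama-server -m /path/to/gemma-4-target.gguf \
  --mtp-head /path/to/gemma-4-assistant-Q4_K_M.gguf \
  --spec-type mtp --draft-block-size 4 \
  -c 16384 -ngl 99 -ngld 99 -fa on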

Full architecture (graph, KV-safety contract, async pipeline, server integration, trade-offs) and the longer benchmark history live in MTP.md. User-facing CLI flags are also documented in docs/speculative.md.

Qwen 3.6 NextN — speculative decoding

This fork also ships a first-class implementation of NextN (a.k.a. MTP auxiliary-head) speculative decoding for Qwen 3.6 targets — both the dense qwen35 family and the qwen35moe Mixture-of-Experts variants. The NextN-layer weights ship inside the target's combined *_MTP.gguf (produced by the official Qwen converter), so the draft context reuses the already-loaded target llama_model — no second mmap, no second tokenizer, no second model load.

Highlights:

  • +28-36 % throughput on Qwen 3.6 35B-A3B MoE (the headline use case); acceptance ≥ 78 % at both prompt lengths in the matrix bench.
  • +5-7 % throughput on Qwen 3.6 27B dense (draft-compute-bound on this workload, but consistently positive after the shared-model refactor — previous double-mmap path regressed by 8-12 %).
  • Shared-model draft context built over the target's weights with cparams.nextn_draft = true; draft KV is sized only for the NextN layer (kv_only_nextn = true).
  • Composes with TurboQuant3 KV (-ctk turbo3 -ctv turbo3) — on MoE targets the combination is the recommended default.
  • Same async / depth-2 pipeline as Gemma MTP; pre-norm hidden states flow from the target via the embeddings_pre_norm path.

Pre-built model GGUFs

Recommended source is the unsloth Hugging Face collection — the same combined *_MTP.gguf files exercised in the matrix bench. The UD-Q4_K_XL quant is the recommended default (matches the bench cells).

Target                 | Combined _MTP.gguf (target + NextN head)
Qwen 3.6 35B-A3B (MoE) | unsloth/Qwen3.6-35B-A3B-MTP-GGUF
Qwen 3.6 27B (dense)   | unsloth/Qwen3.6-27B-MTP-GGUF

Quick start

# Pull both target (-hf) and draft (-hfd) from the same HF combined _MTP.gguf;
# they resolve to the same cached file → the server takes the shared-model branch.
llama-server \
  -hf  unsloth/Qwen3.6-35B-A3B-MTP-GGUF:UD-Q4_K_XL \
  -hfd unsloth/Qwen3.6-35B-A3B-MTP-GGUF:UD-Q4_K_XL \
  --spec-type nextn \
  --draft-max 2 --draft-min 1 \
  -c 8192 \
  -ngl 99 -ngld 99 \
  -ctk turbo3 -ctv turbo3 -fa on \
  --host 127.0.0.1 --port 8080

Or with a local file (e.g. the artifact stored under .scratch/):

llama-server \
  -m   /path/to/Qwen3.6-35B-A3B-UD-Q4_K_XL_MTP.gguf \
  -md  /path/to/Qwen3.6-35B-A3B-UD-Q4_K_XL_MTP.gguf \
  --spec-type nextn --draft-max 2 --draft-min 1 \
  -c 8192 -ngl 99 -ngld 99 -ctk turbo3 -ctv turbo3 -fa on

Repo helper scripts pick the right defaults per target:

scripts/run-qwen36-27b-nextn-server.sh        # Qwen 3.6 27B dense
scripts/run-qwen36-35ba3b-nextn-server.sh     # Qwen 3.6 35B-A3B MoE

If you ship the NextN head as a separate NEXTN_ONLY GGUF (general.architecture = qwen35*_mtp), it is still supported — point --model-draft at that file and the server falls back to the legacy override_arch path (loads a second llama_model).
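
A sketch of that fallback invocation (paths are placeholders):

llama-server \
  -m  /path/to/qwen3.6-target.gguf \
  -md /path/to/qwen3.6-nextn-only.gguf \
  --spec-type nextn --draft-max 2 --draft-min 1 \
  -c 8192 -ngl 99 -ngld 99 -fa on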

Bench snapshot (MacBook Pro M4 Max, 40-core GPU, 48 GB, Metal, single slot)

Median tps over 3 runs, --draft-max 2 --draft-min 1, single-slot, shared target/draft model. See .scratch/bench-logs/qwen-matrix-fullrun-20260512-222625.md.

model            | mode         | n=128 tps | n=512 tps | accept@128 | accept@512
qwen-27B dense   | f16-base     |      21.3 |      20.8 |            |
qwen-27B dense   | f16-nextn    |      22.9 |      21.6 |     93.9 % |     85.1 %
qwen-27B dense   | turbo3-base  |      19.7 |      18.7 |            |
qwen-27B dense   | turbo3-nextn |      20.8 |      19.7 |     85.5 % |     78.7 %
qwen-35B-A3B MoE | f16-base     |      70.1 |      69.6 |            |
qwen-35B-A3B MoE | f16-nextn    |      95.2 |      89.1 |     88.2 % |     78.7 %
qwen-35B-A3B MoE | turbo3-base  |      61.8 |      62.0 |            |
qwen-35B-A3B MoE | turbo3-nextn |      82.7 |      77.2 |     82.9 % |     80.6 %

Knobs

  • --spec-type nextn — enable NextN drafting (not Gemma mtp).
  • --model-draft / -md — pass the same path as --model for the shared-model path; pass a NEXTN_ONLY GGUF to use the legacy double-load fallback.
  • --draft-max / --draft-min — chained-draft bounds per round (current default for the helper scripts: 2 / 1).
  • llama_set_nextn (C API) — pairs target and draft contexts so that llama_context_nextn_seq_rm trims both KV caches in one call.

Full architecture (graph dispatch, KV-only-NextN trick, hidden-state transfer, performance trade-offs and the 27B-dense compute-bound analysis) lives in NEXTN.md; user-facing CLI flags are also documented in docs/speculative.md.

TurboQuant — KV cache & weight compression

Credits. TurboQuant in this fork is built on top of the absolutely awesome work by @TheTom in TheTom/llama-cpp-turboquant. Huge thanks for the original WHT-rotated quantization design, the reference kernels, and the relentless backend ports — none of this would exist without that project. ❤️

This fork (atomic-llama-cpp-turboquant) packages TurboQuant as a family of WHT-rotated low-bit quantization formats with backend-native kernels. They target two distinct memory-traffic problems:

  • KV cache compression: TURBO2_0 / TURBO3_0 / TURBO4_0 (2/3/4-bit, WHT + PolarQuant). Selected at runtime via -ctk / -ctv.
  • Model weight compression: TQ3_1S / TQ4_1S (3/4-bit, WHT-rotated Lloyd-Max with block_size = 32). Selected at quantize time as a --type for llama-quantize. A combined example follows this list.
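
The two can be combined: quantize the weights to TQ4_1S once, then serve with a turbo3 KV cache. A minimal sketch, assuming an F16 source GGUF (file names are placeholders):

# One-time weight quantization to TQ4_1S.
llama-quantize model-f16.gguf model-tq4_1s.gguf TQ4_1S

# Serve with 3-bit TurboQuant KV cache and Flash-Attention.
llama-server -m model-tq4_1s.gguf -c 32768 -ngl 99 -ctk turbo3 -ctv turbo3 -fa on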

KV cache types (-ctk / -ctv)

Type   | Bits | Compression vs F16 | Notes
turbo2 | 2    | ~6.4×              | maximum compression, intended for large-context budgets
turbo3 | 3    | ~4.3×              | recommended default; Metal TurboFlash decode kernel
turbo4 | 4    | ~3.8×              | highest accuracy of the family, safest fallback

Typical invocation with full GPU offload + Flash-Attention:

llama-server -m model.gguf -c 32768 -ngl 99 \
  -ctk turbo3 -ctv turbo3 -fa on

Pair with --cache-reuse N and a long -c to see the practical KV-budget win: at the same context length, TurboQuant typically shrinks the KV footprint by 3-6×, pushing out the OOM ceiling on Apple Silicon / discrete GPUs accordingly.
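
A concrete long-context sketch (the context size and reuse chunk are illustrative values):

# 128K context with 3-bit KV, full offload, Flash-Attention and prompt-cache reuse.
llama-server -m model.gguf -c 131072 -ngl 99 \
  -ctk turbo3 -ctv turbo3 -fa on --cache-reuse 256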

Weight quantization types (llama-quantize)

Type   | Bits | Block size | Notes
TQ3_1S | 3    | 32         | 8-level Lloyd-Max + WHT rotation
TQ4_1S | 4    | 32         | 16-level Lloyd-Max + WHT rotation; fused Metal/Vulkan MUL_MAT_VEC kernels

# Convert / re-quantize an F16/F32 GGUF to TQ4_1S.
llama-quantize model-f16.gguf model-tq4_1s.gguf TQ4_1S

TQ4_1S typically delivers ~25-35 % size reduction vs Q8_0 with single-digit-% PPL deltas; on bandwidth-bound models / GPUs it can also be faster than Q8_0 because of the lighter memory traffic.
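
One way to verify the trade-off on your own model is to produce both quantizations and benchmark them side by side; a minimal sketch with placeholder file names:

# Quantize the same F16 source to Q8_0 and TQ4_1S, then compare throughput.
llama-quantize model-f16.gguf model-q8_0.gguf Q8_0
llama-quantize model-f16.gguf model-tq4_1s.gguf TQ4_1S
llama-bench -m model-q8_0.gguf
llama-bench -m model-tq4_1s.gguf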

Backend support

Backend               | KV turbo2 / turbo3 / turbo4                                                                  | Weights TQ3_1S / TQ4_1S
Metal (Apple Silicon) | yes; TurboFlash flash-attn decode kernel for turbo3 (off by default on Apple10, see PR #91) | yes (V2.1 fused kernels)
CUDA (NVIDIA)         | turbo3 / turbo4 (full); turbo2 via reference path                                            | TQ4_1S MUL_MAT_VEC
Vulkan                | turbo3 KV (FA + coopmat), SET_ROWS for turbo2/4                                              | TQ4_1S (specialised MUL_MAT_VEC, SET_ROWS, CPY)
HIP / ROCm            | turbo3 KV; F16-K + TURBO-V mixed dispatch                                                    | reference
CPU                   | reference (correctness, not throughput)                                                      | reference

For combining TurboQuant KV with Gemma 4 MTP speculative decoding, see MTP.md §11-12. The matrix bench shows that the combo (turbo3 KV + MTP) is the right pick when the target model is bandwidth-bound (e.g. Gemma 4 31B), and that f16-KV + MTP wins when the target is compute-bound (e.g. Gemma 4 26B-A4B on M4 Max).

For Qwen 3.6 NextN speculative decoding on top of TurboQuant3 KV, see NEXTN.md §7. The matrix bench shows that turbo3 KV + NextN is the recommended default on the MoE target (Qwen 3.6 35B-A3B, +24-36 % tps over the turbo3-base baseline at single-slot), and lifts the dense Qwen 3.6 27B by ~5 % on top of turbo3-base despite the model being draft-compute-bound.

Models

Typically finetunes of the base models below are supported as well.

Instructions for adding support for new models: HOWTO-add-model.md

Text-only

Multimodal

Bindings
UIs

(to have a project listed here, it should clearly state that it depends on llama.cpp)

Tools
  • akx/ggify – download PyTorch models from Hugging Face Hub and convert them to GGML
  • akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
  • crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
  • gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
  • Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
  • unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
  • Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
  • GPUStack - Manage GPU clusters for running LLMs
  • llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
  • llama-swap - transparent proxy that adds automatic model switching with llama-server
  • Kalavai - Crowdsource end to end LLM deployment at any scale
  • llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
  • LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
  • Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.

Supported backends

Backend                | Target devices
Metal                  | Apple Silicon
BLAS                   | All
BLIS                   | All
SYCL                   | Intel and Nvidia GPU
OpenVINO [In Progress] | Intel CPUs, GPUs, and NPUs
MUSA                   | Moore Threads GPU
CUDA                   | Nvidia GPU
HIP                    | AMD GPU
ZenDNN                 | AMD CPU
Vulkan                 | GPU
CANN                   | Ascend NPU
OpenCL                 | Adreno GPU
IBM zDNN               | IBM Z & LinuxONE
WebGPU [In Progress]   | All
RPC                    | All
Hexagon [In Progress]  | Snapdragon
VirtGPU                | VirtGPU APIR

Obtaining and quantizing models

The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:

You can either manually download a GGUF file or use any llama.cpp-compatible model directly from Hugging Face or another model-hosting site via the -hf <user>/<model>[:quant] CLI argument. For example:

llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

By default, the CLI downloads from Hugging Face; you can switch to another source with the MODEL_ENDPOINT environment variable, which must point to a Hugging Face-compatible API endpoint.
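
For example, pointing the download at a mirror (the endpoint URL below is a placeholder):

MODEL_ENDPOINT=https://my-mirror.example.com/ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF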

After downloading a model, use the CLI tools to run it locally - see below.

llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.
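
For a typical Hugging Face checkpoint the conversion step looks roughly like the following (exact options can vary between script versions; paths are placeholders):

# Convert a Hugging Face model directory to an F16 GGUF.
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16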

The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:

To learn more about model quantization, read this documentation

llama-cli

A CLI tool for accessing and experimenting with most of llama.cpp's functionality.

  • Run in conversation mode

    Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME

    llama-cli -m model.gguf
    
    # > hi, who are you?
    # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
    #
    # > what is 1+1?
    # Easy peasy! The answer to 1+1 is... 2!
  • Run in conversation mode with custom chat template
    # use the "chatml" template (use -h to see the list of supported templates)
    llama-cli -m model.gguf -cnv --chat-template chatml
    
    # use a custom template
    llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
  • Constrain the output with a custom grammar
    llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'
    
    # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}

    The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.

    For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/
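
    Inline grammars work as well; a minimal sketch that restricts the model to a yes/no answer (assumes the --grammar flag accepted by llama-cli):

    llama-cli -m model.gguf -n 4 -p 'Is the sky blue? Answer:' --grammar 'root ::= " yes" | " no"'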

llama-server

A lightweight, OpenAI API compatible, HTTP server for serving LLMs.

  • Start a local HTTP server with default configuration on port 8080
    llama-server -m model.gguf --port 8080
    
    # Basic web UI can be accessed via browser: http://localhost:8080
    # Chat completion endpoint: http://localhost:8080/v1/chat/completions
  • Support multiple users and parallel decoding
    # up to 4 concurrent requests, each with 4096 max context
    llama-server -m model.gguf -c 16384 -np 4
  • Enable speculative decoding
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
  • Enable Gemma 4 MTP speculative decoding (this fork)

    Pair a gemma4 target with the official gemma4_assistant MTP head. The head is loaded into the target context (no second llama_context, no second KV cache) and runs on a dedicated scheduler so MTP draft compute overlaps target verification.

    Pre-built assistant GGUFs (recommended Q4_K_M / Q4_K_S for best speed/quality) are published in the AtomicChat / Gemma 4 Assistant GGUF collection:

    Target model    | Assistant (MTP head) GGUF
    Gemma 4 E2B     | AtomicChat/gemma-4-E2B-it-assistant-GGUF
    Gemma 4 E4B     | AtomicChat/gemma-4-E4B-it-assistant-GGUF
    Gemma 4 26B-A4B | AtomicChat/gemma-4-26B-A4B-it-assistant-GGUF
    Gemma 4 31B     | AtomicChat/gemma-4-31B-it-assistant-GGUF

    # Manual invocation — works for any of the four targets above.
    llama-server \
      -m   /path/to/gemma-4-target.gguf \
      --mtp-head /path/to/gemma-4-assistant-Q4_K_M.gguf \
      --spec-type mtp \
      --draft-block-size 3 \
      -c 16384 \
      -ngl 99 -ngld 99 \
      -fa on \
      --host 127.0.0.1 --port 8080

    Repo helper scripts pick the right defaults per target (and prefer a quantized assistant when present under .scratch/):

    # Dense targets (block size 3 by default).
    scripts/run-gemma4-mtp-server.sh           # 26B-A4B
    scripts/run-gemma4-31b-mtp-server.sh       # 31B
    
    # Edge / centroid-head targets (MTP_PRESET aware: throughput|lift|balanced|quality).
    MTP_PRESET=throughput scripts/run-gemma4-e4b-mtp-server.sh
    MTP_PRESET=throughput scripts/run-gemma4-e2b-mtp-server.sh

    Full architecture, async pipeline, KV-safety contract, tuning knobs and the latest matrix benchmark live in MTP.md. User-facing CLI flags (--spec-type, --draft-*) are documented in docs/speculative.md.

  • Enable Qwen 3.6 NextN speculative decoding (this fork)

    For Qwen 3.6 combined *_MTP.gguf checkpoints (the official Qwen converter packs the NextN auxiliary-head weights into the same file as the target), point --model-draft (-md) at the same file as --model and pass --spec-type nextn. The server detects this and reuses the already-loaded target llama_model — drafting builds a second llama_context over the same weights with llama_context_params.nextn_draft = true, so there is no second mmap of the GGUF, no second tokenizer and no second weight load. Composes with TurboQuant3 KV (-ctk turbo3 -ctv turbo3) — on Qwen 3.6 35B-A3B MoE the combination is +24-36 % tps vs the same target without speculation.

    Pre-built combined _MTP.gguf quants (recommended UD-Q4_K_XL, matches the matrix bench cells):

    Target                 | Combined _MTP.gguf
    Qwen 3.6 35B-A3B (MoE) | unsloth/Qwen3.6-35B-A3B-MTP-GGUF
    Qwen 3.6 27B (dense)   | unsloth/Qwen3.6-27B-MTP-GGUF

    # Pull both target (-hf) and draft (-hfd) from the same HF combined _MTP.gguf.
    llama-server \
      -hf  unsloth/Qwen3.6-35B-A3B-MTP-GGUF:UD-Q4_K_XL \
      -hfd unsloth/Qwen3.6-35B-A3B-MTP-GGUF:UD-Q4_K_XL \
      --spec-type nextn \
      --draft-max 2 --draft-min 1 \
      -c 8192 \
      -ngl 99 -ngld 99 \
      -ctk turbo3 -ctv turbo3 -fa on \
      --host 127.0.0.1 --port 8080

    Or with a local file:

    llama-server \
      -m   /path/to/Qwen3.6-35B-A3B-UD-Q4_K_XL_MTP.gguf \
      -md  /path/to/Qwen3.6-35B-A3B-UD-Q4_K_XL_MTP.gguf \
      --spec-type nextn --draft-max 2 --draft-min 1 \
      -c 8192 -ngl 99 -ngld 99 -ctk turbo3 -ctv turbo3 -fa on

    Repo helpers pick the right defaults per target:

    scripts/run-qwen36-27b-nextn-server.sh        # Qwen 3.6 27B dense
    scripts/run-qwen36-35ba3b-nextn-server.sh     # Qwen 3.6 35B-A3B MoE

    Standalone NEXTN_ONLY GGUFs (general.architecture = qwen35*_mtp) are still supported as a fallback (the server then performs a second llama_model_load_from_file with override_arch). The shared-model path is preferred whenever the same combined _MTP.gguf can be used as both --model and --model-draft.

    Full architecture, KV-only-NextN trick, hidden-state transfer and the matrix bench (incl. the 27B-dense compute-bound analysis) live in NEXTN.md. User-facing CLI flags (--spec-type nextn, --draft-*) are documented in docs/speculative.md.

  • Enable TurboQuant KV cache compression (this fork)

    Use a TurboQuant KV-cache type for both K and V — typically with Flash-Attention enabled — to cut KV memory traffic and footprint at long contexts. Recommended default is turbo3 (3-bit, ~4.3× vs F16, accelerated by TurboFlash on Metal and dedicated kernels on CUDA / Vulkan / HIP).

    # ~4.3x KV compression vs F16, full GPU offload, Flash-Attn on.
    llama-server -m model.gguf -c 32768 \
      -ngl 99 -ctk turbo3 -ctv turbo3 -fa on

    Pick a stronger compression preset by stepping the bit-width:

    -ctk turbo2 -ctv turbo2   # 2-bit KV, ~6.4x vs F16 (highest compression)
    -ctk turbo3 -ctv turbo3   # 3-bit KV, ~4.3x  (default sweet spot)
    -ctk turbo4 -ctv turbo4   # 4-bit KV, ~3.8x  (highest accuracy / fallback)

    See the longer write-up above for weight quantization (TQ4_1S / TQ3_1S) and the per-backend support matrix.

  • Serve an embedding model
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
  • Serve a reranking model
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
  • Constrain all outputs with a grammar
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf
    
    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf

llama-perplexity

A tool for measuring the perplexity [1] (and other quality metrics) of a model over a given text.

  • Measure the perplexity over a text file
    llama-perplexity -m model.gguf -f file.txt
    
    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
  • Measure KL divergence
    # TODO

llama-bench

Benchmark the performance of the inference for various parameters.

  • Run default benchmark
    llama-bench -m model.gguf
    
    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)

llama-simple

A minimal example for implementing apps with llama.cpp. Useful for developers.

  • Basic text completion
    llama-simple -m model.gguf
    
    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of

Contributing

  • Contributors can open PRs
  • Collaborators will be invited based on contributions
  • Maintainers can push to branches in the llama.cpp repo and merge PRs into the master branch
  • Any help with managing issues, PRs and projects is very appreciated!
  • See good first issues for tasks suitable for first contributions
  • Read the CONTRIBUTING.md for more information
  • Make sure to read this: Inference at the edge
  • A bit of backstory for those who are interested: Changelog podcast

Other documentation

Development documentation

Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:

// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)

The above example uses an intermediate build, b5046, of the library. It can be modified to use a different version by changing the URL and checksum.

Completions

Command-line completion is available for some environments.

Bash Completion

$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

Optionally this can be added to your .bashrc or .bash_profile to load it automatically. For example:

$ echo "source ~/.llama-completion.bash" >> ~/.bashrc

Dependencies

  • yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
  • stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
  • nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
  • miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
  • subprocess.h - Single-header process launching solution for C and C++ - Public domain

Footnotes

  [1] https://huggingface.co/docs/transformers/perplexity
