
UPSTREAM PR #1184: Feat: Select backend devices via arg #40

Open
loci-dev wants to merge 18 commits into main from loci/pr-1184-select-backend

Conversation

@loci-dev commented Feb 2, 2026

Note

Source pull request: leejet/stable-diffusion.cpp#1184

The main goal of this PR is to improve the user experience in multi-GPU setups by allowing users to choose which model part gets sent to which device.

CLI changes (see the example after the list):

  • Add the --main-backend-device [device_name] argument to set the default backend device.
  • Remove the --clip-on-cpu, --vae-on-cpu, and --control-net-cpu arguments.
  • Replace them respectively with the new --clip-backend-device [device_name], --vae-backend-device [device_name], and --control-net-backend-device [device_name] arguments.
  • Add --diffusion-backend-device (controls the device used for the diffusion/flow models) and --tae-backend-device.
  • Add --upscaler-backend-device, --photomaker-backend-device, and --vision-backend-device.
  • Add the --list-devices argument to print the list of available ggml devices and exit.
  • Add the --rpc argument to connect to a compatible GGML RPC server.
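
Putting these together, a sketch of typical invocations (the device names such as CUDA0 are placeholders; the real names come from the --list-devices output on your machine, and the model path and prompt are illustrative):

```sh
# Discover which devices ggml sees on this machine
./sd-cli --list-devices

# Hypothetical multi-device run: diffusion model on the first GPU,
# text encoders and VAE kept on the CPU
./sd-cli -m model.safetensors -p "a photo of a cat" \
    --diffusion-backend-device CUDA0 \
    --clip-backend-device CPU \
    --vae-backend-device CPU
```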

C API changes (stable-diffusion.h; a usage sketch follows the list):

  • Change the contents of the sd_ctx_params_t struct.
  • void list_backends_to_buffer(char* buffer, size_t buffer_size) writes the details of the available devices to a null-terminated char array. Devices are separated by newline characters (\n); a device's name and description are separated by a tab character (\t).
  • size_t backend_list_size() returns the size of the buffer needed by list_backends_to_buffer.
  • void add_rpc_device(const char* address) connects to a ggml RPC backend (from llama.cpp).
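
A minimal sketch of how a caller might consume this API, assuming only the signatures and the \n/\t format described above (the parsing code is illustrative, not taken from the repo):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "stable-diffusion.h"

int main(void) {
    // Optionally register a ggml RPC endpoint first. The host:port format
    // shown here is llama.cpp's rpc-server convention; address is hypothetical.
    // add_rpc_device("192.168.1.10:50052");

    // Ask how large the device listing is, then fetch it in one call.
    size_t size = backend_list_size();
    char*  buf  = (char*)malloc(size);
    if (!buf) return 1;
    list_backends_to_buffer(buf, size);

    // Devices are separated by '\n'; name and description by '\t'.
    for (char* line = strtok(buf, "\n"); line; line = strtok(NULL, "\n")) {
        char* tab = strchr(line, '\t');
        if (tab) {
            *tab = '\0';
            printf("%-16s %s\n", line, tab + 1);
        } else {
            printf("%s\n", line);
        }
    }
    free(buf);
    return 0;
}
```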

The default device selection should now consistently prioritize discrete GPUs over iGPUs.

For example, if you want to run the text encoders on the CPU, you now use --clip-backend-device CPU instead of --clip-on-cpu.

TODO:

  • Fix a bug with --lora-apply-mode immediately when the CLIP and diffusion models are running on different (non-CPU) backends.
  • Clean up logs

Important: to use RPC, you need to add -DGGML_RPC=ON to the build. Additionally, it requires either that sd.cpp be built with the -DSD_USE_SYSTEM_GGML flag (I haven't tested that one), or that the RPC server be built with -DCMAKE_C_FLAGS="-DGGML_MAX_NAME=128" -DCMAKE_CXX_FLAGS="-DGGML_MAX_NAME=128" (the default is 64).
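
For the second variant, a sketch of the server build (the -D flags are quoted from above; the out-of-tree cmake invocation and the rpc-server target name are llama.cpp's usual pattern, so adjust to your checkout):

```sh
# In a llama.cpp checkout: build the RPC server with GGML_MAX_NAME=128
cmake -B build -DGGML_RPC=ON \
      -DCMAKE_C_FLAGS="-DGGML_MAX_NAME=128" \
      -DCMAKE_CXX_FLAGS="-DGGML_MAX_NAME=128"
cmake --build build --config Release --target rpc-server
```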

Fixes #1116

@loci-dev force-pushed the main branch 19 times, most recently from 052ebb0 to 76ede2c on February 3, 2026 at 10:20
@loci-dev force-pushed the loci/pr-1184-select-backend branch from 29e8399 to 2d43513 on February 3, 2026 at 10:46
@loci-dev temporarily deployed to stable-diffusion-cpp-prod on February 3, 2026 at 10:46 with GitHub Actions

loci-review bot commented Feb 3, 2026

Overview

Analysis of stable-diffusion.cpp across 18 commits reveals minimal performance impact from multi-backend device management refactoring. Of 48,425 total functions, 124 were modified (0.26%), 331 added, and 109 removed. Power consumption increased negligibly: build.bin.sd-cli (+0.388%, 479,167→481,028 nJ) and build.bin.sd-server (+0.239%, 512,977→514,202 nJ).

Function Analysis

SDContextParams Constructor (both binaries): Response time increased ~40% (+2,816 to +2,840 ns) due to initializing 9 new std::string device placement fields replacing 3 boolean flags. Enables per-component GPU/CPU device selection for heterogeneous computing.

SDContextParams Destructor (both binaries): Response time increased ~42% (+2,497 to +2,505 ns) from destroying 9 additional string members. One-time cleanup cost outside inference paths.

~StableDiffusionGGML (both binaries): Throughput time increased ~95% (+192ns absolute) managing 7 backend types versus 3, including loop-based cleanup for multiple CLIP backends. Response time impact minimal (+5.2%, ~720ns).

ggml_e8m0_to_fp32_half (sd-cli): Response time improved 24% (-36ns), benefiting quantization operations called millions of times during inference.

Standard library functions (std::_Rb_tree::begin, std::vector::_S_max_size, std::swap): Showed 76-289% throughput increases due to template instantiation complexity, but absolute changes remain under 220ns in non-critical initialization paths.

Additional Findings

All performance regressions occur in initialization and cleanup phases, not inference hot paths. The architectural changes enable multi-GPU workload distribution, per-component device placement (diffusion, CLIP, VAE on separate devices), and runtime backend flexibility. Quantization improvements and multi-GPU capabilities provide net performance gains during actual inference, far exceeding the microsecond-level initialization overhead. Changes are well-justified architectural improvements with negligible real-world impact.

🔎 Full breakdown: Loci Inspector.
💬 Questions? Tag @loci-dev.

@loci-dev force-pushed the main branch 6 times, most recently from 76645dd to 5bbc590 on February 7, 2026 at 04:37