UPSTREAM PR #1239: LoRA: improve LoCon support with other naming conventions#39
Conversation
Overview

Analysis of 47,944 functions across two binaries reveals minimal performance impact from a single commit improving LoRA/LoCon naming convention support. Modified functions: 85 (0.18%), new: 53, removed: 32, unchanged: 47,774 (99.64%).

Power Consumption: Energy efficiency remains essentially unchanged, confirming negligible impact on production workloads.

Function Analysis

Most Significant Regressions:
Most Significant Improvements:
Source Code Context: The single commit modified only lora.hpp, adding fallback logic for LoKr weight naming conventions in …. Other analyzed functions (validation, logging, constructors, string conversion) showed minor changes with negligible cumulative impact.

Additional Findings

Critical Path Assessment: No changes detected in inference hot paths (tensor operations, attention mechanisms, GPU kernels). Core ML workloads remain unaffected.

Primary Concern: The LoRA …

Impact: The naming convention improvements successfully enhance model compatibility. The 33% throughput improvement in …

Overall Assessment: The target version achieves functional objectives (enhanced LoRA compatibility) without measurable inference performance degradation. Performance variations are predominantly compiler-driven, with no systemic regressions in production-critical operations.

🔎 Full breakdown: Loci Inspector.
Force-pushed from 0219cb4 to 17a1e1e
Mirrored from leejet/stable-diffusion.cpp#1239
LoRA "mid" weights for convolution layers were being ignored, which can cause crashes (`stable-diffusion.cpp\lora.hpp:498: GGML_ASSERT(ggml_nelements(diff) == ggml_nelements(model_tensor)) failed`) when loading some LoRA models. This should fix it in most, if not all, cases.
Example model that fails before this change: https://civitai.green/models/918898/lokrconcept-ahetobleh-for-illustrious-based-models