
feat(llm): add Featherless AI provider #65

Closed

aviflombaum wants to merge 2 commits into spacedriveapp:main from aviflombaum:feat/featherless-provider


Conversation


aviflombaum commented on Feb 19, 2026

Summary

Adds Featherless AI as a native LLM provider. Featherless hosts 17,000+ open-source models behind an OpenAI-compatible API, giving spacebot users access to models like Qwen3-32B, Llama, Mistral finetunes, and others not available through existing providers.

Registers featherless_key as a named key that auto-maps to a ProviderConfig with ApiType::OpenAiCompletions, following the same pattern as anthropic, openai, and openrouter. The generic provider dispatch in attempt_completion handles routing without any model.rs changes.
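
To illustrate the pattern, here is a minimal sketch of the named-key auto-mapping, using simplified stand-ins for the real types. Only featherless_key, FEATHERLESS_PROVIDER_BASE_URL, ProviderConfig, and ApiType::OpenAiCompletions come from this PR; the struct fields, function name, and map type are illustrative, not the actual spacebot code.

```rust
// Sketch only: simplified stand-ins for the real ProviderConfig / ApiType.
use std::collections::HashMap;

// Base URL without /v1; call_openai() appends the versioned path itself
// (see the second commit below).
const FEATHERLESS_PROVIDER_BASE_URL: &str = "https://api.featherless.ai";

#[derive(Debug, Clone)]
enum ApiType {
    OpenAiCompletions,
}

#[derive(Debug, Clone)]
struct ProviderConfig {
    base_url: String,
    api_key: String,
    api_type: ApiType,
}

/// Map the named featherless_key onto a generic ProviderConfig so the
/// existing OpenAI-compatible dispatch can route requests to Featherless.
fn register_featherless(
    providers: &mut HashMap<String, ProviderConfig>,
    featherless_key: Option<String>,
) {
    if let Some(key) = featherless_key {
        providers
            .entry("featherless".to_string())
            .or_insert_with(|| ProviderConfig {
                base_url: FEATHERLESS_PROVIDER_BASE_URL.to_string(),
                api_key: key,
                api_type: ApiType::OpenAiCompletions,
            });
    }
}
```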

Changes

4 files changed, 55 insertions, 4 deletions

| File | Change |
| --- | --- |
| src/config.rs | featherless_key field on LlmConfig, TomlLlmConfigFields, TomlLlmConfig; FEATHERLESS_API_KEY env var in load_from_env and from_toml; FEATHERLESS_PROVIDER_BASE_URL constant; auto-mapping to ProviderConfig in both config paths; has_any_key() check; needs_onboarding() env check; onboarding menu entry |
| src/llm/routing.rs | defaults_for_provider entry (default: featherless/Qwen/Qwen3-32B); provider_to_prefix entry |
| src/api/providers.rs | featherless field on ProviderStatus; all three endpoint handlers (get_providers, update_provider, delete_provider); routing auto-switch in has_key_for_current |
| src/api/models.rs | configured_providers discovery line |
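
The routing additions in the table amount to two small lookup entries. A sketch under assumed signatures (the real functions in src/llm/routing.rs may take different types) looks roughly like this:

```rust
// Sketch only: assumed signatures for the two routing helpers named in the
// table above; only the string values come from this PR.
fn defaults_for_provider(provider: &str) -> Option<&'static str> {
    match provider {
        "featherless" => Some("featherless/Qwen/Qwen3-32B"),
        _ => None,
    }
}

fn provider_to_prefix(provider: &str) -> Option<&'static str> {
    match provider {
        "featherless" => Some("featherless/"),
        _ => None,
    }
}
```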

Usage

Config (config.toml):

[llm]
featherless_key = "your-api-key"

[defaults.routing]
channel = "featherless/Qwen/Qwen3-32B"

Environment variable:

FEATHERLESS_API_KEY=your-api-key

Model routing format: featherless/<org>/<model-name> — the featherless/ prefix is stripped before sending to the API, so model names match what Featherless lists (e.g. Qwen/Qwen3-32B, moonshotai/Kimi-K2.5).
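
As a rough illustration of that prefix handling (the helper name and signature are assumptions, not the actual routing code):

```rust
// Sketch only: split the routing string into a provider name and the model
// name that is actually sent to the Featherless API.
fn strip_provider_prefix(model: &str) -> (&str, &str) {
    match model.split_once('/') {
        Some(("featherless", rest)) => ("featherless", rest),
        _ => ("default", model),
    }
}

fn main() {
    // "featherless/Qwen/Qwen3-32B" -> provider "featherless", model "Qwen/Qwen3-32B"
    let (provider, model) = strip_provider_prefix("featherless/Qwen/Qwen3-32B");
    assert_eq!(provider, "featherless");
    assert_eq!(model, "Qwen/Qwen3-32B");
}
```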

Design notes

  • No changes to llm/model.rs or llm/manager.rs — the generic ProviderConfig dispatch added in feat(llm): add custom providers and dynamic API routing #36 handles Featherless automatically once the key is registered with ApiType::OpenAiCompletions
  • Uses or_insert_with so a custom [llm.provider.featherless] block in config.toml takes precedence over the named-key auto-mapping (see the sketch after this list)
  • Default model is Qwen/Qwen3-32B (32B parameter open-source model with tool calling support)
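
A minimal sketch of the precedence rule from the second bullet, using a plain HashMap in place of the real provider map (values are placeholders):

```rust
use std::collections::HashMap;

fn main() {
    let mut providers: HashMap<String, String> = HashMap::new();

    // Explicit custom [llm.provider.featherless] block parsed first
    // from config.toml (hypothetical value).
    providers.insert("featherless".into(), "custom base_url + overrides".into());

    // Named-key auto-mapping runs afterwards; or_insert_with only fires
    // when the key is absent, so the explicit entry is kept.
    providers
        .entry("featherless".into())
        .or_insert_with(|| "auto-mapped default ProviderConfig".into());

    assert_eq!(providers["featherless"], "custom base_url + overrides");
}
```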

Why Featherless

Featherless specializes in hosting open-source models that aren't available (or are expensive) on other providers. Users get access to thousands of community and frontier open-source models with a single API key, complementing the closed-model providers already supported.

Testing

  • Rebased on current main (no merge conflicts)
  • Every change mirrors the existing named-key-to-ProviderConfig pattern
  • No changes to existing provider behavior; all additions are purely additive

aviflombaum force-pushed the feat/featherless-provider branch from 9e3f00d to cad3859 on February 19, 2026 at 21:38
Add Featherless AI as a native LLM provider, giving access to 17,000+
open-source models via their OpenAI-compatible API.

Registers featherless_key as a named key that auto-maps to a
ProviderConfig with ApiType::OpenAiCompletions, following the same
pattern as anthropic, openai, and openrouter. The generic provider
dispatch handles routing without any model.rs changes.

Changes:
- config: featherless_key field, FEATHERLESS_API_KEY env var,
  FEATHERLESS_PROVIDER_BASE_URL constant, auto-mapping to
  ProviderConfig in both load_from_env and from_toml, onboarding
  menu entry, needs_onboarding env check
- llm/routing: default model (Qwen/Qwen3-32B), provider_to_prefix
- api/providers: get/update/delete endpoints, routing auto-switch
- api/models: configured_providers discovery
aviflombaum force-pushed the feat/featherless-provider branch from cad3859 to 6cf69ba on February 19, 2026 at 21:58
The call_openai() function appends /v1/chat/completions to the base URL,
so including /v1 in the constant caused double /v1 in the final URL
(https://api.featherless.ai/v1/v1/chat/completions), resulting in 404.
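
A minimal sketch of the URL construction after this fix, assuming call_openai() joins the base URL with /v1/chat/completions (the function name comes from the commit message; the join logic shown is illustrative):

```rust
// The constant must omit /v1, since the versioned path is appended later.
const FEATHERLESS_PROVIDER_BASE_URL: &str = "https://api.featherless.ai";

fn completions_url(base_url: &str) -> String {
    // Stand-in for the path join call_openai() performs.
    format!("{}/v1/chat/completions", base_url.trim_end_matches('/'))
}

fn main() {
    // With /v1 baked into the constant this would have produced
    // https://api.featherless.ai/v1/v1/chat/completions (the 404 above).
    assert_eq!(
        completions_url(FEATHERLESS_PROVIDER_BASE_URL),
        "https://api.featherless.ai/v1/chat/completions"
    );
}
```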
