Add support for Claude Code Token #32

Open

egenvall wants to merge 1 commit into spacedriveapp:main from egenvall:feature/antropic-subscription-usage

Conversation

@egenvall (Contributor) commented Feb 18, 2026

Summary

  • Add Claude Code OAuth token (sk-ant-oat*) support, enabling use of Anthropic subscription-based authentication alongside standard API keys
  • Introduce a new src/llm/anthropic/ module handling auth routing, tool name normalization, prompt caching, and request building
  • Add per-slot adaptive thinking effort configuration (auto/max/high/medium/low) for Claude 4.6 models
  • Add frontend UI for thinking effort when a 4.6 model is selected

[Screenshot: thinking effort selector in the admin UI, 2026-02-18 16:38]

Details

Inspired by and ported from PI-AI's OAuth token authentication.

Anthropic API key prefixes are auto-detected, and requests are routed differently depending on whether the key stems from a `claude setup-token`.
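
A minimal sketch of that prefix routing, assuming a standalone helper; the enum and function names here are illustrative, not the PR's actual identifiers:

```rust
/// Hypothetical sketch of the prefix-based auth detection; the real
/// module layout in src/llm/anthropic/ may differ.
#[derive(Debug, PartialEq)]
enum AnthropicAuth {
    /// Standard API key (typically sent via the `x-api-key` header).
    ApiKey,
    /// Claude Code OAuth token from `claude setup-token` (typically sent as `Authorization: Bearer`).
    OAuthToken,
}

fn detect_auth(key: &str) -> AnthropicAuth {
    if key.starts_with("sk-ant-oat") {
        AnthropicAuth::OAuthToken
    } else {
        AnthropicAuth::ApiKey
    }
}

fn main() {
    assert_eq!(detect_auth("sk-ant-oat01-example"), AnthropicAuth::OAuthToken);
    assert_eq!(detect_auth("sk-ant-api03-example"), AnthropicAuth::ApiKey);
}
```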

Tool name normalization - When using a `claude setup-token`, tool names are mapped to Claude Code's canonical casing on outgoing requests and reverse-mapped back to their original names on responses.
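
A rough sketch of that two-way mapping, assuming a simple snake_case-to-PascalCase canonicalization; the actual casing rules, type names, and storage in the PR may differ:

```rust
use std::collections::HashMap;

/// Illustrative sketch: remember both directions of the rename so tool
/// calls in responses can be mapped back to the original names.
struct ToolNameMap {
    forward: HashMap<String, String>, // original -> canonical
    reverse: HashMap<String, String>, // canonical -> original
}

impl ToolNameMap {
    fn new(tools: &[&str]) -> Self {
        let mut forward = HashMap::new();
        let mut reverse = HashMap::new();
        for &name in tools {
            // Hypothetical canonicalization: "read_file" -> "ReadFile".
            let canonical: String = name
                .split('_')
                .map(|part| {
                    let mut chars = part.chars();
                    match chars.next() {
                        Some(first) => first.to_uppercase().collect::<String>() + chars.as_str(),
                        None => String::new(),
                    }
                })
                .collect();
            forward.insert(name.to_string(), canonical.clone());
            reverse.insert(canonical, name.to_string());
        }
        Self { forward, reverse }
    }

    /// Name to send in the outgoing request.
    fn outbound(&self, name: &str) -> String {
        self.forward.get(name).cloned().unwrap_or_else(|| name.to_string())
    }

    /// Name to report back to the caller when a response references a tool.
    fn inbound(&self, name: &str) -> String {
        self.reverse.get(name).cloned().unwrap_or_else(|| name.to_string())
    }
}

fn main() {
    let map = ToolNameMap::new(&["read_file", "bash"]);
    assert_eq!(map.outbound("read_file"), "ReadFile");
    assert_eq!(map.inbound("ReadFile"), "read_file");
}
```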

Prompt caching - System prompt blocks and the last tool definition get `cache_control` attached.
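
A sketch of that rule using the serde_json crate (assumed here for building the JSON values); the helper name is hypothetical, but the `{"type": "ephemeral"}` marker follows Anthropic's documented prompt-caching format:

```rust
use serde_json::{json, Value};

/// Attach cache_control to every system prompt block and to the last
/// tool definition, as described above. Request shapes are simplified.
fn apply_cache_control(system: &mut Vec<Value>, tools: &mut Vec<Value>) {
    let marker = json!({ "type": "ephemeral" });
    for block in system.iter_mut() {
        block["cache_control"] = marker.clone();
    }
    if let Some(last_tool) = tools.last_mut() {
        last_tool["cache_control"] = marker.clone();
    }
}

fn main() {
    let mut system = vec![json!({ "type": "text", "text": "You are a helpful agent." })];
    let mut tools = vec![
        json!({ "name": "ReadFile", "input_schema": { "type": "object" } }),
        json!({ "name": "Bash", "input_schema": { "type": "object" } }),
    ];
    apply_cache_control(&mut system, &mut tools);
    assert_eq!(system[0]["cache_control"]["type"], "ephemeral");
    assert!(tools[0].get("cache_control").is_none());
    assert_eq!(tools[1]["cache_control"]["type"], "ephemeral");
}
```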

Adaptive thinking - 4.6-generation models (opus-4-6, sonnet-4-6) get thinking: { type: "adaptive" } with configurable effort per routing slot. Opus defaults to max, others to high.
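
A sketch of that defaulting logic, again assuming serde_json; the function name is hypothetical, and `None` stands in here for the UI's "auto" setting:

```rust
use serde_json::{json, Value};

/// Build the adaptive thinking block for 4.6-generation models, applying
/// the per-slot effort if one is configured and the defaults otherwise
/// (opus -> max, others -> high).
fn thinking_config(model: &str, slot_effort: Option<&str>) -> Option<Value> {
    // Only 4.6-generation models get adaptive thinking.
    if !(model.contains("opus-4-6") || model.contains("sonnet-4-6")) {
        return None;
    }
    let effort = slot_effort.unwrap_or(if model.contains("opus-4-6") { "max" } else { "high" });
    Some(json!({ "type": "adaptive", "effort": effort }))
}

fn main() {
    assert_eq!(
        thinking_config("claude-opus-4-6", None),
        Some(json!({ "type": "adaptive", "effort": "max" }))
    );
    assert_eq!(
        thinking_config("claude-sonnet-4-6", Some("medium")),
        Some(json!({ "type": "adaptive", "effort": "medium" }))
    );
    assert_eq!(thinking_config("claude-sonnet-4-5", None), None);
}
```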

Test plan

  • Run claude setup-token and use that as your API key for Anthropic.
  • Verify standard API key auth (sk-ant-api*) still works as before
  • Verify thinking effort dropdown only appears for 4.6 models in the admin UI
  • Run cargo test — new unit tests cover auth detection, cache control, tool normalization, and reverse mapping

Known issues

  • Supporting different thinking levels for the same model needs a refactor to pipe the effort setting through per route.

- Add support for `claude setup-token` as the Anthropic API Key
- Add support for dynamic-thinking per router for supported models

Clean up

Revert quickstart
@jamiepine (Member) commented:

Maybe not the best timing on this, given Anthropic have cracked down. Gonna sit on this one, curious what others think?

@insecurejezza commented:

I say give us the option at our own risk. Anthropic is intentionally vague about whether this is allowed or not.

@egenvall (Contributor, Author) commented:

> Maybe not the best timing on this, given Anthropic have cracked down. Gonna sit on this one, curious what others think?

It's technically against the TOS, but after their statement yesterday they went on X and said the crackdown was misunderstood and due to "docs cleanup".

In my opinion, let people use it at their own risk. The reward greatly outweighs the risk, as API pricing is simply not reasonable for individuals who want to use 4.6.

Marenz added a commit to Marenz/spacebot that referenced this pull request Feb 19, 2026: …builder

PR spacedriveapp#32's build_anthropic_request was used for all Anthropic-compatible
providers, sending MiniMax requests to Anthropic's API with OAuth
headers. Split into call_anthropic (OAuth, adaptive thinking, prompt
caching, tool normalization) and call_anthropic_compatible (plain
Anthropic message format for third-party providers like MiniMax).
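
A rough sketch of the split that commit describes; the two function names come from its message, while the enum and the string return values here are purely illustrative:

```rust
/// Illustrative dispatch: first-party Anthropic requests go through the
/// OAuth-aware path, while Anthropic-compatible third parties get the
/// plain message format.
enum Provider {
    Anthropic,
    /// Stand-in for any third-party Anthropic-compatible provider.
    MiniMax,
}

fn dispatch(provider: &Provider) -> &'static str {
    match provider {
        // OAuth headers, adaptive thinking, prompt caching, tool normalization.
        Provider::Anthropic => "call_anthropic",
        // Plain Anthropic message format, no Anthropic-specific auth.
        Provider::MiniMax => "call_anthropic_compatible",
    }
}

fn main() {
    assert_eq!(dispatch(&Provider::Anthropic), "call_anthropic");
    assert_eq!(dispatch(&Provider::MiniMax), "call_anthropic_compatible");
}
```
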
@egenvall (Contributor, Author) commented:

@jamiepine if we're aligned I can rebase to get it in.

@robertocarvajal commented:

Why not use opencode to spawn Claude instances? That would be fine with the TOS. Basically run a `claude -p` ralph loop on it.

@ricorna (Contributor) commented Feb 19, 2026

Don't bother. They straight up changed something on their end yesterday evening. Anthropic only accepts OAuth-based keys from Claude Code now, as far as I can tell?

Anyone still running an agent on their sub? I had to go to GLM on my OpenClaw.

@egenvall (Contributor, Author) commented:

> Don't bother. They straight up changed something on their end yesterday evening. Anthropic only accepts OAuth-based keys from Claude Code now, as far as I can tell?
>
> Anyone still running an agent on their sub? I had to go to GLM on my OpenClaw.

I've been running on my sub all day 🤷
