
Conversation


@Cozmopolit Cozmopolit commented Dec 16, 2025

Motivation and Context

This PR adds a native Anthropic connector for Semantic Kernel, enabling direct integration with Claude models.

Why is this change required?

  • Anthropic's Claude models are among the leading LLMs, but SK currently lacks native support
  • Microsoft recently started offering Anthropic models on Azure, making this connector increasingly relevant
  • Users currently need to implement custom solutions or use workarounds

What problem does it solve?

  • Provides first-class Claude model support in Semantic Kernel
  • Enables consistent SK patterns (function calling, filters, telemetry) with Anthropic models
  • Supports both direct Anthropic API and Azure-hosted Anthropic endpoints

What scenario does it contribute to?

  • Developers building multi-model applications who want to use Claude alongside other models
  • Azure customers who want to use Anthropic models through their existing Azure infrastructure
  • Anyone who needs Claude's capabilities with SK's function calling and agent patterns

Description

This PR introduces a complete Anthropic connector using the IChatClient pattern aligned with the OpenAI connector's recommended approach.

Implementation approach:

  • IChatClient Pattern - Uses ChatClientBuilder pipeline with AnthropicClient.Messages.AsIChatClient() as the inner client, following the same pattern as the OpenAI connector
  • UseKernelFunctionInvocation() - Leverages SK's built-in function invocation middleware for auto function calling with full filter support (IAutoFunctionInvocationFilter)
  • Anthropic SDK Integration - Built on top of the official Anthropic .NET SDK (Anthropic.SDK NuGet package)
  • Dual Endpoint Support - Works with both direct Anthropic API and Anthropic models on Azure
  • Full Feature Parity - Chat completion, streaming, function calling, multi-modal (images), prompt execution settings
  • Minimal Custom Code - The IChatClient pattern significantly reduces connector complexity by delegating function calling orchestration to SK's shared infrastructure
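
The pieces above compose roughly like this (a sketch based on the names mentioned in this PR; constructor arguments and the exact setup surface in the shipped connector may differ):

```csharp
// Sketch only. AnthropicClient and AsIChatClient() come from the
// Anthropic.SDK package; UseKernelFunctionInvocation() is the SK
// middleware this PR relies on.
using Anthropic.SDK;
using Microsoft.Extensions.AI;

var anthropic = new AnthropicClient(Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY"));

// Inner client: the SDK's Messages endpoint exposed as an IChatClient.
IChatClient inner = anthropic.Messages.AsIChatClient();

// Wrap it in a ChatClientBuilder pipeline; SK's function-invocation
// middleware handles auto function calling and runs any registered
// IAutoFunctionInvocationFilter implementations.
IChatClient client = new ChatClientBuilder(inner)
    .UseKernelFunctionInvocation()
    .Build();
```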

Supported Prompt Execution Settings:

  • ModelId, MaxTokens, Temperature, TopP, TopK
  • StopSequences, SystemPrompt
  • FunctionChoiceBehavior (Auto, Required, None)
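
Applied through SK's prompt execution settings, those options map onto a request roughly as follows (the concrete settings type name is an assumption, since it is not shown in this excerpt, and the model id is a placeholder):

```csharp
// Hypothetical settings shape; type name and model id are illustrative.
var settings = new AnthropicPromptExecutionSettings
{
    ModelId = "claude-placeholder-model",
    MaxTokens = 1024,
    Temperature = 0.7,
    TopP = 0.9,
    TopK = 40,
    StopSequences = new[] { "END" },
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(), // or Required()/None()
};
```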

Testing:

The connector includes 157+ unit tests covering chat completion, function calling with filters, streaming, multi-modal content, error handling, and settings serialization.

Contribution Checklist

  • The code builds cleanly without any errors or warnings
  • The PR follows the SK Contribution Guidelines and the pre-submission formatting script raises no violations
  • All unit tests pass, and I have added new tests where possible
  • I didn't break any Agent this year, so far 😄

@Cozmopolit Cozmopolit requested a review from a team as a code owner December 16, 2025 16:06
@moonbox3 moonbox3 added .NET Issue or Pull requests regarding .NET code kernel Issues or pull requests impacting the core kernel labels Dec 16, 2025
@github-actions github-actions bot changed the title feat(.Net): Add Anthropic Connector for Claude models .Net: feat(.Net): Add Anthropic Connector for Claude models Dec 16, 2025
@Cozmopolit Cozmopolit changed the title .Net: feat(.Net): Add Anthropic Connector for Claude models .Net: feat: Add Anthropic Connector for Claude models Dec 16, 2025
@Cozmopolit Cozmopolit closed this Dec 17, 2025

Cozmopolit commented Dec 17, 2025

Sorry, too many small bugs here. This is going back for more extensive testing so it can support all the great features, such as binary returns from tool calls.

Edit: all issues that I found are resolved. Re-opening the PR.

@Cozmopolit
Contributor Author

Refactoring in progress: Converting to use M.E.AI's IChatClient pattern via UseKernelFunctionInvocation() to align with the OpenAI connector's recommended approach. This will significantly reduce code complexity (~1200 lines → ~100 lines) while maintaining full SK filter support.

Adds a new connector for Anthropic's Claude models with full feature parity
to other SK connectors:

Features:
- Chat completion (streaming and non-streaming)
- Auto function calling with proper text and usage aggregation
- Tool/function definitions and invocation
- Multi-modal support (images)
- Prompt execution settings (temperature, top_p, max_tokens, etc.)
- Auto function invocation filters
- Telemetry and activity tracing

Key implementation details:
- Text content generated before tool calls is preserved across iterations
- Token usage (InputTokens, OutputTokens, TotalTokens) is aggregated across
  all function calling iterations
- All exit points (normal completion, filter termination, max iterations)
  correctly apply aggregated values
- Defensive validation for API response edge cases
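
The usage-aggregation point above can be sketched with Microsoft.Extensions.AI's UsageDetails (a simplified illustration, not the connector's actual code; the iteration collection is assumed):

```csharp
// Sum token usage across all function-calling iterations so the final
// result reports totals rather than just the last round trip.
long input = 0, output = 0;
foreach (ChatResponse iteration in iterations) // one entry per API round trip
{
    input  += iteration.Usage?.InputTokenCount  ?? 0;
    output += iteration.Usage?.OutputTokenCount ?? 0;
}

var aggregated = new UsageDetails
{
    InputTokenCount  = input,
    OutputTokenCount = output,
    TotalTokenCount  = input + output,
};
```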

Includes comprehensive unit tests (157 tests) covering:
- Chat completion scenarios
- Function calling with filters
- Streaming behavior
- Text and usage aggregation
- Error handling

Prepare Anthropic connector for future FunctionCallsProcessor changes that will
preserve ImageContent objects instead of serializing them to strings.
This enables native image support in tool results when that update lands.

Replaces custom AnthropicClientCore implementation with M.E.AI's IChatClient pattern using ChatClientBuilder and UseKernelFunctionInvocation() for auto function calling.

Key changes:

- Remove AnthropicClientCore (~1700 lines) in favor of ChatClientBuilder pipeline

- Use AnthropicClient.Messages.AsIChatClient() as inner client

- Leverage UseKernelFunctionInvocation() for SK filter support

- Add AnthropicPipelineHelpers for options conversion

- Restructure DI extensions for proper pipeline construction

- Expand test coverage to 157+ tests
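
In DI terms, the restructured extensions would presumably be consumed along these lines (the extension-method name here is hypothetical; the PR does not show it in this excerpt):

```csharp
// Hypothetical registration shape for the restructured DI extensions.
var builder = Kernel.CreateBuilder();
builder.Services.AddAnthropicChatCompletion(   // illustrative name
    modelId: "claude-placeholder-model",
    apiKey: Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")!);
Kernel kernel = builder.Build();
```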
@Cozmopolit Cozmopolit force-pushed the feature/anthropic-connector branch from 58ab17c to 31039cf on January 4, 2026 17:07
@Cozmopolit Cozmopolit marked this pull request as ready for review January 4, 2026 17:11
@Cozmopolit
Contributor Author

I am done here.
The connector is now well tested, with extensive unit and integration tests.

If anything is not to your liking, please comment and it shall be fixed.

…enarios

M.E.AI's FunctionInvokingChatClient automatically continues conversations when
it receives tool_use responses. This means tests with tool call responses need
multiple HTTP responses queued, even for streaming tests.

Changes:
- Removed Skip attribute from GetStreamingChatMessageContentsAsyncWithToolCallsReturnsContentAsync
- Added SetupStreamingFunctionCallScenario helper for streaming multi-response tests
- Added final_streaming_response_after_tool_call.txt test data file
- Updated test to queue both tool call and final response
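
A minimal sketch of that multi-response setup (helper and type names are illustrative, not the test suite's actual code):

```csharp
// A fake handler that replays queued responses in order. Because
// FunctionInvokingChatClient automatically follows up after a tool_use
// response, each test must queue the tool-call response AND the final one.
internal sealed class QueuedHttpMessageHandler : HttpMessageHandler
{
    private readonly Queue<HttpResponseMessage> _responses = new();

    public void Enqueue(HttpResponseMessage response) => _responses.Enqueue(response);

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
        => Task.FromResult(_responses.Dequeue());
}

// Usage (hypothetical helpers):
// handler.Enqueue(ToolUseStreamingResponse());  // first: streamed tool call
// handler.Enqueue(FinalStreamingResponse());    // then: final answer
```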