.Net: feat: Add Anthropic Connector for Claude models #13419
base: main
Conversation
Sorry, too many small bugs here; this is going into more extensive testing to support all the great features like binary returns from tool calls. Edit: all issues that I found are resolved. Re-opening the PR.
Refactoring in progress: Converting to use M.E.AI's IChatClient pattern via UseKernelFunctionInvocation() to align with the OpenAI connector's recommended approach. This will significantly reduce code complexity (~1200 lines → ~100 lines) while maintaining full SK filter support.
Adds a new connector for Anthropic's Claude models with full feature parity to other SK connectors.

Features:
- Chat completion (streaming and non-streaming)
- Auto function calling with proper text and usage aggregation
- Tool/function definitions and invocation
- Multi-modal support (images)
- Prompt execution settings (temperature, top_p, max_tokens, etc.)
- Auto function invocation filters
- Telemetry and activity tracing

Key implementation details:
- Text content generated before tool calls is preserved across iterations
- Token usage (InputTokens, OutputTokens, TotalTokens) is aggregated across all function calling iterations
- All exit points (normal completion, filter termination, max iterations) correctly apply aggregated values
- Defensive validation for API response edge cases

Includes comprehensive unit tests (157 tests) covering:
- Chat completion scenarios
- Function calling with filters
- Streaming behavior
- Text and usage aggregation
- Error handling
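To make the feature list concrete, here is a hedged consumption sketch. The registration extension, the settings class, and its property names are assumptions inferred from the features above, not the PR's exact public API:

```csharp
// Hypothetical usage sketch; AddAnthropicChatCompletion and
// AnthropicPromptExecutionSettings are assumed names, not confirmed by the PR.
using System;
using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public sealed class TimePlugin
{
    [KernelFunction, Description("Gets the current UTC time.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("O");
}

public static class Demo
{
    public static async Task RunAsync()
    {
        var builder = Kernel.CreateBuilder();

        // Assumed registration extension added by this connector.
        builder.AddAnthropicChatCompletion(
            modelId: "claude-3-5-sonnet-20241022",
            apiKey: Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")!);

        builder.Plugins.AddFromType<TimePlugin>();
        Kernel kernel = builder.Build();

        // Settings mirror the options listed above (temperature, top_p, max_tokens)
        // plus auto function calling; property names are assumptions.
        var settings = new AnthropicPromptExecutionSettings
        {
            Temperature = 0.2,
            TopP = 0.9,
            MaxTokens = 1024,
            FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
        };

        var result = await kernel.InvokePromptAsync(
            "What time is it right now (UTC)?",
            new KernelArguments(settings));

        Console.WriteLine(result);
    }
}
```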
Prepare Anthropic connector for future FunctionCallsProcessor changes that will preserve ImageContent objects instead of serializing them to strings. This enables native image support in tool results when that update lands.
Replaces the custom AnthropicClientCore implementation with M.E.AI's IChatClient pattern using ChatClientBuilder and UseKernelFunctionInvocation() for auto function calling.

Key changes:
- Remove AnthropicClientCore (~1700 lines) in favor of a ChatClientBuilder pipeline
- Use AnthropicClient.Messages.AsIChatClient() as the inner client
- Leverage UseKernelFunctionInvocation() for SK filter support
- Add AnthropicPipelineHelpers for options conversion
- Restructure DI extensions for proper pipeline construction
- Expand test coverage to 157+ tests
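A minimal sketch of the pipeline this commit describes. AnthropicClient, Messages, and AsIChatClient() come from the Anthropic.SDK package and UseKernelFunctionInvocation() is the SK extension named above; the exact constructor and extension signatures are assumptions:

```csharp
// Sketch only; signatures approximated from the commit description.
using System;
using Anthropic.SDK;
using Microsoft.Extensions.AI;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

string apiKey = Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")!;

// Inner client: the SDK's Messages endpoint exposed as an IChatClient.
IChatClient innerClient = new AnthropicClient(apiKey).Messages.AsIChatClient();

// Decorate it so SK auto function calling (and IAutoFunctionInvocationFilter
// filters) run inside the pipeline instead of a hand-rolled client core.
IChatClient chatClient = new ChatClientBuilder(innerClient)
    .UseKernelFunctionInvocation()
    .Build();

// The finished pipeline can then back SK's IChatCompletionService.
IChatCompletionService service = chatClient.AsChatCompletionService();
```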
58ab17c to 31039cf
I am done here. If anything is not to your liking, please comment and it shall be fixed.
…enarios

M.E.AI's FunctionInvokingChatClient automatically continues conversations when it receives tool_use responses. This means tests with tool call responses need multiple HTTP responses queued, even for streaming tests.

Changes:
- Removed Skip attribute from GetStreamingChatMessageContentsAsyncWithToolCallsReturnsContentAsync
- Added SetupStreamingFunctionCallScenario helper for streaming multi-response tests
- Added final_streaming_response_after_tool_call.txt test data file
- Updated test to queue both tool call and final response
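As an illustration of the multi-response requirement described above, a test handler can replay queued canned responses in order, so the tool_use call and the auto-continued follow-up call each get their own payload. The class and helper names below are hypothetical, not the PR's actual test helpers:

```csharp
// Illustrative test plumbing; not the PR's SetupStreamingFunctionCallScenario.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public sealed class QueuedHttpMessageHandler : HttpMessageHandler
{
    private readonly Queue<HttpResponseMessage> _responses = new();

    public void Enqueue(HttpResponseMessage response) => _responses.Enqueue(response);

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (_responses.Count == 0)
        {
            // FunctionInvokingChatClient issues a second request automatically
            // after invoking the requested function, so it needs its own response.
            throw new InvalidOperationException("No more queued responses.");
        }

        return Task.FromResult(_responses.Dequeue());
    }
}

// Usage in a streaming tool-call test (sketch): queue the tool_use stream first,
// then the final text stream. StreamingResponseFromFile is a hypothetical helper.
//
//   handler.Enqueue(StreamingResponseFromFile("streaming_tool_call_response.txt"));
//   handler.Enqueue(StreamingResponseFromFile("final_streaming_response_after_tool_call.txt"));
```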
Motivation and Context
This PR adds a native Anthropic connector for Semantic Kernel, enabling direct integration with Claude models.
Description
This PR introduces a complete Anthropic connector using the IChatClient pattern aligned with the OpenAI connector's recommended approach.
Implementation approach:
- ChatClientBuilder pipeline with AnthropicClient.Messages.AsIChatClient() as the inner client, following the same pattern as the OpenAI connector
- Auto function calling with full SK filter support (IAutoFunctionInvocationFilter)
- Built on the official Anthropic.SDK NuGet package

Supported Prompt Execution Settings: temperature, top_p, max_tokens, and the related options listed in the feature summary above.
Testing:
The connector includes 157+ unit tests covering chat completion, function calling with filters, streaming, multi-modal content, error handling, and settings serialization.
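For a sense of what the settings-serialization coverage might look like, here is a hedged sketch of one such test. The settings class, the FromExecutionSettings factory, and the property names are assumptions modeled on other SK connectors, not the PR's actual test code:

```csharp
// Hypothetical test shape; names are assumptions, not the PR's API.
using Microsoft.SemanticKernel;
using Xunit;

public sealed class AnthropicPromptExecutionSettingsTests
{
    [Fact]
    public void FromExecutionSettings_PreservesSamplingOptions()
    {
        var original = new AnthropicPromptExecutionSettings
        {
            Temperature = 0.2,
            TopP = 0.9,
            MaxTokens = 1024,
        };

        // Round-trip through the factory that converts generic execution
        // settings into Anthropic-specific ones.
        var converted = AnthropicPromptExecutionSettings.FromExecutionSettings(original);

        Assert.Equal(original.Temperature, converted.Temperature);
        Assert.Equal(original.TopP, converted.TopP);
        Assert.Equal(original.MaxTokens, converted.MaxTokens);
    }
}
```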
Contribution Checklist