AgentFactories

Rasmus Wulff Jensen edited this page Jan 27, 2026 · 17 revisions

Note

In the samples below, we create and use an OpenAIAgentFactory, but all factories work the same way (with slight variations in which options are available).

Each AgentFactory serves the same purpose of creating Agent Definitions for you to use to call the LLMs.

Create the Agent Factory

You have two ways to create an Agent Factory: using simplified connection info, or using a Connection instance.

Option 1. Create a new instance manually

OpenAIAgentFactory agentFactory1 = new OpenAIAgentFactory("<api-key>");

//or

OpenAIAgentFactory agentFactory2 = new OpenAIAgentFactory(new OpenAIConnection
{
   ApiKey = "<api-key>",
   NetworkTimeout = TimeSpan.FromMinutes(5)
});

Option 2. Use Dependency Injection

builder.Services.AddOpenAIAgentFactory("<api-key>");

//or

builder.Services.AddOpenAIAgentFactory(new OpenAIConnection
{
    ApiKey = "<api-key>",
    NetworkTimeout = TimeSpan.FromMinutes(5)
});
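For completeness, here is a minimal sketch of consuming the registered factory (assuming the registration above makes it resolvable by its concrete type; MyAgentService is a hypothetical consumer class):

```csharp
//Hypothetical consumer class resolving the factory from the DI container
//via constructor injection and using it to create an Agent
public class MyAgentService(OpenAIAgentFactory agentFactory)
{
    public OpenAIAgent BuildAssistant() =>
        agentFactory.CreateAgent(model: "gpt-5", instructions: "You are a nice AI");
}
```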

Using AgentFactory to create Agents

Each AgentFactory has 2 CreateAgent method overloads: one with simplified options, and one with more advanced options

//Create your simplified Agent (supports 'model', 'instructions', 'name', and 'tools')
OpenAIAgent agent = agentFactory.CreateAgent(model: "gpt-5", instructions: "You are a nice AI");

//Create your more advanced Agent with access to all options
OpenAIAgent advancedAgent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    ReasoningEffort = OpenAIReasoningEffort.Low, //Set reasoning effort
    Instructions = "You are a nice AI", //The System Prompt
    Tools = [], //Add your tools here
});

The provider-specific Agents

The CreateAgent methods return a provider-specific Agent, for example an OpenAIAgent. These agents inherit from the standard AIAgent, which means they are 100% compatible with the rest of Microsoft Agent Framework.

public class OpenAIAgent(AIAgent innerAgent) : AIAgent

If you for some reason do not wish to use these provider-specific agents (for example, because you want to dynamically create the same Agent from multiple providers), you can refer to them as AIAgent without issue.

//Referring to an agent as an 'AIAgent' instead of an 'OpenAIAgent'
AIAgent agent = agentFactory.CreateAgent("gpt-5");
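Since every factory returns an AIAgent-compatible type, agents from different providers can be handled uniformly behind the shared base type. A sketch (the factory instances and the Google model name are illustrative, not part of the sample above):

```csharp
//Treating agents from different providers uniformly via the AIAgent base type
//('openAiFactory' and 'googleFactory' are illustrative factory instances)
List<AIAgent> agents =
[
    openAiFactory.CreateAgent("gpt-5"),
    googleFactory.CreateAgent("gemini-2.5-pro") //example model name
];
```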

Options you can provide while creating an Agent

Mandatory Options

In all AgentFactories, Model is mandatory; in AnthropicAgentFactory, MaxOutputTokens is also mandatory, while it is optional for the rest.

| Property | Notes |
| --- | --- |
| Model | In AzureOpenAI, you technically provide the 'DeploymentName' from https://ai.azure.com rather than the Model name.<br>Several of the providers ship a 'const' collection of model names, for example OpenAIChatModels. |
| MaxOutputTokens | Only Anthropic has this requirement; the rest can optionally set it. |
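A hedged sketch of satisfying both mandatory options on Anthropic (assuming AnthropicAgentFactory exposes the same AgentOptions-style overload shown above for OpenAI; the model name and token count are examples):

```csharp
//Anthropic requires both 'Model' and 'MaxOutputTokens' to be set
AnthropicAgent agent = anthropicFactory.CreateAgent(new AgentOptions
{
    Model = "claude-sonnet-4-5", //example model name
    MaxOutputTokens = 4096       //mandatory on Anthropic, optional elsewhere
});
```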

Common Options

Most Agents need Instructions and Tools to be of real value. Beyond that, the various 'Thinking settings' for reasoning models are commonly set, and on OpenAI-based providers you can choose a ClientType. Finally, some scenarios require you to specify a Name for your Agent.

| Property | Notes |
| --- | --- |
| Name | The Name of the Agent (optional in most cases, but some scenarios do require one) |
| Instructions | The LLM's System Message (sometimes also called the Developer Message). It allows you to steer the model's tone, rules, and behaviour through Prompt Engineering |
| Tools | A collection of tools the Agent can choose to activate during a prompt (you often use the Instructions to steer the model on when and in what order tools should be called).<br>Tip: You can use AIToolsFactory to help define your Tools beyond the normal AIFunctionFactory |
| ClientType | In OpenAI-based systems (OpenAI, AzureOpenAI, OpenRouter, and XAI), you can choose per agent whether to use OpenAI's ChatClient protocol or their ResponseAPI protocol. By default, an Agent uses the ClientType defined on the parent Connection (ChatClient by default), but you can override it at the connection level or directly on Agent creation via this property |
| Thinking Settings | Across the various providers there are different ways to set how much an LLM should 'think' before answering:<br>- On OpenAI-based Agents, you can set ReasoningEffort to none, minimal, low, medium, high, or xhigh<br>- On OpenAI-based Agents using the ResponsesAPI, you can set ReasoningSummaryVerbosity to control what reasoning summary is returned (i.e. text describing what the model 'thought' about)<br>- On AnthropicAgents, you can set BudgetTokens to indicate how many tokens the Agent is allowed to think (minimum 1024 tokens)<br>- On GoogleAgents, you can set ThinkingBudget (Gemini < 3) or ThinkingLevel + IncludeThoughts (Gemini 3+) |
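Putting the common options together in one creation call (the name, instructions, and tool are illustrative; GetWeather is a placeholder method you would define yourself):

```csharp
//Combining the common options on a single Agent
OpenAIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    Name = "SupportAgent",                          //illustrative name
    Instructions = "You are a helpful support AI",  //the System Prompt
    Tools = [AIFunctionFactory.Create(GetWeather)], //'GetWeather' is a placeholder
    ReasoningEffort = OpenAIReasoningEffort.Medium  //thinking setting (OpenAI-based)
});
```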

Middleware Options

In Agent Framework Toolkit, Middleware has been streamlined to be property-options instead of the regular .AsBuilder (which still works if you prefer that way; see below).

Tool Calling Middleware

Tool Calling Middleware allows you to inspect, manipulate, and cancel tool calls.

If you just wish to get details on when a Tool Call happens, you can use the simplified RawToolCallDetails

OpenAIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    RawToolCallDetails = details =>
    {
        FunctionInvocationContext context = details.Context; //Context of the Tool Call
        Console.WriteLine(details.ToString()); //'ToString' version of the above
    }
});

If you need access to the full tool calling experience (to manipulate/cancel), you can use the advanced version of the Middleware like this

OpenAIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    ToolCallingMiddleware = async (callingAgent, context, next, token) =>
    {
        //Inspect, Manipulate here...

        return await next(context, token); //Do not call 'next' to cancel
    }
});

OpenTelemetry Middleware

If you wish to use OpenTelemetry with your agent, you can specify your OpenTelemetry source in the following way

OpenAIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    OpenTelemetryMiddleware = new OpenTelemetryMiddleware(
        "MySource", //Set your source
        telemetryAgent => telemetryAgent.EnableSensitiveData = true //configure the telemetry-agent
        )
});

Logging Middleware

If you wish to use traditional Logging via ILogger instead of OpenTelemetry, you can do it like this

OpenAIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    LoggingMiddleware = new LoggingMiddleware(new MyLoggerFactory(), loggingAgent => /* optional configuration */)
});

public class MyLoggerFactory : ILoggerFactory
{
    ...
}

Tip

If you need support for more advanced/custom Middleware, you can use the regular .AsBuilder approach

AIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
})
.AsBuilder()
.Use(...) //Add your advanced/custom middleware here
.Build();

//Bonus: If you still want an 'OpenAIAgent', you can do the following
OpenAIAgent openAIAgent = new OpenAIAgent(agent);

Less Common Options

Here is a list of the remaining Options you can set on an Agent. They are less common, but might be needed in advanced/niche/provider-specific scenarios.

| Property | Notes |
| --- | --- |
| Id | Set the ID of the Agent (otherwise a random GUID is assigned as the ID) |
| Description | Description of the Agent; not used by the LLM and purely informational |
| Temperature | The Temperature of the LLM call (1 = normal; 0 = less creativity) [NON-REASONING MODELS ONLY] |
| Services | Set up Tool Calling Service Injection (see https://youtu.be/EGs-Myf5MB4 for more details) |
| LoggerFactory | Set up a logger factory (alternative to Middleware) |
| RawHttpCallDetails | Intercept the raw HTTP call to the LLM (great for advanced debugging sessions) |
| ClientFactory | Interact with the underlying client factory (OpenAI-based Agents) |
| AIContextProviderFactory | Set a ContextProviderFactory to intercept LLM calls pre and post (see https://youtu.be/AndCk0HeddQ for more details) |
| ChatHistoryProviderFactory | Set a custom Chat History Provider (see https://youtu.be/qbDIyFphZg4 for details) |
| AdditionalChatClientAgentOptions | Set even more options if not covered by AgentFrameworkToolkit |
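As an illustrative sketch combining a few of these less common options (the delegate shape of RawHttpCallDetails is assumed to mirror the RawToolCallDetails example above; the Id and Description values are made up):

```csharp
//Setting a fixed Id, an informational Description, and an HTTP-call hook
OpenAIAgent agent = agentFactory.CreateAgent(new AgentOptions
{
    Model = "gpt-5",
    Id = "support-agent-01",               //fixed ID instead of a random GUID
    Description = "Handles support chats", //informational only; not sent to the LLM
    RawHttpCallDetails = details => Console.WriteLine(details.ToString())
});
```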
