
Python Agent Framework Demos

Open in GitHub Codespaces Open in Dev Containers

This repository provides examples of Microsoft Agent Framework using LLMs from GitHub Models, Azure AI Foundry, or other model providers. GitHub Models are free to use for anyone with a GitHub account, up to a daily rate limit.

Getting started

You have a few options for getting started with this repository. The quickest is GitHub Codespaces, since it will set up everything for you, but you can also set the project up locally.

GitHub Codespaces

You can run this repository virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

  1. Open the repository (this may take several minutes):

    Open in GitHub Codespaces

  2. Open a terminal window

  3. Continue with the steps to run the examples

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

  4. Continue with the steps to run the examples

The dev container includes a Redis server, which is used by the agent_history_redis.py example.
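Conceptually, that example persists each conversation turn to Redis so the history survives process restarts. A minimal sketch of the idea using redis-py (the key name and message shape are illustrative, not the example's actual schema):

    import json

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    session_key = "chat:session:demo"  # illustrative key, not the example's schema

    # Append a turn, then reload the full history on the next run.
    r.rpush(session_key, json.dumps({"role": "user", "content": "Hi!"}))
    history = [json.loads(m) for m in r.lrange(session_key, 0, -1)]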

Local environment

  1. Make sure the following tools are installed:

     • Git (to clone the repository)
     • uv (to install dependencies and run the examples)
     • Docker (optional, needed only for the Redis and PostgreSQL examples)

  2. Clone the repository:

    git clone https://github.com/Azure-Samples/python-agentframework-demos
    cd python-agentframework-demos
  3. Install the dependencies:

    uv sync
  4. Optional: To run the agent_history_redis.py example, you need a Redis server running locally:

    docker run -d -p 6379:6379 redis:7-alpine
  5. Optional: To run the PostgreSQL examples (agent_knowledge_postgres.py, agent_knowledge_pg.py, agent_knowledge_pg_rewrite.py), you need PostgreSQL with pgvector running locally:

    docker run -d -p 5432:5432 -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=LocalPasswordOnly pgvector/pgvector:pg17
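The PostgreSQL knowledge examples combine a pgvector similarity ranking with a full-text ranking; agent_knowledge_postgres.py (described below) merges them with Reciprocal Rank Fusion. A minimal sketch of RRF itself, independent of the database (function and constant names are illustrative):

    def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
        """Merge several ranked lists of document ids into one ranking.

        Each document scores sum(1 / (k + rank)) across the lists it appears
        in; k=60 is the constant commonly used in the RRF literature.
        """
        scores: dict[str, float] = {}
        for ranking in rankings:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # e.g. fuse a vector-similarity ranking with a full-text ranking:
    fused = reciprocal_rank_fusion([["doc2", "doc1"], ["doc1", "doc3"]])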

Configuring model providers

These examples can be run with Azure AI Foundry, OpenAI.com, or GitHub Models, depending on the environment variables you set. All the scripts reference the environment variables from a .env file, and an example .env.sample file is provided. Host-specific instructions are below.
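Each script presumably loads that file before reading the variables; with python-dotenv, the pattern looks like this (a sketch, not the scripts' exact code):

    import os

    from dotenv import load_dotenv

    load_dotenv()  # reads .env from the current directory into os.environ
    api_host = os.environ.get("API_HOST")  # e.g. "openai", per the sample below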

Using GitHub Models

If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your GITHUB_TOKEN is already configured in the Codespaces environment.

If you want to run the scripts locally, you need to set up the GITHUB_TOKEN environment variable with a GitHub personal access token (PAT). You can create a PAT by following these steps:

  1. Go to your GitHub account settings.

  2. Click on "Developer settings" in the left sidebar.

  3. Click on "Personal access tokens" in the left sidebar.

  4. Click on "Tokens (classic)" or "Fine-grained tokens" depending on your preference.

  5. Click on "Generate new token".

  6. Give your token a name and select the scopes you want to grant. For this project, you don't need any specific scopes.

  7. Click on "Generate token".

  8. Copy the generated token.

  9. Set the GITHUB_TOKEN environment variable in your terminal or IDE:

    export GITHUB_TOKEN=your_personal_access_token
  10. Optionally, you can use a model other than "gpt-5-mini" by setting the GITHUB_MODEL environment variable. Use a model that supports function calling, such as: gpt-5, gpt-5-mini, gpt-4o, gpt-4o-mini, o3-mini, AI21-Jamba-1.5-Large, AI21-Jamba-1.5-Mini, Codestral-2501, Cohere-command-r, Ministral-3B, Mistral-Large-2411, Mistral-Nemo, Mistral-small
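Put together, a .env for GitHub Models might look like the following. API_HOST=github is an assumption, mirroring the API_HOST=openai convention shown in the OpenAI section below; check .env.sample for the exact variable names:

    API_HOST=github
    GITHUB_TOKEN=your_personal_access_token
    GITHUB_MODEL=gpt-4o-mini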

Using Azure AI Foundry models

You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure AI Foundry instead, you need to provision the Azure AI resources, which will incur costs.

This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-5-mini" and "text-embedding-3-large" via Azure AI Foundry. The IaC is defined in the infra directory and uses the Azure Developer CLI to provision the resources.

  1. Make sure the Azure Developer CLI (azd) is installed.

  2. Login to Azure:

    azd auth login

    For GitHub Codespaces users, if the previous command fails, try:

     azd auth login --use-device-code

    If you are using a tenant besides the default tenant, you may need to also login with Azure CLI to that tenant:

    az login --tenant your-tenant-id
  3. Provision the OpenAI account:

    azd provision

    It will prompt you to provide an azd environment name (like "agents-demos"), select a subscription from your Azure account, and select a location. Then it will provision the resources in your account.

  4. Once the resources are provisioned, you should now see a local .env file with all the environment variables needed to run the scripts.

  5. To delete the resources, run:

    azd down

Using OpenAI.com models

  1. Create a .env file by copying the .env.sample file:

    cp .env.sample .env
  2. Update the .env file with your OpenAI API key and desired model name:

    API_HOST=openai
    OPENAI_API_KEY=your_openai_api_key
    OPENAI_MODEL=gpt-4o-mini

Running the Python examples

You can run the examples in this repository by executing the scripts in the examples directory. Each script demonstrates a different Agent Framework pattern.

  • agent_basic.py: A basic informational agent.
  • agent_tool.py: An agent with a single weather tool.
  • agent_tools.py: A weekend planning agent with multiple tools.
  • agent_session.py: In-memory sessions for multi-turn conversations that retain context across messages.
  • agent_history_redis.py: Persistent chat history with Redis, so conversation history survives restarts.
  • agent_supervisor.py: A supervisor orchestrating activity and recipe sub-agents.
  • workflow_magenticone.py: A MagenticOne multi-agent workflow.
  • workflow_hitl.py: Human-in-the-loop (HITL) for tool-enabled agents with human feedback.
  • agent_middleware.py: Agent, chat, and function middleware for logging, timing, and blocking.
  • agent_knowledge_sqlite.py: Knowledge retrieval (RAG) using a custom context provider with SQLite FTS5.
  • agent_knowledge_postgres.py: Knowledge retrieval (RAG) with PostgreSQL hybrid search (pgvector + full-text) using Reciprocal Rank Fusion.
  • agent_mcp_remote.py: An agent using a remote MCP server (Microsoft Learn) for documentation search.
  • agent_mcp_local.py: An agent connected to a local MCP server (e.g. for expense logging).
  • openai_tool_calling.py: Tool calling with the low-level OpenAI SDK, showing manual tool dispatch (sketched after this list).
  • workflow_basic.py: A workflow-based agent.
  • agent_otel_aspire.py: An agent with OpenTelemetry tracing, metrics, and structured logs exported to the Aspire Dashboard.
  • agent_otel_appinsights.py: An agent with OpenTelemetry tracing, metrics, and structured logs exported to Azure Application Insights. Requires Azure provisioning via azd provision.
  • agent_evaluation.py: Evaluates a travel planner agent using Azure AI Evaluation agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set AZURE_AI_PROJECT in .env to log results to Azure AI Foundry.
  • agent_redteam.py: Red-teams a financial advisor agent using Azure AI Evaluation to test resilience against adversarial attacks across risk categories (Violence, HateUnfairness, Sexual, SelfHarm). Requires AZURE_AI_PROJECT in .env.
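For a feel of what openai_tool_calling.py demonstrates, here is a minimal sketch of manual tool dispatch with the OpenAI Python SDK. The tool, model, and prompt are illustrative, not the example's actual code:

    import json

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def get_weather(city: str) -> str:
        return f"Sunny in {city}"  # stub tool for illustration

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What's the weather in Paris?"}]
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    call = response.choices[0].message.tool_calls[0]

    # Manual dispatch: run the named tool ourselves and send the result back.
    result = get_weather(**json.loads(call.function.arguments))
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)

The Agent Framework examples automate exactly this loop: detecting tool calls, invoking the functions, and feeding results back to the model.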

Using the Aspire Dashboard for telemetry

The agent_otel_aspire.py example can export OpenTelemetry traces, metrics, and structured logs to the Aspire Dashboard.

In GitHub Codespaces / Dev Containers

The Aspire Dashboard runs automatically as a service alongside the dev container. No extra setup is needed.

  1. The OTEL_EXPORTER_OTLP_ENDPOINT environment variable is already set by the dev container.

  2. Run the example:

    uv run examples/agent_otel_aspire.py
  3. Open the dashboard at http://localhost:18888 and explore:

    • Traces: See the full span tree — agent invocation → chat completion → tool execution
    • Metrics: View token usage and operation duration histograms
    • Structured Logs: Browse conversation messages (system, user, assistant, tool)
    • GenAI visualizer: Select a chat completion span to see the rendered conversation
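Under the hood, exporting to the dashboard is standard OpenTelemetry: point an OTLP exporter at OTEL_EXPORTER_OTLP_ENDPOINT. A minimal tracing-only sketch using the opentelemetry-sdk packages (the example itself also wires up metrics and logs, and the service name here is illustrative):

    import os

    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    provider = TracerProvider(resource=Resource.create({"service.name": "agent-demo"}))
    exporter = OTLPSpanExporter(endpoint=os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"])
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    with trace.get_tracer(__name__).start_as_current_span("agent-invocation"):
        ...  # run the agent here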

Local environment (without Dev Containers)

If you're running locally without Dev Containers, you need to start the Aspire Dashboard manually:

  1. Start the Aspire Dashboard:

    docker run --rm -it -d -p 18888:18888 -p 4317:18889 --name aspire-dashboard \
        -e DASHBOARD__FRONTEND__AUTHMODE=Unsecured \
        mcr.microsoft.com/dotnet/aspire-dashboard:latest
  2. Add the OTLP endpoint to your .env file:

    OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
  3. Run the example:

    uv run examples/agent_otel_aspire.py
  4. Open the dashboard at http://localhost:18888 and explore.

  5. When done, stop the dashboard:

    docker stop aspire-dashboard

For the full Python + Aspire guide, see Use the Aspire dashboard with Python apps.

Exporting telemetry to Azure Application Insights

The agent_otel_appinsights.py example exports OpenTelemetry traces, metrics, and structured logs to Azure Application Insights.

Setup

This example requires an APPLICATIONINSIGHTS_CONNECTION_STRING environment variable. You can get this automatically or manually:

Option A: Automatic via azd provision

If you run azd provision (see Using Azure AI Foundry models), the Application Insights resource is provisioned automatically and the connection string is written to your .env file.

Option B: Manual from the Azure Portal

  1. Create an Application Insights resource in the Azure Portal.

  2. Copy the connection string from the resource's Overview page.

  3. Add it to your .env file:

    APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...;IngestionEndpoint=...

Running the example

uv run examples/agent_otel_appinsights.py
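One common way to wire this up in Python is the azure-monitor-opentelemetry distro, which configures trace, metric, and log exporters from the connection string in a single call. A minimal sketch (the example's actual setup may differ):

    import os

    from azure.monitor.opentelemetry import configure_azure_monitor
    from opentelemetry import trace

    # Sends traces, metrics, and logs to Application Insights.
    configure_azure_monitor(
        connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
    )

    with trace.get_tracer(__name__).start_as_current_span("agent-invocation"):
        ...  # run the agent here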

Viewing telemetry

After running the example, navigate to your Application Insights resource in the Azure Portal:

  • Transaction search: See end-to-end traces for agent invocations, chat completions, and tool executions.
  • Live Metrics: Monitor real-time request rates and performance.
  • Performance: Analyze operation durations and identify bottlenecks.

Telemetry data may take 2–5 minutes to appear in the portal.
