Here's what nobody tells you about multi-agent systems: the hard part isn't building them, it's making them profitable. One misconfigured model serving enterprise features to free users can burn $20K in a weekend. Meanwhile, you're manually juggling dozens of requirements across user tiers, regions, and privacy regulations, and each one is a potential failure point.
Part 2 of 3 of the series: Chaos to Clarity: Defensible AI Systems That Deliver on Your Goals
The solution? LangGraph multi-agent workflows controlled by LaunchDarkly AI Config targeting rules that intelligently route users: paid customers get premium tools and models, free users get cost-efficient alternatives, and EU users get Mistral for enhanced privacy. With the LaunchDarkly REST API, you can set up a custom variant-targeting matrix in 2 minutes instead of spending hours building it manually.
In the next 18 minutes, you'll transform your basic multi-agent system with:
- Business Tiers & MCP Integration: Free users get internal RAG search, Paid users get premium models with external research tools and expanded tool call limits, all controlled by LaunchDarkly AI Configs
- Geographic Targeting: EU users automatically get Mistral and Claude models (enhanced privacy), other users get cost-optimized alternatives
- Smart Configuration: Set up complex targeting matrices with LaunchDarkly segments and targeting rules
⚠️ CRITICAL: Naming Requirements

The bootstrap script depends on exact naming from Part 1. You MUST have used these names:
- Project: `multi-agent-chatbot`
- AI Configs: `supervisor-agent`, `security-agent`, `support-agent`
- Tools: `search_v2`, `reranking`
- Variations: `supervisor-basic`, `pii-detector`, `rag-search-enhanced`

If you used different names in Part 1, you'll need to either rename your resources or create new ones with the correct names before proceeding.
You'll need:
- **Completed Part 1**: Working multi-agent system with basic AI Configs
- **Same environment**: Python 3.9+, uv, API keys from Part 1
- **LaunchDarkly API key**: Add `LD_API_KEY=your-api-key` to your `.env` file (get API key)
- **Mistral API key**: Add `MISTRAL_API_KEY=your-mistral-key` to your `.env` file (get API key) - required for EU privacy features
The automation scripts in this tutorial use the LaunchDarkly REST API to programmatically create configurations. Here's how to get your API key:
To get your LaunchDarkly API key, start by navigating to Organization Settings by clicking the gear icon (⚙️) in the left sidebar of your LaunchDarkly dashboard. Once there, access Authorization Settings by clicking "Authorization" in the settings menu. Next, create a new access token by clicking "Create token" in the "Access tokens" section.
Click "Create token" in the Access tokens section
When configuring your token, give it a descriptive name like "multi-agent-chatbot", select "Writer" as the role (required for creating configurations), use the default API version (latest), and leave "This is a service token" unchecked for now.
Configure your token with a descriptive name and Writer role
After configuring the settings, click "Save token" and immediately copy the token value. This is IMPORTANT because it's only shown once!
Copy the token value immediately - it's only shown once
Finally, add the token to your environment:
```bash
# Add this line to your .env file
LD_API_KEY=your-copied-api-key-here
```

**Security Note**: Keep your API key private and never commit it to version control. The token allows full access to your LaunchDarkly account.
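The automation scripts later in this tutorial read this key from the environment and attach it to every REST request. As a minimal sketch of how that looks (the `load_api_key` and `ld_headers` helper names are illustrative, not taken from the tutorial's scripts; LaunchDarkly's REST API takes the raw token in the `Authorization` header, without a "Bearer" prefix):

```python
import os

def load_api_key() -> str:
    """Read the LaunchDarkly access token from the environment."""
    key = os.environ.get("LD_API_KEY")
    if not key:
        raise RuntimeError("LD_API_KEY not set - add it to your .env file")
    return key

def ld_headers(api_key: str) -> dict:
    """Headers for LaunchDarkly REST API calls: the token goes
    directly in the Authorization header."""
    return {
        "Authorization": api_key,
        "Content-Type": "application/json",
    }
```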
Your agents need more than just your internal documents. Model Context Protocol (MCP) connects AI assistants to live external data, and your agents become orchestrators of your digital infrastructure, tapping into databases, communication tools, development platforms, and any system that matters to your business. MCP tools run as separate servers that your agents call when needed.
The MCP Registry serves as a community-driven directory for discovering available MCP servers - like an app store for MCP tools. For this tutorial, we'll use manual installation since our specific academic research servers (ArXiv and Semantic Scholar) aren't yet available in the registry.
Install external research capabilities:
```bash
# Install ArXiv MCP server for academic paper search
uv tool install arxiv-mcp-server

# Install Semantic Scholar MCP server for citation data
git clone https://github.com/JackKuo666/semanticscholar-MCP-Server.git
```

**MCP Tools Added:**
- `arxiv_search`: Live academic paper search (Paid users)
- `semantic_scholar`: Citation and research database (Paid users)
These tools integrate with your agents via LangGraph - LaunchDarkly controls which users get access to which tools.
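Conceptually, the gating works like this on the application side: the AI Config variation LaunchDarkly serves for a user names the tools the agent is allowed to bind, so no tier logic is hard-coded in the app. A minimal sketch, assuming an illustrative variation shape (the `tools` key and the example variations below are hypothetical, though the tool names are this tutorial's):

```python
# All tools known to the process; which ones an agent may actually
# call is decided by the served LaunchDarkly variation.
ALL_TOOLS = {
    "search_v1": "internal RAG search (basic)",
    "search_v2": "internal RAG search (enhanced)",
    "arxiv_search": "live academic paper search (MCP)",
    "semantic_scholar": "citation and research database (MCP)",
}

def tools_for_variation(variation: dict) -> dict:
    """Bind only the tools the served AI Config variation names."""
    enabled = variation.get("tools", [])
    return {name: ALL_TOOLS[name] for name in enabled if name in ALL_TOOLS}

# Example variations, shaped like what LaunchDarkly might serve:
free_variation = {"model": "gpt-4o-mini", "tools": ["search_v1"]}
paid_variation = {"model": "gpt-4",
                  "tools": ["search_v2", "arxiv_search", "semantic_scholar"]}
```

The key point: swapping a user's tier in LaunchDarkly changes the variation served, and the agent's toolset follows without a code deploy.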
Now we'll use programmatic API automation to configure the complete setup including tools and targeting matrix. The LaunchDarkly REST API lets you manage tools, segments, and AI Configs programmatically. Instead of manually creating dozens of variations in the UI, you'll set up complex targeting matrices with a single Python script. This approach is essential when you need to handle multiple geographic regions × business tiers with conditional tool assignments.
What This Script Does: This is configuration automation, not application deployment. The script makes REST API calls to LaunchDarkly to provision user segments, AI config variations, targeting rules, and tools - the same resources you could create manually through the LaunchDarkly dashboard. Your actual chat application continues running unchanged.
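To make that concrete, here's a sketch of the kind of REST call such a script makes to provision one user segment. The endpoint path follows LaunchDarkly's public segments API; the helper names, tag value, and minimal payload are illustrative, and targeting rules are typically added with a follow-up PATCH rather than at creation time:

```python
import json
import urllib.request

LD_BASE = "https://app.launchdarkly.com/api/v2"

def build_segment_payload(key: str, name: str) -> dict:
    """Minimal body for creating a segment; targeting rules are
    added afterwards with a separate PATCH request."""
    return {"key": key, "name": name, "tags": ["multi-agent-chatbot"]}

def create_segment(api_key: str, project: str, env: str, payload: dict):
    """POST /api/v2/segments/{projectKey}/{environmentKey}."""
    req = urllib.request.Request(
        f"{LD_BASE}/segments/{project}/{env}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": api_key,
                 "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # network call - not executed here
```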
Configure your complete targeting matrix with one command:
```bash
cd bootstrap
uv run python create_configs.py
```

The configuration script intelligently handles existing resources from Part 1:
- **Reuses**: `supervisor-agent` (identical), existing `search_v2` and `reranking` tools
- **Updates**: `security-agent` with additional geographic compliance config variations
- **Creates New**: `support-agent` config variations for business tier targeting, plus new tools (`search_v1`, `arxiv_search`, `semantic_scholar`)
LaunchDarkly Resources Added
- **3 new tools**: `search_v1` (basic search), `arxiv_search` and `semantic_scholar` (MCP research tools)
- **4 combined user segments** with geographic and tier targeting rules
- **3 AI Config variations** with intelligent handling:
  - `security-agent`: updated with 2 new geographic variations (basic vs strict GDPR)
  - `support-agent-business-tiers`: new config with 5 variations (geographic × tier matrix)
- **Complete targeting rules** that route users to appropriate variations
Here's how it works: EU free users get Claude Haiku with basic search (privacy + cost efficiency). EU paid users get Claude Sonnet with full research tools (privacy + premium features). Non-EU free users get GPT-4o Mini with basic search (maximum cost efficiency). Non-EU paid users get GPT-4 with complete research tools (maximum capability).
This segmentation strategy optimizes costs through efficient models for free users while providing premium capabilities to paid users. It also enhances privacy by giving EU users Mistral models with a privacy-by-design approach.
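The matrix described above can be written down as a plain lookup table, which is effectively what the LaunchDarkly targeting rules encode. A sketch, using descriptive model labels from this tutorial rather than exact provider model strings:

```python
# (region, plan) -> (model, tools): mirrors the targeting rules.
RESEARCH_TOOLS = ["search_v2", "arxiv_search", "semantic_scholar"]

ROUTING_MATRIX = {
    ("eu", "free"):    ("claude-haiku",  ["search_v1"]),   # privacy + cost
    ("eu", "paid"):    ("claude-sonnet", RESEARCH_TOOLS),  # privacy + premium
    ("other", "free"): ("gpt-4o-mini",   ["search_v1"]),   # max cost efficiency
    ("other", "paid"): ("gpt-4",         RESEARCH_TOOLS),  # max capability
}

def route(region: str, plan: str):
    """Return the (model, tools) a user in this segment should receive."""
    return ROUTING_MATRIX[(region.lower(), plan.lower())]
```

The advantage of keeping this table in LaunchDarkly rather than in code is that any cell can be changed - say, upgrading EU free users to a different model - without a redeploy.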
The included test script simulates real user scenarios across all segments, verifying that your targeting rules work correctly. It sends actual API requests to your system and confirms each user type gets the right model, tools, and behavior - giving you confidence before real users arrive.
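A stripped-down version of what such a test does, assuming a hypothetical `resolve_config` function standing in for the real API request (the expected models mirror the segmentation matrix above):

```python
# Expected outcomes per user segment.
EXPECTED = [
    ({"region": "eu", "plan": "free"},    {"model": "claude-haiku"}),
    ({"region": "eu", "plan": "paid"},    {"model": "claude-sonnet"}),
    ({"region": "other", "plan": "free"}, {"model": "gpt-4o-mini"}),
    ({"region": "other", "plan": "paid"}, {"model": "gpt-4"}),
]

def check_segmentation(resolve_config) -> list:
    """Send every user context through the system and collect mismatches.
    `resolve_config(context)` stands in for the real API call."""
    failures = []
    for context, expected in EXPECTED:
        served = resolve_config(context)
        if served.get("model") != expected["model"]:
            failures.append((context, expected["model"], served.get("model")))
    return failures
```

An empty failure list means every segment is being routed to the model it should get.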
Validate your segmentation with the test script:
```bash
uv run python api/segmentation_test.py
```

This confirms your targeting matrix is working correctly across all user segments!
Now let's see your segmentation in action through the actual user interface that your customers will experience.
```bash
# Start your system (2 terminals)
uv run uvicorn api.main:app --reload --port 8001
API_PORT=8001 uv run streamlit run ui/chat_interface.py --server.port 8501
```

Open http://localhost:8501 and test different user types:
- User Dropdown: Select different regions (eu, other) and plans (Free, Paid)
- Ask Questions: Try "Search for machine learning papers"
- Watch Workflow: See which model and tools get used for each user type
- Verify Routing: EU users get Mistral, Other users get GPT, Paid users get MCP tools
Select different user types to test segmentation in the chat interface
You've built a sophisticated multi-agent system that demonstrates how modern AI applications can handle complex user segmentation and feature differentiation. Automated configuration setup shows a practical approach to managing multi-dimensional targeting and provides a clear framework for expanding into additional geographic regions or business tiers as needed.
Your multi-agent system now has:
- Smart Geographic Routing: Enhanced privacy protection for EU users
- Business Tier Management: Feature scaling that grows with customer value
- API Automation: Complex configurations created programmatically via LaunchDarkly REST API
- External Tool Integration: Research capabilities for premium users
In Part 3, we'll prove what actually works using controlled A/B experiments:
- Tool Implementation Test: Compare search_v1 vs search_v2 on identical models to measure search quality impact
- Model Efficiency Analysis: Test Claude vs GPT-4 with the same full tool stack to measure tool-calling precision and cost
- Security Configuration Study: Compare basic vs strict security settings to quantify enhanced privacy costs
You'll evaluate these experiments with four key metrics:
- **User satisfaction**: thumbs up/down feedback
- **Tool call efficiency**: average number of tools used per successful query
- **Token cost analysis**: cost per query across different model configurations
- **Response latency**: performance impact of security and tool variations
Instead of guessing which configurations work better, you'll have data proving which tool implementations provide value, which models use tools more efficiently, and what security enhancements actually cost in performance.
Explore the LaunchDarkly MCP Server - enable AI agents to access feature flag configurations, user segments, and experimentation data directly through the Model Context Protocol.
Ready for data-driven optimization? Part 3 will show you how to run experiments that prove ROI and guide product decisions with real user behavior data.