
Future AGI SDK

The open-source SDK for AI evaluation, observability, and optimization


📖 Docs • 🌐 Website • 💬 Community • 🎯 Dashboard



🚀 What is Future AGI?

Your agent passed every eval. Then it hallucinated a refund policy that doesn't exist. Future AGI gives you the tools to catch that: datasets, prompt versioning, knowledge bases, evaluations, and guardrails. One SDK, one feedback loop.

# Get started in 30 seconds
pip install futureagi
export FI_API_KEY="your_key"
export FI_SECRET_KEY="your_secret"

👉 Get Free API Keys • View Live Demo • Read Quick Start Guide

✨ Key Features

  • 🎯 Evaluations – 50+ metrics, LLM-as-judge, and custom rubrics powered by the Critique AI agent
  • ⚡ Guardrails – Real-time safety checks with sub-100ms latency
  • 📊 Datasets – Programmatically create, version, and manage training and test datasets
  • 🎨 Prompt Workbench – Version control, A/B testing, and deployment labels for prompts
  • 📚 Knowledge Base – Document management and retrieval for RAG applications
  • 📈 Analytics – Model performance, token costs, and behavior insights
  • 🤖 Simulate – Test your AI system against realistic scenarios before users hit it
  • 🔍 Observability – OpenTelemetry-native tracing across 50+ frameworks

📦 Installation

Python

pip install futureagi

TypeScript/JavaScript

npm install @future-agi/sdk
# or
pnpm add @future-agi/sdk

Requirements: Python >= 3.10 | Node.js >= 14


🔑 Authentication

Get your API credentials from the Future AGI Dashboard:

export FI_API_KEY="your_api_key"
export FI_SECRET_KEY="your_secret_key"

Or set them programmatically:

import os
os.environ["FI_API_KEY"] = "your_api_key"
os.environ["FI_SECRET_KEY"] = "your_secret_key"
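Since a missing credential otherwise only surfaces as an authentication error at the first API call, a small pre-flight check can fail fast. This helper is plain Python, not an SDK call:

```python
import os

def check_fi_credentials() -> list[str]:
    """Return the names of any Future AGI credential variables that are unset."""
    required = ("FI_API_KEY", "FI_SECRET_KEY")
    return [name for name in required if not os.environ.get(name)]

missing = check_fi_credentials()
if missing:
    print(f"Missing credentials: {', '.join(missing)}")
```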

🎯 Quick Start

📊 Dataset Management

Create and manage datasets with built-in evaluations:

from fi.datasets import Dataset
from fi.datasets.types import (
    Cell, Column, DatasetConfig, DataTypeChoices,
    ModelTypes, Row, SourceChoices
)

# Create a new dataset
config = DatasetConfig(name="qa_dataset", model_type=ModelTypes.GENERATIVE_LLM)
dataset = Dataset(dataset_config=config)
dataset = dataset.create()

# Define columns
columns = [
    Column(name="user_query", data_type=DataTypeChoices.TEXT, source=SourceChoices.OTHERS),
    Column(name="ai_response", data_type=DataTypeChoices.TEXT, source=SourceChoices.OTHERS),
    Column(name="quality_score", data_type=DataTypeChoices.INTEGER, source=SourceChoices.OTHERS),
]

# Add data
rows = [
    Row(order=1, cells=[
        Cell(column_name="user_query", value="What is machine learning?"),
        Cell(column_name="ai_response", value="Machine learning is a subset of AI..."),
        Cell(column_name="quality_score", value=9),
    ]),
    Row(order=2, cells=[
        Cell(column_name="user_query", value="Explain quantum computing"),
        Cell(column_name="ai_response", value="Quantum computing uses quantum bits..."),
        Cell(column_name="quality_score", value=8),
    ]),
]

# Push columns and rows to the dataset
dataset = dataset.add_columns(columns=columns)
dataset = dataset.add_rows(rows=rows)

# Add automated evaluation
dataset.add_evaluation(
    name="factual_accuracy",
    eval_template="is_factually_consistent",
    model="gpt-4o-mini",
    required_keys_to_column_names={
        "input": "user_query",
        "output": "ai_response",
        "context": "user_query",
    },
    run=True
)

print("✓ Dataset created with automated evaluations")

🎨 Prompt Workbench

Version control and A/B test your prompts:

from fi.prompt import Prompt, PromptTemplate, ModelConfig

# Create a versioned prompt template
template = PromptTemplate(
    name="customer_support",
    messages=[
        {"role": "system", "content": "You are a helpful customer support agent."},
        {"role": "user", "content": "Help {{customer_name}} with {{issue_type}}."},
    ],
    variable_names={"customer_name": ["Alice"], "issue_type": ["billing"]},
    model_configuration=ModelConfig(model_name="gpt-4o-mini", temperature=0.7)
)

# Create and version the template
client = Prompt(template)
client.create()  # Create v1
client.commit_current_version("Initial version", set_default=True)

# Assign deployment labels
client.assign_label("Production", version="v1")

# Compile with variables
compiled = client.compile(customer_name="Bob", issue_type="refund")
print(compiled)
# Output: [
#   {"role": "system", "content": "You are a helpful customer support agent."},
#   {"role": "user", "content": "Help Bob with refund."}
# ]
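To make the compile step concrete, here is a minimal re-implementation of `{{variable}}` substitution in plain Python. This is an illustrative sketch of the behavior shown above, not the SDK's actual implementation:

```python
import re

def render(content: str, **variables: str) -> str:
    """Substitute {{name}} placeholders; raise if a variable is missing."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return variables[name]
    return re.sub(r"\{\{(\w+)\}\}", repl, content)

messages = [
    {"role": "system", "content": "You are a helpful customer support agent."},
    {"role": "user", "content": "Help {{customer_name}} with {{issue_type}}."},
]
compiled = [
    {**m, "content": render(m["content"], customer_name="Bob", issue_type="refund")}
    for m in messages
]
print(compiled[1]["content"])  # Help Bob with refund.
```

Raising on a missing variable (rather than leaving the placeholder in place) catches typos before the message ever reaches an LLM.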

A/B Testing Example:

import random
from openai import OpenAI
from fi.prompt import Prompt

# Fetch different variants (returns Prompt instances)
variant_a = Prompt.get_template_by_name("customer_support", label="variant-a")
variant_b = Prompt.get_template_by_name("customer_support", label="variant-b")

# Randomly select and use
selected = random.choice([variant_a, variant_b])
compiled = selected.compile(customer_name="Alice", issue_type="refund")

# Send to your LLM provider
openai = OpenAI(api_key="your_openai_key")
response = openai.chat.completions.create(model="gpt-4o", messages=compiled)
print(f"Using variant: {selected.template.name}")
print(f"Response: {response.choices[0].message.content}")
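Note that `random.choice` re-buckets a user on every call. If you want a user to see the same variant across sessions, hash a stable user id instead. This helper is illustrative plain Python, independent of the SDK:

```python
import hashlib

def assign_variant(user_id: str, variants: list[str]) -> str:
    """Deterministically bucket a user so they always get the same variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["variant-a", "variant-b"]
# The same user id is always routed to the same variant.
print(assign_variant("user-42", variants))
```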

📚 Knowledge Base (RAG)

Manage documents for retrieval-augmented generation:

from fi.kb import KnowledgeBase

# Initialize client
kb_client = KnowledgeBase(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key"
)

# Create a knowledge base with documents
kb = kb_client.create_kb(
    name="product_docs",
    file_paths=["manual.pdf", "faq.txt", "guide.docx"]
)

print(f"✓ Knowledge base created: {kb.kb.name}")
print(f"  Files uploaded: {len(kb.kb.files)}")

# Update with more files
updated_kb = kb_client.update_kb(
    kb_name=kb.kb.name,
    file_paths=["updates.pdf"]
)

# Delete specific files
kb_client.delete_files_from_kb(file_names=["updates.pdf"])

# Clean up
kb_client.delete_kb(kb_ids=[kb.kb.id])

🎯 Core Use Cases

| Feature | Use Case | Benefit |
|---|---|---|
| Datasets | Store and version training/test data | Reproducible experiments, automated evaluations |
| Prompt Workbench | Version control for prompts | A/B testing, deployment management, rollback |
| Knowledge Base | Document management for RAG | Intelligent retrieval, document versioning |
| Evaluations | Automated quality checks | No human-in-the-loop, 100% configurable |
| Protect | Real-time safety filters | Sub-100ms latency, production-ready |
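To illustrate the Protect pattern (this is not the SDK's API), a cheap local pre-filter can run in well under a millisecond before any remote policy check; the patterns below are hypothetical examples:

```python
import re
import time

# Illustrative blocklist; a real deployment would use managed policies.
BLOCKED_PATTERNS = [
    re.compile(r"\b(ssn|social security number)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
]

def local_guardrail(text: str) -> tuple[bool, float]:
    """Return (allowed, elapsed_ms) for a cheap local pre-filter pass."""
    start = time.perf_counter()
    allowed = not any(p.search(text) for p in BLOCKED_PATTERNS)
    return allowed, (time.perf_counter() - start) * 1000

allowed, ms = local_guardrail("My SSN is 123-45-6789")
print(allowed)  # False
```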

🔥 Why Choose Future AGI?

| Feature | Future AGI | Traditional Tools | Other Platforms |
|---|---|---|---|
| Evaluation Speed | ⚡ Sub-100ms | 🐌 Seconds–Minutes | 🐢 Minutes–Hours |
| Human in Loop | ❌ Fully Automated | ✅ Required | ✅ Often Required |
| Multimodal Support | ✅ Text, Image, Audio, Video | ⚠️ Limited | ⚠️ Text Only |
| Setup Time | ⏱️ 2 minutes | ⏳ Days–Weeks | ⏳ Hours–Days |
| Configurability | 🎯 100% Customizable | 🔒 Fixed Metrics | ⚙️ Some Flexibility |
| Privacy Options | 🔐 Cloud + Self-hosted | ☁️ Cloud Only | ☁️ Cloud Only |
| A/B Testing | ✅ Built-in | ❌ Manual | ⚠️ Limited |
| Prompt Versioning | ✅ Git-like Control | ❌ Not Available | ⚠️ Basic |
| Real-time Guardrails | ✅ Production-ready | ❌ Not Available | ⚠️ Experimental |

🔌 Supported Integrations

Future AGI works seamlessly with your existing AI stack:

LLM Providers
OpenAI • Anthropic • Google Gemini • Azure OpenAI • AWS Bedrock • Cohere • Mistral • Ollama • vLLM

Frameworks
LangChain • LlamaIndex • CrewAI • AutoGen • Haystack • Semantic Kernel

Vector Databases
Pinecone • Weaviate • Qdrant • Milvus • Chroma • FAISS

Observability
OpenTelemetry β€’ Custom Logging β€’ Trace Context Propagation




🤝 Language Support

| Language | Package | Status |
|---|---|---|
| Python | `futureagi` | ✅ Full Support |
| TypeScript/JavaScript | `@future-agi/sdk` | ✅ Full Support |
| REST API | cURL/HTTP | ✅ Available |



🤝 Contributing

We welcome contributions! Here's how to get involved:

  • πŸ› Report bugs: Open an issue
  • πŸ’‘ Request features: Start a discussion
  • πŸ”§ Submit PRs: Fork, create a feature branch, and submit a pull request
  • πŸ“– Improve docs: Help us make our documentation better

See CONTRIBUTING.md for detailed guidelines.


🌟 Testimonials

"Future AGI cut our evaluation time from days to minutes. The automated critiques are spot-on!"
– AI Engineering Team, Fortune 500 Company

"The prompt versioning alone saved us countless headaches. A/B testing is now trivial."
– ML Lead, Healthcare Startup

"Sub-100ms guardrails in production. Game changer for our customer-facing AI."
– CTO, E-commerce Platform


📊 Roadmap

  • Datasets with automated evaluations
  • Prompt workbench with versioning
  • Knowledge base for RAG
  • Real-time guardrails (sub-100ms)
  • Multi-language SDK (Python + TypeScript)
  • Bulk Annotations for Human in the Loop
  • On-premise deployment toolkit

❓ Troubleshooting & FAQ

**Import Error: `ModuleNotFoundError: No module named 'fi'`**

Make sure Future AGI is installed:

pip install futureagi --upgrade

**Authentication Error: Invalid API credentials**

  1. Check your API keys at the Dashboard
  2. Ensure environment variables are set correctly:

echo $FI_API_KEY
echo $FI_SECRET_KEY

  3. Try setting them programmatically in your code
**How do I switch between environments (dev/staging/prod)?**

Use prompt labels to manage different deployment environments:

client.assign_label("Development", version="v1")
client.assign_label("Staging", version="v2")
client.assign_label("Production", version="v3")
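One way to pick the right label at runtime is to map a deployment environment variable onto the labels above. `APP_ENV` and the mapping below are illustrative choices, not SDK conventions:

```python
import os

# Hypothetical mapping from a deployment environment to a prompt label.
LABEL_BY_ENV = {
    "development": "Development",
    "staging": "Staging",
    "production": "Production",
}

def current_label(default: str = "Development") -> str:
    """Map the APP_ENV variable to a prompt deployment label."""
    return LABEL_BY_ENV.get(os.environ.get("APP_ENV", "").lower(), default)

os.environ["APP_ENV"] = "staging"
print(current_label())  # Staging
```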

**Can I use Future AGI without sending data to the cloud?**

Yes! Future AGI supports self-hosted deployments. Contact us at support@futureagi.com for enterprise on-premise options.

**What LLM providers are supported?**

All major providers: OpenAI, Anthropic, Google, Azure, AWS Bedrock, Cohere, Mistral, and open-source models via vLLM/Ollama.

Need more help? Check our complete FAQ or join our community.


📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Built with ❤️ by the Future AGI team and contributors.

If Future AGI helps you ship better AI, a ⭐ helps more teams find us.


🌐 futureagi.com · 📖 docs.futureagi.com · ☁️ app.futureagi.com
