Description
Confirm this is a feature request for the Python library and not the underlying OpenAI API.
- This is a feature request for the Python library
Describe the feature or improvement you're requesting
The updated code is available at https://github.com/prane-eth/openai-python-action-guard/, and is ready for a pull request.
Background
AI agents are seeing growing adoption across the industry, including in critical applications. Agents with access to tools (including MCP servers) can currently call those tools directly, with no centralized validation layer that inspects the calls before execution, so harmful or disallowed tool calls can run without oversight. The proposed Action Guard feature in the openai-python package would automate this validation and make the workflow more secure.
Proposed Change
Introduce an action_guard parameter in the OpenAI Python client that allows developers to define a centralized validation function for agent actions.
This guard would be invoked whenever the agent attempts a tool call (including MCP actions). The guard function can decide whether to allow or block the action.
Example:
def my_guard_function(action: ToolCall) -> GuardDecision:
    # This can use code-based validation or a classifier model
    # GuardDecision options:
    # - ALLOW
    # - BLOCK
    ...

response = openai.chat.completions.create(
    model="gpt-...",
    messages=[...],
    tools=[...],
    action_guard=my_guard_function,  # The new argument
)
Behavior
The guard function receives a ToolCall object representing the pending action.
Possible outcomes:
- ALLOW – Execute the action normally.
- BLOCK – Prevent execution and return an error to the agent.
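To make the behavior concrete, here is a minimal sketch of what a guard function could look like. The `ToolCall` and `GuardDecision` types are part of this proposal, not the current openai-python API, so stand-in definitions are included; the denylist and argument checks are illustrative assumptions.

```python
import json
from dataclasses import dataclass
from enum import Enum, auto


class GuardDecision(Enum):
    # Stand-in for the proposed enum; not part of openai-python today.
    ALLOW = auto()
    BLOCK = auto()


@dataclass
class ToolCall:
    # Stand-in for the proposed pending-action object.
    name: str
    arguments: str  # JSON-encoded arguments, as in chat completion tool calls


# Hypothetical policy: tools that should never run without review.
BLOCKED_TOOLS = {"delete_file", "send_payment"}


def my_guard_function(action: ToolCall) -> GuardDecision:
    # Block denylisted tools outright.
    if action.name in BLOCKED_TOOLS:
        return GuardDecision.BLOCK
    # Inspect arguments for an obviously dangerous pattern.
    args = json.loads(action.arguments or "{}")
    if any("rm -rf" in str(value) for value in args.values()):
        return GuardDecision.BLOCK
    return GuardDecision.ALLOW
```

A code-based policy like this could also be replaced by a call to a classifier model, as noted in the example above; the client only needs the returned `GuardDecision`.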
Additional context
Benefits
- Centralized enforcement of action policies
- Reduced boilerplate in tool implementations
- Improved safety for agentic systems
- Seamless integration with existing tool and MCP ecosystems
Related Work
If user approval were made mandatory for each action, the workflow would become slow and inefficient; a programmable guard provides oversight without that overhead.