[FEATURE] Bedrock Prompt Integration #1043

@linvald

Description

Problem Statement

AWS Bedrock offers "Prompt Management", which lets us create and version prompts that configure multiple aspects of a prompt: the prompt text itself, the model to use, temperature, caching, tools, and other metadata.
Rather than having to retrieve a prompt through the Bedrock APIs and then separately set system_prompt, model, etc., it would be very useful to simply reference the prompt by ARN, thereby obtaining most of the information currently set through individual properties for prompt text, model, temperature, and so on.

Proposed Solution

AWS Bedrock Prompt Management encapsulates much more than just prompt text:

  1. Prompt text/template - The actual prompt content
  2. Model configuration - Which model to use (model_id)
  3. Inference parameters - temperature, top_p, max_tokens, etc.
  4. Additional metadata - Versioning, tags, and other prompt-specific settings

1. Fetch Complete Prompt Configuration

Fetch the full prompt configuration including model settings:

import boto3


class BedrockPromptManager:
    def __init__(self, region_name: str | None = None):
        # Prompt Management is served by the bedrock-agent API (not bedrock-runtime)
        self.client = boto3.client("bedrock-agent", region_name=region_name)

    def get_prompt_config(self, prompt_id: str, prompt_version: str | None = None) -> dict:
        """Fetch complete prompt configuration; omit prompt_version for the working draft."""
        kwargs = {"promptIdentifier": prompt_id}
        if prompt_version is not None:
            kwargs["promptVersion"] = prompt_version
        response = self.client.get_prompt(**kwargs)

        # GetPrompt returns a list of variants; extract all components from the default one
        variant = next(
            (v for v in response.get("variants", [])
             if v.get("name") == response.get("defaultVariant")),
            {},
        )
        inference = variant.get("inferenceConfiguration", {}).get("text", {})
        return {
            "prompt_text": self._extract_prompt_text(variant),
            "model_id": variant.get("modelId"),
            "inference_config": {
                "temperature": inference.get("temperature"),
                "top_p": inference.get("topP"),
                "max_tokens": inference.get("maxTokens"),
                # ... other inference parameters (e.g. stopSequences)
            },
        }

    def _extract_prompt_text(self, variant: dict) -> str:
        # TEXT prompt templates store the body under templateConfiguration.text.text
        return variant.get("templateConfiguration", {}).get("text", {}).get("text", "")
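
A usage sketch of the fetcher above (the prompt id, region, and returned values are illustrative only):

manager = BedrockPromptManager(region_name="us-east-1")
config = manager.get_prompt_config("my-prompt-id", prompt_version="1")
# config now holds something like:
# {'prompt_text': 'You are a helpful assistant...',
#  'model_id': 'anthropic.claude-3-5-sonnet-20240620-v1:0',
#  'inference_config': {'temperature': 0.7, 'top_p': 0.9, 'max_tokens': 1024}}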

2. Apply Configuration to BedrockModel

The BedrockModel already supports all of these configuration options through its BedrockConfig TypedDict.

The integration would need to:

  1. Fetch the prompt configuration
  2. Update the BedrockModel config with the fetched parameters
  3. Pass the prompt text as system_prompt (a sketch follows below)
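
A minimal sketch of those three steps, assuming the BedrockPromptManager above and the BedrockModel/Agent interfaces described in this issue (exact constructor and config key names in Strands may differ):

from strands import Agent
from strands.models import BedrockModel

config = BedrockPromptManager().get_prompt_config("my-prompt-id", prompt_version="1")

model = BedrockModel(model_id=config["model_id"])
# Apply only the inference parameters the prompt actually defines
model.update_config(**{k: v for k, v in config["inference_config"].items() if v is not None})

agent = Agent(model=model, system_prompt=config["prompt_text"])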

3. Integration at Agent Level

# Option A: At initialization
agent = Agent(
    bedrock_prompt_id="my-prompt-id",
    bedrock_prompt_version="1"
)
# This would internally:
# 1. Fetch prompt config from Bedrock
# 2. Create BedrockModel with the fetched model_id and inference params
# 3. Set system_prompt from the fetched prompt text

# Option B: Override at runtime
agent(
    "user query",
    bedrock_prompt_override={
        'prompt_id': 'my-prompt-id',
        'prompt_version': '1'
    }
)

4. Model Configuration Override

The key is that BedrockModel.update_config() already supports updating all of these parameters.

So the implementation would:

  1. Fetch prompt configuration from Bedrock Prompt Management
  2. Call model.update_config(model_id=..., temperature=..., top_p=..., max_tokens=...)
  3. Pass the prompt text through the existing system_prompt flow (see the sketch after this list)
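
A hedged sketch of what a runtime override could look like; the helper name apply_bedrock_prompt and the direct access to agent.model and agent.system_prompt are illustrative assumptions, not existing Strands API:

def apply_bedrock_prompt(agent: Agent, prompt_id: str, prompt_version: str | None = None) -> None:
    # 1. Fetch the prompt configuration from Bedrock Prompt Management
    config = BedrockPromptManager().get_prompt_config(prompt_id, prompt_version)
    # 2. Update the model configuration in place
    agent.model.update_config(
        model_id=config["model_id"],
        **{k: v for k, v in config["inference_config"].items() if v is not None},
    )
    # 3. Route the prompt text through the existing system_prompt flow
    agent.system_prompt = config["prompt_text"]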

5. Request Formatting

The format_request method already handles all of these parameters; the inference config is properly formatted into the Bedrock request.
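
For reference, these settings map onto a Converse-style Bedrock request body roughly as below (the exact shape format_request produces is an assumption here; values are illustrative):

request = {
    "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "system": [{"text": "<prompt text fetched from Prompt Management>"}],
    "inferenceConfig": {
        "temperature": 0.7,
        "topP": 0.9,
        "maxTokens": 1024,
    },
}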

Notes

The critical insight is that Bedrock Prompts are not just text - they're a complete configuration bundle that includes model selection and inference parameters. The solution must fetch and apply all of these settings, not just the prompt text. The good news is that BedrockModel already supports all the necessary configuration options through its BedrockConfig, so the integration would primarily involve fetching the prompt configuration and applying it to the model.

Use Case

  • Realize the purpose of prompt management beyond the AWS ecosystem by integrating it into Strands.
  • Make it easier to test different prompt configurations: different prompt versions, prompt text, and model configurations.
  • Run agentic workflows with different configurations.
  • Run evals with different configurations.

Alternative Solutions

Integrate with AWS Prompt Management using APIs outside of Strands, and use the fetched data to set the individual Strands properties (system_prompt, model, etc.).

Additional Context

No response

Metadata

Labels

  • awaiting customer: Auto-applied tag indicating we're awaiting customer response based on the last commenter.
  • enhancement: New feature or request.
  • to-refine: Issue needs to be discussed with the team and the team has come to an effort estimate consensus.
