Conversation
Replace bcrypt token hashing with configurable FIPS-approved algorithms (HMAC-SHA256, PBKDF2-SHA256) to support deployments on FIPS-enabled hosts (e.g., RHEL 9 in FIPS mode). Existing bcrypt hashes remain verifiable for backward compatibility via automatic prefix detection.

Key changes:
- Configurable token hashing: `TOKEN_HASH_ALGORITHM` setting with `hmac-sha256` (default), `pbkdf2-sha256`, and `bcrypt` options
- Redis TLS configuration: SSL CA certs, client certs (mTLS), cert requirements, and minimum TLS version settings
- FIPS diagnostics: new `fips.py` module, `agent-memory fips-check` CLI command, and authenticated `GET /v1/fips` endpoint
- hashlib safety: added `usedforsecurity=False` to non-security SHA-256 calls
- FIPS Docker images: `Dockerfile.fips` with UBI 9 and Chainguard FIPS targets
- Bumped `cryptography>=42.0.0` for OpenSSL 3.x FIPS provider support
- 24 new FIPS compliance tests covering all crypto boundaries
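The prefix auto-detection mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code; the function name and exact prefix set are assumptions based on the formats described in this thread (`hmac$`, `pbkdf2$`, bcrypt's `$2b$`):

```python
# Hypothetical sketch of prefix-based hash algorithm detection.
# Real function and setting names in agent-memory-server may differ.
def detect_hash_algorithm(token_hash: str) -> str:
    """Infer which algorithm produced a stored token hash from its prefix."""
    if token_hash.startswith("hmac$"):
        return "hmac-sha256"
    if token_hash.startswith("pbkdf2$"):
        return "pbkdf2-sha256"
    if token_hash.startswith(("$2a$", "$2b$", "$2y$")):
        return "bcrypt"  # legacy hashes stay verifiable
    raise ValueError("unrecognized token hash format")
```

Dispatching on the stored hash rather than the current `TOKEN_HASH_ALGORITHM` setting is what lets old bcrypt tokens keep working after the default changes.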
FIPS compliance implementation looks solid overall with good test coverage and backward compatibility. A few security considerations identified below.
🤖 Automated review complete. Please react with 👍 or 👎 on the individual review comments to provide feedback on their usefulness.
🛡️ Jit Security Scan Results: ✅ No security findings were detected in this PR
Cursor Bugbot has reviewed your changes and found 3 potential issues.
Pull request overview
This PR adds FIPS 140-3 compliance support to the Agent Memory Server, enabling deployment in federal and regulated environments requiring FedRAMP compliance. The changes make the system "FIPS-capable" by switching from non-approved algorithms (bcrypt/Blowfish) to FIPS-approved alternatives (HMAC-SHA256, PBKDF2-SHA256) and adding configurable Redis TLS settings.
Changes:
- Introduces configurable token hashing with HMAC-SHA256 as the new default (replacing bcrypt), with backward compatibility for existing bcrypt hashes
- Adds Redis TLS configuration options (ca_certs, certfile, keyfile, cert_reqs, min_version)
- Provides FIPS diagnostics via a new CLI command (`fips-check`) and an authenticated API endpoint (`GET /v1/fips`)
Reviewed changes
Copilot reviewed 11 out of 12 changed files in this pull request and generated 20 comments.
Show a summary per file
| File | Description |
|---|---|
| uv.lock | Updates cryptography minimum version to 42.0.0 for FIPS provider support |
| pyproject.toml | Updates cryptography minimum version to 42.0.0 for FIPS provider support |
| agent_memory_server/auth.py | Implements multi-algorithm token hashing with auto-detection; adds HMAC-SHA256 and PBKDF2-SHA256 support |
| agent_memory_server/config.py | Adds token hashing and Redis TLS configuration settings |
| agent_memory_server/utils/redis.py | Implements TLS kwargs builder for Redis connections based on URL scheme and settings |
| agent_memory_server/utils/recency.py | Adds usedforsecurity=False flag to SHA-256 calls used for deduplication |
| agent_memory_server/fips.py | New module providing FIPS compliance diagnostics (OpenSSL version, kernel FIPS mode, algorithm checks) |
| agent_memory_server/cli.py | Adds fips-check CLI command for runtime FIPS posture assessment |
| agent_memory_server/healthcheck.py | Adds authenticated /v1/fips endpoint for FIPS diagnostics |
| tests/test_fips_compliance.py | Comprehensive test suite covering token hashing algorithms, Redis TLS, and FIPS diagnostics |
| tests/test_token_auth.py | Updates tests for new default HMAC-SHA256 algorithm |
| Dockerfile.fips | New multi-target Dockerfile for FIPS-hardened images (UBI9 and Chainguard variants) |
- Fix HMAC verification to compute inline instead of calling `hash_token()`, avoiding dependency on the current `token_hash_algorithm` setting
- Embed iteration count in the PBKDF2 hash format (`pbkdf2$iterations$salt$hash`) so tokens remain valid when the iteration count is changed
- Add explicit parts validation for PBKDF2 hash parsing
- Reject default `TOKEN_HASH_SECRET` in `verify_auth_config()` when using hmac-sha256 with token auth enabled
- Map `redis_ssl_cert_reqs` string values to `ssl.CERT_*` enum constants
- Broaden `fips.py` exception handling (OSError); fix TLS detection to include `redis_ssl_ca_certs`
- `Dockerfile.fips`: pin image tags, fix cache mount paths for non-root user, remove ENV vars that triggered Jit false positives
- Add tests for iteration-change backward compat, algorithm-switch compat, TLS detection via CA certs, and default secret rejection
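The embedded-iterations change above can be sketched like this. It is a hypothetical illustration of the `pbkdf2$iterations$salt$hash` format described in this thread, not the PR's actual code; function names and the default iteration count are assumptions:

```python
import hashlib
import hmac
import secrets

def hash_token_pbkdf2(token: str, iterations: int = 600_000) -> str:
    """Hash a token, embedding the iteration count so it survives config changes."""
    salt = secrets.token_bytes(16)
    dk = hashlib.pbkdf2_hmac("sha256", token.encode("utf-8"), salt, iterations)
    return f"pbkdf2${iterations}${salt.hex()}${dk.hex()}"

def verify_token_pbkdf2(token: str, token_hash: str) -> bool:
    """Verify using the iteration count stored in the hash, not the current setting."""
    parts = token_hash.split("$")
    if len(parts) != 4 or parts[0] != "pbkdf2":
        return False
    try:
        iterations = int(parts[1])
        salt = bytes.fromhex(parts[2])
        expected = bytes.fromhex(parts[3])
    except ValueError:
        return False  # malformed hash; no warning-level logging needed
    dk = hashlib.pbkdf2_hmac("sha256", token.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(dk, expected)
```

Because verification reads the iteration count out of the stored hash, raising the configured iteration count later only affects newly issued tokens; existing hashes keep verifying.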
tylerhutcherson
left a comment
Looks good in terms of providing the Redis TLS and hashing algorithm updates. A few other questions/points:
@bsbodden We need documentation on connecting AMS to Redis with SSL enabled. Ideally we should also test this ourselves before handing it over to customers.
Do we need to maintain both Docker images?
This PR introduces FIPS 140-3 compliance support with configurable token hashing and Redis TLS configuration. I've identified a critical security issue with HMAC secret rotation that will invalidate all existing tokens, plus several other concerns around error handling and configuration validation.
```python
    "hmac$"
    + hmac.new(secret, token.encode("utf-8"), hashlib.sha256).hexdigest()
)
return hmac.compare_digest(expected, token_hash)
```
Critical: HMAC secret rotation will invalidate all existing tokens
The HMAC-SHA256 implementation uses a single global token_hash_secret for both hashing and verification. If this secret is ever rotated (which is a security best practice), all existing HMAC-hashed tokens will immediately become invalid because verification uses the current secret.
Unlike PBKDF2 (which embeds the salt in the hash) or bcrypt (which is self-contained), HMAC requires the same secret key for verification. This creates an operational problem:
- Tokens are stored in Redis with their hashes
- If `TOKEN_HASH_SECRET` is rotated, `verify_token_hash` will fail for all existing HMAC tokens
- Users will be locked out until tokens are regenerated
Suggested improvement:
Consider one of these approaches:
```python
# Option 1: Support multiple secrets for gradual rotation
if token_hash.startswith("hmac$"):
    secrets_to_try = [
        settings.token_hash_secret,
        *settings.token_hash_secret_previous,  # List of old secrets
    ]
    for secret in secrets_to_try:
        expected = "hmac$" + hmac.new(
            secret.encode("utf-8"),
            token.encode("utf-8"),
            hashlib.sha256,
        ).hexdigest()
        if hmac.compare_digest(expected, token_hash):
            return True
    return False
```

```python
# Option 2: Embed a key version in the hash format
# hmac$v1$<hash> where v1 maps to a specific secret
# This allows you to maintain multiple secrets indexed by version
```

This is a critical operational issue for production deployments where secret rotation is required for compliance.
```python
if len(parts) != 4:
    return False
_, iterations_str, salt_hex, hash_hex = parts
iterations = int(iterations_str)
```
Missing error handling for invalid hex encoding
The int(iterations_str) and bytes.fromhex(salt_hex) calls can raise ValueError if the hash format is corrupted or maliciously crafted. While there's a catch-all exception handler at line 322, these specific parsing errors should be caught earlier to avoid logging warnings for expected validation failures.
Current code:
```python
_, iterations_str, salt_hex, hash_hex = parts
iterations = int(iterations_str)  # Can raise ValueError
salt = bytes.fromhex(salt_hex)    # Can raise ValueError
```
Suggested improvement:
```python
try:
    iterations = int(iterations_str)
    salt = bytes.fromhex(salt_hex)
    expected_hash = bytes.fromhex(hash_hex)  # Validate this too
except (ValueError, TypeError):
    return False
```

This prevents the catch-all exception handler from logging warnings for malformed input, which is expected during authentication attempts with invalid tokens.
```python
# Test if non-FIPS algorithms are blocked
try:
    hashlib.md5(b"test")
```
MD5 test may cause issues in strict FIPS mode
Testing MD5 availability by calling hashlib.md5(b"test") will raise a ValueError in strict FIPS mode, which is the intended behavior. However, this test doesn't distinguish between:
- MD5 being blocked by FIPS (good)
- MD5 being unavailable for other reasons (potentially bad)
Additionally, calling MD5 even for testing purposes may trigger security monitoring alerts in some environments.
Suggested improvement:
```python
# Test if non-FIPS algorithms are blocked
try:
    # Use usedforsecurity=True to test if FIPS blocks it
    hashlib.md5(b"test", usedforsecurity=True)
    diagnostics["md5_blocked"] = False
except ValueError as e:
    # MD5 blocked - check if it's due to FIPS
    diagnostics["md5_blocked"] = "fips" in str(e).lower()
except Exception:
    # Other error - MD5 unavailable but not necessarily FIPS
    diagnostics["md5_blocked"] = None
```

This provides more nuanced diagnostics and explicitly uses `usedforsecurity=True` to make the intent clear.
```python
    "optional": ssl.CERT_OPTIONAL,
    "none": ssl.CERT_NONE,
}
cert_reqs = cert_reqs_map.get(cert_reqs_str.lower(), ssl.CERT_REQUIRED)
```
Case-sensitive string comparison may cause configuration errors
The cert_reqs_str.lower() call handles case-insensitivity, but if the user provides a value like "REQUIRED" in the environment variable and the code later compares it elsewhere without lowercasing, it could cause issues.
More importantly, there's no validation that redis_ssl_cert_reqs is one of the valid values. Invalid values silently fall back to ssl.CERT_REQUIRED, which could mask configuration errors.
Suggested improvement:
```python
cert_reqs_str = settings.redis_ssl_cert_reqs.lower()
cert_reqs_map = {
    "required": ssl.CERT_REQUIRED,
    "optional": ssl.CERT_OPTIONAL,
    "none": ssl.CERT_NONE,
}
if cert_reqs_str not in cert_reqs_map:
    logger.warning(
        f"Invalid redis_ssl_cert_reqs value: {settings.redis_ssl_cert_reqs}. "
        f"Valid values are: required, optional, none. Defaulting to 'required'."
    )
cert_reqs = cert_reqs_map.get(cert_reqs_str, ssl.CERT_REQUIRED)
```

This provides explicit feedback when configuration is incorrect rather than silently using a default.
```python
token_hash_algorithm: Literal["hmac-sha256", "pbkdf2-sha256", "bcrypt"] = (
    "hmac-sha256"
)
token_hash_secret: str = "change-me-in-production"  # Required for hmac-sha256
```
Default secret is a security risk
While there's validation in verify_auth_config() that rejects the default secret when using HMAC-SHA256, the default value "change-me-in-production" is still problematic because:
- It's visible in the codebase and could be used by attackers if validation is bypassed
- It may be accidentally used in development and then promoted to production
- The validation only runs when `auth_mode == "token"` or `token_auth_enabled == True`, but tokens could still be created via CLI even when auth is disabled
Suggested improvement:
```python
token_hash_secret: str | None = None  # Force explicit configuration

# Then in verify_auth_config():
if settings.token_hash_algorithm == "hmac-sha256":
    if not settings.token_hash_secret:
        raise ValueError(
            "TOKEN_HASH_SECRET must be set when using hmac-sha256. "
            'Generate one with: python -c "import secrets; print(secrets.token_urlsafe(32))"'
        )
```

This forces explicit configuration and prevents accidental use of a weak default.
```dockerfile
EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD ["python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/v1/health')"]
```
Healthcheck may fail silently in restricted environments
The Chainguard healthcheck uses Python's urllib.request which may not handle connection errors gracefully in all environments. If the health check fails, the container will be marked unhealthy but the error won't be visible.
Current code:
```dockerfile
CMD ["python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/v1/health')"]
```
Suggested improvement:
```dockerfile
CMD ["python", "-c", "import sys, urllib.request; sys.exit(0 if urllib.request.urlopen('http://localhost:8000/v1/health').getcode() == 200 else 1)"]
```

This explicitly checks the status code and returns the appropriate exit code. Alternatively, consider installing curl in the Chainguard image for consistency with the UBI9 image, or use a simple Python script that provides better error handling.
Summary
Add FIPS 140-3 compliance support so AMS can be deployed on FIPS-enabled hosts (e.g., RHEL 9 in FIPS mode) for federal government and regulated-industry deployments requiring FedRAMP compliance.
Problem: The existing `bcrypt` token hashing uses the Blowfish cipher, which is not a FIPS-approved algorithm. Redis connections lack explicit TLS configuration knobs needed for FIPS-compliant data-in-transit.

Solution: Make AMS "FIPS-capable": when deployed on a FIPS-enabled host with the FIPS Docker image, all cryptographic operations use FIPS-approved algorithms via the host's validated OpenSSL module.
Changes
- Configurable token hashing (`auth.py`, `config.py`): New `TOKEN_HASH_ALGORITHM` setting supporting `hmac-sha256` (default), `pbkdf2-sha256`, and `bcrypt`. Existing bcrypt hashes auto-detected and verified for backward compatibility via hash prefix detection (`hmac$`, `pbkdf2$`, `$2b$`).
- Redis TLS (`config.py`, `utils/redis.py`): New settings for `REDIS_SSL_CA_CERTS`, `REDIS_SSL_CERTFILE`, `REDIS_SSL_KEYFILE`, `REDIS_SSL_CERT_REQS`, and `REDIS_SSL_MIN_VERSION`. TLS kwargs automatically applied when using `rediss://` URLs or when CA certs are configured.
- FIPS diagnostics (`fips.py`, `cli.py`, `healthcheck.py`): New `agent-memory fips-check` CLI command and authenticated `GET /v1/fips` endpoint that report OpenSSL version, kernel FIPS mode, MD5 blocked status, configured token hash algorithm, Redis TLS status, and overall FIPS capability assessment.
- hashlib safety (`utils/recency.py`): Added `usedforsecurity=False` to SHA-256 calls used for deduplication (not security), preventing potential issues on strict FIPS OpenSSL configurations.
- FIPS Docker images (`Dockerfile.fips`): Multi-target Dockerfile with `fips-ubi9` (Red Hat UBI 9) and `fips-chainguard` targets. Existing `Dockerfile` unchanged.
- cryptography (`pyproject.toml`): Minimum version raised to `>=42.0.0` for proper OpenSSL 3.x FIPS provider support.

Crypto boundary coverage
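The Redis TLS behavior described above (TLS kwargs applied for `rediss://` URLs or when CA certs are configured) might look roughly like this sketch. The kwarg names mirror redis-py's SSL options; the builder function itself is hypothetical and the real `utils/redis.py` implementation may differ:

```python
import ssl
from typing import Optional
from urllib.parse import urlparse

def build_redis_tls_kwargs(
    redis_url: str,
    ca_certs: Optional[str] = None,
    certfile: Optional[str] = None,
    keyfile: Optional[str] = None,
    cert_reqs: str = "required",
    min_version: str = "1.2",
) -> dict:
    """Sketch: derive redis-py TLS kwargs from the URL scheme and settings."""
    # TLS applies for rediss:// URLs, or whenever CA certs are configured.
    if urlparse(redis_url).scheme != "rediss" and not ca_certs:
        return {}
    cert_reqs_map = {
        "required": ssl.CERT_REQUIRED,
        "optional": ssl.CERT_OPTIONAL,
        "none": ssl.CERT_NONE,
    }
    kwargs: dict = {
        "ssl": True,
        "ssl_cert_reqs": cert_reqs_map.get(cert_reqs.lower(), ssl.CERT_REQUIRED),
        "ssl_min_version": (
            ssl.TLSVersion.TLSv1_3 if min_version == "1.3" else ssl.TLSVersion.TLSv1_2
        ),
    }
    if ca_certs:
        kwargs["ssl_ca_certs"] = ca_certs
    if certfile and keyfile:  # mTLS client certificate pair
        kwargs["ssl_certfile"] = certfile
        kwargs["ssl_keyfile"] = keyfile
    return kwargs
```

Returning an empty dict for plain `redis://` URLs without CA certs keeps the default non-TLS deployment path untouched, matching the backward-compatibility goal.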
Backward compatibility
- Default token hash algorithm is now `hmac-sha256`, but `bcrypt` remains available
- No changes to the existing `Dockerfile` or default non-FIPS deployment behavior

Test plan
- Unit tests (`tests/test_fips_compliance.py`)
- Build `Dockerfile.fips --target fips-ubi9` on a FIPS-enabled host
- Verify the `GET /v1/fips` endpoint returns correct diagnostics on a FIPS host
- Verify `agent-memory fips-check` CLI output on a FIPS host

Note
High Risk
Changes core token authentication hashing behavior (new default and new verification paths) and adds Redis TLS configuration, which can impact login validity and connectivity if misconfigured. Also introduces new FIPS diagnostic surfaces (CLI + authenticated API) that need to be reviewed for information exposure and operational expectations.
Overview
Adds a FIPS-capable deployment path by introducing `Dockerfile.fips` (UBI9 + Chainguard targets) and bumping `cryptography` to `>=42.0.0` for OpenSSL 3.x/FIPS provider support.

Reworks token hashing to be configurable via `settings.token_hash_algorithm` with new FIPS-friendly schemes (`hmac-sha256` default, `pbkdf2-sha256`, plus `bcrypt`), and updates verification to auto-detect hash formats by prefix for backward compatibility.

Adds Redis TLS knobs (`redis_ssl_*`) and applies them automatically in `get_redis_conn()` when using `rediss://` or when CA certs are configured; also adds FIPS diagnostics via `agent-memory fips-check` and an authenticated `GET /v1/fips` endpoint, plus test coverage for hashing/TLS/diagnostics and `usedforsecurity=False` for SHA-256 used in non-security memory dedupe.

Written by Cursor Bugbot for commit 898fcc2.