Configure Commit-AI for your workflow.
### GROQ_API_KEY

- Your Groq API key from https://console.groq.com/keys
- Required for AI analysis
- Stored in `~/.commit-ai.env` or `.env`

```bash
export GROQ_API_KEY=gsk_your_key_here
```

### COMMIT_AI_MODEL

- AI model to use
- Default: `llama-3.1-8b-instant`
- Options: `llama-3.1-70b-versatile`, `mixtral-8x7b-32768`, etc.

```bash
export COMMIT_AI_MODEL=llama-3.1-70b-versatile
```

Location: `~/.commit-ai.env`

```bash
GROQ_API_KEY=gsk_your_key_here
COMMIT_AI_MODEL=llama-3.1-8b-instant
```

Location: `.env` (in project root)

```bash
GROQ_API_KEY=gsk_your_key_here
COMMIT_AI_MODEL=llama-3.1-8b-instant
```

Project configuration overrides global configuration.
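The precedence rule can be illustrated with plain shell sourcing. This is a sketch assuming the project file is simply loaded after the global one; the tool's actual loader may differ:

```shell
#!/bin/sh
# Illustration only: project config wins because it is loaded last.
# Files are created in a temp dir so no real config is touched.
tmp=$(mktemp -d)

printf 'COMMIT_AI_MODEL=llama-3.1-8b-instant\n'    > "$tmp/global.env"   # global
printf 'COMMIT_AI_MODEL=llama-3.1-70b-versatile\n' > "$tmp/project.env"  # project

. "$tmp/global.env"     # global config loaded first
. "$tmp/project.env"    # project config loaded second, overriding the model

echo "$COMMIT_AI_MODEL"  # prints: llama-3.1-70b-versatile
```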
Location: `.commit-ai.yaml`

```yaml
# AI Settings
model: "llama-3.1-8b-instant"
temperature: 0.7
max_tokens: 15000

# Commit Rules
rules:
  max_title_length: 72
  require_scope: false
  allowed_types:
    - feat
    - fix
    - docs
    - refactor
    - test
    - chore
    - build
    - ci
    - perf
    - style

# Scope Settings
scopes:
  auto_detect: true
  aliases:
    frontend: ui
    backend: api
    database: db

# Ignore Patterns
ignore:
  - "*.log"
  - "node_modules/"
  - ".git/"
```

- Go to https://console.groq.com/keys
- Create a free account or log in
- No credit card required for free tier
- Click "Create New API Key"
- Copy the key
- Save it securely
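When saving the key, restricting file permissions is a sensible precaution. A minimal sketch: the 0600 mode is a general convention, not a Commit-AI requirement, and `KEYFILE` points at a temp dir here for safety instead of the real `~/.commit-ai.env`:

```shell
#!/bin/sh
# Sketch: write the key file so only the owner can read it (mode 0600).
KEYFILE="${KEYFILE:-$(mktemp -d)/.commit-ai.env}"   # normally ~/.commit-ai.env

umask 077                                       # new files are created owner-only
printf 'GROQ_API_KEY=%s\n' 'gsk_your_key_here' > "$KEYFILE"

ls -l "$KEYFILE"                                # shows -rw------- permissions
```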
```bash
# Option 1: Interactive setup
commit-ai
# Enter key when prompted

# Option 2: Manual setup
echo "GROQ_API_KEY=gsk_your_key_here" > ~/.commit-ai.env
```

```bash
# Use a different model
commit-ai -m llama-3.1-70b-versatile

# Use a local model (future)
commit-ai -m local:llama2
```

```bash
# See detailed analysis
commit-ai -v

# Output includes:
# - Detected scope
# - Number of files analyzed
# - Diff size
# - Number of options generated
```

```bash
# Add emojis to commit messages
commit-ai -e

# Emojis added to:
# - Commit title
# - Category headers
# - Bullet points
```

```bash
# Commit automatically after selection
commit-ai -c

# Skip confirmation prompts
commit-ai -y

# Combine flags
commit-ai -cev
```

Add to `.gitignore`:
```gitignore
# Binary files
*.exe
*.dll
*.so

# Build artifacts
dist/
build/

# Dependencies
node_modules/
vendor/
```

Commit-AI automatically respects `.gitignore` patterns.
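Whether a given path would be excluded can be checked with `git check-ignore`, which applies the same pattern rules git itself uses:

```shell
#!/bin/sh
# Demo: git check-ignore tests a path against .gitignore patterns.
repo=$(mktemp -d)
git -C "$repo" init -q

printf '*.log\nnode_modules/\n' > "$repo/.gitignore"

git -C "$repo" check-ignore -v debug.log   # matched: prints pattern and path
git -C "$repo" check-ignore src/main.go \
  || echo "src/main.go is not ignored"     # exit code 1 means not ignored
```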
Control AI creativity (0.0 = deterministic, 1.0 = creative):

```bash
# In .env
COMMIT_AI_TEMPERATURE=0.7   # Balanced (default)
COMMIT_AI_TEMPERATURE=0.3   # More consistent
COMMIT_AI_TEMPERATURE=0.9   # More creative
```

Control response length:

```bash
# In .env
COMMIT_AI_MAX_TOKENS=8000    # Shorter responses
COMMIT_AI_MAX_TOKENS=15000   # Longer responses (default)
```

```bash
# Check if key is set
echo $GROQ_API_KEY

# Set key
export GROQ_API_KEY=gsk_your_key_here

# Or create .env file
echo "GROQ_API_KEY=gsk_your_key_here" > ~/.commit-ai.env
```

**Invalid API key:**

- Verify key from https://console.groq.com/keys
- Check for typos
- Regenerate key if needed
**Rate limit exceeded:**

- Upgrade Groq plan
- Wait before retrying
- Stage fewer files at once

**Model not found:**

- Check available models at https://console.groq.com/docs/models
- Use the default model: `llama-3.1-8b-instant`
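Part of this troubleshooting can be scripted. The sketch below only checks that `GROQ_API_KEY` is set and carries the `gsk_` prefix seen in the examples above (a format assumption); it does not verify that Groq accepts the key:

```shell
#!/bin/sh
# Sketch: local sanity check for GROQ_API_KEY (does not call the API).
check_key() {
    key="${GROQ_API_KEY:-}"
    if [ -z "$key" ]; then
        echo "GROQ_API_KEY is not set" >&2
        return 1
    fi
    case "$key" in
        gsk_*) echo "GROQ_API_KEY looks plausible (gsk_ prefix present)" ;;
        *)     echo "GROQ_API_KEY has an unexpected prefix" >&2
               return 1 ;;
    esac
}

GROQ_API_KEY=gsk_your_key_here
check_key
```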
1. **Store API key securely**
   - Use `~/.commit-ai.env` for global config
   - Add `.env` to `.gitignore` for project config
   - Never commit API keys

2. **Use project-specific config**
   - Create `.env` in project root
   - Override global settings as needed
   - Share `.commit-ai.yaml` with team

3. **Optimize for your workflow**
   - Use the `-e` flag for visual commits
   - Use the `-v` flag during development
   - Use the `-c` flag for automation

4. **Monitor API usage**
   - Check usage at https://console.groq.com/usage
   - Adjust token limits if needed
   - Consider upgrading plan if needed
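For the automation point, a git alias keeps the invocation short. A sketch: the alias name `ai` is arbitrary, and the flags are the ones described above:

```shell
#!/bin/sh
# Sketch: register a "git ai" alias that runs commit-ai with -c and -y.
# A scratch config file is used here; drop --file to write your real gitconfig.
cfg=$(mktemp)

git config --file "$cfg" alias.ai '!commit-ai -c -y'

git config --file "$cfg" alias.ai   # prints: !commit-ai -c -y
```

With the alias in your global config, `git ai` would run the full analyze-and-commit flow without prompts.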
| Variable | Default | Description |
|---|---|---|
| `GROQ_API_KEY` | - | Groq API key (required) |
| `COMMIT_AI_MODEL` | `llama-3.1-8b-instant` | AI model to use |
| `COMMIT_AI_TEMPERATURE` | `0.7` | AI creativity (0.0-1.0) |
| `COMMIT_AI_MAX_TOKENS` | `15000` | Max response length |
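The fallback behavior the table describes can be emulated with standard shell default expansion. This is a sketch of how a wrapper might resolve the settings, not Commit-AI's actual loading code:

```shell
#!/bin/sh
# Sketch: fall back to the documented defaults when a variable is unset.
MODEL="${COMMIT_AI_MODEL:-llama-3.1-8b-instant}"
TEMPERATURE="${COMMIT_AI_TEMPERATURE:-0.7}"
MAX_TOKENS="${COMMIT_AI_MAX_TOKENS:-15000}"

# With none of the variables set, this prints the defaults from the table.
echo "model=$MODEL temperature=$TEMPERATURE max_tokens=$MAX_TOKENS"
```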