A natural language CLI that converts queries to shell commands using a local LLM (Ollama).
```
$ ask find all files bigger than 100MB

1. find / -size +100M -type f   # Search entire filesystem for files > 100MB
2. find . -size +100M -type f   # Search current directory only

Select [1-2], default=1, [Q]uit: 2

>> find . -size +100M -type f
[E]xecute [C]opy [Q]uit (default=E): E

./downloads/archive.zip
./videos/demo.mp4
```
### Step 1 — Install Ollama

```sh
brew install ollama
```

### Step 2 — Pull the recommended model

```sh
ollama pull qwen2.5-coder:7b
```

This downloads ~4.7GB. `qwen2.5-coder:7b` is the recommended model for shell command generation (84.8% HumanEval, fast on a laptop).
### Step 3 — Install ask

```sh
brew tap sdaas/tap
brew install ask
```

That's it. Ollama starts automatically when you run `ask`.
```
ask <natural language query> [options]
```

Examples:

```sh
ask find all files bigger than 100MB
ask list running processes sorted by memory
ask show disk usage for each directory in /var
ask count lines in all python files recursively
ask compress the logs folder into a tarball
ask --dry-run delete all .tmp files older than 7 days
ask --verbose find duplicate files in this directory
```

| Flag | Description |
|---|---|
| `--dry-run` | Show the command but do not execute it |
| `--verbose` | Print the LLM prompt, raw response, and timing |
| `--help` | Show usage information |
Config file is auto-created at `~/.ask/config.yaml` on first run:

```yaml
ollama_model: qwen2.5-coder:7b   # Ollama model to use
history_limit: 5                 # Number of past interactions sent as context
dry_run_default: false           # Set to true to always dry-run
```

All interactions are logged to `~/.ask/ask.log`:

```
2026-04-12 10:23:01 [INFO] Query: find big files
2026-04-12 10:23:02 [INFO] LLM returned 2 option(s)
2026-04-12 10:23:05 [INFO] Executed: find . -size +100M | exit=0 | stdout=42B
```
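Since the config file is just flat `key: value` lines, reading it takes only a few lines of code. The sketch below is illustrative only and is not `ask`'s actual loader, which may use a full YAML parser:

```python
# Illustrative stdlib-only reader for a flat key: value config file.
# Not ask's actual loader; shown to make the file format concrete.
SAMPLE = """\
ollama_model: qwen2.5-coder:7b   # Ollama model to use
history_limit: 5                 # Number of past interactions sent as context
dry_run_default: false           # Set to true to always dry-run
"""

def parse_config(text):
    """Parse flat `key: value` lines, dropping comments and blank lines."""
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip trailing comments
        if not line:
            continue
        key, _, value = line.partition(":")    # split on the FIRST colon only
        cfg[key.strip()] = value.strip()
    return cfg

cfg = parse_config(SAMPLE)
print(cfg["ollama_model"])   # → qwen2.5-coder:7b
```

Note that splitting on the first colon only is what keeps model names like `qwen2.5-coder:7b` intact.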
Interaction history (queries, commands, outputs) is stored in `~/.ask/ask.db` (SQLite). To inspect it:

```sh
sqlite3 ~/.ask/ask.db "SELECT user_query, executed_command, exit_code FROM interactions ORDER BY id DESC LIMIT 10;"
```

`ask` stores the last N interactions in SQLite and passes them as context to the LLM. This lets subsequent queries reference previous results:
```sh
ask find all .log files older than 30 days
# → find /var/log -name "*.log" -mtime +30

ask delete them
# → LLM sees previous query+output and generates: find /var/log -name "*.log" -mtime +30 -delete
```

Note: Full pronoun resolution ("them", "those") is best-effort in v1 — the LLM infers context from the truncated output passed in the prompt. Complex cases may require rephrasing.
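To make the mechanism concrete, here is a hypothetical sketch of folding history into the LLM prompt. The prompt wording, the history tuple shape, and the 200-character truncation are all assumptions for illustration, not `ask`'s actual implementation:

```python
def build_prompt(query, history, limit=5, max_output=200):
    """Fold the last `limit` (query, command, output) tuples into the prompt.

    Hypothetical sketch; the real prompt ask sends may differ.
    """
    lines = ["Translate the user's request into a shell command."]
    for past_query, command, output in history[-limit:]:
        lines.append(f"Previous query: {past_query}")
        lines.append(f"Command run: {command}")
        # Truncate output so a chatty command can't blow up the prompt.
        lines.append(f"Output (truncated): {output[:max_output]}")
    lines.append(f"Current query: {query}")
    return "\n".join(lines)

history = [(
    "find all .log files older than 30 days",
    'find /var/log -name "*.log" -mtime +30',
    "/var/log/old.log\n/var/log/stale.log",
)]
prompt = build_prompt("delete them", history)
print(prompt)
```

With this shape, "delete them" arrives alongside the previous command and its truncated output, which is why the model can usually resolve the pronoun.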
- Shell-affecting commands (`cd`, `export`, `source`) cannot change your current shell session when run via `ask`, since commands execute in a subprocess. `ask` will warn you and suggest running the command directly.
- Model accuracy: LLM-generated commands are suggestions. Always review before executing, especially for destructive operations (`rm`, `drop`, etc.).
- Ollama model size: `qwen2.5-coder:7b` requires ~4.7GB disk space and ~5GB RAM. For lower-spec machines, consider `llama3.2:3b` (1.9GB) — update `ollama_model` in `~/.ask/config.yaml`.
- Clipboard: `[C]opy` uses `pbcopy` (macOS only).
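The subprocess limitation is easy to demonstrate outside of `ask`: a `cd` run in a child process dies with that process and never touches the parent's working directory.

```python
import os
import subprocess

before = os.getcwd()
# The child changes directory and reports it...
result = subprocess.run(
    "cd / && pwd", shell=True, capture_output=True, text=True, check=True
)
print(result.stdout.strip())   # → /
# ...but the parent's working directory is unchanged.
assert os.getcwd() == before
```

This is exactly the situation `ask` is in, which is why it warns and suggests running shell-state commands directly.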
```sh
git clone https://github.com/Sdaas/ask
cd ask
python3 -m venv venv && source venv/bin/activate
pip install -e .
pytest tests/ -v
```

Run directly after installing:

```sh
ask find all files bigger than 100MB
```

- Cloud LLM backend (Claude): Add support for Anthropic's Claude API as an alternative to Ollama for users who prefer a cloud-hosted model or don't want to run Ollama locally. Would use `claude-haiku-*` for speed and cost. Requires `ANTHROPIC_API_KEY` and the `anthropic` Python package.
- Smarter context resolution: Improve pronoun handling ("delete them", "move those") by parsing structured output from previous commands rather than passing raw truncated stdout to the LLM.
- Linux clipboard support: `[C]opy` currently uses `pbcopy` (macOS only); add fallback to `xclip`/`xsel` for Linux.
- Shell-state commands: Support `cd`, `export`, and `source` by emitting shell-eval-safe output that the wrapper script can `eval`, allowing these commands to affect the user's current session.
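A minimal sketch of that last item, assuming a simple dispatch on command prefix (the function name and the wrapper contract here are hypothetical, not a committed design):

```python
# Hypothetical sketch of shell-state support: instead of running
# cd/export/source in a subprocess, emit them for a wrapper to
# eval, e.g. the wrapper runs: eval "$(ask ...)"
SHELL_STATE_PREFIXES = ("cd ", "export ", "source ")

def plan(command):
    """Decide whether a command must be eval'd by the wrapper or can be run."""
    if command.startswith(SHELL_STATE_PREFIXES):
        return ("eval", command)   # printed for the wrapper to eval
    return ("run", command)        # safe to execute in a subprocess

print(plan("cd /var/log"))   # → ('eval', 'cd /var/log')
print(plan("ls -la"))        # → ('run', 'ls -la')
```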