From eaa7dd8bc43a2219a42c7c37ef0830066704f38a Mon Sep 17 00:00:00 2001 From: RychidM Date: Tue, 7 Apr 2026 10:19:32 +0000 Subject: [PATCH 1/4] Adds Ollama service configuration with Docker Compose --- services/ollama/.env | 17 ++++++ services/ollama/README.md | 101 +++++++++++++++++++++++++++++++++++ services/ollama/compose.yaml | 78 +++++++++++++++++++++++++++ 3 files changed, 196 insertions(+) create mode 100644 services/ollama/.env create mode 100644 services/ollama/README.md create mode 100644 services/ollama/compose.yaml diff --git a/services/ollama/.env b/services/ollama/.env new file mode 100644 index 00000000..686107cc --- /dev/null +++ b/services/ollama/.env @@ -0,0 +1,17 @@ +#version=1.1 +#URL=https://github.com/tailscale-dev/ScaleTail +#COMPOSE_PROJECT_NAME= # Optional: only use when running multiple deployments on the same infrastructure. + +# Service Configuration +SERVICE=ollama +IMAGE_URL=ollama/ollama:latest + +# Network Configuration +SERVICEPORT=11434 # Ollama's default API port. Uncomment the "ports:" section in compose.yaml to expose to LAN. +DNS_SERVER=9.9.9.9 # Preferred DNS server for Tailscale. Uncomment the "dns:" section in compose.yaml to enable. + +# Tailscale Configuration +TS_AUTHKEY= # Auth key from https://tailscale.com/admin/authkeys. See: https://tailscale.com/kb/1085/auth-keys#generate-an-auth-key for instructions. + +# Ollama-specific variables +OLLAMA_API_KEY= # Optional: set a secret key to restrict API access (leave blank to disable auth) diff --git a/services/ollama/README.md b/services/ollama/README.md new file mode 100644 index 00000000..36e39798 --- /dev/null +++ b/services/ollama/README.md @@ -0,0 +1,101 @@ +# Ollama with Tailscale Sidecar Configuration + +This Docker Compose configuration sets up [Ollama](https://ollama.com) with Tailscale as a sidecar container to keep the API reachable securely over your Tailnet. 
+ +## Ollama + +[Ollama](https://ollama.com) lets you run large language models (LLMs) locally — such as Llama 3, Mistral, and Gemma — with a simple API compatible with the OpenAI client format. Pairing it with Tailscale means you can access your local models from any device on your Tailnet (phone, laptop, remote machine) without exposing the API to the public internet. + +## Configuration Overview + +In this setup, the `tailscale-ollama` service runs Tailscale, which manages secure networking for Ollama. The `app-ollama` service uses Docker's `network_mode: service:tailscale` so all traffic is routed through the Tailscale network stack. The Ollama API remains Tailnet-only by default unless you explicitly expose the port to your LAN. + +An optional `yourNetwork` external Docker network is attached to the `tailscale` container. This allows other containers on the same host (such as Open WebUI or other LLM frontends) to reach Ollama via its Tailscale IP, keeping inter-container communication on the same overlay network. + +## Prerequisites + +- The host user must be in the `docker` group. +- The `/dev/net/tun` device must be available on the host (standard on most Linux systems). +- Pre-create the bind-mount directories before starting the stack to avoid Docker creating root-owned folders: + +```bash +mkdir -p config ts/state ollama-data +``` + +- If you use the optional `yourNetwork` network, create it first if it does not already exist: + +```bash +docker network create yourNetwork +``` + +If you don't use a shared proxy network, remove the `networks:` sections from `compose.yaml`. + +## Volumes + +| Path | Purpose | +|------|---------| +| `./config` | Tailscale serve config (`serve.json`) | +| `./ts/state` | Tailscale persistent state | +| `./ollama-data` | Downloaded Ollama models (can be large — ensure enough disk space) | + +## MagicDNS and HTTPS + +Tailscale Serve is pre-configured to proxy HTTPS on port 443 to Ollama's internal port 11434. To enable it: + +1. 
Uncomment `TS_ACCEPT_DNS=true` in the `tailscale` service environment.
2. Ensure your Tailnet has MagicDNS and HTTPS certificates enabled in the [Tailscale admin console](https://login.tailscale.com/admin/dns).
3. The `serve.json` config in `compose.yaml` uses `$TS_CERT_DOMAIN` automatically — no manual editing needed.

You can then reach Ollama at `https://ollama.<your-tailnet>.ts.net`, where `<your-tailnet>` is your Tailnet's MagicDNS domain.

## Port Exposure (LAN access)

By default, the `ports:` section is commented out — Ollama is only accessible over your Tailnet. If you also want LAN access (e.g. from devices not on Tailscale), uncomment it in `compose.yaml`:

```yaml
ports:
  - 0.0.0.0:11434:11434
```

This is optional and not required for Tailnet-only usage.

## API Key (Optional)

As of this writing, stock Ollama does not enforce authentication on its API; the `OLLAMA_API_KEY` variable is a placeholder for setups that front the service with an authenticating proxy. Treat Tailnet isolation as the primary access control, and leave the variable blank when unauthenticated access is acceptable (reasonable when the API is Tailnet-only).

## First-time Setup

After starting the stack, pull a model to get started:

```bash
docker exec app-ollama ollama pull llama3
```

You can then send requests to the API, replacing `<tailscale-ip-or-hostname>` with the sidecar's Tailnet address:

```bash
curl http://<tailscale-ip-or-hostname>:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello!"}'
```

Or if using HTTPS via Tailscale Serve:

```bash
curl https://ollama.<your-tailnet>.ts.net/api/generate \
  -d '{"model": "llama3", "prompt": "Hello!"}'
```

## Files to check

Check the following files before starting the stack, as some variables must be defined up front.

- `.env` — Set `TS_AUTHKEY` (required). Optionally set `OLLAMA_API_KEY`.
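One gotcha with the `curl` examples above: `/api/generate` streams newline-delimited JSON chunks by default. Adding `"stream": false` to the payload yields a single JSON object instead. A minimal sketch of the non-streaming request (the hostname in the commented line is a placeholder, not part of this setup):

```shell
# Build a non-streaming request body for Ollama's /api/generate endpoint.
PAYLOAD='{"model": "llama3", "prompt": "Hello!", "stream": false}'
echo "$PAYLOAD"

# Send it over the Tailnet (placeholder host -- substitute your MagicDNS name):
# curl -s "https://ollama.<your-tailnet>.ts.net/api/generate" -d "$PAYLOAD"
```

Without `"stream": false`, expect one JSON chunk per generated token; with it, the full response arrives in a single `response` field.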

## Useful Links

- [Ollama official site](https://ollama.com)
- [Ollama model library](https://ollama.com/library)
- [Ollama GitHub](https://github.com/ollama/ollama)
- [Tailscale auth keys](https://tailscale.com/kb/1085/auth-keys)
- [Tailscale Serve docs](https://tailscale.com/kb/1312/serve)
- [Open WebUI](https://github.com/open-webui/open-webui) — a popular browser-based UI for Ollama
diff --git a/services/ollama/compose.yaml b/services/ollama/compose.yaml
new file mode 100644
index 00000000..db7b1efa
--- /dev/null
+++ b/services/ollama/compose.yaml
@@ -0,0 +1,78 @@
configs:
  ts-serve:
    content: |
      {"TCP":{"443":{"HTTPS":true}},
      "Web":{"$${TS_CERT_DOMAIN}:443":
      {"Handlers":{"/":
      {"Proxy":"http://127.0.0.1:11434"}}}},
      "AllowFunnel":{"$${TS_CERT_DOMAIN}:443":false}}

services:
# Make sure you have updated/checked the .env file with the correct variables.
# All the ${ xx } need to be defined there.

  # Tailscale Sidecar Configuration
  tailscale:
    image: tailscale/tailscale:latest # Image to be used
    container_name: tailscale-${SERVICE} # Name for local container management
    hostname: ${SERVICE} # Name used within your Tailscale environment
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_SERVE_CONFIG=/config/serve.json # Tailscale Serve configuration to expose the web interface on your local Tailnet - remove this line if not required
      - TS_USERSPACE=false
      - TS_ENABLE_HEALTH_CHECK=true # Enable healthcheck endpoint: "/healthz"
      - TS_LOCAL_ADDR_PORT=127.0.0.1:41234 # The <ip>:<port> for the healthz endpoint
      #- TS_ACCEPT_DNS=true # Uncomment when using MagicDNS
      - TS_AUTH_ONCE=true
    configs:
      - source: ts-serve
        target: /config/serve.json
    volumes:
      - ./config:/config # Config folder used to store Tailscale files
      - ./ts/state:/var/lib/tailscale # Tailscale requirement
    devices:
      - /dev/net/tun:/dev/net/tun # Network configuration for Tailscale to work
    cap_add:
      - net_admin # Tailscale requirement
      - sys_module # Required to load kernel modules for Tailscale
    #ports:
    #  - 0.0.0.0:${SERVICEPORT}:${SERVICEPORT} # Binding port ${SERVICEPORT} to the local network - may be removed if only exposure to your Tailnet is required
    # If any DNS issues arise, use your preferred DNS provider by uncommenting the config below
    #dns:
    #  - ${DNS_SERVER}
    networks:
      - yourNetwork # Optional: connect to an existing proxy network so other containers can reach Ollama via its Tailscale IP
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:41234/healthz"] # Check Tailscale has a Tailnet IP and is operational
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 10s # Time to wait before starting health checks
    restart: always

  # Ollama
  application:
    image: ${IMAGE_URL} # Image to be used
    network_mode: service:tailscale # Sidecar configuration to route Ollama through Tailscale
    container_name: app-${SERVICE} # Name for local container management
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_API_KEY=${OLLAMA_API_KEY} # Optional: set an API key to restrict access
      - OLLAMA_KEEP_ALIVE=24h # Optional: keeps models loaded in memory (default is 5 min)
    volumes:
      - ./${SERVICE}-data:/root/.ollama # Stores downloaded models
    depends_on:
      tailscale:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "pgrep", "-f", "${SERVICE}"] # Check if Ollama process is running
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 30s # Time to wait before starting health checks
    restart: always

networks:
  yourNetwork:
    external: true # Assumes an existing external Docker network named "yourNetwork"

From 731f7665a453b0f44e7e55dd417807e4e38ceaf7 Mon Sep 17 00:00:00 2001
From: 
RychidM Date: Sun, 12 Apr 2026 03:02:40 +0000 Subject: [PATCH 2/4] Comment out optional network configurations and ollama API key in .env for clarity --- services/ollama/.env | 2 +- services/ollama/compose.yaml | 12 +++++++----- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/services/ollama/.env b/services/ollama/.env index 686107cc..d303b58d 100644 --- a/services/ollama/.env +++ b/services/ollama/.env @@ -14,4 +14,4 @@ DNS_SERVER=9.9.9.9 # Preferred DNS server for Tailscale. Uncomment the "dns:" se TS_AUTHKEY= # Auth key from https://tailscale.com/admin/authkeys. See: https://tailscale.com/kb/1085/auth-keys#generate-an-auth-key for instructions. # Ollama-specific variables -OLLAMA_API_KEY= # Optional: set a secret key to restrict API access (leave blank to disable auth) +# OLLAMA_API_KEY= # Optional: set a secret key to restrict API access (leave blank to disable auth) diff --git a/services/ollama/compose.yaml b/services/ollama/compose.yaml index db7b1efa..7c454541 100644 --- a/services/ollama/compose.yaml +++ b/services/ollama/compose.yaml @@ -41,8 +41,10 @@ services: # If any DNS issues arise, use your preferred DNS provider by uncommenting the config below #dns: # - ${DNS_SERVER} - networks: - - yourNetwork # Optional: connect to an existing proxy network so other containers can reach Ollama via its Tailscale IP + + # networks: + # - yourNetwork # Optional: connect to an existing proxy network so other containers can reach Ollama via its Tailscale IP + healthcheck: test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:41234/healthz"] # Check Tailscale has a Tailnet IP and is operational interval: 1m # How often to perform the check @@ -73,6 +75,6 @@ services: start_period: 30s # Time to wait before starting health checks restart: always -networks: - yourNetwork: - external: true # Assumes an existing external Docker network named "yourNetwork" +# networks: +# yourNetwork: +# external: true # Assumes an existing external Docker network named 
"yourNetwork" From 093e620d3d56abfbee296c3c1dd3de153d30355c Mon Sep 17 00:00:00 2001 From: RychidM Date: Sun, 12 Apr 2026 08:01:43 +0000 Subject: [PATCH 3/4] Add Ollama service to general README table and update compose file. --- README.md | 1 + services/ollama/compose.yaml | 18 +++++++++--------- 2 files changed, 10 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 67ea435b..dc687279 100644 --- a/README.md +++ b/README.md @@ -186,6 +186,7 @@ A huge thank you to all our contributors! ScaleTail wouldn’t be what it is tod | 🖥️ **Node-RED** | A flow-based development tool for visual programming. | [Details](services/nodered) | | 🖥️ **Portainer** | A lightweight management UI which allows you to easily manage your Docker environments. | [Details](services/portainer) | | 🔍 **searXNG** | A free internet metasearch engine which aggregates results from various search services. | [Details](services/searxng) | +| 🧠 **Ollama** | A self-hosted solution for running open large language models (LLMs) locally with an OpenAI-compatible API. | [Details](services/ollama) | ### 📈 Monitoring and Analytics diff --git a/services/ollama/compose.yaml b/services/ollama/compose.yaml index 7c454541..1650e48a 100644 --- a/services/ollama/compose.yaml +++ b/services/ollama/compose.yaml @@ -8,8 +8,8 @@ configs: "AllowFunnel":{"$${TS_CERT_DOMAIN}:443":false}} services: -# Make sure you have updated/checked the .env file with the correct variables. -# All the ${ xx } need to be defined there. + # Make sure you have updated/checked the .env file with the correct variables. + # All the ${ xx } need to be defined there. 
  # Tailscale Sidecar Configuration
  tailscale:
@@ -21,10 +21,10 @@ services:
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_SERVE_CONFIG=/config/serve.json # Tailscale Serve configuration to expose the web interface on your local Tailnet - remove this line if not required
      - TS_USERSPACE=false
-      - TS_ENABLE_HEALTH_CHECK=true # Enable healthcheck endpoint: "/healthz"
-      - TS_LOCAL_ADDR_PORT=127.0.0.1:41234 # The <ip>:<port> for the healthz endpoint
-      #- TS_ACCEPT_DNS=true # Uncomment when using MagicDNS
+      - TS_ENABLE_HEALTH_CHECK=true # Enable healthcheck endpoint: "/healthz"
+      - TS_LOCAL_ADDR_PORT=127.0.0.1:41234 # The <ip>:<port> for the healthz endpoint
      - TS_AUTH_ONCE=true
+      # - TS_ACCEPT_DNS=true # Uncomment when using MagicDNS
    configs:
      - source: ts-serve
        target: /config/serve.json
@@ -44,9 +44,9 @@ services:

      # networks:
      # - yourNetwork # Optional: connect to an existing proxy network so other containers can reach Ollama via its Tailscale IP
-
+
    healthcheck:
-      test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:41234/healthz"] # Check Tailscale has a Tailnet IP and is operational
+      test: [ "CMD", "wget", "--spider", "-q", "http://127.0.0.1:41234/healthz" ] # Check Tailscale has a Tailnet IP and is operational
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
@@ -60,15 +60,15 @@ services:
    container_name: app-${SERVICE} # Name for local container management
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
-      - OLLAMA_API_KEY=${OLLAMA_API_KEY} # Optional: set an API key to restrict access
-      - OLLAMA_KEEP_ALIVE=24h # Optional: keeps models loaded in memory (default is 5 min)
+      - OLLAMA_KEEP_ALIVE=24h # Optional: keeps models loaded in memory (default is 5 min)
+      # - OLLAMA_API_KEY=${OLLAMA_API_KEY} # Optional: set an API key to restrict access
    volumes:
      - ./${SERVICE}-data:/root/.ollama # Stores downloaded models
    depends_on:
      tailscale:
        condition: service_healthy
    healthcheck:
-      test: ["CMD", "pgrep", "-f", "${SERVICE}"] # Check if Ollama process is running
+      test: [ "CMD", "pgrep", "-f", "${SERVICE}" ] # Check if Ollama process is running
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy

From 8ac86625afe9670d524439dab7f9226bf87ecc1b Mon Sep 17 00:00:00 2001
From: crypt0rr <57799908+crypt0rr@users.noreply.github.com>
Date: Sun, 12 Apr 2026 12:51:59 +0200
Subject: [PATCH 4/4] Update .env and README for Ollama service configuration

- Add time zone setting for containers in .env
- Improve formatting of the configuration table in README
---
 services/ollama/.env      | 4 ++++
 services/ollama/README.md | 8 ++++----
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/services/ollama/.env b/services/ollama/.env
index d303b58d..a2ba5626 100644
--- a/services/ollama/.env
+++ b/services/ollama/.env
@@ -13,5 +13,9 @@ DNS_SERVER=9.9.9.9 # Preferred DNS server for Tailscale. Uncomment the "dns:" se
 # Tailscale Configuration
 TS_AUTHKEY= # Auth key from https://tailscale.com/admin/authkeys. See: https://tailscale.com/kb/1085/auth-keys#generate-an-auth-key for instructions.

+# Time zone setting for containers
+TZ=Europe/Amsterdam # See: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+
+# Any container environment variables are declared below. 
See https://docs.docker.com/compose/how-tos/environment-variables/ # Ollama-specific variables # OLLAMA_API_KEY= # Optional: set a secret key to restrict API access (leave blank to disable auth) diff --git a/services/ollama/README.md b/services/ollama/README.md index 36e39798..b363fc5b 100644 --- a/services/ollama/README.md +++ b/services/ollama/README.md @@ -32,10 +32,10 @@ If you don't use a shared proxy network, remove the `networks:` sections from `c ## Volumes -| Path | Purpose | -|------|---------| -| `./config` | Tailscale serve config (`serve.json`) | -| `./ts/state` | Tailscale persistent state | +| Path | Purpose | +| --------------- | ------------------------------------------------------------------ | +| `./config` | Tailscale serve config (`serve.json`) | +| `./ts/state` | Tailscale persistent state | | `./ollama-data` | Downloaded Ollama models (can be large — ensure enough disk space) | ## MagicDNS and HTTPS