A fully featured distributed task queue inspired by Celery — built with Go, Redis, and Docker.
This system supports:
- Multiple workers
- Automatic retries with exponential backoff
- Dead Letter Queue (DLQ)
- Result backend
- Job scheduling (like Celery Beat)
- Prometheus metrics
- REST API
- Dashboard endpoints
- Handler registry system (plug‑and‑play job types)
Designed for production-level reliability, but simple enough for learning and extension.
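The plug-and-play handler registry mentioned above can be sketched as a map from job type to handler function. This is a minimal illustration, not the project's exact API; the names `Register` and `Dispatch` are assumptions.

```go
package main

import (
	"errors"
	"fmt"
)

// Handler processes one job payload and returns a result.
type Handler func(payload map[string]any) (string, error)

// registry maps a job type name to its handler, so new job types
// are added by registration rather than by editing a switch statement.
var registry = map[string]Handler{}

// Register associates a job type with a handler.
func Register(jobType string, h Handler) {
	registry[jobType] = h
}

// Dispatch looks up and runs the handler for a job type.
func Dispatch(jobType string, payload map[string]any) (string, error) {
	h, ok := registry[jobType]
	if !ok {
		return "", errors.New("no handler registered for type: " + jobType)
	}
	return h(payload)
}

func main() {
	Register("email", func(p map[string]any) (string, error) {
		return fmt.Sprintf("sent email to %v", p["to"]), nil
	})
	res, err := Dispatch("email", map[string]any{"to": "user@example.com"})
	fmt.Println(res, err)
}
```

Adding a new job type is then a one-line `Register` call at startup.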
- Redis-backed distributed queue
- Main queue (`jobs`)
- Dead Letter Queue (DLQ) (`jobs:dead`)
- Reliable pop/push
- Concurrent worker pool (`N` workers per process)
- Job handler registry (no `switch`-case; plugin-style extensibility)
- Exponential backoff retry system
- Automatic DLQ routing
- Stores results in a Redis backend
- Graceful shutdown support
- Uses `robfig/cron`
- Works like Celery Beat
- Produces jobs at specific times
- Independent of workers (decoupled)
- Prometheus metrics: `jobs_processed_total`, `jobs_failed_total`
- Submit jobs via REST
- Get job results
- Get queue status
- Shows queue size
- Shows DLQ size
- List jobs in queues
- Full Docker Compose setup:
- Redis
- API
- Worker pool
- Scheduler
┌─────────────────────────┐
│ Scheduler │
│ (Cron Producer) │
└─────────────┬───────────┘
│
▼
┌──────────────────┐
│   Redis Queue    │
│ jobs + jobs:dead │
└──────────────────┘
│
▼
┌─────────────────────────────────┐
│ Workers │
│ - handler registry │
│ - retry w/ backoff │
│ - DLQ routing │
│ - result backend │
└─────────────────────────────────┘
│
▼
┌────────────────┐
│ Result Backend │
│ result:<id> │
└────────────────┘
API → Push Jobs → Workers → Save Results → Query Results
Dashboard → Inspect Queues + DLQ
Metrics → Prometheus
go-task-queue/
├── cmd/
│ ├── api/ # API server entry point
│ ├── worker/ # Worker entry point
│ └── scheduler/ # Cron-based scheduler entry
│
├── internal/
│ ├── jobs/ # Job model + types
│ ├── queue/ # Redis queue + DLQ logic
│ ├── worker/ # Worker pool + handlers + retry
│ ├── scheduler/ # Cron scheduler logic
│ ├── results/ # Redis result backend
│ ├── metrics/ # Prometheus metrics
│ └── dashboard/ # HTTP dashboard
│
├── docker-compose.yml
├── Dockerfile
├── go.mod
└── README.md
- Docker & Docker Compose
- Go 1.19+ (for local development)
- Redis (auto-launched via docker-compose)
```bash
git clone <repository-url>
cd go-task-queue
docker-compose up --build
```

This will start:
- Redis (6379)
- API server (8080)
- Worker pool
- Scheduler
- Prometheus metrics endpoint
- Dashboard endpoint
POST /jobs

```json
{
  "type": "email",
  "payload": { ... },
  "max_retries": 5
}
```
GET /jobs/{id}
GET /dashboard/queue
GET /dashboard/dead
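A job can be submitted from Go by building the `POST /jobs` request shown above; `localhost:8080` matches the docker-compose port mapping, and the helper name is illustrative.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newSubmitRequest builds the POST /jobs request from this README.
func newSubmitRequest(jobType string, payload map[string]any, maxRetries int) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"type":        jobType,
		"payload":     payload,
		"max_retries": maxRetries,
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/jobs", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newSubmitRequest("email", map[string]any{"to": "user@example.com"}, 5)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path) // POST /jobs
	// To actually submit: resp, err := http.DefaultClient.Do(req)
}
```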
Environment variables:

- `REDIS_ADDR` (default: `localhost:6379`)
- `REDIS_QUEUE` (default: `jobs`)
- `WORKER_COUNT` (workers per worker process)
Configured inside docker-compose.yml.
Add more workers:
```bash
docker-compose up --scale worker=5
```

Workers will automatically load-balance because they all pop from the same Redis queue.
```bash
go mod download
docker-compose up redis
go run ./cmd/api
go run ./cmd/worker
go run ./cmd/scheduler
go test ./...
```
Add your preferred license.
Pull requests welcome!
Feel free to add new job types, new queue strategies, or new transports.