A rate limiting service built with Express, Redis, and TypeScript. Uses the sliding window log algorithm with Redis sorted sets to track and limit API requests per user.
Every incoming request to /api/* goes through the rate limiter middleware. The middleware:
- Identifies the user by their API key (falls back to IP if no key is provided)
- Looks up the user's tier and its rate limit config from Redis
- Checks how many requests the user has made in the current time window
- Either lets the request through or returns a `429 Too Many Requests`
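The window check maps naturally onto a Redis sorted set whose scores are timestamps. A minimal sketch of that check, assuming an ioredis-style client (the function name, key layout, and `RedisLike` interface are illustrative, not the project's actual code):

```typescript
// Sliding window log sketch: one sorted set per client, member = unique
// request id, score = timestamp in ms. Entries older than the window are
// pruned, and what's left is the request count. RedisLike mirrors just
// the slice of ioredis we need; all names here are illustrative.
interface RedisLike {
  multi(): RedisPipeline;
}

interface RedisPipeline {
  zremrangebyscore(key: string, min: number, max: number): RedisPipeline;
  zcard(key: string): RedisPipeline;
  zadd(key: string, score: number, member: string): RedisPipeline;
  pexpire(key: string, ms: number): RedisPipeline;
  exec(): Promise<Array<[Error | null, unknown]> | null>;
}

async function checkSlidingWindow(
  redis: RedisLike,
  key: string,      // e.g. "ratelimit:<api-key>"
  limit: number,    // tier's max_requests
  windowMs: number, // tier's window_seconds * 1000
  now = Date.now()
): Promise<{ allowed: boolean; remaining: number }> {
  const replies = await redis
    .multi()
    .zremrangebyscore(key, 0, now - windowMs)  // drop entries outside the window
    .zcard(key)                                // how many remain?
    .zadd(key, now, `${now}-${Math.random()}`) // log this request
    .pexpire(key, windowMs)                    // let idle clients expire
    .exec();

  const priorCount = Number(replies?.[1]?.[1] ?? 0); // ZCARD reply
  return {
    allowed: priorCount < limit,
    remaining: Math.max(0, limit - priorCount - 1),
  };
}
```

One quirk of this sketch: it logs a request even when blocking it, so a client that keeps hammering keeps pushing its own reset out. Only counting allowed requests is the other common choice.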
Rate limit info is included in response headers on every request:

```
X-RateLimit-Limit: 5
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1711234627
```
When a request gets blocked:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 47

{ "error": "Rate limit exceeded", "limit": 5, "remaining": 0, "retry_after": 47 }
```
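On the response side, the middleware has to derive those headers from the window state before deciding to forward or reject. A hedged sketch of that step (the `Res` interface is a trimmed stand-in for Express's `Response`, and the field names are guesses):

```typescript
// Sketch of the response side: attach X-RateLimit-* headers on every
// request, and send 429 + Retry-After when the window is exhausted.
// Res mirrors the slice of Express's Response we need; illustrative only.
interface Res {
  set(headers: Record<string, string>): void;
  status(code: number): Res;
  json(body: unknown): void;
}

interface WindowState {
  limit: number;             // tier's max_requests
  count: number;             // requests seen this window, including this one
  retryAfterSeconds: number; // seconds until the oldest entry expires
  resetUnix: number;         // Unix time when the window resets
}

function applyRateLimitHeaders(res: Res, s: WindowState): boolean {
  res.set({
    "X-RateLimit-Limit": String(s.limit),
    "X-RateLimit-Remaining": String(Math.max(0, s.limit - s.count)),
    "X-RateLimit-Reset": String(s.resetUnix),
  });
  if (s.count <= s.limit) return true; // allowed; caller calls next()
  res.set({ "Retry-After": String(s.retryAfterSeconds) });
  res.status(429).json({
    error: "Rate limit exceeded",
    limit: s.limit,
    remaining: 0,
    retry_after: s.retryAfterSeconds,
  });
  return false;
}
```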
Built with:

- Express 5 — HTTP server and routing
- Redis — stores request timestamps (sorted sets) and tier/key configs (hashes)
- ioredis — Redis client for Node.js
- TypeScript — type safety
- tsx — runs TypeScript directly without a build step
Run with Docker:

```
cd limitron
docker compose up --build
```

That's it. Redis and the app both start up. Server runs on http://localhost:3000.
To run without Docker, make sure you have Redis running locally on port 6379.

```
cd limitron
npm install
cp .env.example .env
npx tsx src/index.ts
```

GET /api/data — dummy endpoint to test rate limiting
```
curl http://localhost:3000/api/data
```

GET /api/health — checks if Redis is connected

```
curl http://localhost:3000/api/health
```

GET /api/status — shows your current rate limit usage without consuming a request

```
curl -H "X-API-Key: your-key" http://localhost:3000/api/status
```

POST /admin/tiers — create a rate limit tier

```
curl -X POST http://localhost:3000/admin/tiers \
  -H "Content-Type: application/json" \
  -d '{"name": "free", "max_requests": 5, "window_seconds": 60}'
```

GET /admin/tiers — list all tiers

```
curl http://localhost:3000/admin/tiers
```

POST /admin/keys — generate a new API key

```
curl -X POST http://localhost:3000/admin/keys \
  -H "Content-Type: application/json" \
  -d '{"tier": "free"}'
```

GET /admin/keys — list all API keys

```
curl http://localhost:3000/admin/keys
```

Create a tier with a low limit, generate a key, then hit the endpoint a bunch of times:
```
# create a tier that allows 5 requests per minute
curl -X POST http://localhost:3000/admin/tiers \
  -H "Content-Type: application/json" \
  -d '{"name": "free", "max_requests": 5, "window_seconds": 60}'

# generate an API key on that tier
curl -X POST http://localhost:3000/admin/keys \
  -H "Content-Type: application/json" \
  -d '{"tier": "free"}'

# copy the key from the response

# burst test — send 10 requests, first 5 pass, rest get 429
for i in $(seq 1 10); do
  echo "Request $i:"
  curl -s -H "X-API-Key: YOUR_KEY" http://localhost:3000/api/data
  echo ""
done
```

Without an API key, requests are identified by IP and use the default limits from .env (100 requests per 60 seconds).
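The key-or-IP fallback could look roughly like this. The `DEFAULT_*` env variable names are assumptions, not necessarily what `.env.example` uses:

```typescript
// Sketch of client identification: prefer the API key's tier config,
// fall back to the client IP plus the .env defaults. Env var names and
// the id prefixes are guesses, not the project's actual code.
interface LimitConfig {
  maxRequests: number;
  windowSeconds: number;
}

function identifyClient(
  apiKey: string | undefined,
  ip: string,
  lookupTier: (key: string) => LimitConfig | undefined,
  defaults: LimitConfig = {
    maxRequests: Number(process.env.DEFAULT_MAX_REQUESTS ?? 100),
    windowSeconds: Number(process.env.DEFAULT_WINDOW_SECONDS ?? 60),
  }
): { id: string; config: LimitConfig } {
  if (apiKey) {
    const tier = lookupTier(apiKey);
    if (tier) return { id: `key:${apiKey}`, config: tier };
    // Unknown keys fall through to IP limits rather than erroring out,
    // a design choice in this sketch, not necessarily the project's.
  }
  return { id: `ip:${ip}`, config: defaults };
}
```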
```
src/
├── config/
│   └── redis.ts         # redis connection
├── middleware/
│   └── rateLimiter.ts   # sliding window logic + express middleware
├── routes/
│   ├── api.ts           # protected endpoints
│   └── admin.ts         # key and tier management
├── internal/
│   └── types.ts         # shared interfaces
└── index.ts             # entry point, wires everything together
```
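For orientation, the shared interfaces in `src/internal/types.ts` plausibly mirror the fields the admin API accepts. A guess at their shape, not the file's actual contents:

```typescript
// Guessed shapes for src/internal/types.ts, inferred from the admin API
// payloads above; illustrative only.
interface Tier {
  name: string;           // e.g. "free"
  max_requests: number;   // allowed requests per window
  window_seconds: number; // window length in seconds
}

interface ApiKeyRecord {
  key: string;  // the generated API key
  tier: string; // name of the tier it belongs to
}
```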
Concepts covered:
- Redis sorted sets and how they map to the sliding window log algorithm
- Writing Express middleware that intercepts requests before they hit route handlers
- HTTP 429 status code and standard rate limit headers (X-RateLimit-Limit, Remaining, Reset, Retry-After)
- Tier-based access control with API keys
- Factory pattern for configurable middleware
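The last item, the factory pattern, means the middleware is produced by a function that closes over its options. A minimal sketch with an in-memory check swapped in for Redis (all names illustrative):

```typescript
// Factory pattern sketch: createRateLimiter takes config and returns the
// actual middleware function. Req/Res/Next are trimmed stand-ins for
// Express's types; the pluggable `check` keeps storage (Redis, memory,
// ...) out of the factory itself.
interface Req { headers: Record<string, string | undefined>; ip: string }
interface Res { status(code: number): Res; json(body: unknown): void }
type Next = () => void;

interface LimiterOptions {
  maxRequests: number;
  windowSeconds: number;
  check: (id: string, opts: LimiterOptions) => Promise<boolean>;
}

function createRateLimiter(opts: LimiterOptions) {
  return async (req: Req, res: Res, next: Next): Promise<void> => {
    const id = req.headers["x-api-key"] ?? req.ip; // key first, IP fallback
    if (await opts.check(id, opts)) return next();
    res.status(429).json({ error: "Rate limit exceeded" });
  };
}
```

In the real project the check would be the Redis sliding window, and something like `app.use('/api', createRateLimiter(options))` is how a factory like this plugs into Express.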