289 changes: 289 additions & 0 deletions mpg/client-configuration.html.md
@@ -0,0 +1,289 @@
---
title: "Client-Side Connection Configuration"
layout: docs
nav: mpg
date: 2026-03-25
---

This guide covers how to configure your application's database client for reliable, performant connections to Fly Managed Postgres. It explains why certain settings matter and provides configuration examples for popular libraries across multiple languages.

For a quick summary of the essentials, see [Connect Your Client](/docs/mpg/connect-your-client/).

## Why client configuration matters

Your Fly.io apps and your Fly.io Postgres databases both sit behind Fly.io's proxy, which routes public and private traffic. When the proxy restarts (for example, during a proxy deployment), it does its best to drain connections first. But Postgres has no protocol-level mechanism to tell clients to stop sending queries on a particular connection, so graceful handoff depends on client-side configuration.

The proxy's shutdown timeout is **10 minutes**. Any connection that remains open after that is terminated. If your application holds connections for longer than this — which is the default behavior of most connection pools — you might run into errors like `tcp recv (idle): closed` or `ECONNRESET` during proxy deployments.

The fix is straightforward: configure your connection pool to **proactively recycle connections** on a shorter interval than the proxy's timeout.

## Recommended settings

| Setting | Recommended value | Why |
|---------|-------------------|-----|
| Max connection lifetime | **600s** (10 min) | Recycle connections before the proxy closes them |
| Idle connection timeout | **300s** (5 min) | Release unused connections before they're forcibly closed |
| Prepared statements | **Disabled** in transaction mode | PgBouncer can't track per-connection prepared statement state |
| Connection retries | **Enabled** with backoff | Handle transient connection drops during proxy restarts |

## PgBouncer mode and your client

All MPG clusters include PgBouncer for connection pooling. The pool mode you choose on the cluster side affects what your client can do. See [Cluster Configuration Options](/docs/mpg/configuration/) for how to change modes.

**Session mode** (default): A PgBouncer connection is held for the entire client session. You get full PostgreSQL feature compatibility — prepared statements, advisory locks, `LISTEN/NOTIFY`, and multi-statement transactions all work normally. The tradeoff is lower connection reuse.

**Transaction mode**: PgBouncer assigns a connection per transaction and returns it to the pool afterward. Higher throughput and connection reuse, but:

- **Named prepared statements** don't work — you must use unnamed/extended query protocol
- **Advisory locks** are not session-scoped — use the direct URL for migrations
- **`LISTEN/NOTIFY`** doesn't work — use an alternative notifier (see the [Phoenix guide](/docs/mpg/guides-examples/phoenix-guide/) for Oban examples)
- **`SET` commands** affect only the current transaction

If your ORM or driver supports it, transaction mode with unnamed prepared statements is the better choice for most web applications.

## Language-specific configuration

<details data-render="markdown">
<summary>Node.js — pg (node-postgres)</summary>

```javascript
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,

  // Pool sizing
  max: 10,

  // Connection lifecycle
  idleTimeoutMillis: 300_000, // 5 min — close idle connections
  maxLifetimeMillis: 600_000, // 10 min — recycle before proxy timeout
  connectionTimeoutMillis: 5_000, // 5s — fail fast on connection attempts
});
```

`maxLifetimeMillis` was added in `pg-pool` 3.6.0 (included with `pg` 8.11+). If you're on an older version, upgrade — this setting is critical for reliable connections on Fly.

</details>

<details data-render="markdown">
<summary>Node.js — Prisma</summary>

Add `pgbouncer=true` and connection pool parameters to your connection string:

```env
DATABASE_URL="postgresql://fly-user:<password>@pgbouncer.<cluster>.flympg.net/fly-db?pgbouncer=true&connection_limit=10&pool_timeout=30"
```

The `pgbouncer=true` parameter tells Prisma to disable prepared statements and adjust its connection handling for PgBouncer compatibility.

In your Prisma schema:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
```

Prisma manages its own connection pool internally. The `connection_limit` parameter controls the pool size per Prisma client instance.

</details>

<details data-render="markdown">
<summary>Python — SQLAlchemy</summary>

```python
import os
from sqlalchemy import create_engine

engine = create_engine(
    os.environ["DATABASE_URL"],

    # Pool sizing
    pool_size=10,
    max_overflow=5,

    # Connection lifecycle
    pool_recycle=600,    # 10 min — recycle connections before proxy timeout
    pool_timeout=30,     # 30s — wait for available connection
    pool_pre_ping=True,  # verify connections before use

    # Disable prepared statements for PgBouncer transaction mode
    # (uncomment if using transaction mode)
    # connect_args={"prepare_threshold": 0},  # for psycopg3
    # connect_args={"options": "-c plan_cache_mode=force_custom_plan"},  # alternative
)
```

`pool_recycle` is the max connection lifetime — SQLAlchemy will close and replace connections older than this value.

`pool_pre_ping` issues a lightweight `SELECT 1` before each connection checkout. This adds a small round-trip but catches stale connections before your query fails.

</details>

<details data-render="markdown">
<summary>Python — psycopg3 connection pool</summary>

```python
import os
from psycopg_pool import ConnectionPool

pool = ConnectionPool(
    conninfo=os.environ["DATABASE_URL"],

    # Pool sizing
    min_size=2,
    max_size=10,

    # Connection lifecycle
    max_lifetime=600,  # 10 min — recycle before proxy timeout
    max_idle=300,      # 5 min — close idle connections
    reconnect_timeout=5,

    # Disable prepared statements for PgBouncer transaction mode
    # (uncomment if using transaction mode)
    # kwargs={"prepare_threshold": 0},
)
```

</details>

<details data-render="markdown">
<summary>Go — database/sql with pgx</summary>

```go
import (
	"database/sql"
	"log"
	"os"
	"time"

	_ "github.com/jackc/pgx/v5/stdlib"
)

// Open the connection pool.
db, err := sql.Open("pgx", os.Getenv("DATABASE_URL"))
if err != nil {
	log.Fatal(err)
}

// Pool sizing
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)

// Connection lifecycle
db.SetConnMaxLifetime(10 * time.Minute) // recycle before proxy timeout
db.SetConnMaxIdleTime(5 * time.Minute)  // close idle connections
```

Go's `database/sql` handles connection recycling natively. `SetConnMaxLifetime` is the key setting — it ensures no connection is reused beyond the specified duration.

To disable prepared statements for PgBouncer transaction mode, use the `default_query_exec_mode` connection parameter:

```go
connStr := os.Getenv("DATABASE_URL") + "?default_query_exec_mode=exec"
db, err := sql.Open("pgx", connStr)
```

</details>

<details data-render="markdown">
<summary>Ruby — ActiveRecord (Rails)</summary>

```yaml
# config/database.yml
production:
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>
  idle_timeout: 300 # 5 min — close idle connections
  checkout_timeout: 5
  prepared_statements: false # required for PgBouncer transaction mode
```

For connection max lifetime, Rails 7.2+ supports `max_lifetime` natively:

```yaml
# config/database.yml (Rails 7.2+)
production:
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>
  max_lifetime: 600 # 10 min — recycle before proxy timeout
  idle_timeout: 300
  checkout_timeout: 5
  prepared_statements: false
```

On older Rails versions, connections will still be recycled by the idle timeout if your app has enough traffic. For low-traffic apps on older Rails, consider the [`activerecord-connection_reaper`](https://github.com/mperham/activerecord-connection_reaper) gem or a periodic reconnection task.

</details>

<details data-render="markdown">
<summary>Elixir/Phoenix — Ecto</summary>

```elixir
# config/runtime.exs
config :my_app, MyApp.Repo,
  url: System.fetch_env!("DATABASE_URL"),
  pool_size: 8,
  queue_target: 5_000,
  queue_interval: 5_000,
  prepare: :unnamed # required for PgBouncer transaction mode
```

For comprehensive Phoenix setup including migrations, Oban configuration, and Ecto-specific troubleshooting, see the [Phoenix with Managed Postgres](/docs/mpg/guides-examples/phoenix-guide/) guide.

<div class="note icon">
**Note on connection lifetime in Ecto:** Postgrex does not currently support a max connection lifetime setting. Connections are recycled only when they encounter errors or are explicitly disconnected. The idle timeout and PgBouncer's own `server_lifetime` setting (default 1800s) provide some protection, but for the most reliable behavior during proxy restarts, a `max_lifetime` option in Postgrex/DBConnection would be ideal. This is a known gap.
</div>

<!-- TODO: Update this section when postgrex adds max_lifetime support -->

</details>

### Common connection errors

**`FATAL: too many connections for role`** or **`remaining connection slots are reserved for roles with the SUPERUSER attribute`**: Your total pool size across all processes exceeds the PgBouncer connection limit. To fix:

- Reduce `pool_size` / `max` in each process
- Switch to **transaction** pool mode for better connection reuse
- Check for connection leaks (connections opened but never returned to the pool)

**Calculating your total pool usage:** If you have 3 web processes with `pool_size: 10` and 2 worker processes with `pool_size: 5`, your total is `(3 × 10) + (2 × 5) = 40` connections.
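The arithmetic above can be expressed as a quick sanity check (a sketch; the process names and pool sizes are illustrative):

```python
# Illustrative fleet: process name -> (process count, pool size per process).
processes = {
    "web": (3, 10),    # 3 web processes, pool_size 10 each
    "worker": (2, 5),  # 2 worker processes, pool_size 5 each
}

# Total connections your fleet can open against PgBouncer.
total = sum(count * pool for count, pool in processes.values())
print(total)  # 40
```

Keep this total below your cluster's PgBouncer connection limit, with headroom for one-off tasks like migrations.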

## Troubleshooting

### `tcp recv (idle): closed` or `tcp recv (idle): timeout`

**Cause:** The proxy or PgBouncer closed an idle connection. This happens during proxy deployments (the proxy drains connections on restart) or when PgBouncer's idle timeout is reached.

**Fix:** Set your client's idle timeout to **300 seconds** (5 min) and max connection lifetime to **600 seconds** (10 min). Most connection pools reconnect automatically when a connection is closed — these errors are transient. If you're seeing them frequently outside of proxy deployments, reduce your pool size so fewer connections sit idle.

### `ECONNRESET` or "connection reset by peer"

**Cause:** A long-lived connection was terminated, typically during a proxy restart. Connections that stay open through a proxy drain may be forcibly closed.

**Fix:** Set max connection lifetime to **600 seconds** (10 min) or less so connections are recycled before the proxy needs to kill them. Enable retry logic with backoff for transient failures.
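If your driver doesn't retry for you, a minimal backoff wrapper can look like this (a sketch, assuming your driver raises `ConnectionError` on transient failures; substitute your driver's actual connect call and exception type):

```python
import random
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Call `connect`, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            # Exponential backoff with jitter: 0.5s, 1s, 2s, ... plus noise.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Many pools already do this (for example, `reconnect_timeout` in `psycopg_pool` above); prefer the built-in mechanism when your stack has one.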

### `FATAL 08P01 protocol_violation` on login

**Cause:** Your client is sending named prepared statements through PgBouncer in transaction mode. PgBouncer can't route prepared statements to the correct backend connection in this mode.

**Fix:** Disable named prepared statements in your client configuration:
- **Node.js (pg):** This is the default behavior — no change needed
- **Prisma:** Add `?pgbouncer=true` to your connection string
- **Python (psycopg3):** Set `prepare_threshold=0`
- **Go (pgx):** Use `default_query_exec_mode=exec`
- **Ruby (ActiveRecord):** Set `prepared_statements: false`
- **Elixir (Ecto):** Set `prepare: :unnamed`

### `prepared statement "..." does not exist`

**Cause:** Same as above — named prepared statements being used with PgBouncer in transaction mode.

**Fix:** Same as the `protocol_violation` fix above.

### Connection hangs on startup

**Cause:** DNS resolution failure on Fly's internal IPv6 network. Your app can't resolve the `.flympg.net` address.

**Fix:** Ensure your app is configured for IPv6. For Elixir apps, see the [IPv6 settings guide](https://fly.io/docs/elixir/getting-started/#important-ipv6-settings). For other runtimes, verify that your DNS resolver supports AAAA records on the Fly private network.
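As a quick check from inside your app's environment, you can test whether a hostname resolves to an IPv6 address (a sketch; substitute your cluster's actual hostname):

```python
import socket

def has_aaaa_record(host):
    """Return True if `host` resolves to at least one IPv6 address."""
    try:
        return len(socket.getaddrinfo(host, 5432, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

# e.g. has_aaaa_record("pgbouncer.<cluster>.flympg.net")
```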
4 changes: 3 additions & 1 deletion mpg/configuration.html.md
@@ -13,7 +13,9 @@ date: 2025-08-18

## Connection Pooling

All Managed Postgres clusters come with PGBouncer for connection pooling, which helps manage database connections efficiently. You can configure how PGBouncer assigns connections to clients by changing the pool mode.

For configuring your application's connection pool settings (lifetime, idle timeout, pool size, and language-specific examples), see [Client-Side Connection Configuration](/docs/mpg/client-configuration/).

### Pool Mode Options

48 changes: 48 additions & 0 deletions mpg/connect-your-client.html.md
@@ -0,0 +1,48 @@
---
title: Connect Your Client
layout: docs
nav: mpg
date: 2026-03-25
---

After [creating your MPG cluster](/docs/mpg/create-and-connect/) and attaching your app, configure your database client for reliable connections. These settings prevent dropped connections during routine proxy maintenance and keep your connection pool healthy.

## Use the pooled connection URL

Always connect your application through PgBouncer using the **pooled URL** (the default when you attach an app). The pooled URL looks like:

```
postgresql://fly-user:<password>@pgbouncer.<cluster>.flympg.net/fly-db
```

Use the **direct URL** (`direct.<cluster>.flympg.net`) only for migrations, advisory locks, or `LISTEN/NOTIFY` — operations that require session-level stickiness.
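One common pattern is to read both URLs from the environment and route only session-sticky work to the direct one (a sketch; the `DIRECT_DATABASE_URL` variable name is illustrative):

```python
import os

def database_url(for_migration=False):
    """Pooled URL for app traffic; direct URL for session-sticky work."""
    if for_migration:
        # Migrations take advisory locks, which need session stickiness.
        return os.environ["DIRECT_DATABASE_URL"]
    return os.environ["DATABASE_URL"]
```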

## Set connection lifetime and idle timeout

Fly's edge proxy mediates connections between your app and your database. During routine deployments, the proxy restarts and long-lived connections are severed. Set your client's **max connection lifetime** so connections are recycled before the proxy needs to kill them.

| Setting | Recommended value | Why |
|---------|-------------------|-----|
| Max connection lifetime | **600 seconds** (10 min) | Recycle connections before the proxy closes them |
| Idle connection timeout | **300 seconds** (5 min) | Release unused connections before they're forcibly closed |

## Keep your pool size modest

Match your pool size to your plan's capacity. Oversized pools waste PgBouncer slots and can trigger connection limit errors.

| Plan tier | Suggested pool size per process |
|-----------|-------------------------------|
| Basic / Starter | 5–10 |
| Launch and above | 10–20 |

If you run multiple processes (e.g., web + background workers), the total across all processes should stay within these ranges.

## Disable prepared statements in transaction mode

If your PgBouncer pool mode is set to **Transaction** (required for Ecto; recommended for high-throughput apps), you must disable named prepared statements in your client. PgBouncer can't track prepared statements across transactions.

See [Cluster Configuration Options](/docs/mpg/configuration/) for how to change your pool mode.

## Next steps

For language-specific configuration examples (Node.js, Python, Go, Ruby, Elixir), detailed troubleshooting, and connection limit details, see the full [Client-Side Connection Configuration](/docs/mpg/client-configuration/) guide.
4 changes: 3 additions & 1 deletion mpg/guides-examples/phoenix-guide.html.md
@@ -6,6 +6,8 @@ date: 2025-09-16
author: Kaelyn
---

For general connection configuration that applies to all languages — connection lifetime, idle timeouts, proxy restart behavior, and troubleshooting — see [Client-Side Connection Configuration](/docs/mpg/client-configuration/).

This guide explains the key **Managed Postgres (MPG)-specific adjustments** you need when connecting a Phoenix app. We'll focus on:

1. Connection Pooling Settings
@@ -114,7 +116,7 @@ Older versions required the Repeater plugin. Since Oban 2.14 (2023), polling fal

### Common errors and fixes

- `tcp recv (idle): closed` or `tcp recv (idle): timeout` — These occur when the Fly proxy or PgBouncer closes an idle connection, often during routine proxy deployments. Ecto reconnects automatically, so these are transient. To reduce their frequency, lower your pool size so fewer connections sit idle. For a full explanation of why this happens and how to configure connection lifetime and idle timeouts, see [Client-Side Connection Configuration — Troubleshooting](/docs/mpg/client-configuration/#troubleshooting).
- `FATAL 08P01 protocol_violation` on login — Set `prepare: :unnamed` and ensure PgBouncer is in Transaction mode.
- Oban jobs not running — Use a non-Postgres notifier (PG or Phoenix) behind PgBouncer, or run Oban on a direct Repo. On Oban ≥ 2.14, do not add Repeater (polling fallback is automatic when PubSub isn't available).
- Migrations hanging or failing — Run migrations with the direct database URL (via `release_command` or a one-off SSH command), not through PgBouncer.
10 changes: 9 additions & 1 deletion partials/_mpg_nav.html.erb
@@ -15,7 +15,15 @@
    open: true,
    links: [
      { text: "Create and Connect to a Managed Postgres Cluster", path: "/docs/mpg/create-and-connect/" },
      { text: "Cluster Configuration Options", path: "/docs/mpg/configuration/" },
      { text: "Connect Your Client", path: "/docs/mpg/connect-your-client/" }
    ]
  },
  {
    title: "Connection & Clients",
    open: true,
    links: [
      { text: "Client-Side Connection Configuration", path: "/docs/mpg/client-configuration/" }
    ]
  },
  {