From 87e14eeebd6bb0e9fa3a7e753294fa7626a7863b Mon Sep 17 00:00:00 2001 From: Jon Phenow Date: Wed, 25 Mar 2026 16:11:49 -0500 Subject: [PATCH 1/5] docs: add client configuration guides for MPG MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This adds comprehensive documentation for configuring Managed Postgres client connections, addressing a critical gap where users lacked guidance on preventing dropped connections during proxy maintenance. The core problem: Fly's edge proxy restarts periodically (typically during deployments), and connections held longer than the proxy's 15-minute shutdown timeout are forcibly closed. Without proper configuration, most applications encounter `ECONNRESET` or `tcp recv (idle): closed` errors during these restarts. Connection pool configuration is the fix, but it was poorly documented across the MPG docs. This change adds two guides. "Connect Your Client" is a quick-start checklist covering the essential settings: max connection lifetime (600s), idle timeout (300s), pool size recommendations, and prepared statement handling. "Client-Side Connection Configuration" is a comprehensive reference with language-specific examples for Node.js, Python, Go, Ruby, and Elixir, detailed explanations of how PgBouncer modes affect client behavior, connection limit tables per plan tier, and a full troubleshooting section addressing common errors. The navigation has been restructured to surface both guides prominently, and existing guides (cluster configuration, Phoenix) now cross-link to the new documentation. This makes the recommended connection settings discoverable at the point where users need them most — when connecting their applications to MPG. 
--- mpg/client-configuration.html.md | 288 ++++++++++++++++++++++ mpg/configuration.html.md | 4 +- mpg/connect-your-client.html.md | 48 ++++ mpg/guides-examples/phoenix-guide.html.md | 4 +- partials/_mpg_nav.html.erb | 10 +- 5 files changed, 351 insertions(+), 3 deletions(-) create mode 100644 mpg/client-configuration.html.md create mode 100644 mpg/connect-your-client.html.md diff --git a/mpg/client-configuration.html.md b/mpg/client-configuration.html.md new file mode 100644 index 0000000000..6ebd035750 --- /dev/null +++ b/mpg/client-configuration.html.md @@ -0,0 +1,288 @@ +--- +title: "Client-Side Connection Configuration" +layout: docs +nav: mpg +date: 2026-03-25 +--- + +This guide covers how to configure your application's database client for reliable, performant connections to Fly Managed Postgres. It explains why certain settings matter and provides configuration examples for popular libraries across multiple languages. + +For a quick summary of the essentials, see [Connect Your Client](/docs/mpg/connect-your-client/). + +## Why client configuration matters + +Your application connects to Managed Postgres through Fly's edge proxy and PgBouncer. The proxy handles TLS termination and routes traffic across Fly's internal network. Periodically — typically during proxy deployments — the proxy needs to restart. + +For HTTP/2 and WebSocket connections, the proxy sends a `GOAWAY` frame that tells clients to gracefully finish in-flight requests and open new connections. The PostgreSQL wire protocol has no equivalent mechanism. When the proxy restarts, it waits for existing connections to drain, but it can't tell your Postgres client to stop sending queries on the current connection. + +The proxy's shutdown timeout is **15 minutes**. Any connection that remains open after that is terminated. 
If your application holds connections for longer than this — which is the default behavior of most connection pools — you'll see errors like `tcp recv (idle): closed` or `ECONNRESET` during proxy deployments. + +The fix is straightforward: configure your connection pool to **proactively recycle connections** on a shorter interval than the proxy's timeout. + +## Recommended settings + +| Setting | Recommended value | Why | +|---------|-------------------|-----| +| Max connection lifetime | **600s** (10 min) | Connections recycle before the proxy's 15-min shutdown timeout | +| Idle connection timeout | **300s** (5 min) | Releases unused connections before they're forcibly closed | +| Pool size | **5–10** (Basic/Starter), **10–20** (Launch+) | Match your plan's PgBouncer capacity | +| Prepared statements | **Disabled** in transaction mode | PgBouncer can't track per-connection prepared statement state | +| Connection retries | **Enabled** with backoff | Handle transient connection drops during proxy restarts | + +## PgBouncer mode and your client + +All MPG clusters include PgBouncer for connection pooling. The pool mode you choose on the cluster side affects what your client can do. See [Cluster Configuration Options](/docs/mpg/configuration/) for how to change modes. + +**Session mode** (default): A PgBouncer connection is held for the entire client session. Full PostgreSQL feature compatibility — prepared statements, advisory locks, `LISTEN/NOTIFY`, and multi-statement transactions all work normally. Lower connection reuse. + +**Transaction mode**: PgBouncer assigns a connection per transaction and returns it to the pool afterward. 
Higher throughput and connection reuse, but:
+
+- **Named prepared statements** don't work — you must use unnamed/extended query protocol
+- **Advisory locks** are not session-scoped — use the direct URL for migrations
+- **`LISTEN/NOTIFY`** doesn't work — use an alternative notifier (see the [Phoenix guide](/docs/mpg/guides-examples/phoenix-guide/) for Oban examples)
+- **`SET` commands** affect only the current transaction
+
+If your ORM or driver supports it, transaction mode with unnamed prepared statements is the better choice for most web applications.
+
+## Language-specific configuration
+
+### Node.js — pg (node-postgres)
+
+```javascript
+const { Pool } = require('pg');
+
+const pool = new Pool({
+  connectionString: process.env.DATABASE_URL,
+
+  // Pool sizing
+  max: 10,
+
+  // Connection lifecycle
+  idleTimeoutMillis: 300_000,     // 5 min — close idle connections
+  maxLifetimeSeconds: 600,        // 10 min — recycle before proxy timeout
+  connectionTimeoutMillis: 5_000, // 5s — fail fast on connection attempts
+});
+```
+
+`maxLifetimeSeconds` was added in `pg-pool` 3.6.0 (included with `pg` 8.11+). If you're on an older version, upgrade — this setting is critical for reliable connections on Fly.
+
+### Node.js — Prisma
+
+Add `pgbouncer=true` and connection pool parameters to your connection string:
+
+```env
+DATABASE_URL="postgresql://fly-user:<password>@pgbouncer.<cluster-id>.flympg.net/fly-db?pgbouncer=true&connection_limit=10&pool_timeout=30"
+```
+
+The `pgbouncer=true` parameter tells Prisma to disable prepared statements and adjust its connection handling for PgBouncer compatibility.
+
+In your Prisma schema:
+
+```prisma
+datasource db {
+  provider = "postgresql"
+  url      = env("DATABASE_URL")
+}
+```
+
+Prisma manages its own connection pool internally. The `connection_limit` parameter controls the pool size per Prisma client instance. If you run multiple processes, keep total connections within your plan's capacity.
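The budgeting rule in that last sentence is easy to script. A minimal sketch (the process names and counts below are hypothetical, and the arithmetic is the same in any language):

```python
# Sketch: sum per-process pool sizes to check the total against your
# plan's PgBouncer capacity. Process names and counts are made up.
def total_pool_connections(pools):
    """pools maps process name -> (process_count, pool_size_per_process)."""
    return sum(count * size for count, size in pools.values())

usage = total_pool_connections({
    "web": (3, 10),    # 3 web processes with a pool of 10 each
    "worker": (2, 5),  # 2 worker processes with a pool of 5 each
})
print(usage)  # 40 client connections in total
```

This mirrors the worked example in the Connection limits section below: `(3 × 10) + (2 × 5) = 40`.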
+
+### Python — SQLAlchemy
+
+```python
+import os
+from sqlalchemy import create_engine
+
+engine = create_engine(
+    os.environ["DATABASE_URL"],
+
+    # Pool sizing
+    pool_size=10,
+    max_overflow=5,
+
+    # Connection lifecycle
+    pool_recycle=600,    # 10 min — recycle connections before proxy timeout
+    pool_timeout=30,     # 30s — wait for available connection
+    pool_pre_ping=True,  # verify connections before use
+
+    # Disable prepared statements for PgBouncer transaction mode
+    # (uncomment if using transaction mode)
+    # connect_args={"prepare_threshold": 0},  # for psycopg3
+    # connect_args={"options": "-c plan_cache_mode=force_custom_plan"},  # alternative
+)
+```
+
+`pool_recycle` is the max connection lifetime — SQLAlchemy will close and replace connections older than this value.
+
+`pool_pre_ping` issues a lightweight `SELECT 1` before each connection checkout. This adds a small round-trip but catches stale connections before your query fails.
+
+### Python — psycopg3 connection pool
+
+```python
+import os
+from psycopg_pool import ConnectionPool
+
+pool = ConnectionPool(
+    conninfo=os.environ["DATABASE_URL"],
+
+    # Pool sizing
+    min_size=2,
+    max_size=10,
+
+    # Connection lifecycle
+    max_lifetime=600,  # 10 min — recycle before proxy timeout
+    max_idle=300,      # 5 min — close idle connections
+    reconnect_timeout=5,
+
+    # Disable prepared statements for PgBouncer transaction mode
+    # (uncomment if using transaction mode)
+    # kwargs={"prepare_threshold": 0},
+)
+```
+
+### Go — database/sql with pgx
+
+```go
+import (
+	"database/sql"
+	"log"
+	"os"
+	"time"
+
+	_ "github.com/jackc/pgx/v5/stdlib"
+)
+
+// Open the connection pool.
+db, err := sql.Open("pgx", os.Getenv("DATABASE_URL")) +if err != nil { + log.Fatal(err) +} + +// Pool sizing +db.SetMaxOpenConns(10) +db.SetMaxIdleConns(5) + +// Connection lifecycle +db.SetConnMaxLifetime(10 * time.Minute) // recycle before proxy timeout +db.SetConnMaxIdleTime(5 * time.Minute) // close idle connections +``` + +Go's `database/sql` handles connection recycling natively. `SetConnMaxLifetime` is the key setting — it ensures no connection is reused beyond the specified duration. + +To disable prepared statements for PgBouncer transaction mode, use the `default_query_exec_mode` connection parameter: + +```go +connStr := os.Getenv("DATABASE_URL") + "?default_query_exec_mode=exec" +db, err := sql.Open("pgx", connStr) +``` + +### Ruby — ActiveRecord (Rails) + +```yaml +# config/database.yml +production: + url: <%= ENV["DATABASE_URL"] %> + pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %> + idle_timeout: 300 # 5 min — close idle connections + checkout_timeout: 5 + prepared_statements: false # required for PgBouncer transaction mode +``` + +For connection max lifetime, Rails 7.2+ supports `max_lifetime` natively: + +```yaml +# config/database.yml (Rails 7.2+) +production: + url: <%= ENV["DATABASE_URL"] %> + pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %> + max_lifetime: 600 # 10 min — recycle before proxy timeout + idle_timeout: 300 + checkout_timeout: 5 + prepared_statements: false +``` + +On older Rails versions, connections will still be recycled by the idle timeout if your app has enough traffic. For low-traffic apps on older Rails, consider the [`activerecord-connection_reaper`](https://github.com/mperham/activerecord-connection_reaper) gem or a periodic reconnection task. 
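The "periodic reconnection task" mentioned above has the same shape in any runtime; here is a rough sketch in Python for illustration (the tuple-based pool representation is a stand-in, not a real driver API):

```python
import time

MAX_LIFETIME = 600  # seconds, matching the recommended max connection lifetime

# Sketch: split a pool's connections into ones still young enough to keep
# and ones the caller should close and replace. `connections` is a list of
# (conn, opened_at) pairs stamped with a monotonic clock.
def split_stale_connections(connections, now=None):
    now = time.monotonic() if now is None else now
    fresh = [(c, t) for c, t in connections if now - t < MAX_LIFETIME]
    stale = [(c, t) for c, t in connections if now - t >= MAX_LIFETIME]
    return fresh, stale
```

Run something like this from a background thread or scheduler every minute or so, closing the stale connections it hands back.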
+ +### Elixir/Phoenix — Ecto + +```elixir +# config/runtime.exs +config :my_app, MyApp.Repo, + url: System.fetch_env!("DATABASE_URL"), + pool_size: 8, + queue_target: 5_000, + queue_interval: 5_000, + prepare: :unnamed # required for PgBouncer transaction mode +``` + +For comprehensive Phoenix setup including migrations, Oban configuration, and Ecto-specific troubleshooting, see the [Phoenix with Managed Postgres](/docs/mpg/guides-examples/phoenix-guide/) guide. + +
+**Note on connection lifetime in Ecto:** Postgrex does not currently support a max connection lifetime setting. Connections are recycled only when they encounter errors or are explicitly disconnected. The idle timeout and PgBouncer's own `server_lifetime` setting (default 1800s) provide some protection, but for the most reliable behavior during proxy restarts, a `max_lifetime` option in Postgrex/DBConnection would be ideal. This is a known gap. +
+ + + +## Connection limits + + + +Each MPG plan has a fixed number of PgBouncer connection slots shared across all clients. If your total pool size (across all app processes) exceeds this limit, new connections will be queued or rejected. + +| Plan | PgBouncer max client connections | Direct max connections | +|------|--------------------------------|----------------------| +| Basic | _TBD_ | _TBD_ | +| Starter | _TBD_ | _TBD_ | +| Launch | _TBD_ | _TBD_ | +| Scale | _TBD_ | _TBD_ | +| Performance | _TBD_ | _TBD_ | + +### Common connection limit errors + +**`FATAL: too many connections for role`** or **`remaining connection slots are reserved for roles with the SUPERUSER attribute`**: Your total pool size across all processes exceeds the PgBouncer connection limit. To fix: + +- Reduce `pool_size` / `max` in each process +- Switch to **transaction** pool mode for better connection reuse +- Check for connection leaks (connections opened but never returned to the pool) + +**Calculating your total pool usage:** If you have 3 web processes with `pool_size: 10` and 2 worker processes with `pool_size: 5`, your total is `(3 × 10) + (2 × 5) = 40` connections. This must be within your plan's PgBouncer limit. + +## Troubleshooting + +### `tcp recv (idle): closed` or `tcp recv (idle): timeout` + +**Cause:** The proxy or PgBouncer closed an idle connection. This happens during proxy deployments (the proxy drains connections on restart) or when PgBouncer's idle timeout is reached. + +**Fix:** Set your client's idle timeout to **300 seconds** (5 min) and max connection lifetime to **600 seconds** (10 min). Most connection pools reconnect automatically when a connection is closed — these errors are transient. If you're seeing them frequently outside of proxy deployments, reduce your pool size so fewer connections sit idle. + +### `ECONNRESET` or "connection reset by peer" + +**Cause:** A long-lived connection was terminated during a proxy restart. 
The proxy's shutdown timeout is 15 minutes — any connection older than that is forcibly closed. + +**Fix:** Set max connection lifetime to **600 seconds** (10 min) so connections are recycled before the proxy needs to kill them. Enable retry logic with backoff for transient failures. + +### `FATAL 08P01 protocol_violation` on login + +**Cause:** Your client is sending named prepared statements through PgBouncer in transaction mode. PgBouncer can't route prepared statements to the correct backend connection in this mode. + +**Fix:** Disable named prepared statements in your client configuration: +- **Node.js (pg):** This is the default behavior — no change needed +- **Prisma:** Add `?pgbouncer=true` to your connection string +- **Python (psycopg3):** Set `prepare_threshold=0` +- **Go (pgx):** Use `default_query_exec_mode=exec` +- **Ruby (ActiveRecord):** Set `prepared_statements: false` +- **Elixir (Ecto):** Set `prepare: :unnamed` + +### `prepared statement "..." does not exist` + +**Cause:** Same as above — named prepared statements being used with PgBouncer in transaction mode. + +**Fix:** Same as the `protocol_violation` fix above. + +### Connection hangs on startup + +**Cause:** DNS resolution failure on Fly's internal IPv6 network. Your app can't resolve the `.flympg.net` address. + +**Fix:** Ensure your app is configured for IPv6. For Elixir apps, see the [IPv6 settings guide](https://fly.io/docs/elixir/getting-started/#important-ipv6-settings). For other runtimes, verify that your DNS resolver supports AAAA records on the Fly private network. diff --git a/mpg/configuration.html.md b/mpg/configuration.html.md index 157d4d0ea5..317eaa427f 100644 --- a/mpg/configuration.html.md +++ b/mpg/configuration.html.md @@ -13,7 +13,9 @@ date: 2025-08-18 ## Connection Pooling -All Managed Postgres clusters come with PGBouncer for connection pooling, which helps manage database connections efficiently. 
You can configure how PGBouncer assigns connections to clients by changing the pool mode.
+All Managed Postgres clusters come with PGBouncer for connection pooling, which helps manage database connections efficiently. You can configure how PGBouncer assigns connections to clients by changing the pool mode.
+
+For configuring your application's connection pool settings (lifetime, idle timeout, pool size, and language-specific examples), see [Client-Side Connection Configuration](/docs/mpg/client-configuration/).
 
 ### Pool Mode Options
diff --git a/mpg/connect-your-client.html.md b/mpg/connect-your-client.html.md
new file mode 100644
index 0000000000..404ddde914
--- /dev/null
+++ b/mpg/connect-your-client.html.md
@@ -0,0 +1,48 @@
+---
+title: Connect Your Client
+layout: docs
+nav: mpg
+date: 2026-03-25
+---
+
+After [creating your MPG cluster](/docs/mpg/create-and-connect/) and attaching your app, configure your database client for reliable connections. These settings prevent dropped connections during routine proxy maintenance and keep your connection pool healthy.
+
+## Use the pooled connection URL
+
+Always connect your application through PgBouncer using the **pooled URL** (the default when you attach an app). The pooled URL looks like:
+
+```
+postgresql://fly-user:<password>@pgbouncer.<cluster-id>.flympg.net/fly-db
+```
+
+Use the **direct URL** (`direct.<cluster-id>.flympg.net`) only for migrations, advisory locks, or `LISTEN/NOTIFY` — operations that require session-level stickiness.
+
+## Set connection lifetime and idle timeout
+
+Fly's edge proxy mediates connections between your app and your database. During routine deployments, the proxy restarts and long-lived connections are severed. Set your client's **max connection lifetime** so connections are recycled before the proxy needs to kill them.
+
+| Setting | Recommended value | Why |
+|---------|-------------------|-----|
+| Max connection lifetime | **600 seconds** (10 min) | Connections recycle before the proxy's shutdown timeout |
+| Idle connection timeout | **300 seconds** (5 min) | Releases unused connections before they're forcibly closed |
+
+## Keep your pool size modest
+
+Match your pool size to your plan's capacity. Oversized pools waste PgBouncer slots and can trigger connection limit errors.
+
+| Plan tier | Suggested pool size per process |
+|-----------|-------------------------------|
+| Basic / Starter | 5–10 |
+| Launch and above | 10–20 |
+
+If you run multiple processes (e.g., web + background workers), the total across all processes should stay within these ranges.
+
+## Disable prepared statements in transaction mode
+
+If your PgBouncer pool mode is set to **Transaction** (required for Ecto; recommended for high-throughput apps), you must disable named prepared statements in your client. PgBouncer can't track prepared statements across transactions.
+
+See [Cluster Configuration Options](/docs/mpg/configuration/) for how to change your pool mode.
+
+## Next steps
+
+For language-specific configuration examples (Node.js, Python, Go, Ruby, Elixir), detailed troubleshooting, and connection limit details, see the full [Client-Side Connection Configuration](/docs/mpg/client-configuration/) guide.
diff --git a/mpg/guides-examples/phoenix-guide.html.md b/mpg/guides-examples/phoenix-guide.html.md
index 811780fc1f..739c0841c4 100644
--- a/mpg/guides-examples/phoenix-guide.html.md
+++ b/mpg/guides-examples/phoenix-guide.html.md
@@ -6,6 +6,8 @@ date: 2025-09-16
 author: Kaelyn
 ---
 
+For general connection configuration that applies to all languages — connection lifetime, idle timeouts, proxy restart behavior, and troubleshooting — see [Client-Side Connection Configuration](/docs/mpg/client-configuration/).
+ This guide explains the key **Managed Postgres (MPG)-specific adjustments** you need when connecting a Phoenix app. We'll focus on: 1. Connection Pooling Settings @@ -114,7 +116,7 @@ Older versions required the Repeater plugin. Since Oban 2.14 (2023), polling fal ### Common errors and fixes -- `tcp recv (idle): closed` or `tcp recv (idle): timeout` — These are idle connection reclaimed by the pooler, and don't represent an issue as Ecto reconnects automatically. To remove them, lower your pool size or ignore. +- `tcp recv (idle): closed` or `tcp recv (idle): timeout` — These occur when the Fly proxy or PgBouncer closes an idle connection, often during routine proxy deployments. Ecto reconnects automatically, so these are transient. To reduce their frequency, lower your pool size so fewer connections sit idle. For a full explanation of why this happens and how to configure connection lifetime and idle timeouts, see [Client-Side Connection Configuration — Troubleshooting](/docs/mpg/client-configuration/#troubleshooting). - `FATAL 08P01 protocol_violation` on login — Set `prepare: :unnamed` and ensure PgBouncer is in Transaction mode. - Oban jobs not running — Use a non-Postgres notifier (PG or Phoenix) behind PgBouncer, or run Oban on a direct Repo. On Oban ≥ 2.14, do not add Repeater (polling fallback is automatic when PubSub isn't available). - Migrations hanging or failing — Run migrations with the direct database URL (via `release_command` or a one-off SSH command), not through PgBouncer. 
diff --git a/partials/_mpg_nav.html.erb b/partials/_mpg_nav.html.erb index a3b3ec6331..6634209466 100644 --- a/partials/_mpg_nav.html.erb +++ b/partials/_mpg_nav.html.erb @@ -15,7 +15,15 @@ open: true, links: [ { text: "Create and Connect to a Managed Postgres Cluster", path: "/docs/mpg/create-and-connect/" }, - { text: "Cluster Configuration Options", path: "/docs/mpg/configuration/" } + { text: "Cluster Configuration Options", path: "/docs/mpg/configuration/" }, + { text: "Connect Your Client", path: "/docs/mpg/connect-your-client/" } + ] + }, + { + title: "Connection & Clients", + open: true, + links: [ + { text: "Client-Side Connection Configuration", path: "/docs/mpg/client-configuration/" } ] }, { From 3903a1050baa2f513ff388670de567074374e249 Mon Sep 17 00:00:00 2001 From: Jon Phenow Date: Wed, 25 Mar 2026 16:18:48 -0500 Subject: [PATCH 2/5] Correct some words and values --- mpg/client-configuration.html.md | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/mpg/client-configuration.html.md b/mpg/client-configuration.html.md index 6ebd035750..f78e541837 100644 --- a/mpg/client-configuration.html.md +++ b/mpg/client-configuration.html.md @@ -11,11 +11,9 @@ For a quick summary of the essentials, see [Connect Your Client](/docs/mpg/conne ## Why client configuration matters -Your application connects to Managed Postgres through Fly's edge proxy and PgBouncer. The proxy handles TLS termination and routes traffic across Fly's internal network. Periodically — typically during proxy deployments — the proxy needs to restart. +Your Fly.io apps, as well as your Fly.io Postgres databases, sit behind Fly.io's proxy. Public and private traffic route through this proxy. Sometimes our proxy restarts and the proxy does its best to drain connections before restarting. 
Postgres doesn't have a protocol-level mechanism to tell clients to stop sending queries on a particular connection, so we have to rely on client-side configuration to handle graceful handoff.
-
-For HTTP/2 and WebSocket connections, the proxy sends a `GOAWAY` frame that tells clients to gracefully finish in-flight requests and open new connections. The PostgreSQL wire protocol has no equivalent mechanism. When the proxy restarts, it waits for existing connections to drain, but it can't tell your Postgres client to stop sending queries on the current connection.
-
-The proxy's shutdown timeout is **15 minutes**. Any connection that remains open after that is terminated. If your application holds connections for longer than this — which is the default behavior of most connection pools — you'll see errors like `tcp recv (idle): closed` or `ECONNRESET` during proxy deployments.
+The proxy's shutdown timeout is **10 minutes**. Any connection that remains open after that is terminated. If your application holds connections for longer than this — which is the default behavior of most connection pools — you might run into errors like `tcp recv (idle): closed` or `ECONNRESET` during proxy deployments.
 
 The fix is straightforward: configure your connection pool to **proactively recycle connections** on a shorter interval than the proxy's timeout.
@@ -25,7 +23,7 @@ The fix is straightforward: configure your connection pool to **proactively recy
 |---------|-------------------|-----|
-| Max connection lifetime | **600s** (10 min) | Connections recycle before the proxy's 15-min shutdown timeout |
+| Max connection lifetime | **600s** (10 min) | Connections recycle within the proxy's 10-min shutdown window |
 | Idle connection timeout | **300s** (5 min) | Releases unused connections before they're forcibly closed |
-| Pool size | **5–10** (Basic/Starter), **10–20** (Launch+) | Match your plan's PgBouncer capacity |
+| Pool size | **5–10** | Match your plan's PgBouncer capacity (see Connection tab of your cluster's dashboard) |
 | Prepared statements | **Disabled** in transaction mode | PgBouncer can't track per-connection prepared statement state |
 | Connection retries | **Enabled** with backoff | Handle transient connection drops during proxy restarts |

From 3aa2c439055a6bccd097241da62915e49deda7a9 Mon Sep 17 00:00:00 2001
From: Jon Phenow
Date: Wed, 25 Mar 2026 16:23:47 -0500
Subject: [PATCH 3/5] docs: wrap language examples in collapsible details
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The client configuration guide lists examples for seven different languages and drivers. This creates a long, dense page that's difficult to scan. Wrapping each language section in `<details>
` tags makes the page more discoverable—users can quickly find their language of choice and expand only what they need. This also improves the reading experience for users who are only interested in one or two specific languages, reducing visual clutter while keeping all reference material readily accessible.
---
 mpg/client-configuration.html.md | 35 +++++++++++++++++++++++++-------
 1 file changed, 28 insertions(+), 7 deletions(-)

diff --git a/mpg/client-configuration.html.md b/mpg/client-configuration.html.md
index f78e541837..da0cda444a 100644
--- a/mpg/client-configuration.html.md
+++ b/mpg/client-configuration.html.md
@@ -44,7 +44,8 @@ If your ORM or driver supports it, transaction mode with unnamed prepared statem
 
 ## Language-specific configuration
 
-### Node.js — pg (node-postgres)
+<details>
+<summary>Node.js — pg (node-postgres)</summary>
 
 ```javascript
 const { Pool } = require('pg');
@@ -64,7 +65,10 @@ const pool = new Pool({
 `maxLifetimeSeconds` was added in `pg-pool` 3.6.0 (included with `pg` 8.11+). If you're on an older version, upgrade — this setting is critical for reliable connections on Fly.
 
-### Node.js — Prisma
+</details>
+
+<details>
+<summary>Node.js — Prisma</summary>
 
 Add `pgbouncer=true` and connection pool parameters to your connection string:
@@ -85,7 +89,10 @@ datasource db {
 Prisma manages its own connection pool internally. The `connection_limit` parameter controls the pool size per Prisma client instance. If you run multiple processes, keep total connections within your plan's capacity.
 
-### Python — SQLAlchemy
+</details>
+
+<details>
+<summary>Python — SQLAlchemy</summary>
 
 ```python
 import os
@@ -114,7 +121,10 @@ engine = create_engine(
 `pool_pre_ping` issues a lightweight `SELECT 1` before each connection checkout. This adds a small round-trip but catches stale connections before your query fails.
 
-### Python — psycopg3 connection pool
+</details>
+
+<details>
+<summary>Python — psycopg3 connection pool</summary>
 
 ```python
 import os
 from psycopg_pool import ConnectionPool
@@ -138,7 +148,10 @@ pool = ConnectionPool(
 )
 ```
 
-### Go — database/sql with pgx
+</details>
+
+<details>
+<summary>Go — database/sql with pgx</summary>
 
 ```go
 import (
@@ -173,7 +186,10 @@ connStr := os.Getenv("DATABASE_URL") + "?default_query_exec_mode=exec"
 db, err := sql.Open("pgx", connStr)
 ```
 
-### Ruby — ActiveRecord (Rails)
+</details>
+
+<details>
+<summary>Ruby — ActiveRecord (Rails)</summary>
 
 ```yaml
 # config/database.yml
@@ -200,7 +216,10 @@ production:
 On older Rails versions, connections will still be recycled by the idle timeout if your app has enough traffic. For low-traffic apps on older Rails, consider the [`activerecord-connection_reaper`](https://github.com/mperham/activerecord-connection_reaper) gem or a periodic reconnection task.
 
-### Elixir/Phoenix — Ecto
+</details>
+
+<details>
+<summary>Elixir/Phoenix — Ecto</summary>
 
 ```elixir
 # config/runtime.exs
@@ -220,6 +239,8 @@ For comprehensive Phoenix setup including migrations, Oban configuration, and Ec
+</details>
+ ## Connection limits - -Each MPG plan has a fixed number of PgBouncer connection slots shared across all clients. If your total pool size (across all app processes) exceeds this limit, new connections will be queued or rejected. - -| Plan | PgBouncer max client connections | Direct max connections | -|------|--------------------------------|----------------------| -| Basic | _TBD_ | _TBD_ | -| Starter | _TBD_ | _TBD_ | -| Launch | _TBD_ | _TBD_ | -| Scale | _TBD_ | _TBD_ | -| Performance | _TBD_ | _TBD_ | - -### Common connection limit errors +### Common connection errors **`FATAL: too many connections for role`** or **`remaining connection slots are reserved for roles with the SUPERUSER attribute`**: Your total pool size across all processes exceeds the PgBouncer connection limit. To fix: @@ -266,7 +248,7 @@ Each MPG plan has a fixed number of PgBouncer connection slots shared across all - Switch to **transaction** pool mode for better connection reuse - Check for connection leaks (connections opened but never returned to the pool) -**Calculating your total pool usage:** If you have 3 web processes with `pool_size: 10` and 2 worker processes with `pool_size: 5`, your total is `(3 × 10) + (2 × 5) = 40` connections. This must be within your plan's PgBouncer limit. +**Calculating your total pool usage:** If you have 3 web processes with `pool_size: 10` and 2 worker processes with `pool_size: 5`, your total is `(3 × 10) + (2 × 5) = 40` connections. ## Troubleshooting @@ -278,9 +260,9 @@ Each MPG plan has a fixed number of PgBouncer connection slots shared across all ### `ECONNRESET` or "connection reset by peer" -**Cause:** A long-lived connection was terminated during a proxy restart. Connections that remain open too long during a proxy drain are forcibly closed. +**Cause:** A long-lived connection was terminated during something like a proxy restart. Connections that remain open too long during a proxy drain may be forcibly closed. 
-**Fix:** Set max connection lifetime to **600 seconds** (10 min) so connections are recycled before the proxy needs to kill them. Enable retry logic with backoff for transient failures. +**Fix:** Set max connection lifetime to **600 seconds** (10 min) or less so connections are recycled before the proxy needs to kill them. Enable retry logic with backoff for transient failures. ### `FATAL 08P01 protocol_violation` on login
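The "retry logic with backoff" recommended throughout this guide can be sketched generically; `connect_with_backoff` below is a hypothetical helper, not a specific library's API, and `connect` stands in for whatever connect or checkout call your driver exposes:

```python
import random
import time

# Sketch: retry a flaky connection attempt with exponential backoff plus
# jitter. `connect` is any zero-argument callable that raises on a
# transient failure such as ECONNRESET during a proxy restart.
def connect_with_backoff(connect, attempts=5, base_delay=0.1, sleep=time.sleep):
    for attempt in range(attempts):
        try:
            return connect()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries, let the error surface
            # delays of 0.1s, 0.2s, 0.4s, ... plus up to 50ms of jitter
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))
```

Most drivers and pools listed above already do something like this internally when you enable their retry options; a hand-rolled loop is only needed for clients that give up after a single failed attempt.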