+### Elixir/Phoenix — Ecto
+
+```elixir
+# config/runtime.exs
+config :my_app, MyApp.Repo,
+  url: System.fetch_env!("DATABASE_URL"),
+  pool_size: 8,
+  queue_target: 5_000,
+  queue_interval: 5_000,
+  prepare: :unnamed # required for PgBouncer transaction mode
+```
+
+For comprehensive Phoenix setup including migrations, Oban configuration, and Ecto-specific troubleshooting, see the [Phoenix with Managed Postgres](/docs/mpg/guides-examples/phoenix-guide/) guide.
+
+**Note on connection lifetime in Ecto:** Postgrex does not currently support a max connection lifetime setting. Connections are recycled only when they encounter errors or are explicitly disconnected. The idle timeout and PgBouncer's own `server_lifetime` setting (default 1800s) provide some protection, but for the most reliable behavior during proxy restarts, a `max_lifetime` option in Postgrex/DBConnection would be ideal. This is a known gap.
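+
+The missing `max_lifetime` can be partially worked around from application code. The sketch below is an illustrative workaround, not an official recommendation: it assumes your `ecto_sql` version provides `Ecto.Adapters.SQL.disconnect_all/3`, which gradually disconnects every connection in the pool over a given interval (each connection reconnects on its next checkout). The `MyApp.ConnectionRecycler` module name and the 10-minute cadence are hypothetical; add the process to your supervision tree if you adopt it:
+
+```elixir
+# Hypothetical periodic recycler: roughly emulates a 10-minute
+# max connection lifetime by recycling the whole pool on a timer.
+defmodule MyApp.ConnectionRecycler do
+  use GenServer
+
+  @lifetime :timer.minutes(10)
+
+  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil)
+
+  @impl true
+  def init(state) do
+    Process.send_after(self(), :recycle, @lifetime)
+    {:ok, state}
+  end
+
+  @impl true
+  def handle_info(:recycle, state) do
+    # Spreads disconnects over 60s so the pool never empties at once.
+    Ecto.Adapters.SQL.disconnect_all(MyApp.Repo, :timer.seconds(60))
+    Process.send_after(self(), :recycle, @lifetime)
+    {:noreply, state}
+  end
+end
+```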
+
+### Common connection errors
+
+**`FATAL: too many connections for role`** or **`remaining connection slots are reserved for roles with the SUPERUSER attribute`**: Your total pool size across all processes exceeds the PgBouncer connection limit. To fix:
+
+- Reduce `pool_size` / `max` in each process
+- Switch to **transaction** pool mode for better connection reuse
+- Check for connection leaks (connections opened but never returned to the pool)
+
+**Calculating your total pool usage:** If you have 3 web processes with `pool_size: 10` and 2 worker processes with `pool_size: 5`, your total is `(3 × 10) + (2 × 5) = 40` connections.
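+
+One way to keep that total under the limit is to drive each process's `pool_size` from an environment variable, so the budget can be rebalanced per process group without a code change. This is a sketch; the `POOL_SIZE` variable name and the default of `10` are assumptions, not an MPG convention:
+
+```elixir
+# config/runtime.exs: per-process pool sizing (hypothetical POOL_SIZE var)
+config :my_app, MyApp.Repo,
+  pool_size: String.to_integer(System.get_env("POOL_SIZE", "10"))
+```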
+
+## Troubleshooting
+
+### `tcp recv (idle): closed` or `tcp recv (idle): timeout`
+
+**Cause:** The proxy or PgBouncer closed an idle connection. This happens during proxy deployments (the proxy drains connections on restart) or when PgBouncer's idle timeout is reached.
+
+**Fix:** Set your client's idle timeout to **300 seconds** (5 min) and max connection lifetime to **600 seconds** (10 min). Most connection pools reconnect automatically when a connection is closed — these errors are transient. If you're seeing them frequently outside of proxy deployments, reduce your pool size so fewer connections sit idle.
+
+### `ECONNRESET` or "connection reset by peer"
+
+**Cause:** A long-lived connection was terminated mid-stream, typically by a proxy restart or deploy. Connections that stay open through a proxy drain may be forcibly closed.
+
+**Fix:** Set max connection lifetime to **600 seconds** (10 min) or less so connections are recycled before the proxy needs to kill them. Enable retry logic with backoff for transient failures.
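+
+In Ecto, reconnect behavior after a reset is governed by DBConnection's backoff options, which pass straight through the repo config. A minimal sketch; the values shown are illustrative, not MPG-specific guidance:
+
+```elixir
+# config/runtime.exs: exponential backoff between reconnect attempts
+config :my_app, MyApp.Repo,
+  backoff_type: :exp,   # also :rand_exp (default) or :stop
+  backoff_min: 1_000,   # ms
+  backoff_max: 30_000   # ms
+```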
+
+### `FATAL 08P01 protocol_violation` on login
+
+**Cause:** Your client is sending named prepared statements through PgBouncer in transaction mode. PgBouncer can't route prepared statements to the correct backend connection in this mode.
+
+**Fix:** Disable named prepared statements in your client configuration:
+- **Node.js (pg):** This is the default behavior — no change needed
+- **Prisma:** Add `?pgbouncer=true` to your connection string
+- **Python (psycopg3):** Set `prepare_threshold=0`
+- **Go (pgx):** Use `default_query_exec_mode=exec`
+- **Ruby (ActiveRecord):** Set `prepared_statements: false`
+- **Elixir (Ecto):** Set `prepare: :unnamed`
+
+### `prepared statement "..." does not exist`
+
+**Cause:** Same as above — named prepared statements being used with PgBouncer in transaction mode.
+
+**Fix:** Same as the `protocol_violation` fix above.
+
+### Connection hangs on startup
+
+**Cause:** DNS resolution failure on Fly's internal IPv6 network. Your app can't resolve the `.flympg.net` address.
+
+**Fix:** Ensure your app is configured for IPv6. For Elixir apps, see the [IPv6 settings guide](https://fly.io/docs/elixir/getting-started/#important-ipv6-settings). For other runtimes, verify that your DNS resolver supports AAAA records on the Fly private network.
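+
+For Ecto specifically, IPv6 is enabled through Postgrex's `:socket_options`; a minimal sketch, merged into the repo config shown earlier:
+
+```elixir
+# config/runtime.exs: connect over IPv6 on the Fly private network
+config :my_app, MyApp.Repo,
+  socket_options: [:inet6]
+```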
diff --git a/mpg/configuration.html.md b/mpg/configuration.html.md
index 157d4d0ea5..317eaa427f 100644
--- a/mpg/configuration.html.md
+++ b/mpg/configuration.html.md
@@ -13,7 +13,9 @@ date: 2025-08-18
## Connection Pooling
-All Managed Postgres clusters come with PGBouncer for connection pooling, which helps manage database connections efficiently. You can configure how PGBouncer assigns connections to clients by changing the pool mode.
+All Managed Postgres clusters come with PgBouncer for connection pooling, which helps manage database connections efficiently. You can configure how PgBouncer assigns connections to clients by changing the pool mode.
+
+For configuring your application's connection pool settings (lifetime, idle timeout, pool size, and language-specific examples), see [Client-Side Connection Configuration](/docs/mpg/client-configuration/).
### Pool Mode Options
diff --git a/mpg/connect-your-client.html.md b/mpg/connect-your-client.html.md
new file mode 100644
index 0000000000..66189a6cee
--- /dev/null
+++ b/mpg/connect-your-client.html.md
@@ -0,0 +1,48 @@
+---
+title: Connect Your Client
+layout: docs
+nav: mpg
+date: 2026-03-25
+---
+
+After [creating your MPG cluster](/docs/mpg/create-and-connect/) and attaching your app, configure your database client for reliable connections. These settings prevent dropped connections during routine proxy maintenance and keep your connection pool healthy.
+
+## Use the pooled connection URL
+
+Always connect your application through PgBouncer using the **pooled URL** (the default when you attach an app). The pooled URL looks like:
+
+```
+postgresql://fly-user: