
fix(migration): update set_updated_at() trigger to write _updated_at #196

Open
matingathani wants to merge 1 commit into stripe:main from matingathani:issue-91-trigger

Conversation

@matingathani

Summary

Migration 0048 renamed the timestamp column from updated_at to _updated_at, but the set_updated_at() trigger function was never updated to match. As a result it was writing to a non-existent column via jsonb_populate_record, effectively making the trigger a no-op for objects that only have _updated_at.

Fix: Replace the jsonb_populate_record approach with a direct NEW._updated_at = now() assignment.
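The replacement trigger body, as it appears in the updated embedded migration SQL in this PR:

```sql
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    -- Assign the renamed timestamp column directly instead of
    -- round-tripping NEW through jsonb_populate_record.
    NEW._updated_at = now();
    RETURN NEW;
END;
$$;
```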

Test plan

  • Apply the migration in a test database and verify _updated_at is updated on row changes
  • Confirm no existing updated_at-only tables are broken (those tables use set_updated_at_metadata(), not this function)
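A manual spot-check along these lines would exercise the first bullet. The `sync` schema name and the `acct_123` id are placeholders; `{{sync_schema}}` is substituted at migration time, and `id` on `accounts` is generated from `_raw_data`, so a self-assignment of `_raw_data` is enough to fire the BEFORE UPDATE trigger:

```sql
-- Read the current value; 'acct_123' is a placeholder id.
SELECT _updated_at FROM sync."accounts" WHERE id = 'acct_123';

-- A no-op data write still fires the handle_updated_at trigger.
UPDATE sync."accounts"
SET _raw_data = _raw_data
WHERE id = 'acct_123';

-- Expect a later _updated_at than the first SELECT returned.
SELECT _updated_at FROM sync."accounts" WHERE id = 'acct_123';
```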

Closes #91

Copilot AI review requested due to automatic review settings March 30, 2026 00:52

Copilot AI left a comment


Pull request overview

Updates the bootstrap migration’s set_updated_at() trigger function to correctly write to the renamed _updated_at column (instead of attempting to populate updated_at via jsonb_populate_record), aligning trigger behavior with the current timestamp column naming.

Changes:

  • Update set_updated_at() in the SQL bootstrap migration to assign NEW._updated_at = now().
  • Update the embedded migration SQL string to match the bootstrap migration change.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

  • packages/sync-engine/src/database/migrations/0000_initial_migration.sql: Changes the set_updated_at() trigger function implementation to write _updated_at directly.
  • packages/sync-engine/src/database/migrations-embedded.ts: Updates the embedded SQL for the 0000 migration to match the new trigger function body.


Comment on lines 10 to 12
 BEGIN
-    -- Support both legacy "updated_at" and newer "_updated_at" columns.
-    -- jsonb_populate_record silently ignores keys that are not present on NEW.
-    NEW := jsonb_populate_record(
-        NEW,
-        jsonb_build_object(
-            'updated_at', now(),
-            '_updated_at', now()
-        )
-    );
+    NEW._updated_at = now();
     RETURN NEW;

Copilot AI Mar 30, 2026


There’s no automated regression test covering the behavior change in set_updated_at() (it now directly assigns NEW._updated_at). Given the existing migration tests, add a DB-backed test that performs an UPDATE on a table with an _updated_at column (e.g. accounts) and asserts _updated_at advances, to prevent this trigger from silently regressing again.
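A DB-backed assertion along the lines Copilot describes might look like the sketch below. It assumes a seeded `accounts` row from an earlier transaction (the trigger uses `now()`, which is fixed per transaction, so a row inserted in the same transaction would not show an advance) and that `{{sync_schema}}` has been substituted, here written as `sync`:

```sql
DO $$
DECLARE
    before_ts timestamptz;
    after_ts  timestamptz;
BEGIN
    SELECT _updated_at INTO before_ts FROM sync."accounts" LIMIT 1;

    -- A no-op data write still fires the BEFORE UPDATE trigger.
    UPDATE sync."accounts" SET _raw_data = _raw_data;

    SELECT _updated_at INTO after_ts FROM sync."accounts" LIMIT 1;
    IF after_ts <= before_ts THEN
        RAISE EXCEPTION 'set_updated_at did not advance _updated_at';
    END IF;
END;
$$;
```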

Comment on lines 9 to 13
export const embeddedMigrations: EmbeddedMigration[] = [
{
name: '0000_initial_migration.sql',
sql: '-- Internal sync metadata schema bootstrap for OpenAPI runtime.\n-- Schema-qualified objects use the explicit {{sync_schema}} placeholder.\n-- Uses idempotent DDL so it can be safely re-run.\n\nCREATE EXTENSION IF NOT EXISTS btree_gist;\n\nCREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger\n LANGUAGE plpgsql\nAS $$\nBEGIN\n -- Support both legacy "updated_at" and newer "_updated_at" columns.\n -- jsonb_populate_record silently ignores keys that are not present on NEW.\n NEW := jsonb_populate_record(\n NEW,\n jsonb_build_object(\n \'updated_at\', now(),\n \'_updated_at\', now()\n )\n );\n RETURN NEW;\nEND;\n$$;\n\nCREATE OR REPLACE FUNCTION set_updated_at_metadata() RETURNS trigger\n LANGUAGE plpgsql\nAS $$\nBEGIN\n NEW.updated_at = now();\n RETURN NEW;\nEND;\n$$;\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."accounts" (\n "_raw_data" jsonb NOT NULL,\n "id" text GENERATED ALWAYS AS ((_raw_data->>\'id\')::text) STORED,\n "api_key_hashes" text[] NOT NULL DEFAULT \'{}\',\n "first_synced_at" timestamptz NOT NULL DEFAULT now(),\n "_last_synced_at" timestamptz NOT NULL DEFAULT now(),\n "_updated_at" timestamptz NOT NULL DEFAULT now(),\n PRIMARY KEY ("id")\n);\nCREATE INDEX IF NOT EXISTS "idx_accounts_api_key_hashes"\n ON {{sync_schema}}."accounts" USING GIN ("api_key_hashes");\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."accounts";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."accounts"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at();\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_managed_webhooks" (\n "id" text PRIMARY KEY,\n "object" text,\n "url" text NOT NULL,\n "enabled_events" jsonb NOT NULL,\n "description" text,\n "enabled" boolean,\n "livemode" boolean,\n "metadata" jsonb,\n "secret" text NOT NULL,\n "status" text,\n "api_version" text,\n "created" bigint,\n "last_synced_at" timestamptz,\n "updated_at" timestamptz NOT NULL DEFAULT now(),\n "account_id" text NOT NULL\n);\nALTER TABLE 
{{sync_schema}}."_managed_webhooks"\n DROP CONSTRAINT IF EXISTS "managed_webhooks_url_account_unique";\nALTER TABLE {{sync_schema}}."_managed_webhooks"\n ADD CONSTRAINT "managed_webhooks_url_account_unique" UNIQUE ("url", "account_id");\nALTER TABLE {{sync_schema}}."_managed_webhooks"\n DROP CONSTRAINT IF EXISTS "fk_managed_webhooks_account";\nALTER TABLE {{sync_schema}}."_managed_webhooks"\n ADD CONSTRAINT "fk_managed_webhooks_account"\n FOREIGN KEY ("account_id") REFERENCES {{sync_schema}}."accounts" (id);\nCREATE INDEX IF NOT EXISTS "idx_managed_webhooks_status"\n ON {{sync_schema}}."_managed_webhooks" ("status");\nCREATE INDEX IF NOT EXISTS "idx_managed_webhooks_enabled"\n ON {{sync_schema}}."_managed_webhooks" ("enabled");\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."_managed_webhooks";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."_managed_webhooks"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at_metadata();\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_sync_runs" (\n "_account_id" text NOT NULL,\n "started_at" timestamptz NOT NULL DEFAULT now(),\n "closed_at" timestamptz,\n "max_concurrent" integer NOT NULL DEFAULT 3,\n "triggered_by" text,\n "error_message" text,\n "updated_at" timestamptz NOT NULL DEFAULT now(),\n PRIMARY KEY ("_account_id", "started_at")\n);\nALTER TABLE {{sync_schema}}."_sync_runs"\n ADD COLUMN IF NOT EXISTS "error_message" text;\nALTER TABLE {{sync_schema}}."_sync_runs"\n DROP CONSTRAINT IF EXISTS "fk_sync_runs_account";\nALTER TABLE {{sync_schema}}."_sync_runs"\n ADD CONSTRAINT "fk_sync_runs_account"\n FOREIGN KEY ("_account_id") REFERENCES {{sync_schema}}."accounts" (id);\nALTER TABLE {{sync_schema}}."_sync_runs"\n DROP CONSTRAINT IF EXISTS one_active_run_per_account;\nALTER TABLE {{sync_schema}}."_sync_runs"\n DROP CONSTRAINT IF EXISTS one_active_run_per_account_triggered_by;\nALTER TABLE {{sync_schema}}."_sync_runs"\n ADD CONSTRAINT one_active_run_per_account_triggered_by\n EXCLUDE (\n 
"_account_id" WITH =,\n COALESCE(triggered_by, \'default\') WITH =\n ) WHERE (closed_at IS NULL);\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."_sync_runs";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."_sync_runs"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at_metadata();\nCREATE INDEX IF NOT EXISTS "idx_sync_runs_account_status"\n ON {{sync_schema}}."_sync_runs" ("_account_id", "closed_at");\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_sync_obj_runs" (\n "_account_id" text NOT NULL,\n "run_started_at" timestamptz NOT NULL,\n "object" text NOT NULL,\n "status" text NOT NULL DEFAULT \'pending\'\n CHECK (status IN (\'pending\', \'running\', \'complete\', \'error\')),\n "started_at" timestamptz,\n "completed_at" timestamptz,\n "processed_count" integer NOT NULL DEFAULT 0,\n "cursor" text,\n "page_cursor" text,\n "created_gte" integer NOT NULL DEFAULT 0,\n "created_lte" integer NOT NULL DEFAULT 0,\n "priority" integer NOT NULL DEFAULT 0,\n "error_message" text,\n "updated_at" timestamptz NOT NULL DEFAULT now(),\n PRIMARY KEY ("_account_id", "run_started_at", "object", "created_gte", "created_lte")\n);\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "page_cursor" text;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "created_gte" integer NOT NULL DEFAULT 0;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "created_lte" integer NOT NULL DEFAULT 0;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "priority" integer NOT NULL DEFAULT 0;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "error_message" text;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n DROP CONSTRAINT IF EXISTS "fk_sync_obj_runs_parent";\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD CONSTRAINT "fk_sync_obj_runs_parent"\n FOREIGN KEY ("_account_id", "run_started_at")\n REFERENCES {{sync_schema}}."_sync_runs" ("_account_id", 
"started_at");\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."_sync_obj_runs";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."_sync_obj_runs"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at_metadata();\nCREATE INDEX IF NOT EXISTS "idx_sync_obj_runs_status"\n ON {{sync_schema}}."_sync_obj_runs" ("_account_id", "run_started_at", "status");\nCREATE INDEX IF NOT EXISTS "idx_sync_obj_runs_priority"\n ON {{sync_schema}}."_sync_obj_runs" ("_account_id", "run_started_at", "status", "priority");\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_rate_limits" (\n key TEXT PRIMARY KEY,\n count INTEGER NOT NULL DEFAULT 0,\n window_start TIMESTAMPTZ NOT NULL DEFAULT now()\n);\n\nCREATE OR REPLACE FUNCTION {{sync_schema}}.check_rate_limit(\n rate_key TEXT,\n max_requests INTEGER,\n window_seconds INTEGER\n)\nRETURNS VOID AS $$\nDECLARE\n now TIMESTAMPTZ := clock_timestamp();\n window_length INTERVAL := make_interval(secs => window_seconds);\n current_count INTEGER;\nBEGIN\n PERFORM pg_advisory_xact_lock(hashtext(rate_key));\n\n INSERT INTO {{sync_schema}}."_rate_limits" (key, count, window_start)\n VALUES (rate_key, 1, now)\n ON CONFLICT (key) DO UPDATE\n SET count = CASE\n WHEN "_rate_limits".window_start + window_length <= now\n THEN 1\n ELSE "_rate_limits".count + 1\n END,\n window_start = CASE\n WHEN "_rate_limits".window_start + window_length <= now\n THEN now\n ELSE "_rate_limits".window_start\n END;\n\n SELECT count INTO current_count FROM {{sync_schema}}."_rate_limits" WHERE key = rate_key;\n\n IF current_count > max_requests THEN\n RAISE EXCEPTION \'Rate limit exceeded for %\', rate_key;\n END IF;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE VIEW {{sync_schema}}."sync_runs" AS\nSELECT\n r._account_id as account_id,\n r.started_at,\n r.closed_at,\n r.triggered_by,\n r.max_concurrent,\n COALESCE(SUM(o.processed_count), 0) as total_processed,\n COUNT(o.*) as total_objects,\n COUNT(*) FILTER (WHERE o.status = \'complete\') as 
complete_count,\n COUNT(*) FILTER (WHERE o.status = \'error\') as error_count,\n COUNT(*) FILTER (WHERE o.status = \'running\') as running_count,\n COUNT(*) FILTER (WHERE o.status = \'pending\') as pending_count,\n STRING_AGG(o.error_message, \'; \') FILTER (WHERE o.error_message IS NOT NULL) as error_message,\n CASE\n WHEN r.closed_at IS NULL AND COUNT(*) FILTER (WHERE o.status = \'running\') > 0 THEN \'running\'\n WHEN r.closed_at IS NULL AND (COUNT(o.*) = 0 OR COUNT(o.*) = COUNT(*) FILTER (WHERE o.status = \'pending\')) THEN \'pending\'\n WHEN r.closed_at IS NULL THEN \'running\'\n WHEN COUNT(*) FILTER (WHERE o.status = \'error\') > 0 THEN \'error\'\n ELSE \'complete\'\n END as status\nFROM {{sync_schema}}."_sync_runs" r\nLEFT JOIN {{sync_schema}}."_sync_obj_runs" o\n ON o._account_id = r._account_id\n AND o.run_started_at = r.started_at\nGROUP BY r._account_id, r.started_at, r.closed_at, r.triggered_by, r.max_concurrent;\n\nDROP FUNCTION IF EXISTS {{sync_schema}}."sync_obj_progress"(TEXT, TIMESTAMPTZ);\nCREATE OR REPLACE VIEW {{sync_schema}}."sync_obj_progress" AS\nSELECT\n r."_account_id" AS account_id,\n r.run_started_at,\n r.object,\n ROUND(\n 100.0 * COUNT(*) FILTER (WHERE r.status = \'complete\') / NULLIF(COUNT(*), 0),\n 1\n ) AS pct_complete,\n COALESCE(SUM(r.processed_count), 0) AS processed\nFROM {{sync_schema}}."_sync_obj_runs" r\nWHERE r.run_started_at = (\n SELECT MAX(s.started_at)\n FROM {{sync_schema}}."_sync_runs" s\n WHERE s."_account_id" = r."_account_id"\n)\nGROUP BY r."_account_id", r.run_started_at, r.object;\n',
sql: '-- Internal sync metadata schema bootstrap for OpenAPI runtime.\n-- Schema-qualified objects use the explicit {{sync_schema}} placeholder.\n-- Uses idempotent DDL so it can be safely re-run.\n\nCREATE EXTENSION IF NOT EXISTS btree_gist;\n\nCREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger\n LANGUAGE plpgsql\nAS $$\nBEGIN\n NEW._updated_at = now();\n RETURN NEW;\nEND;\n$$;\n\nCREATE OR REPLACE FUNCTION set_updated_at_metadata() RETURNS trigger\n LANGUAGE plpgsql\nAS $$\nBEGIN\n NEW.updated_at = now();\n RETURN NEW;\nEND;\n$$;\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."accounts" (\n "_raw_data" jsonb NOT NULL,\n "id" text GENERATED ALWAYS AS ((_raw_data->>\'id\')::text) STORED,\n "api_key_hashes" text[] NOT NULL DEFAULT \'{}\',\n "first_synced_at" timestamptz NOT NULL DEFAULT now(),\n "_last_synced_at" timestamptz NOT NULL DEFAULT now(),\n "_updated_at" timestamptz NOT NULL DEFAULT now(),\n PRIMARY KEY ("id")\n);\nCREATE INDEX IF NOT EXISTS "idx_accounts_api_key_hashes"\n ON {{sync_schema}}."accounts" USING GIN ("api_key_hashes");\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."accounts";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."accounts"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at();\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_managed_webhooks" (\n "id" text PRIMARY KEY,\n "object" text,\n "url" text NOT NULL,\n "enabled_events" jsonb NOT NULL,\n "description" text,\n "enabled" boolean,\n "livemode" boolean,\n "metadata" jsonb,\n "secret" text NOT NULL,\n "status" text,\n "api_version" text,\n "created" bigint,\n "last_synced_at" timestamptz,\n "updated_at" timestamptz NOT NULL DEFAULT now(),\n "account_id" text NOT NULL\n);\nALTER TABLE {{sync_schema}}."_managed_webhooks"\n DROP CONSTRAINT IF EXISTS "managed_webhooks_url_account_unique";\nALTER TABLE {{sync_schema}}."_managed_webhooks"\n ADD CONSTRAINT "managed_webhooks_url_account_unique" UNIQUE ("url", "account_id");\nALTER TABLE 
{{sync_schema}}."_managed_webhooks"\n DROP CONSTRAINT IF EXISTS "fk_managed_webhooks_account";\nALTER TABLE {{sync_schema}}."_managed_webhooks"\n ADD CONSTRAINT "fk_managed_webhooks_account"\n FOREIGN KEY ("account_id") REFERENCES {{sync_schema}}."accounts" (id);\nCREATE INDEX IF NOT EXISTS "idx_managed_webhooks_status"\n ON {{sync_schema}}."_managed_webhooks" ("status");\nCREATE INDEX IF NOT EXISTS "idx_managed_webhooks_enabled"\n ON {{sync_schema}}."_managed_webhooks" ("enabled");\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."_managed_webhooks";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."_managed_webhooks"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at_metadata();\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_sync_runs" (\n "_account_id" text NOT NULL,\n "started_at" timestamptz NOT NULL DEFAULT now(),\n "closed_at" timestamptz,\n "max_concurrent" integer NOT NULL DEFAULT 3,\n "triggered_by" text,\n "error_message" text,\n "updated_at" timestamptz NOT NULL DEFAULT now(),\n PRIMARY KEY ("_account_id", "started_at")\n);\nALTER TABLE {{sync_schema}}."_sync_runs"\n ADD COLUMN IF NOT EXISTS "error_message" text;\nALTER TABLE {{sync_schema}}."_sync_runs"\n DROP CONSTRAINT IF EXISTS "fk_sync_runs_account";\nALTER TABLE {{sync_schema}}."_sync_runs"\n ADD CONSTRAINT "fk_sync_runs_account"\n FOREIGN KEY ("_account_id") REFERENCES {{sync_schema}}."accounts" (id);\nALTER TABLE {{sync_schema}}."_sync_runs"\n DROP CONSTRAINT IF EXISTS one_active_run_per_account;\nALTER TABLE {{sync_schema}}."_sync_runs"\n DROP CONSTRAINT IF EXISTS one_active_run_per_account_triggered_by;\nALTER TABLE {{sync_schema}}."_sync_runs"\n ADD CONSTRAINT one_active_run_per_account_triggered_by\n EXCLUDE (\n "_account_id" WITH =,\n COALESCE(triggered_by, \'default\') WITH =\n ) WHERE (closed_at IS NULL);\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."_sync_runs";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON 
{{sync_schema}}."_sync_runs"\nFOR EACH ROW EXECUTE FUNCTION set_updated_at_metadata();\nCREATE INDEX IF NOT EXISTS "idx_sync_runs_account_status"\n ON {{sync_schema}}."_sync_runs" ("_account_id", "closed_at");\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_sync_obj_runs" (\n "_account_id" text NOT NULL,\n "run_started_at" timestamptz NOT NULL,\n "object" text NOT NULL,\n "status" text NOT NULL DEFAULT \'pending\'\n CHECK (status IN (\'pending\', \'running\', \'complete\', \'error\')),\n "started_at" timestamptz,\n "completed_at" timestamptz,\n "processed_count" integer NOT NULL DEFAULT 0,\n "cursor" text,\n "page_cursor" text,\n "created_gte" integer NOT NULL DEFAULT 0,\n "created_lte" integer NOT NULL DEFAULT 0,\n "priority" integer NOT NULL DEFAULT 0,\n "error_message" text,\n "updated_at" timestamptz NOT NULL DEFAULT now(),\n PRIMARY KEY ("_account_id", "run_started_at", "object", "created_gte", "created_lte")\n);\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "page_cursor" text;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "created_gte" integer NOT NULL DEFAULT 0;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "created_lte" integer NOT NULL DEFAULT 0;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "priority" integer NOT NULL DEFAULT 0;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD COLUMN IF NOT EXISTS "error_message" text;\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n DROP CONSTRAINT IF EXISTS "fk_sync_obj_runs_parent";\nALTER TABLE {{sync_schema}}."_sync_obj_runs"\n ADD CONSTRAINT "fk_sync_obj_runs_parent"\n FOREIGN KEY ("_account_id", "run_started_at")\n REFERENCES {{sync_schema}}."_sync_runs" ("_account_id", "started_at");\nDROP TRIGGER IF EXISTS handle_updated_at ON {{sync_schema}}."_sync_obj_runs";\nCREATE TRIGGER handle_updated_at\nBEFORE UPDATE ON {{sync_schema}}."_sync_obj_runs"\nFOR EACH ROW EXECUTE FUNCTION 
set_updated_at_metadata();\nCREATE INDEX IF NOT EXISTS "idx_sync_obj_runs_status"\n ON {{sync_schema}}."_sync_obj_runs" ("_account_id", "run_started_at", "status");\nCREATE INDEX IF NOT EXISTS "idx_sync_obj_runs_priority"\n ON {{sync_schema}}."_sync_obj_runs" ("_account_id", "run_started_at", "status", "priority");\n\nCREATE TABLE IF NOT EXISTS {{sync_schema}}."_rate_limits" (\n key TEXT PRIMARY KEY,\n count INTEGER NOT NULL DEFAULT 0,\n window_start TIMESTAMPTZ NOT NULL DEFAULT now()\n);\n\nCREATE OR REPLACE FUNCTION {{sync_schema}}.check_rate_limit(\n rate_key TEXT,\n max_requests INTEGER,\n window_seconds INTEGER\n)\nRETURNS VOID AS $$\nDECLARE\n now TIMESTAMPTZ := clock_timestamp();\n window_length INTERVAL := make_interval(secs => window_seconds);\n current_count INTEGER;\nBEGIN\n PERFORM pg_advisory_xact_lock(hashtext(rate_key));\n\n INSERT INTO {{sync_schema}}."_rate_limits" (key, count, window_start)\n VALUES (rate_key, 1, now)\n ON CONFLICT (key) DO UPDATE\n SET count = CASE\n WHEN "_rate_limits".window_start + window_length <= now\n THEN 1\n ELSE "_rate_limits".count + 1\n END,\n window_start = CASE\n WHEN "_rate_limits".window_start + window_length <= now\n THEN now\n ELSE "_rate_limits".window_start\n END;\n\n SELECT count INTO current_count FROM {{sync_schema}}."_rate_limits" WHERE key = rate_key;\n\n IF current_count > max_requests THEN\n RAISE EXCEPTION \'Rate limit exceeded for %\', rate_key;\n END IF;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE VIEW {{sync_schema}}."sync_runs" AS\nSELECT\n r._account_id as account_id,\n r.started_at,\n r.closed_at,\n r.triggered_by,\n r.max_concurrent,\n COALESCE(SUM(o.processed_count), 0) as total_processed,\n COUNT(o.*) as total_objects,\n COUNT(*) FILTER (WHERE o.status = \'complete\') as complete_count,\n COUNT(*) FILTER (WHERE o.status = \'error\') as error_count,\n COUNT(*) FILTER (WHERE o.status = \'running\') as running_count,\n COUNT(*) FILTER (WHERE o.status = \'pending\') as pending_count,\n 
STRING_AGG(o.error_message, \'; \') FILTER (WHERE o.error_message IS NOT NULL) as error_message,\n CASE\n WHEN r.closed_at IS NULL AND COUNT(*) FILTER (WHERE o.status = \'running\') > 0 THEN \'running\'\n WHEN r.closed_at IS NULL AND (COUNT(o.*) = 0 OR COUNT(o.*) = COUNT(*) FILTER (WHERE o.status = \'pending\')) THEN \'pending\'\n WHEN r.closed_at IS NULL THEN \'running\'\n WHEN COUNT(*) FILTER (WHERE o.status = \'error\') > 0 THEN \'error\'\n ELSE \'complete\'\n END as status\nFROM {{sync_schema}}."_sync_runs" r\nLEFT JOIN {{sync_schema}}."_sync_obj_runs" o\n ON o._account_id = r._account_id\n AND o.run_started_at = r.started_at\nGROUP BY r._account_id, r.started_at, r.closed_at, r.triggered_by, r.max_concurrent;\n\nDROP FUNCTION IF EXISTS {{sync_schema}}."sync_obj_progress"(TEXT, TIMESTAMPTZ);\nCREATE OR REPLACE VIEW {{sync_schema}}."sync_obj_progress" AS\nSELECT\n r."_account_id" AS account_id,\n r.run_started_at,\n r.object,\n ROUND(\n 100.0 * COUNT(*) FILTER (WHERE r.status = \'complete\') / NULLIF(COUNT(*), 0),\n 1\n ) AS pct_complete,\n COALESCE(SUM(r.processed_count), 0) AS processed\nFROM {{sync_schema}}."_sync_obj_runs" r\nWHERE r.run_started_at = (\n SELECT MAX(s.started_at)\n FROM {{sync_schema}}."_sync_runs" s\n WHERE s."_account_id" = r."_account_id"\n)\nGROUP BY r."_account_id", r.run_started_at, r.object;\n',
},

Copilot AI Mar 30, 2026


This updates the embedded SQL for migration 0000_initial_migration.sql. Because the migration hash is derived from the filename + SQL content, altering the embedded content for id 0 will also trigger the “initial migration hash changed” path in runMigrationsWithContent and cause a schema reset for existing installs. Prefer adding a new embedded migration entry (matching a new 0001_*.sql file) rather than changing the SQL for 0000.

Comment on lines 7 to 12
 CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger
     LANGUAGE plpgsql
 AS $$
 BEGIN
-    -- Support both legacy "updated_at" and newer "_updated_at" columns.
-    -- jsonb_populate_record silently ignores keys that are not present on NEW.
-    NEW := jsonb_populate_record(
-        NEW,
-        jsonb_build_object(
-            'updated_at', now(),
-            '_updated_at', now()
-        )
-    );
+    NEW._updated_at = now();
     RETURN NEW;

Copilot AI Mar 30, 2026


Changing the contents of the 0000_initial_migration.sql migration will change its computed hash; runMigrationsWithContent treats a hash change for migration id 0 as a signal to DROP SCHEMA ... CASCADE and recreate the schema (see packages/sync-engine/src/database/migrate.ts:684-689). This risks destructive data loss for existing installs on upgrade. Instead of modifying migration 0000, add a new migration (e.g. 0001_fix_set_updated_at.sql) that does CREATE OR REPLACE FUNCTION set_updated_at() so existing databases get the fix without triggering a schema reset.
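A follow-up migration along the lines Copilot suggests might look like this. The filename and comment are illustrative, not part of this PR; the function body is the same one the PR puts into migration 0000:

```sql
-- 0001_fix_set_updated_at.sql (hypothetical follow-up migration)
-- Re-defines the trigger function without touching migration 0000,
-- so the initial-migration hash stays stable and existing installs
-- are not forced through the schema-reset path.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    NEW._updated_at = now();
    RETURN NEW;
END;
$$;
```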



Development

Successfully merging this pull request may close these issues.

Bug: set_updated_at() trigger still writes to updated_at instead of _updated_at

2 participants