feat(node-sql): add SQL-based function template with example #33
theothersideofgod wants to merge 2 commits into
Conversation
Force-pushed 2dba9a4 to 15fedd0
Pull request overview
Adds a new node-sql function template intended for direct PostgreSQL access (with RLS/user-context helpers) and introduces a sql-example function plus local dev wiring and tests to validate the template end-to-end.
Changes:
- Add `templates/node-sql` (TS configs, runtime wrapper, Dockerfile, and K8s manifests) for Postgres-backed job functions.
- Add `functions/sql-example` and corresponding e2e/unit tests to exercise the SQL template and job queue integration.
- Update local dev wiring (Skaffold profile + local pg secret) and job-service function registry to include `sql-example`.
Reviewed changes
Copilot reviewed 16 out of 17 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| tests/e2e/tests/sql-example.e2e.test.ts | New e2e test that enqueues a sql-example job and polls DB for completion. |
| tests/mocks/pg.ts | Adds a Jest mock for pg intended for unit tests. |
| templates/node-sql/types.ts | Defines FunctionContext/FunctionHandler types including pool + withUserContext. |
| templates/node-sql/tsconfig.json | Template TS config for building the generated function package. |
| templates/node-sql/tsconfig.esm.json | ESM build config for the template. |
| templates/node-sql/package.json | Template package manifest adding pg + logger dependencies. |
| templates/node-sql/k8s/local-deployment.yaml | Local dev Deployment/Service template for SQL functions. |
| templates/node-sql/k8s/knative-service.yaml | Production Knative Service template for SQL functions. |
| templates/node-sql/index.ts | Template server that creates a PG pool and withUserContext helper and invokes handler. |
| templates/node-sql/Dockerfile | Template Docker build for SQL function images. |
| skaffold.yaml | Adds sql-example profile and includes it in the all-functions profile. |
| pnpm-lock.yaml | Lockfile updates for the newly generated package and additional workspace changes. |
| k8s/overlays/local-simple/pg-secret.yaml | Adds PGHOST/PGPORT/PGDATABASE to the local pg credentials secret. |
| job/service/src/types.ts | Extends FunctionName union to include sql-example. |
| job/service/src/index.ts | Registers sql-example module + port in function registry. |
| functions/sql-example/handler.ts | Example handler that runs a query inside withUserContext. |
| functions/sql-example/handler.json | Declares the new function and selects node-sql template type. |
| functions/sql-example/tests/handler.test.ts | Unit tests for the sql-example handler. |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```json
  },
  "dependencies": {
    "@constructive-io/knative-job-fn": "workspace:^",
    "@pgpmjs/logger": "^1.0.0",
```

@pgpmjs/logger is pinned to ^1.0.0 here, but the rest of the repo depends on ^2.4.3 (e.g. packages/fn-app/package.json and existing function handler.json files). Keeping this template on v1 forces an extra major version into the workspace (and the lockfile already pulls both 1.x and 2.x). Please bump the template dependency to the repo’s current logger major to avoid duplicate installs and API drift.

Suggested change:

```diff
-    "@pgpmjs/logger": "^1.0.0",
+    "@pgpmjs/logger": "^2.4.3",
```
```yaml
  generated/text-embedding:
    dependencies:
      '@agentic-kit/ollama':
        specifier: ^1.0.3
        version: 1.0.3
      '@constructive-io/fn-runtime':
        specifier: workspace:^
        version: link:../../packages/fn-runtime
    devDependencies:
      '@types/node':
        specifier: ^22.10.4
        version: 22.19.3
      makage:
        specifier: ^0.1.10
        version: 0.1.12
      typescript:
        specifier: ^5.1.6
        version: 5.9.3
```

pnpm-lock.yaml introduces a new workspace importer generated/text-embedding and pulls in @agentic-kit/ollama, but there is no corresponding functions/text-embedding (or any other repo reference) in this PR/repo. This looks like lockfile churn from a locally-generated workspace that isn’t part of the branch. Please regenerate the lockfile from a clean working tree (or remove the stray generated package) so the lock only includes real workspace packages.

Suggested change: delete the `generated/text-embedding` importer block above from the lockfile.
```yaml
  namespace: constructive-functions
spec:
  template:
```

This Knative Service template hardcodes metadata.namespace: constructive-functions and omits the networking.knative.dev/visibility: cluster-local label/annotation used by other function templates (e.g. templates/node-graphql/k8s/knative-service.yaml). Hardcoding the namespace reduces reusability across environments, and missing cluster-local visibility can unintentionally expose the function externally. Consider removing the namespace field and adding the same cluster-local visibility + standard labels/annotations used elsewhere.

Suggested change:

```diff
-  namespace: constructive-functions
+  labels:
+    networking.knative.dev/visibility: cluster-local
 spec:
   template:
+    metadata:
+      labels:
+        networking.knative.dev/visibility: cluster-local
```
```typescript
import type { FunctionHandler } from './types';
```

handler.ts imports ./types, but there is no functions/sql-example/types.ts in the PR. This makes the handler (and the unit test that requires it) fail to load. Either add a types.ts alongside the handler for tests, or switch to importing the handler/context types from a real package (as other functions do) so the functions/ source is self-contained.

Suggested change — replace the import with local type definitions:

```typescript
type QueryResult = {
  rows: unknown[];
};

type QueryClient = {
  query: (query: string) => Promise<QueryResult>;
};

type Logger = {
  info: (message: string, meta?: Record<string, unknown>) => void;
};

type FunctionContext = {
  log: Logger;
  withUserContext: <T>(
    actorId: string | undefined,
    callback: (client: QueryClient) => Promise<T>
  ) => Promise<T>;
};

type FunctionHandler<TParams, TResult> = (
  params: TParams,
  context: FunctionContext
) => Promise<TResult>;
```
```typescript
const handler: FunctionHandler<Params, Result> = async (params, context) => {
  const { log, withUserContext } = context;
  const { query = 'SELECT version()', actor_id } = params;

  log.info('[sql-example] Executing query', { query, actor_id });

  const result = await withUserContext(actor_id, async (client) => {
    const res = await client.query(query);
```

This function executes arbitrary SQL passed in the job payload (client.query(query)). If job payloads can be influenced by users/tenants, this becomes a direct SQL injection / privilege escalation vector (including multi-statement ; queries). For an example function, consider hardcoding a safe query (or allowlisting known demo queries) and reject anything else.

Suggested change:

```typescript
const SAFE_QUERIES: Record<string, string> = {
  'SELECT version()': 'SELECT version()',
  version: 'SELECT version()',
};

const handler: FunctionHandler<Params, Result> = async (params, context) => {
  const { log, withUserContext } = context;
  const { query: requestedQuery = 'SELECT version()', actor_id } = params;

  const safeQuery = SAFE_QUERIES[requestedQuery];
  if (!safeQuery) {
    log.warn('[sql-example] Rejected unsupported query', {
      requestedQuery,
      actor_id,
    });
    return {
      success: false,
      message: 'Unsupported query. Only predefined demo queries are allowed.',
    };
  }

  log.info('[sql-example] Executing allowlisted query', {
    requestedQuery,
    actor_id,
  });

  const result = await withUserContext(actor_id, async (client) => {
    const res = await client.query(safeQuery);
```
```typescript
import { mockQuery, mockRelease } from 'pg';

jest.mock('pg');
```

This test mocks and imports from pg, but the handler under test doesn’t import pg at all. Additionally, Jest won’t pick up tests/__mocks__/pg.ts as a manual mock unless you add a moduleNameMapper for ^pg$ or move the mock to a root-level __mocks__/pg.ts. Simplest fix: remove the pg mock usage here and use local jest.fn()s for the client instead.
```typescript
export const mockQuery = jest.fn();
export const mockRelease = jest.fn();
export const mockConnect = jest.fn();

const mockClient = {
  query: mockQuery,
  release: mockRelease,
};

export class Pool {
  connect = jest.fn().mockResolvedValue(mockClient);
}

export class Client {
  connect = mockConnect;
  query = mockQuery;
  end = jest.fn();
}
```

tests/__mocks__/pg.ts won’t be used by jest.mock('pg') with the current Jest config (manual mocks are resolved from <rootDir>/__mocks__ unless mapped). To make this mock effective, either move it to __mocks__/pg.ts at repo root or add a moduleNameMapper entry for ^pg$ -> <rootDir>/tests/__mocks__/pg.
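To illustrate the mapping option mentioned above, here is a minimal jest.config.ts sketch. The mock path mirrors this PR's tests/__mocks__/pg.ts layout; whether a root jest.config.ts is the right place in this repo is an assumption.

```typescript
// jest.config.ts — hypothetical sketch; assumes the manual mock lives at
// tests/__mocks__/pg.ts as in this PR.
const config = {
  moduleNameMapper: {
    // Redirect bare `import ... from 'pg'` to the manual mock so that
    // jest.mock('pg') resolves it regardless of where the test file lives.
    '^pg$': '<rootDir>/tests/__mocks__/pg',
  },
};

export default config;
```

The `^pg$` anchors matter: without them the pattern would also capture subpackages such as pg-protocol.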
```typescript
import { Client } from 'pg';

const PG_CONFIG = {
  host: process.env.PGHOST || 'localhost',
  port: Number(process.env.PGPORT) || 5432,
  user: process.env.PGUSER || 'postgres',
  password: process.env.PGPASSWORD || 'postgres123!',
  database: process.env.PGDATABASE || 'constructive',
};

const JOB_TIMEOUT = 15000;

describe('sql-example e2e', () => {
  let client: Client;
  let databaseId: string;

  beforeAll(async () => {
    client = new Client(PG_CONFIG);
    await client.connect();

    const dbResult = await client.query(
      `SELECT id FROM metaschema_public.database LIMIT 1`
    );
    if (dbResult.rows.length === 0) {
      throw new Error('No database record found in metaschema_public.database');
    }
    databaseId = dbResult.rows[0].id;
  });

  afterAll(async () => {
    await client.end();
  });

  it('should process sql-example job via job queue', async () => {
    const payload = {};

    const insertResult = await client.query(
      `SELECT * FROM app_jobs.add_job($1, 'sql-example'::text, $2::json)`,
      [databaseId, JSON.stringify(payload)]
    );

    const jobId = insertResult.rows[0].id;
    expect(jobId).toBeDefined();

    const startTime = Date.now();
    let completed = false;
    let jobResult: any;

    while (Date.now() - startTime < JOB_TIMEOUT) {
      const statusResult = await client.query(
        `SELECT * FROM app_jobs.jobs WHERE id = $1`,
        [jobId]
      );

      if (statusResult.rows.length === 0) {
        completed = true;
        break;
      }

      const job = statusResult.rows[0];
      if (job.last_error) {
        throw new Error(`Job failed: ${job.last_error}`);
      }

      await new Promise((r) => setTimeout(r, 500));
    }
```

This e2e test reimplements the Postgres connection + job polling loop, and also leaves an unused jobResult variable. The rest of the e2e suite uses shared helpers (getTestConnections / getDatabaseId from tests/e2e/utils/db and addJob / waitForJobComplete from tests/e2e/utils/jobs) to keep behavior consistent and avoid drift—please refactor this test to use those utilities.
Force-pushed 15fedd0 to 9f324a9
Force-pushed 9f324a9 to c868520
Pull request overview
Copilot reviewed 17 out of 19 changed files in this pull request and generated 6 comments.
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```typescript
const handler: FunctionHandler<Params, Result> = async (params, context) => {
  const { log, withUserContext } = context;
  const { query = 'SELECT version()', actor_id } = params;

  log.info('[sql-example] Executing query', { query, actor_id });

  const result = await withUserContext(actor_id, async (client) => {
    const res = await client.query(query);
    return res.rows;
  });

  log.info('[sql-example] Query complete', { rowCount: result.length });

  return {
    success: true,
    message: 'Query executed successfully',
```

This function executes an arbitrary SQL string from job payload (params.query). Since jobs can be user-influenced in many systems, this is effectively a remote SQL execution endpoint and a strong footgun for copy/paste. Suggestion: change the example to a fixed query (or an allowlisted set of queries) and demonstrate parameterized queries instead of accepting raw SQL.

Suggested change:

```typescript
type AllowedQuery = {
  key: 'version';
  text: string;
  values: unknown[];
};

const resolveAllowedQuery = (query?: string): AllowedQuery => {
  const normalizedQuery = query?.trim();
  if (normalizedQuery == null || normalizedQuery === '' || normalizedQuery === 'SELECT version()') {
    return {
      key: 'version',
      text: 'SELECT version()',
      values: [],
    };
  }
  throw new Error('Unsupported query. This example only allows the version query.');
};

const handler: FunctionHandler<Params, Result> = async (params, context) => {
  const { log, withUserContext } = context;
  const { actor_id } = params;

  const allowedQuery = resolveAllowedQuery(params.query);

  log.info('[sql-example] Executing allowlisted query', {
    queryKey: allowedQuery.key,
    actor_id,
  });

  const result = await withUserContext(actor_id, async (client) => {
    const res = await client.query(allowedQuery.text, allowedQuery.values);
    return res.rows;
  });

  log.info('[sql-example] Query complete', {
    queryKey: allowedQuery.key,
    rowCount: result.length,
  });

  return {
    success: true,
    message: 'Allowlisted query executed successfully',
```
| "name": "sql-example", | ||
| "version": "1.0.0", | ||
| "type": "node-sql", | ||
| "description": "Example function using node-sql template for direct PostgreSQL access" |
There was a problem hiding this comment.
handler.json omits an explicit port, but local dev wiring hardcodes 8085 for sql-example (skaffold profile and job-service function registry). To avoid port drift when new functions are added (generate.ts auto-assigns ports), set "port": 8085 here to make the assignment stable.
| "description": "Example function using node-sql template for direct PostgreSQL access" | |
| "description": "Example function using node-sql template for direct PostgreSQL access", | |
| "port": 8085 |
```typescript
const startTime = Date.now();
let completed = false;
let jobResult: any;

while (Date.now() - startTime < JOB_TIMEOUT) {
  const statusResult = await client.query(
    `SELECT * FROM app_jobs.jobs WHERE id = $1`,
    [jobId]
  );

  if (statusResult.rows.length === 0) {
    completed = true;
    break;
  }

  const job = statusResult.rows[0];
  if (job.last_error) {
    throw new Error(`Job failed: ${job.last_error}`);
  }

  await new Promise((r) => setTimeout(r, 500));
}
```

jobResult is declared but never used, and the manual polling loop duplicates the existing waitForJobComplete() helper (which also returns structured timeout/failure info). Suggestion: remove the unused variable and use waitForJobComplete(pg, jobId, { timeout: ... }) for clearer failure diagnostics.
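The shared helper itself is not shown in this thread, so as a rough sketch of the shape such a polling utility typically takes (the name, options, and return type below are assumptions, not the actual tests/e2e/utils/jobs API):

```typescript
// Hypothetical sketch of a generic polling helper in the spirit of
// waitForJobComplete(): returns a structured result instead of silently
// falling through on timeout.
type PollResult<T> = { status: 'complete'; value: T } | { status: 'timeout' };

async function pollUntil<T>(
  check: () => Promise<T | undefined>,
  { timeoutMs = 15_000, intervalMs = 500 } = {}
): Promise<PollResult<T>> {
  const deadline = Date.now() + timeoutMs;
  // Poll until check() yields a value, sleeping between attempts.
  while (Date.now() < deadline) {
    const value = await check();
    if (value !== undefined) {
      return { status: 'complete', value };
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return { status: 'timeout' };
}
```

Centralizing the loop this way means a timeout surfaces as an explicit result the test can assert on, rather than an `completed` flag that may never be checked.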
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: {{name}}
  namespace: constructive-functions
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/constructive-io/{{name}}-fn:latest
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: pg-credentials
          env:
            - name: NODE_ENV
              value: "production"
            - name: LOG_LEVEL
              value: "info"
          resources:
```

The Knative Service manifest here is missing several fields that are present in the existing node-graphql template (labels/annotations including networking.knative.dev/visibility: cluster-local, autoscaling annotations, PORT env, etc.). If these are required for internal-only routing and consistent ops defaults, mirror the same metadata/annotations structure from templates/node-graphql/k8s/knative-service.yaml to avoid exposing the service publicly or changing scaling behavior unexpectedly.
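For reference, a sketch of what mirroring those fields might look like. The specific annotation values below are assumptions; copy the real ones from templates/node-graphql/k8s/knative-service.yaml rather than from this sketch.

```yaml
# Hypothetical sketch; verify values against the node-graphql template.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: {{name}}
  labels:
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # assumed default
    spec:
      containers:
        - image: ghcr.io/constructive-io/{{name}}-fn:latest
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
```

The cluster-local label keeps the Service off the external ingress, which is the main risk the comment flags.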
```typescript
const mockQuery = jest.fn();
const mockRelease = jest.fn();

jest.mock('pg', () => ({
  Pool: jest.fn().mockImplementation(() => ({
    connect: jest.fn().mockResolvedValue({
      query: mockQuery,
      release: mockRelease,
    }),
  })),
  Client: jest.fn().mockImplementation(() => ({
    connect: jest.fn(),
    query: mockQuery,
    end: jest.fn(),
  })),
}));
```

This test mocks pg, but the handler under test doesn’t import or use pg (it only uses context.withUserContext). The mock adds noise and can hide accidental dependencies; consider removing it and instead asserting the withUserContext callback calls client.query with the expected SQL.
```typescript
import { Client } from 'pg';

const PG_CONFIG = {
  host: process.env.PGHOST || 'localhost',
  port: Number(process.env.PGPORT) || 5432,
  user: process.env.PGUSER || 'postgres',
  password: process.env.PGPASSWORD || 'postgres123!',
  database: process.env.PGDATABASE || 'constructive',
};

const JOB_TIMEOUT = 15000;

describe('sql-example e2e', () => {
  let client: Client;
  let databaseId: string;

  beforeAll(async () => {
    client = new Client(PG_CONFIG);
    await client.connect();

    const dbResult = await client.query(
      `SELECT id FROM metaschema_public.database LIMIT 1`
    );
    if (dbResult.rows.length === 0) {
      throw new Error('No database record found in metaschema_public.database');
    }
    databaseId = dbResult.rows[0].id;
  });
```

This e2e test reimplements DB connection setup and job polling logic that already exists in tests/e2e/utils/db.ts and tests/e2e/utils/jobs.ts (including a more robust getDatabaseId() with ordering and waitForJobComplete()). Suggestion: switch to those helpers to reduce duplication and flakiness (e.g. ORDER BY created_at when picking a database).
Force-pushed 31e2436 to c868520
Pull request overview
Copilot reviewed 17 out of 19 changed files in this pull request and generated 5 comments.
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```typescript
if (actorId) {
  await client.query(`SELECT set_config('jwt.claims.user_id', $1, true)`, [actorId]);
  await client.query('SET LOCAL ROLE authenticated');
}
```

withUserContext() only executes SET LOCAL ROLE authenticated when actorId is provided. When actorId is undefined, queries run as the connection user (e.g. postgres), which can bypass RLS and makes the helper unsafe-by-default for jobs that forget to pass actorId. Consider always setting an explicit safe role (e.g. anon/authenticated) within the transaction, or making role selection an explicit parameter/env option so callers can’t accidentally run privileged queries.
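One way to make the helper safe by default is to compute the session-setup statements from an explicit role decision before touching the client. A hypothetical sketch (the anon/authenticated role names are borrowed from the comment above and common RLS setups, not confirmed against this repo's schema):

```typescript
// Hypothetical sketch: every code path picks an explicit role, so a missing
// actorId can never leave queries running as the connection superuser.
type Statement = { text: string; values: unknown[] };

function sessionSetupStatements(actorId?: string): Statement[] {
  const statements: Statement[] = [];
  if (actorId) {
    // Scope the user id to the current transaction (is_local = true).
    statements.push({
      text: `SELECT set_config('jwt.claims.user_id', $1, true)`,
      values: [actorId],
    });
    statements.push({ text: 'SET LOCAL ROLE authenticated', values: [] });
  } else {
    // Fall back to a low-privilege role instead of the connection user.
    statements.push({ text: 'SET LOCAL ROLE anon', values: [] });
  }
  return statements;
}
```

withUserContext() would then run each statement inside the transaction before invoking the callback, making the anonymous path an explicit, testable branch rather than an accidental privilege grant.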
```typescript
const { query = 'SELECT version()', actor_id } = params;

log.info('[sql-example] Executing query', { query, actor_id });

const result = await withUserContext(actor_id, async (client) => {
  const res = await client.query(query);
  return res.rows;
```

This example handler executes a raw SQL string from job payload (client.query(query)), which enables arbitrary SQL execution if a caller can enqueue/modify jobs. Even for an example function, this is a high-risk footgun once deployed via the job service registry. Prefer using a fixed demonstration query (e.g. SELECT version()) or a strict allowlist/parameterized queries only, and avoid accepting arbitrary SQL from untrusted input.
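To show the parameterized style the comment recommends, a hypothetical sketch; the table and column names are invented purely for illustration and do not exist in this repo:

```typescript
// Hypothetical sketch of a parameterized query: the SQL text is fixed and
// user input travels only as a bind value, never as statement text.
type DemoQuery = { text: string; values: unknown[] };

function buildUserLookupQuery(userId: string): DemoQuery {
  // $1 is a bind parameter; pg sends values separately from the SQL text,
  // so crafted input cannot change which statement is executed.
  return {
    text: 'SELECT id, name FROM app_public.users WHERE id = $1',
    values: [userId],
  };
}
```

The handler would pass `text` and `values` to client.query() as two arguments, which is the shape pg's parameterized-query API expects.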
```typescript
const PG_CONFIG = {
  host: process.env.PGHOST || 'localhost',
  port: Number(process.env.PGPORT) || 5432,
  user: process.env.PGUSER || 'postgres',
  password: process.env.PGPASSWORD || 'postgres123!',
  database: process.env.PGDATABASE || 'constructive',
};

const JOB_TIMEOUT = 15000;

describe('sql-example e2e', () => {
  let client: Client;
  let databaseId: string;

  beforeAll(async () => {
    client = new Client(PG_CONFIG);
    await client.connect();

    const dbResult = await client.query(
      `SELECT id FROM metaschema_public.database LIMIT 1`
    );
    if (dbResult.rows.length === 0) {
      throw new Error('No database record found in metaschema_public.database');
    }
    databaseId = dbResult.rows[0].id;
  });

  afterAll(async () => {
    await client.end();
  });

  it('should process sql-example job via job queue', async () => {
    const payload = {};

    const insertResult = await client.query(
      `SELECT * FROM app_jobs.add_job($1, 'sql-example'::text, $2::json)`,
      [databaseId, JSON.stringify(payload)]
    );

    const jobId = insertResult.rows[0].id;
    expect(jobId).toBeDefined();

    const startTime = Date.now();
    let completed = false;
    let jobResult: any;

    while (Date.now() - startTime < JOB_TIMEOUT) {
      const statusResult = await client.query(
        `SELECT * FROM app_jobs.jobs WHERE id = $1`,
        [jobId]
      );
```

This e2e test re-implements PG connection setup, add_job insertion, and polling logic that already exists in tests/e2e/utils (getTestConnections/getDatabaseId/addJob/waitForJobComplete). Reusing the shared helpers would reduce duplication and provide better timeout diagnostics (currently it only asserts completed after 15s). Also jobResult is declared but never used.
```typescript
if (file.endsWith('.d.ts')) {
  if (!tsconfig.include) {
    tsconfig.include = [];
  }
  if (!tsconfig.include.includes(file)) {
    tsconfig.include.push(file);
  }
```

processTsconfig() mutates tsconfig.json to add *.d.ts from the function dir, but when the template tsconfig has no "include" (e.g. templates/node-graphql/tsconfig.json), the generated tsconfig.json will end up with include=["types.d.ts"], overriding the base include and excluding index.ts/handler.ts. This will break builds for functions like send-email-link that have a types.d.ts. Consider appending to an existing include, but if include is absent, add the d.ts to a separate "files" list (or seed include with the base entries) so inherited include from tsconfig.base.json remains effective.
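The suggested behavior can be sketched as a small pure function. This is an assumption about how processTsconfig() could be restructured, not the actual generate.ts code; it relies on the TS behavior that "files" and "include" are combined, so adding to "files" does not override an inherited "include".

```typescript
// Hypothetical sketch: append a .d.ts without clobbering an inherited include.
type Tsconfig = { include?: string[]; files?: string[]; [key: string]: unknown };

function addDeclaration(tsconfig: Tsconfig, file: string): Tsconfig {
  if (!file.endsWith('.d.ts')) return tsconfig;
  if (tsconfig.include) {
    // An explicit include already exists, so appending cannot hide entries.
    if (!tsconfig.include.includes(file)) tsconfig.include.push(file);
  } else {
    // No local include: use "files" so the base config's include stays effective.
    tsconfig.files = tsconfig.files ?? [];
    if (!tsconfig.files.includes(file)) tsconfig.files.push(file);
  }
  return tsconfig;
}
```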
```diff
@@ -0,0 +1,3 @@
+{
+  "extends": "./tsconfig.base.json"
```

This tsconfig relies on tsconfig.base.json for "include", but scripts/generate.ts may add function-local *.d.ts into tsconfig.json, which would introduce an explicit "include" and override the base include. To avoid accidentally compiling only the *.d.ts (and skipping index.ts/handler.ts), consider explicitly setting "include" here to include index.ts/handler.ts (and let the generator append d.ts), or adjust the generator to add d.ts via "files" when include is absent.

Suggested change:

```diff
-  "extends": "./tsconfig.base.json"
+  "extends": "./tsconfig.base.json",
+  "include": ["index.ts", "handler.ts"]
```
Pull request overview
Copilot reviewed 17 out of 19 changed files in this pull request and generated 4 comments.
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```typescript
const databaseId = req.get('X-Database-Id') || req.get('x-database-id') || process.env.DEFAULT_DATABASE_ID;
const currentPool = getPool();

const context = {
  job: {
    jobId: req.get('X-Job-Id') || req.get('x-job-id'),
    workerId: req.get('X-Worker-Id') || req.get('x-worker-id'),
```

This reads the same header twice with different casing (req.get('X-Database-Id') || req.get('x-database-id'), etc.). Express header lookup is already case-insensitive, so the lowercase fallbacks are redundant and add noise—prefer a single req.get(...) call for each header (consistent with packages/fn-app/src/index.ts).

Suggested change:

```typescript
const databaseId = req.get('X-Database-Id') || process.env.DEFAULT_DATABASE_ID;
const currentPool = getPool();

const context = {
  job: {
    jobId: req.get('X-Job-Id'),
    workerId: req.get('X-Worker-Id'),
```
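As a tiny model of why the lowercase fallback is redundant: Node's http parser stores incoming header names lowercased, and Express's req.get() lowercases the lookup name to match, so one call covers every casing. The function below is an illustration of that behavior, not Express code.

```typescript
// Minimal model of Express's case-insensitive header lookup: incoming header
// names are already lowercased, so lowercasing the lookup name suffices.
function getHeader(
  headers: Record<string, string | undefined>,
  name: string
): string | undefined {
  return headers[name.toLowerCase()];
}
```

Under this model, req.get('X-Database-Id') and req.get('x-database-id') resolve to the same stored entry.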
```typescript
jest.mock('pg', () => ({
  Pool: jest.fn().mockImplementation(() => ({
    connect: jest.fn().mockResolvedValue({
      query: mockQuery,
      release: mockRelease,
    }),
  })),
  Client: jest.fn().mockImplementation(() => ({
    connect: jest.fn(),
    query: mockQuery,
    end: jest.fn(),
  })),
}));
```

The jest.mock('pg', ...) block is unused in these tests: the handler doesn’t import pg, and the withUserContext/client are fully mocked via createMockContext(). Removing this mock would simplify the test and avoid implying that the handler is responsible for pool/client creation.

Suggested change: delete the jest.mock('pg', ...) block above.
```shell
echo "Waiting for job-service to start processing jobs..."
for i in {1..30}; do
  if kubectl logs -n constructive-functions -l app=knative-job-service --tail=50 2>/dev/null | grep -q "starting jobs service"; then
    echo "Job-service is ready!"
    break
  fi
  echo "  Attempt $i/30: job-service not ready yet..."
  sleep 2
done
```

This readiness loop never fails the job if the log line isn’t observed within the timeout (it just falls through). If job-service fails to start, the workflow will proceed and fail later in a less actionable way. Track whether readiness was detected and exit 1 (after printing diagnostics) when the loop exhausts.

Suggested change:

```shell
echo "Waiting for job-service to start processing jobs..."
ready=false
for i in {1..30}; do
  if kubectl logs -n constructive-functions -l app=knative-job-service --tail=50 2>/dev/null | grep -q "starting jobs service"; then
    echo "Job-service is ready!"
    ready=true
    break
  fi
  echo "  Attempt $i/30: job-service not ready yet..."
  sleep 2
done

if [ "$ready" != "true" ]; then
  echo "ERROR: job-service did not become ready within the expected timeout."
  echo "=== job-service pods ==="
  kubectl get pods -n constructive-functions -l app=knative-job-service -o wide || true
  echo "=== job-service logs ==="
  kubectl logs -n constructive-functions -l app=knative-job-service --tail=50 || true
  exit 1
fi
```
```yaml
  generated/text-embedding:
    dependencies:
      '@agentic-kit/ollama':
        specifier: ^1.0.3
        version: 1.0.3
      '@constructive-io/fn-runtime':
        specifier: workspace:^
        version: link:../../packages/fn-runtime
```

pnpm-lock.yaml now includes a new workspace importer generated/text-embedding and new dependencies (e.g. @agentic-kit/ollama, cross-fetch) that aren’t referenced anywhere else in the repo. This looks like an accidental lockfile churn (possibly from a local generated workspace). Please regenerate the lockfile from a clean state or drop these unrelated entries so the PR only introduces deps needed for node-sql / sql-example.
Force-pushed a9457ba to 9cf2b97
Force-pushed 9cf2b97 to e40a8da
Pull request overview
Copilot reviewed 18 out of 20 changed files in this pull request and generated no new comments.
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
Force-pushed e40a8da to bb03b73
Heads-up: repository history was rewritten on 2026-05-12 to scrub leaked secrets (Postgres/pgAdmin default passwords, an AWS access key ID, generated …). This PR shows "DIRTY" / merge-conflict status because the branch is based on the pre-rewrite history. To update:

```shell
git fetch --all --prune
git checkout <this-branch>
git reset --hard origin/<this-branch>  # your local branch tip moved too — pick up the rewritten version
git rebase origin/main                 # rebase onto rewritten main
# resolve conflicts (usually trivial — mostly secret-placeholder + the 4 deleted interweb-*.yaml files)
git push --force-with-lease
```

Or merge instead of rebase if the branch is shared with others:

```shell
git merge origin/main
git push
```

Notes:
Extracted from lucas/node-sql-template-v2 branch and adapted to dynamic registry architecture.

Changes:
- Add templates/node-sql for direct PostgreSQL access functions
- Add functions/sql-example using the node-sql template
- Add e2e test for sql-example

The node-sql template provides withUserContext() for executing queries with proper actor context, unlike node-graphql which uses GraphQL clients.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Force-pushed bb03b73 to 1d70712
Summary
- `node-sql` template for functions that need direct PostgreSQL access
- `sql-example` function to test the template works

node-sql template features

- `withUserContext` helper for RLS (Row Level Security) support
- `jwt.claims` for database context and user context

sql-example function

- `withUserContext`

Test plan
🤖 Generated with Claude Code