Changes from all commits
37 commits
eb5a904
functions testing harness with simple-bash, hello-world, and runtime-…
Jovonni Jan 8, 2026
ce3bce3
various function prototypes, working on common function contract
Jovonni Jan 8, 2026
6559bdf
added cicd
Jovonni Jan 9, 2026
0ad374c
updated various functions, with global runner
Jovonni Jan 9, 2026
7177a64
twilio, and using new envvars
Jovonni Jan 9, 2026
23d8ae0
added new env vars to the test-runner.ts, can pass all (for now, TODO)
Jovonni Jan 9, 2026
ba1f218
dotenv dep for .env
Jovonni Jan 9, 2026
c07e4ef
updated several functions
Jovonni Jan 9, 2026
f6a363f
makefile changes and scripts and lock
Jovonni Jan 9, 2026
874a38e
chore: ignore opencode binary artifact
Jovonni Jan 9, 2026
4c136e4
Merge pull request #4 from constructive-io/d/function-testing-foundation
Jovonni Jan 10, 2026
531d0a2
warn on gql fallback
Jovonni Jan 10, 2026
c8bca84
added matrix
Jovonni Jan 10, 2026
219dc23
calvin api as envvar
Jovonni Jan 10, 2026
d4163e9
Merge pull request #6 from constructive-io/dev/testing-strategy
Jovonni Jan 10, 2026
0a6b034
feat: add pgpm-dump function and standardize k8s test runner to v4
Jovonni Jan 11, 2026
dfce124
fix(ci): make kind binary path resolution dynamic in Makefile
Jovonni Jan 11, 2026
ecd7c5d
fix(ci): parameterize KIND_CLUSTER_NAME to support CI 'local' cluster
Jovonni Jan 11, 2026
c34dc03
fix(ci): remove unconfigured submodule _calvincode_build from git index
Jovonni Jan 11, 2026
22aa874
Merge pull request #5 from constructive-io/dev/various-functions-2
Jovonni Jan 11, 2026
9584893
add node runtime and runner logic
Jovonni Jan 17, 2026
a14959b
setup build scripts and root configuration
Jovonni Jan 17, 2026
19fe2bc
add hello world function infrastructure
Jovonni Jan 18, 2026
e40a358
add simple bash function infrastructure
Jovonni Jan 19, 2026
ef49f51
add db management functions infrastructure
Jovonni Jan 20, 2026
56693ed
scaffold llm functions infrastructure
Jovonni Jan 21, 2026
4ac8697
add integration functions infrastructure
Jovonni Jan 22, 2026
e19e8ae
add auth and github utilities infrastructure
Jovonni Jan 23, 2026
a64bddd
add general utility functions infrastructure
Jovonni Jan 24, 2026
490395e
add experimental language runners infrastructure
Jovonni Jan 25, 2026
aee097f
implement core logic for all functions using sdk
Jovonni Jan 26, 2026
c6ea1f4
fix test runner rbac permissions
Jovonni Jan 27, 2026
6c4b8f9
update kubernetes manifests for deployment
Jovonni Jan 27, 2026
e2817fa
fix all cloud functions tests and update verification
Jovonni Jan 27, 2026
d68f38c
last fix for pytorch gpu and rust hello world funcs
Jovonni Jan 27, 2026
d883c73
cleanup repo artifacts and track vendored sdk
Jovonni Jan 27, 2026
e714a8a
refactor: use single source sdk.tgz, robust makefile, and fixed tests
Jovonni Jan 27, 2026
73 changes: 73 additions & 0 deletions .agent/rules/new-func-requirements.md
@@ -0,0 +1,73 @@
---
trigger: always_on
---

NOTE: ALL INDEX.TS FILES MUST HAVE A TYPED GRAPHQL QUERY. PERIOD. DON'T EVER USE A STRING-BASED GQL QUERY. EVER.
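A minimal illustration of what the rule above means in practice. Everything here is invented for the sketch: `TypedDocument`, `GetUserDocument`, and the stubbed `execute` are stand-ins for whatever codegen/client the repo actually uses, not its real SDK. The point is that result and variable shapes travel with the query, so callers get compile-time checking instead of a bare string.

```typescript
// Phantom-typed document: TResult/TVariables exist only at the type level.
interface TypedDocument<TResult, TVariables> {
  query: string;
  __result?: TResult;
  __variables?: TVariables;
}

interface GetUserResult { user: { id: string; email: string } }
interface GetUserVars { id: string }

// The GraphQL string lives in exactly one typed document, never passed around bare.
const GetUserDocument: TypedDocument<GetUserResult, GetUserVars> = {
  query: "query GetUser($id: ID!) { user(id: $id) { id email } }",
};

function execute<TResult, TVariables>(
  doc: TypedDocument<TResult, TVariables>,
  variables: TVariables
): TResult {
  // Stubbed transport for this sketch; a real index.ts would POST doc.query
  // plus variables to the GraphQL endpoint and parse the response.
  void doc;
  const id = (variables as { id?: string }).id ?? "";
  return { user: { id, email: "stub@example.com" } } as unknown as TResult;
}

// Both the variables argument and the result are checked against the document's types.
const result = execute(GetUserDocument, { id: "42" });
console.log(result.user.id);
```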

okay so how do we know everything below is implemented and tested then? THEN ANALYSE ALL OF THE GIT STATUS AND GIT DIFFS TO EDUCATE ME ON WHAT WAS CHANGED AND HOW IT ALIGNS WITH WHAT WAS ASKED OF ME BELOW:

let's clean up some english, looks fun, maybe that helps the llm lol. I'd just have an agent pass over to clean up grammar/spelling


systematically, we need to know

TRIPLE CHECK THE BOTTOM requirements to see if we have addressed them all lol

IDK, YOU SHOULD PROB GO STUDY THE CONSTRUCTIVE-DB REPO AND SEARCH FOR THE SERVICES PACKAGE TO SEE WHAT WE CAN USE THERE OR SOMETHING... CUZ YOU SHOULDN'T BE MAKING SQL FILES....

I SEE YOU MADE SCHEMAS, BUT ANY SCHEMA SHOULD BE A PGPM MODULE INSIDE OF THE CONSTRUCTIVE-DB REPO.... SO I'M NOT SURE WHY YOU EVEN DID THAT. DO WE NEED THE SCHEMAS? WERE WE ASKED TO DO THAT IN OUR ORIGINAL ASKS HERE? HELP ME UNDERSTAND WHY YOU DID THIS.

OLD PROMPTS:

ONLY TOUCH CONSTRUCTIVE-FUNCTIONS...CONTINUE:

WE ARE WORKING IN CONSTRUCTIVE-FUNCTIONS REPO:

GO MAKE SURE YOU IMPLEMENT THIS, COME UP WITH A DETAILED VERBOSE PLAN TO DO THIS

okay now come up with a strategy to achieve the following criteria. Break these down into a checklist of criteria:

ACTUAL TASKS:

```
For the functions: I think we want a couple of features:
Functions should be importable and publishable.
That way, when running them locally in a combined server of sorts, we should be able to import them into the server and run them
Each function should be configurable with env vars, or configs:
one config file should be able to provide overrides for each of the components, so config could be loaded from individual config files, or a combined one
Each function should have its own docker image:
currently we have one large docker image with everything, and run functions from there
Each function should be runnable locally:
a function does not need to know anything about knative, so it should be able to run as a local server in docker-compose or with pnpm directly as well
cnc cli should be able to invoke functions, similar to the cnc jobs up commands
cc: @Zhi Zhen (note that we would eventually not use subdir constructive/functions but the other repo: constructive-functions).
```
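The feature list above can be sketched as a tiny contract. Everything here is an assumption for illustration, not the repo's actual SDK: the `CloudFunction` shape, `loadConfig`, and the per-function env-var prefix convention (e.g. `HELLO_WORLD_TARGET`) are invented to show how "importable, configurable, runnable locally" can fit together.

```typescript
// Assumed function contract: a name plus a handler, configured at call time.
interface FunctionConfig { [key: string]: string | undefined }

interface CloudFunction {
  name: string;
  handler: (input: unknown, config: FunctionConfig) => Promise<unknown>;
}

// Config resolution: combined-file overrides come first, then env vars win.
function loadConfig(
  name: string,
  combined: Record<string, FunctionConfig> = {},
  env: Record<string, string | undefined> = process.env
): FunctionConfig {
  const prefix = name.toUpperCase().replace(/-/g, "_") + "_";
  const fromEnv: FunctionConfig = {};
  for (const [k, v] of Object.entries(env)) {
    if (k.startsWith(prefix)) fromEnv[k.slice(prefix.length)] = v;
  }
  return { ...(combined[name] ?? {}), ...fromEnv };
}

// A function module just exports one of these, with no knative knowledge...
const helloWorld: CloudFunction = {
  name: "hello-world",
  handler: async (_input, config) => ({ message: `hello ${config.TARGET ?? "world"}` }),
};

// ...so a combined local server (docker-compose or plain pnpm) can import and run many:
async function serve(fns: CloudFunction[], combined: Record<string, FunctionConfig>) {
  for (const fn of fns) {
    const out = await fn.handler({}, loadConfig(fn.name, combined));
    console.log(fn.name, JSON.stringify(out));
  }
}

serve([helloWorld], { "hello-world": { TARGET: "local" } });
```

The same exported object could be wrapped by a per-function Docker entrypoint, which keeps the "one image per function" and "run with pnpm directly" goals from diverging.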


also we need:


1) creating a database in a function
2) being able to run a function as the user
We made issues earlier, but basically our ingress was blocking long requests, so we want to start tracking flow now


AND

also, we should discuss another one:
3) keeping a service db in sync with child databases
like a router database for when we get into sharding
this would be key for scale
services_public would exist on all the databases
but the children dbs would push to the router
then we can have multiple graphile nodejs processes that look at ONE services_public on the router/services db to figure out which databases to connect to
that gives us some type of API sharding
I think that, combined with moving data between databases, puts us in decent shape
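The router idea described above can be sketched in miniature. The `RouterDb` class and its row shape are invented stand-ins for a real `services_public` table; in practice the children would upsert over SQL and the graphile processes would query the router database rather than an in-memory map.

```typescript
// Invented row shape: which database serves a given logical service/tenant.
interface ServiceRow { serviceId: string; databaseUrl: string }

// In-memory stand-in for services_public on the router/services database.
class RouterDb {
  private rows = new Map<string, ServiceRow>();
  // Child databases push (upsert) their own registration to the router.
  upsert(row: ServiceRow): void { this.rows.set(row.serviceId, row); }
  // Graphile nodejs processes look at this ONE table to pick a connection.
  lookup(serviceId: string): string | undefined {
    return this.rows.get(serviceId)?.databaseUrl;
  }
}

const router = new RouterDb();
// Children register themselves (e.g. on startup or after a migration).
router.upsert({ serviceId: "tenant-a", databaseUrl: "postgres://shard1/tenant_a" });
router.upsert({ serviceId: "tenant-b", databaseUrl: "postgres://shard2/tenant_b" });

// An API process resolves a shard without hardcoding the layout.
console.log(router.lookup("tenant-b")); // → postgres://shard2/tenant_b
```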
5 changes: 5 additions & 0 deletions .dockerignore
@@ -0,0 +1,5 @@
**/node_modules
packages
dist
.git
.env
41 changes: 41 additions & 0 deletions .github/workflows/test-k8s-deployment.yaml
@@ -25,6 +25,24 @@ jobs:
k8s-ci-test:
runs-on: ubuntu-latest
timeout-minutes: 45
strategy:
fail-fast: false
matrix:
function:
- hello-world
- llm-internal-calvin
- opencode-headless
- twilio-sms
- llm-external
- send-email-link
- crypto-login
- github-repo-creator
- pytorch-gpu
- runtime-script
- rust-hello-world
- simple-bash
- simple-email
- stripe-function

steps:
- name: Checkout
@@ -196,6 +214,29 @@ jobs:
echo "All pods (final):" && kubectl get pods -A
echo "Knative services:" && kubectl get ksvc -A || true

- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'

- name: Install pnpm
uses: pnpm/action-setup@v2
with:
version: 9

- name: Install dependencies
run: pnpm install --no-frozen-lockfile

- name: Build and Load Test Runner Image
run: |
make build-test-runner KIND_CLUSTER_NAME=local

- name: Run K8s Tests
run: |
# Ensure kubectl proxy port is available or managed by the runner
pnpm exec ts-node scripts/test-runner.ts --function ${{ matrix.function }}


- name: Dump diagnostics on failure
if: always()
run: |
3 changes: 3 additions & 0 deletions .gitignore
@@ -61,6 +61,7 @@ web_modules/

# Output of 'npm pack'
*.tgz
!sdk.tgz

# Yarn Integrity file
.yarn-integrity
@@ -137,3 +138,5 @@ dist
# Vite logs files
vite.config.js.timestamp-*
vite.config.ts.timestamp-*
functions/opencode-headless/_calvincode_build
functions/opencode-headless/bin/
125 changes: 117 additions & 8 deletions Makefile
@@ -1,30 +1,47 @@
.PHONY: build clean lint docker-build docker-build-simple-email docker-build-send-email-link docker-push docker-push-simple-email docker-push-send-email-link
.PHONY: build clean lint test test-all build-test-runner docker-build docker-build-simple-email docker-build-send-email-link docker-push docker-push-simple-email docker-push-send-email-link

REGISTRY := ghcr.io/constructive-io/constructive-functions
# Detect kind binary (search PATH, fallback to Homebrew)
KIND_BIN := $(shell which kind)
ifeq ($(KIND_BIN),)
KIND_BIN := /opt/homebrew/bin/kind
endif
KIND_CLUSTER_NAME ?= interweb-local

SUBDIRS := functions/hello-world functions/simple-email functions/send-email-link functions/runtime-script

build:
pnpm run build
pnpm -r build

clean:
pnpm run clean
pnpm -r clean

lint:
pnpm run lint
pnpm -r lint

test:
pnpm -r test

# Docker Build & Push (Restored)
docker-build-runtime:
@echo "Building Shared Node Runtime..."
docker build -t constructive/node-runtime:latest functions/_runtimes/node -f functions/_runtimes/node/Dockerfile.runtime

docker-build: docker-build-runtime

docker-build:
@echo "Building Docker images for functions..."
@for fn in functions/*; do \
if [ -f "$$fn/Dockerfile" ]; then \
echo "Building $$fn..."; \
docker build -t "$(REGISTRY)/$$(basename $$fn):latest" "$$fn"; \
docker build -t "$(REGISTRY)/$$(basename $$fn):latest" -f "$$fn/Dockerfile" .; \
fi \
done

docker-build-simple-email:
docker build -t $(REGISTRY)/simple-email:latest functions/simple-email
docker build -t $(REGISTRY)/simple-email:latest -f functions/simple-email/Dockerfile .

docker-build-send-email-link:
docker build -t $(REGISTRY)/send-email-link:latest functions/send-email-link
docker build -t $(REGISTRY)/send-email-link:latest -f functions/send-email-link/Dockerfile .

docker-push:
@echo "Pushing Docker images to $(REGISTRY)..."
@@ -40,3 +57,95 @@ docker-push-simple-email:

docker-push-send-email-link:
docker push $(REGISTRY)/send-email-link:latest

# Bulk Kind Load
kind-load-all:
@echo "Loading all function images into Kind..."
@for fn in functions/*; do \
if [ -f "$$fn/Dockerfile" ]; then \
echo "Loading $$fn..."; \
$(KIND_BIN) load docker-image "$(REGISTRY)/$$(basename $$fn):latest" --name $(KIND_CLUSTER_NAME); \
fi \
done

# Kubernetes Test Runner
# Run All Tests inside K8s (Centralized Runner)
# Depends on building and loading ALL images to ensure environment is complete.
test-k8s-all: docker-build kind-load-all
@echo "Running all K8s tests via centralized KubernetesJS runner..."
pnpm exec ts-node scripts/test-runner.ts

# Generic target to run specific function test (e.g., make test-k8s-hello-world)
test-k8s-%:
@echo "Running K8s test for function: $*"
pnpm exec ts-node scripts/test-runner.ts --function $*

build-test-runner:
@echo "Building Shared Test Runner Image..."
docker build -f functions/_runtimes/node/Dockerfile.test -t constructive/function-test-runner:v9 .
$(KIND_BIN) load docker-image constructive/function-test-runner:v9 --name $(KIND_CLUSTER_NAME)

rebuild-all-runners: build-test-runner
@echo "All runners rebuilt and loaded into Kind."

# Individual Test Shortcuts
test-k8s-create-db:
pnpm exec ts-node scripts/test-runner.ts --function create-db

test-k8s-crypto-login:
pnpm exec ts-node scripts/test-runner.ts --function crypto-login

test-k8s-github-repo-creator:
pnpm exec ts-node scripts/test-runner.ts --function github-repo-creator

test-k8s-hello-world:
pnpm exec ts-node scripts/test-runner.ts --function hello-world

test-k8s-llm-external:
pnpm exec ts-node scripts/test-runner.ts --function llm-external

test-k8s-llm-internal-calvin:
pnpm exec ts-node scripts/test-runner.ts --function llm-internal-calvin

test-k8s-opencode-headless:
pnpm exec ts-node scripts/test-runner.ts --function opencode-headless

test-k8s-pgpm-dump:
pnpm exec ts-node scripts/test-runner.ts --function pgpm-dump

test-k8s-runtime-script:
pnpm exec ts-node scripts/test-runner.ts --function runtime-script

test-k8s-send-email-link:
pnpm exec ts-node scripts/test-runner.ts --function send-email-link

test-k8s-simple-bash:
pnpm exec ts-node scripts/test-runner.ts --function simple-bash

test-k8s-simple-email:
pnpm exec ts-node scripts/test-runner.ts --function simple-email

test-k8s-stripe-function:
pnpm exec ts-node scripts/test-runner.ts --function stripe-function

test-k8s-twilio-sms:
pnpm exec ts-node scripts/test-runner.ts --function twilio-sms

test-k8s-pytorch-gpu:
docker build -t constructive/pytorch-gpu:latest functions/pytorch-gpu
$(KIND_BIN) load docker-image constructive/pytorch-gpu:latest --name $(KIND_CLUSTER_NAME)
pnpm exec ts-node scripts/test-runner.ts --function pytorch-gpu

test-k8s-rust-hello-world:
docker build -t constructive/rust-hello-world:latest functions/rust-hello-world
$(KIND_BIN) load docker-image constructive/rust-hello-world:latest --name $(KIND_CLUSTER_NAME)
pnpm exec ts-node scripts/test-runner.ts --function rust-hello-world

# Cleanup K8s Resources
k8s-clean:
@echo "Cleaning up K8s jobs for constructive-functions..."
# Delete all jobs matching test-* or *-exec-* pattern (batch delete)
@kubectl get jobs -n default --no-headers -o custom-columns=":metadata.name" | grep -E "^test-|-exec-" | xargs kubectl delete job -n default --ignore-not-found || true
# Delete all pods matching test-* or *-exec-* pattern (orphaned pods) (batch delete)
@kubectl get pods -n default --no-headers -o custom-columns=":metadata.name" | grep -E "^test-|-exec-" | xargs kubectl delete pod -n default --ignore-not-found || true
@echo "Done."
74 changes: 74 additions & 0 deletions functions/_runtimes/agentic/Dockerfile.agentic
@@ -0,0 +1,74 @@
# Python runtime for LLM API inference (OpenAI & Claude)
# This module makes API calls to OpenAI and Anthropic for LLM inference
# For the full fat container with Ollama & local models, see Dockerfile.ollama
# Based on: /Users/0xj0/Documents/projects/LQL/CONSTRUCTIVE/agentic-foundation

##################### here's what's inside of (/Users/0xj0/Documents/projects/LQL/CONSTRUCTIVE/agentic-foundation) -- GO VERIFY YOURSELF

# Builder Stage
FROM rust:latest as builder
WORKDIR /app
COPY . .
# Build agent_core
RUN cargo build --release --bin agent_core

# Runtime Stage - "Fat Container"
FROM ubuntu:22.04
WORKDIR /app

# Set non-interactive install
ENV DEBIAN_FRONTEND=noninteractive

# 1. Install Basic Tools & Runtimes (Python, Node, System Utils)
RUN apt-get update && apt-get install -y \
curl wget git build-essential \
python3 python3-pip python3-venv \
nodejs npm \
postgresql-14 postgresql-client-14 \
sudo \
libnss3 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 \
libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1 libasound2 \
chromium-browser \
&& rm -rf /var/lib/apt/lists/*

# 2. Install Rust in Runtime (for the agent to use `cargo`)
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"

# 3. Install PostGraphile
RUN npm install -g pnpm && pnpm add -g postgraphile @graphile-contrib/pg-simplify-inflector

# 4. Install Ollama & Bake Models
# We install Ollama, then start it in the background to pull models into the image layers.
RUN curl -fsSL https://ollama.com/install.sh | sh

# Pre-pull Models (Using available equivalents for the '2025' spec models)
# GPT-OSS -> llama3.2 (Small, open, robust)
# Qwen3-VL -> llava (Vision model standard in Ollama)
# Devstral -> qwen2.5-coder (Excellent coding model)
# Nemotron -> mistral (Strong reasoning)
RUN nohup bash -c "ollama serve" & \
sleep 10 && \
ollama pull llama3.2 && \
ollama pull llava && \
ollama pull qwen2.5-coder && \
ollama pull mistral && \
pkill ollama

# 5. Setup Data & Permissions
RUN mkdir -p /var/lib/postgresql/data && chown -R postgres:postgres /var/lib/postgresql/data

# 6. Copy Binaries & Scripts
COPY --from=builder /app/target/release/agent_core /app/agent_core
COPY scripts/entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

# 7. Config
ENV DATABASE_URL=postgres://agent:agent@localhost:5432/agentic
ENV OLLAMA_HOST=0.0.0.0:11434

EXPOSE 3000 5432 11434 5000

ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["./agent_core"]
18 changes: 18 additions & 0 deletions functions/_runtimes/node/Dockerfile
@@ -0,0 +1,18 @@
FROM node:22-alpine

WORKDIR /usr/src/app

COPY package.json ./

RUN npm install -g pnpm@10.12.2 && pnpm install --prod

COPY dist ./dist
COPY runner.js ./runner.js

ENV NODE_ENV=production
ENV PORT=8080

USER node

CMD ["node", "runner.js", "dist/index.js"]
