```shell
pg_dump -h localhost -p 54322 -U postgres -d postgres --schema-only --no-owner --no-acl > init.sql
```

I still had to remove some objects that had been created somehow and thus caused duplication errors:
```sql
CREATE FUNCTION vault.secrets_encrypt_secret_secret() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
    new.secret = CASE WHEN new.secret IS NULL THEN NULL ELSE
        CASE WHEN new.key_id IS NULL THEN NULL ELSE pg_catalog.encode(
            pgsodium.crypto_aead_det_encrypt(
                pg_catalog.convert_to(new.secret, 'utf8'),
                pg_catalog.convert_to((new.id::text || new.description::text || new.created_at::text || new.updated_at::text)::text, 'utf8'),
                new.key_id::uuid,
                new.nonce
            ),
            'base64') END END;
    RETURN new;
END;
$$;
```
```sql
CREATE VIEW vault.decrypted_secrets AS
SELECT secrets.id,
    secrets.name,
    secrets.description,
    secrets.secret,
    CASE
        WHEN (secrets.secret IS NULL) THEN NULL::text
        ELSE
            CASE
                WHEN (secrets.key_id IS NULL) THEN NULL::text
                ELSE convert_from(pgsodium.crypto_aead_det_decrypt(decode(secrets.secret, 'base64'::text), convert_to(((((secrets.id)::text || secrets.description) || (secrets.created_at)::text) || (secrets.updated_at)::text), 'utf8'::name), secrets.key_id, secrets.nonce), 'utf8'::name)
            END
    END AS decrypted_secret,
    secrets.key_id,
    secrets.nonce,
    secrets.created_at,
    secrets.updated_at
FROM vault.secrets;
```

Then restore the data:

```shell
psql -h localhost -p 54322 -U postgres -d postgres < supabase/dump-data-06.01.2024.sql
```

Using `scoped_session` didn't work, since that object keeps references to the initial DB URL in its factories.
Besides, it's unnecessary complexity: we can simply store the session in a context variable.
Update: since we're on a serverless architecture, we don't need to worry about concurrency (all requests run in separate Lambda invocations), so we can just use a global session object.
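A minimal sketch of the context-variable approach. The `Session` class here is a stand-in for a real SQLAlchemy session (so the project doesn't need to be installed to follow the idea), and `get_session` and the default URL are illustrative names, not the actual project code:

```python
from contextvars import ContextVar
from typing import Optional


class Session:
    """Stand-in for a SQLAlchemy session; illustrative only."""
    def __init__(self, url: str):
        self.url = url


# One context variable holds the current session; each request context
# (or each Lambda invocation) gets its own value.
_session_var: ContextVar[Optional[Session]] = ContextVar("db_session", default=None)


def get_session(url: str = "postgresql://localhost:54322/postgres") -> Session:
    # Lazily create the session the first time it's requested in this context,
    # then reuse it for every subsequent call in the same context.
    session = _session_var.get()
    if session is None:
        session = Session(url)
        _session_var.set(session)
    return session
```

With one invocation per request (as on Lambda), a plain module-level global works just as well, which is why the update above drops the context variable entirely.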
We run on AWS Fargate, an auto-provisioning service on top of ECS. It's serverless, which means we don't need to worry about the underlying infrastructure.
The service is accessed via an Application Load Balancer (ALB) called GateKeeper. It routes both
HTTP and HTTPS traffic to a Target Group consisting of only one target: the ECS cluster. The target
group forwards all traffic to port 5050 of the container, so all traffic ends up uniformly
in the same place.
I went with the new IAM Identity Center setup, the successor to AWS SSO. It's now the recommended way of managing user access to AWS resources, including the AWS CLI.
- Created a new IAM Identity Center user for myself (called it Sergey Mosin, `serge-guardian`).
- Created a Permission Set `PowerUserAccess`.
- Assigned the permission set to the user in the AWS Accounts section of the dashboard.
- Completed the AWS CLI setup via the `aws configure sso` command.
Now in the `~/.aws/config` file I have a `profile serge-guardian-admin` section.
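The section looks roughly like this. The account ID and region appear elsewhere in these notes; the role name and the `sso_start_url` are assumptions for illustration:

```ini
[profile serge-guardian-admin]
sso_session = serge-guardian
sso_account_id = 375747807787
sso_role_name = PowerUserAccess    ; assumed to match the Permission Set above
region = us-east-1

[sso-session serge-guardian]
sso_start_url = https://example.awsapps.com/start    ; placeholder URL
sso_region = us-east-1
```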
First, to authenticate with the AWS SSO tool, you need to run:

```shell
aws sso login --sso-session serge-guardian
```
To use this profile and the new REsolution AWS account, AWS CLI commands need to include
the `--profile serge-guardian-admin` flag, like this:

```shell
aws s3 ls --profile serge-guardian-admin
```

- `sam init`: only in the very beginning, to create a new project.
- `sam build`: builds the project. You can use the `--use-container` flag to build in a Docker container, which is good for isolation.
- `sam deploy --profile=<profile_name>`: deploys the project. You can use the `--guided` flag to configure the deployment step by step. The configuration is written to the `samconfig.toml` file, so it doesn't need to be repeated.
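For reference, the `samconfig.toml` that `--guided` writes looks roughly like this; the stack name and capabilities here are assumptions, while the region and profile come from elsewhere in these notes:

```toml
version = 0.1

[default.deploy.parameters]
stack_name = "resolution-api"        # assumed stack name
region = "us-east-1"
profile = "serge-guardian-admin"
capabilities = "CAPABILITY_IAM"
confirm_changeset = true
```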
For local testing, the following commands are quite useful:

- `sam local start-api`: starts a local server on port 3000.
- `sam local start-api --env-vars env.json --port 5050`: starts with env vars on port 5050.
- `sam local invoke --env-vars env.json --event <path_to_event>.json`: for local testing. Here's detailed docs.
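The `--env-vars` file maps function logical IDs (and the global `Parameters` key) to environment variables. A sketch of what `env.json` might contain; the function name and variable values are assumptions:

```json
{
  "Parameters": {
    "DATABASE_URL": "postgresql://localhost:54322/postgres"
  },
  "ResolutionApiFunction": {
    "LOG_LEVEL": "DEBUG"
  }
}
```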
1. `docker build -t resolution-api:latest .`: build the image (from the `server` directory).
2. `docker buildx build --platform linux/amd64,linux/arm64 --push -t 375747807787.dkr.ecr.us-east-1.amazonaws.com/resolution .`: build the image for multiple platforms; you need to log in first (see below).
3. `aws ecr get-login-password --region us-east-1 --profile serge-guardian-admin | docker login --username AWS --password-stdin 375747807787.dkr.ecr.us-east-1.amazonaws.com`: logs Docker in to the ECR, so it can push images there.
4. `aws ecr create-repository --repository-name resolution --region us-east-1 --profile serge-guardian-admin`: creates a repository called `resolution` in the `us-east-1` region.
5. `docker tag resolution-api:latest 375747807787.dkr.ecr.us-east-1.amazonaws.com/resolution`: tag the image.
6. `docker push 375747807787.dkr.ecr.us-east-1.amazonaws.com/resolution`: push the image to the ECR. This is the operation to be repeated.
- [Local] Push the Docker image from local.
- [Local] Connect to the EC2 instance via SSH: `ssh -i "REsolution-API-EC2.pem" ec2-user@ec2-54-209-19-205.compute-1.amazonaws.com`
- [Local] Copy the env file to the server: `scp -i ~/.ssh/REsolution-API-EC2.pem .env.prod ec2-user@ec2-54-209-19-205.compute-1.amazonaws.com:/home/ec2-user/.env.prod`
- [EC2] Pull the Docker image from ECR: `docker pull 375747807787.dkr.ecr.us-east-1.amazonaws.com/resolution`
- [EC2] Run the container: `docker run --env-file .env.prod -t -d -p 80:5050 375747807787.dkr.ecr.us-east-1.amazonaws.com/resolution`
`template.yaml` is the main file, where all the resources are defined.
Secret configuration is explained here.