
mainframenzo.com

This is the public source for your silly blog. The documentation below addresses future you.

readme/preview.png

FYI: You reset the commit history every time you push to this public repository; it's always the "First commit." (as your mentor once said, "It's [always] a new day, baby")


You provide three ways to get started writing posts and managing website hosting infra:

  • Docker Container
  • Virtual Machine
  • Host OS (locally)

Docker is not a hard dependency for writing posts or managing website hosting infra, but it simplifies dependency installation and definitely made developing the parts libraries easier. You also provided a Virtual Machine as an "escape hatch". Mostly you write posts on your host OS (locally), and then run a container (via Docker) if you need to develop parts libraries and one-offs in tandem. If your host OS gets hosed, you can basically do everything in Docker (and therefore, also in the Virtual Machine) if you need to.

Table of Contents

AI

↑ Table of Contents

In your "day job", you've had the "pleasure" of using AI for productivity gains. It makes someone like you faster when it comes to in-abundance boilerplate (e.g. FastAPI + Pydantic). It makes the intern's demos better, but what they produce is still throwaway. The AI-generated code is often worse than if the intern had just gone through sample projects put out for Some SDK (at least until the Some SDK's sample projects are AI-generated).

You personally use AI mostly how you use StackOverflow (or any of the Stack "Exchanges", for that matter), which is primarily as a "search" tool you input text into when you've lost your marbles debugging some esoteric thing someone drummed up one day that now the entire internet is built on. You are experimenting with using it for:

  • God-awful work you don't care to do (generate an OpenAPI schema for a Github Webhook)
  • Work that would take exorbitant amounts of time (go find Wikipedia band links for each song in a playlist)
  • Catching some spelling/software mistakes made in the wee hours of the morning

You would not pay for use of the technology, though of course we all are subsidizing it one way or another. You find LLMs interesting in a Diamond Age sort of way (let's not forget the lesson there!) - you want to be enriched, not just to defer. So you still write code, and the writing is all you.

A list of AI-generated things in this source:

Local Development - Docker

↑ Table of Contents

If you already have your designs finalized and rendered, you can move along and just install some standard dependencies on your host OS to write your posts. Parts libraries and one-offs have the most dependencies, and it's easiest to use Docker to build their artifacts.

Local Development - Docker / Pre-reqs

↑ Table of Contents

  1. Docker 27.3.1, build ce12230 (used to launch a container for local development)
  2. just 1.46.0 (used as a task runner - had to install with snap when using Ubuntu)
  3. Real VNC (or any VNC viewer - used for viewing container guest OS display from host OS)

As an initial pre-requisite for using Docker, build the container image and push to your own local registry (more on why your own local registry is used can be found in this README later on). From a terminal, run:

just -f ./.justfiles/docker.just --working-directory . start-local-registry
just -f ./.justfiles/docker.just --working-directory . setup
just -f ./.justfiles/docker.just --working-directory . push-latest-development-image-to-local-registry

To use Docker to start viewing content and making changes to the website locally, from a terminal, run:

just -f ./.justfiles/docker.just --working-directory . develop-website

Other flags are available to you:

  • --skip-build-parts-libraries Skips generating .stl files and rendering images of parts libraries
  • --skip-build-one-offs Skips building one-off projects

The <meblog-src>/src directory is mounted as a read/write volume, and changes you make on your host OS will propagate to the guest OS. You'll need to open up a browser tab at http://localhost:8080 to view the website. You can also open a browser tab up at http://localhost:8082 to use Visual Studio Code + OCP CAD Viewer in the container to preview changes you make to the parts libraries.

If you want to connect to the guest OS's desktop GUI, use RealVNC and enter :5901 as the "hostname or address" and meblog as the password: readme/vnc.png

Once inside the container, you can run these software programs to develop parts libraries:

  • CQ-editor - From a terminal, run: cd / && ./CQ-editor/CQ-editor

Local Development - Virtual Machine

↑ Table of Contents

There are two reasons you use local virtual machines:

  • Some of the software you are using hasn't quite caught up to the new Apple hardware (you go back and forth between Linux and MacOS laptops), and you're even having issues with Docker containers when it comes to some headless rendering dependencies. If you experience issues with installing dependencies, either on your host OS or via Docker, you can take it one step further and do local development on a virtual machine to "ease your pain".
  • It makes debugging hosted infra deployments easier and cheaper. If you are having issues deploying to Hetzner and debugging cloud-init, you can deploy the infra locally via a virtual machine to "ease your pain".

Local Development - Virtual Machine / Pre-reqs

↑ Table of Contents

  1. multipass 2.4.3 (used for local VM deployment with Terraform)
  2. Terraform v1.14.5 (used to deploy website infra to Hetzner)
  3. just 1.46.0 (used as a task runner - had to install with snap when using Ubuntu)
  4. rsync version 2.6.9 protocol version 29 (used for copying files)

To launch the local virtual machine infra, from a terminal, run:

just -f ./.justfiles/infra.local.just --working-directory . setup
just -f ./.justfiles/infra.local.just --working-directory . deploy-local

To print the cloud-init logs for a local virtual machine infra's publish stage e.g. dev, from a terminal, run:

just -f ./.justfiles/infra.local.just --working-directory . cat-cloud-init-output-log dev

You can open up the LAN IP the local virtual machine was assigned in the browser, e.g. http://10.176.219.145/.

To ssh into the local virtual machine for a publish stage e.g. dev, from a terminal, run:

just -f ./.justfiles/infra.local.just --working-directory . ssh-into dev

To destroy the local virtual machine infra, from a terminal, run:

just -f ./.justfiles/infra.local.just --working-directory . destroy-local

Local Development

↑ Table of Contents

You can develop the website locally (on your host OS) if you've already pre-rendered the parts libraries.

Local Development / Pre-reqs

↑ Table of Contents

  1. 'Nix machine (you are using Framework 13 / Ubuntu, YMMV elsewhere)
  2. Node.js 20.16.0 (used for build and website dependency ecosystem)
  3. Python 3.13.1 (used for defining and rendering parts libraries)
  4. conda 24.9.2 (used for installing Python dependencies for rendering parts libraries)
  5. just 1.46.0 (used as a task runner - had to install with snap when using Ubuntu)
  6. jq 1.6 (used for working with JSON in the CLI)
  7. tree v2.1.3 (used to print scrubbed source code when pushing to the public version of this source)
  8. rsync version 2.6.9 protocol version 29 (used for copying files)
  9. lychee (used to check broken links)

Local Development / Pre-reqs for Parts Libraries

↑ Table of Contents

If you want to work on the parts libraries, the dependencies grow.

  1. Blender 4.2 (used for rendering parts libraries)
  2. FreeCAD 1.0.0 (used for analyzing parts libraries)
  3. CalculiX (installed via the costerwi/homebrew-calculix tap as calculix-ccx; used for analyzing parts libraries)
  4. PySide and Qt (dependencies of FreeCAD - you installed both via Homebrew (see the FreeCAD wiki on PySide/Qt) and ran pip3 install pyside6, but nothing worked)
  5. imagemagick 7.1.1-43 (used for generating .gif files of an assembly's movements)

You had issues getting any build123d development rendering tools working on MacOS (it required adding mamba as a dependency on top of conda, and something about the mamba install got wonked no matter what you tried). You tried these 3D viewers:

  1. OCP CAD Viewer
  2. CQ Editor
  3. blendquery

Docker should not be a hard dependency, but if you want to develop parts libraries and visualize changes to them quickly, it will help you bypass any issues encountered on MacOS. Why? None of these 3D viewers seemed to work particularly well (or at all!). It could very well be that Apple and conda are the problem, not the software itself. You settled on OCP CAD Viewer, but using it is not "automatic" - see the Utilities / Parts Library Rendering section at the bottom of this readme for details on how to get that going inside a container.

Any other pre-reqs or third-party libraries needed will be downloaded at setup time automatically.

Local Development / Pre-reqs for Utilities

↑ Table of Contents

These are utilities you run to help write posts.

  1. jpegtran (libjpeg-turbo 2.1.5; used for removing exif metadata)
  2. imagemagick 7.1.1-43 (used for checking exif metadata removed)
  3. ffmpeg 7.1 (used for compressing videos)
  4. f3d (used for previewing .stl files generated from parts libraries)
  5. Audacity (used for recording/editing audio)
  6. yt-dlp (used for downloading playlist media from YouTube - put this in ./build-utils/bin/yt-dlp_linux and make it executable)

Local Development / Pre-reqs for One-offs

↑ Table of Contents

Local Development / Pre-reqs for One-offs / Personal Case Study Number 2

↑ Table of Contents

This post has a Rust-based application containing a digital twin of the things you made for it. The application compiles to WASM and is embedded in the post's page.

  1. VMWare Workstation (TLDR: used to model the digital twin geometry in SketchUp 2016, but optional since .vmdk file not included in this source - you exported to a .glb file that is!)
  2. BlenderKit (needed an account - used for adding nice materials to 3D models in Blender)
  3. BlenderGIS (needed an account - used for generating terrain in Blender, which you then cross-referenced with photogrammetry data)
  4. FIXME other art deps on readme
  5. rustup 1.28.1 (f9edccde0 2025-03-05) (used to update the Rust compiler and dependency manager)
  6. rustc 1.85.1 (4eb161250 2025-03-15) (used to compile the Rust application)
  7. cargo 1.85.1 (d73d2caf9 2024-12-31) (used to manage the Rust application's dependencies)

Local Development / Frontend

↑ Table of Contents

Local Development / Frontend / Setup

↑ Table of Contents

Once you have the pre-reqs installed, install all the dependencies needed to build the website (basically everything, FYI). This has global side effects. From a terminal, run:

just -f ./.justfiles/dev.just --working-directory . setup

Other flags are available to you:

  • --skip-nodejs-deps Skips installing Node.js deps
  • --skip-global-nodejs-installs Skips installing global Node.js deps
  • --skip-python-deps Skips installing Python deps

Local Development / Frontend / Writing Posts

↑ Table of Contents

To start viewing content and making changes to the website locally, from a terminal, run:

source ~/.bash_profile && conda deactivate # Because you always forget about adding conda to PATH.
just -f ./.justfiles/frontend.just --working-directory . develop-website --skip-build-parts-libraries --skip-build-one-offs

This opens up a browser tab at http://localhost:8080. Making a change to most files rebuilds and reloads the website.

Posts are a mixture of Markdown and HTML. The Markdown syntax for the converter - Showdown - can be found here. Create posts in the <meblog-src>/src/frontend/posts/ directory as a .md file. The name of the file is the name of the post, sans the file extension (spaces in filenames...oh the humanity!).

Each post needs to specify some information as "JSON-in-an-HTML-comment" at the top of the file (a nod to Jekyll) in order to be picked up as content for the website:

<!-- 
{ 
  "draft": true,
  "type": "#thingsivemade",
  "publishedOn": "January 1, 1970", 
  "tagline": "\"When in the course of computer events...\""
}
-->
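If you want to sanity-check that front matter from a terminal, you can slice the comment body out and query it with jq (already a pre-req). This is just a rough sketch - the site's build pipeline has its own parser, and /tmp/sample-post.md here is a throwaway file:

```shell
# Write a sample post with front matter, then extract the JSON between
# <!-- and --> and query it. Assumes the comment opens the file, as above.
cat > /tmp/sample-post.md <<'EOF'
<!--
{
  "draft": true,
  "type": "#thingsivemade",
  "publishedOn": "January 1, 1970"
}
-->
# Post body...
EOF
sed -n '/<!--/,/-->/p' /tmp/sample-post.md | sed '1d;$d' | jq -r '.type'
# prints "#thingsivemade"
```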

Have a look at some existing posts for examples of what else you can do with that JSON.

Tidbits you may forget:

  • Only posts marked as "draft": false will be published to the prod stage

Local Development / Frontend / Writing Posts / Style Guide

↑ Table of Contents

  1. Numbers are written out, e.g. fifteen not 15, except: 1) years, e.g. 2025 not two thousand twenty-five 2) monetary values, e.g. $5.15 USD, not five dollars and fifteen cents in USD 3) In my 20's/30's/40's not twenties/thirties/forties 4) resume job durations, e.g. 2, not two 5) coordinates, e.g. (0, 0) not (zero, zero) 6) 15 miles-per-hour or miles-per-gallon, not fifteen 7) 10%, not ten percent
  2. Videos need to be formatted as .mp4 files.
  3. Images can be any format supported by the web.
  4. Slideshow HTML templates are built using images prefixed with img_ in the respective post's image directory. To add images to slideshows, make sure that they follow the format: img_1.png, img_2.png, etc.
  5. For BOMs (.csv files), remember that the inches symbol " is escaped as "" inside quoted fields.
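For example, a hypothetical BOM row for a 24" x 2" x 2" leg would be quoted like this (the columns are made up, just to show the doubled quotes):

```csv
part,material,dimensions
Leg,Oak,"24"" x 2"" x 2"""
```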

You could implement some sort of spellcheck linter thingy, but uh, where's the fucking fun in that?

Local Development / Frontend / Writing Posts / Parts Libraries

↑ Table of Contents

Each post has a "parts library", which is a collection of 3D file formats and CAD-as-code Python scripts that help define the #thingsivemade. It's an ongoing modernization process, and you're doing some work outside this source to make it all more cohesive. FYI.

To build all the parts libraries before developing the website, just drop the --skip-build-parts-libraries flag. From a terminal, run:

just -f ./.justfiles/frontend.just --working-directory . develop-website --skip-build-one-offs

Building parts libraries consists of things like rendering images of parts, converting parts to other 3D file formats the website makes use of, etc.

You couldn't get a parts library build to work 100% on your host OS when you used MacOS, so you use Docker to manually build a post's parts library when you're doing design work. To manually build a post's parts library, first start a terminal session in a container:

just -f ./.justfiles/docker.just --working-directory . shell

Then build the part library for the post inside the container:

cd /opt/app/meblog

just -f ./.justfiles/parts-libraries.just build ./src/frontend/public/parts-libraries/posts/volkswagen_bus_dashboard/v2/assembly.py

Local Development / Frontend / Tests

↑ Table of Contents

If you wish to run all the "end-to-end" integration tests for the frontend, from a terminal, run:

just -f ./.justfiles/frontend.just --working-directory . tests-integration-e2e development local dev local

Other flags are available to you:

  • --headed Shows the browser while running tests
  • --ui Shows the browser and opens up Playwright's test GUI (which is hella awesome)

Local Development / Backend

↑ Table of Contents

There is a minimal Node.js backend in place that is used for providing various API features to the frontend. You will already have installed its dependencies. If you wish to run the backend locally, from a terminal, run:

just -f ./.justfiles/backend.just --working-directory . build
just -f ./.justfiles/backend.just --working-directory . run development local dev local

Local Development / Backend / Tests

↑ Table of Contents

If you wish to run all curl-based functional tests for the backend, from another terminal, run:

just -f ./.justfiles/backend.just --working-directory . tests-functional-curl

If you wish to run a single, curl-based functional test for the backend, from another terminal, run:

just -f ./.justfiles/backend.just --working-directory . tests-functional-curl "auth.md"

Infra

↑ Table of Contents

Porkbun is your DNS registrar now (more like regi star, amiright?!). You migrated away from GoDaddy because it sucked. Hetzner is your hosting provider now. You migrated away from AWS because it was too expensive, but kept the deployment code around in case Hetzner becomes too expensive or unavailable due to geopolitical issues. You're viewing this source on Github, which is where the public version of this source is shared...for now. The legacy infra documentation for using GoDaddy as your DNS registrar and hosting on AWS can be found here.

You've got a dev publish stage and a prod publish stage: dev.mainframenzo.com includes all content, including draft posts, but mainframenzo.com only has finished ones. You use dev to validate the content looks as expected before pushing to prod. You restrict access to dev via your public IP. You take dev down when you're not using it to save costs.

The infra setup is not fancy: there's just one Hetzner account for the publish stages dev and prod because this is a silly blog. We'll call your Hetzner account "main", which is an app stage; when you deploy, you run commands that change resources for the respective publish stage in the "main" app stage (Hetzner account). Publish stages - dev or prod - are only delineated by Terraform (groups of Hetzner resources) in the same Hetzner account. There are no bastions or any architectural fanciness because this is a silly blog, so Hetzner resources amount to 1 VM (Hetzner server), an SSH key, and a firewall for each publish stage. You do not have enough "media" to justify cloud storage, so rsync is run as part of the post-deploy process when VMs are replaced to upload files that aren't stored in Git LFS (songs for playlists, etc.). Infra is never updated from the Hetzner infra itself - you always use your personal computers to do this. KISS.

As for CICD, you set it up so that after an initial deployment of the VMs themselves, all that is required to push content to the dev publish stage is to push Git changes to the private Github repository, and all that is required to push content to the prod publish stage is to push Git changes to the public Github repository. In both cases, there's some software running on the VMs that allows Github to notify them to sync the latest source code and deploy. You can run CICD locally from your machine, but you favor cloud-based CICD since you can just "commit and push the diff" from the offline place you're always in, allowing you to shut your computer off rather than wait for the build + deploy to finish. Set it and forget it. - RP.

Infra / Pre-reqs

↑ Table of Contents

  1. Docker 27.3.1, build ce12230 (used to build a container for local development, also used in CICD)
  2. Terraform v1.14.5 (used to deploy website infra to Hetzner)
  3. just 1.46.0 (used as a task runner - had to install with snap when using Ubuntu)
  4. jq 1.6 (used for working with JSON in the CLI)
  5. rsync version 2.6.9 protocol version 29 (used for copying files)

You also needed to:

  1. Create a Porkbun API key to use for updating DNS
  2. API access must be explicitly enabled per domain in Porkbun - you enabled it for mainframenzo.com.
  3. Create a Hetzner "project" and a read/write API token to use for managing resources required to host the website + CICD
  4. Create a Github personal access token with read-only permissions to "Contents" and "Metadata" to pull changes from only the private and public Github repositories, and also give it read/write permissions to "Webhooks" (so you can automate creating the Github webhooks below)
  5. Create Github webhooks for the private and public versions of this source to hit Hetzner VMs and start CICD (see "Infra / Configure" for a script which does this)
  6. After a successful prod publish stage website build, CICD updates a Github "release" in the public Github repository to match the always-versioned v0.0.1 of this source. When infra boots, it serves that release (regardless of publish stage) as an interim measure until "git push" triggers CICD on the VM and a new successful website build is generated. CICD for the prod publish stage will always update that v0.0.1 release and overwrite the .zip file "asset" with an archive of the latest successfully-built website. FIXME If you were to start from scratch, that release wouldn't exist because you'd have never built the website successfully, and deploying infra would fail, and CICD would never get setup, so you've got a chicken-and-egg problem...that you may never need to solve.

Infra / Configure

↑ Table of Contents

To configure the "main" Hetzner account, edit the <meblog-src>/config/.env file with your information for that account. Edit (some of) this information:

main_hetzner_api_token=
main_github_username=
main_github_token=
main_github_dev_webhook_id=do not edit this, it will be generated
main_github_prod_webhook_id=do not edit this, it will be generated
main_github_webhook_secret=do not edit this, it will be generated
main_porkbun_api_key=
main_porkbun_api_secret=
main_backend_dev_api_url=https://dev.mainframenzo.com/api
main_backend_prod_api_url=https://mainframenzo.com/api
main_your_username=
main_your_password=
main_jwt_secret=
main_registry_directory=
main_media_directory=

While you are in the <meblog-src>/config/.env file, also edit this information:

meblog_private_repo_name=
meblog_public_repo_name=

You can create the Github webhooks now that the <meblog-src>/config/.env is configured with your Github personal access token; from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . create-update-github-webhooks-main

This has side effects and updates the <meblog-src>/config/.env file.

Infra / Deploy

↑ Table of Contents

Deploying creates all resources necessary to host:

  • dev and prod websites for this blog on separate VMs
  • CICD on each publish stage's VM, which connects Github changes to CICD

To deploy resources to your "main" Hetzner account, from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . deploy-main

dev.mainframenzo.com is restricted to your public IP, which may change. If you find that you can't access dev.mainframenzo.com, just deploy the dev publish stage again.

Other flags are available to you:

  • --force-replace Forces VM for a publish stage to be replaced since not all Terraform changes will replace the VM
  • --use-prebuilt-development-container-image Uploads development container image to VM so you don't have to build it on the VM to get CICD to work
  • --certbot-staging Uses letsencrypt staging env - good for testing VM deploys since letsencrypt has limits you ran into

Before deploying, you can drastically speed up time to "VM CICD ready" by ensuring the development container image is built and pushed to a locally running container registry. From a terminal, run:

just -f ./.justfiles/docker.just --working-directory . start-local-registry
just -f ./.justfiles/docker.just --working-directory . push-latest-development-image-to-local-registry

If you just need to deploy to a specific publish stage in "main", e.g. dev, from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . deploy-publish-stage main dev

Infra / Deploy / Docker

↑ Table of Contents

You can run all of the deploy steps in Docker (the commands are nearly the same), just swap out the justfile. For example, to deploy resources to your "main" Hetzner account with minimal dependencies installed (in a container), from a terminal, run:

just -f ./.justfiles/docker.just --working-directory . deploy-main

FIXME Get --use-prebuilt-development-container-image working in Docker (pass-through port 5000 and volume).

Infra / Debug

↑ Table of Contents

To print the cloud-init logs for a publish stage e.g. dev, from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . cat-cloud-init-output-log dev

To ssh into the VM for a publish stage e.g. dev, from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . ssh-into dev

Infra / Publish

↑ Table of Contents

Infra / Publish / Private

↑ Table of Contents

When you want to publish content to the dev publish stage (IP restricted for staging changes), simply git push origin main from the private version of this source code. Github makes an HTTPS request (webhook) to the dev publish stage's VM, which starts the CICD process to deploy changes to the VM.

If you don't see the webhook fire, check the Github config. If it's not that, more than likely the Github IPs changed and the Hetzner firewall rules need to be updated - re-deploy the infra.

Infra / Publish / Public

↑ Table of Contents

You only work with a private version of this source that contains unpublished posts (drafts) and sensitive information. That information needs to get scrubbed and reset when this source is made public.

When you want to publish content to the prod publish stage, cd into the private version of this source code, then, from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . push-to-public-repo

Other flags are available to you:

  • --dry-run Does not push changes to Github

Github makes multiple HTTPS requests (webhooks) to the prod publish stage's VM (on account of how you have to push changes chunked to the public version of this source), and a queue handles buffering the start of the CICD process to deploy changes to the VM.

If you don't see the webhook fire, check the Github config. If it's not that, more than likely the Github IPs changed and the Hetzner firewall rules need to be updated - re-deploy the infra.

Infra / Ops

↑ Table of Contents

Once you SSH into a VM, here's what's useful:

  • /opt/app/cloud-init.log has less useful cloud config lifecycle logs
  • /opt/app/cloud-init-output.log has useful cloud config logs
  • /opt/app/meblog-vm-setup.log has more useful cloud config / VM boot logs
  • /var/log/nginx/access.logs has nginx access logs
  • /var/log/nginx/error.logs has nginx error logs
  • /opt/app/fail2ban.log has fail2ban logs
  • /opt/app/meblog-backend.log has your API logs

Infra / Destroy

↑ Table of Contents

It is useful to blow all the Hetzner infra up from time-to-time. To remove all the Hetzner infrastructure for the main stage (start over!), from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . destroy-main

If you just need to destroy a specific publish stage in "main", e.g. dev, from a terminal, run:

just -f ./.justfiles/infra.just --working-directory . destroy-publish-stage main dev

Infra / Destroy / Docker

↑ Table of Contents

You can run all of the destroy steps in Docker (the commands are nearly the same), just swap out the justfile. For example, to remove all the Hetzner infrastructure for the main stage (start over!) with minimal dependencies installed (in a container), from a terminal, run:

just -f ./.justfiles/docker.just --working-directory . destroy-main

Infra / CICD

↑ Table of Contents

The CICD process only works in a container. You are going to want to build the website via CICD to validate build process changes before deploying to a publish stage. To vet CICD and build the website for the local app stage/dev publish stage/local app location, from a terminal, run:

just -f ./.justfiles/docker.just --working-directory . start-cicd local dev local

Other flags are available to you:

  • --skip-build-parts-libraries Skips generating .stl files and rendering images of parts libraries
  • --skip-render-images-of-parts Generates .stl files et al. but skips rendering images of parts libraries
  • --skip-build-one-offs Skips building one-off projects
  • --fast-render Dials down parts libraries rendering settings

Infra / Security

↑ Table of Contents

You will probably need a refresher on "security" by the time you get back to this. You are trying to avoid these things:

  • Personal information leaking
  • Secrets for Github leaking, allowing someone to delete the private & public versions of this source
  • Secrets for Porkbun leaking, allowing someone to mess with the DNS for this blog (2FA is enabled for everything else, and API access is disabled for other domains you own)
  • Secrets for Hetzner leaking, allowing someone to spin up a whole bunch of infra for...are the kids still crypto mining?
  • Bonus: preventing media files from getting downloaded

What you are doing that could potentially enable those things:

  • Committing secrets in a .env file to the private version of this source
  • Passing secrets on the command line

What you are doing to mitigate those things happening:

  • Separating public and private sources
  • Scrubbing sensitive data before pushing to the public source
  • Deleting all history on "git push" to the public source
  • Overwriting the Git release asset instead of creating another release
  • Limiting VM SSH access to your public IP
  • Limiting dev VM uptime
  • Restricting dev VM access to your public IP
  • Secrets are replaced often
  • JWTs expire quickly and don't really do much except allow for IP banning (ephemeral - on VM replacement you start fresh) and enabling media file access
  • Copying built source (dist.frontend) to the public nginx /var/www/html directory to separate it from source code
  • Sanitizing user data when received on the backend
  • Preventing hidden files from being served via nginx
  • Honeypot to ban bot crawlers from finding things, sensitive or not

Ok, now that you are back up to speed, here are the lifecycles of important secrets:

Github Webhook secret - occasionally, when you deploy infra for a publish stage:

  • Generate a new Github Webhook secret for the publish stage
  • Update the <meblog-src>/config/.env file with the secret
  • Update the Github Webhook with the secret
  • Update the VM for the publish stage with the secrets by syncing the <meblog-src>/config/.env file to it

You should probably rotate the webhook secret more often, but FIXME you're still migrating CICD off AWS, so the webhook secrets aren't even in use.
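For future reference, what that secret actually buys you: Github signs each webhook delivery with an HMAC-SHA256 of the raw request body using the webhook secret and sends it in the X-Hub-Signature-256 header, so the VM-side receiver can recompute it and compare. A sketch with placeholder values (not a real secret or payload):

```shell
# Recompute the signature Github would send for a given body + secret.
secret='example-webhook-secret'      # placeholder, not a real secret
body='{"ref":"refs/heads/main"}'     # raw request body, byte-for-byte
sig="sha256=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')"
echo "$sig"   # compare against the X-Hub-Signature-256 request header
```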

JWT secret - every time you deploy infra for a publish stage:

  • Generate a new JWT secret for the publish stage
  • Update the <meblog-src>/config/.env file with the secret
  • Update the VM for the publish stage with the secrets by syncing the <meblog-src>/config/.env file to it

The side effects of this are:

  • If you, yes you, try to make an authenticated request - say convert a playlist to an m3u file to be able to download the media to your phone - it will fail with 403, the frontend will log you out, and you'll need to login again. Speaking of which...

Username/Password secrets - every time you deploy infra for a publish stage:

  • Generate new Username/Password Secrets for the publish stage
  • Update the <meblog-src>/config/.env file with the secrets
  • Update the VM for the publish stage with the secrets by syncing the <meblog-src>/config/.env file to it

The side effects of this are:

  • You will probably not have the correct username/password on your phone when you need to login. FIXME figure that out.

Utilities

↑ Table of Contents

Utilities / Images

↑ Table of Contents

Images may need to be "fixed up" before being website ready: they may contain lots of personal metadata you don't want exposed, and their names may be formatted incorrectly. Also, any images prefixed with IMG_ will be renamed to img_ starting at an index of 1. To "fixup" images, from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . fixup-images src/frontend/public/images/posts/footstool

CICD checks images for metadata you don't want exposed and blocks publishing if it finds unwanted information, but it's up to you to use the utilities to format/scrub images manually as deemed necessary.
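The renaming half of that "fixup" amounts to something like this (a hand-rolled sketch, not the actual recipe - the real one also strips metadata via jpegtran/imagemagick):

```shell
# Rename IMG_* files to img_N, numbering from 1 in glob (lexicographic) order.
cd "$(mktemp -d)"                 # demo in a scratch directory
touch IMG_2041.png IMG_2042.png
i=1
for f in IMG_*.png; do
  mv "$f" "img_${i}.png"
  i=$((i + 1))
done
ls                                # lists img_1.png and img_2.png
```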

Utilities / Videos

↑ Table of Contents

Videos may need to be "fixed up" before they're website-ready: Github has an upper limit on file size when not using Git LFS. You eventually caved and used Git LFS, but it still costs money with Github, so this utility provides a way to reduce the file size of videos (thus saving money).

To "fixup" all videos over 100 MiB, from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . fixup-videos

To "fixup" a video, from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . fixup-video $(pwd)/src/frontend/public/video/posts/adam/v1.mp4 $(pwd)/src/frontend/public/video/posts/adam/v1_reduced.mp4
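The size check behind the batch version can be sketched as follows. The 100 MiB threshold comes from the text above; the codec/CRF choice in the printed ffmpeg command is illustrative, not the real recipe, and the demo files are sparse placeholders:

```sh
#!/bin/sh
# Hypothetical sketch: list videos over 100 MiB and print the kind of
# ffmpeg re-encode that shrinks them (run it only where ffmpeg exists).
mkdir -p demo_videos
truncate -s 101M demo_videos/big.mp4   # sparse placeholder, not a real video
truncate -s 1M demo_videos/small.mp4

find demo_videos -type f -size +100M > candidates.txt
while read -r f; do
  out="${f%.mp4}_reduced.mp4"
  # A common quality-for-size trade-off: x264 with a higher CRF.
  echo "ffmpeg -i $f -c:v libx264 -crf 28 -preset slow $out"
done < candidates.txt
```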

Utilities / Playlist URLs

↑ Table of Contents

Playlist URLs may need to be "fixed up" before they're website-ready: when playlists/songs get added or updated, you use the URLs in the csv file to download the media once, sync it to your website host after deploying, and stream it to yourself as needed. Those URLs may be broken if you're doing bulk updates: LLMs are useful for bulk-updating csv files, but the URLs they provide may be out of date or just plain wrong, and this utility provides a way to fix them.

To "fixup" a playlist's media links (this makes changes to the csv file!), from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . fixup-playlist-urls $(pwd)/src/frontend/playlists/pocket-symphonies.csv

Other flags are available to you:

  • --dry-run Does not make changes to the csv file
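The validation half of this can be sketched offline: flag csv rows whose URL column is malformed. The column layout and the demo rows are assumptions; an actual liveness check would need curl and network access:

```sh
#!/bin/sh
# Hypothetical sketch of the URL check behind fixup-playlist-urls.
mkdir -p demo_playlists
cat > demo_playlists/check.csv <<'EOF'
title,artist,url
Song One,Band A,https://example.com/watch?v=abc123
Song Two,Band B,notaurl
EOF

# Column 3 is assumed to hold the URL; adjust for the real layout.
awk -F',' 'NR > 1 && $3 !~ /^https?:\/\// { print "line " NR ": " $3 }' \
  demo_playlists/check.csv > bad_urls.txt
cat bad_urls.txt
# Liveness check, where network is available:
#   curl -s -o /dev/null -w '%{http_code}' "<url>"
```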

Utilities / Playlist Media

↑ Table of Contents

Playlist media is built from YouTube downloading. I know, I know. You have paid for some of these songs, though. And you would stream from a platform if they baked ads into offline downloads (you'd suffer through them) and let you play songs in the order you want them played. Because you can't get that with YouTube or Spotify, you abuse the platforms whose companies abuse everyone, and of course the artists get fucked. C'est la vie.

To download playlist media (this makes changes to the media directory!), from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . download-playlist-media $(pwd)/src/frontend/playlists/pocket-symphonies.csv

Other flags are available to you:

  • --dry-run Does not make changes to the media directory

To convert playlists to other formats (m3u so you can download songs via VLC), from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . convert-playlist $(pwd)/src/frontend/playlists/pocket-symphonies.csv
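The m3u conversion can be sketched as below. The csv column layout and the "media file named after the title" convention are assumptions about the real pipeline, and the demo rows are placeholders:

```sh
#!/bin/sh
# Hypothetical sketch of convert-playlist: turn a playlist csv into an
# m3u file VLC can open.
mkdir -p demo_playlists
cat > demo_playlists/convert.csv <<'EOF'
title,artist,url
Song One,Band A,https://example.com/a
Song Two,Band B,https://example.com/b
EOF

{
  echo '#EXTM3U'
  # -1 duration means "unknown"; entry title is "artist - title".
  awk -F',' 'NR > 1 { printf "#EXTINF:-1,%s - %s\n%s.mp3\n", $2, $1, $1 }' \
    demo_playlists/convert.csv
} > demo_playlists/convert.m3u
cat demo_playlists/convert.m3u
```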

Utilities / Parts Library Rendering

↑ Table of Contents

You had issues getting OCP CAD Viewer to work for previewing a parts library on your Apple M3 (you have since switched to Linux full-time). You had fewer issues using containers, but the development setup still needs some work.

Here's what you were looking for:

  1. Visual Studio Code to startup and open a terminal with the conda env containing dependencies activated
  2. OCP CAD Viewer's backend hot-reloading code when you use the "Save" hot keys on your keyboard

Here's what you got:

  1. Visual Studio Code starts up
  2. You open a Python assembly file in the editor and OCP CAD Viewer starts up automatically and renders a dummy file
  3. Manually open up a new terminal (right click in an existing terminal window to see the shortcut you always forget)
  4. Make an important change to the assembly file: from ocp_vscode import * provides a show function, but you can't leave that in the source when the viewer isn't running (it breaks the "headless" path), so uncomment it and save the assembly file
  5. Open up the readme in the editor to copy what you'll run (FIXME can't paste into the editor environment in my browser of choice)
  6. Run the Python assembly file, either manually after each change:

     PYTHONPATH=/opt/app/meblog/src/parts-library-tools /opt/conda/envs/meblog/bin/python /opt/app/meblog/src/frontend/public/parts-libraries/posts/test/assembly.py

     or automatically on save:

     while inotifywait -e close_write /opt/app/meblog/src/frontend/public/parts-libraries/posts/test/assembly.py; do PYTHONPATH=/opt/app/meblog/src/parts-library-tools /opt/conda/envs/meblog/bin/python /opt/app/meblog/src/frontend/public/parts-libraries/posts/test/assembly.py; done

OCP CAD Viewer should display the assembly now: readme/render-parts-library-ocp-cad-viewer.png

If at any point OCP CAD Viewer gets wonky, just close the window, kill the terminal process, and select inside a Python assembly file to have it restart. Oof.

Utilities / Tests

↑ Table of Contents

To test your utility scripts, from a terminal, run:

just -f ./.justfiles/utils.just --working-directory . test

One-offs

↑ Table of Contents

One-offs / Zine Mode Intro

↑ Table of Contents

"Zine" mode, or "Dear Deader" mode (hehe) as you also like to call it, is an alternative way to view the website (well, one obvious alternative - there's another not so obvious alternative hidden as an Easter egg). This mode has a number of "hand drawn" features which you drew. You'll never need to do this again, probably. If you decide to update any of the features, the steps you took are documented below.

For more details, see its readme.

One-offs / Personal Case Study Number 2

↑ Table of Contents

This post has some embedded software via WASM which provides a "digital twin" of the house you rebuilt. It's a video game where you give a guided tour of the house (taking audio and text content from the corresponding blog post) and users can take the tour or explore the world manually.

For more details, see its readme.

License

↑ Table of Contents

Unless explicitly called out, all files in this source are licensed under the MIT-0 license.

Files that are licensed differently from above:

  • The files in this directory are licensed under the CC BY-NC 3.0 license. The images are interpretations of someone else's original material, and the CAD files are extruded from the interpreted images - obviously commercial use is out of the question. Unfortunately, Alberto Favaretto did not comply with the license terms, but you should.
  • Any Blender scripts - you'll find a lot of them in this directory - are licensed under the GPL-3.0-only license (you think).
  • Any Blender add-ons you downloaded to this directory are licensed under GPL-2.0-or-later license (you think).
  • Any bd_warehouse files you downloaded, included, and modified are licensed under the Apache-2.0 license (you think).
  • Any dl4to4ocp files you downloaded, included, and modified are licensed under the MIT license (you think).
  • Any sdftoolbox files you downloaded, included, and modified are licensed under the MIT license (you think).
  • Materials in this directory and this directory are licensed under RF licenses (you think).
  • You are not sure what license the beso-fea files you downloaded, included, and (maybe?) modified are. FIXME

Licensing can be particularly complicated, and you may or may not be in compliance ¯\_(ツ)_/¯. If you (you here referring to any reader, not just future you) see a discrepancy, please create an issue on Github. Thanks!
