
My current self-hosted Bluebuild solution

It just feels wrong to me to have my Bluebuild images built and hosted with GitHub CI. I now have a self-hosted solution that is, I believe, less failure-prone than an ordinary CI/CD pipeline.

The basic ingredients are as follows:

- a container registry that is only reachable over my Tailnet
- the bluebuild CLI
- a Bash script that builds each recipe and pushes the resulting images to the registry
- a systemd user service and timer that run the script on a schedule
- an ntfy server that receives a notification whenever a build fails

The container registry is configured to allow pushing without authentication. I don’t need to worry about authentication because the registry is not exposed to the public internet; the bluebuild build command is simply pointed at the registry’s URL on my Tailnet.
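
To give a concrete picture, here is a sketch of the registry side. I’m assuming the stock Distribution (registry:2) image, which accepts anonymous pushes when no auth is configured; the only detail below that comes from my actual setup is the Tailnet hostname the build script pushes to, and serving the registry over HTTPS on that hostname is handled separately.

# Sketch: a plain registry:2 container with no auth configured accepts
# unauthenticated pushes. The image, port, and volume name here are
# assumptions; only the Tailnet hostname (aux-remote.carp-wyvern.ts.net)
# appears in the build script further down.
podman run -d --name registry \
    -p 5000:5000 \
    -v registry-data:/var/lib/registry \
    docker.io/library/registry:2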

Here is the Bash script I wrote to build Bazzite and Bluefin images and push them to the container registry:

#!/usr/bin/env bash

set -x

DATE=$(date '+%Y%m%d_%H%M%S')
LOGFILE="${HOME}/bluebuild-logs/${DATE}.log"
NTFY_SERVER="https://nas-aux.carp-wyvern.ts.net"

if [ ! -d "${HOME}/bluebuild-logs" ]; then
    mkdir -p "${HOME}/bluebuild-logs"
fi

# Send an urgent ntfy notification when a recipe fails to build.
notify_fail() {
    curl \
        -H prio:urgent \
        -H tags:warning \
        -d "${1}: failed to build. See logs for details." \
        "${NTFY_SERVER}/bluebuild"
}

print_header() {
    echo "===================================================================="
    echo "Start build of ${1}"
    echo "Date: $(date '+%Y-%m-%d %H:%M:%S')"
    echo "Commit: $(git log --oneline --no-decorate -n 1)"
    echo "===================================================================="
}

print_footer() {
    echo "===================================================================="
    echo "End build of ${1}"
    echo "Time elapsed: ${2}"
}

# Make sure the cosign signing key is present in the repo root before building.
if [ ! -f "$(pwd)/cosign.key" ]; then
    echo "cosign.key not found. Copying it from backup."
    cp -fv "${HOME}/.cosign.key" "$(pwd)/cosign.key"
fi

# Build every recipe, signing with cosign and pushing to the Tailnet registry.
for recipe in ./recipes/recipe-*.yml; do
    print_header "$(basename -s .yml "$recipe")" >>"$LOGFILE"
    start_time=$(date '+%s')
    if ! bluebuild build \
        -p \
        --registry aux-remote.carp-wyvern.ts.net \
        -B docker \
        -S cosign \
        "${recipe}" >>"$LOGFILE" 2>&1; then
        notify_fail "$(basename -s .yml "$recipe")"
    fi
    end_time=$(date '+%s')
    runtime=$((end_time - start_time))
    time_elapsed=$(date -u -d "@${runtime}" '+%H:%M:%S')
    print_footer "$(basename -s .yml "$recipe")" "$time_elapsed" >>"$LOGFILE"
done

# vim: ts=4 sts=4 sw=4 et ai ft=bash

Source: https://tildegit.org/hyperreal/bluebuild-hyperreal

This script is run by a systemd user service and timer every day at 01:00 (CST). The results are very similar to running it in a CI/CD pipeline: each recipe is built in turn, and if one fails, a notification is sent to my ntfy server (also on my Tailnet) and the script moves on to the next recipe. Build output is redirected to log files. Everything runs as an unprivileged user, with the repo’s root as the working directory.
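
For reference, the timer side is unremarkable. Something along these lines does the job; this is a sketch rather than the exact unit, and only the 01:00 schedule and the service it triggers come from this post.

# ~/.config/systemd/user/bluebuild_build.timer (sketch)
[Unit]
Description=Nightly bluebuild image builds

[Timer]
# OnCalendar is interpreted in the machine's local time zone (CST here).
OnCalendar=*-*-* 01:00:00
# Assumption: run a missed build later if the machine was off at 01:00.
Persistent=true

[Install]
WantedBy=timers.target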

git pull is run by systemd before the script executes, so when updates are pushed to the repo they are reflected in the next scheduled build. If I need to build immediately, I can run systemctl --user start bluebuild_build.service to trigger a build manually. Nothing in the script changes the state of the git repository during the build, so every run starts from a clean repo.
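
The service side is just as small. Again a sketch: the repo path and script name below are placeholders, but the oneshot service with ExecStartPre=git pull is the arrangement described above.

# ~/.config/systemd/user/bluebuild_build.service (sketch; paths are placeholders)
[Unit]
Description=Build bluebuild images and push them to the Tailnet registry

[Service]
Type=oneshot
# Run from the repo root, as the build script expects.
WorkingDirectory=%h/bluebuild-hyperreal
# Pull the latest recipes before each scheduled build.
ExecStartPre=/usr/bin/git pull
ExecStart=%h/bluebuild-hyperreal/build.sh

Enabling the timer is the usual systemctl --user enable --now bluebuild_build.timer, assuming that timer name.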

So, yeah, that’s it. I think it’s kinda neat.

#bluebuild #bootc #fedora #atomic #ublue #selfhosted
