Category: Gaming

  • Gitops for the Arkade

In the last post I had built up some Makefiles and docker tooling to create nginx-based images to serve the game emulators. Now it’s time to deploy the images to the pi-cluster.

GitHub Actions

First I set up a couple of self-hosted action runners to use with GitHub Actions: one on my main build machine, and another on the laptop connected inside the cluster.

Then I set up a make workflow and tested it out.

name: Make
on:
  workflow_dispatch:

jobs:
  builder:
    environment: Games
    runs-on: www
    steps:
      - uses: actions/checkout@v4
      - name: Make the mamebuilder
        run: |
          make mamebuilder
          docker tag mamebuilder docker-registry:5000/mamebuilder
          docker push docker-registry:5000/mamebuilder
  games:
    environment: Games
    needs: [builder]
    runs-on: www
    steps:
      - uses: actions/checkout@v4
      - name: Make the game images
        run: |
          make
          echo done

After triggering the make workflow on GitHub, the build runs and emulator images are pushed to my private docker registry – checking that from inside the cluster’s network shows:

    sandy@bunsen:~/arkade/.github/workflows$ curl -X GET http://docker-registry:5000/v2/_catalog 
    {"repositories":["1943mii","20pacgal","centiped","circus","defender","dkong","gng","invaders","joust","mamebuilder","milliped","pacman","qix","robby","supertnk","tempest","topgunnr","truxton","victory"]}

    Helm Chart

    Next I worked on a helm chart for the emulators. I started with the basic helm chart:

    mkdir helm
    cd helm
    helm create game

On top of the default chart, I added a Traefik IngressRoute next to each deployment and a volume mount for the /var/www/html/roms directory. The final layout looks like this:

sandy@bunsen:~/arkade$ tree helm roms-pvc.yaml 
helm
└── game
    ├── charts
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── hpa.yaml
    │   ├── ingressroute.yaml
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   ├── serviceaccount.yaml
    │   ├── service.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml
roms-pvc.yaml [error opening dir]

Most of the customization is in values.yaml; a diff of that against the default looks like this:

    sandy@bunsen:~/arkade/helm/game$ diff values.yaml  ~/test/game/
25c25
<   create: false
---
>   create: true
76,78d75
< ingressroute:
<   enabled: true
<
110,113c107,111
< volumes:
<   - name: roms
<     persistentVolumeClaim:
<       claimName: roms
---
> volumes: []
> # - name: foo
> #   secret:
> #     secretName: mysecret
> #     optional: false
116,118c114,117
< volumeMounts:
<   - name: roms
<     mountPath: /var/www/html/roms
---
> volumeMounts: []
> # - name: foo
> #   mountPath: "/etc/foo"
> #   readOnly: true

That is: the stock ingress stays disabled, an IngressRoute section is added (enabled by default), and the ROMs volume and mount are wired in. I added a couple of targets in the Makefile that can package the chart and deploy all the emulators:

package:
	$(HELM) package --version $(CHART_VER) helm/game

install:
	@for game in $(GAMES) ; do \
	  $(HELM) install $$game game-$(CHART_VER).tgz \
	    --set image.repository="docker-registry:5000/$$game" \
	    --set image.tag='latest' \
	    --set fullnameOverride="$$game" \
	    --create-namespace \
	    --namespace games ;\
	done

upgrade:
	@for game in $(GAMES) ; do \
	  $(HELM) upgrade $$game game-$(CHART_VER).tgz \
	    --set image.repository="docker-registry:5000/$$game" \
	    --set image.tag='latest' \
	    --set fullnameOverride="$$game" \
	    --namespace games ;\
	done

The initial install for all the emulators is make package install. Everything is installed into a namespace called games, so it’s relatively easy to clean up the whole mess with kubectl delete ns games. There was one adjustment I had to make in the helm chart… Kubernetes enforces RFC 1035 labels for service names. I was using the game name as the service name, but games like 1943mii (leading digit) don’t conform. So I updated the service definitions in the chart like this: name: svc-{{ include "game.fullname" . }} to get around it.
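
In the chart that’s a one-line change to the metadata in templates/service.yaml; a minimal sketch (the rest of the template stays the helm create default):

apiVersion: v1
kind: Service
metadata:
  # the svc- prefix guarantees the name starts with a letter, which
  # satisfies the RFC 1035 label rules even for names like 1943mii
  name: svc-{{ include "game.fullname" . }}
  labels:
    {{- include "game.labels" . | nindent 4 }}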

    ArgoCD Setup

    I setup ArgoCD in the cluster based on the getting started guide. Ultimately, I made a little script argocd.sh to do the setup on repeat:

    #!/bin/bash

    ARGOCD_PASSWORD='REDACTED'

    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

    sleep 30
    kubectl -n argocd patch configmap/argocd-cmd-params-cm \
    --type merge \
    -p '{"data":{"server.insecure":"true"}}'

    kubectl -n argocd apply -f argocd_IngressRoute.yaml

    sleep 30
    INITIAL_PASSWORD=$(argocd admin initial-password -n argocd 2>/dev/null | awk '{print $1; exit}')
    argocd login argocd --username admin --insecure --skip-test-tls --password "${INITIAL_PASSWORD}"
    argocd account update-password --account admin --current-password "${INITIAL_PASSWORD}" --new-password "${ARGOCD_PASSWORD}"
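
The argocd_IngressRoute.yaml the script applies isn’t shown above; a minimal sketch of what it could look like, assuming Traefik’s CRDs and the stock argocd-server service (on older Traefik releases the apiVersion may be traefik.containo.us/v1alpha1):

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd`)
      services:
        # port 80 works because of the server.insecure patch above
        - name: argocd-server
          port: 80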

That installs ArgoCD into its own namespace and sets up an IngressRoute using Traefik. The ConfigMap patch is necessary to avoid redirect loops (Traefik proxies https at the LoadBalancer anyway). Then the script logs in the admin user and updates the password. I should make a point of going back and replacing the sleep calls with wait loops – some of the actions of the script depend on k8s objects that are being asynchronously created.
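
A sketch of what that could look like – waiting on the Deployments instead of sleeping a fixed 30 seconds:

# hypothetical replacement for the first sleep 30
kubectl -n argocd wait deployment --all \
  --for condition=Available --timeout=180s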

Then I added a couple more targets to the Makefile:

argocd_create:
	$(KUBECTL) create ns games || true
	$(KUBECTL) apply -f roms-pvc.yaml
	@for game in $(GAMES) ; do \
	  $(ARGOCD) app create $$game \
	    --repo https://github.com/simsandyca/arkade.git \
	    --path helm/game \
	    --dest-server https://kubernetes.default.svc \
	    --dest-namespace games \
	    --helm-set image.repository="docker-registry:5000/$$game" \
	    --helm-set image.tag='latest' \
	    --helm-set fullnameOverride="$$game" ;\
	done

argocd_sync:
	@for game in $(GAMES) ; do \
	  $(ARGOCD) app sync $$game ; \
	done

And I added another workflow in GitHub Actions:

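A minimal sketch of what that workflow could look like – the bunsen runner label and the job layout are my assumptions; it just drives the Makefile targets above:

name: Deploy
on:
  workflow_dispatch:

jobs:
  deploy:
    environment: Games
    # assumption: the self-hosted runner label for the laptop inside the cluster
    runs-on: bunsen
    steps:
      - uses: actions/checkout@v4
      - name: Create and sync the ArgoCD apps
        run: |
          make argocd_create argocd_sync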

    Ultimately I get this dashboard in ArgoCD with all the emulators running.

    Sync is manual here, but the beauty of ArgoCD is that it tracks your git repo and can deploy the changes automatically.
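
If I ever want to flip that switch, it’s one command per app; for example:

# let ArgoCD sync (and prune) the app automatically on repo changes
argocd app set victory --sync-policy automated --auto-prune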

    Try It Out

    To use the IngressRoute, I need to add DNS entries for each game. The pi-cluster is behind an old ASUS router running FreshTomato so I can add those using dnsmasq host-records:
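
Something like these lines in the router’s dnsmasq custom configuration – the IP is hypothetical, and every game name points at the same Traefik LoadBalancer address:

host-record=victory,192.168.1.50
host-record=joust,192.168.1.50
host-record=pacman,192.168.1.50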

    Let’s try loading https://victory

Oh, that’s right – no game ROMs yet. To test it out I’d need a royalty-free ROM – there are a few available for personal use on the MAME project page. To load victory.zip I can do something like this:

sandy@bunsen:~/arkade$ kubectl -n games get pod -l app.kubernetes.io/instance=victory 
NAME                       READY   STATUS    RESTARTS   AGE
victory-5d695d668c-tj7ch   1/1     Running   0          12m
sandy@bunsen:~/arkade$ kubectl -n games cp ~/roms/victory.zip victory-5d695d668c-tj7ch:/var/www/html/roms/

Then from the bunsen laptop that lives behind the router (modulo the self-signed certificate) I can load https://victory and click the launch button…

    On to CI/CD?

So far, this feels like it’s about as far as I can take the Arkade project. Is it full-on continuous delivery? No – but this is about all I’d attempt with a personal GitHub account running self-hosted runners. It’s close though – the builds are automated, there are GitHub Actions in place to run them, and ArgoCD is configured and watching for changes in the repo. I’d still need some versioning on the image tags to have rollback targets and release tracking… maybe that’s next.

    -Sandy

  • Automating the Arkade Build

Last entry I got my proof-of-concept arcade emulator going. I’m calling the project “arkade”. First thing, I set up a new GitHub repo for the project. Last time I’d found that archive.org seems to use MAME version 0.239 and emsdk 3.0.0 – after some more trial and error I found that they actually use emsdk v3.1.0 (based on matching the llvm hash in the version string that gets dumped).

    So I locked that into a Dockerfile:

    FROM ubuntu:24.04

    ENV EMSDK_VER=3.1.0
    ENV MAME_VER=mame0239

    RUN apt update \
    && DEBIAN_FRONTEND=noninteractive \
    apt -y install git build-essential python3 libsdl2-dev libsdl2-ttf-dev \
    libfontconfig-dev libpulse-dev qtbase5-dev qtbase5-dev-tools qtchooser qt5-qmake \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*

    # Build up latest copy of mame for -xmllist function
    RUN git clone https://github.com/mamedev/mame --depth 1 \
    && make -C /mame -j $(nproc) OPTIMIZE=3 NOWERROR=1 TOOLS=0 REGENIE=1 \
    && install /mame/mame /usr/local/bin \
    && rm -rf /mame

    #Setup to build WEBASSEMBLY versions
    RUN git clone https://github.com/mamedev/mame --depth 1 --branch $MAME_VER \
    && git clone https://github.com/emscripten-core/emsdk.git \
    && cd emsdk \
    && ./emsdk install $EMSDK_VER \
    && ./emsdk activate $EMSDK_VER

    ADD Makefile.docker /Makefile

    WORKDIR /

    RUN mkdir -p /output

The docker image also has a full build of the latest version of MAME (I’ll use that later). The last command sets up the MAME and Emscripten versions that seemed to work.

With the tools in place, I wanted to move on to building up little nginx docker images, one for each arcade machine. To get that all going, the image needs a few bits and pieces:

• nginx webserver and basic configuration
• the Emularity launcher and support javascript
• the emulator javascript (emulator.js and emulator.wasm)
• the arcade ROM to play back in the emulator

    That collection of stuff looks like this as a Dockerfile:

    FROM arm64v8/nginx

    ADD nginx/default /etc/nginx/conf.d/default.conf

    RUN mkdir -p /var/www/html /var/www/html/roms

    ADD build/{name}/* /var/www/html

A couple of things to see here. I’m using the arm64v8 version of nginx because I’m gonna want to run the images in my pi cluster. The system builds up a set of files in build/{name}, where name is the name of the game.

So I set up a Makefile that creates the build/<game> directory populated with all the bits and pieces. There’s a collection of metadata needed to render a game:

• The name of the game
• The emulator for the game
• The width × height of the game

MAME can actually output all kinds of metadata about the games it emulates. To get access to that, I build the full version of the emulator binary so that I can run mame -listxml <gamelist>. There’s a target in the Makefile that runs mame on the small list of games and outputs a file called list.xml. From that, a couple of python scripts parse down the xml to use the metadata.
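
As a sketch of that pipeline – with xmllint standing in for the repo’s python scripts, and the field names taken from mame’s -listxml output:

# dump the metadata for a couple of games using the full mame build
mame -listxml joust defender > list.xml

# the emulator (driver source file) and native resolution for joust
xmllint --xpath 'string(//machine[@name="joust"]/@sourcefile)' list.xml
xmllint --xpath 'string(//machine[@name="joust"]/display/@width)' list.xml
xmllint --xpath 'string(//machine[@name="joust"]/display/@height)' list.xml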

    Ultimately the build directory for a game looks something like this:

    sandy@www:~/arkade/build$ tree joust
    joust
    ├── browserfs.min.js
    ├── browserfs.min.js.map
    ├── es6-promise.js
    ├── images
    │   ├── 11971247621441025513Sabathius_3.5_Floppy_Disk_Blue_Labelled.svg
    │   └── spinner.png
    ├── index.html
    ├── joust.json
    ├── loader.js
    ├── logo
    │   ├── emularity_color.png
    │   ├── emularity_color_small.png
    │   ├── emularity_dark.png
    │   └── emularity_light.png
    ├── mamewilliams.js
    └── mamewilliams.wasm
    
    3 directories, 14 files

    And the index.html file looks like this:

    sandy@www:~/arkade/build/joust$ cat index.html 

    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>Joust (Green label)</title>
    </head>
    <body>
    <canvas id="canvas" style="width: 50%; height: 50%; display: block; margin: 0 auto;"></canvas>
    <script type="text/javascript" src="es6-promise.js"></script>
    <script type="text/javascript" src="browserfs.min.js"></script>
    <script type="text/javascript" src="loader.js"></script>
    <script type="text/javascript">
    var emulator =
    new Emulator(document.querySelector("#canvas"),
    null,
    new MAMELoader(MAMELoader.driver("joust"),
    MAMELoader.nativeResolution(292, 240),
    MAMELoader.scale(3),
    MAMELoader.emulatorWASM("mamewilliams.wasm"),
    MAMELoader.emulatorJS("mamewilliams.js"),
    MAMELoader.extraArgs(['-verbose']),
    MAMELoader.mountFile("joust.zip",
    MAMELoader.fetchFile("Game File",
    "/roms/joust.zip"))))
    emulator.start({ waitAfterDownloading: true });
    </script>
    </body>
    </html>

The metadata shows up here in a few ways. The <title> field in the index.html is based on the game description. The nativeResolution is a function of the game display “rotation” and width/height. Pulling that information from the metadata helps get the game viewport size and aspect ratio correct. The name of the game is used to set the driver and ROM name. There’s a separate driver field in the metadata which is actually the emulator name – for instance, in this example joust is a game you emulate with the williams emulator. Critically, the emulator name is used to set the SOURCES= line in the mame build for the emulator.
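
The rotation handling amounts to swapping width and height when the display is mounted sideways; a sketch of the idea, continuing with the xmllint stand-in from above:

# a display with rotate="90" or rotate="270" swaps the viewport dimensions
rotate=$(xmllint --xpath 'string(//machine[@name="joust"]/display/@rotate)' list.xml)
w=$(xmllint --xpath 'string(//machine[@name="joust"]/display/@width)' list.xml)
h=$(xmllint --xpath 'string(//machine[@name="joust"]/display/@height)' list.xml)
if [ "$rotate" = "90" ] || [ "$rotate" = "270" ]; then
  echo "nativeResolution($h, $w)"
else
  echo "nativeResolution($w, $h)"
fi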

There’s an m:n relationship between emulators and games (e.g. defender is also a williams game). That mapping is handled using jq and the build/<game>/<game>.json files.
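
A sketch of that mapping – assuming each <game>.json carries fields like name and driver (the exact schema lives in the repo, not shown here):

# group the games by the emulator that runs them
jq -r '[.driver, .name] | @tsv' build/*/*.json | sort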

    Once the build directory is populated, it’s time to build the docker images. There’s a gamedocker.py script that writes a Dockerfile.<game> for each game. After that, it runs docker buildx build --platform linux/arm64... to build up the images.
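
The per-game loop is roughly this – the exact flags live in gamedocker.py, so treat these as assumptions:

for game in joust defender pacman; do
  # cross-build the arm64 image and push it straight to the registry
  docker buildx build --platform linux/arm64 \
    -f Dockerfile.$game \
    -t docker-registry:5000/$game --push .
done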

I do the building on my 8-core amd64 server, so I needed to do a couple things to get the images over to my pi cluster. First I had to set up docker multi-platform builds:

     docker run --privileged --rm tonistiigi/binfmt --install all

I also set up a small private docker registry using this docker-compose file:

sandy@www:/sim/registry$ ls
config  data  docker-compose.yaml
sandy@www:/sim/registry$ cat docker-compose.yaml
version: '3.3'

services:
  registry:
    container_name: registry
    restart: always
    image: registry:latest
    ports:
      - 5000:5000
    volumes:
      - ./config/config.yml:/etc/docker/registry/config.yml:ro
      - ./data:/var/lib/registry:rw
    #environment:
    #  - "STANDALONE=true"
    #  - "MIRROR_SOURCE=https://registry-1.docker.io"
    #  - "MIRROR_SOURCE_INDEX=https://index.docker.io"

    I also had to reconfigure the nodes in the pi cluster to use an insecure registry. To do that I added this bit to my cloud-init configuration:

write_files:
  - path: /etc/cloud/templates/hosts.debian.tmpl
    append: true
    content: |
      192.168.1.1 docker-registry www
  - path: /etc/rancher/k3s/registries.yaml
    content: |
      mirrors:
        "docker-registry:5000":
          endpoint:
            - "http://docker-registry:5000"
      configs:
        "docker-registry:5000":
          tls:
            insecure_skip_verify: true

    Then I reinstalled all the nodes to pull that change.

    Ultimately, I was able to test it out by creating a little deployment:

    kubectl create deployment 20pacgal --image=docker-registry:5000/20pacgal --replicas=1 --port=80

sandy@bunsen:~$ kubectl get pods 
NAME                        READY   STATUS    RESTARTS   AGE
20pacgal-77b777866c-d4dhf   1/1     Running   0          86m
sandy@bunsen:~$ kubectl port-forward --address 0.0.0.0 20pacgal-77b777866c-d4dhf 8888:80
Forwarding from 0.0.0.0:8888 -> 80
Handling connection for 8888
Handling connection for 8888


Almost there. The ROM file is still missing – I’ll need to set up a persistent volume to hold the ROMs… next time.

    -Sandy

  • Building up MAME JS (Was – Deploy ?Something?)

    You have a k8s cluster setup – now what?

I’ve had a lot of fun retro gaming with a RetroPie system I set up last year. Maybe I could set up a virtual arcade in my little cluster…

    My RetroPie setup in the garage.

    I’m thinking I’ll try to get a build of the Multi Arcade Machine Emulator (MAME) working. I went and got the code for MAME and built up galaxian and pacman emulators and bippidy, boppady, blah, blah, blah!

    Not!

    Building a copy of MAME to run pacman went fine, but I wanted the javascript build and that was much harder to get going – which was frustrating because it’s already out there on the Internet Archive working! I guess I could just go grab a copy of their javascripts, but I want some sort of repo to build off of so that I’ll be able to demo some gitops – like maybe ArgoCD.

Not sure – the javascript build seems like the ugly stepchild of MAME, and the instructions didn’t work for me. Anyway – not bitter – here’s what I did to get it working. It’s a pretty power-hungry build, so I do it on my main server.

    Start out by following the Compiling MAME instructions.

    sudo apt-get install git build-essential python3 libsdl2-dev libsdl2-ttf-dev libfontconfig-dev libpulse-dev qtbase5-dev qtbase5-dev-tools qtchooser qt5-qmake
git clone https://github.com/mamedev/mame
    cd mame
    make SUBTARGET=pacman SOURCES=src/mame/pacman/pacman.cpp TOOLS=0 REGENIE=1 -j16

    That built up the emulator off the master branch – and it worked.

    To get the js/wasm build going you are supposed to do something like:
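
Roughly this – reconstructed from the docs and the buildpacman.sh that shows up below:

# wrap the normal MAME build in the emscripten toolchain
source ~/emsdk/emsdk_env.sh
emmake make SUBTARGET=pacman SOURCES=src/mame/drivers/pacman.cpp WEBASSEMBLY=1 -j16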

    It’s been a couple days since I started working on this…I’m pretty sure that initial build finished fine. But there’s a big difference between a finished build and a working build.

You then have to set up a webserver, copy in most of the files from Emularity, and create an html page to point to it all. Emularity is some javascript that makes it easier to launch the MAME emulator.

    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>example arcade game</title>
    </head>
    <body>
    <canvas id="canvas" style="width: 50%; height: 50%; display: block; margin: 0 auto;"></canvas>
    <script type="text/javascript" src="es6-promise.js"></script>
    <script type="text/javascript" src="browserfs.min.js"></script>
    <script type="text/javascript" src="loader.js"></script>
    <script type="text/javascript">
    var emulator =
    new Emulator(document.querySelector("#canvas"),
    null,
    new MAMELoader(MAMELoader.driver("pacman"),
    MAMELoader.nativeResolution(224, 256),
    MAMELoader.scale(3),
    MAMELoader.emulatorWASM("emulator/pacman.wasm"),
    MAMELoader.emulatorJS("emulator/pacman.js"),
    MAMELoader.mountFile("emulator/pacman.zip",
    MAMELoader.fetchFile("Game File",
    "/roms/pacman.zip"))))
    emulator.start({ waitAfterDownloading: true });
    </script>
    </body>
    </html>

Great, but it didn’t load… It just crashed out with an Uncaught (in promise) error (with some random address).


    There isn’t a ton of info on this build system. I did find this post that implied that there’s some version matching that has to go on. The javascript version is basically the binary build of a C++ application massaged into web assembly by emscripten.

    I hacked around for a couple days trying to add symbols and debug in the code and trying to get a break point in the javascript console. Ultimately, I kinda cheated and just tried to have a look at what the Internet Archive had done.

If you look in the console on a successful run, you can see that the MAME emulator is run with -verbose on, so you get a dump of a couple of things.

Critically, they’re running the build on version 0.239 of MAME. Figuring out the emsdk version was a little harder – but I could see they were running Clang 14.0.0 from the llvm tools. You can run ./emsdk list to list the emscripten versions. Ultimately, by playing with it a bit (a loop of testing different versions of emcc: ./emsdk install 2.0.0; ./emsdk activate 2.0.0; source ./emsdk_env.sh; emcc -v), I settled on version 3.0.0, which had Clang 14.0.0. There are tags in the MAME repo for each version, so to get my build working I did this:

    cd ~/emsdk
    ./emsdk install 3.0.0
    ./emsdk activate 3.0.0
source ./emsdk_env.sh
    emcc -v
    emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.0.0 (3fd52e107187b8a169bb04a02b9f982c8a075205)
    clang version 14.0.0 (https://github.com/llvm/llvm-project 4348cd42c385e71b63e5da7e492172cff6a79d7b)
    Target: wasm32-unknown-emscripten
    Thread model: posix
    InstalledDir: /home/sandy/emsdk/upstream/bin

    cd ~/mame
    git checkout mame0239
    ./buildpacman.sh REGENIE=1

    Where my buildpacman.sh script looked like this:

#!/bin/bash 

subtarget=pacman

emmake make -j 16 SUBTARGET=$subtarget \
  SOURCES=src/mame/drivers/pacman.cpp \
  ARCHOPTS=" -s EXCEPTION_CATCHING_ALLOWED=[instantiateArrayBuffer,instantiateAsync,createWasm] " \
  WEBASSEMBLY=1 OPTIMIZE=s $@

    There’s a couple things to mention here:

• First, in this older version of MAME the SOURCES are in a different place: src/mame/drivers/pacman.cpp instead of src/mame/pacman/pacman.cpp
• Next, the EXCEPTION_CATCHING_ALLOWED clause was required. To get the list of functions where catching should be allowed, I had to enable DEBUG=1, SYMBOLS=1, and OPTIMIZE=0 to get a better trace of the stack on that crash (a sketch of that debug build follows this list). That bit is probably the biggest part of the fix. It seems like there’s some async loading of the webassembly going on; the exception needs to be caught (and likely ignored) so that a retry can be attempted
• The default compiler optimization level in the MAME build is OPTIMIZE=3 – I found pacman a little choppy at that level, so I changed it to OPTIMIZE=s – it runs smoothly and makes the download smaller too.
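
The debug build mentioned in the second bullet is the same buildpacman.sh invocation with the debug knobs turned on – a sketch:

emmake make -j 16 SUBTARGET=pacman SOURCES=src/mame/drivers/pacman.cpp \
  WEBASSEMBLY=1 DEBUG=1 SYMBOLS=1 OPTIMIZE=0 REGENIE=1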


    So now it’s working at least as a proof of concept. Hooray.

For now I’m just running nginx on my pink MacBook Air to serve the web site files.

The final pacman.html that runs the loader is the page shown above.

The extraArgs call was a good find too. You can pass args like -showconfig in there, which helps you troubleshoot where to put the ROM file etc.

I know I started by saying I was gonna deploy something into a kubernetes cluster – but I’m a little fried from the hacking I’ve done already. All in all, it was a lot of fun hacking on a C++ / WebAssembly build. Next time I’ll move on to automating the build for a couple games and dockerizing the results.

    -Sandy