By Tobias Kriebisch

Keeping container images secure is a critical task in any modern DevOps workflow. Two popular tools that help achieve this are Grype and Syft. With these tools, you can automatically generate a Software Bill of Materials (SBOM) and scan your container images for known vulnerabilities from the CVE database. However, a common challenge arises when running these tools inside GitLab CI: the containers provided by their vendors are distroless and cannot be used directly by GitLab Runner (which needs at least a shell inside the image to execute job scripts, and often a full Linux distribution for Docker-in-Docker or other tasks).

What Are Grype & Syft?

Both are open-source tools from Anchore. Syft generates a Software Bill of Materials (SBOM) from a container image or filesystem, listing every package it can detect. Grype takes an image or a Syft-generated SBOM and matches the contained packages against known vulnerability databases.

The Challenge

When running scans in GitLab CI, you might try to pull the official Grype or Syft containers only to discover they’re minimal “distroless” images, which are incompatible with the default GitLab Runner. By default they also try to use Docker or Podman to access local images. And containers that require special privileges (like Docker-in-Docker) or a full Linux distribution are typically problematic or insecure to use in a GitLab Runner.


The Solution: Nix

By using Nix, you can simply run Grype and Syft in a plain, unprivileged environment. Furthermore, by using the registry: source prefix, you can pull images (even from private registries) directly from the registry, without needing a Docker or Podman daemon.

Below are two example jobs in GitLab CI that will illustrate how to automatically create SBOMs and scan for vulnerabilities, all without privileged containers.


Example Job: Generating an SBOM with Syft

syft-container-scanning:
  stage: build
  image:
    name: nixos/nix:2.25.3
    entrypoint: [""]
  variables:
    SYFT_REGISTRY_AUTH_USERNAME: ${CI_REGISTRY_USER}
    SYFT_REGISTRY_AUTH_PASSWORD: ${CI_REGISTRY_PASSWORD}
  script:
    - mkdir reports
    - nix-shell -p syft --run "syft scan registry:your-container-image:vx.x.x --output cyclonedx-json=reports/container-sbom.json --output cyclonedx-xml=reports/container-sbom.xml"
  artifacts:
    paths:
      - reports/**.json
      - reports/**.xml
    when: on_success
    expire_in: "30 days"
  only:
    - tags

Example Job: Vulnerability Scanning with Grype

grype:
  stage: build
  image:
    name: nixos/nix:2.25.3
    entrypoint: [""]
  needs:
    - job: syft-container-scanning
      artifacts: true
  artifacts:
    paths:
      - reports/container-vulnerability-report.json
    when: always
    expire_in: "30 days"
  script:
    - nix-shell -p grype --run 'grype --fail-on High sbom:reports/container-sbom.json -o cyclonedx-json=reports/container-vulnerability-report.json -o table'
  only:
    - tags

Conclusion

By leveraging Nix to install Grype and Syft, I can seamlessly integrate container scanning into my GitLab CI pipeline without resorting to privileged containers. This approach maintains security best practices while providing all the benefits of generating an SBOM and detecting vulnerabilities in my images.

With these two example jobs, I can now confidently automate my container scanning. As soon as an image is built and tagged, Syft generates an SBOM, Grype checks it for vulnerabilities, and the pipeline fails (or warns me) when issues of severity High or above are found. This shifts security left by catching problems early, giving me peace of mind that my container images are secure and up to date.

Ideally, I should also introduce a scheduled (e.g. daily) job that repeats the scan, to catch newly discovered vulnerabilities in images that are already released. A sketch of how that could look follows below.
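
A possible shape for such a job; it assumes a pipeline schedule is configured in the GitLab UI, and the rules: clause restricts the job to scheduled pipelines (job name and image reference are illustrative):

grype-scheduled:
  stage: build
  image:
    name: nixos/nix:2.25.3
    entrypoint: [""]
  variables:
    SYFT_REGISTRY_AUTH_USERNAME: ${CI_REGISTRY_USER}
    SYFT_REGISTRY_AUTH_PASSWORD: ${CI_REGISTRY_PASSWORD}
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - mkdir reports
    # regenerate a fresh SBOM, then scan it with the latest vulnerability data
    - nix-shell -p syft --run "syft scan registry:your-container-image:vx.x.x --output cyclonedx-json=reports/container-sbom.json"
    - nix-shell -p grype --run "grype --fail-on High sbom:reports/container-sbom.json -o table"
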
By Tobias Kriebisch

When developing with PHP in a Nix environment, you may encounter inconsistencies between the PHP versions used by the CLI and by Composer. This issue typically arises from the way Nix manages dependencies and packages, which can result in Composer and the CLI referring to slightly different PHP paths, even when they’re intended to use the same version. In my case, the PHP that Composer runs on needs the redis extension so it can be used by a post-update script.

Problem: Inconsistent PHP Paths in Nix

The initial setup for PHP in our shell.nix file specified PHP 8.3 with several extensions.

let
  pkgs = import <nixpkgs> { };
  php = pkgs.php83.withExtensions (
    { enabled, all }:
    with all;
    [
      ctype
      dom
      fileinfo
      filter
      mbstring
      openssl
      pdo
      session
      tokenizer
      zlib
      curl
      imagick
      redis
      opcache
      pdo_pgsql
      gd
      xdebug
      pcntl
      zip
    ]
  );
  packages = pkgs.php83Packages;
in
pkgs.mkShell {
  nativeBuildInputs = [
    php
    packages.composer
    packages.phpinsights
    packages.phpmd
  ];
}

However, upon inspecting the environment, it was apparent that Composer and the CLI were using different PHP binaries, even though both pointed to PHP 8.3. This led to potential confusion, especially in managing the redis extension.

For instance:

$ which php
/nix/store/dl7q2888a2m0b32mzy9qs5hmjh992jiy-php-with-extensions-8.3.12/bin/php
$ composer -V
PHP version 8.3.12 (/nix/store/n7zg8vq3gf10s0jjkw5vv7f55iyck2mc-php-with-extensions-8.3.12/bin/php)

Here, which php and Composer’s reported PHP path led to two different binaries, and the Composer PHP was missing the redis extension. The likely reason: pkgs.php83Packages.composer is built against the stock pkgs.php83, not against our withExtensions variant, so it brings along its own PHP.

Solution: Unified PHP Configuration in `shell.nix`

To solve this, we needed a way to specify PHP in shell.nix so that both the CLI and Composer use the same binary.

Updated shell.nix:

Basically, we take Composer from our configured PHP package (via php.packages.composer) instead of the standalone php83Packages.composer, so both the CLI and Composer share the exact same PHP binary:

let
  pkgs = import <nixpkgs> { };
  php = (pkgs.php83.withExtensions (
      { all, enabled }:
      enabled
      ++ (with all; [
        redis
      ])
    ));
in
pkgs.mkShell {
  nativeBuildInputs = [
    php.packages.composer
    php
  ];
}
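
To verify, re-enter the shell and compare the paths again; both commands should now report the same php-with-extensions store path (the exact hash will differ per machine):

$ which php
$ composer -V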

This approach to defining PHP and Composer in shell.nix ensures consistent, predictable behavior in your PHP development setup. Happy coding!

By Tobias Kriebisch

If you've developed a Bevy project in Rust for Windows, you may have noticed that when you run your program, a console window pops up alongside your game window. This can be distracting and unprofessional-looking, and you might prefer to have only the game window display without any extra console window.

Fortunately, there's a simple solution to this issue: using a linker argument when building your project.

To do this, you'll need to open a terminal and navigate to your Bevy project's directory. Then, run the following command:

cargo rustc --release -- -Clink-args="/SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup" 

This command passes extra arguments to the linker, telling Windows to run your program without creating a console window. The /SUBSYSTEM:WINDOWS argument marks the binary as a Windows GUI application rather than a console application, and /ENTRY:mainCRTStartup keeps the standard C runtime entry point, which the Windows subsystem would otherwise not use by default.
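
If you would rather not type a custom cargo invocation every time, the same effect can be compiled in with a crate-level attribute; a common pattern (not from the original post) is to gate it so debug builds keep the console for log output:

// src/main.rs, at the very top of the crate
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]

fn main() {
    // your Bevy setup goes here as usual, e.g. App::new().run();
    println!("release builds on Windows will not open a console window");
}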

With this command, you can now build your Bevy project for Windows without any distracting console windows popping up. Happy coding!

By Tobias Kriebisch

We have evaluated a Matrix server as our new internal asynchronous communication medium. Currently we use Slack on a free plan.

To frame the topic: while Slack is a finished product, Matrix is a protocol. So for Matrix there are manifold clients and servers, both free and paid.

We chose the Synapse server with the Element client. Element is probably the most popular client for Matrix. Synapse is a server, written in Python, that has been around for a while now.

The Arguments

Slack has worked great for us in the past. The free plan has very few limitations for a smaller software development company. Transitioning to Synapse, a self-hosted solution, increases the maintenance load on the team.

Upsides

Downsides

By Tobias Kriebisch

To improve the security of your k3s cluster, you might want to configure Traefik’s TLS support to only allow TLS v1.2 or greater.

This can be achieved by deploying a TLSOption named default to the default namespace.

apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
  name: default
  namespace: default
spec:
  minVersion: VersionTLS12  
  cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 # TLS 1.2
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305  # TLS 1.2
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   # TLS 1.2
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305    # TLS 1.2
    - TLS_AES_256_GCM_SHA384                  # TLS 1.3
    - TLS_CHACHA20_POLY1305_SHA256            # TLS 1.3
    - TLS_FALLBACK_SCSV                       # TLS FALLBACK
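
Assuming the manifest above is saved as tlsoption.yaml (the filename is arbitrary), it can be applied and verified like this:

kubectl apply -f tlsoption.yaml
kubectl get tlsoption default -n default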

At the time of writing this post, this setup gives tecbeast.com an A rating on ssllabs.com.

By Tobias Kriebisch

Update: Nuxt now offers a better solution for this out of the box (the <ClientOnly> component; see the Nuxt docs).

I wanted to use the package v-calendar, currently at 3.0.0-alpha.8, in my Nuxt 3 project. Somewhere in the package the window object is used, which is not available in the server part of the SSR process.

To fix this without any hydration error, v-calendar needs to be enabled only once the app has finished rendering in the browser.

Luckily Nuxt has a hook for that: app:mounted. It allows code to execute when the app has finished rendering in the browser. This can be wrapped in a small plugin.

npx nuxi add plugin hydrated

import { ref, type Ref } from "vue";

const hydrated: Ref<boolean> = ref(false);

export default defineNuxtPlugin((nuxtApp) => {
  nuxtApp.provide('hydrated', hydrated);
  nuxtApp.hook("app:mounted", () => {
    hydrated.value = true;
  });
});
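
Optionally (my own addition, not required for the plugin to work), $hydrated can be made fully typed by augmenting the Nuxt and Vue types in a declaration file, e.g. types/hydrated.d.ts (hypothetical path):

import type { Ref } from "vue";

declare module "#app" {
  interface NuxtApp {
    $hydrated: Ref<boolean>;
  }
}

declare module "vue" {
  interface ComponentCustomProperties {
    $hydrated: Ref<boolean>;
  }
}

export {};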

This can be used in a component like so (note the script setup, which makes the ref unwrap automatically in the template):

<script setup lang="ts">
  const { $hydrated } = useNuxtApp();
</script>

<template>
  <VCalendar v-if="$hydrated">
    <!-- some code -->
  </VCalendar>
  <!-- an optional placeholder while the client has not hydrated yet -->
  <SomePlaceholder v-else />
</template>

Now Nuxt can render in its Node environment in peace, and the browser will enable the package as needed.

By Tobias Kriebisch

Sometimes you have to call multiple APIs for a Nuxt page to render. If called like this:

const response1 = await useFetch("url1");
const response2 = await useFetch("url2");

Due to the nature of await, these requests go out sequentially, delaying the page load. On the other hand, with this approach you do not have to handle the state where the data is not yet available to Vue.

To solve this, we can group the requests and use Promise.all, so one single await waits for all requests in parallel.

const responses = await Promise.all([useFetch("url1"), useFetch("url2")]);
const url1Data = responses[0].data;
const url2Data = responses[1].data;

Now the requests load in parallel and we can still use the goodies that Nuxt 3 gives us, like refresh or error.

This works with anything that needs to be awaited.

By Tobias Kriebisch

For our SaaS application PURMA we need at least 7 running services. To move them to Kubernetes and thereby offer a self-hosted version, I got myself familiar with writing Helm charts.

My first impression

To me, Helm is like a specialized template engine that can talk to Kubernetes. It is extremely straightforward to build a small template and just publish it. You can set requirements for the user of the chart to fulfill, and limit the amount of tinkering to protect against common traps. Of course, if someone wanted to, they could still modify the resulting Kubernetes YAML any way they want. I think it is the job of the chart provider to make sure such modifications are not needed, but edge cases do exist of course. A sketch of what such a template looks like follows below.
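
A minimal sketch (chart and value names are illustrative, not PURMA’s actual chart) showing both the templating and the required function that forces chart users to supply a value:

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount | default 1 }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          # "required" aborts the install with a clear message if the user did not set the value
          image: "{{ required "image.repository must be set" .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}"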

By Tobias Kriebisch

On my quest for relentless optimization and efficiency, I came across tools like gh and hub for GitHub that looked promising. Since we are mainly using GitLab, I decided to give glab, the GitLab counterpart of gh, a test.

I am especially interested in getting updates for issues and CI from the CLI.

CI

glab offers a nice overview of the pipeline triggered by your current commit. Just run glab ci view and you will receive something like:

╔════════════════════ Pipeline #20432 triggered about 2 days ago by Tobias Kriebisch ═════════════════════╗
║  ┌────────────────────┐           ┌────────────────────┐           ┌────────────────────┐               ║
║  │      Install       │           │        Lint        │           │        Test        │               ║
║  └────────────────────┘           └────────────────────┘           └────────────────────┘               ║
║                                                                                                         ║
║  ╔═════✔ composer═════╗           ┌──────✔ phpcs───────┐           ┌─────✘ phpunit──────┐               ║
║  ║                    ║           │                    │           │                    │               ║
║  ║             00m 37s║═══════════│             00m 09s│═══════════│             02m 22s│               ║
║  ╚════════════════════╝           └────────────────────┘           └────────────────────┘               ║

The view auto updates in real time. Very nice :).

If you have an error like in my example, you can get the output of the job with glab ci trace. It will ask for the specific job you want to read. The output has colors and looks like it was run in my own shell. Very pleasing for my eyes.

Issues

This is really awesome: you can do almost everything you can do from the web UI. Create issues (and edit them with vim, so cool), close, delete, list. It is pretty nice; a few of the subcommands are shown below.
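
For reference, a few of the issue subcommands (the issue number is just an example):

glab issue list
glab issue create
glab issue view 7
glab issue close 7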

Conclusion

I will try to move everything I do with GitLab over to this tool. Let’s see how long that works.

By Tobias Kriebisch

PHP is morphing more and more into a statically typed language, even if the types remain optional. So I wanted to try out something that embraces types and performance as first-class citizens. Rust also has a pretty good reputation, so I went with it.

To get started, I wanted to write a simple program, only a few lines long, that calculates the Fibonacci number at a certain position in the Fibonacci sequence.

So I came up with:

// Returns the Fibonacci number at the given position (1-based, with F(1) = 0).
// Note: assumes number >= 1; passing 0 would loop forever.
fn fibonacci_u64(number: u64) -> u64 {
    let mut last: u64 = 1;
    let mut current: u64 = 0;
    let mut buffer: u64;
    let mut position: u64 = 1;

    return loop {
        if position == number {
            break current;
        }

        buffer = last;
        last = current;
        current = buffer + current; 
        position += 1;
    };
}

Being able to return something directly from a loop is a nice feature. It is pretty easy to read too.

Testing

For a PHP/Laravel developer, testing is almost a requirement to trust what you wrote, so I naturally wanted to write a test. Conveniently, Rust has built-in support for testing.

In the same file, you can add a module dedicated to testing:

#[cfg(test)]
mod tests {
    use super::*; // also load the functions from the actual code

    #[test]
    fn u64() {
       assert_eq!(fibonacci_u64(1), 0); 
       assert_eq!(fibonacci_u64(2), 1); 
       assert_eq!(fibonacci_u64(12), 89); 
       assert_eq!(fibonacci_u64(30), 514229); 
    }
}

With cargo test this will be compiled with extra safety checks, like overflow checks, and will either complain or give you green. It is a really seamless experience.

Benchmarking

For fun I also tested an experimental feature of Rust: the cargo bench command, which will benchmark your application. It is almost the same as writing tests.

As of writing, you have to use the nightly Rust toolchain for this and enable the feature by adding #![feature(test)] and extern crate test; at the crate root.

#[cfg(test)]
mod tests {
    use super::*;
    use test::{Bencher,black_box};

    #[bench]
    fn bench_u64(b: &mut Bencher) {
        b.iter(|| {
            for i in 1..20 {
                black_box(fibonacci_u64(i));
            }
        });
    }

    #[bench]
    fn bench_u128(b: &mut Bencher) {
        b.iter(|| {
            for i in 1..20 {
                black_box(fibonacci_u128(i));
            }
        });
    }
}

This will output something like:

test tests::bench_u128 ... bench:           8 ns/iter (+/- 0)
test tests::bench_u64  ... bench:           4 ns/iter (+/- 0)

It looks like u128 takes double the time that u64 does on my machine. Seems correct, but it might be totally wrong and I missed something.
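
For completeness: the benchmark references a fibonacci_u128 function that was not shown above. It is simply the u64 version with wider types; a reconstruction, assuming it mirrors the original:

fn fibonacci_u128(number: u128) -> u128 {
    let mut last: u128 = 1;
    let mut current: u128 = 0;
    let mut buffer: u128;
    let mut position: u128 = 1;

    return loop {
        if position == number {
            break current;
        }

        buffer = last;
        last = current;
        current = buffer + current;
        position += 1;
    };
}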

Conclusion

Rust looks nice and extremely well thought through, at least for this little program. The cargo features are amazing so far. I am looking forward to a somewhat bigger project.

By Tobias Kriebisch

Understand Tailwind

Tailwind offers highly flexible layouting and styling with a very small footprint, and you barely have to write any CSS yourself. This is achieved by providing a huge amount of semantic CSS classes that can be used throughout a project.

To get a small footprint over the wire, it purges unused classes at build time by scanning HTML and other source code. Only classes that are actually found are included in the final build, reducing the CSS to a few tens of KB or even less.

Understand the markdown plugin

The markdown plugin for Laravel does not support Tailwind out of the box. To make the two work together, we have to do two things:

1. Style the markdown output with Tailwind classes, by registering custom renderers.
2. Inform the build process about the CSS classes we used, so they survive the purge.

If we can do both of the above, we get a JavaScript-free website with modern styling and a very small footprint.

Style our site with tailwind and markdown

Markdown consists of different parts, e.g. a part that makes code monospaced. The underlying markdown package allows us to register custom renderers for the different markdown parts.

So I added a new service provider to Laravel and added the following code to its boot method:

app('markdown')
    ->getEnvironment()
    ->addBlockRenderer(Heading::class, new HeadlineRenderer);

app('markdown')
    ->getEnvironment()
    ->addBlockRenderer(Paragraph::class, new ParagraphRenderer);

app('markdown')
    ->getEnvironment()
    ->addBlockRenderer(FencedCode::class, new FencedCodeRenderer);

The renderers used here are my own, which basically do the same as the originals provided by the package, but add the Tailwind classes. A sketch of one of them follows below.
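
A minimal sketch of such a renderer against the commonmark v1.x BlockRendererInterface; the exact Tailwind classes are illustrative:

<?php

namespace App\Markdown;

use League\CommonMark\Block\Element\AbstractBlock;
use League\CommonMark\Block\Element\Heading;
use League\CommonMark\Block\Renderer\BlockRendererInterface;
use League\CommonMark\ElementRendererInterface;
use League\CommonMark\HtmlElement;

class HeadlineRenderer implements BlockRendererInterface
{
    public function render(AbstractBlock $block, ElementRendererInterface $htmlRenderer, bool $inTightList = false)
    {
        if (!($block instanceof Heading)) {
            throw new \InvalidArgumentException('Incompatible block type: ' . \get_class($block));
        }

        // pick a Tailwind class list per heading level, falling back for deeper levels
        $classes = [
            1 => 'text-3xl text-yellow font-bold mb-4',
            2 => 'text-2xl text-yellow-light font-bold mb-3',
            3 => 'text-xl font-bold mb-2',
        ][$block->getLevel()] ?? 'text-lg font-bold mb-2';

        return new HtmlElement(
            'h' . $block->getLevel(),
            ['class' => $classes],
            $htmlRenderer->renderInlines($block->children())
        );
    }
}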

The underlying markdown rendering package, https://github.com/thephpleague/commonmark, is preparing a version 2.0 at the time of writing. In this article I used v1.5, which comes with the Laravel markdown package.

Inform the build process about our used css classes

To achieve a nice gruvbox look, I configured Tailwind with the colors from gruvbox and added all files in app/Markdown to the purge array. This is where I store my renderers.

Example tailwind.config.js

  purge: [
    './resources/**/*.blade.php',
    './resources/**/*.js',
    './app/Markdown/**/*.php', // this is new
    './resources/**/*.vue'
  ],
  theme: {
    colors: {
        white: '#f4f4f4',
        red: {
            DEFAULT: '#cc241d',
            light: '#fb4934',
        },
        green: {
            DEFAULT: '#98971a',
            light: '#b8bb26',
        },
        yellow: {
            DEFAULT: '#d79921',
            light: '#fabd2f',
        },
        blue: {
            DEFAULT: '#458588',
            light: '#83a598',
        },
        purple: {
            DEFAULT: '#b16286',
            light: '#d3869b',
        },
        aqua: {
            DEFAULT: '#689d6a',
            light: '#8ec07c',
        },
        gray: {
            DEFAULT: '#a89984',
            light: '#928374',
        },
        orange: {
            DEFAULT: '#d65d0e',
            light: '#fe8019',
        },
        bg: '#282828',
        bg0_h: '#1d2021',
        bg0_s: '#32302f',
        bg0: '#504945',
        bg1: '#3c3836',
        bg2: '#584945',
        bg3: '#665c54',
        bg4: '#7c6f64',
        fg: '#ebdbb2',
        fg0: '#fbf1c7',
        fg1: '#ebdbb2',
        fg2: '#d5c4a1',
        fg3: '#bdae93',
        fg4: '#a89984',
    },
    extend: {},
  },

This is all that is needed. At the time of writing this puts a little less than 2KB over the wire for beastcoding.de.

By Tobias Kriebisch

Set up Nextcloud as if you had a static IP, without a static IP

Your ISP probably offers no static IP in a typical consumer plan, but with WireGuard we can solve this problem.

We can buy a cheap virtual server online. I found one for 2,89 € at hetzner.de with 20 TB of traffic included per month; it is their CX11 cloud offering. This is plenty for basically everything a private person wants to do online. You could even stream movies from home with that kind of traffic. I just want to host a Nextcloud and maybe this blog at home.

What we need:

- a cheap virtual server with a public IP (the online server)
- a server at home behind the consumer connection (the home server)
- WireGuard on both machines

Set up a connection between the home server and the online server

We use an Ubuntu 20.04 server in this example.

First we install WireGuard on a fresh server:

apt update
apt upgrade
apt install wireguard wireguard-tools

We need to generate key pairs for the online server and our home server. The following needs to be executed on both machines:

cd /etc/wireguard/
wg genkey | tee privatekey | wg pubkey > publickey

We need to add a file /etc/wireguard/wg0.conf with the following content; the private and public keys need to be filled in.

Server

[Interface]
PrivateKey = <insert-server-private-key-here>
ListenPort = 55107
Address = 192.168.4.1

[Peer]
PublicKey =  <insert-public-key-for-client-here>
AllowedIPs = 192.168.4.2/32

Client

[Interface]
PrivateKey = <insert-client-private-key-here>
Address = 192.168.4.2/32

[Peer]
PublicKey =  <insert-public-key-for-server-here>
AllowedIPs = 192.168.4.1/32
Endpoint = <ip-of-the-server>:55107
PersistentKeepalive = 25

Now we want to start the WireGuard connection and enable it to autostart on reboot:

systemctl start wg-quick@wg0
systemctl enable wg-quick@wg0

To verify that the connection works as expected, you can try to ping the server from the client:

ping 192.168.4.1

Set up routing so the online server forwards traffic to the home server

Let’s assume we want to route HTTP traffic (port 80) from the online server to our home server.

We need to set up the forwarding rules with iptables:

iptables -P FORWARD DROP
iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i eth0 -o wg0 -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

First we set the default FORWARD policy to DROP, so nothing is forwarded unless we explicitly allow it.

Second, we allow incoming packets on eth0 to be forwarded to wg0 on port 80 whenever a packet tries to open a new connection.

Third, with the last two lines we allow any traffic between wg0 and eth0 that belongs to an already established connection to pass through.
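
One prerequisite that is easy to miss: the kernel’s IP forwarding must be enabled on the online server, otherwise the FORWARD rules never see any traffic:

# enable forwarding immediately
sysctl -w net.ipv4.ip_forward=1
# persist it across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forward.conf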

We still have one problem. The packets still carry the wrong IP addresses, since the internet does not know that there is a home server behind the online server. So we need to rewrite the addresses with NAT.

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.4.2

This rewrites the destination IP of any packet arriving at the online server on port 80. Note that we set the destination to the address we gave the home server in the WireGuard setup (192.168.4.2).

iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 80 -d 192.168.4.2 -j SNAT --to-source 192.168.4.1

With the source rewritten to the online server’s WireGuard address, the replies from our home server travel back through the online server, which handles the rest.

Persist rules during reboot

iptables forgets any custom rules during a shutdown. The netfilter-persistent package solves this problem:

apt install iptables-persistent netfilter-persistent
systemctl enable netfilter-persistent
netfilter-persistent save

If there are any changes to the routing rules, we need to call netfilter-persistent save again to store the new rules on disk.