Last week I had the opportunity to speak at KCD Italy, a Kubernetes Community Days event. I delivered a talk titled “How to leverage and extend CEL for your cluster security”. The talk gives an overview of the Common Expression Language (CEL), Kubernetes ValidatingAdmissionPolicy, and Kubewarden.
While the talk was delivered in Italian, the slides are in English and can be found here.
Hackweek 22 took place last week. During this week, all SUSE employees are free to hack on whatever they want. This is one of the perks of working at SUSE 😎.
This time my personal project was about building a unikernel that runs WebAssembly.
I wanted this blog post to contain all the details about this journey. However, I realized that would have been too much for a single post.
Common Expression Language (CEL) is an expression language created by Google. It allows you to define constraints that can be used to validate input data.
This language is being used by several open source projects and products, like:

- Google Cloud Certificate Authority Service
- Envoy

There’s even a Kubernetes Enhancement Proposal that would use CEL to validate Kubernetes CRDs. I’ve been looking at CEL for some time, wondering how hard it would be to find a way to write Kubewarden validation policies using this expression language.
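To give a feel for how CEL is embedded into a host application, here is a minimal sketch that compiles and evaluates a constraint from Go using the cel-go library. The variable name and the constraint itself are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/google/cel-go/cel"
)

func main() {
	// Declare the variables the expression is allowed to reference.
	env, err := cel.NewEnv(
		cel.Variable("replicas", cel.IntType),
	)
	if err != nil {
		panic(err)
	}

	// Compile a CEL constraint (hypothetical example).
	ast, issues := env.Compile(`replicas >= 1 && replicas <= 5`)
	if issues != nil && issues.Err() != nil {
		panic(issues.Err())
	}

	prg, err := env.Program(ast)
	if err != nil {
		panic(err)
	}

	// Evaluate the constraint against some input data.
	out, _, err := prg.Eval(map[string]interface{}{"replicas": 3})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // true
}
```

The pattern is always the same: declare the variables, compile the expression, evaluate it against input data. That's what makes CEL attractive as a policy language for admission control.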
A long time has passed since I last wrote something on this blog! 😅 I haven’t been idle during this time, quite the opposite… I kept myself busy experimenting with WebAssembly and Kubernetes.
You have probably already heard about WebAssembly, but chances are high that it was in the context of Web application development. However, there’s an emerging trend of using WebAssembly outside of the browser.
Note well: this blog post is part of a series; check out the previous episode about running containerized buildah on top of Kubernetes.
Quick recap: I have a small Kubernetes cluster running at home, made of ARM64 and x86_64 nodes. I want to build multi-architecture images so that I can run them anywhere on the cluster, regardless of the node architecture. My plan is to leverage the same cluster to build these container images.
Recently I’ve added some Raspberry Pi 4 nodes to the Kubernetes cluster I’m running at home.
The overall support for ARM inside the container ecosystem has improved a lot over the last few years, with more container images made available for the armv7 and arm64 architectures.
But what about my own container images? I’m running some homemade containerized applications on top of this cluster, and I would like to have them scheduled both on the x86_64 nodes and on the ARM ones.
Developers are used to expressing the dependencies of their programs using semantic versioning constraints.
For example, a Node.js application relying on left-pad could allow only certain versions of this library by specifying a constraint like >= 1.1.0 < 1.2.0. This would force npm to install the latest version of the library that satisfies the constraint.
How does that translate to containers?
Imagine the following scenario: a developer deploys a containerized application that requires a Redis database.
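To sketch the idea, here is what resolving such a constraint against image tags could look like. This is just an illustration using the Masterminds/semver Go library; the tag list and constraint are hypothetical, and it assumes the image tags follow semantic versioning:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

func main() {
	// The same kind of constraint a developer would write for npm.
	constraint, err := semver.NewConstraint(">= 1.1.0, < 1.2.0")
	if err != nil {
		panic(err)
	}

	// Hypothetical tags published for a container image.
	tags := []string{"1.0.3", "1.1.0", "1.1.9", "1.2.0", "latest"}

	// Pick the highest tag that satisfies the constraint.
	var best *semver.Version
	for _, t := range tags {
		v, err := semver.NewVersion(t)
		if err != nil {
			continue // skip tags that aren't valid semver, e.g. "latest"
		}
		if constraint.Check(v) && (best == nil || v.GreaterThan(best)) {
			best = v
		}
	}
	fmt.Println(best) // 1.1.9 -> pull redis:1.1.9
}
```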
As part of SUSE Hackweek 17, I decided to work on a fully fledged Docker registry mirror.
You might wonder why this is needed; after all, it’s already possible to run a docker distribution (aka registry) instance as a pull-through cache. While that’s true, this solution doesn’t address the needs of more “sophisticated” users.
The problem: based on the feedback we got from a lot of SUSE customers, it’s clear that a simple registry configured to act as a pull-through cache isn’t enough.
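For reference, this is roughly what a minimal pull-through cache configuration looks like for the upstream distribution registry (the storage path and listen address are placeholders):

```yaml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # Mirror the Docker Hub: images pulled through this registry
  # are fetched from the remote and cached locally.
  remoteurl: https://registry-1.docker.io
```

Note that a single instance configured this way can only mirror one upstream registry, which is part of why this setup falls short for more demanding users.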
In case you missed it, the openSUSE images for Docker suddenly got smaller.
During the last week I worked together with Marcus Schäfer (the author of KIWI) to reduce their size.
We fixed some obvious mistakes (like no longer installing man pages and documentation), but we also removed some useless packages.
These are the results of our work:
- openSUSE 13.2 image: from 254M down to 82M
- openSUSE Tumbleweed image: from 267M down to 87M

Just to make some comparisons, the Ubuntu image is around 188M while the Fedora one is about 186M.
One of the perks of working at SUSE is hackweek, an entire week you can dedicate to working on whatever project you want. Last week the 12th edition of hackweek took place, so I decided to spend it solving one of the problems many users have when running an on-premise instance of a Docker registry.
The Docker registry works like a charm, but it’s hard to have full control over the images you push to it.