Categories
No category

Thinking about AI

Thinking about AI, both its success and the competition in that market, I got an idea. So I wrote down my thoughts as bullet points and asked Claude to write a coherent text about it.

AI Data Portability: The Case for Conversation Portability

Artificial intelligence is arguably the most transformative technology of our era. Some analysts and historians are already drawing comparisons to the Industrial Revolution — a moment when the rules of productivity, creativity, and human potential were fundamentally rewritten. This time, the machinery is cognitive, and the pace of change is staggering.

With that transformation comes a responsibility to get the rules right. It is genuinely encouraging that legislators, ethicists, and legal scholars around the world are beginning to build frameworks around AI — addressing questions of accountability, transparency, and fairness. These conversations are not a brake on innovation; they are the foundation that makes innovation trustworthy.

A Market at a Tipping Point

The current AI landscape is intensely competitive. Dozens of players — from tech giants to nimble startups — are racing to capture users, data, and market share. Yet underneath that apparent diversity lies a real danger: winner-takes-all dynamics. Network effects, data advantages, and the stickiness of user habits tend to consolidate markets around a single dominant player. We have seen it happen with search, with social media, and with e-commerce. There is no reason to assume AI will be different — unless we act intentionally to prevent it.

The Hidden Lock-In Problem

Every time a user interacts with an AI product, something valuable is being built: a history. A growing, layered record of questions asked, ideas explored, documents drafted, and problems solved. Over time, this history becomes context — and context is what transforms a generic AI assistant into something that feels genuinely useful and personal.

Here lies the subtle trap. Users are not just choosing a product; they are investing in it. And the more they invest, the harder it becomes to leave — not because competitors are worse, but because starting over means losing everything that has been built. This is lock-in, and it is one of the most powerful tools an incumbent can use to suppress competition.

Portability as a Principle

All major AI assistants are, in important ways, different. They have distinct personalities, strengths, knowledge bases, and reasoning styles. A response from one will never be identical to a response from another. And that is fine — even desirable. Diversity in AI approaches benefits users and society.

But different does not have to mean incompatible. Just as email can be exported and moved between providers, just as medical records can be transferred between doctors, conversation history should be portable.

The ask is straightforward: users should have the ability to export the complete history of their prompts and responses — and import that history into a new AI product of their choosing. The new AI will inevitably respond to future questions in its own way. No one expects otherwise. But the context, the thread of past conversations, the accumulated knowledge of what a user has worked on — that should travel with the user, not be held hostage by a platform.

Why This Matters

This is not merely a convenience feature. It is a question of fairness, competition, and user rights. Portability would:

  • Lower barriers to switching, keeping the market genuinely competitive and preventing any single player from monopolizing users through accumulated data rather than ongoing merit.
  • Respect user ownership of their own intellectual and conversational history — content that users themselves created.
  • Encourage better products, because providers who cannot rely on lock-in must continuously earn their users’ loyalty through quality.

Legal frameworks around AI are still being written. This is the moment to get ahead of the problem — to establish data portability as a baseline right for AI users, before the market consolidates and the window for meaningful regulation narrows. The Industrial Revolution took generations to regulate well. We do not have that luxury this time.

So, what do you think? Just send me an email. And if you are looking for an experienced IT consultant to discuss this further, definitely send me an email!


In memoriam – Rutger van Sleen

I have known Rutger, as the Dutch saying goes, for a hundred years. Around 2000 I was giving talks about Linux and open source for my work, and in my memory he turned up soon after: the quiet, steady engine behind the Nederlandse Linux Gebruikers Groep (the Dutch Linux Users Group). Rutger was simply always there. Not to put himself in the spotlight, but to make sure the work got done.

When Jean-Paul and I started T-DOSE in 2006, Rutger and his dear wife Susanne were there once again. Typical Rutger: no big words, but big deeds. Cables that still had to be laid, chairs that had to be straightened, an awkward job nobody volunteered for: Rutger had already taken care of it before you even noticed. That is how he worked, and how he lived.

We did not live near each other (they in Groningen, me much further south), so we were not constantly on each other's doorstep. But for the big things in life we found each other. In joy and in worry. And when that horrible illness showed us sharply how fragile everything is, Rutger stayed exactly who he was: modest, practical, loving. Together with his family he still drew, despite everything, so much joy and quality from the years that followed. That continues to touch me deeply.

His funeral card carries a quote that suits him perfectly:

“Some people live more in twenty years than others do in eighty.
It’s not the time that matters, it’s the person.”
The Doctor

That was Rutger. For him it was never about time or titles, but about people. About building something beautiful, functional, and working, together. That is why he leaves a large void behind: at home, among his friends, and in the open source community he meant so much to.

Rutger will be hugely missed as a person. I wish his family and everyone who loved him a great deal of strength in the time ahead. And I will keep seeing him as he was: calm, with a twinkle in his eyes, already at work before anyone had asked, and for exactly that reason unforgettable.

Jeroen Baten
France, 26 August 2025


Monitoring filesystem growth with Zabbix

Introduction

Like many people I use Zabbix for monitoring. I love the web GUI to configure stuff and the API to automate its configuration when I need to.

And although Zabbix comes packed with a lot of usable templates, they are more a starting point for your own infrastructure than a 100% ready solution.

Recently I needed to start monitoring filesystem usage growth so I would be warned in time when a system was nearing its limits. It turns out that Zabbix has had a timeleft function for exactly this occasion for a really long time. But how, and where, to use it?

Well, usually there is already a template available that covers filesystem usage numbers. Those keep track of used space and used inodes. Within the template is an LLD, a Low Level Discovery rule. The result of that rule is a list of discovered filesystems, and together with ‘prototype’ items and triggers it can automatically add items and triggers to your host.

My setup

If you search the available templates for ‘Linux filesystems’ you will easily find them. One is called ‘Linux filesystems by Zabbix agent’ and the other is ‘Linux filesystems by Zabbix agent active’ (for if you are using active instead of passive checks). In the ‘Discovery’ column you can see that it has (in my case) 1 LLD.

If you click on the ‘Discovery’ link you will see the list of LLDs (in my case a single rule) with the following info:

List of discovery rules

Usually you will see four (4) item and trigger prototypes, but my list shows 5 of each. Let’s start with the list of item prototypes:

List of item prototypes

As you can probably guess, the first one is the subject of this blog post. Let’s have a close look at it:

My item definition

Item definition to get timeleft information

If we analyse this item we see the following settings:

  • Name: Since this item will expand (because it is part of a LLD!) it is important to add a macro (#FSNAME) to the name. This allows you to distinguish what filesystem this item is talking about later. Also, without it you would try to create multiple items with the same name and Zabbix would raise an error.
  • Type: We are going to perform a calculation, so the item type is ‘calculated’
  • Key: A calculated item still needs its own unique key. I named mine vfs.fs.size.timeleft[{#FSNAME},pused], after the vfs.fs.size[{#FSNAME},pused] item already gathered in this LLD, which serves as the input for the formula below.
  • Type of information: Since we are performing a calculation the result will be a number. That is why we select ‘Numeric (float)’ here.
  • Formula: this is what it’s all about: ((((timeleft(//vfs.fs.size[{#FSNAME},pused],7d,95)/60)/60)/24)/30). This means: calculate how much time is left before this filesystem becomes 95% full, based on the last seven days of data. Since the result is in seconds we have to do some divisions to get to a number of months.
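
The chained divisions are nothing more than unit conversions from seconds to (30-day) months. A quick shell sanity check, using 46656000 as a made-up timeleft value that equals exactly 18 such months:

```shell
# seconds -> minutes -> hours -> days -> 30-day months,
# mirroring the /60 /60 /24 /30 chain in the Zabbix formula
seconds=46656000
months=$(( ((seconds / 60) / 60) / 24 / 30 ))
echo "$months"   # prints 18
```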

My trigger definition

Nice, but does this give us an alert when it becomes time to have a look at it? No, it doesn’t. For that we have to define a trigger. So we create a trigger prototype in the trigger section of the LLD. Mine looks like this:

My trigger prototype definition

Again, let’s take a closer look at the individual settings:

  • Name: As with the item name, we need to add the #FSNAME macro to our descriptive text
  • Severity: For me, I set the severity to high because an alert like this definitely deserves attention!
  • Expression: this is the expression that determines when to raise an alert: last(/Linux filesystems by Zabbix agent/vfs.fs.size.timeleft[{#FSNAME},pused])<3. This means: as soon as the last measured value of this item for this host drops below three (months), I get an alert.
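
In other words, the trigger is a plain numeric comparison on the calculated item’s latest value, which is expressed in months. A tiny illustration of that comparison (the values 2.5 and 3.5 are made up):

```shell
# 2.5 months left is below the threshold of 3, so the trigger would fire;
# with 3.5 it would stay quiet
awk 'BEGIN { v = 2.5; r = (v < 3) ? "fires" : "ok"; print r }'   # prints fires
```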

After having configured all this correctly I head over to the ‘Latest data’ section of Zabbix to see how I’m doing:

Listing with latest timeleft data

Phew! As you can see I am in the clear for now, but one system will need disk usage growth attention in one and a half years’ (21-3=18 months) time.

If you found this helpful, please reward my work of researching and writing this. Please go to GitHub or Patreon and show your appreciation.


Is open source (finally) growing up?

I have been working in the field of open source software since 1997-ish. Being a tech geek myself and loving computers I never understood back then why people would choose to use inferior buggy software.

I do now, and it’s called marketing. Still, in the early two thousands the term ‘open source’ caught on and its popularity started to grow. Sure, at some points it was an uphill battle, but after some twenty years had passed we could all look each other in the eye and say that we did it!

Look around you today. Just about every embedded system runs Linux, from TVs to media centers. Heck, even the top 500 supercomputers in the world mostly run Linux.

In the realm of databases PostgreSQL rules big time. Every payment made through Facebook is stored in a PostgreSQL database at the payment provider Adyen.

Have you taken a look at open source ERP lately? The open source Odoo system, programmed in Python, is really gaining momentum against the usual suspects. Today’s speed of doing business and optimizing business processes asks for better, faster and more agile software tools. Hence the choice for the Python programming language.

In December of 2021 a blog post caught my eye: “Open source is broken”. In it, the writer makes some valid points, pointing at the underfunding of sometimes vital infrastructure. But is it all doom and gloom for the future of open source as we know it? Of course not! It is just a logical evolutionary step in its growth. Actually, there was another major event surrounding this problem back in 2014.

On 7 April 2014, the “Heartbleed” bug in the OpenSSL software library was publicly disclosed and fixed. At the same time it became apparent that OpenSSL (the library that makes the difference between “http” and “https”) was extremely underfunded. The Linux Foundation quickly stepped in and started the Core Infrastructure Initiative (CII) to fund crucial Internet infrastructure projects. But what about all those other projects? The very nice applications that are not vital to a working Internet?

Well, responses to these problems are starting to appear. Let me give you some examples:

The very popular Ardour software (for running a recording studio) only allows paid downloads of ready-to-install packages. Mind you, the software is still 100% open source, but if you want to download an installer package, you have to pay. Or download the source and compile it yourself; that is also still an option. The developer does not ask for a lot: you can already subscribe for $1 per month. But with the popularity of the product, the numbers do add up. For him, that is. It is still nowhere near the numbers that larger companies are getting for their software.

The open source network analysis tool Ntop uses a similar strategy. Still open source, but after installation you see a very prominent “Upgrade to Pro/Enterprise version” at the top of the screen. And some add-ons only work as demo software, after which you need to buy a license (or compile stuff from scratch, of course). Their source code is also still 100% open source.

The financial struggle in the open source world is no different than in the rest of the job world. People like to get paid a decent wage for their work. And it seems that offering paid packages as a convenience to the intended user is a promising direction for the future sustainability of open source projects.


The complete guide to setting up a multi-peer WireGuard VPN

Let’s start with a description of my needs. I have two remote systems and I want to be able to connect to both of them. Both systems are behind a standard NAT firewall (like a home router). And I want to be able to copy files between them easily. I am not a VPN or network whizz, but I know my way around IP addresses. I know that besides WireGuard there are more options, like OpenVPN, but I prefer an easy setup with enough security. So I got hacking the other day and found a few small pitfalls. To help others set this up I decided to write a small “complete guide to setting up a multi-peer WireGuard VPN network”.

Getting started is the easy part. There are enough guides on the Internet by now on how to get to some initial setup. The thing is, after following those directions you are probably only halfway there.

So, let’s get started.

First, take a piece of paper and draw the network you want to set up. Draw all hosts and assign each of them a unique IP address in a new network that you are not already using. In my case I chose 10.10.1.0/24. This means the first three octets (10.10.1) identify my network and the last octet is each system’s host address.
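
The /24 means the first 24 bits (three octets) are the network part, leaving 8 bits for hosts. If you want to convince yourself how many systems fit in such a network:

```shell
# A /24 leaves 8 host bits; the all-zeros (network) and all-ones (broadcast)
# addresses are reserved, so 2^8 - 2 hosts remain
hosts=$(( (1 << 8) - 2 ))
echo "$hosts"   # prints 254
```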

Since both systems are behind a firewall I cannot access them from the outside world. This means I need at least one system in my network that is accessible from the outside world. For this I chose to instantiate a very cheap cloud virtual machine at some supplier. This will be my vpn-router-vm system. All it will do is route traffic within my 10.10.1.0/24 VPN network. Of course it has a public IP address that is visible to the outside world.

Now that I have decided all of the above I can assign IP addresses to my two systems. It makes sense to assign 10.10.1.1 to the vpn-router-vm, which means my other nodes will be 10.10.1.2 and 10.10.1.3.

I instantiate the vpn-router-vm and choose Ubuntu 20.04 for the OS. I do an apt update and apt upgrade to make sure I am running the latest patches. Then I install the UFW firewall tool and make the machine accessible over SSH only from my home server. No need for script kiddies to do dictionary attacks, right?

# apt install ufw
# ufw default deny incoming
# ufw default allow outgoing
# ufw allow from <my.home.ip.address> to any port 22 proto tcp comment 'ssh access from home'
# ufw enable

Since this vpn-router-vm needs to be accessible from the outside world and one of my systems does not have a fixed IP address, I need to allow WireGuard traffic from anywhere. I use port 41194 (the WireGuard default is 51820):

# ufw allow 41194/udp

Now, on all our systems we are going to run exactly the same commands: install WireGuard, make a configuration directory, generate a private key and, derived from that private key, a public key. The private key never leaves its own system; only the public keys are exchanged. That way each system can verify that a connecting peer really holds the private key matching the public key we configured for it before it is allowed access.

# apt install wireguard
# mkdir -m 0700 /etc/wireguard/
# cd /etc/wireguard
# umask 077; wg genkey | tee privatekey | wg pubkey > publickey
# cat privatekey
# cat publickey
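
A side note on the umask 077 in the key-generation one-liner: it ensures the key files are created readable by root only. A quick throwaway demonstration (demo_key is just a scratch file name):

```shell
# With umask 077 a newly created file gets mode 600 (owner read/write only),
# which is exactly what you want for privatekey
( umask 077; touch demo_key; stat -c '%a' demo_key; rm demo_key )   # prints 600
```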

Okay, now all our systems will get a new network interface named ‘wg0’, so we need to create a wg0.conf file in every /etc/wireguard directory. We will start with a skeleton configuration (yes, you will add stuff to this later, and for good reason) for our vpn-router-vm system. Note that in the interface definition we use /24 because we define our whole VPN network here.

## Set Up WireGuard VPN on Ubuntu By Editing/Creating wg0.conf File ##
[Interface]
## My VPN server private IP address ##
Address = 10.10.1.1/24
 
## My VPN server port ##
ListenPort = 41194
 
## VPN server's private key i.e. /etc/wireguard/privatekey ##
PrivateKey = private-key-of-vpn-router-vm

[Peer]
## Desktop/client VPN public key ##
PublicKey = public-key-of-my-first-peer-system
 
## client VPN IP address (note  the /32 subnet) ##
AllowedIPs = 10.10.1.2/32

On my first remote node I also create a wg0.conf file, but with slightly different contents:

[Interface]
## This Desktop/client's private key ##
PrivateKey = my-systems-private-key
 
## Client ip address ##
Address = 10.10.1.2/24
 
[Peer]
## Ubuntu 20.04 server public key ##
PublicKey = the-public-key-of-my-vpn-router-vm
 
## set ACL ##
AllowedIPs = 10.10.1.0/24
 
## Your Ubuntu 20.04 LTS server's public IPv4/IPv6 address and port ##
Endpoint = the-public-ip-address-of-my-vpn-router-vm:41194
 
## Keep connection alive ##
# This is needed because we are behind NAT firewall
PersistentKeepalive = 15

One thing to note in the text above is the last line. Since the system is behind a NAT firewall it is not reachable from the outside world. I like it that way. But it also means that this node has to ‘ping’ the VPN server from time to time to keep the NAT mapping open.

Okay. All that is left now is to start WireGuard on the vpn-router-vm and on my first peer:

# systemctl enable wg-quick@wg0
# systemctl start wg-quick@wg0
# systemctl status wg-quick@wg0

The status should show something similar to this:

wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
     Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled; vendor preset: enabled)
     Active: active (exited) since Sat 2022-03-12 12:35:01 CET; 23h ago
       Docs: man:wg-quick(8)
             man:wg(8)
             https://www.wireguard.com/
             https://www.wireguard.com/quickstart/
             https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
             https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8
   Main PID: 1316620 (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 38309)
     Memory: 0B
     CGroup: /system.slice/system-wg\x2dquick.slice/wg-quick@wg0.service

mrt 12 12:35:01 inzicht systemd[1]: Starting WireGuard via wg-quick(8) for wg0...
mrt 12 12:35:01 inzicht wg-quick[1316620]: [#] ip link add wg0 type wireguard
mrt 12 12:35:01 inzicht wg-quick[1316620]: [#] wg setconf wg0 /dev/fd/63
mrt 12 12:35:01 inzicht wg-quick[1316620]: [#] ip -4 address add 10.10.1.2/24 dev wg0
mrt 12 12:35:01 inzicht wg-quick[1316620]: [#] ip link set mtu 1420 up dev wg0
mrt 12 12:35:01 inzicht systemd[1]: Finished WireGuard via wg-quick(8) for wg0.

Assuming they are active and working properly on both systems you should now be able to ping one another:

# ping 10.10.1.1
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=13.3 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=12.4 ms
^C
--- 10.10.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 12.441/12.875/13.309/0.434 ms

All rejoice, for you have a working VPN! \o/ Unfortunately you’re not done yet. Sorry.

So add a second remote system to your VPN setup by adding another “Peer” section to the wg0.conf file on the vpn-router-vm, and configure the second remote system like you did before, taking care of course to use that system’s own private key in its [Interface] section and its public key in the new [Peer] section on the vpn-router-vm.

If all goes according to plan then that server is capable of pinging your vpn-router-vm. Again, we all rejoice \o/.

Now try to ping one of the remote systems from the other remote system. I am guessing it doesn’t work. That’s a bummer, but it can easily be fixed. The thing is that a default Linux system usually does not forward IP packets. To enable IP forwarding on the vpn-router-vm you need the following commands:

# cat /proc/sys/net/ipv4/ip_forward                               # <- probably this is zero/0
# sysctl -w net.ipv4.ip_forward=1                                 # enable IP forwarding on the running system
# echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-ip-forward.conf # persist the setting across reboots

So, can you now ping the remote system from the other remote system? Yes, you can! Again, we all rejoice \o/. Surely you can now also ssh into a remote system from the other remote system? And, again, bummer, you can’t. Something is prohibiting access to the ssh port from the remote system. What can it be? Yes, of course: the firewall on the vpn-router-vm system! So you add a few lines to the wg0.conf on the vpn-router-vm to enable traffic to all ports in the VPN network (all credits to user ‘dddma’ on Reddit for this). Your wg0.conf file on the vpn-router-vm will now look like this (both PostUp and PostDown are very long single lines!):

## Set Up WireGuard VPN on Ubuntu By Editing/Creating wg0.conf File ##
[Interface]
## My VPN server private IP address ##
Address = 10.10.1.1/24
 
## My VPN server port ##
ListenPort = 41194
 
## VPN server's private key i.e. /etc/wireguard/privatekey ##
PrivateKey = private-key-of-vpn-router-vm

#Allow forwarding of ports
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
## Desktop/client VPN public key ##
PublicKey = public-key-of-my-first-peer-system
 
## client VPN IP address (note  the /32 subnet) ##
AllowedIPs = 10.10.1.2/32

[Peer]
## Desktop/client VPN public key ##
PublicKey = public-key-of-my-second-peer-system
 
## client VPN IP address (note  the /32 subnet) ##
AllowedIPs = 10.10.1.3/32

Don’t forget to do a “systemctl restart wg-quick@wg0” when you change a config file. Anyway, that’s it. You’re done. Enjoy! I hope you enjoyed this “Complete guide to setting up a multi-peer WireGuard VPN network”. No likes needed. Have a nice day.

If you found this helpful, please reward my work of researching and writing this. Please go to GitHub or Patreon to show your appreciation.