Introduction

This is the place where I dump technical content until my main blog gets revamped. (it'll never be revamped)

I tried a redesign but it's nowhere close to even acceptable.

So for the time being, I'll use this mdbook to organize my stuff.

Pull requests are welcome on GitHub and GitLab.

Using FUSE to create filesystems (Talk)

This talk was presented at the FOSS United Delhi meetup at Whizdom Club (Tapasya One, Gurugram).

This is divided into 2 parts:

Resources

Introduction to FUSE

FUSE is an interface that allows users to implement filesystems without needing to touch the bulky kernel code. While we usually only hear about in-kernel filesystems like ext4 or btrfs, with FUSE we too can develop our own filesystems!

Get the slides here

About FUSE

FUSE = Filesystem in USErspace

  • filesystem implementation runs in userspace
    • implementation written in a language of user's choice (e.g. Python)
    • implementation runs as a normal script or application
  • FUSE module provides a bridge from filesystem implementation to kernel interface

FUSE has 3 major components:

  • kernel module fuse.ko
  • userspace library libfuse
  • a mount utility

FUSE support on your system

The relevant kernel module, fuse.ko, has shipped since Linux version 2.6.14, so any FUSE-based filesystem implementation can be assumed to work on virtually all Linux systems today.

The presence of FUSE kernel module can be checked on any Linux system.

[lain@wired ~]$ lsmod | grep fuse
fuse                  212992  5
[lain@wired ~]$ pkg-config --list-all | grep ^fuse
fuse3                          fuse3 - Filesystem in Userspace
fuse                           fuse - Filesystem in Userspace

How does a virtual filesystem with FUSE work?

  • the FUSE library knows about the basic list of operations on filesystems
  • the implementation is expected to provide definitions for these basic methods
  • whenever the system performs one of these filesystem operations, the FUSE module refers to the implementation code to know how to service the request (a minimal skeleton is sketched below)
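
To make this concrete, here is a minimal read-only filesystem sketch using the python-fuse bindings (the same style of API used by the snippets later in this talk). The HelloFS name and the single /hello file are my own illustration, not code from the talk.

import errno
import stat

import fuse
from fuse import Fuse

fuse.fuse_python_api = (0, 2)

HELLO = b"hello from FUSE\n"

class HelloStat(fuse.Stat):
    def __init__(self):
        self.st_mode = 0
        self.st_ino = 0
        self.st_dev = 0
        self.st_nlink = 0
        self.st_uid = 0
        self.st_gid = 0
        self.st_size = 0
        self.st_atime = 0
        self.st_mtime = 0
        self.st_ctime = 0

class HelloFS(Fuse):
    def getattr(self, path):
        st = HelloStat()
        if path == "/":
            st.st_mode = stat.S_IFDIR | 0o755
            st.st_nlink = 2
        elif path == "/hello":
            st.st_mode = stat.S_IFREG | 0o444
            st.st_nlink = 1
            st.st_size = len(HELLO)
        else:
            return -errno.ENOENT
        return st

    def readdir(self, path, offset):
        # called when the mounted directory is listed
        for name in [".", "..", "hello"]:
            yield fuse.Direntry(name)

    def read(self, path, size, offset):
        # called when /hello is read; serve the requested slice
        if path != "/hello":
            return -errno.ENOENT
        return HELLO[offset:offset + size]

if __name__ == "__main__":
    server = HelloFS(dash_s_do="setsingle")
    server.parse(errex=1)
    server.main()

Assuming you saved it as hellofs.py, mount it with python hellofs.py /tmp/mnt; cat /tmp/mnt/hello should then print the string.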

Why userspace?

  • ability to utilize the userspace stack
    • python libraries
    • interact with desktop environment
    • interact with network resources
  • don't touch the kernel if not needed

Where is FUSE being utilized?

  • NTFS-3G uses FUSE to mount and read NTFS drives on Linux, FreeBSD, macOS and other operating systems
  • gocryptfs provides an encrypted overlay filesystem
  • google-drive-ocamlfuse lets you mount your Google Drive onto a local folder and browse/edit files
  • WikipediaFS lets you view and edit MediaWiki articles from your local file browser and text editor

Using FUSE to write a YouTube channel browser

The main idea here is to return data from the YouTube API instead of a local SQLite database. Once you decide how this filesystem should function, you can proceed to implementing the relevant functions.

The complete code is on GitHub: flyingcakes85/fuse-yt.

How would this filesystem work?

For our very simple demonstration, let's assume the following:

  • at the root, there shall be multiple directories
  • each of these directories will have names corresponding to the id of the channel they should be connected to
  • each directory should list videos from the corresponding channel

As a rough guideline, we can chart the following process:

  • when user opens a directory, we can extract the channel id from the directory path
  • this channel id is then used to fetch videos via the YouTube API
  • readdir() returns these videos

Enumerating channels

This is the simplest part. All you have to do is yield directory entries whose names match the channel names.

def readdir(self, path: str, _offset: int):
    contents = [".", ".."]
    if path == "/":
        # at the root, list one directory per channel
        contents.extend(self._channel_list())

    for r in contents:
        yield fuse.Direntry(r)

Listing video files in channel folders

We extend our earlier readdir() function to return "video files" when the path isn't the root. We append the names of the videos to the contents list, whose entries are finally yielded by the function.

def readdir(self, path: str, _offset: int):
    # --- snip ---
    else:
        channel_name = path.split("/")[1]
        videos = self._get_videos(channel_name)
        for v in videos:
            contents.append(
                v["snippet"]["resourceId"]["videoId"]
                + "_"
                + v["snippet"]["title"].replace("/", " ")
                + ".desktop"
            )
    # --- snip ---

But do we download the videos?

Not really! It would be a huge waste of bandwidth to download each video just to open the directory. Moreover, this would make filesystem operations very slow.

Videos can be played via mpv, using yt-dlp as a provider:

mpv https://www.youtube.com/watch?v=dQw4w9WgXcQ

This can be shortened to

mpv ytdl://dQw4w9WgXcQ

So, we could return "files", which are just shell scripts with the following content:

#!/usr/bin/env bash
mpv ytdl://$VIDEO_ID

Sufficient?

Well, nope! While executing these scripts may play the corresponding video, they do NOT resemble video files AT ALL!

So how do you attach an icon to a file?

Presenting video files to user

You create what's called a desktop entry. These are a way to create shortcuts to commands, and these files can specify their own icons too.

Here's what a simple desktop entry for the above shell script should look like:

[Desktop Entry]

Type=Application

Name=Rick Astley - Never Gonna Give You Up (Official Music Video)
Exec=mpv --ytdl-raw-options=paths=/tmp ytdl://dQw4w9WgXcQ
Icon=/tmp/dQw4w9WgXcQ.jpg

Comment=

Categories=Video;
Keywords=youtube;

NoDisplay=false

Now it looks much better

Finally, we use this as the file contents for each video, replacing the video id and title every time.

Our read() function now extracts the video name from the path:

def read(self, path: str, _size: int, _offset: int) -> bytes:
    try:
        video_name = path.split("/")[2]
        video_id = video_name[:11]  # YouTube video ids are 11 characters
        file_contents = f"""[Desktop Entry]

Type=Application

Name={video_name[12:-8]}
Exec=mpv --ytdl-raw-options=paths=/tmp ytdl://{video_id}
Icon={self.CACHE_FOLDER}/{video_id}.jpg

Comment=

Categories=Video;
Keywords=youtube;

RunInTerminal=true
NoDisplay=false
"""
        return bytes(file_contents, "utf-8")
    except (IndexError, ValueError):
        # a path without a second component means no such file exists
        return -errno.ENOENT

Adding new "channels" (i.e. directories)

By now, we have everything implemented to fetch videos given a channel id. So, logically speaking, in order to add a new channel, all we need to do is add that channel id to our CHANNEL_LIST array.

We will need to implement two functions: mkdir and rename. While you can create a folder with your preferred name via the mkdir command at the shell, many file explorers create a folder with a default name and then rename it to your desired one. Thus, keeping in mind our use case (i.e. usability from file explorers), it's important to implement both mkdir and rename.

def mkdir(self, path: str, mode: int):
    parent_dir, new_channel = os.path.split(path)

    # sanity checks
    if parent_dir != "/":
        return -errno.ENOENT

    if new_channel in self._channel_list():
        return -errno.EEXIST

    # append to channel list
    self.CHANNEL_LIST.append(new_channel)

def rename(self, pathfrom: str, pathto: str):
    parent_dir, old_name = os.path.split(pathfrom)

    # sanity checks
    if parent_dir != "/":
        return -errno.ENOENT

    parent_dir, new_name = os.path.split(pathto)

    if parent_dir != "/":
        return -errno.ENOENT

    # rename
    for i, channel in enumerate(self.CHANNEL_LIST):
        if channel == old_name:
            self.CHANNEL_LIST[i] = new_name
            break

A Guide to Packaging

This is the reference chapter for my workshop about packaging, hosted by FOSS United at Red Hat, Powai.

I've put the main focus on Arch Linux, but it is only an example. The core concepts apply to ALL packaging systems; not just on Linux distributions, but even on macOS (Homebrew) and Windows (Chocolatey). They can even be extended to language library registries like npm, pip, cargo, etc.

This is presented in 6 parts:

Resources

What is packaging and package manager?

Packaging is the process of bundling a software along with the extra files (icons, desktop files, configurations etc.) needed to install and run it on a user's computer.

Pretty dense, right?

Yes and no!

Installers bundle the software AND an extra tool which copies the software components to the right locations, sets up configurations, ensures the dependencies are fulfilled and, of late, even downloads the software itself. Each software brings its own installer.

With packages however, the archive that is downloaded contains ONLY the software. Extracting and copying it to your system locations is done by a package manager.

Why should I use a package manager when an installer can do everything?

So an installer can do everything? Right?

installers can do everything

That's what you don't want. It's conveniently easy to download a malicious installer from the internet and let it run with administrator privileges on your system, wreaking havoc.

A package manager on the other hand is a trusted software which often (but not always!) comes bundled with your operating system. The externally downloaded software (i.e. packages) is never "run" with superuser privileges. It is the package manager which does all the dirty work.

Is there anything else a package manager does, or is that all?

Much more!

  • package managers usually have a list of "sources" from where they will download the relevant package
    • so the chances of the user going to a shady website and downloading malware instead of the actual application are greatly reduced
  • package managers often include a way to check integrity of downloaded packages
    • this safeguards you from mirror hijacking, MITM attacks or simply damaged packages
  • package managers can compute the dependency chain, meaning the user does not need to manually install all the pre-requisite libraries for an application (a toy resolver is sketched after this list)
  • in the same vein, package managers can find out orphan packages (i.e. libraries which no software is using); this is helpful for system cleanup
  • they may also support rollbacks/snapshots
    • eopkg on Solus natively supports rolling back package operations
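
To illustrate the dependency-chain point, here is a toy resolver in Python. The DEPS table and package names are made up; real package managers resolve against their sync databases and additionally handle versions and conflicts.

# a toy dependency resolver: returns an install order in which every
# package comes after all of its dependencies
DEPS = {
    "myapp": ["curl"],
    "curl": ["openssl", "zlib"],
    "openssl": [],
    "zlib": [],
}

def install_order(target, seen=None):
    seen = set() if seen is None else seen
    order = []
    for dep in DEPS[target]:
        if dep not in seen:
            order += install_order(dep, seen)
    if target not in seen:
        seen.add(target)
        order.append(target)
    return order

print(install_order("myapp"))  # ['openssl', 'zlib', 'curl', 'myapp']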

How do package managers "install" a software?

The common procedure is to take a package archive and extract it to the relevant system directories. As said before, installers on Windows bundle the logic to copy the files. With package managers however, the work of extracting and copying lies with the package manager.

The package archive contains files organized in proper directories, along with some metadata. We'll see this in depth in further sections. For a very basic package manager, these files just need to be extracted to the system directories. Often though, some post-extraction steps need to be performed; for example, upon upgrading the kernel, the initramfs needs to be regenerated. The package manager takes care of these tasks too.
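
To make the "extract to the system directories" idea concrete, here is a minimal Python sketch. This is only an illustration of the concept, not how pacman works internally; the archive name and the fake root path are assumptions, and the fake root keeps the demo from touching your real system.

import tarfile

def install(package_path, root="/tmp/fakeroot"):
    # a real package manager would first verify checksums and signatures,
    # record the file list in its database, and run post-install hooks
    with tarfile.open(package_path) as pkg:
        pkg.extractall(root)  # "install": copy files into the target root

install("hello.tar", "/tmp/fakeroot")  # hypothetical package archive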

What exactly is in a package?

Continuing from last section, what exactly is in a package?

A package contains a compiled version of the software itself along with some necessities. So the next question is...

...What goes along with a "software" anyway?

Honestly, I wish the answer were simple. I can compile down my C code into a single binary and that pretty much is the software.

But the real world ain't that straightforward yet.

Say, your software makes API calls over the internet. It will need an http library, right? Do you start writing your own http library? Nope. You make a web search for http libraries for C language and end up at the iconic LibCurl website. What now? Do you bundle LibCurl along with your software itself?

Well, you can. And that's called static linking. But often, you don't.

What you do is dynamically link to the library (in our example, libcurl.so) and then expect it to be there on the user's system during runtime. No need to include it with your application.

But hey, who ensures that libcurl.so will exist on the user's system? That's where the package manager says hi! It will see that curl is a dependency of your application and automatically pull it in when a user installs your application.

These are called dependencies: simply the list of other packages which are required for your software to run.

The major block (dependencies) aside, what else goes into a package?

  • manual pages are an absolute must for most CLI utilities
  • similarly, desktop files are important for GUI applications to show up in app launchers
  • if the application shows up in the app launcher, it needs an app icon
  • applications often ship default configurations, right?
  • if it runs as a service, then service files are necessary
  • for CLI applications, often shell completions need to be bundled
  • scripts
    • pre install
    • post install
    • pre remove
    • post remove

Depending on the packaging format, the components might be more or fewer, but the idea remains the same:

\[ \text{Package} = \text{Software} + \text{Necessary components} \]

Lets look inside an Arch Linux package

For the purpose of this demonstration, I'm going to show the contents of 2 packages:

  • first, we see mdbook, which is the tool I'm using to build this reference website
    • I've chosen it, because the package for mdbook is very simple
  • second is btop, a system resource monitor
    • I've chosen this to show what extra items a GUI application usually bundles

tl;dr

The purpose of this page is to show that a package contains the files necessary for the software to work, placed nicely in relevant directories. Along with these files, the package has some metadata files, which are required by the package manager.

If you can appreciate these two facts, then you may skip this page.

Look inside: mdbook

Download the package

I'm downloading mdbook version 0.4.43, the latest as of writing this document. Depending on when you're reading, this version could be outdated or even removed from mirrors. You'll always find the latest version here: https://archlinux.org/packages/extra/x86_64/mdbook/download/

cd /tmp
wget https://london.mirror.pkgbuild.com/extra/os/x86_64/mdbook-0.4.43-1-x86_64.pkg.tar.zst

Extracting the package

mkdir mdbook
tar -C mdbook -xf mdbook-0.4.43-1-x86_64.pkg.tar.zst

At this point, you can see the contents of folder mdbook

cd mdbook
ls -alh

It should give the following output

lain@wired /tmp/mdbook
$ ls -alh
total 16K
drwxr-xr-x  3 lain lain  120 Dec 12 15:27 .
drwxrwxrwt 19 root root  560 Dec 12 15:26 ..
-rw-r--r--  1 lain lain 5.3K Nov 26 01:30 .BUILDINFO
-rw-r--r--  1 lain lain  597 Nov 26 01:30 .MTREE
-rw-r--r--  1 lain lain  409 Nov 26 01:30 .PKGINFO
drwxr-xr-x  4 lain lain   80 Nov 26 01:30 usr

Those familiar with the Filesystem Hierarchy Standard will already recognize the usr folder. We'll come to that later.

Exploring the package

Let's first explore the usr folder.

lain@wired /tmp/mdbook
$ tree usr
usr
├── bin
│   └── mdbook
└── share
    ├── bash-completion
    │   └── completions
    │       └── mdbook
    ├── doc
    │   └── mdbook
    │       └── README.md
    ├── fish
    │   └── vendor_completions.d
    │       └── mdbook.fish
    └── zsh
        └── site-functions
            └── _mdbook

Normally, you'd find more folders under /usr on your system, but since this package needs to place files in only two of them (bin and share), it has only those two folders.

The main binary

Under usr/bin we can see the binary, which is our software itself.

lain@wired /tmp/mdbook
$ file usr/bin/mdbook
usr/bin/mdbook: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=7f0d0368fa7949be803463b0b426fb8655febfb6, for GNU/Linux 4.4.0, stripped

Explaining this output is beyond the scope of this reference. All you need to be convinced of for now is that the file we just discussed is actually a binary, because file reports it to be in the ELF format.

Documentation

Under usr/share/doc/mdbook you see a bundled README.md file which gives a basic starting point for using mdbook.

Documentation need not be Markdown files; HTML documentation is very common too. Note that these are different from manpages, which are supposed to be stored in /usr/share/man.

Shell completions

Finally, we're left with 3 folders

  • usr/share/bash-completion
  • usr/share/zsh
  • usr/share/fish

These folders bundle shell completions for bash, zsh and fish respectively. Bundling these is not necessary, but nice to have nonetheless.

Metadata files

We see 3 files not discussed yet: .BUILDINFO, .MTREE and .PKGINFO. These are metadata files, which aren't directly required by the software itself, but are needed nonetheless for installing and managing the package.

.MTREE

This file contains hashes and timestamps of all the files in the package. It is used by the package manager to verify the integrity of the package.

Since it's a compressed file, you can't directly cat it.

lain@wired /tmp/mdbook
$ file .MTREE
.MTREE: gzip compressed data, from Unix, original size modulo 2^32 1584

file reports it to be gzip compressed. Use zcat to print out contents.

lain@wired /tmp/mdbook
$ zcat .MTREE
#mtree
/set type=file uid=0 gid=0 mode=644
./.BUILDINFO time=1732564809.0 size=5338 sha256digest=39d286ee40b577d2386b97104cb14b0ce01c98a42ae031300017202237f81872
./.PKGINFO time=1732564809.0 size=409 sha256digest=adc6a6de09619057285d9d76cbdb8ee9f3e99fd4e39ce41b0bdd924dbe1eab60
/set mode=755
./usr time=1732564809.0 type=dir
./usr/bin time=1732564809.0 type=dir
./usr/bin/mdbook time=1732564809.0 size=12027696 sha256digest=885ad1dd57886aea81f9d1b3fb0e56a4202a4a82b11481d4ea3259591dc4ac8f
./usr/share time=1732564809.0 type=dir
./usr/share/bash-completion time=1732564809.0 type=dir
./usr/share/bash-completion/completions time=1732564809.0 type=dir
./usr/share/bash-completion/completions/mdbook time=1732564809.0 mode=644 size=12922 sha256digest=4693cccac54ccf2c98f1ba4595a89028454c0222f568b7db1461ccc48ce7f868
./usr/share/doc time=1732564809.0 type=dir
./usr/share/doc/mdbook time=1732564809.0 type=dir
./usr/share/doc/mdbook/README.md time=1732564809.0 mode=644 size=1046 sha256digest=44e4086a85125c978b6c56be6beeb5d5b24c7394c96e281c1c589217eff4ca5f
./usr/share/fish time=1732564809.0 type=dir
./usr/share/fish/vendor_completions.d time=1732564809.0 type=dir
./usr/share/fish/vendor_completions.d/mdbook.fish time=1732564809.0 mode=644 size=7822 sha256digest=19374afc581279c2a31579171096e3fb03615b417431466c829e39a7a8d3b9cb
./usr/share/zsh time=1732564809.0 type=dir
./usr/share/zsh/site-functions time=1732564809.0 type=dir
./usr/share/zsh/site-functions/_mdbook time=1732564809.0 mode=644 size=10795 sha256digest=86db03752e01a5090041d67ff5b9824eaf59b02b93e2eb5bb7e00ef784bac13e

Just as an example, you can compute the hash of a file and compare it with the value in .MTREE.

lain@wired /tmp/mdbook
$ sha256sum usr/bin/mdbook
885ad1dd57886aea81f9d1b3fb0e56a4202a4a82b11481d4ea3259591dc4ac8f  usr/bin/mdbook

The hash does indeed match with the value in .MTREE.
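
If you'd rather script this check, here is a small Python sketch that re-verifies every sha256 digest recorded in .MTREE. Run it from inside the extracted /tmp/mdbook folder; it simply automates what we just did by hand.

import gzip
import hashlib

with gzip.open(".MTREE", "rt") as mtree:
    for line in mtree:
        fields = line.split()
        # skip the /set defaults and directory entries
        if not line.startswith("./") or "sha256digest=" not in line:
            continue
        path = fields[0][2:]  # strip the leading "./"
        expected = [f.split("=", 1)[1] for f in fields if f.startswith("sha256digest=")][0]
        with open(path, "rb") as fh:
            actual = hashlib.sha256(fh.read()).hexdigest()
        print(path, "OK" if actual == expected else "MISMATCH")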

.BUILDINFO

This file contains information about the build environment where the package was built; this is necessary for reproducible builds. It contains the list of packages that existed in the build environment and the specific build flags.

I am intentionally not copying the contents here, since the file is too long.

.PKGINFO

This file contains metadata which the package manager needs to install/manage the package.

lain@wired /tmp/mdbook
$ cat .PKGINFO
# Generated by makepkg 7.0.0
# using fakeroot version 1.36
pkgname = mdbook
pkgbase = mdbook
xdata = pkgtype=pkg
pkgver = 0.4.43-1
pkgdesc = Create book from markdown files, like Gitbook but implemented in Rust
url = https://github.com/rust-lang/mdBook
builddate = 1732564809
packager = Caleb Maclennan <alerque@archlinux.org>
size = 12060281
arch = x86_64
license = MPL2
depend = gcc-libs
makedepend = cargo

You'll notice this includes a key named depend. This is where the dependencies are listed, so pacman knows which other packages need to be installed first in order for the current one to work. In this case, mdbook needs gcc-libs in order to work.

More metadata files

The three metadata files we discussed must be included with every package. However, there are two more files which are optionally included.

  • .INSTALL: this is used to bundle commands which need to be run after install, upgrade or removal of the package
  • .Changelog: an optional file kept by package maintainer documenting changes of the package

Libraries

Since mdbook is written in Rust, it is an almost statically linked executable. All Rust dependencies are statically linked (i.e. part of the binary itself); the only libraries it links dynamically are the C runtime libraries, GNU libc and the GCC runtime libgcc_s (hence the external dependency on gcc-libs above). Had it bundled shared libraries, they would have been placed in usr/lib.

Look inside: btop

Similar to the last section; let's get through this quickly.

Download and extract

The latest version of btop is always available at https://archlinux.org/packages/extra/x86_64/btop/download/

cd /tmp
wget https://london.mirror.pkgbuild.com/extra/os/x86_64/btop-1.4.0-4-x86_64.pkg.tar.zst
mkdir btop
tar -C btop -xf btop-1.4.0-4-x86_64.pkg.tar.zst

Exploring the package

The usr folder for btop looks like this

lain@wired /tmp/btop
$ tree usr
usr
├── bin
│   └── btop
└── share
    ├── applications
    │   └── btop.desktop
    ├── btop
    │   ├── README.md
    │   └── themes
    │       ├── adapta.theme
    │       ├── adwaita.theme
    │       ├── ayu.theme
    │       ├── dracula.theme
    │       ├── dusklight.theme
    │       ├── elementarish.theme
    │       ├── everforest-dark-hard.theme
    │       ├── everforest-dark-medium.theme
    │       ├── flat-remix-light.theme
    │       ├── flat-remix.theme
    │       ├── greyscale.theme
    │       ├── gruvbox_dark.theme
    │       ├── gruvbox_dark_v2.theme
    │       ├── gruvbox_light.theme
    │       ├── gruvbox_material_dark.theme
    │       ├── horizon.theme
    │       ├── HotPurpleTrafficLight.theme
    │       ├── kyli0x.theme
    │       ├── matcha-dark-sea.theme
    │       ├── monokai.theme
    │       ├── night-owl.theme
    │       ├── nord.theme
    │       ├── onedark.theme
    │       ├── paper.theme
    │       ├── phoenix-night.theme
    │       ├── solarized_dark.theme
    │       ├── solarized_light.theme
    │       ├── tokyo-night.theme
    │       ├── tokyo-storm.theme
    │       ├── tomorrow-night.theme
    │       └── whiteout.theme
    ├── icons
    │   └── hicolor
    │       ├── 48x48
    │       │   └── apps
    │       │       └── btop.png
    │       └── scalable
    │           └── apps
    │               └── btop.svg
    └── man
        └── man1
            └── btop.1.gz

Binary

The binary is again located inside usr/bin.

lain@wired /tmp/btop
$ file usr/bin/btop
usr/bin/btop: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=112225f945ec5d5c28f69c1de22117f957a926f9, for GNU/Linux 4.4.0, stripped

We can run ldd to check which libraries it is linked to.

lain@wired /tmp/btop
$ ldd usr/bin/btop
	linux-vdso.so.1 (0x00007ffd9e7fd000)
	libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x000070b648c00000)
	libm.so.6 => /usr/lib/libm.so.6 (0x000070b648b11000)
	libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x000070b648ae3000)
	libc.so.6 => /usr/lib/libc.so.6 (0x000070b6488f2000)
	/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x000070b649026000)

Manpage

The manpage for btop is in the troff format. You can read the bundled manpage file using the man command.

man usr/share/man/man1/btop.1.gz

Desktop file and icons

You'll find a desktop file at usr/share/applications/btop.desktop which contains the desktop entry.

lain@wired /tmp/btop
$ cat usr/share/applications/btop.desktop
[Desktop Entry]
Type=Application
Version=1.0
Name=btop++
GenericName=System Monitor
GenericName[it]=Monitor di sistema
Comment=Resource monitor that shows usage and stats for processor, memory, disks, network and processes
Comment[it]=Monitoraggio delle risorse: mostra utilizzo e statistiche per CPU, dischi, rete e processi
Icon=btop
Exec=btop
Terminal=true
Categories=System;Monitor;ConsoleOnly;
Keywords=system;process;task

The specific icon mentioned here is then searched for inside /usr/share/icons. The relevant icons are bundled in package at usr/share/icons/hicolor/48x48/apps/btop.png and usr/share/icons/hicolor/scalable/apps/btop.svg.

Btop specific themes

Finally, btop also supports loading different theme definitions. The default set of bundled themes is placed inside a dedicated folder at usr/share/btop/themes.

Packaging on Arch Linux

The official wiki page (highly recommended to read) can be found here: Creating packages - Arch Wiki. My page gives a brief overview of the process.

Package build process

Packages on Arch Linux are built using the makepkg tool. It takes a PKGBUILD as input and follows it to output a package archive.

The PKGBUILD is a recipe which lists ingredients (sources) and process (build steps) to build a package.

makepkg looks at the sources and downloads them. It then compares hashes of the downloaded sources against the checksums provided in the PKGBUILD. Once the verification is complete, sources are extracted and built using the recipe provided. The said recipe must be executable in bash. Finally, upon a successful build, the build outputs are archived along with the metadata into a package. This package is what the package manager (pacman) later extracts in order to "install" the software.

PKGBUILD

Arch Linux provides a starting point with a bundled prototype for a PKGBUILD.

lain@wired ~
$ cat /usr/share/pacman/PKGBUILD.proto
# This is an example PKGBUILD file. Use this as a start to creating your own,
# and remove these comments. For more information, see 'man PKGBUILD'.
# NOTE: Please fill out the license field for your package! If it is unknown,
# then please put 'unknown'.

# Maintainer: Your Name <youremail@domain.com>
pkgname=NAME
pkgver=VERSION
pkgrel=1
epoch=
pkgdesc=""
arch=()
url=""
license=('GPL')
groups=()
depends=()
makedepends=()
checkdepends=()
optdepends=()
provides=()
conflicts=()
replaces=()
backup=()
options=()
install=
changelog=
source=("$pkgname-$pkgver.tar.gz"
        "$pkgname-$pkgver.patch")
noextract=()
sha256sums=()
validpgpkeys=()

prepare() {
	cd "$pkgname-$pkgver"
	patch -p1 -i "$srcdir/$pkgname-$pkgver.patch"
}

build() {
	cd "$pkgname-$pkgver"
	./configure --prefix=/usr
	make
}

check() {
	cd "$pkgname-$pkgver"
	make -k check
}

package() {
	cd "$pkgname-$pkgver"
	make DESTDIR="$pkgdir/" install
}

There are a lot of parameters, although it's not necessary to fill in all of them. You would fill in the file with the relevant values. Refer to an existing PKGBUILD (for example, maptool-bin) to see what values should go in.

PKGBUILD functions

prepare()

Steps to prepare sources for building go here. This normally includes patches and distribution-specific changes.

pkgver()

This function computes the package version. While the package version is usually defined in the pkgver variable outside, the version for VCS sources needs to be fetched via shell commands that go in this function.

build()

Here comes the actual build step. A very simple build() function can look like this:

build() {
    ./configure --prefix=/usr
    make
}

This is source specific; the packager needs to know which build system the source uses and what pre-build steps need to be ensured. For example, build dependencies must be fetched before building.

check()

The test suite invocation goes here. Classically, this could be as simple as:

check() {
    make check
}

However, modern test suites are more sophisticated and this function needs to be adapted accordingly.

package()

This is the final step, which is also compulsory for all packages to include. Here, the built files are copied into $pkgdir so that they end up in the final package archive.

Sample PKGBUILD

Let's build a simple PKGBUILD for a hello world application. The source for our application is going to be a GitHub repository, specifically memsharded/hello.

Filling up the PKGBUILD variables, it should look like this:

pkgname=hello-git
_pkgname=hello
pkgver=0.0.1
pkgrel=1
pkgdesc='A hello world application'
url='https://github.com/memsharded/hello'
source=("git+https://github.com/memsharded/hello")
arch=('x86_64')
license=('MIT')
makedepends=('cmake')
sha256sums=(SKIP)

We have skipped the checksum because, when we're using git as the source, git does its own checksumming and ensures integrity.

This application doesn't need any pre-build steps, so we can skip prepare(). The application has a git source, so we do need a pkgver() function to generate a version. In our case, we will use the short commit hash as the version.

pkgver() {
    cd "$srcdir/$_pkgname"
    git log --oneline | head -n 1 | cut -d' ' -f1
}

Let's move on to writing the build() function.

The source provides a CMakeLists.txt file, which is a clear indication that it's using CMake. So, let's put the relevant CMake build steps in here:

build() {
    cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr hello
    make
}

By now, we've built the binary from source. Finally, we must copy the binary over to $pkgdir so that it ends up in the package archive. This is done inside the package() function.

package() {
    install -Dm755 "$srcdir/hello/bin/greet" "$pkgdir/usr/bin/greet"
}

The final PKGBUILD should look like this:

pkgname=hello-git
_pkgname=hello
pkgver=a707ee9
pkgrel=1
pkgdesc='A hello world application'
url='https://github.com/memsharded/hello'
source=("git+https://github.com/memsharded/hello")
arch=('x86_64')
license=('MIT')
makedepends=('cmake')
sha256sums=(SKIP)

pkgver() {
    cd "$srcdir/$_pkgname"
    git log --oneline | head -n 1 | cut -d' ' -f1
}

build() {
    cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr hello
    make
}

package() {
    install -Dm755 "$srcdir/hello/bin/greet" "$pkgdir/usr/bin/greet"
}

Building from PKGBUILD

As mentioned before, the tool to build from PKGBUILD is makepkg.

Let's build from the PKGBUILD. Change into the directory which has the PKGBUILD and run:

makepkg -s

This will fetch the source, then build and package it. Sample output is shown below.

==> Making package: hello-git 0.0.1-1 (Sat 14 Dec 2024 04:01:21 PM IST)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
  -> Updating hello git repo...
==> Validating source files with sha256sums...
    hello ... Skipped
==> Extracting sources...
  -> Creating working copy of hello git repo...
Reset branch 'makepkg'
==> Starting pkgver()...
==> Updated version: hello-git a707ee9-1
==> Removing existing $pkgdir/ directory...
==> Starting build()...
CMake Warning (dev) at CMakeLists.txt:1 (PROJECT):
  cmake_minimum_required() should be called prior to this top-level project()
  call.  Please see the cmake-commands(7) manual for usage documentation of
  both commands.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Deprecation Warning at CMakeLists.txt:2 (cmake_minimum_required):
  Compatibility with CMake < 3.10 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value.  Or, use the <min>...<max> syntax
  to tell CMake that the project requires at least <min> but has been updated
  to work with policies introduced by <max> or earlier.


-- Configuring done (0.0s)
-- Generating done (0.0s)
-- Build files have been written to: /mnt/hd1/home/lain/.cache/aur/hello-git/src
[ 50%] Built target hello
[100%] Built target greet
==> Entering fakeroot environment...
==> Starting package()...
==> Tidying install...
  -> Removing libtool files...
  -> Purging unwanted files...
  -> Removing static library files...
  -> Stripping unneeded symbols from binaries and libraries...
  -> Compressing man and info pages...
==> Checking for packaging issues...
==> Creating package "hello-git"...
  -> Generating .PKGINFO file...
  -> Generating .BUILDINFO file...
  -> Generating .MTREE file...
  -> Compressing package...
==> Leaving fakeroot environment.
==> Finished making: hello-git a707ee9-1 (Sat 14 Dec 2024 04:01:23 PM IST)

Your working directory should now have a file named hello-git-a707ee9-1-x86_64.pkg.tar. This is the built package. Note that the package name contains the version, which in this case is the latest commit hash; hence it might be different when you build the package.

A summary of useful flags for makepkg:

  • -c, --clean: clean up leftover files after successful build
  • -e, --noextract: this will force makepkg to not extract source files or run prepare(). Useful when manually making changes in $srcdir to test.
  • -f, --force: rebuild the package even if it's already built
  • --skipchecksums: do not verify checksum of source files
  • -i, --install: install package after successful build
  • -s, --syncdeps: install missing build/run dependencies

Installing and running the built package

Installing the built package is very straightforward.

sudo pacman -U hello-git-a707ee9-1-x86_64.pkg.tar

After installation, we can run the application and query pacman (the package manager) for details.

lain@wired /tmp/build
$ greet
Hello World!
lain@wired /tmp/build
$ pacman -Qo /usr/bin/greet
/usr/bin/greet is owned by hello-git a707ee9-1

You can even ask pacman to uninstall the package.

lain@wired /tmp/build
$ sudo pacman -R hello-git
checking dependencies...

Package (1)  Old Version  Net Change

hello-git    a707ee9-1     -0.01 MiB

Total Removed Size:  0.01 MiB

:: Do you want to remove these packages? [Y/n] y
:: Processing package changes...
(1/1) removing hello-git                           [------------------------] 100%
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...

Extracting our built package

As a final exercise, we shall do what we did in the last page: extracting a package and observing its contents.

mkdir hello_ext
tar -C hello_ext -xf hello-git-a707ee9-1-x86_64.pkg.tar
cd hello_ext

Let's see the directory tree:

lain@wired /tmp/build/hello_ext
$ tree -a
.
├── .BUILDINFO
├── .MTREE
├── .PKGINFO
└── usr
    └── bin
        └── greet

We can see the three metadata files, along with the one binary that we built from source.

Since this package has far fewer files than mdbook, the contents of .MTREE are shorter too.

lain@wired /tmp/build/hello_ext
$ zcat .MTREE
#mtree
/set type=file uid=0 gid=0 mode=644
./.BUILDINFO time=1734184494.0 size=55438 sha256digest=bfc13d6c81cf88b6033753fc080fddbfb02fedd3047b0a325fdb88b84c21ddaa
./.PKGINFO time=1734184494.0 size=313 sha256digest=720c516ff2c14fa965a6b1ed893507622181ad87ba1a011b3ffa5d6e12d70517
./usr time=1734184494.0 mode=755 type=dir
./usr/bin time=1734184494.0 mode=755 type=dir
./usr/bin/greet time=1734184494.0 mode=755 size=15480 sha256digest=0348f619559c5913f255159583c733aaf0799cf27b8d6130e8ee41c11ba8191e

Further resources

Packaging the manual way

We just saw makepkg do its magic. But, can you do all that manually? What exactly is makepkg doing behind the scenes?

Come to think of it, you're already specifying the build steps. The extra steps performed by makepkg are:

  • generating metadata files
    • .MTREE
    • .PKGINFO
    • .BUILDINFO
  • creating an archive from all the files

Let's do this manually!

Our application binary

We'll use a very simple hello world program in C language.

#include <stdio.h>

int main() {
    printf("hello world\n");
    return 0;
}

Save it as, let's say, hello.c. Compile it.

gcc -O2 hello.c -o hello
chmod +x hello

This gives you a hello binary which prints hello world.

Create a directory pkg where we shall copy the files that will eventually go into our final package. Inside it, create a usr/bin directory where our binary shall reside.

mkdir pkg
cd pkg
mkdir -p usr/bin
cp ../hello ./usr/bin/hello

At this point, the directory tree for pkg should look like this:

.
└── usr
    └── bin
        └── hello

Writing .PKGINFO

This is roughly what .PKGINFO should look like. Save it to the file.

pkgname = hello-c
pkgbase = hello-c
xdata = pkgtype=pkg
pkgver = 0.1.1-1
pkgdesc = home cooked hello world in c
url = https://github.com/flyingcakes/hello-world-c
builddate = 1735307439
packager = flyingcakes
size = 14424
arch = x86_64
makedepend = gcc

  • pkgname and pkgbase can be set to your package name
  • pkgver is of the format <version>-<revision>
  • url could be anything
  • builddate is the unix timestamp when package was built
    • run date +%s to find present timestamp and copy paste that
  • to get the size (in bytes), simply run ls -l usr/bin/hello (or use the small helper sketched below)
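
If you'd rather not look up the last two values by hand, a tiny Python helper can print them. This is a hypothetical convenience, not part of any tooling; makepkg computes these fields for you in real builds. Run it from inside the pkg directory.

import os
import time

print(f"builddate = {int(time.time())}")             # current unix timestamp
print(f"size = {os.path.getsize('usr/bin/hello')}")  # size of the binary in bytes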

Package it all together!

In the pkg directory, run tree -a and ensure the hierarchy is the same as the following.

.
├── .PKGINFO
└── usr
    └── bin
        └── hello

If you've followed along correctly till here, use tar to make an archive out of the files.

tar -cjvf hello.tar $(ls -A)

This finally results in the hello.tar package, which can now be installed using pacman.

$ sudo pacman -U hello.tar
loading packages...
resolving dependencies...
looking for conflicting packages...

Package (1)  New Version  Net Change

hello-c      0.1.1-1        0.01 MiB

Total Installed Size:  0.01 MiB

:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring                     [------------------------] 100%
(1/1) checking package integrity                   [------------------------] 100%
(1/1) loading package files                        [------------------------] 100%
(1/1) checking for file conflicts                  [------------------------] 100%
(1/1) checking available disk space                [------------------------] 100%
:: Processing package changes...
(1/1) installing hello-c                           [------------------------] 100%
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...

This can now be run.

$ hello
hello world

You can verify the binary being called is indeed from the package we installed.

$ which hello
/usr/bin/hello

$ pacman -Qo /usr/bin/hello
/usr/bin/hello is owned by hello-c 0.1.1-1

Generating .BUILDINFO

Notice how we didn't have this file already. As per the Arch wiki, it is used only for reproducible builds, and pacman won't check this file during installation itself. That's why we could install our hello-c package without a .BUILDINFO file. But for the sake of completeness of this tutorial, let's see how it is generated.

First comes some metadata, which we can write down manually.

format = 2
pkgname = hello-c
pkgbase = hello-c
pkgver = 2
pkgarch = x86_64
pkgbuild_sha256sum = eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
packager = flyingcakes
builddate = 1735304913
builddir = /tmp/c/pkg
startdir = /tmp/c
buildtool = makepkg
buildtoolver = 7.0.0

pkgbuild_sha256sum should have been set to an actual value, but since we aren't using any PKGBUILD at all, I've set it to an obviously fake string.

Next comes buildenv. This is defined in /etc/makepkg.conf. You can get the values by grepping and then format-printing them.

eval "$(grep '^BUILDENV' /etc/makepkg.conf)"
printf "buildenv = %s\n" "${BUILDENV[@]}"

This should print the following.

buildenv = !distcc
buildenv = color
buildenv = !ccache
buildenv = check
buildenv = !sign

You can redirect it to your existing .BUILDINFO file.

printf "buildenv = %s\n" "${BUILDENV[@]}" >> .BUILDINFO

OPTIONS too are defined in /etc/makepkg.conf and can be extracted in a similar manner.

eval "$(grep '^OPTIONS' /etc/makepkg.conf)"
printf "options = %s\n" "${OPTIONS[@]}" >> .BUILDINFO

For the final thing - the list of installed packages - you can get that via pacman.

pkglist=($(pacman -Q | sed "s# #-#"))
printf "installed = %s\n" "${pkglist[@]}" >> .BUILDINFO

Generating .MTREE

bsdtar -czf .MTREE --format=mtree --options='!all,use-set,type,uid,gid,mode,time,size,md5,sha256,link' usr .PKGINFO .BUILDINFO

This will generate .MTREE file. You can verify contents using zcat.

$ zcat .MTREE
#mtree
/set type=file uid=1000 gid=1000 mode=644
./.BUILDINFO time=1735327948.61170402 size=45046 md5digest=85d3dc8d2df6cfc10a51e2e73d8b3ade sha256digest=6a5730f309470f8253a2e33d1d6284cc55bf64a1d621f1910177b1e87ea64e1c
./.PKGINFO time=1735318522.194493505 size=239 md5digest=d1d0ded3b9f3459f63714f4d0ec858f9 sha256digest=2a35e7b8b6a0c93d22f180666a20352a9d5a0953991a7199e4c5d545858cdc55
./usr time=1735317631.312675756 mode=755 type=dir
./usr/bin time=1735317634.369344816 mode=755 type=dir
./usr/bin/hello time=1735318539.567874863 mode=755 size=14424 md5digest=e33063b83acf1c5270197ba18a504209 sha256digest=67b23e0fb3a79c0a0fb9ec312c8885c51445509a311a34b8fc28eb38f7f6629b

The command I just used to generate the mtree file is from an older version of makepkg. Newer versions use a slightly heavier command; nonetheless, the result is the same.

You can now package it again. Delete hello.tar if it exists from the earlier step, then run the tar command again.

tar -cjvf hello.tar $(ls -A)

Packaging: The broader look

How does everything tie together?

It's the goal: giving users an easy way to pull in software. How you achieve that goal is a design preference. How you define "easy" is a use case preference.

But the goal is to bring in packages to a user's system in an efficient and quick manner.

Language package registries

Most of us are familiar with NPM or Pip. These are no different in philosophy from the Arch Linux case we just discussed. Practically, they differ in their target system: while Arch packages cater to installing software on a system running Arch Linux, NPM and Pip cater to installing language dependencies for NodeJS or Python projects respectively.

But the goal remains the same: bring in that package easy and quick.

The packaging concepts too remain the same. Arch packages use a shell-file-based recipe to define a package. NPM uses a JSON file and Pip uses a text file.

Windows and MacOS

While these operating systems don't ship with a CLI-based package manager out of the box, they can be loaded with third party package managers - Chocolatey and Homebrew respectively.

Chocolatey uses a nuspec file and a PowerShell install script to define its packages. Homebrew uses an easy-to-read "formula" file which defines the package.

Other Linux distributions

  • Solus uses a Yaml file to define its packages, later installed via its own package manager, eopkg.
  • Void Linux uses xbps-src to build packages, defined in simple shell scripts, similar to Arch Linux
  • ....every sane distribution has some or the other packaging system

Newer packaging systems

Flatpak

Claiming to be the "future of apps on Linux", Flatpak runs applications inside a soft sandbox, using ostree and bubblewrap. This makes packages distribution independent (package once, run everywhere).

By using ostree as its base, it lets each app specify its own dependencies and independent versions, while also ensuring deduplication to conserve disk space.

Snap

Nix

Static site URL shortener with Linky

This is a short documentation of the thought and functioning behind Linky, a simple URL shortener which I'm presently using on x.snehit.dev.

You can find the script at:

Background

For over 2 years now, I've been running Suri as my static site URL shortener. Recently, while updating the links, I noticed that Suri builds were failing. I had to re-clone the upstream repo for Suri and add my short links as JSON.

While it did work, I wondered why building a simple static site with 99% of the stuff predefined should be so cumbersome. I stopped giving thought to this, since my site was working again. But a few days back, Netlify sent me an email about their changed pricing tiers. Being too lazy to see the changes, I decided to write and host my own URL shortener, one which doesn't need a lengthy build / deploy process.

How does a static site URL shortener work?

First, understand how a server side HTTP redirect works. RFC 9110 defines HTTP semantics. As per this spec, an HTTP 302 response along with a Location header should cause a redirect.

But for static site hosting like GitHub Pages, generating such an HTTP response is not possible. So what do we do?

The solution is to use meta refresh to create an instant client side redirect.

In short, adding the following to your HTML page will cause the browser to redirect to https://snehit.dev.

<!doctype html><meta http-equiv="refresh" content="0; url=https://snehit.dev" />

Extending this to create a static site URL shortener

I'll use a links.json file to list out routes and redirects, exactly the same way Suri does it. So, for example, my links.json could look like this:

{
  "/": "https://snehit.dev",
  "tw": "https://x.com/flyingcakes85",
  "gh": "https://github.com/flyingcakes85"
}

Now, all I need to do is generate an HTML page for each route, containing the meta refresh code along with the corresponding URL. A simple Python script can achieve that.

import json
from pathlib import Path

links = json.loads(Path("links.json").read_text())  # route -> URL mapping
output_dir = "build"
for link in links:
    # each route becomes a directory containing an index.html redirect
    output_file = Path(output_dir + "/" + link + "/index.html")
    output_file.parent.mkdir(exist_ok=True, parents=True)
    output_file.write_text(f"<!doctype html><meta http-equiv=\"refresh\" content=\"0; url={links[link]}\" />")

With the JSON loaded into links, this loop will create an HTML file for each individual route specified.

I've wrapped the above loop in a neat function and taken command line arguments using argparse in order to make this script easy to use. Refer to the repo flyingcakes85/linky on GitHub to see the full code.

Now all we need to do is run this script with our complete links.json.

python linky.py links.json

I've also added -o/--output-dir flag to change the output folder from build to something else.

python linky.py links.json --output-dir public

Adding a 404 page

Anything in the static folder will be copied as-is into the build output directory. One nifty use case for this is to add a 404.html which will be served by GitHub Pages or Cloudflare Pages whenever needed. Just put the following in static/404.html

<p>Not found</p>

This can, of course, be as simple or as lengthy of a 404 page as you'd like.
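
For reference, here is a sketch of how such a static-file copy step might be implemented in Python; the folder names assume the defaults used above.

import shutil

# copy everything under static/ verbatim into the build output
shutil.copytree("static", "build", dirs_exist_ok=True)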

Hosting it

You can host this on GitHub Pages. However, my choice is Cloudflare Pages, because I find it easier to configure builds on the Cloudflare dashboard as compared to GitHub Actions.

Go to your Cloudflare dashboard and select your site. On the left bar, go to "Workers & Pages". Hit the "Create" button. Under "Pages" tab, connect your GitHub or GitLab repository which has linky.py and your links.json file.

For build settings, select "None" for the framework preset. The build command should be python linky.py links.json. Set the build output directory to build. Hit save and deploy, and voila, your link shortener is up and running!

Adding your custom URL

There is one last step: adding your domain. This assumes you already have a domain purchased; otherwise your base URL will be project-name.pages.dev. If you don't have a domain, you can skip this, but note that your links won't actually be short this way.

On the "Custom domains" tab in your new project, click "Set up a custom domain". Save it and Cloudflare should do the rest if your DNS is managed by Cloudflare itself. Otherwise you'll have to manually update the CNAME on your DNS.

Software Verification Notes

Notes for software verification.

Introduction to I&C software

Major Activities

  • Requirements Definition
    • What do we want to achieve?
    • define requirements (specifications)
  • Design
    • How to achieve?
    • define DSA
    • breakdown into components and their architecture
  • Implementation
    • convert design to code
  • Verification
    • Is our implementation correct?
    • wrt requirements & design

Arranging activities

e.g. Waterfall model

flowchart TB

A[Requirements
Definition] --> B[Design] --> C[Implementation] --> D[Verification]

Freeze an activity before going to the next step. Can be used for critical systems, since requirements are unlikely to change while developing.

Instrumentation and control systems

flowchart LR

A[Physical
Process] --"temperature
neutron flux"--> B[Sensors] --"voltage/
current"--> C[Controller
PLC] --"voltage/
current"--> D[Actuators] --> A

Program pseudocode

every 10 ms :
    input_signal <-- read from board
    regulating_signal := compute(input_signal)
    regulating_signal --> output to board
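
A minimal Python sketch of this loop; read_board(), compute() and write_board() are hypothetical stand-ins for the real I/O-board interface, and sleeping like this gives soft real time at best.

import time

def read_board():
    return 0.0  # stand-in: read the input signal from the board

def compute(input_signal):
    return input_signal * 0.5  # stand-in control law

def write_board(regulating_signal):
    pass  # stand-in: write the regulating signal to the board

PERIOD = 0.010  # 10 ms

while True:  # reactive: the loop never terminates
    start = time.monotonic()
    write_board(compute(read_board()))
    # sleep off the remainder of the 10 ms period
    time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))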

Features

  • reactive
    • never terminate
  • real time performance requirements
  • important to safety
    • failure not acceptable

Testing and failure

Ensuring program correctness

Conventional method: testing

How to do testing?

  • obtain test cases
    • inputs
    • outputs
  • apply to implementation
  • record outputs
  • check if o/p match expected o/p

Limitations with testing

  • can't do exhaustive search
    • e.g. for 4 inputs, each 16-bit, the number of testcases is \(2^{4\times16}\), i.e. \(2^{64}\) (see the back-of-the-envelope calculation after this list)
    • real programs will have much more and broader inputs
  • \(\therefore\) infeasible in practice
  • testing shows presence of bugs, but not their absence
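
The back-of-the-envelope calculation referenced in the list above, assuming an optimistic billion test executions per second:

tests = 2 ** (4 * 16)  # 4 inputs x 16 bits each
rate = 10 ** 9         # assumed: 1e9 tests per second
years = tests / rate / (60 * 60 * 24 * 365)
print(f"{tests} cases take about {years:.0f} years")  # roughly 585 years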

For I&C systems however, we are also interested in showing absence of bugs.

Past examples of failure in I&C systems

  • Ariane-5: failed launch
    • 64bit signed integer overflow
    • report said: Software should be assumed to be faulty until current best practice methods can prove it to be correct.
  • Therac-25: delivered about 100 times the intended radiation dose to patients

Rust language content dump for talks

Inspirations (that nobody cares about)

  • SML, OCaml: algebraic data types, pattern matching, type inference, semicolon statement separation
  • C++: references, RAII, smart pointers, move semantics, monomorphization, memory model
  • ML Kit, Cyclone: region based memory management
  • Haskell (GHC): typeclasses, type families
  • Newsqueak, Alef, Limbo: channels, concurrency
  • Erlang: message passing, thread failure, linked thread failure, lightweight concurrency
  • Swift: optional bindings
  • Scheme: hygienic macros
  • C#: attributes
  • Ruby: closure syntax, block syntax
  • NIL, Hermes: typestate
  • Unicode Annex #31: identifier and pattern syntax

RAII example

std::mutex m;
 
void bad() 
{
    m.lock();                    // acquire the mutex
    f();                         // if f() throws an exception, the mutex is never released
    if(!everything_ok()) return; // early return, the mutex is never released
    m.unlock();                  // if bad() reaches this statement, the mutex is released
}
 
void good()
{
    std::lock_guard<std::mutex> lk(m); // RAII class: mutex acquisition is initialization
    f();                               // if f() throws an exception, the mutex is released
    if(!everything_ok()) return;       // early return, the mutex is released
}

Immutability

from doc

Variables are immutable by default. The compiler warns if a variable declared mut is never mutated, and errors if an immutable variable is reassigned.

fn main() {
    let x = 5;
    println!("The value of x is: {x}");
    x = 6; // error
    println!("The value of x is: {x}");
}

Working code:

fn main() {
    let mut x = 5; // <-- added mut
    println!("The value of x is: {x}");
    x = 6;
    println!("The value of x is: {x}");
}

Shadowing

refer doc

Ownership

Passing ownership from one function to another


#![allow(unused)]
fn main() {
let s1 = String::from("hello");
let s2 = s1; // s1 moved; can't be used again :(
let s3 = s2.clone(); // s2 copied; both can be used independently
}

  • Move
  • Copy

(Move and Copy are two of the many traits)

Passing ownership

The String type can take arbitrary time to copy. Hence the default action is to pass its ownership (i.e. Move it). On the other hand, integers are easy to copy and take deterministic time. Hence they are copied by default (i.e. Copy).

fn main() {
    let s = String::from("hello");  // s comes into scope

    takes_ownership(s);             // s's value moves into the function...
                                    // ... and so is no longer valid here

    let x = 5;                      // x comes into scope

    makes_copy(x);                  // x would move into the function,
                                    // but i32 is Copy, so it's okay to still
                                    // use x afterward

} // Here, x goes out of scope, then s. But because s's value was moved, nothing
  // special happens.

fn takes_ownership(some_string: String) { // some_string comes into scope
    println!("{}", some_string);
} // Here, some_string goes out of scope and `drop` is called. The backing
  // memory is freed.

fn makes_copy(some_integer: i32) { // some_integer comes into scope
    println!("{}", some_integer);
} // Here, some_integer goes out of scope. Nothing special happens.

Function can return ownership

If a function returns a value, the calling function can capture the returned data and gets ownership of it.

References

refer doc

Use & to pass a reference

fn main() {
    let s1 = String::from("hello");

    let len = calculate_length(&s1);

    println!("The length of '{}' is {}.", s1, len);
}

fn calculate_length(s: &String) -> usize {
    s.len()
}

But using & doesn't mean you can edit the variable

fn main() {
    let s = String::from("hello");

    change(&s);
}

fn change(some_string: &String) {
    some_string.push_str(", world"); // <-- Error
}

Mutable References to other functions

refer doc

fn main() {
    let mut s = String::from("hello");

    change(&mut s);
}

fn change(some_string: &mut String) {
    some_string.push_str(", world");
}

Mutable References within scope


#![allow(unused)]
fn main() {
    let mut s = String::from("hello");

    let r1 = &s; // no problem
    let r2 = &s; // no problem
    let r3 = &mut s; // BIG PROBLEM

    println!("{}, {}, and {}", r1, r2, r3);

}

Dangling references

A dangling reference is a situation where a pointer points to data which has already been dropped.

fn main() {
    let reference_to_nothing = dangle();
}

fn dangle() -> &String {
    let s = String::from("hello");

    &s
} // After the function exits, the string data is dropped,
	// so the returned reference would point to freed memory

This code won't compile.

Error handling philosophy

There is only ONE error which halts / crashes your program: panic!(). No other error shall arbitrarily halt your program. If a program panics, run it with the env var RUST_BACKTRACE=1 to print a backtrace from where the error occurred.

panic!() is called only when the error is NOT recoverable.

  • trying to open a non existent file without telling Rust what to do if file doesn't exist
  • trying to send network packet when there is no network connection and not telling Rust what to do when no network

Most panics occur because an external requirement is not satisfied (like the existence of data on the filesystem, network connectivity, etc.).

But the program will still NOT panic arbitrarily. If you access a file that doesn't exist, Rust won't crash the program. In the code itself, YOU, the programmer, have to explicitly define what should happen if the file doesn't exist. Handle the error or crash the program.

This approach to error handling makes the code verbose. There is no default behaviour; there is only explicit behaviour which YOU have defined. This verbosity comes with a benefit: if your program is crashing, it's because you ASKED it to crash. The program doesn't crash because "something went wrong".

Result type

Refer Recoverable Errors with Result

Result


#![allow(unused)]
fn main() {
enum Result<T, E> {
    Ok(T),
    Err(E),
}
}

Functions don't return the value you expect directly; they return a Result.

use std::fs::File;

fn main() {
    let file_handler_result = File::open("hello.txt");
		// file_handler_result is of type Result<std::fs::File, std::io::Error>
}

Now YOU decide what to do with this result.

Handling it by panicking

use std::fs::File;

fn main() {
    let file_handler_result = File::open("hello.txt");

    let file_handler = match file_handler_result {
        Ok(file) => file,
        Err(error) => panic!("Problem opening the file: {:?}", error),
    };
}

Similar to

use std::fs::File;

fn main() {
    let greeting_file = File::open("hello.txt").unwrap();
}

Or panic with error message

use std::fs::File;

fn main() {
    let greeting_file = File::open("hello.txt")
        .expect("hello.txt should be included in this project");
}

In all these cases, it's YOU who asked the program to crash. Of course, this isn't any better than C++ yet.

Matching on various errors

use std::fs::File;
use std::io::ErrorKind;

fn main() {
    let greeting_file_result = File::open("hello.txt");

    let greeting_file = match greeting_file_result {
        Ok(file) => file,
        Err(error) => match error.kind() {
            ErrorKind::NotFound => match File::create("hello.txt") {
                Ok(fc) => fc,
                Err(e) => panic!("Problem creating the file: {:?}", e),
            },
            other_error => {
                panic!("Problem opening the file: {:?}", other_error);
            }
        },
    };
}

Same thing with closures

use std::fs::File;
use std::io::ErrorKind;

fn main() {
    let greeting_file = File::open("hello.txt").unwrap_or_else(|error| {
        if error.kind() == ErrorKind::NotFound {
            File::create("hello.txt").unwrap_or_else(|error| {
                panic!("Problem creating the file: {:?}", error);
            })
        } else {
            panic!("Problem opening the file: {:?}", error);
        }
    });
}
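Handling doesn't have to mean panicking either. A minimal sketch (with a made-up read_greeting helper) that propagates the error to the caller using the ? operator, leaving the caller to decide what to do:

use std::fs::File;
use std::io::{self, Read};

// Returns the file contents, or the io::Error if opening/reading fails.
fn read_greeting() -> Result<String, io::Error> {
    let mut contents = String::new();
    File::open("hello.txt")?.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() {
    match read_greeting() {
        Ok(text) => println!("{}", text),
        Err(e) => eprintln!("could not read greeting: {}", e),
    }
}

The ? operator returns the Err value early from the function, so the error bubbles up instead of crashing the program.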

Smart pointer - Box

fn main() {
    let b = Box::new(5);
    println!("b = {}", b);
}

When b goes out of scope, the value stored on the heap is also dropped automatically.

Further topics

  • why making a linked list is a pain
    • the Cons type (see the sketch below)
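A classic example is the cons list from the Rust book: the type is recursive, so without Box its size would be infinite; Box gives it a known pointer size. A minimal sketch:

enum List {
    Cons(i32, Box<List>),
    Nil,
}

use List::{Cons, Nil};

fn main() {
    // 1 -> 2 -> 3 -> Nil; each node lives on the heap
    let _list = Cons(1, Box::new(Cons(2, Box::new(Cons(3, Box::new(Nil))))));
}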

Traits

Rust's alternative to object-oriented inheritance: instead of inheriting behaviour, types implement traits that define shared behaviour.

Traits can be derived

  • Debug for detailed printing of a variable
  • PartialEq for comparing values with == or !=
  • Clone for explicitly duplicating a value
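A quick sketch of deriving all three on a toy struct (Point is just an illustrative name):

#[derive(Debug, PartialEq, Clone)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p1 = Point { x: 1, y: 2 };
    let p2 = p1.clone(); // Clone: explicit copy
    println!("{:?}", p1); // Debug: prints Point { x: 1, y: 2 }
    assert!(p1 == p2); // PartialEq: == works
}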

Season of KDE 2022 Blogs

These are my blogs from the time I worked on packaging Flatpaks for KDE applications. This was done during Season of KDE, 2022.

Beginning with Season of KDE 2022

First Try (ft. failure)

I usually learn something between semesters when I have holidays. During September - October 2021, I tried learning some Qt and looking around the codebase of KDE apps. But something just didn't work out. I suspect my learning style wasn't right.

Transitioning to 2022

November - December was very busy, with college asking us to appear in person. I had practicals and exams.

Starting Christmas, I was supposed to be free for the holidays and hopefully learn some Qt. Then something unexpected happened. A university society I had joined wanted to push out an update to one of their apps. I like Flutter, but the way this team worked was simply not my style. I managed to get it done by 8th Jan, and in a series of crisp conversations made it clear I wouldn't be able to help the society further.

During the holidays, the SoK announcement mail arrived in my inbox. Most of the ideas were above my knowledge paygrade, but I almost jumped to the ceiling when I saw the word "packaging". No, I'd never worked with Flatpak before, but I have a basic understanding of packaging, and it is, in fact, one of the things I take a lot of interest in.

Flashbacks

While my first touch with Linux was in 2014, I could only use it on my own machine starting 2016. One of the most striking features was that you could install any software (subject to availability) along with its dependencies with a single command.

Much later, in 2019, I finally decided to try and understand how packaging on Linux distros works, because I wanted to package applications for the distro I was using at the time. I failed though.

In March 2020, I switched to Endeavour OS and that's when the picture got a lot clearer.

PKGBUILDS ft. success (yay!)

These are build scripts to package applications for pacman, the package manager for Arch Linux. It's stupidly simple to understand, and easy to work with and test.

Out of interest, I decided to publish a Python application to the AUR. With some help from a user on the Endeavour OS forum, I was able to get it right and publish it.

I took on some more trivial packages later on. I published my own application, crabfetch, too. I adopted two packages, MapTools and BreakTimer; these two are the ones that require regular updates.

Thanks to Endeavour OS...

Back when I signed up on the Endeavour OS forum in April 2020, I didn't expect I'd one day join the team. To date I haven't taken up any task that requires significant effort from me, but I have taken up multiple tiny tasks here and there. I maintain the Bspwm and Openbox community editions. Once in a blue moon I write a tutorial (two published so far; one in draft). I do some testing before ISO releases.

Frankly speaking, these might be trivial tasks, but working with a group of experienced people was certainly a head start for an 18-year-old. There's much to be thankful for. I still get friends asking me "how to get started with open source" and similar questions. I realize Bryan served me the opportunity on a silver platter.

On the KDE Flatpak Matrix channel

The KDE Flatpak channel on Matrix is exactly to my liking. My submissions usually got reviews within a reasonable time. I packaged some applications before the contribution period officially started. With some help from Aleix Pol and Albert Cid, I was even able to send some patches to KDiskFree, which was my first contribution to a Qt/C++ application. (thanks!)

My mentor is Timothée Ravier. I had a quick chat with him this evening (24th Jan). He briefed me about what all I will be working on during the next two and a half months. He's a sweet person to talk with. (that's probably the case with everyone at KDE)

Hoping to learn and contribute over the coming weeks. Big thanks to KDE for giving me the chance to work with the team as a part of Season of KDE 2022!

Link to project on ideas page : https://community.kde.org/SoK/Ideas/2022#KDE_Apps_packaging_as_Flatpak_for_Flathub

Projects announcement: https://dot.kde.org/2022/01/26/season-kde-kicks

Mass packaging and mass learning

Hey!

This is my second post for SoK 2022. It's going to be a lot more technical than my last one. Let's dive into all that we did in the past few days.

Learning packaging

As I had mentioned in my last post, I already knew the basics of packaging. Flatpak is certainly new for me though. I have used Flatpaks, but never published them.

It was easy to learn packaging and writing manifests. I submitted a couple of easy applications to Flathub before the contribution period started, so as to get a good idea of what I'd be doing during the coming weeks.

Challenges faced

Incomplete Appdata info

Many upstream packages did not have a complete appdata.xml file. Most notably, the OARS data was missing. However, I did not get to know about it until I submitted my first package and it failed the CI test.

Logs made it clear the appdata was at fault. Poking around, I fixed the appdata, but I needed a quick way to validate appdata for future packages, so I made a shell alias to do the job for me.

alias ad="flatpak run --env=G_DEBUG=fatal-criticals org.freedesktop.appstream-glib validate \$(find . -name \"*appdata.xml\")"

You need to first cd into your build directory and then simply run ad. It will find all appdata files and validate them.

Using right sources

My first three submissions used a git source rather than release tarballs. While using a git source doesn't pose major issues, release tarballs are easier to work with.

Using right permissions

Back when I started, I did not know which permissions to use for different packages. All it required was reading the documentation on the Flatpak website. Running an app with missing permissions usually printed errors in the console, which could be used to verify permissions. Nearly every GUI application requires --socket=fallback-x11, --socket=ipc and --socket=wayland, and applications using OpenGL need --device=dri. If the application requires network access, add --share=network.

For filesystem access, you can use --filesystem=host, but most of the time you don't need such a broad permission. You have options like --filesystem=home, --filesystem=/path/to/dir and --filesystem=xdg-name, where name is one of the XDG directories. These can be used to limit filesystem access and stay closer to the idea of sandboxing applications.

Other minor issues

There were a couple of other minor issues with my initial few manifests. Reviewers suggested changes, and I gradually understood and avoided those mistakes.

Portal exists

So, I had zero idea about Portals, which allow a Flatpak application to access files on the user's filesystem without requiring explicit permissions in the manifest. I only got to know about them during reviews.

Submissions list

I'm listing only the accepted submissions here.

Submissions to Flathub

  1. KFind
  2. KRuler
  3. Cantor
  4. KDiff3
  5. KColorChooser
  6. KAlgebra
  7. KPhotoAlbum
  8. Falkon
  9. KGeoTag

Upstream code patches

Thanks to help from Albert Cid, I was able to push some minor updates to KDF.

KDF has an option to open a disk with a custom command from the user. By default, this opens the disk in the Dolphin file manager. However, this fails when Dolphin is not installed on the system. The issue arose because packaging KDF for Flathub required including Dolphin in the manifest, which was not ideal. As a result, KDF was put on hold, and I was asked to add a feature that would open drives in the system default file manager. Thanks to Aleix Pol, I already had an idea about the changes to make in the source code. My first pull request was closed because I removed the feature for using a custom file manager command.

In my second pull request, I added the feature to open disks in the system default file manager and retained the option to specify a custom file manager command. Users can easily switch between them in settings. Albert suggested I send a patch to disable the option to specify a custom command, as custom commands aren't supported in the Flatpak environment.

Updates pushed

  1. Falkon update to 3.2 : https://github.com/flathub/org.kde.falkon/pull/2
  2. Exiv2 update for Koko : https://github.com/flathub/org.kde.koko/pull/2
  3. Exiv2 update for Kdenlive : https://github.com/flathub/org.kde.kdenlive/pull/165
  4. Exiv2 update for Krita : https://github.com/flathub/org.kde.krita/pull/51
  5. Exiv2 update for GwenView : https://github.com/flathub/org.kde.gwenview/pull/22
  6. Exiv2 update for KPhotoAlbum : https://github.com/flathub/org.kde.kphotoalbum/pull/1
  7. Exiv2 update to Flatpak master repo : https://invent.kde.org/packaging/flatpak-kde-applications/-/merge_requests/74
  8. Eigen update to Flatpak master repo: https://invent.kde.org/packaging/flatpak-kde-applications/-/merge_requests/72
  9. Boost update to Flatpak master repo : https://invent.kde.org/packaging/flatpak-kde-applications/-/merge_requests/71

Upstream appdata patches

  1. KFloppy : https://invent.kde.org/utilities/kfloppy/-/merge_requests/1
  2. KRuler : https://invent.kde.org/graphics/kruler/-/merge_requests/5
  3. Cantor : https://invent.kde.org/education/cantor/-/merge_requests/37
  4. KFind : https://invent.kde.org/utilities/kfind/-/merge_requests/8
  5. KDiff3 : https://invent.kde.org/sdk/kdiff3/-/merge_requests/37
  6. KAlgebra : https://invent.kde.org/education/kalgebra/-/merge_requests/22
  7. Akregator : https://invent.kde.org/pim/akregator/-/merge_requests/22
  8. KPhotoAlbum : https://invent.kde.org/graphics/kphotoalbum/-/merge_requests/18
  9. KGeoTag : https://invent.kde.org/graphics/kgeotag/-/merge_requests/13
  10. KDF : https://invent.kde.org/utilities/kdf/-/merge_requests/3
  11. Dolphin : https://invent.kde.org/system/dolphin/-/merge_requests/325
  12. Falkon : https://invent.kde.org/network/falkon/-/merge_requests/27

Thanks to the KDE team for being patient with reviews and helping me in the roadblocks I faced!

Completing a milestone

Hey!

This is my third post for SoK 2022. Let's go through what I've done since my last post, and what I plan to do next.

Packaging

I had already submitted a good number of packages to Flathub early on, because I had time, and didn't want my schedule to become too packed all of a sudden later on.

One month into Season of KDE, almost every high-priority and medium-priority package has been submitted. Most have been accepted to Flathub, while some are awaiting review. I'll list the submissions accepted since my last post.

New packages on Flathub

  1. Ikona
  2. KUIViewer
  3. KRDC

Updates to existing packages

  1. Update Akregator to 21.12.2 : https://github.com/flathub/org.kde.akregator/pull/1
  2. Minor updates to KRename : https://github.com/flathub/org.kde.krename/pull/1

Upstream appdata patches

  1. Skanlite : https://invent.kde.org/graphics/skanlite/-/merge_requests/29
  2. Heaptrack : https://invent.kde.org/sdk/heaptrack/-/merge_requests/10
  3. KWave : https://invent.kde.org/multimedia/kwave/-/merge_requests/8
  4. Ikona : https://invent.kde.org/sdk/ikona/-/merge_requests/8

Pending submissions

There are 5 packages currently under review.

  • Skanlite seems to be in limbo for now, since I don't have a scanner to test it. Arranging for one is possible, but it won't be a long term solution if I am to maintain the package.
  • Zanshin has issues with the migrator, which blocks starting the application. I've put it on the back-burner for the time being.
  • Nota is mostly done. It uses a custom implementation for the file picker, which means it cannot use portals. We could have it as a beta package for now. I might look into patching the source code to use the Qt file picker so that Nota can utilize portals. This again is a task for later.
  • Kwave needs a minor fix to make one of its dependencies play well with aarch64. Otherwise, it is done.
  • Heaptrack needs to be updated with extra permissions. I'll get this done soon.

Rest of the packages

  • Kile isn't detecting Texlive at runtime, which has become a mystery movie by now. I'm hopeful I'll fix it pretty soon.
  • KAlarm relies on KAuth for certain features, which cannot run inside the Flatpak environment. While I could come up with the relevant patches to the CMake files so as to not build KAuth, the application itself needs some patching to disable those features when running inside the Flatpak environment.
  • K3b has the same issues as KAlarm. Again, I have made the patches for the build, but I'm yet to patch the source code to disable those features.

I plan to work on KAlarm and K3b source code during the second week of March if we decide to make them a priority.

With the majority of the packaging work done, I should thank all the reviewers who vetted my submissions to Flathub, especially Hubert Figuière, who has been helpful and patient even when I make stupid mistakes :)

Plans for the next milestone

As I mentioned at the beginning, packaging work is almost sorted out, barring a couple of apps here and there. This leaves me with about a month and a half to work on the other things I had planned.

I am playing around with Flatpak External Data Checker, which can bring about some automation in the update process.

I'm still in the "exploration phase" for this tool. While there are multiple ways I could use it for KDE Flatpak manifests, I want to take up the one that best suits the release process KDE has.

Taking my leave for this post. I hope to publish more updates and improvements in my next one!

FEDC and Update Automation

Hello!

This is my fourth status update for Season of KDE 2022.

In my last post I described how packaging was almost done and that I was looking to automate updates.

Finalizing the checker data

There are multiple backends that can be used with Flatpak External Data Checker. I initially planned to use the HTML checker, for it seemed logical at the time. But by the time I finalized things, I was already gravitating towards Anitya Release Monitoring. Since Anitya already has version tracking for most packages, it is a bit easier than the HTML checker. With Anitya, all I need to know is the project id, which is just a simple search on their website.

The packages that don't exist on Anitya are quite easy to submit. A Fedora account is all it takes (you can use any OpenID). For example, I added Falkon on Anitya.

Roadblocks

There are certain corner cases where FEDC doesn't work as expected. GPGME is the first example. The download server for GPGME incorrectly sets the Content-Type to text/*, which FEDC discards, as the response could be an error page. The right way forward is to get the GPGME developers in the loop and inform them of this issue.

Another package, SQLite, uses the current year in its download URL, making the URL difficult to predict via a script.

Experimenting with GitHub Actions

I made a test repository to demonstrate FEDC working via a GitHub action. We were planning to get an access token added to KDE repos on the Flathub GitHub organization so that we could utilize our GitHub Action. Then one of my submissions had erroneous checker data. It went unnoticed until the Flathub bot made an unexpected pull request on a repo. That made us realise that we don't actually need to add an access token: Flathub already runs the updater and makes pull requests automatically.

Happy accident!

So, all I am required to do is to add checker data to as many manifests as possible. The update for 21.12.3 was done manually, but the updates next month should be mostly automated!

Links to all my pull requests are at end of this post.

Next steps

I plan to add checker data to all the remaining applications (~80) in the next 2 to 3 days. In parallel, I'm also looking into using FEDC on KDE Invent to automatically update non-KDE dependencies in the master manifest repo.

Adding checker data

Updates for 21.12.3

I will see you in next post with more updates!

Adding CI Pipelines on GitLab

Hey there!

This is my fifth status update for Season of KDE 2022.

This time, I have updates on the automation side of things. I got a preliminary version of the required GitLab pipelines working.

Pipeline for Updating Manifests

I got my hands dirty with Docker and published a Flatpak External Data Checker image on Quay. I then used the image in my GitLab repositories. This lets me run "actions" on GitLab, similar to what we have on GitHub. Since we're not relying on the official FEDC image, I can control what goes into the image and add extra dependencies that are required (like curl).

Currently, the script is in bash. I plan to port it to Python, as it will be much easier to maintain and work with.

The script loops through all manifests in the repository and runs FEDC on each one while capturing FEDC output to detect which packages have been updated. It then does some cleanup and sorting to get the final list of updates. With all the changes in place, it commits and creates a merge request on the repository. To avoid creating multiple merge requests for the same set of updates, the script checks whether a matching merge request already exists: it adds a short hash to the merge request title, which lets it cross-check the set of updates and decide whether to create a new merge request or not.

The repository can be found here -> https://gitlab.com/flyingcakes/kde-flathub-master. The actual update script is in the updater.sh file in this repo.

Pipeline for Application CI

The goal here is to do a Flatpak build for KDE applications whenever they receive a new commit or merge request. We currently have similar CI pipelines running, but they are only for FreeBSD and openSUSE. I plan to add a third job there, which will build the application inside the Flatpak environment and provide a way to install and test those builds on a local machine.

The inspiration for this has been taken from similar pipelines that run on GNOME's GitLab. In essence, on every run, we fetch the application's Flatpak manifest, replace its source link with the latest commit and try a build. If the build succeeds, we upload its .flatpakref file as an artefact.

My implementation for this is still in an alpha phase and will need some more work to become viable for regular use on KDE applications.

The testing repository for this can be found here -> https://gitlab.com/flyingcakes/kdiff-v2/

Follow up on Checker Data Additions on GitHub

In my last post, I described how I'm adding checker data to manifests hosted on GitHub. I have added the data to most applications. The bigger ones like Krita and Kdenlive are still missing data. This should be fixed in due time.

We noticed some issues, which seem to be related to FEDC itself: it sometimes creates repeated pull requests for the same package.

Other issues were caused because I had inadvertently provided wrong links in checker data.

None of the issues caused any inconvenience though. The maintainers have been thoroughly verifying pull requests to ensure nothing unexpected goes into the actual application. I fixed those minor issues as and when we got to know about them.

Adding checker data

Other minor fixes

Wrapping Up and Beyond the Event

Hello!

This is my final status update for Season of KDE 2022. There's news on the Flatpak CI builds and my plans for after this event.

Flatpak CI Builds

With help and suggestions from my mentor, Timothée Ravier, I got the CI script finalized from our side. I created a pull request on the CI repository.

I initially didn't know that the images on KDE infra are not ephemeral. As a result, I spent a great deal of time slimming down the Docker image I was using. I started off with an Arch Linux image, which I later switched to Fedora. The script itself went through many changes before it was submitted for review. Initially, it had no flexibility with regards to the location of the manifest and could only build apps requiring the latest KDE platform. It was updated over and over until we were satisfied.

After I submitted the script for review, Ben Cooksley gave a lot of useful suggestions. The script is now much more flexible with regards to manifest location. It can dynamically decide which platform and SDK to use.

The pull request hasn't been merged yet, but I will get it done in due time.

FEDC on KDE Invent

I also planned to have an automatic updater on our master Flatpak repository. While the overall idea of how this is to be accomplished has been clear from the very beginning, the actual implementation has gone through many changes. For now, it is a shell script which I wanted to move to Python. I actually did rewrite it in Python - and I still wasn't satisfied.

During the review of my pull request for Flatpak CI script, Ben proposed that we have each manifest in individual application repositories. A side effect is that the script will need to be modified a little bit.

I've kept the script on hold for now, although the basic bits are set up. I'll just need to plug in the final stuff once we decide where the manifests are going to be placed.

Beyond SoK

Season of KDE has been a great experience for me. I got to know some very cool people. I got to learn cool things. I did cool things. I've been close to open source and SoK was a great booster to my experience.

I'll be taking a break for ~3 weeks to get done with my exams. When I return, I'll fast-track the CI stuff that is currently in draft. I'm still available on KDE Matrix and email.

Bye bye!

Twentieth Winter and French Konnektion

Ohayo gozaimasu!

This is going to be a slightly informal post about my SoK experience. If you're looking for my status updates, the sixth and final one was posted here.

My French KonneKt

Back during 2013 - 2015, I studied French at school. That was the first touch with this language for most students; as a result, we did not have any serious texts in our coursebook. All the texts were about a French person/couple/family going out to the beach, a monument, a picnic or something like that. For three years I studied that.

The net learning that a young Snehit had from those years was that the French are really cool people! Had I continued studying French, I would have been exposed to more serious texts, but I simply had to drop the subject when it got more focused on irregular verbs.

My Twentieth Winter

So what happened during the twentieth winter of my life? Of course, I caught a cold after Christmas, which meant that my laptop and PC could finally get some rest. It also meant that I would be late to join the KDE folks.

On 30th December 2021, I joined the KDE Flatpak Matrix channel, only to see that some potential contributors for the project had already introduced themselves. My health was in no mood to get to work, so I went back to sleep.

On 9th January 2022, I finally decided I was back at my usual high. I quickly checked the groups and, oh my cat! Everyone had already gotten to work. I scrolled up the messages and found that, three days prior, a cool-sounding person had tagged four people and given them instructions on how they could get started with the work. I felt I was far behind, so I decided I could just help out the other contributors if they faced roadblocks. That was the tiniest sweet thing a cake could do. And I do think I helped out a fine enthusiastic person on the channel.

French KonneKt Pt. 2

I was casually reading about the cool person I mentioned earlier, Timothée Ravier. YouTube seemed to have some videos by this person and, wait what, he was speaking French???

I really needed to work with this cool person. I mean I wasn't expecting to virtually visit Le Louvre with him, but at least I could flex that I can pronounce the French 'r'!

So I speed-ran learning how to package as Flatpak. As I have mentioned many times already, I have experience with packaging, so it was rather smooth for me.

Timothée has been very helpful during our chats. He patiently listens to everything I say while I go haywire telling him I did this, I broke that, I'll do this, I won't do that. I might just switch to an AZERTY keyboard, because I type his name too often these days and have to copy-paste that é every time.

Getting to Work

Since I did not have classes in the first 2-3 weeks of SoK, I speed-ran my project (typical of me). A good number of applications were submitted to Flathub.

Next I tried reinventing the wheel, because I have a history of programming in Rust. As usual, I fell back to using a Python script. Flatpak External Data Checker got the work done, even though it did produce unexpected output at times.

Over on the CI side, I wrote and rewrote^7 the build script. It's not yet perfect, but I'll get to it very soon.

One potentially good thing to happen to me was that I started using Twitter. Now I'm more in sync with the abbreviations people use these days, thanks to the character limit. I hope they bring out the edit feature soon and make the character limit 100x longer so that long-worded people like me feel at home there. But first, I'm gonna petition to rename KDE Connect to KDE KonneKt.

What I Learnt

  • Packaging
  • Not running rm -rf out of context (I once deleted my build directory and had to rebuild many KDE apps again)
  • Being on time for online meetings (others' time is important too)
  • Debating why JSON is better than YAML

Conclusions

French people are cool. So is the whole of the KDE Community.