Systemd is a collection of programs that aim to unify the service configuration and behavior on *most* modern Linux distributions.
All of the distributions we've used up until now come with systemd, and we've been managing most of our servers and services via `systemctl`, the standard command line interface to systemd.
It's worth pointing out that systemd is not just an additional piece of software that is added to your computer.
You should see it as a sort of *glue* that ties the system together, as it's responsible for launching and monitoring all the services you run on your server.
### Some history
As with most things Linux there are multiple alternatives to systemd and, believe it or not, the introduction of systemd to Debian (around 2015) was a controversial moment.
A lot of online debates were had about the pros and cons, and Debian was even [forked](https://www.devuan.org/) to remove systemd altogether.
> Devuan GNU+Linux is a fork of Debian without systemd that allows users to reclaim control over their system by avoiding unnecessary entanglements and ensuring Init Freedom.
You can be for or against systemd but the current reality is that it *is* the most widely used `init` system around today.
This can, and probably will, change in the future but for now the world is run by systemd.
## The basics
During the numerous hours you've spent using `htop` you have probably noticed the first process is often `/lib/systemd/systemd --system` on Debian machines.
On Raspberry Pis that first process is most likely `/sbin/init`, but a closer look at this program shows the following.
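(A minimal check; the exact output will differ per machine, but on a systemd-based install `/sbin/init` is just a symbolic link to systemd.)

```
ls -l /sbin/init
# lrwxrwxrwx 1 root root 20 ... /sbin/init -> /lib/systemd/systemd
```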
Every running Linux computer must have a **first** process.
But where does this first process come from?
Below you can see a nice graph of the **boot sequence** of a standard Linux machine (taken from the [Debian system administrator handbook](https://debian-handbook.info/browse/stable/unix-services.html#sect.system-boot)).
By default the Linux kernel will run the `init` program but this can be overridden by passing an argument to the kernel upon boot.
For those who have played around with the [broken machines](./exercise_broken_machines.md) this is probably no real news.
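For example, editing the kernel line from the GRUB menu and appending an `init=` argument boots straight into a shell instead of systemd (the kernel image and root device below are just placeholders):

```
# press 'e' at the GRUB menu and append init=/bin/bash to the existing "linux" line
linux /boot/vmlinuz-... root=... ro init=/bin/bash
```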
At the last stage of the boot sequence, systemd takes over and launches all services that are `enabled` for the requested `runlevel`.
The runlevel might be new to you but we'll come back to that in a minute.
### Interfacing with systemd
Your main tool to *talk* to systemd is `systemctl`.
It's sort of a **client** to the systemd **server**.
The most used commands, that you probably know by heart, are:
```
sudo systemctl start sshd.service
sudo systemctl stop sshd.service
sudo systemctl restart sshd.service
sudo systemctl status sshd.service
sudo systemctl enable sshd.service
sudo systemctl disable sshd.service
```
Just knowing these will get you a long way but there are a few more handy commands to push it all a bit further.
## Beyond the basics
### A deeper look into what's available
If you invoke just `sudo systemctl` it lists all the units that are active.
It's actually a shortcut to `sudo systemctl list-units`.
You'll be confronted with an interface, `less`, that you know pretty well so have a look around and maybe search for some keywords.
At the bottom of the pager you'll see a few hints that point you to other commands that show even more output.
When we disable a service such as `sshd`, its configuration files are not changed at all; a service never starts itself, systemd is responsible for that.
So if we want to see all services available on our system we type `sudo systemctl list-unit-files`, which gives a clear table, also via `less`, that outlines the current state and the vendor preset of each unit file.
We can add more command line arguments to `systemctl` to narrow down the output a bit.
A handy one is `--type service` to only see services.
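A few combinations that come in handy (all of these are standard `systemctl` flags):

```
# only the service units that are currently loaded
sudo systemctl list-units --type service
# only the services that are actually running
sudo systemctl list-units --type service --state running
# every service unit file known to systemd, running or not
sudo systemctl list-unit-files --type service
```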
I advise you to have a read of `man systemctl` to grasp the full scope of its capabilities.
### Inspecting a running service
To inspect a running service we can run `sudo systemctl status sshd.service`.
```
Jul 26 12:14:35 deathstar sshd[576]: Server listening on 0.0.0.0 port 22.
Jul 26 12:14:35 deathstar sshd[576]: Server listening on :: port 22.
Jul 26 12:14:35 deathstar systemd[1]: Started OpenBSD Secure Shell server.
Jul 28 20:13:38 deathstar sshd[175321]: Connection closed by authenticating user waldek 192.168.0.222 port 51542 [preauth]
Aug 14 09:05:36 deathstar sshd[1001518]: Connection closed by authenticating user waldek 192.168.0.33 port 35448 [preauth]
Aug 14 09:05:56 deathstar sshd[1001567]: Connection closed by authenticating user waldek 192.168.0.33 port 35636 [preauth]
Aug 14 09:06:20 deathstar sshd[1001648]: Connection closed by authenticating user waldek 192.168.0.236 port 53346 [preauth]
```
There is quite a bit of interesting information here.
There are two **blocks** of information.
At the top we see some details and links to the help about the service in question and at the bottom we see the last eight lines of the server logs.
To see *how* systemd has the sshd service configured we need to have a look at the second line, the one that says `Loaded:`.
The path that follows is the service file that systemd uses to know **how**, **when** and **where** to run the service.
As with most things Linux this is a simple text file we can open up with `less`, `vim` or even `nano` but there is a sweet shortcut supplied by systemd itself!
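The shortcut in question is most likely `systemctl cat`, with `systemctl edit` as its editing counterpart (a sketch, assuming the same ssh unit as above):

```
# print the unit file(s) systemd has loaded for the service
sudo systemctl cat ssh.service
# edit a full copy of the unit file under /etc/systemd/system/
sudo systemctl edit --full ssh.service
```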
```
└─1108167 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
Aug 15 20:00:49 deathstar systemd[1]: Starting OpenBSD Secure Shell server...
Aug 15 20:00:49 deathstar sshd[1108167]: Server listening on 0.0.0.0 port 22.
Aug 15 20:00:49 deathstar sshd[1108167]: Server listening on :: port 22.
Aug 15 20:00:49 deathstar systemd[1]: Started OpenBSD Secure Shell server.
```
Yes it did, but the service is still running on port 22.
This is what systemd means by `loaded`.
A configuration file is loaded into memory and used from there.
To take changes to unit files into account we need to reload the files that have changed, a bit like we restart `sshd` when we change its configuration file, but we can't simply restart `systemd` itself as that would freeze our computer.
Luckily there is a command to do exactly this, `sudo systemctl daemon-reload`, and it's spelled out in the warning notice systemd prints.
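The typical sequence after editing a unit file looks roughly like this:

```
sudo systemctl daemon-reload
sudo systemctl restart ssh.service
sudo systemctl status ssh.service
```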
```
Aug 15 20:24:47 deathstar systemd[1]: Starting OpenBSD Secure Shell server...
Aug 15 20:24:47 deathstar sshd[1111233]: Server listening on 0.0.0.0 port 2200.
Aug 15 20:24:47 deathstar sshd[1111233]: Server listening on :: port 2200.
Aug 15 20:24:47 deathstar systemd[1]: Started OpenBSD Secure Shell server.
➜ ~ git:(master) ✗
```
Notice something different here?
The location of the unit file is no longer `/lib/systemd/system/ssh.service` but `/etc/systemd/system/ssh.service`.
This is actually the *preferred* way of modifying unit files supplied by your distribution, because if at some point in the future your distro ships a new version of that unit file and you update, any custom changes made directly in `/lib/systemd/system/` would be overwritten! (see [this](https://serverfault.com/questions/840996/modify-systemd-unit-file-without-altering-upstream-unit-file) post on serverfault)
Think of the similar situation we encountered with `/etc/dnsmasq.d/` when installing a Pi-hole.
What if you want to `revert` back to the file supplied by Debian?
A quick `sudo systemctl revert sshd.service` should do the trick!
Don't forget to `daemon-reload` when you want to restart the service.
## Writing your own service files
Imagine we want to run a custom server each time the machine boots.
Here systemd comes to the rescue, plus we can run such a service as *ourselves* and don't need to interfere with the standard system services.
Let's give this a go!
A simple example of such a server would be a small Python 3 web server.
Let's create a directory in our home called website.
We can do this with `mkdir ~/website`.
In this folder we'll make an `index.html` file to which we add the content of our *website*.
You can write anything you want, in html or plaintext.
To spin up a quick web server we can use the `http.server` module from the Python standard library.
I **must** note that it's not a production-ready server and should **only** be used for small testing purposes (and for our example).
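A minimal sketch of how this could be wired up as a **user** service follows; the file name `website.service` matches the one referenced below, while the port and paths are just example choices.

```
mkdir -p ~/website
echo "<h1>Hello from systemd</h1>" > ~/website/index.html
# try it by hand first, then stop it with ctrl+c
python3 -m http.server 8080 --directory ~/website
```

The matching unit file would live in `~/.config/systemd/user/website.service` and could look something like this:

```
[Unit]
Description=Our own webserver

[Service]
ExecStart=/usr/bin/python3 -m http.server 8080 --directory %h/website

[Install]
WantedBy=default.target
```

After creating or changing it, reload the **user** manager and start (and optionally enable) the service:

```
systemctl --user daemon-reload
systemctl --user start website.service
systemctl --user enable website.service
```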
For those that want to dive deeper into the syntax of the configuration file you should have a look at the output of `systemctl --user show website.service`, which lists all of the *hidden* settings that are predefined for a service.
To see what you can change them to, have a look [here](https://www.freedesktop.org/software/systemd/man/systemd.service.html).
### Deep dive into the logs
All logs made by systemd go into the `/var/log/daemon.log` file by default.
You can override this but I would highly advise against it, as there are special **tools** that come with systemd to inspect the logs, plus having all logs in one place is quite handy for grepping.
Have a look at the file and you should see a similar output.
```
Aug 15 20:24:44 deathstar systemd[1]: Reloading.
Aug 15 20:24:47 deathstar systemd[1]: Stopping OpenBSD Secure Shell server...
Aug 15 20:24:47 deathstar systemd[1]: ssh.service: Succeeded.
Aug 15 20:24:47 deathstar systemd[1]: Stopped OpenBSD Secure Shell server.
Aug 15 20:24:47 deathstar systemd[1]: Starting OpenBSD Secure Shell server...
Aug 15 20:24:47 deathstar systemd[1]: Started OpenBSD Secure Shell server.
Aug 15 20:57:38 deathstar systemd[585]: Started VTE child process 1114299 launched by gnome-terminal-server process 1027574.
Aug 15 20:58:02 deathstar systemd[585]: Reloading.
Aug 15 20:58:15 deathstar systemd[585]: Started Our own webserver.
```
Systemd comes with a specialized program to sift through its logs called `journalctl`.
Just invoking `journalctl` will give you the output of the log file in less.
A **very handy** argument you'll probably always use is `-e` which scrolls to the end of the logs.
As an alternative you can add `--no-pager` which will not pipe to `less` but just print to STDOUT.
To only view a specific service we can add the `--unit` argument, followed by the service name.
For example:
```
➜ ~ git:(master) ✗ sudo journalctl --unit ssh.service --no-pager --since "1 h 25 min ago"
-- Journal begins at Wed 2021-07-14 22:35:36 CEST, ends at Sun 2021-08-15 21:46:42 CEST. --
Aug 15 20:22:14 deathstar sshd[1110635]: Received signal 15; terminating.
Aug 15 20:22:14 deathstar systemd[1]: Stopping OpenBSD Secure Shell server...
Aug 15 20:22:14 deathstar systemd[1]: ssh.service: Succeeded.
Aug 15 20:22:14 deathstar systemd[1]: Stopped OpenBSD Secure Shell server.
Aug 15 20:22:14 deathstar systemd[1]: Starting OpenBSD Secure Shell server...
Aug 15 20:22:14 deathstar sshd[1110849]: Server listening on 0.0.0.0 port 2222.
Aug 15 20:22:14 deathstar sshd[1110849]: Server listening on :: port 2222.
Aug 15 20:22:14 deathstar systemd[1]: Started OpenBSD Secure Shell server.
Aug 15 20:24:47 deathstar systemd[1]: Stopping OpenBSD Secure Shell server...
Aug 15 20:24:47 deathstar sshd[1110849]: Received signal 15; terminating.
Aug 15 20:24:47 deathstar systemd[1]: ssh.service: Succeeded.
Aug 15 20:24:47 deathstar systemd[1]: Stopped OpenBSD Secure Shell server.
Aug 15 20:24:47 deathstar systemd[1]: Starting OpenBSD Secure Shell server...
Aug 15 20:24:47 deathstar sshd[1111233]: Server listening on 0.0.0.0 port 2200.
Aug 15 20:24:47 deathstar sshd[1111233]: Server listening on :: port 2200.
Aug 15 20:24:47 deathstar systemd[1]: Started OpenBSD Secure Shell server.
➜ ~ git:(master) ✗
```
To understand the `--since` argument I advise you to read the `man systemd.time` pages.
An argument you'll often see suggested online is `-x`.
It adds more verbose output to debug issues.
The man page documentation is included below for reference purposes.
```
-x, --catalog
Augment log lines with explanation texts from the message catalog. This will add explanatory help texts to log messages
in the output where this is available. These short help texts will explain the context of an error or log event,
possible solutions, as well as pointers to support forums, developer documentation, and any other relevant manuals. Note
that help texts are not available for all messages, but only for selected ones. For more information on the message
catalog, please refer to the Message Catalog Developer Documentation[5].
Note: when attaching journalctl output to bug reports, please do not use -x.
```
Last but not least, the `-f` argument does a *live* stream of the log so you can debug on the fly.
This can be very handy in a `tmux` session.
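For example (same unit as before):

```
# follow the whole journal live
sudo journalctl -f
# or just one unit, e.g. the ssh service
sudo journalctl -f --unit ssh.service
```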
For more information I highly advise the man pages with `man journalctl`!
## A sidetrack into cron
But what if we want to run a quick script or command every day at midnight?
Like an email report of the system status, or an `apt update`?
This can also be done with systemd but the *classic* way of doing this is via `cron`.
As always, have a look at `man cron` and when you're finished you'll know you want to read the `man crontab` as well.
In short, every user can have a crontab, which is a list of commands to execute at certain intervals.
To edit your own crontab, just execute `crontab -e`, which will open it in your editor of choice (a plain `crontab -l` prints it without opening an editor).
Read through the comments, it's quite self-explanatory, no?
Only the timestamp syntax is quite annoying in my opinion but there is a handy [website](https://crontab.guru/every-1-minute) to help you understand it a bit better.
To have a command executed every minute you add the following.
```
* * * * * echo "helloworld" >> /tmp/coucou
```
The `root` user has its own crontab, which you can edit with `sudo crontab -e`.
To do an `apt update` every day at midnight you would add the following.
```
0 0 * * * apt update
```
I must note that this is not really the best way to accomplish automatic updates and upgrades.
Have a look [here](https://help.ubuntu.com/community/AutoWeeklyUpdateHowTo) for better alternatives.
## Systemd timers
As you can probably see, `cron` is a very basic but powerful way of scheduling actions.
Some people really like the simplicity, but for others a bit more control is desired, hence `man systemd.timer`.
We can list all current timers with the following command.
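```
# list every timer, including inactive ones (add --user to see your own user timers)
systemctl list-timers --all
```

The timer setup itself consists of two small unit files; they are not shown above, but a minimal sketch matching the description below could live in `~/.config/systemd/user/` and look like this (the `monitor` name matches the one used in the text, the `uptime` payload is just a placeholder):

```
# monitor.service
[Unit]
Description=Simple system monitor
Wants=monitor.timer

[Service]
Type=oneshot
ExecStart=/usr/bin/uptime

[Install]
WantedBy=default.target
```

```
# monitor.timer
[Unit]
Description=Run the monitor every minute

[Timer]
OnCalendar=minutely
Unit=monitor.service

[Install]
WantedBy=timers.target
```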
Once the timer is in place you should `start` the service with `systemctl --user start monitor.service`.
There is no need to start or enable the `monitor.timer` file as the link between them is in the `monitor.service` file via the `Wants=monitor.timer` configuration line.
If you now watch your log in real time with `journalctl -f --user-unit monitor.service` you should see your service executing every minute!
### Pros and cons
The following advice was taken from the Arch [wiki](https://wiki.archlinux.org/title/Systemd/Timers).
#### As a cron replacement
Although cron is arguably the most well-known job scheduler, systemd timers can be an alternative.
##### Benefits
The main benefits of using timers come from each job having its own systemd service. Some of these benefits are:
* Jobs can be easily started independently of their timers. This simplifies debugging.
* Each job can be configured to run in a specific environment (see systemd.exec(5)).
* Jobs can be attached to cgroups.
* Jobs can be set up to depend on other systemd units.
* Jobs are logged in the systemd journal for easy debugging.
##### Caveats
Some things that are easy to do with cron are difficult to do with timer units alone:
* Creation: to set up a timed job with systemd you need to create two files and run systemctl commands, compared to adding a single line to a crontab.
* Emails: there is no built-in equivalent to cron's MAILTO for sending emails on job failure. See the next section for an example of setting up a similar functionality using OnFailure=.
Also note that user timer units will only run during an active user login session by default. However, lingering can enable services to run at boot even when the user has no active login session.
## A sidetrack into runlevels
The world of Linux has a concept called *runlevels*, which describes a target state the machine is in, or to which you want the machine to go.
It's a complicated way of saying fully operational with a graphical interface, a root-only rescue mode, a reboot, halted, etc.
The official specification of the runlevels defines them as follows.
* Runlevel 0 or Halt is used to shift the computer from one state to another. It shuts down the system.
* Runlevel 1, s, S or Single-User Mode is used for administrative and recovery functions. It has only enough daemons to allow one user (the root user) to log in and perform system maintenance tasks. All local file systems are mounted. Some essential services are started, but networking remains disabled.
* Runlevel 2 or Multi-user Mode runs most daemons and allows multiple users to log in and use system services, but without networking. On Debian and its derivatives this is a full multi-user mode with X running and a graphical login. Most other distributions leave this runlevel undefined.
* Runlevel 3 or Extended Multi-user Mode is used for a full multi-user mode with a console (non-GUI) login screen, with network services available.
* Runlevel 4 is not normally used and is left undefined, so it can be used for personal customization.
* Runlevel 5 or Graphical Mode is the same as Runlevel 3 but with a graphical login _(such as GDM)_.
* Runlevel 6 or Reboot is a transitional runlevel to reboot the system.
You can inspect the runlevel your system is currently at by executing the following command.
```
➜ ~ git:(master) ✗ sudo runlevel
N 5
➜ ~ git:(master) ✗
```
You can change your runlevel with `sudo telinit`, followed by the level number.
You probably won't see that much difference between levels, but try changing it to level `6` and see what happens.
If you change the runlevel to `1` your machine will probably freeze.
This has to do with the fact we haven't set a `root` password on most of our machines so the single user mode can't be accessed.
Try setting a root password, then set the level to `1` again and see what happens.
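A quick sketch of switching levels (be careful on a machine you're actually working on):

```
# text-only multi-user mode
sudo telinit 3
# back to the graphical one
sudo telinit 5
```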
## Systemd targets
Systemd takes the concept of runlevels a bit further and renames them to **targets**.
The mapping of runlevels to targets is as follows.
* poweroff.target (runlevel 0): shutdown and power off the system
* rescue.target (runlevel 1): launch the rescue shell session
* multi-user.target (runlevel 2,3,4): set the system to a non-graphical (console) multi-user state
* graphical.target (runlevel 5): use a graphical multi-user system with network services
* reboot.target (runlevel 6): shutdown and reboot the system
But, there are a *lot* more targets available on a machine running systemd.
Luckily `systemctl` offers a nice way to inspect them.
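A likely candidate for that (both of these are standard `systemctl` subcommands):

```
# list the targets that are currently active
systemctl list-units --type target
# show the target the system boots into by default
systemctl get-default
```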