Running Laravel Queue Workers with systemd

The official Laravel documentation for Queues suggests that you ensure your worker process remains alive using “a process monitor such as Supervisor”. However, modern linux distributions ship with a built-in tool that will do just that, as well as collect the logs, control resource usage, and send off alerts when things go wrong. It’s called systemd, and I’m going to show you how we utilized it to run our queue worker for the Google Nest & Christopher Reeve Foundation promotion.
Background
Modern linux distributions have moved away from the ugly, distro-specific init scripts of the past to the standard systemd. In short, systemd is a collection of tools that form a sort of shared middleware between the kernel and applications. Not least of those tools is the systemd daemon itself, whose sole purpose is to keep track of and manage all processes that are spawned on the system.
Systemd became the standard in the linux world a few years ago and is used by just about any server running Debian, Ubuntu, Red Hat, CentOS, OpenSUSE, SUSE Linux Enterprise Server, or even (Fedora) CoreOS. That means you can use this knowledge on just about any server you find yourself on these days.
Interaction with systemd occurs via the systemctl command, which allows you to manipulate the activation state of a system daemon (called a service) or a scheduled task (“cron jobs”, called timers), and to inspect their status. All services and timers (collectively “units”) are defined in simple ini-like configuration files called unit files. Once a unit file is created, the systemctl command will know how to interact with it, and the systemd daemon will know how to keep tabs on it.
Before we get into the nitty gritty of setting up our service file, I want to make a note about a flag we will be using with all of our systemctl calls throughout this tutorial: the --user flag. The systemd daemon itself usually runs as the root user of the operating system, allowing it to control literally everything. However, it is also possible for that daemon to spawn per-user copies of itself that run as a given user and have purview over processes that should only run for that user, with that user’s permissions.
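If you ever want to check that a per-user daemon is actually up for your account, you can ask it for its own status (no unit name needed):
www-user@host $ systemctl --user status
That prints the state of the user manager itself, along with a tree of everything it is currently running.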
When I provision a server, one of the first things I do is to set the php-fpm (or apache) daemon to run as an unprivileged user. Then I set the directory that Laravel lives in to be owned by that user, which prevents any sort of permission headaches with things like the storage folder. Finally, I set up our CI to deploy as that user, again ensuring that files have the right ownership to run correctly. As such, it makes sense for the queue worker we are about to set up to also run as that user.
By logging in as the same user (I’ll use www-user in our examples), then passing the --user flag to systemctl, I ensure that the service has all the correct permissions to write to the storage directory and read the Laravel php files. This also prevents me from having to enter a password to interact with systemd, as the daemon isn’t running as root, and therefore I avoid having to give sudo permissions to the web user (which would be bad).
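One note if you administer the box from a different account rather than ssh-ing in directly as www-user: a plain sudo su www-user usually won’t carry the environment (XDG_RUNTIME_DIR in particular) that systemctl --user needs to find the right daemon. Assuming the systemd-container package is installed (it provides machinectl), a clean way to get a real login session as the user is:
root@host % machinectl shell www-user@.host
Everything below assumes you’re either in a session like that or logged in over ssh as www-user directly.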
There is a danger here, however. By default, the per-user systemd daemon processes will shut down, along with all of their services, as soon as there is no logged-in session remaining for that user. This means if you set up the service in the following sections over ssh, then log out, the service will be stopped! Fortunately there is an easy way to tell systemd to keep your specific user daemon alive even when no one is logged in over ssh. It does, however, need to be run as root in the version of systemd that Ubuntu ships with, so to be safe, drop to root (or sudo from an administrative account) to run this:
root@host % loginctl enable-linger www-user
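You can confirm it stuck with:
root@host % loginctl show-user www-user --property=Linger
which should print Linger=yes.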
Setting up the Queue Worker
Let’s say we have an emails queue on the database connection that we want our worker to process. The command to run the worker might look like the following:
php artisan queue:work database --queue emails
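(In practice you’ll likely add a few of artisan’s own tuning flags as well; for example, with illustrative values:
php artisan queue:work database --queue emails --sleep=3 --tries=3 --timeout=90
--sleep controls how long to pause when no job is available, --tries caps the attempts per job, and --timeout kills a single job that runs too long. These are Laravel options rather than systemd ones, so I’ll leave them out below for brevity.)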
This is going to be the command that we instruct systemd to run (and keep alive for us). To do that, we’ll want to create a service file which represents the background process. Searching the web for how to do this is not super productive, as there are a lot of manual ways to go about it, so I’m going to save you the hassle and show you how to create it in the right place in a single command.
www-user@host $ systemctl --user edit --force --full queue-worker.service
This will open your terminal editor with a blank file that will be saved to the correct directory for you on exit. This is helpful because there are a number of directories systemd looks in for unit files, and it’s confusing which is which. The actual contents of the file are simple to write and understand. I’ll show you what we want in there and take you through what it all means.
[Unit]
Description=Runs and keeps alive the artisan queue:work process
OnFailure=failure-notify@%n.service
[Service]
Restart=always
WorkingDirectory=/var/www/html
ExecStart=/usr/bin/php artisan queue:work database --queue emails
[Install]
WantedBy=default.target
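Once you save and quit, the file lands in ~/.config/systemd/user/ (the correct directory the edit command picks for a --user unit), and you can double-check exactly what systemd loaded with:
www-user@host $ systemctl --user cat queue-worker.service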
The file is split up into three sections:
Unit
This is where we describe attributes that are common to all unit types (recall that a timer is also a kind of unit). Here we give a human-readable description, as well as tell systemd what to do when the service fails. I’ll go into that failure handling in more detail later.
Service
This is where we set attributes specific to the service type. This is where we tell it how to start the service. In our case this is just the artisan command we identified earlier. We could also define ExecReload, ExecStop and a few others to tell systemd more specifically how to handle the process properly. However, we know artisan handles typical process signals like SIGTERM gracefully, so we don’t need to go ham. Besides that, we tell it to try and keep the service up with Restart=always, and tell it to run in a specific directory (where our code lives) so the command can locate artisan and the right env file. Note that if you have multiple environments on the same server, you can set up multiple service files differing only by working directory to run queue workers for each separately (there’s a concrete example of this just after the Install note below).
Install
This we use to help systemd understand when to start the service if it has been enabled. When a service is enabled, it means that it will be started up again after a restart of the server. This is definitely what we want, so we have to include this section. I won’t explain targets here as it is a bit of an esoteric concept, but suffice it to say that the target you select may be different if you are installing for root or in a non-standard configuration. To get it right, you need only run systemctl --user get-default and the correct value will be reported.
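To make that multiple-environments note from the Service section concrete, a worker for a staging copy of the site could be a second unit that is identical apart from its name and working directory (the /var/www/staging path and the queue-worker-staging name here are hypothetical; use whatever fits your setup):
www-user@host $ systemctl --user edit --force --full queue-worker-staging.service
[Unit]
Description=Runs and keeps alive the artisan queue:work process (staging)
OnFailure=failure-notify@%n.service
[Service]
Restart=always
WorkingDirectory=/var/www/staging
ExecStart=/usr/bin/php artisan queue:work database --queue emails
[Install]
WantedBy=default.target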
Get it Running
Once we have the service definition, we need to tell the user systemd daemon to start it, and additionally to bring it back up after a restart (“enable”).
www-user@host $ systemctl --user start queue-worker.service
www-user@host $ systemctl --user enable queue-worker.service
That’s all there is to that.
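Tip: on any reasonably recent systemd you can collapse those two into a single call by passing --now to enable, which enables and starts in one step:
www-user@host $ systemctl --user enable --now queue-worker.service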
WARNING: Don’t forget to run the loginctl command from the Background section before logging out of SSH after starting and enabling, or the user daemon will exit.
Monitor Your Jobs
When we run our worker through systemd, we get a lot of nice things for free. By default, any STDOUT and STDERR emitted by the worker is collected in a binary log format that’s maintained for you. No need to set up a daily rotated log file for your jobs (unless you want to). You can access the very latest log entries (think tail) by inspecting the status of the worker with systemctl. You’ll get some nice extras in the output, like the process id, running/failure state, uptime, memory and cpu usage.
www-user@host $ systemctl --user status queue-worker.service
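The output will look something like this (abridged and purely illustrative):
● queue-worker.service - Runs and keeps alive the artisan queue:work process
   Loaded: loaded (/home/www-user/.config/systemd/user/queue-worker.service; enabled; ...)
   Active: active (running) since ...; 2 days ago
 Main PID: 1234 (php)
   Memory: 34.0M
followed by the last handful of log lines from the worker.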
To get the full log, you can use the purpose-built journalctl command, which by default will pipe the entire history up to the point it is called into less, meaning you can use some basic vim bindings like / to search through the logs. To follow it in real time like tail -f, we can pass -f.
www-user@host $ journalctl --user -u queue-worker.service
www-user@host $ journalctl --user -fu queue-worker.service
The --user flag is needed to get the logs for the user’s daemon, and -u <name_of_service> gets us just the logs for the service in question.
Tip: Actually, it can be useful to drop that flag and see all of the events happening on the system together when debugging strange behaviour. This was how I happened on the need for the loginctl command, in fact.
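A couple of other journalctl flags that come in handy with a chatty worker: -n to limit how many lines you start with, and --since/--until to slice by time (the timestamps below are just examples):
www-user@host $ journalctl --user -u queue-worker.service -n 200
www-user@host $ journalctl --user -u queue-worker.service --since "2 hours ago"
www-user@host $ journalctl --user -u queue-worker.service --since 09:00 --until 09:30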
Failure Notifications
When we took a look through the service file, I skimmed over the OnFailure directive. Here’s a reminder of what that looked like:
[Unit]
Description=Runs and keeps alive the artisan queue:work process
OnFailure=failure-notify@%n.service
The OnFailure directive does not call an arbitrary command, but rather another unit. So we’re going to set up a one-shot service (read: “call this once but don’t keep it alive”) whose job is to run an arbitrary script.
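The file is created the same way as the worker’s; the only wrinkle is the @ in the name, which marks this as a template unit that receives the name of the failing service as its instance:
www-user@host $ systemctl --user edit --force --full failure-notify@.service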
[Unit]
Description=OnFailure notification for %i
[Service]
Type=oneshot
ExecStart=/home/www-user/notify-slack.sh %i
Here, I’m calling a custom script that sends a notification to a Slack channel whenever the worker enters a failed state, for example when systemd gives up trying to restart it. If you’re wondering what the %n and %i are about, that is just systemd’s way of passing the name of the failing service into the other unit file: %n expands to the unit’s own name, which becomes the instance name that %i refers to inside the template.
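For completeness, here’s a minimal sketch of what a notify-slack.sh could look like, assuming you post to a Slack incoming webhook (the webhook URL below is a placeholder, and the script is entirely up to you; it could just as easily send an email or page someone):
#!/usr/bin/env bash
# Called by failure-notify@.service with the failing unit's name as the first argument.
UNIT="$1"
WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ" # placeholder

curl -s -X POST -H 'Content-type: application/json' \
  --data "{\"text\": \"${UNIT} on $(hostname) entered a failed state\"}" \
  "$WEBHOOK_URL"
Remember to make it executable (chmod +x /home/www-user/notify-slack.sh), or the one-shot service will fail too.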
Further Resources
What we’ve looked at so far should get you as far as you need to run a queue worker for Laravel (or Craft 3 for that matter, with a little modification) on a server you manage. Whether or not you’ve used Supervisor before, I believe it’s worthwhile to learn how to use this as, like grep and vim, it’ll be there on almost any server you have to manage yourself and you won’t need to install anything.
If you’re looking to level up beyond just running a queue worker, or want to explore resource-limiting your workers and much more, you’ll probably have a hard time finding good resources. I’ll leave you with a list of docs, articles, and tutorials that I’ve found to be of high quality.
- systemd (Arch Wiki) – Learn about writing unit files and what targets are.
- systemd/Timers (Arch Wiki) – Learn how to replace cron and at with timers.
- systemd/Users (Arch Wiki) – Learn how to interact with the user daemon.
- systemd/Journal (Arch Wiki) – Learn how to get more out of the built-in logging.
- Systemd Essentials: Working with Services, Units, and the Journal (Digital Ocean) – Get up and running quickly with the basics.
- How To Use Systemctl to Manage Systemd Services and Units (Digital Ocean) – A more thorough look at service management with systemd.
- How To Use Journalctl to View and Manipulate Systemd Logs (Digital Ocean) – A more thorough look at interacting with the built-in logging.
- Linux Academy Courses and Hands-on Labs – Paid (7 day trial available) learning materials that put you at the terminal of real server instances to try things out as you learn.