It’s been a while since I blogged anything here, but that doesn’t mean I haven’t done anything in the past couple of years! Over the next few months, I’m going to try to post about the various projects I’ve worked on recently.
On my personal servers, I have quite a number of cron jobs that do useful things for me (notably, running `gmail-cleaner`, which will be the subject of a future blog post). These machines send me the output from these jobs via email. But with a large number of jobs (I have something like 10 or 20 `gmail-cleaner` jobs alone), this gets noisy. I really only want to receive these emails if a job fails, or if it produces some interesting output. One can work around this problem (say, by redirecting standard output, but not standard error, to `/dev/null`), but that only works to a point. Some programs’ output behavior is incompatible with this approach, and filtering out all but certain output means adding `grep` to the shell command, which gets unwieldy quickly.
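To make the pain concrete, here’s roughly what that shell-only workaround looks like. This is just a sketch: `job` stands in for the real cron command, and the matched string is borrowed from an example later in this post.

```shell
#!/bin/sh
# Capture the job's combined output, then decide whether to re-emit it
# (cron emails anything a job prints). `job` is a placeholder for the
# real command being wrapped.
job() { echo '0 entries affected'; }

out="$(job 2>&1)"
status=$?

# Re-emit only on failure, or when the output is more than the routine
# "nothing happened" message.
if [ "$status" -ne 0 ] || ! printf '%s\n' "$out" | grep -q '0 entries affected'; then
    printf '%s\n' "$out"
fi
```

Now imagine maintaining a variant of this boilerplate inside a dozen crontab lines.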
My solution to this problem is `runner`. This is a pretty simple program that wraps a command and swallows its output unless certain conditions are met. It’s written in Go to simplify deployment and cross-compilation.
By default, it prints the command’s output only if the command returns a nonzero exit code (i.e., it fails). You can tell it to treat certain other exit codes as healthy, and to print the output if it contains (or doesn’t contain) a specific string. It can write the command’s output to a log directory regardless of how the command exited, and it can also print the command’s environment as part of the output, which is useful for debugging.
The README covers these features in some detail, so let’s examine a couple of real-world uses from my crontab:

```
RUNNER_LOG_DIR=/home/cdzombak/log/runner
*/30 * * * * runner -work-dir /home/cdzombak/changedetection -- ./env/bin/urlwatch
```
This example runs `urlwatch` every 30 minutes. The output gets printed and emailed to me only if `urlwatch` returns a nonzero exit code. If that happens, the resulting email also contains the environment in which `urlwatch` ran. Regardless of whether `urlwatch` ran successfully, the output is always written to a timestamped file in `RUNNER_LOG_DIR`.
```
RUNNER_LOG_DIR=/home/cdzombak/log/runner
04 */6 * * * runner -print-if-not-match "0 entries affected" -hide-env -work-dir /home/cdzombak/scripts/feedbin-auto-archiver -job-name "Feedbin Archiver" -- ./venv/bin/python3 ./feedbin_archiver.py --rules-file /home/cdzombak/Sync/feedbin-archiver-rules.json --dry-run false
```
This sample demonstrates a few more of `runner`’s features! This’ll run my automatic Feedbin archiver (another topic for a future post) a few times per day. Output is emailed to me if `feedbin_archiver.py` fails, or if its output doesn’t contain the string `0 entries affected`; that is, I get an email whenever the program actually touches something in my Feedbin account. Thanks to `-hide-env`, this email doesn’t contain the environment in which `feedbin_archiver.py` ran. Finally, as before, the output is always written to a timestamped file in `RUNNER_LOG_DIR`.
I think that covers the most important points. If you find yourself wanting to manage the output from a bunch of cron jobs, consider giving `runner` a try.