My upgrade of my home server from Debian 11 ("bullseye") to
Debian 12 ("bookworm") went
almost without a hitch. Yesterday I realized that the Postgres data hadn't
been migrated from the old DB to the Debian package of Postgres 15. But
luckily, the good Pg people provide a Debian
package of 9.6 (the version which held my data) for Debian 12.
I could install that one, fire it up, dump all data into SQL, fire up Pg 15
from Debian and import it there. Now I run such an SQL dump daily, just to have
the data available as SQL files.
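Such a daily dump can be wired up with a small script for cron. The following is only a sketch; the backup directory, the file naming and the `postgres` user are my assumptions, not necessarily the setup described here:

```shell
# Create a small daily-dump script, e.g. for /etc/cron.daily
# (the 'postgres' user and the backup path are placeholder assumptions)
mkdir -p backups
cat > backups/dump-databases.sh <<'EOF'
#!/bin/sh
# Dump all databases of the local cluster into one dated SQL file
pg_dumpall -U postgres > "$HOME/backups/all-databases-$(date +%F).sql"
EOF
chmod +x backups/dump-databases.sh
```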
I wonder if it would be worthwhile for Perl to provide prebuilt
binaries/packages of old Perl versions for current OSes, but then, there are so
many build options that it's not worth the effort in general.
The only use case I see would be to provide an emergency Perl for when your
dist-upgrade nuked the system Perl[^1], but some custom XS modules, or XS
modules installed via cpan instead of the package manager, relied on that
version. This would reduce the number of build options, but I'm still not
sure whether that actually helps anybody.
Maybe simply taking the (Debian) build files for old packages/distributions
and running them for new distributions, with a prefix of /opt/perl5-xx, could
already help. People would still need to edit the interpreter path in their
scripts to bring things back up.
This only makes sense when also rebuilding all the old CPAN modules
for the new OS version, except under /opt. That's a lot of effort for
little to no gain, except when people really need it.
[^1]: Well, not nuked, but replaced with a newer major version that is not
binary compatible.
This is largely an elaboration on my other post on using Git for deployment.
I like to write small toy programs as web apps, like the
Curl-to-Perl converter or
my weather forecast app.
My current tool of choice for writing such web apps is Mojolicious,
and the local development is quite nice as it comes with a local web server
built in.
XXX screenshot of the weather app
The internet
But obviously a web application is no fun if you can't put it online and use
it from wherever, or show it around. While I have a server on the internet
and it has a webserver, updating my software while it is in development is
inconvenient. When something is inconvenient, I don't do it often, so I want
to remove that inconvenience as much as possible.
Approaches
Copying files to the target machine
It's easy to copy files using scp or rsync. My uplink is pretty fast
nowadays, so I can conveniently copy 10 MB within a second.
I program in Perl, and I would need to copy the needed Perl modules as well.
This fails when I use Perl modules with a binary component, like a C library,
since those need to be built for the target machine. So copying files alone
will not work.
Running programs on the target machine
If I'm only copying the data that can be easily created locally, I need a way
to run programs on the remote machine. This is possible, for example, via
ssh, which can run a command on the remote machine (ssh remote-host "command").
But as most of my webapps are written with Perl as the backend, I need to run
at least cpanm --installdeps to install the needed modules. Often I also want
to regenerate other files like manifest.json and/or compress assets. Usually,
these other jobs are done through a Makefile.
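Put together, such a manual deployment could be wrapped in a small script. This is a sketch with a made-up host name and paths, not my actual setup:

```shell
# Create a one-shot deploy script; the host, user and directory names
# are placeholders for illustration
cat > deploy.sh <<'EOF'
#!/bin/sh
# Copy the working tree (without .git), then install dependencies
# and rebuild generated assets on the remote machine
rsync -av --exclude '.git' ./ corion@that.machine.example:my-webapp/
ssh corion@that.machine.example \
    'cd my-webapp && cpanm --installdeps . --notest && make -C public'
EOF
chmod +x deploy.sh
```

It is exactly this kind of glue script that the Git-based approach below replaces.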
Using Git as transport and runner
I use Git as my version control system. In Git, I usually check in almost
everything of interest to a project. Instead of writing a shell script that I
run locally, which uploads the files and then kicks off a remote build, I
(ab)use the Git post-receive hook as my program runner and the Git transport
mechanism for transferring the data.
Git as transport
Git can download and upload changes from other Git repositories. It can
use a variety of transport mechanisms:
- file copy from/to a local directory
- file copy via the git protocol
- file copy via the ssh / scp protocol
The last transport is the most convenient for me, as it means I can simply
have a Git repository on the webserver machine, and git push will upload
my local changes to the remote webserver.
Anatomy of the Git post-receive hook
Whenever Git receives a complete set of changes in a repository, it then
kicks off the post-receive hook. The post-receive hook is a program intended
to be customized by the user to perform tasks whenever that event arrives.
In my case, I use that post-receive hook to perform all the tasks that I
want done on the webserver:
- check out the latest state of my webapp into a directory
- install modules needed by my webapp
- perform other tasks as specified by a Makefile
Setup
Setting up the post-receive hook is fairly simple:
- Create a remote directory for the repository:
  mkdir my-webapp.git
- Initialize the directory as a bare Git repository:
  cd my-webapp.git && git init --bare
- Add the post-receive hook. Don't forget to make the file executable.
- Add the machine as a remote in your local repository:
  git remote add demo corion@that.machine.example:my-webapp.git
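Condensed into commands, the remote side of the setup looks like this; the hook body here is only a placeholder to show where the file goes:

```shell
# Create a bare repository and install a minimal post-receive hook
git init --bare my-webapp.git
cat > my-webapp.git/hooks/post-receive <<'EOF'
#!/bin/sh
echo "received a push"
EOF
# The hook must be executable, otherwise Git silently skips it
chmod +x my-webapp.git/hooks/post-receive
```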
Deployment
Deployment now looks like this:
git push demo
The steps of the post-receive hook
The steps performed by the hook in detail are:
Check out the latest state of my webapp into a directory
git "--work-tree=${CHECKOUT_DIR}" "--git-dir=${REPO}" checkout -f
From the Git repository, we check out the current state into a target directory.
Install modules needed by my webapp
I like to install all modules needed by a webapp into a directory local to that
webapp. This means more maintenance, but it also means that changes to one
webapp don't break other webapps. For additional safety, I also reset the
PERL5LIB environment variable, so that even if the hook is run manually, it
won't install or use modules outside of the app-specific directory.
PERL5LIB=${BASE}/${DIST}/lib /home/corion/perl/bin/cpanm --installdeps "${CHECKOUT_DIR}" -l "${CHECKOUT_DIR}/extlib" --notest
Run post-install steps
Some assets of the webapp might need to be (re)compressed. make is a
convenient tool to update files based on the timestamps of other files:
cd "${CHECKOUT_DIR}/public" && make deploy
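A minimal deploy target in such a Makefile might look like the following sketch; the gzip-compressed asset is an assumed example, not necessarily what the actual Makefile does:

```make
# Rebuild the compressed asset only when the source file is newer
deploy: app.js.gz

app.js.gz: app.js
	gzip -9 -c app.js > app.js.gz
```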
The post-receive hook in its full glory
#!/bin/sh
# Start from a clean Perl environment so no local::lib settings leak in
unset PERL_MB_OPT PERL_MM_OPT PERL_LOCAL_LIB_ROOT PERL5LIB
# Derive the paths from the repository location (my-webapp.git -> my-webapp)
REPO=$( cd "$GIT_DIR" || exit; pwd)
BASE=$(cd "${REPO}/.." || exit; pwd)
CHECKOUT_DIR=${REPO%.git}
DIST=$(basename "${CHECKOUT_DIR}")
# Check out the pushed state into the target directory
git "--work-tree=${CHECKOUT_DIR}" "--git-dir=${REPO}" checkout -f
# Install the dependencies into the app-local extlib directory
PERL5LIB=${BASE}/${DIST}/lib /home/corion/perl/bin/cpanm --installdeps "${CHECKOUT_DIR}" -l "${CHECKOUT_DIR}/extlib" --notest
# Run the post-install steps from the Makefile
cd "${CHECKOUT_DIR}/public" && make deploy
See Also
- Git::Hooks - a Perl program for Git hooks
- git-init - create or edit your default Git hooks
Other approaches
A bundler for Perl. This requires you to have the same Perl, compiler and
compiler flags locally as you have on the remote machine, as it compiles all
artifacts locally.
Time progresses on its own, which manifests itself in the timestamps in
filenames increasing. Such files are documents from the tax office, bank
statements and other stuff I download from these websites. I like to have
them moved from the Downloads directory into directories that are part of my
backup run, and I already have a script that reacts to such downloads being
completed.
This script previously dumped the files in the root directory for documents
of my bank or the tax office via
mv -i ~/Downloads/bank-statement-2022-01-01.pdf ~/Documents/finance/my-bank/
mv -i ~/Downloads/tax-report-01012022.pdf ~/Documents/finance/taxes/
So from time to time, I went through ~/Documents/finance/my-bank/ and moved
the files into directories according to the year they were from.
But I realized that a small tool can do that automatically and also create
the directories directly. And ideally, I don't need to tell the tool very much
at all:
move-year --create -ymd -i ~/Downloads/bank-statement-2022-01-01.pdf ~/Documents/finance/my-bank/
move-year --create -dmy -i ~/Downloads/tax-report-30112021.pdf ~/Documents/finance/taxes/
This way, the files will automatically land in the directories
~/Documents/finance/my-bank/2022 and ~/Documents/finance/taxes/2021.
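The behaviour can be approximated in shell for the -ymd case. This is only an illustrative stand-in (the actual move-year is a Perl tool; here the first four-digit group in the filename is naively taken as the year):

```shell
# Illustrative stand-in for 'move-year --create -ymd -i FILE TARGET/'
file="bank-statement-2022-01-01.pdf"
touch "$file"                       # stand-in for a downloaded file
# Naively take the first four-digit group in the name as the year
year=$(basename "$file" | grep -oE '[0-9]{4}' | head -n 1)
mkdir -p "my-bank/$year"
mv "$file" "my-bank/$year/"
```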
The tool is not yet on CPAN, but it lives on GitHub.