Why?
I started looking at z.ai as an alternative, as I hit the Claude Code
limits far too quickly and often. GLM 4.7 claims to be on par with
Anthropic's models while costing only USD 36/year at the time I
subscribed. They've since raised their prices to USD 84/year
(non-referral link).
The Claude Code client is currently still better than the client provided by opencode. Soon, opencode will have a --yolo mode to auto-override all permission prompts.
Maybe it will even support a monochrome mode instead of its fruit-salad UI, but I think the allure of all those TUI tools to their creators is to build a colorful slot machine where people spend their tokens.
Installation
Installing the Claude Code client just follows the default installation instructions; I simply reused my existing CC installation.
Config setup
The configuration only changes where the client points: instead of the Claude Code models, it is pointed at the GLM models, which expose the same API.
- Get your API key from the z.ai API key page
- Add the following setting to your environment:
export ANTHROPIC_AUTH_TOKEN_HELPER='echo "your_zai_api_key"'
- Add the "env" block to your (z.ai) .claude.conf:
{
"env": {
"ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
"API_TIMEOUT_MS": "3000000"
}
}
Launch Claude as usual
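Alternatively, the whole switch-over can live in the shell environment; a minimal sketch, assuming Claude Code also honors these variables when exported directly instead of via the "env" block (the API key is a placeholder):

```shell
# Same configuration as above, done entirely in the environment.
# The token helper echoes the key so it never lands in a config file.
export ANTHROPIC_AUTH_TOKEN_HELPER='echo "your_zai_api_key"'
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export API_TIMEOUT_MS="3000000"   # 50 minutes, for long-running requests
# then launch as usual:
# claude
```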
Containerfile creating a container for CC-with-GLM4.7
The Containerfile I used with Claude also works with z.ai, as long as you mount the appropriate config directory.
... which are meaningless internet points, but Anthropic only
recognizes projects with 5k+ GitHub stars
for their Open Source program. Of course, you can also apply and try for a manual review.
But maybe we should push the stars of the Perl repository
up a bit in this popularity contest...
Of course, a free 6-month Claude Max subscription isn't all that great, but
if GitHub stars are a measure...
When I write a blog entry, I usually end in the following loop:
- Write/update the .markdown file
- Regenerate the HTML using statocles build; this regenerates the whole site
- Refresh my browser to view the current page
Gluing together Mojo::File::ChangeNotify
and App::Mojo::AssetReloader with a nice default rule using make for changed files makes updating the browser automatic and almost instant.
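For the watch-and-rebuild half of that loop, a plain-shell sketch; inotifywait (from inotify-tools) stands in here for Mojo::File::ChangeNotify, and the browser reload stays with App::Mojo::AssetReloader:

```shell
# Block until something below blog/ changes, then rebuild via make.
# Wrapped in a function so it can be started in a spare terminal.
watch_and_rebuild() {
    while inotifywait -q -r -e modify,create,move blog/; do
        make    # the Makefile only rebuilds pages whose .markdown changed
    done
}
# watch_and_rebuild
```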
The Makefile
BLOGPOSTS=$(wildcard blog/*/*/*/*/*.markdown)
HTML=$(patsubst %.markdown,.statocles/build/%.html,$(BLOGPOSTS))
all: $(HTML)
# Pattern rule: each built HTML page depends on its .markdown source
.statocles/build/%.html: %.markdown
	statocles build
The main "problem" now is opening the loong URL in the browser
manually. Having make also work with processes
(via /proc/(pid)/cmdline) is something for a later day.
It would be great if statocles supported building a single page, but
so far, I've resorted to rebuilding the whole site every time.
Reality imitates art:

Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars
Lethal Trifecta
All AI agents must live with the Lethal Trifecta, as coined
by Simon Willison: access to private data, exposure to untrusted content, and the ability to communicate externally.

For programming assistants, which need to be online to install modules and to run tests,
this basically means they cannot have access to private information. So my solution is to run them
in a podman container where they have read/write access to a directory where I also check out
the code the agent should work on.
This is somewhat in contrast to the current meme of letting an
OpenClaw assistant run with your credentials, your
email address and input from the outside world.
Setup
My setup chooses to remove all access to private data, since for programming,
an agent does not need access to any data that should not be publicly known.
- Claude Code within its own Docker container
- Runs as root there
- Mount /home/corion/claude-in-docker/.claude as /root/.claude
- Mount the working directory as /claude
- (maybe) mount other needed directories as read-only, but I haven't felt the need for that
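Taken together, those mount points translate into a podman invocation roughly like the sketch below; the image tag cc-glm and the helper function are my own naming, not part of the original setup:

```shell
# Build the podman command line for a given project directory. Keeping the
# construction in a function separates it from execution and keeps it testable.
cc_container_cmd() {
    workdir="$1"
    printf '%s' "podman run -it --rm -v $HOME/claude-in-docker/.claude:/root/.claude -v $workdir:/claude -w /claude cc-glm claude"
}
# launch with: eval "$(cc_container_cmd "$PWD")"
```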
Dockerfile
FROM docker.io/library/debian:trixie-slim
# debian-trixie-slim
RUN <<EOF
apt-get update
# Install our packages
DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y npm perl build-essential imagemagick git apache2 wireguard wget curl cpanminus liblocal-lib-perl ripgrep
# Install claude
curl -fsSL https://claude.ai/install.sh | bash
# Set up our directories to be mountable from the outside
mkdir -p /work
mkdir -p /root/.claude
# Now you need to /login with claude :-/
# claude plugins install superpowers@superpowers-marketplace
EOF
# Add claude to the search path
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin"
ENTRYPOINT ["bash"]
CMD ["-i"]
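With the Dockerfile above, building and entering the container is the usual pair of commands; the tag cc-glm is my own choice:

```shell
# Build the image once, then start an interactive shell with the project
# mounted at /work (the directory the Dockerfile prepared); run `claude`
# inside, and `/login` on first use.
podman build -t cc-glm .
podman run -it --rm -v "$PWD":/work -w /work cc-glm
```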
Script to launch CC
Of course, the first thing an AI agent is used for is to write a script
that launches the AI agent in a container. This script is
very much still under development as I find more and more use cases that
the script does not cover.
Development notes
While developing the script, I found that Claude Code very much needs
example sections to work from. On its own, it comes up with code that is not
really suitable. This mildly reinforces my impression that the average Perl code
used for training is not very good.