Setting up z.ai GLM 4.7 with Claude Code

Why?

I started looking at z.ai as an alternative because I hit the Claude Code limits far too quickly and too often. GLM 4.7 claims to be on par with Anthropic's models while costing only USD 36/year at the time I subscribed. They have since raised their prices to USD 84/year (non-referral link).

The Claude Code client is currently still better than the client provided by opencode. Soon, opencode will have a --yolo mode to auto-approve all permission prompts. Maybe it will even gain a monochrome mode instead of its fruit-salad UI, but I suspect that the allure of all those TUI tools for their creators lies in building a colorful slot machine where people spend their tokens.

Installation

Installing the Claude Code client follows the default installation instructions; I simply reused my existing CC installation.

Config setup

The configuration change is to point the client at the GLM models, which expose the same API as the Claude models, instead of at the Anthropic endpoints.

  1. Get your API key from the z.ai API key page
  2. Add the following setting to your environment

    export ANTHROPIC_AUTH_TOKEN_HELPER='echo "your_zai_api_key"'
    
  3. Add the "env" block to your (z.ai) .claude.conf

    {
        "env": {
            "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
            "API_TIMEOUT_MS": "3000000"
        }
    }
    
  4. Launch Claude as usual
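The steps above can also be wrapped into a small launcher so that the z.ai settings only apply to one session. This is a sketch: the function name and the CLAUDE_BIN override hook are my own additions, not part of Claude Code, and the token helper still needs your real z.ai API key.

```shell
# Sketch of a per-session launcher for Claude Code against the GLM backend.
# CLAUDE_BIN is a hypothetical override (handy for dry runs); by default
# the real claude binary is invoked with the z.ai environment set.
glm_claude() {
    ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic" \
    API_TIMEOUT_MS="3000000" \
    ANTHROPIC_AUTH_TOKEN_HELPER='echo "your_zai_api_key"' \
    "${CLAUDE_BIN:-claude}" "$@"
}
```

Running `CLAUDE_BIN=env glm_claude` prints the environment the client would see, which is a quick way to check the override without spending tokens.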

Containerfile: creating a container for CC-with-GLM-4.7

The Containerfile I used with Claude also works with z.ai, as long as you mount the appropriate config directory.
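Launching it could look like the sketch below. The image tag claude-glm and the host-side config path are assumptions on my part; the point is only that /root/.claude gets a z.ai-specific directory mounted over it, so the GLM credentials stay separate from the Anthropic ones.

```shell
# Sketch: run the container with a z.ai-specific config directory.
# ${PODMAN:-podman} lets you substitute 'echo' for a dry run.
glm_container() {
    "${PODMAN:-podman}" run -it --rm \
        -v "$HOME/claude-zai/.claude:/root/.claude" \
        -v "$PWD:/work" \
        -w /work \
        claude-glm "$@"
}
```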

My Claude Code setup

Lethal Trifecta

All AI agents live in the shadow of the Lethal Trifecta as coined by Simon Willison: access to private data, exposure to untrusted content, and the ability to communicate externally.


For programming assistants, which need to be online to install modules and run tests, this basically means they cannot have access to private information. So my solution is to run them in a podman container where they have read/write access only to a directory in which I check out the code the agent should work on.

This is somewhat in contrast to the current meme of letting an OpenClaw assistant run with your credentials, your email address, and input from the outside world.

Setup

My setup chooses to remove all access to private data, since for programming an agent does not need access to any data that should not be publicly known.

  • Claude Code within its own Docker container
  • Runs as root there
  • Mount /home/corion/claude-in-docker/.claude as /root/.claude
  • Mount working directory as /claude
  • (maybe) mount other needed directories as read-only, but I haven't felt the need for that
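The bullet points above translate to roughly the following invocation. This is a sketch; the image name claude-in-docker and the use of $HOME for the config path are assumptions, not my exact script.

```shell
# Sketch of the container launch matching the mounts listed above:
# the persistent .claude directory and the working directory as /claude.
# ${PODMAN:-podman} lets you substitute 'echo' for a dry run.
cc_container() {
    "${PODMAN:-podman}" run -it --rm \
        -v "$HOME/claude-in-docker/.claude:/root/.claude" \
        -v "$PWD:/claude" \
        -w /claude \
        claude-in-docker "$@"
}
```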

Dockerfile

FROM docker.io/library/debian:trixie-slim
# debian-trixie-slim
RUN <<EOF
# Abort the build if any command fails; heredoc RUN does not do this by default
set -e
apt-get update

# Install our packages
DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y npm perl build-essential imagemagick git apache2 wireguard wget curl cpanminus liblocal-lib-perl ripgrep

# Install claude
curl -fsSL https://claude.ai/install.sh | bash

# Set up our directories to be mountable from the outside
mkdir -p /work
mkdir -p /root/.claude

# Now you need to /login with claude :-/

# claude plugins install superpowers@superpowers-marketplace

EOF

# Add claude to the search path
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin"
ENTRYPOINT ["bash"]
CMD ["-i"]

Script to launch CC

Of course, the first thing an AI agent gets used for is writing a script that launches the AI agent in a container. This script is still very much under development, as I keep finding use cases it does not cover.

Development notes

While developing the script, I found that Claude Code very much needs example sections to work from. On its own, it comes up with code that is not really suitable. This mildly reinforces my impression that the average Perl code used for training is not very good.