This directory contains everything needed to run the Manabu Ninja production server: a single DigitalOcean droplet serving three subdomains through containerized services managed by Podman and Quadlet.
| Subdomain | What it serves | Container |
|---|---|---|
| manabu.ninja | Marketing website (Astro static site) | Caddy (bind-mount /srv/site/) |
| app.manabu.ninja | Flutter web app | Caddy (bind-mount /srv/app/) |
| fossil.manabu.ninja | Fossil SCM repository browser | Fossil (reverse-proxied by Caddy) |
## 🏗️ Architecture

Both Caddy and Fossil run as rootful Podman containers managed by Quadlet (systemd-native `.container` files). They share a Podman network so Caddy can reach Fossil by container name (`fossil:8080`). Fossil has no published ports -- it's only accessible through Caddy's reverse proxy.
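The reverse-proxy wiring can be sketched as a Caddyfile site block (illustrative only; the real configuration lives in `containers/caddy/Caddyfile` and may differ):

```caddyfile
# Sketch -- Caddy terminates TLS and forwards to the Fossil container by name
fossil.manabu.ninja {
    reverse_proxy fossil:8080
}
```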
Container images are built locally on your dev machine and copied to the server via scp (no registry needed).
### Container layout

```
Podman network: manabu
├── caddy (caddy:2-alpine)
│   ├── ports 80, 443 published
│   ├── /etc/caddy/Caddyfile (bind-mount)
│   ├── /srv/site, /srv/app (bind-mount, read-only)
│   └── caddy-data, caddy-config (named volumes)
└── fossil (custom Alpine image)
    ├── no published ports (internal only)
    └── /srv/fossil:/museum (bind-mount)
```
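As a rough sketch, the Fossil Quadlet unit implied by this layout could look like the following (the image tag and `[Unit]`/`[Install]` details are assumptions; the real file is `containers/fossil/fossil.container`):

```ini
# fossil.container (sketch -- image tag is assumed)
[Unit]
Description=Fossil SCM server

[Container]
Image=localhost/fossil:latest
Network=manabu.network
Volume=/srv/fossil:/museum
# No PublishPort= -- reachable only via the shared network
Exec=server /museum/repo.fossil --port 8080 --repolist --https --baseurl https://fossil.manabu.ninja

[Install]
WantedBy=multi-user.target
```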
## 📁 Files

| File | What it does |
|---|---|
| `containers/manabu.network` | Quadlet network definition -- shared bridge network |
| `containers/caddy/caddy.container` | Quadlet unit for Caddy |
| `containers/caddy/Caddyfile` | Caddy web server configuration |
| `containers/caddy/caddy-data.volume` | Quadlet volume for TLS certificates |
| `containers/caddy/caddy-config.volume` | Quadlet volume for Caddy config cache |
| `containers/fossil/fossil.container` | Quadlet unit for Fossil |
| `containers/fossil/Dockerfile` | Alpine-based Fossil image |
| `setup.sh` | Server provisioning -- installs Podman, loads images, deploys Quadlet files |
| `pulumi/` | Pulumi program (TypeScript) -- DigitalOcean resources + provisioning |
## 🧰 Prerequisites

You only need these if you're managing the server infrastructure. Contributors working on the app or website don't need any of this.

- Node.js (LTS 18+) -- `winget install OpenJS.NodeJS.LTS` / `brew install node`
- Pulumi CLI -- `winget install Pulumi.Pulumi` / `brew install pulumi`
- Task -- `winget install Task.Task` / `brew install go-task`
- Podman -- `winget install RedHat.Podman` / see docs
- doctl (optional) -- DigitalOcean CLI
You'll also need:
- A DigitalOcean account with an API token (Full Access, read+write)
- An SSH key uploaded to DigitalOcean (the same key pair you use locally)
## 🏁 Initial Setup (First Time Only)

### 1. Install Pulumi dependencies

```sh
cd infra/pulumi
npm install
```

### 2. Create a Pulumi stack

```sh
pulumi stack init prd
```

### 3. Authenticate and configure

```sh
# Authenticate with DigitalOcean
doctl auth init --access-token <your-token>
# Or set as an environment variable
# PowerShell: $env:DIGITALOCEAN_ACCESS_TOKEN = "<your-token>"

# Find your SSH key name
doctl compute ssh-key list

# Configure Pulumi
cd infra/pulumi
pulumi config set digitalocean:token <your-token> --secret
pulumi config set sshKeyName "<your-key-name>"
# Only if not ~/.ssh/id_ed25519:
pulumi config set privateKeyPath "C:\Users\you\.ssh\id_rsa"
```

### 4. Preview and deploy

```sh
pulumi preview  # Dry run
pulumi up       # Create everything
```

Wait for DNS to propagate: `nslookup manabu.ninja`
### 5. Set up Fossil repository

`pulumi up` creates an empty Fossil repo on the server. To replace it with your local repo:

```sh
ssh root@<ip> "systemctl stop fossil"
scp <path-to-your-local.fossil> root@<ip>:/srv/fossil/repo.fossil
ssh root@<ip> "systemctl start fossil"
```

**Important:** Always stop Fossil before overwriting the repo file. Writing while Fossil is running will corrupt the database.

Then create a user account:

```sh
ssh root@<ip> "podman exec fossil fossil user new <username> -R /museum/repo.fossil"
ssh root@<ip> "podman exec fossil fossil user capabilities <username> sy -R /museum/repo.fossil"
```

The `s` capability is admin/setup. The `y` capability is required for pushing unversioned files.

### 6. Configure local Fossil remote

```sh
fossil remote https://<username>@fossil.manabu.ninja/repo.fossil
fossil setting uv-sync 1
fossil sync  # Test the connection
```
## 🔄 Day-to-Day Operations

All `task` commands run from the project root.

### Content deployment

| Command | What it does |
|---|---|
| `task site:deploy` | Deploy marketing site |
| `task deploy-app` | Deploy Flutter web app |
| `task site:deploy && task deploy-app` | Deploy both |
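These recipes live in the project's Taskfile. As a purely hypothetical sketch of what a deploy recipe might look like (the build command, `DROPLET_IP` variable, and `rsync` usage are illustrative assumptions, not the actual recipe):

```yaml
# Taskfile.yml (sketch -- names and paths are illustrative)
version: "3"

vars:
  DROPLET_IP: "<ip>"   # hypothetical variable

tasks:
  site:deploy:
    desc: Build the Astro site and upload it to /srv/site/
    cmds:
      - npm run build   # emits dist/
      - rsync -az --delete dist/ root@{{.DROPLET_IP}}:/srv/site/
      - ssh root@{{.DROPLET_IP}} "chmod -R o+rX /srv/site/"
```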
### Container and server management

| Command | What it does |
|---|---|
| `task infra:build` | Build images locally |
| `task infra:deploy` | Build + push images + restart |
| `task infra:config` | Push updated Caddyfile |
| `task infra:quadlet` | Push updated Quadlet files |
| `task infra:ssh` | SSH into server |
| `task infra:status` | Check container status |
| `pulumi up` | Full re-provision |

### App release

| Command | What it does |
|---|---|
| `task build-apk` | Build Android APK |
| `task build-windows` | Build Windows executable |
| `task build` | Build all platforms |
| `task release` | Build + upload to Fossil UV |
## 🔧 Troubleshooting

### Containers not running

```sh
task infra:status

# Or manually:
ssh root@<ip> "podman ps"
ssh root@<ip> "systemctl status caddy"
ssh root@<ip> "systemctl status fossil"
```

Restart if needed:

```sh
ssh root@<ip> "systemctl restart caddy"
ssh root@<ip> "systemctl restart fossil"
```
### View logs

```sh
ssh root@<ip> "podman logs caddy"
ssh root@<ip> "podman logs fossil"
ssh root@<ip> "journalctl -u caddy --no-pager -n 50"
ssh root@<ip> "podman logs -f caddy"   # Follow in real time
```
### HTTP 403 on static files

File permission issue. Deploy recipes fix this automatically, but if you uploaded manually:

```sh
ssh root@<ip> "chmod -R o+rX /srv/site/"
ssh root@<ip> "chmod -R o+rX /srv/app/"
```
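The capital `X` matters here: it grants execute (traversal) permission only to directories and to files that are already executable, so static files stay non-executable. A quick local demonstration in a scratch directory (paths are illustrative; `stat -c` is GNU coreutils):

```shell
dir=$(mktemp -d)                          # scratch area standing in for /srv
mkdir -p "$dir/site/css"
printf 'body{}' > "$dir/site/css/main.css"
chmod 700 "$dir/site" "$dir/site/css"     # world has no access -> Caddy would 403
chmod 600 "$dir/site/css/main.css"

chmod -R o+rX "$dir/site"                 # X adds execute only where traversal needs it

stat -c '%a %n' "$dir/site/css" "$dir/site/css/main.css"
# directories end up 705 (world can traverse), the file 604 (world can read, not execute)
```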
### HTTPS certificate issues

Caddy auto-provisions Let's Encrypt certificates. If HTTPS isn't working, check:

- DNS resolves to the correct IP (`nslookup manabu.ninja`)
- Ports 80/443 are open (the Pulumi firewall handles this)
- Caddy is running (`podman ps`)

```sh
ssh root@<ip> "podman logs caddy 2>&1 | grep -i cert"
```
### Fossil assets showing HTTP instead of HTTPS

The Fossil container needs the `--https` and `--baseurl` flags. Verify `fossil.container` includes:

```ini
Exec=server /museum/repo.fossil --port 8080 --repolist --https --baseurl https://fossil.manabu.ninja
```

Redeploy if changed: `task infra:quadlet`
### Fossil sync hangs

Large initial syncs may stall through the Caddy reverse proxy. For the initial upload, `scp` the repo file directly (see step 5); incremental syncs work fine. If a sync hangs, press Ctrl+C, restart Fossil, then retry:

```sh
ssh root@<ip> "systemctl restart fossil"
```
### Fossil database corruption

If logs show `SQLITE_CORRUPT` (usually caused by writing to the repo file while Fossil was running), stop the service, rebuild the repository, then start it again. Note that `podman exec` can't be used for the rebuild: stopping the unit stops the container, so run the rebuild in a one-off container instead (assuming the custom image is tagged `localhost/fossil` with `fossil` as its entrypoint, consistent with the Quadlet `Exec=` line):

```sh
ssh root@<ip> "systemctl stop fossil"
ssh root@<ip> "podman run --rm -v /srv/fossil:/museum localhost/fossil rebuild /museum/repo.fossil"
ssh root@<ip> "systemctl start fossil"
```
### Redeploying from scratch

1. `pulumi up` -- recreates the droplet and provisions it
2. Wait for DNS propagation
3. Upload the Fossil repo (see step 5)
4. `task site:deploy && task deploy-app` -- upload the website and app