# Aria2 Proxy
## Overview

Purpose-built Alpine image for the KubeVirt download-proxy Deployment in the `angelscript` namespace. It replaces the previous `FROM alpine:3.21` + `apk add aria2` pattern, which added a 30+ second cold start to every pod restart and made the proxy dependent on upstream mirror availability during CI build stages.
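As a rough illustration, the replacement image bakes everything in at build time. The following Dockerfile is a hedged sketch, not the actual file: the package names are inferred from the binaries listed under "Image layout", and the user/entrypoint details are assumptions.

```dockerfile
# Pin the Alpine release tag so dependency versions are reproducible
FROM alpine:3.21

# Bake runtime dependencies into the image instead of installing at pod start
RUN apk add --no-cache aria2 python3 tini ca-certificates

# Unprivileged user matching the uid/gid the container runs as
RUN addgroup -g 1000 appuser && adduser -D -u 1000 -G appuser appuser
USER appuser
WORKDIR /downloads

EXPOSE 6800 8080

# tini as PID 1 reaps children and forwards SIGTERM to the child process
ENTRYPOINT ["/sbin/tini", "--"]
```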
## Why a custom image

The download-proxy sits between the Windows/macOS builder VMs and the public internet. Its job is to pull large artefacts (UE5 ~40 GB, Visual Studio Build Tools ~5 GB, macOS Xcode `.xip` ~15 GB) with resume and multi-connection support, drop them on a shared PVC, and serve them back to the VM over plain HTTP to avoid Cilium VXLAN masquerade NAT dropping long-lived TCP connections.
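The resume and multi-connection behaviour maps onto a handful of standard aria2 options. A sketch of the relevant settings follows; the exact values are illustrative assumptions, not copied from the Deployment.

```ini
# aria2.conf sketch (values are illustrative)
dir=/downloads                 # land artefacts on the shared PVC
continue=true                  # resume partial downloads after a pod restart
split=16                       # open up to 16 connections per download
max-connection-per-server=16
max-tries=0                    # 0 = retry indefinitely on flaky upstreams
retry-wait=30                  # seconds to wait between retries
```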
A custom image makes that reliable in three ways:
- **No cold-start `apk add`:** aria2 and python are baked in, so the pod is serving within seconds of scheduling.
- **Air-gap-safe:** the image's dependencies are pinned via the Alpine release tag; a transient Alpine mirror outage does not block VM provisioning.
- **Signal handling:** tini runs as PID 1, so kubelet's `SIGTERM` propagates cleanly, stopping in-flight downloads instead of leaving aria2 hanging through the termination grace period.
## Image layout

| Binary | Purpose |
|---|---|
| `aria2c` | Downloader with resume, retry, and multi-connection support |
| `python3` | Serves completed downloads via `python3 -m http.server` |
| `tini` | PID 1 reaper so `SIGTERM` shuts down cleanly |
| `ca-certificates` | TLS trust for HTTPS upstream sources |

The image runs as uid/gid 1000 (`appuser`); the workdir is `/downloads`. Ports 6800 (aria2 RPC) and 8080 (file-server HTTP) are exposed. Since all containers in a pod share one network namespace, both services bind into the same netns.
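The file-server side is just Python's stdlib HTTP server pointed at the download directory. A self-contained sketch of the equivalent behaviour, using a temp directory and an ephemeral port in place of `/downloads` and 8080:

```python
import functools
import http.server
import pathlib
import socketserver
import tempfile
import threading
import urllib.request

# Serve a directory the same way `python3 -m http.server 8080 --directory /downloads` does.
workdir = tempfile.mkdtemp()
pathlib.Path(workdir, "artefact.bin").write_bytes(b"payload")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=workdir)
server = socketserver.TCPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A builder VM would fetch the completed artefact back over plain HTTP.
port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/artefact.bin").read()
print(body)  # b'payload'
server.shutdown()
```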
## Used by

- `apps/kube/kubevirt/download-proxy/deployment.yaml`: both containers (aria2 + file-server) use this image; the Deployment provides the `command:` per container.
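A hedged sketch of how the two containers might be wired in that Deployment. Container names, the image tag, and the exact flags are assumptions; the real `command:` lines live in `deployment.yaml`.

```yaml
# Illustrative container specs; not copied from deployment.yaml
containers:
  - name: aria2
    image: aria2-proxy:latest   # this image; tag is a placeholder
    command: ["aria2c", "--enable-rpc", "--rpc-listen-all",
              "--rpc-listen-port=6800", "--dir=/downloads"]
    ports:
      - containerPort: 6800
    volumeMounts:
      - name: downloads
        mountPath: /downloads
  - name: file-server
    image: aria2-proxy:latest
    command: ["python3", "-m", "http.server", "8080", "--directory", "/downloads"]
    ports:
      - containerPort: 8080
    volumeMounts:
      - name: downloads
        mountPath: /downloads
```

Because both containers share the pod's network namespace, the aria2 RPC port and the file-server port are reachable on the same pod IP.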