European Go Proxy for EUR 93 per Month
European Go module proxy across three Scaleway regions (Paris, Amsterdam, Warsaw) for EUR 93/month. Full engineering teardown: architecture, cost breakdown, decisions made and rejected, and operational numbers from production.
By Jurg van Vliet
goproxy.eu is a multi-region Go module proxy serving European developers from Paris, Amsterdam, and Warsaw. Total infrastructure cost: EUR 93 per month. No proprietary dependencies. No vendor lock-in. Every line of configuration is in a public Git repository.
This article is the full engineering teardown. Architecture decisions, cost breakdown, what we tried and rejected, and operational numbers after running in production.
Architecture in 30 seconds
Three Kubernetes clusters on Scaleway Kapsule (Paris, Amsterdam, Warsaw). Each runs Athens behind Envoy Gateway with TLS from Let's Encrypt. Module cache in regional S3-compatible Object Storage. Redis for SingleFlight deduplication. Scaleway GeoDNS routes requests to the nearest region. Flux CD reconciles everything from Git.
Clients put goproxy.eu first in the proxy chain; on any failure the Go toolchain falls through to Google's proxy, then to direct VCS fetches:

```
GOPROXY=https://goproxy.eu,https://proxy.golang.org,direct
```
```
                goproxy.eu (GeoDNS ALIAS)
                          │
            ┌─────────────┼─────────────┐
            │             │             │
         fr-par        nl-ams        pl-waw    ← geo IP routes to nearest
            │             │             │
          Envoy         Envoy         Envoy    ← TLS, rate limiting (1000 req/min)
            │             │             │
         Athens        Athens        Athens    ← Go module proxy
            │             │             │
          Redis         Redis         Redis    ← SingleFlight dedup
            │             │             │
            S3            S3            S3     ← regional cache
            │             │             │
            └─────────────┼─────────────┘
                          │
                 proxy.golang.org              ← upstream on cache miss
```
Cost breakdown
| Component | Per region | 3 regions |
|---|---|---|
| Kapsule control plane | Free | Free |
| 1x DEV1-M node (3 vCPU, 4 GB) | EUR 14.50 | EUR 43.50 |
| Object Storage (50 GB cached modules) | EUR 0.38 | EUR 1.14 |
| Object Storage egress (500 GB/mo) | EUR 4.25 | EUR 12.75 |
| Load Balancer (Envoy Gateway) | EUR 11.50 | EUR 34.50 |
| DNS (GeoDNS) | — | EUR 1 |
| Total | EUR 31 | EUR 93/mo |
For comparison: JFrog Artifactory Cloud starts at $150/month for a single region with 2 GB transfer. A self-hosted Artifactory on AWS with equivalent European coverage (3 regions, ELBs, EBS, data transfer) runs $400-600/month before you count engineering time. Even a single Athens instance on a t3.medium in eu-west-1 with an ALB costs $50-70/month for one region.
EUR 93 buys three European regions with geo routing, automated TLS, GitOps deployment, and centralised observability. Sovereignty carries no cost premium.
Decisions that earned their keep
Per-region caches, no replication. Each region's Object Storage bucket warms independently from local usage. A module popular in Warsaw doesn't pre-populate in Paris. This eliminates cross-region data transfer costs, simplifies the GDPR story (no data moves between regions), and removes replication complexity. The trade-off — cold starts in under-used regions — is acceptable for a cache. A cache miss adds one round trip to upstream, then the module is cached for everyone in that region.
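Concretely, each region's Athens points at its own bucket and nothing else. A minimal sketch of the container environment for one region — the variable names (`ATHENS_S3_BUCKET_NAME`, `ATHENS_S3_ENDPOINT`) and bucket naming are as we recall them, so verify against the Athens docs for your version:

```yaml
# Athens storage env for the fr-par region (names and values illustrative)
env:
  - name: ATHENS_STORAGE_TYPE
    value: s3
  - name: ATHENS_S3_BUCKET_NAME      # region-local bucket; never replicated
    value: goproxy-cache-fr-par
  - name: AWS_REGION
    value: fr-par
  - name: ATHENS_S3_ENDPOINT         # Scaleway's S3-compatible endpoint
    value: https://s3.fr-par.scw.cloud
```

Because nothing in this block references another region, deleting a region deletes its cache and nothing else.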
ALIAS record at the apex, not CNAME. DNS zone apex records cannot be CNAMEs (RFC 1034 section 3.6.2). Scaleway supports ALIAS records with geo_ip routing, giving geographic load distribution at the apex. The trade-off: Scaleway's health checks work on A/AAAA records but not ALIAS. A region going down continues receiving traffic from its geo IP match. The Go client's built-in fallback chain handles this — developers see a brief timeout, then fall through to proxy.golang.org. We chose simplicity over DNS-level failover.
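In OpenTofu this is a single record resource. A sketch, with resource and attribute names as we recall them from the Scaleway provider (verify `geo_ip`/`matches` against the provider docs); the load-balancer hostnames and country groupings are illustrative:

```hcl
# Apex ALIAS with geo IP routing (attribute names and targets illustrative)
resource "scaleway_domain_record" "apex" {
  dns_zone = "goproxy.eu"
  name     = ""                               # zone apex
  type     = "ALIAS"
  data     = "lb.fr-par.example.scw.cloud."   # default target
  ttl      = 300

  geo_ip {
    matches {
      countries = ["FR", "ES", "PT"]
      data      = "lb.fr-par.example.scw.cloud."
    }
    matches {
      countries = ["NL", "DE", "BE"]
      data      = "lb.nl-ams.example.scw.cloud."
    }
    matches {
      countries = ["PL", "CZ", "SK"]
      data      = "lb.pl-waw.example.scw.cloud."
    }
  }
}
```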
Shared-bucket rate limiting, not per-IP. Envoy Gateway applies 1000 requests per minute per proxy instance as a shared bucket. Per-IP rate limiting would require either logging client IPs (breaking our privacy architecture) or running an external Redis rate-limit service (adding cost and complexity). The shared bucket is less precise but maintains the zero-IP-logging guarantee.
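In Envoy Gateway terms, this is a *local* rate limit: the bucket lives inside each proxy instance, so no external rate-limit service and no per-client state. A sketch of the policy (field names per Envoy Gateway's `BackendTrafficPolicy` API as we recall them; the route name is illustrative — verify for your version):

```yaml
# Local (per-proxy-instance) rate limit: one shared bucket, zero client IPs
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: shared-bucket-limit
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: athens            # illustrative route name
  rateLimit:
    type: Local               # no external rate-limit service, no per-IP keys
    local:
      rules:
        - limit:
            requests: 1000
            unit: Minute
```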
DaemonSet proxy, not Deployment. Envoy Gateway runs as a DaemonSet — one proxy pod per node. On a single-node cluster this makes no practical difference, but it guarantees the proxy scales with nodes if we ever scale up, without HPA configuration.
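Envoy Gateway controls this through its `EnvoyProxy` resource. A sketch of the switch (the `envoyDaemonSet` field exists in recent Envoy Gateway versions as we recall; verify for yours):

```yaml
# Run the Envoy data plane as a DaemonSet instead of the default Deployment
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: proxy-config
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDaemonSet: {}   # one proxy pod per node; replaces envoyDeployment
```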
GDPR enforcement at the architecture level
Privacy compliance is not a policy document. It is a configuration choice.
Layer 1: Envoy Gateway access log format. The format string omits %DOWNSTREAM_REMOTE_ADDRESS% and %REQ(X-FORWARDED-FOR)%. Client IPs never appear in access logs because the log format does not include them.
```
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
%RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
```
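The format string is wired in through the `EnvoyProxy` resource's telemetry settings. A sketch of where it lives (structure per Envoy Gateway's `EnvoyProxy` API as we recall it; verify field names for your version):

```yaml
# Access-log format without any client-address operators
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: proxy-config
spec:
  telemetry:
    accessLog:
      settings:
        - format:
            type: Text
            text: |
              [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
          sinks:
            - type: File
              file:
                path: /dev/stdout
```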
Layer 2: Alloy log pipeline. Grafana Alloy's log processing stage regex-replaces any IPv4 or IPv6 address pattern with [REDACTED] before shipping to Loki. If an IP leaks through application logs, it is scrubbed before it reaches storage.
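A sketch of that pipeline stage in Alloy's configuration language — the regexes are simplified (real IPv6 matching is messier) and the `loki.write` label is illustrative:

```alloy
// Scrub anything that looks like an IP before logs leave the node
loki.process "redact_ips" {
  forward_to = [loki.write.central.receiver]  // "central" label illustrative

  stage.replace {
    expression = "((?:\\d{1,3}\\.){3}\\d{1,3})"     // IPv4
    replace    = "[REDACTED]"
  }

  stage.replace {
    expression = "([0-9a-fA-F]{1,4}(?::[0-9a-fA-F]{0,4}){2,7})"  // crude IPv6
    replace    = "[REDACTED]"
  }
}
```

Redaction in the collector means a chatty application log cannot undo the guarantee the gateway provides.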
The result: our observability stack (Mimir for metrics, Loki for logs, Grafana for dashboards) contains zero client IP addresses. Not because we delete them after collection, but because they never arrive.
What we tried and rejected
Cross-region cache replication. S3 Cross-Region Replication would pre-warm all caches from a single seed. We rejected it because: (1) it triples storage cost for marginal benefit — most modules are small and upstream latency is acceptable for cold fetches; (2) it creates cross-region data flows that complicate the data residency story; (3) it adds operational complexity for a problem that solves itself through usage patterns.
HTTP-01 ACME challenges. Our initial setup used HTTP-01 for Let's Encrypt certificates. This requires the ACME challenge response to be routable through Envoy Gateway, which creates a circular dependency during initial deployment (no cert → no HTTPS → can't validate). We switched to DNS-01 challenges via a Scaleway cert-manager webhook, which validates through DNS TXT records and works regardless of ingress state.
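A sketch of the resulting issuer — the `groupName`/`solverName` values must match how the Scaleway webhook is deployed, and the contact email is illustrative:

```yaml
# DNS-01 issuer via the Scaleway cert-manager webhook: validates through
# DNS TXT records, so it works before any ingress exists
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.eu              # illustrative contact
    privateKeySecretRef:
      name: letsencrypt-dns-key
    solvers:
      - dns01:
          webhook:
            groupName: acme.scaleway.com   # must match the webhook deployment
            solverName: scaleway
```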
Prometheus per cluster. A Prometheus server in each cluster would add 500 MB+ of memory per region. Grafana Alloy (DaemonSet, ~100 MB) scrapes all metrics locally and remote-writes to a centralised Mimir instance on Heystaq. One fewer stateful workload per cluster, one fewer thing to monitor.
Operational reality
Deployment. Push to main, Flux reconciles within 60 seconds. Per-region Kustomize overlays handle the differences (S3 bucket names, endpoints, hostnames). Adding a region is: copy a tofu environment, copy a Flux cluster entry, copy a Kustomize overlay, bootstrap Flux, add a geo IP DNS record. We did the third region (Warsaw) in under two hours.
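A per-region overlay stays small because only names and endpoints differ. A sketch of what the Warsaw overlay might look like (paths, generator name, and bucket naming are illustrative, and `behavior: merge` assumes the base declares the same generator):

```yaml
# overlays/pl-waw/kustomization.yaml — only the per-region differences
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
configMapGenerator:
  - name: athens-config
    behavior: merge                    # override base values, keep the rest
    literals:
      - ATHENS_S3_BUCKET_NAME=goproxy-cache-pl-waw
      - ATHENS_S3_ENDPOINT=https://s3.pl-waw.scw.cloud
```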
Failover. No automatic DNS failover — Scaleway's geo IP ALIAS doesn't support health checks. When a region goes down, the Go client fallback chain handles it: timeout on goproxy.eu, fall through to proxy.golang.org, builds continue. We've run deliberate failover tests. Client-side recovery takes 5-10 seconds per module fetch during the timeout, then builds proceed normally against Google's proxy.
Secrets. SOPS with age encryption. Two age keys: one for our cluster secrets (API keys, credentials), one for Heystaq's cluster (observability config). Separate trust boundaries, both stored in Git encrypted. No external KMS dependency.
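The two trust boundaries are expressed in a single `.sops.yaml`. A sketch (path patterns and age recipients are illustrative placeholders):

```yaml
# .sops.yaml — two trust boundaries, one repository
creation_rules:
  # Our cluster secrets: decryptable only with our age key
  - path_regex: clusters/.*/secrets/.*\.yaml
    encrypted_regex: ^(data|stringData)$
    age: age1ourclusterkeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  # Observability config: decryptable only with Heystaq's age key
  - path_regex: observability/.*\.yaml
    encrypted_regex: ^(data|stringData)$
    age: age1heystaqkeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```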
Monitoring. Nine Grafana dashboards, 23 alert rules across 6 groups. Non-critical alerts route to Slack via GoAlert. Critical alerts (region down, certificate expiring, high error rate) escalate to SMS/voice. All metrics carry `cluster={fr-par,nl-ams,pl-waw}` labels for per-region filtering.
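Because all metrics converge in one Mimir instance, a per-region rule is just a `sum by (cluster)`. A sketch of what a region-down rule might look like (job name, threshold, and wording are illustrative):

```yaml
# Prometheus-format rule as loaded into the central Mimir instance
groups:
  - name: availability
    rules:
      - alert: RegionDown
        expr: sum by (cluster) (up{job="athens"}) == 0   # no live scrape target in a region
        for: 5m
        labels:
          severity: critical          # routes to SMS/voice escalation
        annotations:
          summary: "Athens unreachable in {{ $labels.cluster }}"
```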
The sovereignty part was free
We did not design for the EU Cloud Sovereignty Framework. We designed for operability: GitOps, open-source components, declarative secrets, European hosting. When we mapped the result against SEAL, it scored well on all eight Sovereignty Objectives — not because we optimised for compliance, but because good engineering practices and sovereignty requirements converge.
The practices that make a system auditable (everything in Git), portable (open-source components), and jurisdictionally contained (encrypted secrets, European infrastructure) are the same practices that make it operable. Sovereignty is a side effect of engineering discipline.
The stack
| Component | Version | License | Role |
|---|---|---|---|
| Athens | 0.15.x | MIT | Go module proxy |
| Envoy Gateway | 1.3.0 | Apache-2.0 | Ingress, TLS, rate limiting |
| Flux CD | 2.7.x | Apache-2.0 | GitOps |
| cert-manager | 1.17.x | Apache-2.0 | TLS automation |
| Redis | 7.x (Bitnami) | RSALv2 | SingleFlight dedup |
| Grafana Alloy | 1.6.x | Apache-2.0 | Metrics/logs collection |
| OpenTofu | 1.10.x | MPL-2.0 | Infrastructure as code |
| Scaleway Kapsule | K8s 1.35 | — | Managed Kubernetes |
No component is irreplaceable. Athens has alternatives (Goproxy, go-mod-proxy) though each has different configuration and storage backends. Envoy Gateway can be swapped for another Gateway API implementation. Flux for ArgoCD. Scaleway for any European cloud with managed Kubernetes and S3-compatible storage. Migration is never zero-effort, but nothing here creates a hard dependency.
goproxy.eu is a Clouds of Europe community project. Source code licensed under Apache-2.0.