From GitHub to GitLab: A Practical Guide to CI/CD Independence

Why we moved our CI/CD pipeline to GitLab—and what we learned about Docker-in-Docker privileges along the way.

By Jurg van Vliet

There's something awkward about building a European digital sovereignty platform on Microsoft-owned infrastructure. GitHub Actions is convenient—deeply integrated, well-documented, generous free tier—but every workflow run happens on Azure machines in regions Microsoft chooses. Your source code, build logs, and deployment secrets all flow through American infrastructure.

For Clouds of Europe, resolving that contradiction is our raison d'être. We migrated our entire CI/CD pipeline to a self-hosted GitLab instance, and this article shares the practical lessons: the security scanning setup, the Docker build detour through Kaniko and back, and the workflow patterns that made selective deployments straightforward.

Why GitLab, Not Just "Not GitHub"

The obvious question: why not Gitea, Forgejo, or another lightweight Git server? The answer is CI/CD maturity. GitLab's pipeline system has evolved for over a decade. The runner ecosystem is stable. The documentation is comprehensive. And critically, you can self-host it on infrastructure you control.

We run our GitLab instance on European infrastructure managed by Aknostic. The runners execute on European servers. Build artifacts stay in European storage. There's no ambiguity about jurisdiction—GDPR applies, and American subpoenas don't.

This isn't paranoia. For organizations handling European citizen data, regulatory clarity matters. "Our CI/CD runs on Microsoft Azure" is a compliance conversation you don't want to have with a German regulator.

The Pipeline Architecture

Our pipeline has three stages:

stages:
  - security
  - build
  - deploy

Each stage is in its own file under .gitlab/ci/, included from the main .gitlab-ci.yml. This modularity makes it easy to understand what each stage does and to modify stages independently.
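A minimal top-level file under that layout might look like the following sketch (the individual file names under .gitlab/ci/ are illustrative, not taken from the repo):

```yaml
# .gitlab-ci.yml (sketch): stage definitions plus one include per stage.
stages:
  - security
  - build
  - deploy

include:
  - local: ".gitlab/ci/security.yml"
  - local: ".gitlab/ci/build.yml"
  - local: ".gitlab/ci/deploy.yml"
```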

The main configuration also defines pipeline inputs—dropdown menus in GitLab's "Run Pipeline" UI that let you skip stages or select deployment targets:

spec:
  inputs:
    start_with:
      description: "Stage to start from (skip earlier stages)"
      default: "security"
      options:
        - "security"
        - "build"
        - "deploy"
    deploy_env:
      description: "Environment to deploy"
      default: "test"
      options:
        - "test"
        - "production"

This seemingly minor feature turned out to be valuable. When you need to re-deploy production without rebuilding (maybe Flux got stuck, maybe you're testing deployment scripts), you can select start_with: deploy and deploy_env: production. No waiting for security scans and Docker builds you don't need.
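In GitLab's `spec:inputs` syntax, input values are referenced with `$[[ inputs.name ]]` interpolation. One common pattern — an assumption about the setup, not shown verbatim in the repo — is to surface inputs as pipeline variables so that later `rules:` clauses can test them:

```yaml
# Sketch: mapping spec inputs onto variables that rules can evaluate.
variables:
  START_WITH: $[[ inputs.start_with ]]
  DEPLOY_ENV: $[[ inputs.deploy_env ]]

# A security job could then skip itself when the operator starts later:
trufflehog-scan:
  rules:
    - if: $START_WITH == "security"
```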

Security Scanning: TruffleHog and SOPS Validation

GitHub Actions has a marketplace of security scanning actions. GitLab has... less of that. But it turns out that building your own security stage isn't hard, and the result is more transparent.

Our security stage runs three jobs:

TruffleHog Secret Scanning

trufflehog-scan:
  stage: security
  image: trufflesecurity/trufflehog:latest
  variables:
    GIT_DEPTH: 0  # Full clone for history scanning
  script:
    - |
      # Determine scan range based on pipeline type
      if [ "$CI_PIPELINE_SOURCE" = "merge_request_event" ]; then
        BASE_SHA="$CI_MERGE_REQUEST_TARGET_BRANCH_SHA"
        echo "Scanning MR changes from $BASE_SHA to HEAD"
      elif [ -n "$CI_COMMIT_BEFORE_SHA" ] && [ "$CI_COMMIT_BEFORE_SHA" != "0000000000000000000000000000000000000000" ]; then
        BASE_SHA="$CI_COMMIT_BEFORE_SHA"
        echo "Scanning push from $BASE_SHA to HEAD"
      else
        BASE_SHA="HEAD~10"
        echo "Scanning last 10 commits"
      fi

      trufflehog git file://. --since-commit="$BASE_SHA" --only-verified --fail --no-update

The key insight is --only-verified. TruffleHog can detect patterns that look like secrets (API keys, tokens, passwords), but many of these are false positives—example configurations, test fixtures, documentation. The --only-verified flag tells TruffleHog to actually test credentials against their services (AWS, GitHub, Slack, etc.) and only fail if they're real and active.

This makes the scan dramatically more useful. You're not wading through hundreds of "this looks like it might be a secret" warnings. You're getting actionable "this is a real credential that works right now" alerts.

Hardcoded Secrets Check

TruffleHog catches committed credentials, but it doesn't catch everything. We have a simpler check that looks for patterns specific to our codebase:

hardcoded-secrets-check:
  stage: security
  image: alpine:latest
  script:
    - |
      FOUND_ISSUES=0

      # Check for plain text appPassword in config files
      if grep -r 'appPassword:.*"[a-zA-Z0-9]' --include="*.yaml" --include="*.yml" . 2>/dev/null | \
         grep -v -E "(template|values|example|gitops|node_modules|\.git|tests|docs|helm)" | grep -q .; then
        echo "Found plain text appPassword"
        FOUND_ISSUES=1
      fi

      # Check for .plain.yaml files
      if find . -name "*.plain.yaml" ! -path "./.git/*" 2>/dev/null | grep -q .; then
        echo "Found plain text secret files (*.plain.yaml)"
        FOUND_ISSUES=1
      fi

      if [ $FOUND_ISSUES -eq 0 ]; then
        echo "No hardcoded secrets detected"
      else
        exit 1
      fi

This catches our specific anti-patterns: YAML files with appPassword that aren't in the expected locations (templates, examples), and .plain.yaml files that someone forgot to encrypt before committing.
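The grep pipeline can be exercised locally before committing. A minimal sketch — the file names and contents below are hypothetical, chosen to show one flagged leak and one allowed placeholder:

```shell
# Demo of the appPassword check: one real-looking leak (config.yaml)
# and one file excluded by the path filter (values.yaml).
demo_dir=$(mktemp -d)
cd "$demo_dir"

printf 'nextcloud:\n  appPassword: "s3cretValue"\n' > config.yaml   # should be flagged
printf 'nextcloud:\n  appPassword: "placeholder"\n' > values.yaml   # filtered out

FOUND=0
if grep -r 'appPassword:.*"[a-zA-Z0-9]' --include="*.yaml" --include="*.yml" . 2>/dev/null | \
   grep -v -E "(template|values|example)" | grep -q .; then
  FOUND=1
fi
echo "FOUND=$FOUND"
```

Running this prints FOUND=1: config.yaml matches the pattern and survives the exclusion filter, while the values.yaml line is dropped by the `grep -v`.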

SOPS Validation

The third check ensures our encrypted secrets are actually encrypted:

sops-validation:
  stage: security
  image: alpine:latest
  script:
    - |
      # Find YAML files in sops-secrets directories
      ENCRYPTED_FILES=$(find gitops/ -path "*/sops-secrets/*.yaml" ! -name "kustomization.yaml" 2>/dev/null || true)

      for file in $ENCRYPTED_FILES; do
        if ! grep -q "ENC\[AES" "$file" && ! grep -q "sops:" "$file"; then
          echo "File $file in sops-secrets/ doesn't appear to be SOPS encrypted"
          exit 1
        fi
        echo "$file is properly encrypted"
      done

The ! -name "kustomization.yaml" exclusion is important—we learned this the hard way. Kustomization files in sops-secrets/ directories are configuration, not secrets. They shouldn't be encrypted, and checking them was causing false positives.
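For reference, a SOPS-encrypted YAML file carries both markers the check looks for: ENC[AES...] ciphertext values and a sops: metadata block. An abbreviated illustration (not a real secret; all values are placeholders):

```yaml
# Illustrative shape of a SOPS-encrypted Kubernetes Secret
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
stringData:
  appPassword: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
  age:
    - recipient: age1...
  lastmodified: "..."
  version: "..."
```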

The Kaniko Detour

Now we get to the interesting part: Docker builds. GitHub Actions runners have Docker available by default. GitLab runners... it's complicated.

The standard approach is Docker-in-Docker (DinD): run a Docker daemon as a sidecar service, and connect to it from your build job. This works, but it requires privileged runners—the Docker daemon needs capabilities that aren't available in unprivileged containers.

Privileged runners are a security concern. A malicious job could potentially escape the container and access the host. For a self-hosted GitLab instance, this means trusting every job that runs on your infrastructure.

We tried Kaniko, Google's tool for building container images without Docker:

build-and-push:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.23.2-debug
    entrypoint: [""]
  before_script:
    - |
      # Create Kaniko config for registry authentication
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"${REGISTRY}\":{\"auth\":\"$(echo -n nologin:${SCW_SECRET_KEY} | base64)\"}}}" > /kaniko/.docker/config.json
  script:
    - |
      /kaniko/executor \
        --context="${CI_PROJECT_DIR}" \
        --dockerfile="${CI_PROJECT_DIR}/Dockerfile" \
        --destination="${IMAGE_BASE}:${CI_COMMIT_REF_SLUG}-${SHORT_SHA}" \
        --cache=true \
        --cache-repo="${IMAGE_BASE}/cache"

Kaniko works differently: it unpacks the base image, executes Dockerfile instructions as file operations, and repacks the result. No Docker daemon, no privileged mode.

The problem? Kaniko's authentication handling is awkward. The config file needs to be in /kaniko/.docker/, which is read-only in the executor image. We tried mounting it from the project directory with --kaniko-dir, but that introduced other permissions issues. Registry authentication that worked fine with docker login required careful base64 encoding with Kaniko.

After spending a day fighting Kaniko's quirks, we stepped back and asked: what's the actual risk we're mitigating?

Our GitLab runners are dedicated to our projects. We control what jobs run on them. The privileged mode concern is real for shared runners where untrusted code might execute, but that's not our situation.

We reverted to Docker-in-Docker:

build-and-push:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_BUILDKIT: "1"
  before_script:
    - docker info
    - echo "$SCW_SECRET_KEY" | docker login $REGISTRY -u nologin --password-stdin

The key configuration is DOCKER_HOST: tcp://docker:2375 and DOCKER_TLS_CERTDIR: "". This disables TLS between the build job and the DinD sidecar. In a trusted network (which a Kubernetes pod's localhost is), TLS adds complexity without meaningful security benefit.
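For completeness: DinD also requires privileged mode on the runner itself, which is set in the runner's own configuration rather than the pipeline. A sketch of the relevant fragment for the Docker executor (values illustrative):

```toml
# config.toml fragment for a GitLab runner using the Docker executor
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:24"
    privileged = true   # required for the docker:24-dind service to start
```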

The lesson: sometimes the straightforward solution is the right one. Kaniko solves a real problem—building images in environments where you can't run privileged containers—but if you control your runners, the added complexity isn't worth it.

Deployment: OpenTofu and Flux

The deploy stage handles infrastructure provisioning (OpenTofu) and application deployment (Flux GitOps). Here's the test environment deployment:

deploy-test:
  stage: deploy
  image: alpine:latest  # tofu, kubectl, and flux are installed in before_script steps omitted here
  environment:
    name: test
    url: https://test.clouds-of-europe.eu
  script:
    - |
      # Initialize OpenTofu with Scaleway S3 backend
      cd infrastructure/opentofu/environments/test
      tofu init -upgrade \
        -backend-config="bucket=coe-opentofu-state" \
        -backend-config="key=test/terraform.tfstate" \
        -backend-config="region=fr-par" \
        -backend-config="endpoint=https://s3.fr-par.scw.cloud" \
        -backend-config="access_key=${SCW_ACCESS_KEY}" \
        -backend-config="secret_key=${SCW_SECRET_KEY}"

      tofu plan -out=tfplan
      tofu apply -auto-approve tfplan

      # Extract kubeconfig (written to the project dir so it can be saved as an artifact)
      tofu output -raw kubeconfig > "${CI_PROJECT_DIR}/kubeconfig-test.yaml"
      export KUBECONFIG="${CI_PROJECT_DIR}/kubeconfig-test.yaml"

      # Bootstrap or reconcile Flux
      if kubectl get deployment source-controller -n flux-system &>/dev/null; then
        flux reconcile source git flux-system -n flux-system
      else
        flux bootstrap gitlab \
          --hostname=gitlab.aknostic.com \
          --owner=clouds-of-europe \
          --repository=clouds-of-europe \
          --branch=main \
          --path=gitops/clusters/test \
          --token-auth
      fi
  artifacts:
    paths:
      - kubeconfig-test.yaml
    expire_in: 7 days

A few things worth noting:

OpenTofu over Terraform: OpenTofu is the open-source fork of Terraform created after HashiCorp's license change. For a sovereignty-focused platform, using truly open-source infrastructure tooling aligns with our values.

Kubeconfig as artifact: After deployment, we save the kubeconfig file as a pipeline artifact. This means you can download cluster access credentials directly from the GitLab UI—useful for debugging, and the 7-day expiration means credentials don't persist forever.

Flux idempotence: The script checks whether Flux is already installed before bootstrapping. This makes the deployment job idempotent—you can run it multiple times without breaking the cluster.

Environment-specific secrets: The SOPS Age key differs between test and production (SOPS_AGE_KEY_TEST vs SOPS_AGE_KEY_PRODUCTION). Each environment can only decrypt its own secrets.
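On the cluster side, Flux decrypts those secrets via a decryption block on its Kustomization, pointing at a Kubernetes Secret holding that environment's Age private key. A sketch — the resource and secret names here are assumptions, not taken from the repo:

```yaml
# Flux Kustomization with SOPS decryption enabled (illustrative names)
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./gitops/clusters/test
  sourceRef:
    kind: GitRepository
    name: flux-system
  prune: true
  decryption:
    provider: sops
    secretRef:
      name: sops-age   # holds the environment-specific Age private key
```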

Production Deployment: Automatic When You Want It

Production deployment requires explicit opt-in, but we made it automatic when you've already decided:

deploy-production:
  rules:
    # Run automatically when DEPLOY_ENV=production
    - if: $CI_PIPELINE_SOURCE == "web" && $DEPLOY_ENV == "production"
    # Manual trigger for other web runs
    - if: $CI_PIPELINE_SOURCE == "web"
      when: manual

If you run a pipeline from the web UI and select deploy_env: production, the deployment starts immediately—no clicking through manual gates. But if you run a normal pipeline without that explicit selection, production deployment requires a manual click.

This balances safety with efficiency. You don't accidentally deploy to production, but when you intend to, you're not clicking through unnecessary confirmation dialogs.

What We Gained

Beyond independence, the migration gave us tangible improvements:

Clearer stage separation: Security scanning, builds, and deployments are distinct. A build failure doesn't require re-running security scans. A deployment re-run doesn't require rebuilding.

Better artifact management: Docker images push to Scaleway's registry in Paris. State files live in Scaleway S3. Kubeconfigs save as pipeline artifacts. Everything has a clear location.

Workflow flexibility: The start_with and deploy_env inputs let operators skip stages and target environments without editing pipeline code.

Transparent security: Instead of trusting a marketplace action, we see exactly what TruffleHog and our custom checks do. When something fails, the debug path is clear.

What We Lost

Honesty requires acknowledging the trade-offs:

GitHub Actions' ecosystem: The marketplace has thousands of actions. GitLab's ecosystem is smaller. We wrote more shell scripts.

Documentation and community: Stack Overflow has more GitHub Actions answers. When something breaks, you're more likely to find someone who's seen it before.

Integration convenience: GitHub Actions workflows can reference other repositories, use GitHub's secrets management, trigger on GitHub events. We had to build some of this ourselves.

For Clouds of Europe, independence outweighed these inconveniences. For projects without regulatory requirements or philosophical commitments to European infrastructure, GitHub Actions might be the better choice. The point isn't that GitLab is universally superior—it's that migration is achievable when independence matters.

Key Takeaways

Self-hosted CI/CD is achievable. GitLab's runner model is mature. The pipeline syntax is well-documented. You can migrate from GitHub Actions without heroic effort.

Build your own security scanning. TruffleHog with --only-verified is more useful than marketplace actions that generate noise. Custom checks for your codebase's specific anti-patterns catch what generic tools miss.

Question your assumptions about privileged runners. Kaniko solves a real problem, but if you control your runners, Docker-in-Docker with proper configuration might be simpler.

Use pipeline inputs for workflow flexibility. The ability to skip stages and select deployment targets without editing code makes operations smoother.

Save kubeconfigs as artifacts. Having cluster access available from the pipeline UI is valuable for debugging and emergency access.


This article documents work done on the Clouds of Europe platform in January 2026.