Kaniko to build Docker images

My learnings

Posted by Tobias L. Maier on November 01, 2022

When building Docker images, one usually uses docker image build. This is great for the local development environment, but we also need to build Docker images within our CI/CD pipelines. Such CI/CD pipelines are usually hardened in some way, which may mean that one cannot run or access the Docker daemon from within the CI/CD environment (no privileged mode, no support for “Docker-in-Docker”).

In such a case, one must look for alternative approaches. At the time of writing, I see two alternative tools to build Docker images/containers without a Docker daemon, Kaniko being the one I picked.

Kaniko is also the approach recommended by GitLab for GitLab CI, which I use heavily for all my personal projects.

I have been using Kaniko within my GitLab CI pipelines since June 2019 and want to share my learnings in this post.

Let's start directly with the meat and then dissect it one by one:

.build-image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: ['']
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - echo ${BUNDLE_GEMS__MYGEMSERVER__COM?"BUNDLE_GEMS__MYGEMSERVER__COM must be set"} > /kaniko/gemserver_credentials
    - /kaniko/executor
      --cache-repo=$CI_REGISTRY_IMAGE/cache
      --cache=true
      --context $CI_PROJECT_DIR
      --destination $MY_IMAGE
      --destination $MY_IMAGE_LATEST
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --registry-mirror mirror.gcr.io
      --registry-mirror index.docker.io
      --skip-unused-stages
      --snapshot-mode=redo
      --target $TARGET
      --use-new-run
  dependencies: []

build test Docker image:
  extends: .build-image
  variables:
    MY_IMAGE_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:test-latest
    MY_IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:test-$CI_COMMIT_SHA
    TARGET: test
  stage: build

build production Docker image:
  extends: .build-image
  variables:
    MY_IMAGE_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest
    MY_IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
    TARGET: production
  stage: deploy

Allow access to a private Docker registry

In my case, I store all Docker images within GitLab. These are the test images and the production images, but also the build cache (ref. --cache-repo).

For this, we first need to create the /kaniko/.docker folder before we write the config.json manually:

mkdir -p /kaniko/.docker
echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json

The /kaniko/ folder is pretty interesting, and it took me a while to really understand its value: this folder is available during the ongoing container build, but it is not stored within the image layers being built.
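If you want to convince yourself of this, a quick sanity check after the pipeline has pushed the image could look like the following sketch (it assumes the final image contains a shell, and $MY_IMAGE stands for the pushed image name):

# The /kaniko folder, and with it the registry credentials, should not be part of the pushed image
docker pull "$MY_IMAGE"
docker run --rm "$MY_IMAGE" sh -c 'ls /kaniko || echo "/kaniko is not part of the image"'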

Access private artifact repositories (e.g. private Gemserver) from within Kaniko build

I am hosting private artifacts, which I need to install during the Docker image build. In my case these are private gems for Ruby, but the same pattern applies to other package managers (e.g. yarn, npm) as well.

The challenge is: how do we authenticate against the private artifact repository during the container build?

Bad Practice: Provide auth credentials via build arguments

One approach to provide the credentials would be to use ARG; Bundler then picks the variable up from the environment during bundle install:

COPY [".ruby-version", "Gemfile", "Gemfile.lock", "/usr/src/app/"]
ARG BUNDLE_GEMS__MYGEMSERVER__COM
RUN true \
  && bundle install -j "$(getconf _NPROCESSORS_ONLN)" \
  && rm -rf /usr/local/bundle/cache \
  && find /usr/local/bundle/ -name ".git" -exec rm -rv {} + \
  && find /usr/local/bundle/ -name "*.c" -delete \
  && find /usr/local/bundle/ -name "*.o" -delete \
  && rm -rf /usr/local/bundle/ruby/*/cache \
  && bundle clean --force

The issue with this approach is that the build argument is baked into the image metadata, and anyone with access to that Docker image can simply extract it.
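To illustrate why: build arguments consumed by a RUN instruction end up in the image history. Something along these lines (the exact output format depends on the Docker version) is enough to recover the secret:

# Pull the image and search its metadata for the build argument
docker pull "$MY_IMAGE"
docker history --no-trunc "$MY_IMAGE" | grep BUNDLE_GEMS__MYGEMSERVER__COM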

One might be able to mitigate this using multi-stage builds, but in general this approach is insecure and cannot be recommended.

Best practice: Provide auth credentials via “Docker secrets”

Docker supports passing secrets, like credentials, into the build via files that are mounted during the build process (with the docker CLI this requires BuildKit).

RUN --mount=type=secret,id=gemserver_credentials,dst=/kaniko/gemserver_credentials \
  BUNDLE_GEMS__MYGEMSERVER__COM="$(cat /kaniko/gemserver_credentials)" \
  && export BUNDLE_GEMS__MYGEMSERVER__COM \
  && bundle install \
  && find /usr/local/bundle/ -name ".git" -exec rm -rv {} + \
  && find /usr/local/bundle/ -name "*.c" -delete \
  && find /usr/local/bundle/ -name "*.o" -delete \
  && rm -rf /usr/local/bundle/ruby/*/cache

--mount=type=secret,id=gemserver_credentials,dst=/kaniko/gemserver_credentials defines which secret to mount and where to mount it to.

The following reads the secret and exposes it temporarily as an environment variable. (Note that this is necessary, as Ruby's Bundler only supports loading such credentials via environment variables.)

BUNDLE_GEMS__MYGEMSERVER__COM="$(cat /kaniko/gemserver_credentials)"
export BUNDLE_GEMS__MYGEMSERVER__COM

Docker secrets not supported by Kaniko

The issue is that Kaniko does not really support Docker secrets.

However, there is a workaround, which you can already see above: we mount the secret to a path under /kaniko/. This way the same Dockerfile supports Docker secrets with the docker CLI for local development and still works with Kaniko within the CI/CD pipeline.
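For completeness, the local development side of this workaround is plain BuildKit usage and could look roughly like this (the host file name gemserver_credentials and the tag myapp:local are just examples; depending on your Docker version you may also need a # syntax=docker/dockerfile:1 line at the top of the Dockerfile):

# Local sketch: pass the credentials file as a BuildKit secret
export DOCKER_BUILDKIT=1
docker build \
  --secret id=gemserver_credentials,src=./gemserver_credentials \
  --target production \
  -t myapp:local .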

For Kaniko, we store the file within the /kaniko/ folder before we invoke the Kaniko executor. In my case, the secret is stored as an environment variable within GitLab CI, and I echo it to /kaniko/gemserver_credentials. The ${VARIABLE?message} parameter expansion makes the job fail with the given message if the variable is not set.

echo ${BUNDLE_GEMS__MYGEMSERVER__COM?"BUNDLE_GEMS__MYGEMSERVER__COM must be set"} > /kaniko/gemserver_credentials

Caching of image layers

To speed up builds, Kaniko supports caching image layers for later re-use. I also store the cache within my Docker registry, which works pretty well.

/kaniko/executor --cache-repo=$CI_REGISTRY_IMAGE/cache --cache=true
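If you want to limit how long Kaniko trusts old cache entries, there is also a --cache-ttl flag. A variation of the invocation above could look like this (a sketch, not part of my actual pipeline):

# Rebuild layers whose cache entries are older than one week (168h)
/kaniko/executor \
  --cache=true \
  --cache-repo=$CI_REGISTRY_IMAGE/cache \
  --cache-ttl=168h \
  --context $CI_PROJECT_DIR \
  --dockerfile $CI_PROJECT_DIR/Dockerfile \
  --destination $MY_IMAGE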

Use mirror.gcr.io as Mirror for public images

Since the public Docker registry (Docker Hub) has been imposing a rate limit on image pulls for some time now, I recommend using an alternate mirror wherever possible.

Luckily, Google provides a public mirror at mirror.gcr.io, which we can also use with Kaniko (see the --registry-mirror flags above).
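As an aside, the same mirror can also be configured for a local Docker daemon; this is standard Docker daemon configuration, not specific to my setup, and takes effect after a daemon restart:

/etc/docker/daemon.json:

{
  "registry-mirrors": ["https://mirror.gcr.io"]
}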

Multi-Stage builds to support test vs. production images

I am using multi-stage builds: first I build an image for testing, and later, right before deployment, I build a production image.

The key difference is that the test image has all the test packages installed (e.g. rspec) and that the assets ($ rails assets:precompile) have been compiled with the respective environment variables set (e.g. RAILS_ENV=test).

The production image has a very minimal size and attack surface. I only install the bare minimum of gems, and for example no Node.js support, since all JavaScript assets have already been precompiled.
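My full Dockerfile is not part of this post, but the rough stage layout looks like the following simplified sketch (the base image and the exact Bundler configuration are assumptions; the stage names test and production are what the CI jobs below select via --target):

# Simplified sketch of the multi-stage layout, not the actual Dockerfile
FROM ruby:3.1-slim AS base
WORKDIR /usr/src/app
COPY [".ruby-version", "Gemfile", "Gemfile.lock", "./"]

# Test image: all gem groups installed (rspec & friends), assets compiled for test
FROM base AS test
ENV RAILS_ENV=test
RUN bundle install
COPY . .
RUN bundle exec rails assets:precompile

# Production image: only the gems needed at runtime, assets compiled for production
FROM base AS production
ENV RAILS_ENV=production BUNDLE_WITHOUT="development:test"
RUN bundle install
COPY . .
RUN bundle exec rails assets:precompile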

The GitLab CI jobs to build the images run in the build and deploy stages:

build test Docker image:
  extends: .build-image
  variables:
    MY_IMAGE_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:test-latest
    MY_IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:test-$CI_COMMIT_SHA
    TARGET: test
  stage: build

build production Docker image:
  extends: .build-image
  variables:
    MY_IMAGE_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest
    MY_IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
    TARGET: production
  stage: deploy

The image tags reference the branch name and also the specific commit hash. What is great with Kaniko is that one can set multiple image names at once (ref. --destination).