Build Options with Managed Jenkins Hosts
General
When a customer orders Managed Jenkins Hosts from DevOps-as-a-Service, there are two options for running builds:
- Permanent host-based Jenkins agent → Node label "docker"
- Dynamic build containers → Node label "dind"
The second option is only available if the customer orders the managed host in the special "Sysbox" flavor and a Cloud agent is configured in the Jenkins master.
Node Labels
Node Label 'docker'
Use this label in your Jenkinsfile node declaration to run the job on the classic host-based Jenkins agent. This is a Linux agent host, permanently available to the Jenkins master and equipped with preinstalled tools such as Docker, kubectl and helm. Tools provided by Jenkins plugins, e.g. Maven, are also available. To learn how to use these tools, see https://prd.sdc.t-systems.net/xwiki/bin/view/Jenkins/Jenkinsfile#Jenkinsfile-Buildingwithtoolplug-ins. Please note that because the agent is shared by all builds, Docker images, Docker containers, workspaces and caches are accessible to all projects configured in the DevOps-as-a-Service instance. To reduce this exposure, clean your workspace after the build. The advantage of this host is the high speed of the build pipelines, because the caches already hold data from previous build runs.
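A minimal scripted-pipeline sketch for the 'docker' label; the stage contents are placeholders, and the final cleanWs() step assumes the Workspace Cleanup plugin is installed on the Jenkins master:

```groovy
// Illustrative sketch for the permanent host-based agent (label 'docker').
node('docker') {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            // Preinstalled tools such as docker, kubectl and helm are on the PATH.
            sh 'docker version'
        }
    } finally {
        // Limit what other projects can see on the shared agent
        // (requires the Workspace Cleanup plugin).
        cleanWs()
    }
}
```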
Node Label 'dind'
If this label is used in the Jenkinsfile, the build job runs in an isolated, dynamic build container on a separate host. Using a special technology at host level (the Sysbox flavor mentioned above), the build container is fully isolated from the host and from other build containers. This fully separates builds from different projects, even when Docker is used during the build.
After the build job has finished, the build container is removed, including its workspace. Required build results must therefore be pushed to a repository or archived on the Jenkins master.
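A sketch of keeping build results beyond the lifetime of the ephemeral container; it assumes a Maven project whose artifacts land in target/, and the stage names and paths are placeholders:

```groovy
// Illustrative sketch for the dynamic build container (label 'dind').
node('dind') {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        sh 'mvn -B package'
    }
    stage('Keep results') {
        // The container and its workspace are discarded after the run,
        // so archive what must survive on the Jenkins master ...
        archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
        // ... or push the results to an external repository instead.
    }
}
```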
The build container is based on a Docker image which includes these tools:
- Base image Debian GNU/Linux 11 (bullseye)
- Jenkins agent software to connect to the Jenkins master
- Docker client tools and Docker daemon (Version 20.x)
- kubectl, helm, curl, rsync, yamllint, ansible, rancher, rancher2, terraform, awscli, pylint
Tools provided by Jenkins plugins are also available. To learn how to use these tools, see https://prd.sdc.t-systems.net/xwiki/bin/view/Jenkins/Jenkinsfile#Jenkinsfile-Buildingwithtoolplug-ins
The availability of Docker makes it possible to perform Docker tasks such as docker pull/push/build within the build container (Docker-in-Docker, or 'dind' for short). This can also be used to pull external images for special build tasks.
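A sketch of a Docker-in-Docker build step; the registry URL and image name are placeholder assumptions, and registry authentication (e.g. docker login with stored credentials) is omitted:

```groovy
// Illustrative Docker-in-Docker sketch: build and push an image from inside
// the dynamic build container.
node('dind') {
    stage('Docker build and push') {
        checkout scm
        // registry.example.com and my-team/my-app are placeholders.
        sh "docker build -t registry.example.com/my-team/my-app:${env.BUILD_NUMBER} ."
        sh "docker push registry.example.com/my-team/my-app:${env.BUILD_NUMBER}"
    }
}
```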
Multiple build containers can run at the same time. The maximum number of concurrent build containers and the maximum amount of RAM per container can be configured to match the available resources.
There is a principal drawback when using such an ephemeral build container: all data required for the build, such as public repositories and Docker images, has to be pulled into the container on every start of the pipeline. To solve this problem to some extent, host-based caches for Maven repositories, Dependency Scan libraries and plugin tools have been introduced. The Maven cache is mounted into the build container at the default location, so any Maven build can leverage it without additional parameters. Still, these shared caches introduce the possibility that a (malicious) project compromises cache files which another project is using. If a project with strict policies wants to avoid even this risk, the Maven build can use an ephemeral cache within the build container by specifying a different cache location in the Jenkinsfile or in the Maven configuration. To avoid the Jenkins tools cache, the build can download the required tools itself during the run or use containers to perform the build tasks.
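A sketch of opting out of the shared Maven cache by pointing Maven at a repository inside the ephemeral workspace; the chosen path is an arbitrary example, any location inside the container works:

```groovy
// Illustrative sketch: use an ephemeral Maven cache instead of the mounted shared one.
node('dind') {
    stage('Build with ephemeral cache') {
        checkout scm
        // maven.repo.local overrides the default cache location; the repository
        // is discarded together with the workspace when the container is removed.
        sh 'mvn -B -Dmaven.repo.local="$WORKSPACE/.m2/repository" package'
    }
}
```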