Configuring Continuous Integration for Jenkins & Bitbucket using werf

Flant staff · Published in the werf blog · Feb 4, 2021 · 10 min read


Please note this post was moved to a new blog: https://blog.werf.io/ — follow it if you want to stay in touch with the project’s news!

The werf tool is designed to integrate easily with any CI/CD system. The general approach to this process is described in the epilogue, while the main part of this article walks through a practical example of organizing a CI process with Jenkins and Bitbucket.

After reading this article, you will learn how to:

  1. create a Jenkins Shared Library to store all CI scripts in one place and edit them via a single commit;
  2. integrate Jenkins with Bitbucket to trigger CI processes by committing to specific branches or by creating a tag.

Great, let’s do this!

Configuring Jenkins

We will use the following components in this article:

* Since werf 1.2 is already in Early Access, it is worth noting that the approach described below is expected to work for v1.2 with no noticeable differences.

Note that we use the Multibranch Pipeline project type in Jenkins.

Let’s start by connecting a repository for storing the Shared Library to Jenkins. A Shared Library is a common library for storing and reusing CI code, and it can be kept in its own external repository. This approach simplifies modifying and maintaining CI pipelines (as opposed to the standard Jenkinsfile approach to defining pipelines, where you would have to copy the file into each project).

Let’s create the Shared Library: go to Manage Jenkins → Configure System → Global Pipeline Libraries.

Here, you have to specify the library name and the branch where the Shared Library code is stored. In the Source Code Management section, enter the repository address and credentials (in our case, an SSH key with read-only access).

The directory structure of a Shared Library repo

Now, let’s define the library itself. The structure is very straightforward and consists of three directories:

  • The vars directory hosts the global methods that are exposed as variables in pipelines;
  • The src directory also contains scripts and is mainly used for storing the custom code;
  • The resources directory hosts non-script files that might be required during the execution.

In our case, a few methods in the vars directory will be enough because we will configure werf, which will do all the actual work.

Moreover, it is better to define the entire pipeline within the library and leave only a few deployment parameters for the Jenkinsfile (they will be the same in 99.9% of cases).
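Here is a sketch of what the resulting repository layout looks like in our case (the file names match the methods used later in this article, and common-ci is the library name referenced from the Jenkinsfile):

common-ci/
├── vars/
│   ├── runWerf.groovy    # wrapper for running werf commands (defined below)
│   └── multiStage.groovy # the pipeline itself, invoked from each project's Jenkinsfile
├── src/                  # custom Groovy classes, if any
└── resources/            # non-script files used during execution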

Implementing the methods

Now we are going to implement two methods.

The method for executing werf is defined in vars/runWerf.groovy:

#!/usr/bin/env groovy
def call(String dockerCreds, String werfargs) {
    // log in to the registry:
    // the first argument is a URL (it is empty since we use Docker Hub);
    // the second is the name of the Jenkins secret where the login and password are stored
    docker.withRegistry("", "${dockerCreds}") {
        sh """#!/bin/bash -el
        set -o pipefail
        type multiwerf && source <(multiwerf use 1.1 stable --as-file)
        werf version
        werf ${werfargs}""".trim()
    }
}
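For instance, calling runWerf("docker-credentials-default", "build-and-publish") from a pipeline step logs in to the registry using that Jenkins secret and then runs werf build-and-publish; this is exactly how the method is used in the pipeline below.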

All parameters are conveniently passed to the pipeline library as a Map:

#!/usr/bin/env groovy
def call(Map parameters = [:]) { // the function gets a map of parameters as an argument
    def namespace = parameters.namespace // namespace for rolling out
    // default secret key to decrypt werf secrets (used if the parameter is omitted)
    def werf_secret_key = parameters.werfCreds != null ? parameters.werfCreds : "werf-secret-key-default"
    // default secret to log in to the Docker registry
    def dockerCreds = parameters.dockerCreds != null ? parameters.dockerCreds : "docker-credentials-default"
    // get the name of the project from the multibranch pipeline
    def PROJ_NAME = "${env.JOB_NAME}".split('/').first()
    // name of the Docker Hub repository or the address of a custom registry
    def imagesRepo = parameters.imagesRepo != null ? parameters.imagesRepo : "myrepo"
    if (namespace == null) { // the only mandatory argument; check that it exists
        currentBuild.result = 'FAILED'
        return
    }
    pipeline {
        agent { label 'werf' }
        options { disableConcurrentBuilds() } // disable concurrent builds for the pipeline
        environment { // required werf variables
            WERF_IMAGES_REPO="${imagesRepo}"
            WERF_STAGES_STORAGE=":local"
            WERF_TAG_BY_STAGES_SIGNATURE=true
            WERF_ADD_ANNOTATION_PROJECT_GIT="project.werf.io/git=${GIT_URL}"
            WERF_ADD_ANNOTATION_CI_COMMIT="ci.werf.io/commit=${GIT_COMMIT}"
            WERF_LOG_COLOR_MODE="off"
            WERF_LOG_PROJECT_DIR=1
            WERF_ENABLE_PROCESS_EXTERMINATOR=1
            WERF_LOG_TERMINAL_WIDTH=95
            PATH="$PATH:$HOME/bin"
            WERF_KUBECONFIG="$HOME/.kube/config"
            WERF_SECRET_KEY = credentials("${werf_secret_key}")
        }
        triggers {
            // run a nightly build around 21:00; this cron trigger is used for werf cleanup,
            // which deletes obsolete caches and images in the registry and on the runner host
            cron('H 21 * * *')
        }
        stages {
            stage('Checkout') {
                steps {
                    checkout scm // pull the code from the repository
                }
            }

            stage('Build & Publish image') {
                when {
                    not { triggeredBy 'TimerTrigger' } // skip this stage for cron-triggered builds
                }
                steps {
                    script {
                        // run our runWerf.groovy method
                        runWerf("${dockerCreds}", "build-and-publish")
                    }
                }
            }

            stage('Deploy app') {
                when {
                    not { triggeredBy 'TimerTrigger' }
                }
                environment {
                    // the name of the target environment to deploy to (required for Helm templates)
                    WERF_ENV="production"
                }
                steps {
                    runWerf("${dockerCreds}", "deploy --stages-storage :local --images-repo ${imagesRepo}")
                }
            }

            stage('Cleanup werf Images') {
                when {
                    allOf {
                        triggeredBy 'TimerTrigger'
                        branch 'master'
                    }
                }
                steps {
                    sh "echo 'Cleaning up werf images'"
                    runWerf("${dockerCreds}", "cleanup --stages-storage :local --images-repo ${imagesRepo}")
                }
            }
        }
    }
}

Notes:

  • Building and deploying are performed for any branch matched by the branch discovery settings in Jenkins. They will run automatically once we configure the commit-based triggers in the next section of the article.
  • All secrets such as werf-secret-key-default and docker-credentials-default are stored in Jenkins Credentials:

The Jenkinsfile that is located inside the project repository now looks like this:

@Library('common-ci') _
multiStage ([
    namespace: 'yournamespace'
])

The name of the method is the same as that of the file in the vars directory.
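If you also want to override the optional parameters handled by the library method, the Jenkinsfile might look like the following sketch (the values shown are simply the defaults from the method above):

@Library('common-ci') _
multiStage ([
    namespace:   'yournamespace',
    // optional parameters; when omitted, the defaults defined in the library are used
    imagesRepo:  'myrepo',
    dockerCreds: 'docker-credentials-default',
    werfCreds:   'werf-secret-key-default'
])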

If you need to deploy to multiple environments, you can add conditional expressions at the beginning of the code (where the namespace is defined). In that case, you also need to remove the check for the presence of the namespace argument in the Map, as well as its default value.

Here is an example:

def namespace = "test"
def werf_env = "test"
if (env.JOB_BASE_NAME == 'master') {
    namespace = "stage"
    werf_env = "stage"
}
if (env.TAG_NAME) {
    namespace = "production"
    werf_env = "production"
}

// pass the chosen environment to the pipeline's environment block
environment {
    WERF_ENV="${werf_env}"
}

Use the currentBuild.rawBuild.getCauses()[0].toString().contains('UserIdCause') condition if you want the stage pipeline to run automatically on all branches while deploying tags to production only via a button in Jenkins. It allows you to determine whether the build was started by a person or triggered by a webhook event.
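For example, a production stage guarded by this condition might look like the following sketch (it reuses the runWerf method and the variables from the pipeline above; buildingTag() is the built-in Declarative Pipeline condition for tag builds, and accessing currentBuild.rawBuild may require script approval in a sandboxed Jenkins):

stage('Deploy to production') {
    when {
        allOf {
            buildingTag() // the job was created for a Git tag
            // the build was started manually by a user rather than by a webhook or timer
            expression { currentBuild.rawBuild.getCauses()[0].toString().contains('UserIdCause') }
        }
    }
    steps {
        runWerf("${dockerCreds}", "deploy --stages-storage :local --images-repo ${imagesRepo}")
    }
}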

Triggering based on Bitbucket commits

Jenkins cannot integrate with Bitbucket on its own; to do so, you need to install the plugins mentioned above:

If you are using the cloud version of Bitbucket, you only need to permit creating webhooks automatically.

You also have to create a service user with access to repositories because Jenkins detects repositories via the API. This applies to both the cloud version and the stand-alone Bitbucket server.

Here is an example borrowed from the global Jenkins settings:

Now you need to configure the source in the Multibranch Pipeline (this is done interactively). It helps Jenkins find all available repositories and lets you select one of them. To do this, you need to add the relevant credentials: the Bitbucket user’s credentials and the team’s or user’s name (their projects will be used).

In the repository itself, we discover only specific branches (for example, with a name filter such as master|develop|hotfix-.*), because we do not know how many branches there are, and searching over all of them can take Jenkins a long time. This imposes certain restrictions, since tags can also match the regular expression. Fortunately, Java regular expressions are quite flexible, so that is not a problem.

There is an alternative approach: if you want to completely separate tags from branches, you can add another source pointing to the same repository but configured for tag discovery only.

So here goes the configuration:

Jenkins can use the service account to check Bitbucket and create a webhook:

Now, with each commit, Bitbucket will trigger pipelines (for filtered tags and branches) and even return the pipeline status back to Bitbucket (you can see it in the last column of the commit description):

As a final touch, you need to set the Jenkins URL in the main settings (Jenkins Location): since Jenkins sits behind an nginx proxy, its base URL has to be configured explicitly so that it knows its own endpoint:

If you do not do that, the pipeline URLs reported to Bitbucket will be generated incorrectly.

Conclusion

This article discusses configuring the CI pipeline using Jenkins, Bitbucket, and werf. This is a very general example and is not a panacea for organizing any development process. However, it gives you an idea of how you can build your CI pipeline using werf.

Note that while the pipeline status is returned to Bitbucket, we still have to go to Jenkins to find out what happened in case of a failure. Also, the tag-based deploy triggered by a webhook runs only once; a rollback to the previous tag has to be performed manually from within Jenkins.

The main advantage of this approach is its flexibility. We can include anything we need in the CI pipeline, even if the learning curve for a good understanding of this process is slightly steeper than for other CI systems.

Epilogue: general approach to werf and CI/CD

The general approach to integrating werf with CI/CD systems is provided in the documentation.

Here are the typical steps for any project:

  1. Creating a temporary DOCKER_CONFIG to avoid conflicts between simultaneous jobs on the same runner (learn more).
  2. Performing Docker authorization for container registries. You can use the native Docker Registry implementation of the CI system or some third-party one. In the case of built-in implementations (such as GitLab Container Registry or GitHub Docker Package), all the necessary parameters are available as environment variables. For alternative registries, you can perform authorization manually on each runner or via parameters stored in secrets (also for each job).
  3. Setting the WERF_IMAGES_REPO and WERF_STAGES_STORAGE environment variables and the necessary parameters (they depend on the specific implementation). werf must be aware of the implementation used since some actions involve its native API. werf automatically tries to detect the implementation based on the registry address; however, this is not always possible, and then you need to specify the implementation explicitly. (A minimal sketch of steps 1–3 is shown after this list.)
  4. Setting up the WERF_TAG_* tagging options: find out which process has started the current job using CI environment variables and then choose the right tagging method, or use content-based tagging at all times (the recommended approach).
  5. Using the environments of the CI system to control the subsequent deployment process. For better understanding, you can read about environments in GitLab.
  6. Adding automatic annotations (WERF_ADD_ANNOTATION_*) to all deployed resources. These annotations may include arbitrary data that can help you operate and debug application resources in the Kubernetes cluster. We have come to the conclusion that all resources should contain the following set of annotations:
     • WERF_ADD_ANNOTATION_PROJECT_GIT — the Git address of the project;
     • WERF_ADD_ANNOTATION_CI_COMMIT — the commit that corresponds to the deployment;
     • WERF_ADD_ANNOTATION_JOB or WERF_ADD_ANNOTATION_PIPELINE — the job or pipeline address (depending on the CI system and the need) related to the deployment.
  7. Enabling the comfortable browsing mode for werf’s logs:
     • WERF_LOG_COLOR_MODE=on — enables color highlighting (since werf is not running in an interactive terminal, it is disabled by default);
     • WERF_LOG_PROJECT_DIR=1 — displays the full path to the project directory;
     • WERF_LOG_TERMINAL_WIDTH=95 — sets the output width (since werf is not running in an interactive terminal, the default width is 140).
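To make the first three steps more concrete, here is a minimal sketch of how they might look in a Jenkins pipeline step on a generic runner. The credential ID registry-credentials, the registry address registry.example.com, and the repository path are placeholders chosen for illustration; werf ci-env does the equivalent work automatically in GitLab CI/CD and GitHub Actions.

// a hypothetical illustration of steps 1–3; names and addresses are placeholders
withCredentials([usernamePassword(credentialsId: 'registry-credentials',
                                  usernameVariable: 'REGISTRY_USER',
                                  passwordVariable: 'REGISTRY_PASSWORD')]) {
    sh '''#!/bin/bash -el
    # 1. use a temporary Docker config so that concurrent jobs on the same runner do not conflict
    export DOCKER_CONFIG="$(mktemp -d)"
    # 2. log in to the container registry
    echo "$REGISTRY_PASSWORD" | docker login --username "$REGISTRY_USER" --password-stdin registry.example.com
    # 3. tell werf where to publish images and where to store stages
    export WERF_IMAGES_REPO=registry.example.com/mygroup/myproject
    export WERF_STAGES_STORAGE=:local
    werf build-and-publish
    '''
}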

We have used werf in a large number of projects. Thanks to this, we have accumulated a set of methods that unify the configuration, solve common problems, and make maintenance easier and more straightforward.

Currently, all the steps described above (along with the methods we have accumulated) are already built into the werf ci-env command for GitLab CI/CD and GitHub Actions. Users of other CI systems can implement similar actions themselves, using the Jenkins example above as a guide.

Please note this post was moved to a new blog: https://blog.werf.io/ — follow it if you want to stay in touch with the project’s news!

This article has been written by our software engineer Andrey Koregin.
