Building and deploying lots of microservices using werf and GitLab CI

Flant staff · Published in werf blog · Oct 31, 2019

Please note this post was moved to a new blog: https://blog.werf.io/ — follow it if you want to stay in touch with the project’s news!

Are you struggling to implement CI/CD for many microservices in an efficient and elegant way? Here’s our current approach to solving this problem using GitLab CI (thanks to the include keyword in .gitlab-ci.yml) and werf.

Before we start

For this article, we assume that:

  • There is a huge application split into multiple repositories.
  • Each repository represents a separate application that you need to run in the Kubernetes cluster.
  • We use GitLab CI as a tool of choice for Continuous Integration.
  • Deployment (the infrastructure into which the code is being deployed) is described by Helm charts.
  • We use werf for building images and deploying them to the Kubernetes cluster.

For simplicity and convenience’s sake (and as a tribute to fashion), we will call these applications microservices. All these microservices are built, deployed, and run the same way. Their specific settings are configured via environment variables.

Obviously, copying .gitlab-ci.yml, werf.yaml, and .helm between repositories brings a lot of issues. After all, any change to the CI process, to the image build process, or to the Helm chart has to be propagated to every repository.

Including templates in .gitlab-ci.yml

The introduction of the include:file directive in GitLab CE (version 11.7 and above) has paved the way for implementing a full CI pipeline. The include keyword itself emerged a little earlier (in version 11.4); however, its functionality was somewhat limited since it only allowed fetching templates via HTTP (from public URLs). The GitLab documentation perfectly describes all its features and use cases.

This made it possible to stop copying .gitlab-ci.yml between repositories and struggling to keep it up to date. Here is an example of .gitlab-ci.yml with the include parameter:

include:
- project: 'infra/gitlab-ci'
  ref: 1.0.0
  file: base-gitlab-ci.yaml
- project: 'infra/gitlab-ci'
  ref: 1.0.0
  file: cleanup.yaml

We recommend using branch names in the ref parameter with extreme caution. Includes are resolved at the time a pipeline is created, so your changes can be automatically injected into the production pipeline at the worst possible moment. Instead, we suggest using tags in ref: they make versioning the description of your CI/CD processes much easier. During an update, everything is as transparent as it can be, and you can easily track the history of pipeline changes by using semantic versioning for tags.

Using .helm from an external repository

Since these microservices are deployed and run in the same way, they require the same set of Helm charts. To avoid copying the .helm directory between repositories, we used to clone the repository where Helm charts were stored and check out the required tag. It looked something like this:

- git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/infra/helm.git .helm
- cd .helm && git checkout tags/1.0.0
- type multiwerf && source <(multiwerf use 1.0 beta)
- type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
- werf deploy --stages-storage :local

NB: There were also methods involving Git submodules; however, we’d like to show how it can be done by taking advantage of werf features.

In one of its recent releases, werf gained the long-awaited ability to include charts from external repositories. Full support for package manager capabilities, in turn, allows us to describe the dependencies for deploying an application in a transparent manner.

Course of action

Let’s get back to our microservices. First, we have to create a dedicated repository for storing Helm charts, e.g. ChartMuseum. It is easily deployed in the Kubernetes cluster:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/chartmuseum --name flant-chartmuseum

Now it’s time to set up Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: chart-museum
spec:
  rules:
  - host: flant-chartmuseum.example.com
    http:
      paths:
      - backend:
          serviceName: flant-chartmuseum
          servicePort: 8080
        path: /

In the flant-chartmuseum Deployment, you have to set the DISABLE_API environment variable to false. Otherwise (it is true by default), requests to the ChartMuseum API will not work, and it will not be possible to push new charts.
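
If you install ChartMuseum via the stable chart as shown above, you can do this right at install time. Here is a minimal sketch, assuming the chart exposes this setting under env.open (as the stable/chartmuseum chart does):

# enable the ChartMuseum HTTP API so that new charts can be pushed
helm install stable/chartmuseum --name flant-chartmuseum \
  --set env.open.DISABLE_API=false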

Now let’s configure the repository where the shared Helm charts will be stored. It has the following structure:

.
├── charts
│   └── yii2-microservice
│       ├── Chart.yaml
│       └── templates
│           └── app.yaml
└── README.md

Chart.yaml might look like this:

name: yii2-microservice
version: 1.0.4

All the Kubernetes primitives required for deploying an application to the cluster must be present in the templates folder. As you may have guessed, in our case, the microservice is a PHP application based on the Yii2 framework. Let’s describe a basic Kubernetes Deployment consisting of two containers, nginx and php-fpm, that are built with werf:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.global.werf.name }}
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      service: {{ .Values.global.werf.name }}
  template:
    metadata:
      labels:
        service: {{ .Values.global.werf.name }}
    spec:
      imagePullSecrets:
      - name: registrysecret
      containers:
      - name: backend
{{ tuple "backend" . | include "werf_container_image" | indent 8 }}
        command: ["/usr/sbin/php-fpm7", "-F"]
        ports:
        - containerPort: 9000
          protocol: TCP
          name: fpm
        env:
{{ tuple "backend" . | include "werf_container_env" | indent 8 }}
      - name: frontend
        command: ["/usr/sbin/nginx"]
{{ tuple "frontend" . | include "werf_container_image" | indent 8 }}
        ports:
        - containerPort: 80
          name: http
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]
        env:
{{ tuple "frontend" . | include "werf_container_env" | indent 8 }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.global.werf.name }}
spec:
  selector:
    service: {{ .Values.global.werf.name }}
  ports:
  - name: http
    port: 80
    protocol: TCP

The .Values.global.werf.name variable contains the project name from the werf.yaml file. With it, you can derive the names of Services and Deployments.
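
For instance, with a werf.yaml starting like this (the project name here is purely illustrative), all the resources above would be named my-microservice:

project: my-microservice   # becomes .Values.global.werf.name during werf deploy
configVersion: 1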

Let’s implement basic automation that pushes our charts to ChartMuseum on every commit to the master branch. To do this, we will add the following job to .gitlab-ci.yml:

Build and push to chartmuseum:
  script:
  - for i in $(ls charts); do helm package "charts/$i"; done
  - for i in $(find . -type f -name "*.tgz" -printf "%f\n"); do curl --data-binary "@$i" http://flant-chartmuseum.example.com/api/charts; done
  stage: build
  environment:
    name: infra
  only:
  - master
  tags:
  - my-shell-runner-tag

Charts are versioned by modifying the version parameter in Chart.yaml. All new versions will be automatically added to ChartMuseum.
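
To check that a new version has actually been published, you can query the ChartMuseum API (a sketch using the Ingress host configured above):

# list all charts currently stored in ChartMuseum
curl http://flant-chartmuseum.example.com/api/charts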

Okay, the finish line is in sight! The next step is to specify the chart as a dependency in .helm/requirements.yaml:

dependencies:
- name: yii2-microservice
  version: "1.0.4"
  repository: "@flant"

… and execute the following commands in the repository directory:

werf helm repo init
werf helm repo add flant http://flant-chartmuseum.example.com
werf helm dependency update

Now we have a .helm/requirements.lock file in that directory. From this moment on, all you have to do to deploy the application to the cluster is run werf helm dependency build followed by werf deploy.
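
Put together, the script of a deploy job in .gitlab-ci.yml might now look like this (a sketch based on the commands shown earlier; note that the git clone of the Helm repository is gone):

- type multiwerf && source <(multiwerf use 1.0 beta)
- type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
- werf helm dependency build
- werf deploy --stages-storage :local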

To update the application’s deployment description, you have to patch requirements.yaml and requirements.lock (bumping the versions and hashes) in each microservice repository.

Bonus! Using werf templates from a separate repo

You can also reuse common snippets of werf.yaml via separate template files:

Template files should live in the .werf directory and have the .tmpl extension (any nesting is supported).

In terms of CI, this allows you to fetch these templates before the build starts, making them available when werf build is executed.

Here is how we can implement it using GitLab CI and Git submodules. In the root directory of your project, execute:

git submodule add git@gitlab.example.com:infra/werf.git .werf/werf_repo

You’ll have to use relative paths in your .gitmodules file so that GitLab can automatically download the sources of the repositories specified in submodules when a CI job runs. Please check the corresponding GitLab docs for details.

Our .gitmodules will look similar to this:

[submodule ".werf/werf_repo"]
	path = .werf/werf_repo
	url = ../../infra/werf.git

To make GitLab download sources from Git submodules, you also need to set the GIT_SUBMODULE_STRATEGY variable for the job, so GitLab knows how to handle submodules. Again, the GitLab documentation describes all the available values perfectly. We’ll choose the normal strategy, meaning that only the top-level submodules will be used. It’s equivalent to performing:

git submodule sync
git submodule update --init
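
In .gitlab-ci.yml, this boils down to a single variable, which can be set globally or per job:

variables:
  GIT_SUBMODULE_STRATEGY: normal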

Now we should tell werf to use the templates downloaded via submodules into the .werf/werf_repo directory. We will use include for that. Here’s an example of werf.yaml using an external template (note that the paths are relative to the .werf directory):

configVersion: 1
project: campaign-microservice
---
{{ include "werf_repo/yii2-microservice/php-7.3.tmpl" . }}
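
The included template itself is just a regular piece of werf configuration. Here is a minimal sketch of what .werf/werf_repo/yii2-microservice/php-7.3.tmpl could contain (the base images and packages are purely illustrative):

# two werf images matching the containers of the Deployment above
image: backend
from: alpine:3.10
shell:
  install:
  - apk add --no-cache php7 php7-fpm
---
image: frontend
from: nginx:alpine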

The last step is to commit all the new files to the project’s repository. When the GitLab job is launched, we should see the following output:

Updating/initializing submodules...
Synchronizing submodule url for '.werf/werf_repo'
Entering '.werf/werf_repo'
Entering '.werf/werf_repo'
HEAD is now at 50646b3 fix templates naming

This means that all our submodules have been successfully added to the build directory.

That’s how we can easily reuse all the components involved in our CI/CD process.

Conclusion

We hope that the described course of action for deploying similar applications will prove useful to engineers and specialists facing similar problems. We will gladly share other werf use cases and examples.

This article was written by our engineer Konstantin Aksenov.
