werf 1.1: Release notes and future plans

Flant staff · werf blog · 12 min read · Mar 27, 2020


Please note this post was moved to a new blog: https://blog.werf.io/ — follow it if you want to stay in touch with the project’s news!

werf is our Open Source GitOps tool for building your applications and deploying them to Kubernetes. As promised, the release of werf 1.0 marked the beginning of an era of new features and revised approaches. Now, we are excited to announce the latest version (v1.1) of our tool, which is a massive step in the development of its builder and lays the groundwork for the future. Currently, version 1.1 is available in the 1.1 ea channel.

The truly remarkable features of the new version are the completely redesigned stages storage and the optimized builders of both kinds (Stapel and Dockerfile). The new stages storage design opens the door to both distributed building from multiple hosts and concurrent building on the same host.

Optimizations for builders include getting rid of excessive calculations when computing stage signatures and more efficient algorithms for calculating file checksums. As a result, the average build time is reduced. Dummy builds, when all the stages already exist in the stages storage's cache, are now really fast: in most cases, rebuilding takes less than one second! The same is true for the verification of stages when executing the werf deploy and werf run commands.

Additionally, we have implemented a strategy for tagging images based on their contents — the so-called content-based tagging. It is enabled by default in the latest version and is the only recommended one.

Okay, let's take a more detailed look at the new features of werf 1.1 and our future plans.

What’s new in werf v1.1?

1. New stages naming and new algorithm for stage selection

werf v1.1 introduces a new rule for generating stage names. From now on, each build generates an individual name for the stage that consists of two parts: a signature (as in v1.0) and a unique timestamp.

For example, the full name of the stage image might look like this:

werf-stages-storage/myproject:d2c5ad3d2c9fcd9e57b50edd9cb26c32d156165eb355318cebc3412b-1582656767835

… or, in a general form:

werf-stages-storage/PROJECT:SIGNATURE-TIMESTAMP_MILLISEC

Where:

  • SIGNATURE is an identifier of the stage contents; it depends on the history of Git commits that have led to its current state;
  • TIMESTAMP_MILLISEC is a unique image identifier that is generated when building a new image.

The stage selection algorithm is based on the analysis of Git commits:

  1. Werf calculates the signature of a certain stage.
  2. There might be several stages with the same signature in the stages storage; werf selects all of them.
  3. If the current stage somehow relates to Git (git-archive, a user-defined stage with Git patches: install, beforeSetup, setup; or git-latest-patch), werf selects only those stages related to a commit that is an ancestor of the current Git commit.
  4. Then it selects the oldest stage according to the creation timestamp.

Stages related to different Git branches may have identical signatures. However, werf will prevent the cache associated with other branches from being used, even if the signatures match.
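
The ancestry check from step 3 can be illustrated with plain Git (the variable names below are placeholders, not werf internals):

# stage_commit:   the commit a cached stage was built from
# current_commit: the commit currently being built
# The cached stage may only be reused if stage_commit is an ancestor of current_commit.
if git merge-base --is-ancestor "$stage_commit" "$current_commit"; then
  echo "stage is eligible for reuse"
else
  echo "stage belongs to an unrelated history; it will not be used as cache"
fi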

Documentation

2. New algorithm of building and saving stages

If, during the stage selection, werf cannot find a suitable stage, it builds a new image for the stage.

Note that several processes (on a single host or on multiple hosts) may start building the same stage at the same time. Werf uses optimistic locking when saving the newly built image into the stages storage: when a new stage is ready, werf locks the stages storage and saves the newly built image into it, provided that no suitable image already exists there (as per the signature and other parameters; see the new stage selection algorithm above).

The newly built image is guaranteed to have a unique identifier thanks to TIMESTAMP_MILLISEC (see the new stage naming above). If a suitable image already exists in the cache, werf will drop the new image and use the cached one.

In other words, the first process that manages to build the image (the fastest one) gets to save it to the stages storage (and this particular image will then be used in all subsequent builds). An additional advantage of this approach is that a slower process never blocks a faster one from saving its build results for the current stage and proceeding to the next one.
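
A minimal sketch of this save cycle in shell terms, assuming a local Docker stages storage: the project name and signature are taken from the naming example above, the build-tmp tag is purely hypothetical, and the actual locking is handled by werf itself.

project=myproject
signature=d2c5ad3d2c9fcd9e57b50edd9cb26c32d156165eb355318cebc3412b
timestamp_millisec=$(($(date +%s%N) / 1000000))   # unique suffix (GNU date)

# ... the stages storage lock is acquired here ...

# If another process has already saved a stage with this signature, discard the fresh
# build and reuse the cached image; otherwise publish the fresh build under a unique name.
if docker image ls "werf-stages-storage/${project}" --format '{{.Tag}}' | grep -q "^${signature}-"; then
  docker rmi "werf-stages-storage/${project}:build-tmp"
else
  docker tag "werf-stages-storage/${project}:build-tmp" \
             "werf-stages-storage/${project}:${signature}-${timestamp_millisec}"
fi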

Documentation.

3. Better performance of the Dockerfile builder

Currently, the stage sequence for an image built from a Dockerfile consists of a single stage called dockerfile. When calculating its signature, werf computes the checksum of the context files that will be used during the assembly. Before this improvement, werf went over all files recursively and calculated the checksum by summing up the contents and the mode of each file. Starting with v1.1, werf can use the checksums that Git has already calculated and stores in the repository.

The algorithm is based on git ls-tree. It takes the entries in .dockerignore into account and crawls the file tree recursively only when needed. This way, we no longer have to read the entire file system, and the algorithm's running time depends very little on the size of the context.

The algorithm also checks untracked files and, if necessary, takes them into account when calculating a checksum.
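
The gist of the optimization: Git already stores a hash for every tracked file, so the builder can read blob IDs from the tree instead of re-hashing file contents. A rough illustration (not werf's exact invocation):

git ls-tree -r HEAD -- . | awk '{print $3, $4}'
# prints "<blob-sha> <path>" for every tracked file in the build context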

4. Better performance when importing files

Werf v1.1 uses an rsync server to import files from artifacts and images. In previous versions, the import was carried out in two steps by mounting a directory of the host system.

As a result, import performance on macOS is no longer limited by the speed of Docker volumes, and imports take the same time as on Linux and Windows.

5. Content-based tagging

werf v1.1 supports the so-called content-based tagging. Tags of the resulting Docker images depend on their contents.

By running werf publish --tags-by-stages-signature or werf ci-env --tagging-strategy=stages-signature, you can tag the published images with the so-called stages signature. Each image has its own stages signature, which is calculated using the same rules as the signature of each individual stage. The stages signature is a summarizing identifier of the image as a whole.

The stages signature of an image depends on:

  • image contents;
  • history of Git commits that have led to such contents.

A Git repository often contains so-called dummy commits that do not change the contents of an image: comment-only commits, merge commits, or commits that modify files which will not end up in the image.

Content-based tagging solves the problem of unnecessary pod restarts in Kubernetes caused by a change in the image name even though its contents stay the same. Incidentally, such restarts are one of the reasons that discourage keeping multiple microservices of an application in a single Git repository.

Content-based tagging is also more reliable than the tagging strategy based on Git branches: with it, the contents of the resulting images do not depend on the order in which the CI system runs pipelines for multiple commits of the same branch.

Note: From now on, the stages signature is the only recommended tagging strategy. It will be used by default when running werf ci-env (unless another tagging strategy is specified explicitly).
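
Both ways of enabling the strategy, in command form (the gitlab argument is only an example of a CI system name for werf ci-env; other required options are omitted):

# tag published images by their stages signature
werf publish --tags-by-stages-signature
# or set the strategy for the whole CI pipeline
werf ci-env gitlab --tagging-strategy=stages-signature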

Documentation. We are also planning to publish a detailed article dedicated to this feature.

6. Logging levels

Thanks to the --log-quiet, --log-verbose, and --log-debug options added to werf, the user can now control the output, set the logging level, and log debugging information.

By default, the output contains a minimum of information.

The detailed output (--log-verbose) is best for monitoring werf actions.

In addition to information about werf actions, the --log-debug output also displays logs of the libraries used. For example, you can observe how werf interacts with the Docker Registry and spot problem areas (processes that consume a lot of time).
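
For example, the same command can be run at different verbosity levels (other required options are omitted for brevity):

werf build --log-quiet      # suppress non-essential output
werf build                  # default, minimal output
werf build --log-verbose    # detailed log of werf actions
werf build --log-debug      # plus logs of the libraries used, e.g. Docker Registry calls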

Future plans

An important note: the features described below and marked with “v1.1” will become available in the current version, many of them in the near future. Upcoming werf builds will be delivered to end users via auto-updates (if werf was installed with multiwerf). These features do not affect the stable werf v1.1 functionality in any way, and enabling them does not require manual changes to existing configurations.

1. Full support for various Docker Registry implementations (NEW)

The idea is to let users work with any Docker Registry implementation without limitations when using werf.

So far, we have identified the following solutions for which we are going to guarantee full support:

  • Default (library/registry)*
  • AWS ECR
  • Azure*
  • Docker Hub
  • GCR*
  • GitHub Packages
  • GitLab Registry*
  • Harbor*
  • Quay

* Starred items are fully supported by werf. Others enjoy only limited support.

Currently, there are two main problems:

  • Some solutions do not support the deletion of tags via the Docker Registry API, thus depriving users of automatic cleanup implemented in werf. This is true for AWS ECR, Docker Hub, and GitHub Packages.
  • Some solutions do not support the so-called nested repositories (Docker Hub, GitHub Packages, and Quay) or have limited support, forcing the user to create them manually using the UI or API (AWS ECR).

We are going to solve these and other problems by using the native APIs of the respective solutions. We are also planning to cover the entire werf operating cycle with tests for each of them.

2. Distributed building of images (↑)

  • Version: v1.2 → v1.1 (the priority of this feature has been raised)
  • ETA: March-April → March
  • Issue #1614

Currently, werf 1.0 and 1.1 can only be used on a single dedicated host to build and publish images as well as to deploy an application to Kubernetes.

To implement a distributed approach, where building and deploying applications to Kubernetes take place on several hosts simultaneously and those hosts do not preserve their interim state (temporary runners), werf has to learn to use the Docker Registry as the stages storage.

Back when werf was written in Ruby and known as dapp, it had this feature. However, there are some problems we must solve before reintegrating this feature into werf.

Note: This feature does not mean that the builder will operate inside the Kubernetes pods. To do so, we first have to get rid of the dependency on the local Docker server. Kubernetes pods do not have access to the local Docker server since the process itself runs in the container, and werf does not support (and will not support) using the Docker server over a network. Support for operating inside Kubernetes will be implemented separately.

3. Official support for GitHub Actions (NEW)

This includes improving the werf documentation (especially the reference and guide sections) as well as providing an official GitHub Action to automate the werf workflow.

It will also allow werf to be used with ephemeral runners.

The user interaction with the CI system will be based on assigning labels to pull requests to launch specific actions to build/deploy an application.

4. Local development and deployment of applications with werf (↓)

  • Version: v1.1
  • ETA: January-February → April
  • Issue #1940

Our primary objective is to implement a unified configuration scheme for deploying applications both locally and in production. It will be available right out of the box and will not require complex setup.

We also plan to implement a special mode making it possible to edit the code and get instant feedback from the running application (for further debugging).

5. New algorithm for cleaning up (NEW)

In the current build of werf v1.1, the cleanup procedure cannot clean up images tagged with the content-based tagging scheme, so such images pile up over time.

Also, the current builds of werf v1.0 and v1.1 use different policies for cleaning up images published under git-branch, git-tag, and git-commit tagging schemes.

That is why we implemented a brand-new, unified image cleaning algorithm for all tagging schemes based on the history of commits in Git (a sketch of the Git side follows the list):

  • Keep no more than N1 images related to the N2 most recent commits for each git HEAD (branch or tag).
  • Keep no more than N1 stage images related to the N2 most recent commits for each git HEAD (branch or tag).
  • Keep all images used by any resources in the Kubernetes cluster (by default, all kube-contexts of the configuration file and all namespaces are scanned; this behavior can be customized with options).
  • Keep all images referenced in Helm release manifests.
  • A stage can be deleted if it does not relate to any git HEAD (for example, if the corresponding HEAD has been deleted) and is not used in any manifests in the Kubernetes cluster or Helm releases.
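
A rough illustration of the Git side of these policies: the N2 most recent commits reachable from every branch and tag head define the set of commits whose images are candidates to be kept (N2 here is just a shell variable, not a werf option):

# list the N2 most recent commits of every branch and tag
for head in $(git for-each-ref --format='%(refname:short)' refs/heads refs/tags); do
  git rev-list --max-count="${N2:-10}" "$head"
done | sort -u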

6. Concurrent building of images (↓)

  • Version: v1.1
  • ETA: January-February → April*

Currently, werf 1.0 builds all image stages and artifacts declared in werf.yaml sequentially. We plan to make this process concurrent as well as provide easy-to-read and informative output.

* Note: We have rescheduled the ETA of this feature due to the raised priority of distributed building. The latter opens new possibilities for horizontal scaling and for using GitHub Actions together with werf. Concurrent building is the next logical optimization step, providing vertical scalability when building a single application.

7. Switching to Helm 3 (↓)

  • Version v1.2
  • ETA: February-March → May*

This step includes an upgrade to the new Helm 3 codebase and a proven, convenient way to migrate existing installations.

* Note: Switching to Helm 3 will not introduce any new, exciting functionality, because all the essential features of Helm 3 (3-way-merge and dropping tiller) are already implemented in werf. Moreover, werf boasts some enhanced features in addition to those mentioned above. Still, we are planning this transition.

8. Jsonnet format for Kubernetes configurations (↓)

  • Version: v1.2
  • ETA: January-February → April-May

Werf will support the Jsonnet format for describing Kubernetes configurations. At the same time, werf will stay compatible with Helm, and users will be able to choose their preferred configuration format.

The reason behind this is that many regard Go templates as way too difficult to comprehend and as having a steep learning curve.

We are also considering support for other configuration description systems (such as Kustomize).

9. Building images inside Kubernetes (↓)

  • Version: v1.2
  • ETA: April-May → May-June

The aim is to enable building images and delivering applications using runners in Kubernetes. In other words, images can be built, published, deployed, and cleaned up right from Kubernetes pods.

To implement this feature, we first have to enable distributed image building (see the point above). We also have to implement a building mode that does not require the Docker daemon (i.e., Kaniko-like building or building in userspace).

Werf will support building in Kubernetes using not only Dockerfiles but Stapel as well (so incremental rebuilds and Ansible will be supported).

The step towards Open-Source development

We love our community and want to involve as many people as possible in the development of werf.

To make the project more transparent and to communicate our goals and ideas, we have decided to switch to GitHub project boards. They give users a glimpse into our team's workflow; at the moment, you can learn about our near-term plans as well as ongoing activities there.

Considerable efforts have been devoted to processing issues:

  • We deleted issues that had become irrelevant;
  • We converted the existing ones to a single format with a sufficient level of detail;
  • We added new issues with ideas and proposals.

How to activate version v1.1

Version v1.1 is currently available in the 1.1 ea channel. It will also appear in the stable and rock-solid channels once it has been tested further. Note, however, that the ea version is already stable enough for use, since it has gone through the alpha and beta channels. You can activate version v1.1 via multiwerf in the following way:

source $(multiwerf use 1.1 ea)
werf COMMAND …

Conclusion

The new stages storage architecture and the streamlined operation of the Stapel and Dockerfile builders make it possible to implement distributed and concurrent builds in werf. These features will be available soon in version 1.1 and will be activated automatically through the auto-update mechanism (for multiwerf users).

In werf v1.1, the new content-based tagging strategy has been introduced and has become the default. We have also redesigned the major commands: werf build, werf publish, werf deploy, werf dismiss, and werf cleanup.

The next significant step is to implement distributed builds. Their priority has been raised above that of concurrent builds since v1.0 due to their higher utility: they enable horizontal scaling of builders and support for ephemeral builders in various CI/CD systems, and they make official GitHub Actions support possible. That is why the ETA for concurrent builds has been shifted. Nevertheless, we are actively working on completing both features as soon as possible.

Stay tuned and follow us on Twitter! And do not forget to visit our GitHub repository, where you can create a new issue, find an existing one and give it a thumbs up (upvote it), create a new pull request, or simply take a closer look at the project.

Please note this post was moved to a new blog: https://blog.werf.io/ — follow it if you want to stay in touch with the project’s news!

This article has been written by our system developer Alexey Igrychev.
