Announcing werf 1.0 stable: The state & future of our GitOps tool

Flant staff · werf blog · Jan 14, 2020

Please note this post was moved to a new blog: https://blog.werf.io/ — follow it if you want to stay in touch with the project’s news!

werf is an Open Source GitOps CLI utility for building and delivering applications to Kubernetes. werf supports building application images either from Dockerfiles or via its own advanced image builder (featuring YAML syntax, Ansible support, and Git-based incremental rebuilding). For application delivery, it uses the Helm-compatible configuration format. werf stores the application code, the configuration of the images to be built, and the deployment configuration in a single Git repository.

The long-awaited stable release of werf 1.0 is a full-fledged basic version of the tool (to be precise, v1.0.6 is the first stable release). In this version, werf supports the full life cycle of a containerized application: building application images, deploying them to Kubernetes, and deleting unused images.

Please note that in version 1.0, all operations (i.e. build, deploy, cleanup) for a single application must be performed on the same host. In other words, you have to use a dedicated, persistent worker in your CI system. At the same time, there are no restrictions on the parallelism of tasks: werf fully handles concurrent runs on the same host. You can also bind different projects to different workers.

In this article, timed to coincide with the werf release, we will provide a detailed description of what this version can and cannot do, as well as discuss our plans for future versions. However, let’s start with what the term “GitOps” means and what role werf has in the process of continuous integration and continuous delivery (CI/CD).

Why werf is GitOps

Okay, what do we mean by “GitOps”, and what corresponding functionality does werf provide?

The concept of GitOps was coined by Weaveworks approximately 2.5 years ago (here’s a good summary of the term). The general idea of the GitOps model, as we understand it, is that Git is treated as the “single source of truth”. This approach implies that everything is stored in the Git repository, including:

  • application code;
  • all dependencies;
  • information on how to build containers;
  • information on how to deploy them to a Kubernetes cluster;
  • and so on.

Once that is done, some “magic” keeps reality in line with the changes in Git. You can achieve that not only by installing a Kubernetes operator (which watches the Git repository), but also via a console tool capable of working with any CI system. Moreover, in our opinion, the CLI-based approach does not impose unnecessary restrictions: you can use any CI system you like and define as many parameters as you need when calling the CLI utility that synchronizes reality (that is, the Kubernetes cluster) with the state of Git.

werf provides a high-level CLI interface that covers all the basic commands for building and publishing images, delivering applications, and cleaning up old images:

  • werf build-and-publish
  • werf deploy
  • werf dismiss
  • werf cleanup

It is assumed that these commands can be integrated into any CI system to keep the cluster in sync with Git. Besides, werf provides a low-level CLI interface for controlling various subsystems (see the low-level management commands).
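For illustration, here is a minimal sketch of how these commands might be chained in a CI job; the registry address, branch, and environment names are hypothetical placeholders, and the exact set of flags may differ depending on your setup:

    # build the images described in werf.yaml and publish them to the
    # Docker Registry, tagging them by the current Git branch
    werf build-and-publish --stages-storage :local \
        --images-repo registry.example.com/myapp --tag-git-branch master

    # deploy the published images to the cluster and wait for readiness
    werf deploy --stages-storage :local \
        --images-repo registry.example.com/myapp --tag-git-branch master \
        --env production

    # delete the application release from the cluster
    werf dismiss --env production

    # apply cleanup policies to outdated images
    werf cleanup --stages-storage :local \
        --images-repo registry.example.com/myapp

In practice, most of these parameters can also be provided via WERF_* environment variables instead of flags; this is exactly what werf ci-env does (see below).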

It doesn’t matter whether you use a pull or a push approach (you can learn more about them here) for the entire CI/CD process, since werf fits into either model. At the same time, werf takes care of such low-level concerns as working with git, docker, and the Kubernetes API server, thus being the “missing part” for configuring a unified CI/CD system for applications.

What is werf 1.0 stable

1. Building, publishing, and cleaning images

If your application requires building Docker images, then werf 1.0 helps you to:

  • specify rules for building images (even multiple images at once) in a single werf.yaml configuration file;
  • build and publish images to the Docker Registry;
  • regularly clean up the Docker Registry using custom policies.

werf supports two methods of defining the building process: you can reference existing Dockerfiles in werf.yaml, or you can use Stapel instructions. The Stapel approach has its advantages:

  • accelerating incremental rebuilding upon application code changes in the Git repository;
  • using Ansible-like syntax for the build process;
  • and more.

You can learn more about Stapel and its syntax in the documentation. And here is a good example of how to use it.
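For reference, a minimal werf.yaml sketch combining both methods might look like this; the project name, image names, paths, and base image are hypothetical:

    project: myapp
    configVersion: 1
    ---
    # an image built from an existing Dockerfile
    image: frontend
    dockerfile: Dockerfile
    context: frontend/
    ---
    # an image built with Stapel instructions
    image: backend
    from: ubuntu:18.04
    git:
    - add: /backend
      to: /app
    ansible:
      install:
      - name: Install runtime dependencies
        apt:
          name: curl
          state: present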

Different schemes for tagging/versioning the built images are available: you can use Git commits, branches, or tags for that.

Image building is an optional stage. It can be skipped if there are no custom images requiring building.

2. Single host for stages storage

werf introduces the so-called stages storage. The basic werf commands use it in a variety of ways:

  • they save the results of the building process (i.e. Docker images) to it;
  • they use images from the stages storage as a cache for rebuilding and for creating new images;
  • they use the information about images kept in the stages storage for their further usage (e.g., when delivering an application to the Kubernetes cluster).

In the lifecycle of a single application, a single stages storage should be used for all commands (building, publishing, and cleaning images, as well as deploying your application).

In werf 1.0, the local host is the only available option for the stages storage (the corresponding command parameter is --stages-storage=:local). When :local is used, stages are stored on the host’s disk. Therefore, werf 1.0 is limited to a single host when working with a specific application. This host should keep all data between command runs for werf to operate correctly.

Version 1.0 doesn’t support external storage for keeping stages — a prerequisite for implementing a distributed image building process. However, this feature will be implemented in one of the future versions of werf (more details are provided below).

3. Deploying an application and checking its state

To deploy an application, the user creates a chart in the Helm-compatible format: a set of Kubernetes manifests and template parameters.

werf deploys an application to the Kubernetes cluster and, before the process is considered complete, ensures that the app has successfully started and operates smoothly. This involves printing the components’ logs and reacting to errors instantly: if something goes wrong, the command fails with a non-zero exit code. Thus, by using the werf deploy feature as part of the CI/CD process, we get adequate feedback from the software. It allows us to find out whether the deployment has succeeded while obtaining plenty of details for debugging and fixing problems (without having to run other utilities, such as kubectl, to identify them).

werf is fully compatible with existing Helm 2 installations, yet it has several advantages. For example, werf uses 3-way-merge patches to update resources in Kubernetes. It also provides feedback while an application is being delivered to the cluster. A complete list of differences is available here.

4. Linking built images to the process of application delivery

werf integrates the process of building images, the process of tagging and versioning them, and the process of delivering an application to Kubernetes into a single consistent system. The images that werf builds can be referenced in the templates of Kubernetes resources, as shown in the sketch below.
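For instance, a Deployment template in the chart can pick up a built image via werf’s werf_container_image template function. Below is a sketch; the backend image name refers to a hypothetical werf.yaml:

    # .helm/templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend
    spec:
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
        spec:
          containers:
          - name: backend
    # renders the image name/tag (and pull policy) of the "backend"
    # image published by werf
    {{ tuple "backend" . | werf_container_image | indent 8 }}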

Given all this, we believe that werf provides a higher-level interface than Helm, Docker, and other builders and dedicated tools for deploying applications. Using this interface, you can integrate werf into any existing CI/CD system while avoiding the tedious task of combining all the components manually — werf handles this job on its own.

5. Integration with existing CI/CD systems

werf 1.0 automatically integrates only with GitLab CI. For this, werf provides the special werf ci-env command. It gets all the necessary information from the CI/CD system and automatically configures werf to operate correctly in the CI environment.
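A minimal .gitlab-ci.yml sketch wired up this way might look as follows; the stage layout, tagging strategy, and runner tag are illustrative:

    stages:
      - build
      - deploy

    Build:
      stage: build
      script:
        # werf ci-env reads GitLab CI variables (registry address,
        # credentials, tags, etc.) and exports matching WERF_* variables
        - source <(werf ci-env gitlab --tagging-strategy tag-or-branch)
        - werf build-and-publish --stages-storage :local
      tags: [werf]  # a dedicated, persistent runner (see the note above)

    Deploy:
      stage: deploy
      script:
        - source <(werf ci-env gitlab --tagging-strategy tag-or-branch)
        - werf deploy --stages-storage :local --env production
      environment:
        name: production
      tags: [werf]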

If you want to learn more about integrating werf into other CI/CD systems, you might consider reading the corresponding manuals in the documentation.

6. Cross-platform development in Linux, Windows, and macOS

werf 1.0 is a statically linked binary that handles Docker images and Helm releases on its own. werf requires the following external dependencies to be present on the host:

  • local Docker daemon;
  • Git tool.

werf works on GNU/Linux, Windows, and macOS. Moreover, the result of the building process is the same on all systems: the same signatures of the cache stages and the same contents, regardless of the system where a particular stage was built. Plus, you can use the same configuration file on any system.

Thus, werf 1.0 provides cross-platform tooling for building and delivering applications to Kubernetes.

It is also worth noting that while werf creates standard Docker Linux-based images, there is no support for Windows-based containers as of now.

7. Test coverage

Currently, 60% of werf code is covered with e2e and unit tests.

werf is tested on all operating systems (Linux, Windows, and macOS) and uses GitHub Actions for launching these tests. Some details are also available at Code Climate.

8. werf versioning

At the moment, the werf project has almost 700 releases.

werf employs an advanced release system where releases are divided into stability channels: alpha, beta, rc, ea, stable, and rock-solid. This article coincides with the release of the first stable version of werf (1.0). Each new feature goes through all the channels and eventually ends up in the rock-solid channel. Releases are frequent (sometimes there are several releases per day), and changes are delivered continuously and in small portions.

Stability channels and frequent releases provide continuous feedback on new features as well as the ability to instantly revert changes, while ensuring a high level of software stability and a reasonable pace of development.

Please note that some changes in werf might require manual intervention by the user during the transition between major versions (1.0→1.1, 1.1→1.2). This can be a migration script or a sequence of manual actions described in the release notes. In the case of minor updates (1.0.1, 1.0.2, …, 1.0.6-alpha.1, 1.0.6-rc.2, etc.), no manual intervention is required.

You can read the detailed description of our backward compatibility promise here.
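In practice, a specific channel is selected with multiwerf, werf’s companion version manager. A sketch, assuming multiwerf is already installed on the host:

    # activate the latest werf release from the 1.0 series on the
    # stable channel in the current shell session
    source <(multiwerf use 1.0 stable)
    werf version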

Future plans

Here are our global plans for future versions and the expected timeline for their implementation:

1. Local development and deploying applications with werf

Our primary objective is to implement the unified configuration scheme for deploying applications both locally and in production. It will be available right out of the box and will not require complex actions.

We also plan to implement a special mode making it possible to edit the code and get instant feedback from the running application (for further debugging).

Version 1.1, January-February, 2020.

2. Content-based tagging

Implementing the content-based tagging scheme for images is one of our objectives for version 1.1. Unlike the tagging scheme tied to Git commits, this mode will make it possible to get rid of redundant rebuilds, since a Git commit ID is not a universal identifier of the worktree contents (even though it depends on them).

If the application code has not changed but a new commit has been made, the current Git-commit-based tagging mode will create a new image when publishing. This leads to the redeployment of the Kubernetes resources that use this image (even though its contents have remained the same).

werf will introduce a new tagging scheme to solve these problems. It is based on the checksum of the contents of an application — the so-called content-based tagging.

Version 1.1, February-March, 2020.

3. Switching to Helm 3

This step includes the upgrade to the new Helm 3 codebase and a proven, convenient way to migrate existing installations.

Version 1.1, February-March, 2020.

4. Concurrent building of images

Currently, werf 1.0 builds all the image stages and artifacts declared in werf.yaml sequentially. We plan to make this process concurrent.

Version 1.1, January-February, 2020.

5. Distributed building of images

Currently, werf 1.0 can be used on a single dedicated host (see “Single host for stages storage” above).

With distributed image building, the building process takes place on several hosts simultaneously, and these hosts do not keep their interim state (they are temporary runners). For this to work, werf has to learn to use the Docker Registry as the stages storage.

Back when werf was written in Ruby and known as dapp, it already had this feature. However, we have encountered some problems that must be solved before this feature can be reintegrated into werf.

Version 1.2, March-April, 2020.

6. Support for Jsonnet

We are planning to implement the support for Jsonnet format for Kubernetes configurations. At the same time, werf will remain compatible with Helm and will provide an option to select the preferred description format.

The reason behind this is that many people consider Go templates excessively hard to read, with a steep learning curve.

We also consider adding some other configuration managers for Kubernetes, such as Kustomize.

Version 1.1, January-February, 2020.

7. Building images inside Kubernetes

The aim is to enable building images and delivering applications using runners in Kubernetes. In other words, you will be able to build, publish, deploy, and clean up images right from Kubernetes pods.

To implement this feature, we first have to enable distributed image assembly (see the point above).

Also, we have to implement a building mode that doesn’t require the Docker daemon (i.e., Kaniko-like building or building in userspace).

werf will support building in Kubernetes using both Dockerfile and Stapel building modes (the latter with incremental rebuilds).

Version 1.2, April-May, 2020.

8. Various

The following improvements are also planned:

  • updating Ansible and introducing the ability to use its different versions;
  • support for Ansible roles;
  • support for custom building stages in Stapel (currently, werf supports the static set of stages: beforeInstall, install, beforeSetup, setup);
  • improving the werf.yaml syntax and switching to configVersion: 2 (this requirement is related to the two previous points, among other things), plus support for the OpenAPI specification;
  • Git LFS support in Stapel for storing large files in Git;
  • improving image cleanup methods (currently, images that are not declared in the werf.yaml of the main master branch aren’t deleted immediately; they are only deleted during the periodic cleanup);
  • tuning werf to work properly with the shared Kubernetes namespace, when several applications are being deployed into the same namespace;
  • reverting an application to the latest working version in case of a failed deployment.

Conclusion

Here is a summary of our efforts so far. We:

  • have come a long way in developing the first stable version of werf;
  • have processed and integrated a lot of real-life experience;
  • are presenting a proven utility with stable functionality, tested on tens of thousands of deployments.

The release of version 1.0 marks the beginning of an entirely new phase in werf development. We intend to add many innovative features to it in the near future. Stay tuned and welcome to follow our Twitter!


This article has been originally written by our system developer Timofey Kirillov.
