How to Implement GitOps for Software Delivery

A relatively new addition to the family of “Ops” terms, GitOps continues to gain momentum and evolve as more organizations and vendors participate in the paradigm. On the DevOps spectrum, GitOps slants toward the software engineering side of the development and operations continuum. 

Like any new pattern, the journey toward GitOps is not all-or-nothing; organizations embracing microservices follow a similar path. Organizations can take advantage of the pillars of GitOps where the investment or shift makes sense, and a greenfield project is a natural place to choose technologies and processes that are conducive to GitOps. Understanding the pros and cons, history, and basic implementation of a GitOps-based approach can help you decide what to embrace.  

What is GitOps?

Decomposing the word GitOps, the prefix before Ops is Git. Git is an SCM, or source code management tool, created in 2005 to support development of the Linux kernel. Since then, Git has become the de facto standard for many development teams. As codification with infrastructure-as-code and other declarative cloud native stacks started to gain traction, those in charge of operations found themselves needing to leverage SCM tools as well. 

GitOps Diagram

Since Git’s popularity was skyrocketing and the lines between software development and operations continued to blur, operationally focused engineers also started to leverage SCM/VCS (version control system) tools. More often than not, Git would be the SCM tool of choice, following the organization’s broader adoption of Git. Examples of SCM platforms built around Git are GitHub, GitLab, and Bitbucket.

A term coined by Weaveworks in 2017, GitOps is the concept that deployments should be as easy as an engineer enacting a code change/commit/merge. Consider everything that must occur for a deployment: changes to the code base, the build, the packaging, and the underlying infrastructure, not to mention the confidence-building exercises needed to validate the application and infrastructure changes. 

Because of all that is required, the Kubernetes ecosystem is where GitOps providers and projects grew up. Harness and Weaveworks were in a podcast together in 2019 walking through a stringent definition of GitOps. The barrier to entry for GitOps is declarative infrastructure: the desired state of the application is stored as code, and an associated engine (for example, Kubernetes) reconciles the differences, updating the application state.  

The evolution of GitOps continues. Much as the strict definition of microservices loosened to represent a movement, GitOps is undergoing the same shift; in a 2021 webinar that Harness and Weaveworks participated in, GitOps was described as moving from its purist definition to a movement. Organizations should be able to pick up the benefits of GitOps without being tightly coupled to the definition. 

Principles of GitOps

Stemming from a definitive piece by Weaveworks, there are several guiding principles that GitOps encompasses. 

The Entire System is Described Declaratively

Declarative infrastructure is an approach that focuses on what the target configuration should be: you describe the desired state, and the system executes whatever is needed to achieve it. An imperative approach, by contrast, is a set of explicit commands that change state, which makes reconciliation difficult; declarative infrastructure is aware of state, while an imperative approach is not. The declarative state of the total system needs to be stored in Git. Kubernetes is the most prolific piece of declarative infrastructure whose state can be stored in Git. 
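To make the contrast concrete, here is a minimal Kubernetes Deployment manifest; the names and image are placeholders. The manifest declares only the desired end state and says nothing about how to get there:

```yaml
# Declarative: describe the desired state; Kubernetes reconciles toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical application name
spec:
  replicas: 3                         # desired state: three pods, however the cluster gets there
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0.0 # hypothetical image
```

The imperative equivalent would be a sequence of commands such as `kubectl scale deployment my-app --replicas=3`, which changes state without recording anywhere what the state should be.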

The Canonical Desired System State is Versioned in Git

Canonical state, in GitOps terms, is the “source of truth” state. The state that is stored and versioned in source control (e.g. Git) is the source of truth and should be treated as such. In computer science and computer algebra, objects can be tested for equality against a canonical form; likewise, if there is a deviation in state, the drift can be recognized and reconciled back to the canonical state in source control. 

Approved Changes That Can Be Automatically Applied in the System

Once code or declarative definition changes pass review (e.g. a pull or merge request (PR/MR) is approved), those changes should be automatically applied to the system. GitOps favors a low barrier to entry and immediate deployment/reconciliation until the new canonical state is achieved. 
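As a sketch of this principle, a CI workflow (GitHub Actions syntax here; the repository layout and cluster credentials are assumptions) could apply the repository's manifests as soon as a PR/MR is merged to the main branch:

```yaml
# Hypothetical workflow: apply the new canonical state after a merge to main.
name: gitops-apply
on:
  push:
    branches: [main]          # fires once an approved PR/MR lands
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Cluster authentication (kubeconfig setup) is omitted for brevity.
      - name: Apply manifests to the cluster
        run: kubectl apply -f k8s/   # assumed manifest directory
```

This is the push-based flavor of automatic application; a pull-based agent running inside the cluster achieves the same principle by watching the repository itself.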

Benefits of Using GitOps

As a new software engineer, you are quick to find out there can be a lot between your local changes on your machine (the code you write) and actually getting your idea into production. 

Software is rarely written in a vacuum, and some of the first skills you pick up are source code management skills, such as having to commit what you have written and manage/integrate differences of what others have written.

Having stability in the code base is different than having stability in the build, and that is different than having stability in the deployed version of the application. Focusing on the purview of a software engineer, negotiating change usually involves a pull request (PR) or merge request (MR) process before code changes are accepted in the appropriate feature/branch. The journey typically ends at the Continuous Integration process, where a successful PR/MR would create another build that could be a release candidate, and will eventually need to be deployed. 

With infrastructure becoming much more codified and declarative, taking application code changes to production all within an SCM-esque process started to become more feasible. 

Like a software engineer making changes to configuration/properties for an application to function/have connectivity a certain way, the codification of infrastructure started to allow for additional pieces to be laid down, allowing for a deployment to occur.

GitOps engines focus on the steps required for a deployment to be written as code. If items are stored as code and take advantage of an SCM like Git, there are a host of benefits that this can bring. 

One of the first rules of engineering efficiency/developer experience is meeting the customer where they are. Software engineers typically work with code and configurations, so with a codified approach, a main benefit of GitOps is meeting developers where they are already comfortable. Access to an SCM is one of the first tools developers have when starting a new project, so anything powered by the SCM has a more native feel for a software engineer. 

There are several inherent benefits of GitOps. Since the steps and infrastructure are codified, learning how something is deployed becomes easier for the engineer. With legacy processes that require substantial human interaction and leverage several disparate tools, gluing everything together and following a human-centric change management process can be cumbersome. Any frustration with the rigor of getting a code change into an SCM is outweighed by the rigor and process of actually getting that change into production. 

Since all the steps are laid out in code, the learning curve for what an engineer must accomplish is significantly lowered. There is no human interpretation of the steps to execute; in essence, these deployments become self-documenting.

With the learning curve down and multiple team interactions eliminated, engineers can, in theory, deploy more frequently. Even deploying to a lower environment (or, under a stringent definition, a separate Kubernetes Namespace) to iterate on changes allows for more complete iteration and helps land on a solution quicker. 

Complete iteration is enabled by dev-prod parity. A challenge for many organizations is having parity between development and production environments. The less change between environments, the lower the risk of change failure, and the easier it is to form a complete picture of how changes will impact the application and infrastructure. 

Further supporting iteration is knowing that reverting is there to help you start over. As a software engineer, when making new changes for the first time, how often do you revert? Having the ability to revert and create an audit trail (e.g. Git blame) is a paradigm most developers understand. 

Reconciling differences is a core part of being a software engineer. When you boil down into the other parts of the stack needed to power an application (e.g. infrastructure), this becomes much more difficult. Reconciling the state vs. the configuration of a piece of infrastructure often cannot be accomplished in an SCM, or lives in a repository (such as a configuration management database housing infrastructure configurations) that is not accessible to the software engineer. 

Like any paradigm in technology, no one size fits all, and there are challenges with following a stringent definition of GitOps. GitOps is designed for declarative infrastructure and speed; any deviation means reconciliation. 

Challenges With GitOps

If following a stringent definition of GitOps, everything an application needs to survive and thrive must be codified. More often than not, this means that an application is headed to a Kubernetes endpoint. Networking complexities, such as exposing a new Kubernetes Service/service mesh change, need to be included as a configuration along with the application code. Since Git will be the single source of truth, any deviation from the desired state that cannot be codified is problematic for GitOps practices.   

Reliance on Declarative Infrastructure

To fully embrace GitOps, the underlying infrastructure needs to be declarative and expressive of the final state, which is a wide paintbrush. For now, almost all GitOps tooling is focused on the Kubernetes ecosystem, and with that focus comes an expectation that capacity on a Kubernetes cluster is readily available or easily attainable.  

A platform engineering team might draw a line in the sand: they will maintain the needed Kubernetes clusters and capacity, ensuring the infrastructure-as-a-service (IaaS) and infrastructure-as-code (IaC) layers can scale. If the underlying infrastructure for the Kubernetes cluster and/or supporting resources needs to auto-scale, this might fall outside of the declared state if not tightly bound. For example, in Kubernetes you can set the number of replicas, but if a scaling event based on CPU and memory surpasses that replica count, there is now a deviation.  
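The replica example can be sketched in manifests (names are placeholders): the Deployment below pins three replicas in Git, while a HorizontalPodAutoscaler can legitimately scale past that number, creating drift from the stored state.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # the state recorded in Git
  # (selector/template omitted for brevity)
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10              # a CPU spike can push replicas well past 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

Whether the live replica count of, say, 8 counts as drift from the declared 3 depends on how tightly the autoscaler is bound into the declared state.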

Immediate Deployment

As quickly as you can approve/merge a PR/MR (or, with a lower barrier of entry, make a commit), you are off to the deployment races. By design, a declarative piece of infrastructure like Kubernetes fulfills the desired state as quickly as possible. As soon as the command is given to execute “kubectl apply,” for example, the hammer drops and rolling deployments occur as quickly as possible. During riskier releases where additional safety is needed in the form of an incremental release, such as a canary, defining multiple incremental states goes against the grain of GitOps. 

To start leveraging GitOps, two pieces are required: firstly, an engine to interact with the SCM; secondly, and from a purist standpoint, a way to orchestrate the Kubernetes resources/manifests. 

Framework for GitOps

Focusing on the Kubernetes ecosystem and taking a stringent or purist approach to GitOps, there are two pieces to the framework. One is an engine that facilitates the actions in the SCM and the interaction with the declarative system. The other is the instructions/packaging for the declarative system (the desired state to march towards). 

GitOps Engines

There are now several GitOps engines out there. These engines focus on the orchestration between the SCM and the declarative system, and also allow the GitOps steps themselves to be codified. Argo, Flux, and Jenkins X are GitOps-centric tools, each with its own opinions about orchestration and ways to manage automation. 
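For illustration, an Argo CD Application resource (the names and repository URL below are placeholders) ties a Git path to a cluster destination and can be configured to sync automatically whenever the repository changes:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-manifests.git  # placeholder repo
    targetRevision: main
    path: k8s                   # placeholder path to the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true               # remove resources deleted from Git
      selfHeal: true            # revert manual changes back to the Git state
```

Flux and Jenkins X express the same ideas with their own resource types and conventions.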

Kubernetes Packaging

There are several methods for packaging and templating Kubernetes resources/manifests. Core to any GitOps implementation is the ability to execute on Kubernetes resources. In today’s landscape, more often than not, several Kubernetes resources/manifests need to be executed for a functioning application. Popular package/configuration management tools such as Helm and Kustomize allow a GitOps engine to call a single chart or package rather than many individual manifests. Advancements in templating technology, such as Jsonnet, further the ability to store dynamically generated manifests as code. 
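As a small example of such packaging, a kustomization.yaml composes plain manifests and can patch fields like the image tag without templating (the file names and image below are assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml     # assumed plain manifests in the same directory
  - service.yaml
images:
  - name: example/my-app
    newTag: "1.2.3"     # overrides the tag without editing the manifests
```

A GitOps engine can then point at this one directory instead of enumerating every manifest individually.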

Where Infrastructure-as-Code (IaC) Comes Into Play

GitOps draws many of its concepts and benefits from the Infrastructure-as-Code world. Modern IaC solutions such as Terraform have been declarative for some time and are usually stored in source control; those are core foundational concepts in the GitOps world. 

A purist view of GitOps focuses on automating tasks around Kubernetes clusters and deployments. Infrastructure-as-Code solutions focus on the underlying infrastructure, for example, that the Kubernetes cluster or other solutions need to run. This is one of the challenges with a full GitOps model: the GitOps process is not infrastructure-aware. 

As a sort of proverbial “chicken or egg” argument: to enable GitOps, is there a requirement to have IaC (or vice versa)? The two are similar, and on the continuum, adopters of GitOps most likely already have organizational skills in IaC.  

Push-Based vs Pull-Based Pipelines

The difference between push- and pull-based pipelines is where the deployment is taking place. In a GitOps model, is the deployment being orchestrated by a tool (push), or is the Kubernetes cluster self-deploying (inside the cluster itself)? From a Continuous Delivery perspective, artifacts can be pushed from a CI system to the pipeline vs the CD tool pulling an artifact from a repository.  

Push-Based Pipelines

In both GitOps and Continuous Delivery models, push-based pipelines have their genesis (the triggering event) externally; in the Continuous Delivery realm, this is a Continuous Integration system. From a GitOps standpoint, this is a system external to the Kubernetes cluster orchestrating the pipeline (a GitOps engine or a Continuous Delivery solution like Harness). The benefit is that you are not limited to what is achievable within the Kubernetes model, for example with Kubernetes resources, controllers, and operators.  

Pull-Based Pipelines

In a GitOps sense, a pull-based pipeline is one where all changes are applied inside a Kubernetes cluster; the cluster state is updated by the cluster itself. From a Continuous Delivery perspective, a pull-based pipeline has its genesis in the artifact repository: once an artifact is present and labeled/deemed ready to deploy, the CD pipeline pulls the artifact through the pipeline. 
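As a sketch of the pull model, Flux represents it with two in-cluster resources: a GitRepository the cluster polls, and a Kustomization that applies whatever it finds there (the URL and paths are placeholders):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m                  # the cluster polls Git for changes
  url: https://github.com/example/app-manifests   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./k8s                   # placeholder path within the repo
  prune: true                   # delete resources removed from Git
```

Nothing outside the cluster pushes changes in; the agents inside the cluster reconcile toward whatever is in the repository.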

Building a GitOps CI/CD Pipeline

No matter where you fall on the spectrum of GitOps, you can leverage Harness to quickly enable GitOps goals in your organization. Looking at a push-based pipeline model, getting started with Harness CI allows you to monitor the SCM solution for changes, then immediately build and publish the artifact. Once the artifact is built, Harness CI can trigger a deployment, leveraging Harness Continuous Delivery. 

We have a detailed example in another post that walks you through the exact steps and code samples. Three main pieces encapsulate a GitOps workflow: a code change, a build, and a deployment.

Make a Code Change

Commit or merge a change in a Git repository. This can follow a pull/merge request. Once complete, the SCM event/hook will trigger further events. 

Commit GitOps Change

Build and Push to Pipeline

Turning the code into a deployable artifact (for example, a Docker image) is required. After the artifact is built, it can be sent to a Continuous Delivery solution to execute the needed manifests and confidence-building steps. 

GitOps Execute

Deploy to Your Kubernetes Cluster

Once the needed artifacts are published, join the changes in the artifacts with any updated Kubernetes resources/manifests for a deployment. By leveraging Harness, you can orchestrate as many confidence-building steps as you need, such as a soak test, and enable a canary release. 

GitOps Deployment

Deploying is just half the battle. Making judgment calls about when to promote artifacts, and making decisions after an application is running, is crucial. 

GitOps & Observability

The typical GitOps journey ends when the changes are deployed and the cluster state matches the declared state. Taking the deployment mechanism out of the picture, having a clear understanding of how the system is functioning is a challenge. Even with the desired state running, performance issues or unexpected behavior can still occur. Even more so, the declared state might not be the correct end state and may need refinement. 

Integrating one or more of the three pillars of observability (logging, metrics, and tracing) as decision criteria for deployments is crucial. Determining a baseline, and being able to view and act on performance regression/drift from that baseline, is challenging for any organization. The Harness Platform is purpose-built to leverage observability principles and providers to make health and regression decisions during and after deployment. 

Supercharge Your GitOps Pipelines with Harness

The big benefit of leveraging the Harness software delivery platform is that the Harness Platform supports both a strict and liberal view of GitOps. Like any pattern that is emerging, certain tools take a stringent approach. Harness can allow you to take the best parts of GitOps and implement them in your pipeline. As GitOps practices become more inclusive to technologies inside and outside of a Kubernetes cluster, Harness is well-positioned to enable those practices when you are ready. 

Feel free to sign up for a Harness account and get started on your GitOps journey!


