GitLab and the Three Ways of DevOps

Most of my daily conversations focus on features and deep technical concepts, which provide valuable and actionable insight. However, we often lose sight of the fact that tools and technology exist to solve business challenges. When talking about features and technology, it’s easy to see the potential financial gain of replacing different tools with a unified platform, but that view misses the improvement opportunities that deliver value at every level of a company, from developers to executives.

The reality is that we work in very complex systems, which makes it hard to see the forest for the trees. As an engineer, you focus on solving the next immediate problem without taking a step back to reevaluate the system itself; in some cases, the problem is created by the design of the SDLC itself. As an executive, it’s difficult to balance the effort required to address technical challenges against the pressure that comes from the business in an ever-accelerating rhythm of change.

My goal with this article is to provide a high-level map of the most important DevOps principles, along with a shortcut for putting them into practice. I know this is a bold claim, as there is a lot of literature on the topic, but my approach will be different.

First, I’m going to use the Three Ways as coined in The DevOps Handbook, because they are the three foundational principles of DevOps, refined from Lean, the Toyota Production System, the Theory of Constraints, Six Sigma, and Systems Thinking. Second, I’ll reference GitLab as the tool of choice, because I think a good tool lets you focus on the work at hand, and GitLab does just that.


Here is a short description of the Three Ways, what they’re about, and why you should care.

First Way: Maximize Flow

The First Way is all about making work and value flow better through the whole value stream (left to right). To do that, we need a systems-thinking approach and must always look at the end-to-end result. In the case of IT, this means optimizing for speed from the moment we have an idea to the moment value is generated by software running in production.

We need a good understanding of the system to find potential bottlenecks and areas of improvement. Our improvements should always lead to better overall performance; be aware of the cases in which local enhancements lead to global degradation, and avoid them.

In this process, it is crucial to stop defects from passing downstream from one workflow stage to another. Why? Because defects generate waste (of time and resources).

Second Way: Feedback Loops

The Second Way deals with feedback loops: amplifying and shortening them so that we get valuable insight into the work we’re doing. The feedback can relate to the code being written or to improvement initiatives. Feedback loops maximize flow from right to left of the value stream.

Quick, strong feedback loops help build quality into the product and ensure that we’re not passing defects downstream. The sooner we catch defects, the faster and cheaper they are to fix, which keeps our software continuously in a deployable state. It’s easier for a developer to fix a bug while they are working on that change, when the code and the thought process are fresh in their mind. If days or even weeks pass between the commit and the moment we realize there is a problem with the change, it becomes significantly harder to address, and we probably discover the problem only when trying to deploy the software, at which point we have a broken service on our hands. On the flip side, feedback loops enable learning and experimentation, a point I’ll return to a bit later.

Usually, more developers lead to more productivity, but as presented in The State of DevOps Report, this is true only for high performers. Why? If we have a team of 50 developers and problems aren’t detected immediately, technical debt builds up. Things only get worse with 100 developers, because they generate even more technical debt with every development cycle. A natural tendency is to add still more developers in the hope that velocity will improve, but it degrades instead; we add even more developers, things degrade further, and deployment frequency starts to suffer because it takes a long time to fix all the problems coming from upstream before reaching a deployable state.

Third Way: Continuous Experimentation and Learning

The Third Way is about creating a culture of trust where continuous experimentation and learning can thrive. This leverages the first two ways in order to be successful.

Making work flow easily through the value stream enables us to experiment and even take some risks while failing fast and inexpensively. Feedback loops act as guardrails that keep the risk in check, but they also facilitate learning, because learning happens only when strong, fast feedback is available. We can take a scientific approach, experiment, and extract the learning and improvement that results from those experiments and their feedback.

This is an iterative process that leads to mastery through repetition. It should be coupled with an environment where these local learnings become global and are integrated into the daily work of all teams.

For this approach to work and start producing results, 20% of our time should be reserved for improvement activities. I’m aware of how difficult it can be to carve out 20% of your time for improvement initiatives when dealing with urgent problems is your full-time job, but protecting this improvement time helps us pay down technical debt and keep things from spiraling out of control.

GitLab and the Three Ways

Now that we have presented the Three Ways of DevOps (maximizing flow from left to right, feedback loops that maximize flow from right to left, and continuous experimentation and learning), let’s look at implementing them, which requires some effort from both a tooling and a process perspective.

It’s time to introduce GitLab into the picture, the only DevOps platform that covers the whole SDLC. Why is this useful for you? Because there is a synergy when all the capabilities you need are provided in the same platform: the result is more than the sum of its components. Additionally, a good tool lets you focus on your work rather than on the tool itself, so you can spend more time and effort driving your DevOps transformation. The money and time you no longer spend integrating different tools is the first immediate return on your investment.

When the goal is to maximize flow from left to right, GitLab facilitates that, from idea to production. With the benefit of being a platform built from the ground up, work can flow seamlessly from planning, to committing and managing code changes (SCM), and on to CI/CD. Anyone involved in the SDLC can perform their work from the same UI, with all the information they need available, without switching between different UIs and paying the mental context-switching cost that comes with disparate tooling.
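
To make that concrete, here is a minimal sketch of a .gitlab-ci.yml that carries a change from commit through build, test, and deploy in a single pipeline. The image, job names, and script commands are placeholders for illustration, not a recommended setup.

```yaml
# Minimal illustrative pipeline: a commit triggers build, then test, then deploy.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  image: node:20            # hypothetical runtime; use whatever your project needs
  script:
    - npm ci
    - npm run build

test-job:
  stage: test
  image: node:20
  script:
    - npm test               # a failing test stops the pipeline here, before deploy

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh            # placeholder deployment step
  environment: production
```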

GitLab provides different control mechanisms to make sure that if defects are introduced, they are isolated and don’t move downstream. Working in short-lived feature branches, controls around merging, and merge request approval rules all act as gates.
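
Branch protection and merge request approval rules themselves are configured in the project settings rather than in the pipeline file, but the pipeline can act as a gate on its own. The sketch below, with hypothetical job names and commands, runs tests on merge request pipelines so defects are caught before merge, and only allows deployment from the default branch.

```yaml
# Illustrative gates: test changes in merge request pipelines,
# and only deploy what has reached the default branch.
test-job:
  stage: test
  script:
    - npm test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```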

By having everything on the same platform, it’s easier to understand the whole flow of work. Coupling this with value stream metrics enables everyone involved to better understand the overall system and to find potential bottlenecks and improvement opportunities.

Improved flow

As mentioned, flow in one direction (left to right) is not enough to deliver better software products faster. Feedback loops that are quick and strong are crucial for great business outcomes. From a developer’s perspective, the results of the CI pipeline provide immediate feedback about a change. If that pipeline contains security scans, it’s even better: feedback from a security standpoint ensures that we’re not deploying vulnerable code and gives the developer the opportunity to go back and fix it immediately. This is very actionable feedback that also provides a learning opportunity, because the security reports come with information about the vulnerabilities and, where possible, a potential solution. All of this is available without any additional work to integrate different tools.
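
As a rough sketch, GitLab’s bundled CI templates can add these scans to the same pipeline with a couple of lines; exactly which scanners run and where the findings surface depends on your project and GitLab tier.

```yaml
# Enable GitLab's built-in security scans by including the bundled templates.
# The templates add scanning jobs to the test stage of the existing pipeline.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
```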

Switching perspectives, someone who needs to review or approve a code change has everything they need at their fingertips, in one place. It’s straightforward to pull in or “@mention” other parties, and they’ll get access to all the context they need. A decision can be made immediately, based on accurate, clear feedback that can be traced back to the initial idea.

Metrics matter

Taking another step back, we get different metrics (value stream, contribution) at the project level. This is one of the advantages of a platform approach: these insights are easy to obtain and to feed back into the process. When doing software development at scale, more senior managers need this feedback at an even higher level, so these metrics are also available across multiple teams, projects, or departments. All this information is valuable for understanding the current state, but it also helps guide and shape business decisions. If velocity isn’t what the business needs, we can look to remove bottlenecks, improve processes, or invest in key areas.

Finally, with these two capabilities in place, we have a framework in which we can iterate quickly and safely. Experimentation becomes easy and safe: we can test different business hypotheses and see which ones work best with our customers. This should happen on an ongoing basis, because it is the cornerstone of innovation.

Context is critical

Every experiment we perform and every problem we solve become valuable learnings that should be accessible to everyone in the organization. Having everything (context, actions, results, learnings) in one place enables us to open things up so that everyone can contribute. This requires an environment of trust where everyone feels comfortable running small experiments that lead to improvements, and where those improvements can spread across the entire organization. By having a tool that just works and provides everything you need without additional effort, you gain back capacity that you can use to improve your product, your overall system, or your organization.

It’s been a long journey up to this point, with the purpose of looking beyond immediate feature comparisons and the immediate financial gain realized when replacing multiple tools with one. We looked at the core principles of DevOps as a map for your DevOps transformation, and at GitLab as a tool to facilitate it. Improving very complex systems is hard, and driving that change through your company is a challenge; knowing that you have a tool that simply delivers on your needs lets you focus on developing code and on your continuous improvement efforts.

I hope this is useful to everyone involved in the SDLC, from the engineers who need to work with and within the system every day, to the senior leaders who need to deliver business results.

“DevOps isn’t just an esoteric philosophy – it actually is a roadmap for faster and safer software releases, if you choose the right tool. Here’s how to take the principles of DevOps and get the most out of the One DevOps Platform.” – Vlad Budica

