DevOps

Tackle a Plan of Actions and Milestones with GitLab’s risk management features

Software is an essential part of everyday life. More and more organizations are being forced to push software to consumers faster for a better customer experience. But increasing software delivery speed cannot come at the expense of security. This adds more pressure on internal development, security, change management, operations, and site reliability teams. Shifting left to find security vulnerabilities earlier within the DevOps process is a critical aspect of ensuring security scales with the pace of development. But U.S. federal government operations go a step further with the implementation of the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF). The RMF, implemented through standards such as NIST 800-53, NIST 800-171, and NIST 800-37, requires careful consideration of security vulnerabilities so that they are identified and properly managed as risks. This is further recommended in NIST 800-160 and NIST 800-161. However, practically speaking, not even the most diligent IT team can ensure full compliance with every requirement. This is when risk management becomes critical: It has to be continuously monitored and evaluated throughout the software development lifecycle (SDLC). Generally, the prescribed methodology is to prepare a plan and document the tasks necessary to resolve risks, along with the resources required to do so. Due to interdependencies with other software components, milestones may also be needed to track the work. This is embodied in the Plan of Actions and Milestones (POA&M) process.

GitLab and the POA&M process

There are two aspects of identifying and managing vulnerabilities. First, there has to be a quick and relatively easy way to identify new vulnerabilities and zero-day exploits as they become public. Second, it should be possible to check for existing vulnerabilities periodically – ideally in an automated or ad-hoc way as new information becomes available and internal or external auditor reviews are conducted.
NIST provides a sample POA&M template to help organizations track the actions needed. But in our experience, the mental load of managing another separate document can be an added burden on all the teams, not to mention confusing as new versions of the information become available. GitLab provides numerous resources to assist with this process.

Using GitLab to identify vulnerabilities

GitLab has multiple types of security and compliance scanners that evaluate source code in various ways. These scanners are capable of finding security weaknesses introduced in new code, vulnerable dependencies, container images, and non-compliant licenses from third-party code. These scans can run against every commit on every feature branch – before any code is merged or deployed into production. As potential security issues are found, GitLab provides an aggregated view of the findings both in the developer workflow and in dedicated vulnerability management tools. GitLab’s Vulnerability Reports give security teams the ability to triage and manage vulnerabilities for individual projects or across groups of projects. From here, security teams can evaluate vulnerabilities, track remediation progress, or dismiss any false positives. This provides a direct way to find, catalog, and manage vulnerabilities. As this process moves further along, and a vulnerability is characterized as a risk, GitLab provides a one-click process to convert and link the vulnerability with a work management item known as an Issue in GitLab. This can become a central location where, as per the POA&M process, it can be assigned to the Directly Responsible Individual (DRI), with due dates and milestones. The Issue can also […]
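The vulnerability-to-Issue handoff can also be scripted, which is useful for bulk backfills or periodic audit sweeps. Below is a minimal sketch (not from the article) that builds a POA&M-style tracking issue from a scanner finding and could file it via the standard GitLab Issues REST API; the project ID, token, DRI user ID, and finding fields are illustrative placeholders:

```python
# Sketch: turn a vulnerability finding into a POA&M tracking issue in GitLab.
# GITLAB_URL, PROJECT_ID, and GITLAB_TOKEN are placeholders you must supply.
import json
import os
import urllib.parse
import urllib.request

GITLAB_URL = os.environ.get("GITLAB_URL", "https://gitlab.example.com")
PROJECT_ID = os.environ.get("PROJECT_ID", "42")
TOKEN = os.environ.get("GITLAB_TOKEN", "")

def poam_issue_payload(finding, dri_user_id, due_date):
    """Capture the core POA&M fields: the weakness, the Directly
    Responsible Individual (DRI), and a completion date."""
    return {
        "title": f"POA&M: {finding['name']} ({finding['severity']})",
        "description": (
            f"Severity: {finding['severity']}\n"
            f"Scanner: {finding.get('scanner', 'unknown')}\n\n"
            "Remediation plan, required resources, and milestones go here."
        ),
        "assignee_ids": [dri_user_id],
        "due_date": due_date,
        "labels": "POA&M,security",
    }

def create_issue(payload):
    """POST /projects/:id/issues -- the standard GitLab Issues API."""
    req = urllib.request.Request(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/issues",
        data=urllib.parse.urlencode(payload, doseq=True).encode(),
        headers={"PRIVATE-TOKEN": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    finding = {"name": "Outdated OpenSSL in base image", "severity": "high",
               "scanner": "container_scanning"}
    payload = poam_issue_payload(finding, dri_user_id=7, due_date="2022-09-30")
    print(payload["title"])  # call create_issue(payload) to actually file it
```

For a single finding, the one-click conversion in the Vulnerability Report does this linking for you; a script is only worth it at scale.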

Read More

How to leverage modern software testing skills in DevOps

Testing is a critical step in the software development lifecycle but also the part of the process most DevOps teams trip over. The solution — test automation — has been talked about for years but has been far easier said than done. However, with new technologies on the rise, test automation is taking off. DevOps teams need to be prepared with modern software testing skills. Here’s how to get started.

The benefits of automated software testing

In GitLab’s 2021 Global DevSecOps Survey of over 4,000 developers, security professionals, and operations team members, respondents agreed on one universal truth: Software testing is the biggest reason why development is delayed. It’s critical to get software testing right because it’s financially disastrous to get it wrong. How much money do software mistakes add up to? Somewhere in the trillions. Yes, with a “t.” DevOps.com reported that software failures in companies’ operations systems cost a total of almost $1.6 trillion in the U.S. in 2019 alone. But testing has traditionally been difficult to do efficiently and not particularly popular with developers. The solution? Test automation combined with modern software testing skills.

It’s a hands-on start

DevOps teams looking to up their test game need to take a step back… into manual testing. (The irony is not lost on us.) A manual testing mindset can actually improve all facets of automated software testing. As devs perform basic tests on their code as it’s being written, channeling their inner manual tester can be helpful. Whether it’s looking at the requirements again or running failed fixes one more time, that attention to detail should be brought into how automated test cases are built and executed.

Take the modern view

Once developers have incorporated some old-school habits into their test cases, it’s time to consider some fresh perspectives, up to and including a deep understanding of the organization’s goals and objectives.
According to Modern Testing, there are key principles of modern testing that every developer needs to be aware of for successful testing at any stage:

- Job one is to make the business better.
- Rely on trusted resources like Lean Thinking and the Theory of Constraints.
- Fail fast but focus on success.
- Always be the customer when testing.
- Do data-driven work.
- Testers are evangelists.

Get certified

As the saying goes, every little bit helps. Though it is not required, a training program or certification course in software testing can enhance team capabilities. If there’s interest in this option, research courses online that might fit. From beginners to experienced testers, there’s something for everyone. Not sure where to start? Teams can explore the International Software Testing Qualifications Board (ISTQB) Foundation Level Certification for CTFL certification. This is required before taking any other certifications (see the full list of ISTQB prerequisites). After CTFL, there are many interesting certification options. The American Software Testing Qualifications Board (ASTQB), which offers the ISTQB certifications in the U.S., is another great resource and has a helpful Software Testing Career Road Map.

Embrace new technologies

Artificial intelligence and machine learning are at the core of test automation, so a thorough understanding of the technologies is a key modern software testing skill to have onboard. If AI/ML is already in use, ask to shadow or “apprentice” those working with it. Organize a Q&A for the DevOps team with an expert, and […]
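The “do data-driven work” principle translates directly into how test cases get written. A small illustrative sketch (the function and cases are invented, not from any course material) using Python’s stdlib unittest, where each data row mirrors a check a manual tester would otherwise run by hand:

```python
import unittest

def normalize_username(raw: str) -> str:
    """Function under test: trim surrounding whitespace and lowercase."""
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    # One row per manual check: happy path, stray whitespace, mixed case.
    CASES = [
        ("Alice", "alice"),
        ("  bob  ", "bob"),
        ("CAROL", "carol"),
    ]

    def test_cases(self):
        for raw, expected in self.CASES:
            # subTest reports each data row separately on failure.
            with self.subTest(raw=raw):
                self.assertEqual(normalize_username(raw), expected)
```

When a manual pass uncovers a new edge case, encoding it is just one more row in the table.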

Read More

Why we’re sticking with Ruby on Rails

When David Heinemeier Hansson created Ruby on Rails (interview), he was guided by his experience with both PHP and Java. On the one hand, he didn’t like the way the verbosity and rigidness of Java made Java web frameworks complex and difficult to use, but appreciated their structural integrity. On the other hand, he loved the initial approachability of PHP, but was less fond of the quagmires that such projects tended to turn into. It seems like these are exclusive choices: You either get approachable and messy or well-structured and hard to use, pick your poison.

We used to make a very similar, and similarly hard, distinction between server-class operating systems such as Unix, which were stable but hard to use, and client operating systems such as Windows and MacOS that were approachable but crashed a lot. Everyone accepted this dichotomy as God-given until NeXT put a beautiful, approachable and buttery-smooth GUI on top of a solid Unix base. Nowadays, “server-class” Unix runs not just beautiful GUI desktops, but also most phones and smart watches. So it turned out that approachability and crashiness were not actually linked except by historical accident, and the same turns out to be true for approachability and messiness in web frameworks: They are independent axes. And these independent axes opened up a very desirable open spot in the lower right hand corner: an approachable, well-structured web framework.

With its solid, metaprogrammable Smalltalk heritage and good Unix integration, Ruby proved to be the perfect vehicle for DHH to fill that desirable bottom right corner of the table with Rails: an extremely approachable, productive and well-structured web framework. When GitLab co-founder Dmitriy Zaporozhets decided he wanted to work on software for running his (and your) version control server, he also came from a PHP background. But instead of sticking with the familiar, he chose Rails.
Dmitriy’s choice may have been prescient or fortuitous, but it has served GitLab extremely well, in part because David succeeded in achieving his goals for Rails: approachability with good architecture.

Why modular?

In the preceding section, it was assumed as a given that modularity is a desirable property, but as we also saw it is dangerous to just assume things. So why, and in what contexts, is modularity actually desirable? In his 1971 paper “On the Criteria to be Used in Decomposing Systems into Modules”, David L. Parnas gave the following (desired) benefits of a modular system:

- Development time should “be shortened because separate groups would work on each module with little need for communication.”
- It should be possible to make “drastic changes or improvements in one module without changing others.”
- It should be possible to study the system one module at a time.

The importance of reducing the need for communication was later highlighted by Fred Brooks in The Mythical Man Month, with the additional communication overhead one of the primary reasons for the old saying that “adding people to a late software project makes it later.”

We don’t need microservices

Modularity has generally been as elusive as it is highly sought after, with the default architecture of most systems being the Big Ball of Mud. It is therefore understandable that designers took inspiration from arguably the largest software system in existence: the World Wide Web, which is modular by necessity, […]

Read More

Use Streaming Audit Events to connect your technology stack with GitLab and Pipedream

GitLab recently released Streaming Audit Events to provide you with real-time visibility into what happens inside your GitLab groups and projects. Whenever something happens, an event is sent to the HTTPS destination of your choice. This is a great way to understand immediately when something has changed and whether there is an action that needs to be taken. These events are often used to drive automation that updates GitLab in response to certain actions, such as creating a new issue to onboard a team member when an account is added to a group, or restoring the correct value of a merge request approval setting if it is changed. We know that many users want to combine the streaming audit events with other data sets and tools they already work with. Taking automatic action in response to audit events can help ensure your GitLab groups and projects are always in the correct, compliant state.

Pipedream simplifies the streaming audit event process

Driving automation off of these events or combining the events with other data sets means the destination receiving the events needs to be running and have logic in place for how to handle the events as they come in. This normally would require setting up and maintaining a server with high availability to receive events as they happen, run any automation scripts, and then process the events if they need to be sent to another tool or combined with another data set. This is tricky to do right and an extra step that takes time. Enter our partner, Pipedream. Pipedream lets you connect APIs, remarkably fast. This includes the new streaming audit events from GitLab.
When you select the GitLab New Audit Events trigger in a Pipedream workflow, Pipedream will automatically register an HTTPS endpoint for audit events in your GitLab group. From there, Pipedream allows you to transform the data, forward it to any other tools using Pipedream’s prebuilt actions, or write any custom automation with code (e.g., Node.js, Python, Go, or Bash).

Getting started with Pipedream and GitLab

The video below shows an example of how to use GitLab streaming audit events and Pipedream together to automatically alert your security team if a sensitive project setting is changed. This is powerful because it ensures that your security teams can immediately take action when a change occurs and understand why it happened. This is just one example of what you can do with Pipedream and GitLab together. Pipedream allows you to use any GitLab API in response to an audit event: You can change the setting back to its original value, add comments to issues, kick off pipelines, and more. You can also trigger any action in any of the 700+ other apps that Pipedream has built-in integrations with.

Open source integration means everyone can contribute

Pipedream and GitLab are both strong believers in open source. The integration is publicly available at the Pipedream repository, and contributions are welcome! We are excited to see what sort of workflows you create with Pipedream and GitLab together.

Final thoughts

In this post, we talked about the power of GitLab’s Streaming Audit Events to give you immediate visibility into your groups and projects and how Pipedream makes it easy to build and automate workflows based on those audit events. […]
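The alerting logic in that kind of workflow boils down to a small filter over the incoming audit event JSON. A sketch of what such a custom step might look like (shown in plain Python; the field names mirror GitLab audit event payloads but should be treated as illustrative, and the sensitive-setting list is an example policy, not an exhaustive one):

```python
# Decide whether an incoming GitLab audit event warrants a security alert.
# SENSITIVE_SETTINGS is an example policy, not an exhaustive list.
SENSITIVE_SETTINGS = {"merge_requests_author_approval", "visibility_level"}

def triage_audit_event(event):
    """Return an alert message for changes to sensitive project settings,
    or None if the event is routine."""
    details = event.get("details", {})
    change = details.get("change")
    if change in SENSITIVE_SETTINGS:
        return (f"ALERT: {details.get('author_name', 'someone')} changed "
                f"{change} from {details.get('from')} to {details.get('to')}")
    return None

# Example payload shaped like a streaming audit event:
event = {"details": {"change": "visibility_level", "author_name": "alice",
                     "from": "private", "to": "public"}}
print(triage_audit_event(event))
# → ALERT: alice changed visibility_level from private to public
```

In a Pipedream workflow, the returned message would feed a prebuilt action (Slack, email, PagerDuty, and so on) instead of a print.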

Read More

Battling toolchain technical debt

Developers love their tools. Operations teams love their tools. And security teams love their tools. As Dev, Sec, and Ops consolidate onto a single DevOps platform, toolchain technical debt becomes exponentially more costly and complex. “Tools should be in the background enabling excellent development, operations, and security practices. However, DevOps teams are often led by their tools rather than the other way around and that can hinder all aspects of the software development lifecycle (SDLC),” says Cindy Blake, CISSP, director of product and solutions marketing at GitLab.

An April 2022 Gartner® report titled “Beware the DevOps Toolchain Debt Collector” notes that “many organizations find themselves with outdated, poorly governed, and unmanageable toolchains as they scale DevOps initiatives.” One of the key findings, according to Gartner, is that “most organizations create homegrown toolchains, often leveraging the tools beyond their functional design. This not only leads to a fragmented toolchain, but also creates complications when tooling needs to be scaled, replaced, or updated.” Toolchain technical debt introduces complexity as companies shift critical tasks such as reliability, governance, and compliance left in the SDLC.

Discover how GitLab 15 can help your team deliver secure software, while maintaining compliance and automating manual processes. Save the date for our GitLab 15 launch event on June 23rd!

No time for technical debt

Few DevOps teams give toolchain upkeep the time and attention it requires. According to GitLab’s 2021 DevSecOps survey, 61% of respondents said they spend 20% or less of their time on toolchain integration and maintenance each month. “Developers face challenges and time constraints while maintaining these complex, stand-alone tool siloes, building fragility and technical debt that the [infrastructure and operations] leader has to deal with,” Gartner states.
The research firm adds, “These outdated toolchains further increase overhead costs, magnify technical risks, add operational toil, and limit business agility.” Blake agrees: “Complex toolchains inhibit the ability to govern the software development and deployment process. Policies must be managed across tools and visibility into code changes and changes to its surrounding infrastructure become difficult to see and track. Time is wasted on managing the toolchain instead of value-added work.”

Getting purpose-driven

The remedy to toolchain sprawl and subsequent debt is to change strategy. Instead of putting energy into figuring out how to maintain one-off tools, DevOps teams should focus on how to enable processes and policies that support simplicity, control, and visibility across the SDLC. “These are the characteristics needed to meet reliability, governance, and compliance demands. A united platform like GitLab helps you do that,” Blake says. Gartner states: “Successful infrastructure and operations leaders reduce technical debt and sustainably scale DevOps toolchain initiatives across the organization by using a prioritized, iterative strategy that minimizes friction in making changes to toolchains and more quickly delivers customer value.”

Adopting a purpose-built platform instead of a complex and ad-hoc toolchain also eases an organization’s ability to automate the SDLC. “Automation abstracts complexity away from the developer and provides guard rails so DevOps teams gain greater efficiency, accuracy, and consistency,” Blake says. In addition, automation reduces the audit footprint in terms of what needs oversight and inspection. Platforms also support automation throughout operations, including building and testing infrastructure as code, so that “you can eliminate the variables when you’re trying to debug an application,” she says. […]

Read More

How to ask smarter DevOps questions

GitLab has surveyed DevOps practitioners for more than five years now. In that time, we have come to know what questions to ask to understand how well teams are doing with DevOps. In sharing these 10 questions, we aim to help you assess your own team’s capabilities and achieve smarter, faster DevOps.

How fast is your team releasing code today vs. one year ago?

Tracking release speed is like taking the temperature of your DevOps team. You’d like to think everything is going well, but you might be surprised. Occasionally DevOps teams report to us they are actually releasing code more slowly than in the past.

What stage(s) in the process are causing the most release delays?

This question will shine a spotlight on the areas in your DevOps practice that simply don’t work. Spoiler alert: The answer will certainly be testing, though other things, from planning to code development and code review, might pop up, too.

How automated is your DevOps process?

Ask this, but don’t just focus on testing, tempting as that might be. Also think about what else in the software development lifecycle would benefit from automation. Consider what getting that time back would afford you. Could you assign your developers and ops pros to other business-critical projects?

What’s been added to your DevOps tech stack over the last year?

It’s good to look back and take inventory of the technology you have in play. This is also data that can help inform what your next steps might be, such as adopting GitOps, observability, or AI.

How are your DevOps roles changing?

If your team is like others we’ve heard from, (big) changes are happening. Devs are picking up tasks that have traditionally been owned by ops, ops is becoming anything from a DevOps coach to a platform engineer or a cloud expert, and security is likely now embedded in development teams.

How does security integrate with DevOps in your organization?

The most successful DevOps teams have figured out how to bridge the dev and sec divide.
Whether your team has a security champion or actually embeds sec pros on the dev team, this is a critical piece in the process to release safer software faster.

What advanced technologies are you using (or considering) in your DevOps practice?

“Bots” can test code, AI can review code, and a low code/no code tool will make citizen developers out of anyone in the organization. Now is definitely the time to make sure your DevOps team is future-proofing the tech stack.

Do you have a plan for governance and compliance of your software supply chain?

To keep the software supply chain secure, DevOps teams need visibility into and control over the entire development lifecycle. Can you easily deal with audits or attestations of compliance? Mature governance and compliance processes are essential in all industries today, not just those that are highly regulated.

What advanced practices are you using (or considering) in your DevOps environment?

Whether it’s Infrastructure as Code (IaC), GitOps, or MLOps, cutting-edge practices can jumpstart your releases and bring new and interesting opportunities to DevOps teams.

Do you regularly assess DevOps careers and roles on your team?

Happy team members really are more productive, so consider this a PSA to keep career growth conversations a priority. In considering […]

Read More

GitLab is the single source of truth for eCommerce provider

eCommerce platform provider Swell was built to give entrepreneurs the opportunity to build the online business that they envision. A GitLab customer since 2021, Swell has adopted GitLab as its one DevOps, project management, and support ticketing tool for the whole organization. It’s the foundational platform that the business works on. Swell is using GitLab Premium in many different areas, including for product development and to build the platform infrastructure, says Nico Bistolfi, vice president of technology. “GitLab is our source of truth for everything,” Bistolfi says. Now, Swell is looking into expanding its usage of the platform to leverage features such as code quality, automation, and dynamic and static application security testing.

GitLab for CI/CD

Swell upgraded to the Premium version and the biggest advantage so far has been the review operations capability, Bistolfi says. The company has created environments for every merge request users make, and that replicates what is in production for testers to see what was changed, whether a fix was made, or how the new feature is working. “We could not go to our software development lifecycle today without the review ops. That’s something that is critical for us,” Bistolfi says. GitLab is used for both continuous integration (CI) and continuous deployment (CD). While building the CI/CD pipeline process is ongoing, Bistolfi says, “We are slowly changing it and relying more and more on GitLab” in areas including application security. Before moving to GitLab, Swell was using bare-metal servers. The company now uses GitLab’s container management solutions and all API updates happen through the platform.

From inputting issues to resolution

Everyone at Swell is using GitLab — not just developers — and for a variety of tasks. The company has created a way to process support tickets through the platform. Another use case is knowledge management.
“We find ourselves making some decisions from comments in GitLab,” he says. The whole process from the time a ticket is created to being resolved is done within the platform. The company culture is about full information transparency, Bistolfi says, particularly since Swell is fully remote and employees work from 11 different countries. So one goal is to maintain asynchronous communication. When an issue is created in the platform, a little bit of coding is required, but he said non-developer users have adapted well. The feedback so far has been that using GitLab has been frictionless.

Speed to delivery

Initially, for some services, it took about 30 minutes to build and deploy an image. Now, the process has decreased to between one and five minutes in most cases. Swell manually sets release dates for system improvements and, right now, there are about two a week. The company is working on automating the process for continuous delivery with the goal of soon having releases every couple of hours.

Team play

Swell manages team backlogs, sprints, milestones, and future work using its own flavor of Kanban with what Bistolfi calls “quick labels.” Engineering teams are being scaled and, in addition to Kanban, some projects are done using Scrum. Changing their GitLab configuration has let teams measure velocity better. A future goal is to gain visibility into team results, as well as use GitLab for project planning and management, he says.

GitLab as a product and company

Bistolfi […]
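Per-merge-request environments like the ones Bistolfi describes are typically wired up with the `environment` keyword in GitLab CI. A minimal sketch of such a pair of jobs (the deploy and teardown scripts and the review domain are placeholders, not Swell’s actual configuration):

```yaml
deploy_review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_MERGE_REQUEST_IID"    # placeholder deploy step
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop_review
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop_review:
  stage: deploy
  script:
    - ./teardown.sh "review-$CI_MERGE_REQUEST_IID"  # placeholder teardown
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
```

GitLab then surfaces each environment, with its URL, directly on the merge request, so testers can open the change with one click and tear it down when they are done.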

Read More

GitLab 15: The retrospective

No cloud native, no containers, and no remote work: Those were just a few of the things missing from the technology landscape in 2011 when we launched GitLab 1.0. It’s been a journey, for sure. Here’s a look back at how far we’ve traveled to get to GitLab 15.

It started with source code management

In the beginning of GitLab there was source code management (SCM)… and that was it. Continuous integration (CI) became part of GitLab because our co-founder Dmitriy Zaporozhets got tired of having to keep the CI servers running separately, so we decided to bring continuous integration into the mix. Even then we knew it didn’t make sense for companies to “DIY” critical parts of their process. That being said, it did feel counterintuitive to bring SCM and CI together, but we tried it anyway. Continuous delivery (CD) eventually evolved out of the CI/SCM integration, but it is crazy to think that when we started GitLab, CI/CD was not really a consideration.

DIY DevOps really did exist

What people were talking about, though, was DevOps, and specifically DIY DevOps, because back then it was completely normal for teams to assemble a bunch of tools and call it done. When we would talk about the importance of fewer tools and more integration, people would turn up their noses. We heard a lot of “different tools for different things” and “many small sharp tools.” Today we know that a DevOps platform increases development speed and release cadences. But back then, gluing together tools was seen as normal.

What’s old is new again

Back in the day there were lots of tools and also very different programming languages than we reach for today. In the 2014 era, developers often wrote code in Ruby or JavaScript, and kept things layers away from the microprocessor. Over the years, that’s changed drastically. Rust and Go – as just two examples – have brought us back to the processor and reflect today’s modern programming styles.
It’s another sign of how drastically things have shifted over time.

It wasn’t cloud-y

The cloud was in its infancy when GitLab started and at the time we all thought it was probably a great solution for startups or small businesses, but perhaps not something that would ever be in widespread use. Fast-forward to today, where most companies run their infrastructures in the cloud. Now it’s widely accepted that a cloud native architecture helps teams deliver better software faster, and cloud skepticism has drifted away.

Security was siloed

Security teams, and tools, were completely separate entities when GitLab began and that, of course, made doing something inherently difficult even more so. Devs were asked to fix bugs without any context, process, or knowledge of deployment status, and naturally weren’t very excited about it all. Realizing this, we began slowly adding scans to our CI/CD steps so that security was part of the pipeline and not separate from it. The goal is to let developers and teams deal with security in an incremental way, rather than a large to-do list at the end of the process. And that progress is ongoing.

Code review wasn’t integrated

Eleven years ago, code review wasn’t that different from security, i.e., it was something done in a distant time and place and without context. Today, merge […]

Read More

Observability vs. monitoring in DevOps

In almost any modern software infrastructure, there is inevitably some form of monitoring or logging. The launch of syslog for Unix systems in the 1980s established both the value of being able to audit and understand what is going on inside a system, as well as the architectural importance of separating that mechanism. However, despite the value and importance of this visibility into system behavior, too often monitoring and logging are treated as an afterthought. There are countless instances of systems emitting logs into a void, never being aggregated or analyzed for critical information. Or infrastructure where legacy monitoring systems were installed a decade ago and never updated to modern standards.

Recently, shifts in the operational landscape have given rise to the concept of observability. Rather than expect engineers to form their own assumptions about how their application is performing from static measurements, observability enables them to see a holistic picture of their application behavior, and critically, how a user perceives performance.

You’re invited! Join us on June 23rd for the GitLab 15 launch event with DevOps guru Gene Kim and several GitLab leaders. They’ll show you what they see for the future of DevOps and The One DevOps Platform.

What is observability?

To understand the value of observability, it’s helpful to first establish an understanding of what monitoring is, as well as what it does and does not provide in terms of information and context. At its core, monitoring is presenting the results of measurements of different values and outputs of a given system or software stack. Common metrics for measurement are things like CPU usage, RAM usage, and response time or latency. Classic logging systems are similar: A log entry is a static piece of information about an event that occurred during system operation. Monitoring provides limited-context measurements that might indicate a larger issue with the system.
Aggregation and correlation are possible using traditional monitoring tools, but typically require manual configuration and tuning to provide a holistic view. As the industry has advanced, the concept of what makes for effective monitoring has moved beyond static measurements of things like CPU usage. In its now-famous SRE book, Google emphasizes that you should focus on four key metrics, known as the “Golden Signals”:

- Latency: The time it takes to fulfill a request
- Traffic: High-level measurement of overall demand
- Errors: The rate at which requests fail
- Saturation: Measurement of resource usage as a fraction of the whole; typically focuses on constrained resources

While these metrics help home in on a better picture of overall system performance, they still require a non-trivial engineering investment to design, build, integrate, and configure a complete monitoring system. There is considerable effort involved in enumerating failure modes, and manually defining and associating the correct correlations in even simple cases can be time-consuming.

In contrast, observability offers a much more intuitive and complete picture as a first-class feature: You don’t need to manually correlate disparate monitoring tooling. An aggregated monitoring dashboard is only as good as the last engineer who built it; conversely, an observability platform adapts itself to present critical information in the right context, automatically. This can even extend further left into the software development lifecycle (SDLC), with observability tooling providing important performance feedback during CI/CD runs, giving developers operational feedback about their code. Ultimately, observability provides more holistic debugging […]
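To make the four signals concrete, here is a minimal sketch (my own illustration, not from the SRE book) that derives them from a window of request records, assuming each record carries a latency in milliseconds and an HTTP status code, and that saturation can be approximated as traffic over a known capacity:

```python
def golden_signals(requests, window_seconds, capacity_rps):
    """Compute the four Golden Signals over a window of request records."""
    n = len(requests)
    latencies = sorted(r["latency_ms"] for r in requests)
    # Latency: p99 by nearest rank (a rough percentile, fine for a sketch).
    latency_p99 = latencies[max(0, int(0.99 * n) - 1)] if n else 0
    # Traffic: overall demand, as requests per second.
    traffic = n / window_seconds
    # Errors: fraction of requests that failed server-side (5xx).
    error_rate = sum(r["status"] >= 500 for r in requests) / n if n else 0.0
    # Saturation: how full the service is, as a fraction of known capacity.
    saturation = traffic / capacity_rps
    return {"latency_p99_ms": latency_p99, "traffic_rps": traffic,
            "error_rate": error_rate, "saturation": saturation}

window = [{"latency_ms": i, "status": 500 if i <= 2 else 200}
          for i in range(1, 11)]               # 10 requests, 2 failures
print(golden_signals(window, window_seconds=5, capacity_rps=10))
```

Even this toy version shows the engineering burden the paragraph above describes: someone must pick the percentile, define what counts as an error, and estimate capacity by hand, which is exactly the correlation work an observability platform aims to absorb.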

Read More

GitLab and the three ways of DevOps

Most of my daily conversations are focused on features and very deep technical concepts, which provide valuable and actionable insight. However, we often miss the fact that tools and technology are leveraged to solve business challenges. When talking about features and technology, it’s very easy to see the possible financial gain when replacing different tools with a unified platform. But that misses all the improvement opportunities that will provide value at all the levels of a company, from developers to executives.

The reality is that we’re working in very complex systems, making it hard to see the forest for the trees. As an engineer, you’re focused on solving the next immediate problem that arises without taking a step back to reevaluate the system itself. In some cases, the problem itself is created by the design of our SDLC. As an executive, it’s difficult to balance the effort required to address the technical challenges with the pressure that comes from the business in this ever-increasing rhythm of change.

My goal with this article is to provide a high-level map that contains the most important DevOps principles, and a shortcut. I know this is a bold statement, as there is a lot of literature on this topic, but my approach will be different. First, I’m going to use the Three Ways as coined in The DevOps Handbook, because those are the three foundational principles of DevOps as they were refined from Lean, the Toyota Production System, Theory of Constraints, Six Sigma, and Systems Thinking principles. Second, I’ll reference GitLab as the tool of choice because I think a good tool lets you focus on the work at hand, and GitLab does just that.

You’re invited! Join us on June 23rd for the GitLab 15 launch event with DevOps guru Gene Kim and several GitLab leaders. They’ll show you what they see for the future of DevOps and The One DevOps Platform.

Here is a short description of what the Three Ways are, what they’re about, and why you should care.
First Way: Maximize Flow

The First Way is all about making work/value flow better through the whole value stream (left to right), and to do that, we need to take a systems thinking approach and always look at the end-to-end result. In the case of IT, this means we optimize for speed from the moment we have the idea to generating value with software running in production. We need to have a good understanding of the system to find potential bottlenecks and areas of improvement. Our improvements should always lead to better overall performance; be aware of cases in which local enhancements lead to global degradation, and avoid them. In this process, it is crucial to stop defects from passing downstream from one workflow stage to another. Why? Because defects generate waste (of time and resources).

Second Way: Feedback loops

The Second Way deals with feedback loops: amplifying and shortening feedback loops so that we get valuable insight into the work we’re doing. The feedback can be related to the code that’s written or to the improvement initiatives. Feedback loops maximize flow from right to left of the value stream. Quick, strong feedback loops help build quality into the product and ensure that we’re not passing defects downstream. The quicker we do this […]

Read More