DevOps

The Future of DevOps: Predictions and Insights From Industry Experts

DevOps is a crucial part of the ever-evolving field of technology, shaping the future of software development and operational efficiency. Here are the trends, transformations and breakthroughs that will redefine the DevOps landscape in 2024.

2024: The Year for DevOps

In 2024, DevOps is poised for a transformative journey. Automation is predicted to surge to unprecedented levels, reshaping development workflows and expediting deployment cycles. Continuous integration and continuous delivery (CI/CD) pipelines are expected to attain new heights of efficiency, facilitating rapid and reliable software releases. DevOps, synonymous with agility, is foreseen as a key driver of innovation and efficiency in software development. Expert Insight: Ramendeep Bhurjee, VP, Cigniti Technologies.

BizDevOps Redefines Software

2024 will witness BizDevOps redefining how businesses approach software development and operations. The integration of business stakeholders into the development process is expected to reach new levels of maturity. Continuous feedback loops between business, development and operations teams will become standard practice. Automation will undergo further refinement, enabling swift adaptation to changing market dynamics. Expert Insight: Raghu Krovvidy, chief delivery officer, Cigniti Technologies.

DevOps and Agile Convergence

A convergence between DevOps and agile practices is anticipated to enhance software development. The goal is to break down silos and improve collaboration for faster, high-quality development. Tools supporting continuous integration and delivery are deemed crucial in this integrated approach, streamlining the path from development to deployment. Expert Insight: Paul Lechner, VP of product management, Appfire.

Faster Development Life Cycles Continue

The relentless march toward faster development life cycles to meet escalating demand is expected to persist in 2024. As organizations push new applications into production more swiftly, a focus on real-time security practices within the CI/CD pipeline is crucial during source code development. Expert Insight: Dan Hopkins, VP of engineering, StackHawk.

Agile Development Shapes the Future

In the realm of development, agile practices will continue shaping the future of innovation by incorporating advanced technologies and methodologies. The adoption of scaled agile frameworks such as SAFe is predicted to be a significant facet of agile development in 2024. Expert Insight: Nitin Garg, VP of practice delivery, AgreeYa Solutions.

Fostering a Human-Centric Agile Mindset

Companies are expected to realize that agile transformation must be holistic, involving shorter cycles and business-side changes beyond just software. A shift toward reinvigorating the human-centric aspects of agile development is seen as essential for success. Expert Insights: Tina Behers, VP of enterprise agility, Aligned Agility; Jon Kern, agile consultant, Adaptavist.

Moving From Tracking Developer Productivity to Engineering Efficiency

Leaders are anticipated to shift their focus from tracking individual developer productivity to engineering efficiency, with measurement transitioning from individual metrics to team-centered metrics around engineering efficiency. Expert Insight: Ori Keren, co-founder and CEO, LinearB.

Collaboration – The Future of DevOps

In the era of multi-cloud architectures and diverse vendor reliance, the future of DevOps is expected to hinge on strengthened collaboration. DevOps professionals are set to forge robust partnerships with traditionally siloed teams, emphasizing automation to seamlessly engage at critical junctures. Expert Insight: Erez Tadmor, cybersecurity evangelist, Tufin.

Recognition of the 99% Developers

Businesses are predicted to recognize the significance of the “99% Developers” who work outside the limelight but contribute significantly to software development. Understanding the needs of this majority is seen as crucial for sustained business success. Expert Insight: Jean Yang, head of product, observability, Postman.

Debugging Remains a Challenge

Debugging is expected to remain a […]

Read More

All Your IT Team Wants This Holiday Season is a Break!

The holiday season is all about giving. As organizations increasingly look to IT as they move toward new digital tools and processes, now is the perfect time to give back to IT teams tirelessly working to keep the modern enterprise online. Whether your system performance has been naughty or nice this year, there’s no denying that tech professionals have earned our appreciation, respect—and the tools to set them up for success in 2024. For IT teams limited in both time and resources, simply maintaining systems can feel as impossible as squeezing themselves down a chimney or delivering gifts to millions of homes in a single night. On top of that, instead of being greeted with milk and cookies, they’re inundated with endless performance issues, support requests and alerts—leaving little time left over for the important work of innovating. They say the best gifts are the ones you can’t wrap. That holds true for IT teams, too. This year, bring your organization the gift of a simpler, speedier, more rewarding workload. If your team is dreaming of a tech-savvy future, here are some enterprise software solutions to make their lives easier that they won’t want to re-gift:

Enjoy the View With Observability

Everyone loves to cozy up at home during a winter snowstorm, but with the widespread migration to combined remote, on-premises and distributed hybrid environments, the daily monitoring journey for today’s IT teams is more akin to trekking blindly through a blizzard. Observability tools are metaphorical snowshoes and goggles that can help them not only weather the storm but see clearly from the mountaintop. Observability is the answer to the modern enterprise’s struggle to gain full visibility into its apps, networks, databases and infrastructure—something nearly half of IT professionals lack, according to SolarWinds research. IT teams will be able to rest easier at night with visions of sugarplums, rather than outages or anomalies, dancing in their heads. Even better, integrating artificial intelligence (AI) capabilities into observability solutions to collect and provide data on what’s not performing as expected, and why, will help your teams take a proactive approach to solving issues.

Lend a Helping Hand With AIOps

AI isn’t just the shiny new toy of the tech world. Organizations using AI for IT operations (AIOps) can give the gift of support to their overworked IT teams by automating some of the time-consuming and mundane tasks that stand between them and a focus on innovation. Adding AIOps to observability can provide IT teams with maximum visibility into the state of their digital ecosystems through automated discovery and dependency mapping. Additionally, your teams can gain the ability to easily track inbound connections linked across the organization’s application stack and storage volumes with auto-instrumented views. Today, it simply isn’t feasible for humans alone to manage modern IT environments without intelligent automation. Think of AIOps as a workshop of elves operating in the background to ensure workloads and processes are streamlined and moving as efficiently as possible. With AIOps in place to analyze data and streamline workloads and processes, IT teams are relieved of some pressure—and can focus on accelerating your digital transformation rather than just maintaining it.

Give the Gift of Time

Finally, although you can’t outright give the gift of time to your IT team, you can still arm them with […]

Read More

DevOps Best Practices for Faster and More Reliable Software Delivery

Imagine a scenario where the teams creating the software and the teams delivering it aren’t just passing information back and forth but sitting together, brainstorming and solving problems in real time. That’s the core of DevOps. It’s not a one-click software solution, but teams working together to provide a reliable solution for seamless and faster software delivery. Take the example of an app or software update: users expect it to work seamlessly. The secret behind that seamless experience is often a well-structured DevOps strategy. DevOps isn’t just about speeding things up; it’s about balancing the need for speed with the need for stability. According to research, 99% of organizations witnessed a positive impact after implementing DevOps in their delivery processes. They’re deploying updates far more frequently, their failure recovery is lightning-fast, and they see fewer issues when they launch new features.

Using DevOps for Efficient Software Delivery

DevOps is crucial for organizations looking to resolve the complexities of modern software delivery. It bridges the gap between ‘code complete’ and ‘code in production,’ ensuring that software isn’t just created but delivered swiftly and effectively to the end user. This approach not only accelerates time-to-market but also enhances product quality and customer satisfaction. By adopting continuous integration and continuous delivery (CI/CD), automation and constant feedback, DevOps empowers teams to respond to market changes with agility and confidence. It’s about balancing processes, people and technology so that they work together to unlock higher efficiency, innovation and success.

Implementing Continuous Integration and Continuous Deployment (CI/CD)

Continuous integration and continuous deployment (CI/CD) are core practices in the DevOps approach, designed to streamline and automate the steps in getting software from development to deployment. CI/CD establishes a framework for development teams that supports frequent code changes while maintaining system stability and high-quality output. This method depends on automation to detect problems early, reduce manual errors and speed up the delivery process, ensuring that new features, updates and fixes reach users quickly and reliably. Teams should follow several best practices:

• Commit to Version Control Rigorously: Every piece of code, from application code to configuration scripts, should be version-controlled. This ensures that any change can be tracked, rolled back or branched at any point, providing a solid foundation for collaborative development and deployment.
• Automate the Build for Consistency: Automation is the key to CI/CD. By automating the build process, one can ensure that the software can be reliably built at any time. This automation includes compiling code, running database migrations and executing any scripts necessary to move from source code to a working program.
• Incorporate Comprehensive Automated Testing: A robust suite of automated tests, including unit, integration, acceptance and regression tests, should be run against every build to catch bugs early. Automated tests act as a safety net that helps maintain code quality throughout the rapid pace of DevOps cycles.
• Replicate Production in Staging: A staging environment is crucial for pre-deployment testing. It should mimic production as closely as possible to surface any environment-specific issues that could otherwise cause unexpected behavior after release.
• Ensure Quick and Safe Rollbacks: The ability to roll back to a previous state quickly is essential. This safety measure minimizes downtime by swiftly reversing failed deployments or critical issues without a prolonged troubleshooting process during peak hours.
• Monitor Relentlessly […]
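The rollback practice above can be sketched as a deploy step guarded by a health check that automatically reverts on failure. A minimal sketch, assuming hypothetical `deploy` and `health_check` callables as stand-ins for real pipeline steps:

```python
def deploy_with_rollback(deploy, health_check, new_version, current_version):
    """Deploy new_version; if the health check fails, revert to current_version."""
    deploy(new_version)
    if health_check():
        return new_version       # release succeeded, keep it
    deploy(current_version)      # quick, automatic rollback
    return current_version

# Toy usage: the "deployment" just records the live version, and the
# health check rejects the known-broken release.
live = {"version": "1.4.0"}
deploy = lambda version: live.update(version=version)
health_check = lambda: live["version"] != "1.5.0-broken"

kept = deploy_with_rollback(deploy, health_check, "1.5.0-broken", "1.4.0")
print(kept)  # → 1.4.0 (the failed release was rolled back)
```

In a real pipeline the same shape applies, with `deploy` invoking the deployment tooling and `health_check` probing a readiness endpoint or error-rate metric.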

Read More

The Growing Impact of Generative AI on Low-Code/No-Code Development

No-code/low-code platforms, once a disruptor in the realm of software development, are now embracing the capabilities of generative AI to create even more dynamic experiences. This union of convenience and innovation redefines how users interact with their software. Imagine a scenario where crafting complex instructions like “Deploy endpoint protection to noncompliant devices” becomes as simple as conversing with your application. The fusion of generative AI and no-code/low-code platforms empowers users to shape their software’s behavior without delving into intricate technicalities. Users can input prompts such as “Generate a code snippet for converting date formats” or “Create a workflow that automates inventory updates.” By translating natural language into action, this approach streamlines development and fosters creativity.

An Amalgamation of Generative AI and No-Code/Low-Code

Beyond buzzwords, the amalgamation of generative AI with no-code/low-code platforms offers tangible benefits. The efficiency gains that occur when users can sidestep manual configuration and directly communicate their intentions are both remarkable and unprecedented. Accessibility is enhanced, enabling non-technical individuals to participate actively in application development. Moreover, innovative use cases emerge, allowing organizations to streamline complex workflows with ease. As with any transformative technology, challenges emerge alongside benefits. Privacy concerns loom large when dealing with data input into generative AI models. Striking a balance between providing valuable insights and safeguarding sensitive information becomes paramount. Additionally, the inherently non-deterministic nature of generative AI can lead to varying outcomes, requiring careful consideration of use cases to ensure reliable results. As this collaboration matures, the landscape of software development is poised for significant change.
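As a concrete illustration, a prompt like “Generate a code snippet for converting date formats” might plausibly produce something along these lines (the function name and the particular formats here are hypothetical):

```python
from datetime import datetime

def convert_date_format(date_str, in_fmt="%d/%m/%Y", out_fmt="%Y-%m-%d"):
    """Parse date_str in one format and re-render it in another."""
    return datetime.strptime(date_str, in_fmt).strftime(out_fmt)

print(convert_date_format("31/12/2024"))  # → 2024-12-31
```

The point is less the snippet itself than that the user never had to know `strptime` existed: the natural-language intent is enough.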
Conversational interfaces that empower users to dictate software behaviors will continue to evolve, reducing implementation and configuration overhead. Imagine a future where complex workflows are summoned with a simple request or applications are custom-built from natural language blueprints. This shift will not only streamline development but also democratize technology, making it accessible to a broader audience. The integration of generative AI with no-code/low-code platforms allows users to express their creativity more freely. By enabling natural language prompts like “Design an app to manage inventory with automatic restocking” or “Build a workflow that offboards a user across Google, Slack, and Salesforce,” users can drive software behaviors without being constrained by technical jargon. This fusion redefines the efficiency of software interaction. Tasks that previously required meticulous configuration or coding can now be executed through simple prompts. Whether generating email templates, creating data transformation scripts or orchestrating multi-step workflows, the convenience of natural language input eliminates barriers and accelerates results.

A Democratic Approach

Looking forward, the integration of generative AI in no-code/low-code platforms points toward a more democratic approach to software development. This convergence will enable a broader range of individuals to participate actively, regardless of their coding expertise. By simplifying the process and making it more inclusive, we’re shaping a future where software truly adapts to human intent. As businesses continue to harness the potential of generative AI and no-code/low-code platforms, adaptation and learning will be key. Embracing this transformation requires a shift in mindset and an understanding that software can be molded through conversations and prompts.
As technology matures, the barriers between user intent and software behavior will fade, ushering in an era where technological fluency is defined by our ability to communicate rather than code. Speculating on how this shift will impact the day-to-day […]

Read More

Generative AI’s Impact on Developers

There is a growing belief in the developer community that, by 2040, software development will be performed largely by machines rather than humans. Software development will undergo a radical change with the combination of machine learning, artificial intelligence, natural language processing and code generation that draws from large language models (LLMs). Most organizations believe that AI will bring a 30-40% improvement in overall developer productivity. While these arguments have some merit, I do not believe developers will be fully replaced. Instead, I believe generative AI can augment developers to support faster development and higher-quality code. I’ll address the impact of generative AI on the development community under three pillars:

Automation and productivity
Quality engineering and compliance
Ways of working

Automation and Productivity

There will be a focus on increased automation all the way from business planning to operations of applications. LLMs can help provide better alignment of user stories to business requirements. In fact, one of the best use cases for generative AI during planning phases is to auto-generate user stories from business requirements documents. Since ambiguity of requirements and guesswork are taken out of the equation, one can expect a clearer “definition of done” through the auto-creation of acceptance criteria. In a typical development cycle, 15%-20% of coding defects are attributed to improperly defined requirements. Generative AI augmentation can result in a significant reduction of those defects. Generative AI augmentation can also help developers with better planning and estimation of work. Rather than relying on personal experience or home-grown estimation models, LLMs can better predict the complexity of work and continually learn and adapt through multiple development sprints. AI-augmented code creation can allow developers to focus on solving complex business problems and creative thinking rather than worrying about repetitive code generation. Over the last decade or so, the perception of software development as a creative pursuit has been fading. With AI, I think more and more younger developers will be attracted to the field. AI will put the “fun” back in coding. AI-assisted DevOps and continuous integration will further accelerate code deployments so developers can focus more on solving complex business problems. Deployment failures due to human error can be drastically reduced. Newer and less experienced developers can also generate higher-quality code with AI augmentation, leading to better overall consistency of code in large programs. Overall, from a development standpoint, I think AI augmentation will free up 30% of developers’ time to work on enhancing user experience and other value-added tasks.

Quality Engineering and Compliance

In a hybrid cloud world, solutions will become more distributed than ever, making system architecture more complex. LLMs can assist in checking design documents and architecture work products for conformance to industry and corporate standards and guidelines. In essence, LLMs can act as virtual Architecture Review Boards. In a typical development life cycle, architecture/design reviews and approvals make up 5%-8% of the work, and augmenting the process with generative AI capabilities can cut that time in half. Security compliance for cloud-based solutions is imperative. LLMs can assist in ensuring such compliance very early in the development life cycle, leading to more predictable deployments and timely program delivery. Generative AI-augmented test case creation can optimize the number of test cases needed to support the development while increasing the […]

Read More

Will ChatGPT Replace Human Software Developers? Probably Not

Since the release of ChatGPT, there has been a great deal of hype around generative AI and how companies can leverage it to cut costs and democratize software and application development. Naturally, with discussions of cost-cutting and democratization come whispers about what will happen to software developers. This is a real and valid concern, but software developers’ skills, expertise and creativity are still very much needed. While generative AI and AI code generation tools like ChatGPT have shown some promise and potential benefits, they are still in their infancy—like many other innovative technological advancements. We also don’t know what scenarios they may present down the road or their true abilities when the technology matures. For instance, how will it integrate with other technologies? We don’t know what will happen when a ChatGPT-generated line of code breaks or needs to be changed. We don’t know if it can provide a novel solution to a unique problem or what security threats it will present. Given these unknowns, technology executives should think twice about replacing experienced and creative technology talent, such as software developers, with AI code generators. Will ChatGPT create novel code to solve a unique problem never encountered before? Probably not.

A Tale as Old as Time (Or at Least a Decade)

The technology industry has been searching for and developing new ways to make certain software development tasks easier and more streamlined for years. One example of this is low-code/no-code. The notion of simplifying application development and replacing software developers with laypeople (citizen developers) has been around for more than a decade now, as low-code and no-code solutions have grown more popular. These solutions have promised that companies don’t need technical talent to get their software and app projects off the ground. However, if you look at the impact of these solutions today, their use can result in large amounts of technical debt and almost always requires the skill of experienced software developers. The reason? Building complex software and applications is extremely difficult; it’s an art. Low-code and no-code solutions have their rightful place and can make things easier if a company is looking to launch a simple app or static web page. These solutions can increase the pace of development and time-to-market and enable everyday people without development skills to get projects moving. However, they are not a complete solution and often overlook aspects of development that a human software developer would typically address. Without a skilled expert involved, low-code/no-code platforms often can’t solve a company’s unique problem. So, how does this relate to AI code generators like ChatGPT? Here’s how.

A Similar Situation—With One Key Difference

When thinking about their place in the development process, AI code generators are not that different from low-code or no-code solutions. The thinking is that they, too, will enable non-technical individuals to create software and applications with ease. Yet there is one key difference—they promise expertise, too. But is the expertise coming from the AI code generator or the person piloting it? The answer is simple: it is not from the code generator. There have been examples of companies and individuals that have tried using ChatGPT to build code, and they have appeared to be successful. However, without the input of the individuals using it, it never would […]

Read More

How product managers can get more out of user calls

One of the core jobs of product managers is to speak with users to better understand their needs, pain points and the context in which they operate and use our products. But not all user calls are the same. There are three prominent types of user calls:

Discovery or problem validation calls
Roadmap discussions
Solution validation calls

Here’s an in-depth look at how we approach the three types of user calls at GitLab.

Discovery calls

Discovery or problem validation calls are product managers’ most crucial conversations with users. Discovery calls are typically set up to learn about our users in a targeted way. These calls help build a better understanding of users’ pain points. For discovery, we need a recipe for repeatable, comparable user calls. For this reason, we should create an interview script and follow that script on all the user calls. This does not mean these calls are robotic and devoid of improvisation, not at all! The script should provide the backbone of the discussions. We can adjust it either during the call or in advance, based on prior knowledge about the user. Good discovery calls typically take the form of a deep-dive conversation: we know the script by heart and can move back and forth around it, always asking the questions that fit the conversation. Finding the right users is one of the most challenging parts of discovery calls. Thankfully, with GitLab, this is relatively easy. We can always reach out to the most active users on issues and invite them to a call. Another technique I employ is to find users in the Cloud Native Computing Foundation and Kubernetes communities’ Slack channels and in articles on Medium. This way, I can also find non-GitLab users, a set of people likely more valuable to interview than existing users. Finally, we can recruit users with the support of the account managers, who are always helpful in connecting PMs with users. Asking users about their needs shows them that we genuinely care about them. There are at least two distinct kinds of discovery calls: PM-led and UX-led. UX research typically works on projects with a strict scope. For PM-driven calls, a great framework is Teresa Torres’s “continuous discovery.” With continuous discovery, we build a deep understanding of our users and surface well-understood opportunities. The technique allows us to get a broad view and to dive deep into specific aspects of our problem space when needed.

Roadmap discussions

Roadmap discussion calls are typically initiated by sales or account management teams. Product managers are asked to join the prospect/customer call to strengthen our position and show how much we care for the customer. To prepare for roadmap discussions, PMs should have an effective way to present the roadmap. This typically happens in the form of slides. A diligent PM might even prepare something specifically for the client. During these calls, the user/customer/prospect will typically ask the questions, and the PM responds. Our role in these calls is to represent the truth. We might be tempted to paint a rosier picture of the current or expected state of the product than is actually true, and we should avoid making time-bound promises. What are the expected outcomes of roadmap discussions? They can help strengthen our position with the user. Remember that these calls […]

Read More

How to quickly (and successfully) onboard engineers

No one ever said hiring was easy. As a matter of fact, talent hiring and retention are some of the hardest things for any software company to get right. According to a recent article at Developer Pitstop, the average engineer stays at a job for only about two years before moving on, and this tenure is shrinking over time. When we look at the typical timeline for engineers in a new role, we usually see something like:

Learning and adaptation (3-6 months): Coming to grips with the new company, team and their processes.
Creating value for the organization (6-12 months): Adding value to the business by becoming a functioning member of the team.
Becoming a role expert (6-18 months): Owning the role completely and helping to shape the direction of the team.

At GitLab, we pride ourselves on an outstanding onboarding process that reduces the amount of time an engineer spends in the learning and adaptation bracket and accelerates their evolution into the creating-value bracket. We do this for two main reasons:

Quicker integration: We aim to have engineers ship production code in less than one week, and fully onboard them in less than three months.
Reduced turnover: Engineers who have an awesome onboarding experience tend to stay with the same company longer.

The bottom line is that with these benefits, investing in an amazing onboarding process gives you the highest ROI on your hiring initiatives. So, now that we know why we need to onboard quickly and correctly, let’s talk about how we do it at GitLab.

Overview

💯 Before day one
💥 It’s all about the onboarding issue
🥂 Pick the right onboarding buddy
👌 Pair, pair, and more pairing
🖐 All the coffee chats
🤘 Tailor the experience to the role
🚢 Ship some code in a week or less
💬 Let’s get (and give) some feedback

💯 Before day one

The best onboarding processes start as soon as the candidate has officially accepted the offer. This is done in a few ways:

An onboarding issue is created with tasks for the hiring manager, their buddy and People Experience (HR).
The hiring manager selects the right onboarding buddy for the engineer and communicates expectations (more on this later).
The engineer’s accounts (email, GitLab account, Okta, etc.) are created and their hardware is shipped.
GitLab reaches out via email to let the candidate know what the onboarding process looks like.
The hiring manager reaches out to the engineer via email to set up a coffee chat on day one, as the initial process might seem overwhelming.

For us, the most important aspect is communication with the engineer to ensure they are set up for success. We provide them with access to their onboarding issue, helpful video guides for getting started and a primer on how to navigate our handbook like a pro. This matters because we know that if we stop communicating with the engineer after signing, we risk creating uncertainty, introducing inefficiency or even losing them to another offer during that time.

💥 It’s all about the onboarding issue

At GitLab, our onboarding issue is the most effective tool we have for […]

Read More

Open core is worse than plugins… and that’s why it’s better

Open core is obviously a horrible approach to creating a product with an ecosystem of extensions and integrations: There are no proper protocols and interfaces. Instead, anyone can just add their integration to the code base and even adjust said code base to their needs if it doesn’t fit. So why have we been using the “Worse” approach at GitLab for many years now, with great success? Because Worse is Better (a term conceived by Richard P. Gabriel). Of course, it turns out that “Worse” is actually even better than Worse is Better suggested. Gabriel’s original argument was that (slightly) intrinsically worse but simpler and easier-to-implement software has better survival characteristics than better-designed, more complex software, and thus will consistently win in the marketplace. At GitLab, we have found that this is basically true, which is why we, for example, favor “boring technology,” even if it might not be the best possible solution for a given scenario. But this doesn’t tell the whole story: It turns out that such software is not just more successful, it also ends up being qualitatively better in the end.

Worse is even better

It is important to note that Gabriel’s original argument was not that bad software wins out. In fact, both his “worse” and his “better” have the same qualities:

Simplicity, of interface and implementation
Correctness
Consistency
Completeness

However, his “worse” and his “better” place slightly different weights on these characteristics: The (worse) New Jersey school favors simplicity of implementation over simplicity of interface, whereas the (better) MIT school favors simplicity of interface, even at the cost of a more complex implementation. If a simple interface can be achieved with a simple implementation, both schools agree; the difference comes when there are tradeoffs to be made. What makes worse even better, and what Gabriel didn’t take into account even in later versions, is the tremendous value of feedback loops. Being early doesn’t just let the New Jersey approach win in the marketplace, it also allows it to collect feedback much earlier and much more quickly than the MIT approach. Paul MacCready won the first Kremer Prize not by initially setting out to build the best human-powered aircraft, but by building the one that was easiest to repair, in order to gather feedback more quickly. While other teams took a year or more to recover from a crash, his plane sometimes flew again the same day. And so it was exactly this willingness to lose sight of the prize that resulted in him winning it. In much the same way, it is these quick feedback loops that a “worse” approach enables, started much earlier, that eventually lead to a better product.

The problem with plugins

At least since the success of Photoshop, a proper plugin interface has been recognized as The Right Way to make software both more compelling for users and harder to leave behind, by creating a third-party ecosystem that provides useful functionality without the vendor having to provide all of that functionality themselves. The idea was so successful that systems like OpenDoc took it further, becoming just a set of plugins with no real hosting application. None of these systems succeeded in the marketplace. One of the reasons is that good plugin interfaces are not just hard, but downright fiendishly difficult […]

Read More

5 Tips for managing monorepos in GitLab

GitLab was founded 10 years ago on Git because it is the market-leading version control system. As Marc Andreessen observed in 2011, software is eating the world, and we see teams and code bases expanding at incredible rates, testing the limits of Git. Organizations working on enormous, monolithic repositories are experiencing significant slowdowns in performance and added administration complexity.

Why do organizations develop on monorepos?

Great question. While some might believe that monorepos are a no-no, there are valid reasons why companies, including Google and GitLab (that's right, we operate a monolithic repository!), choose to do so. The main benefits are:

- Monorepos can reduce silos between teams, streamlining collaboration on the design, development, and operation of different services, because everything lives in the same repository.
- Monorepos help organizations standardize on tooling and processes. If a company is pursuing a DevOps transformation, a monorepo can help accelerate change management when it comes to new workflows or the rollout of new tools.
- Monorepos simplify dependency management, because all packages can be updated in a single commit.
- Monorepos offer unified CI/CD and build processes. Having all services in a single repository means you can set up one system of pipelines for everyone.

While we still have a ways to go before monorepos (monolithic repositories) are as easy to manage as multi-repos in GitLab, we have put together five tips and tricks to maintain velocity while developing on a monorepo in GitLab.

1. Use CODEOWNERS to streamline merge request approvals

CODEOWNERS files live in the repository and assign an owner to a portion of the code, making it super efficient to process changes. Investing time in setting up a robust CODEOWNERS file, which you can then use to automate merge request approvals from the required people, will save developers time down the road.
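To illustrate, a CODEOWNERS file maps path patterns to the users or groups responsible for them; when several patterns match a file, the last match wins, so a catch-all default goes first. A minimal sketch for a monorepo (the group handles and directory names are hypothetical):

```
# Hypothetical CODEOWNERS for a monorepo.
# Default owners for anything not matched by a later, more specific rule.
* @example-org/maintainers

# Each service directory is owned by the team that operates it.
/services/payments/ @example-org/payments-team
/services/search/   @example-org/search-team

# Documentation changes go to the technical writers.
*.md @example-org/docs-team
```

With a file like this in place, changes touching /services/payments/ are routed to the payments team rather than to every maintainer of the repository.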
You can then require that merge requests be approved by Code Owners before merge. The Code Owners specified for the files changed in a merge request will be notified automatically.

2. Improve Git operation performance with Git LFS

A universal truth of Git is that managing large files is challenging. If you work in the gaming industry, I am sure you've been through the annoying process of trying to remove a binary file from the repository history after a well-meaning coworker committed it. This is where Git LFS (Large File Storage) comes in. Git LFS keeps large files in a separate location so that they do not balloon the size of the repository; the GitLab server communicates with the Git LFS client over HTTPS. You can enable Git LFS for a project by toggling it in the project settings. All files stored in Git LFS can be tracked in the GitLab interface, which marks them with an LFS icon.

3. Reduce download time with partial clone operations

Partial clone is a performance optimization that allows Git to function without a complete copy of the repository. The goal of this work is to let Git better handle extremely large repositories. As we just discussed, storing large binary files in Git is normally discouraged, because every large file added is downloaded by everyone who clones or fetches changes thereafter. These downloads are slow and problematic, especially when working from a slow or unreliable internet connection. Using partial clone with a […]
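A blobless partial clone can be tried end to end against a throwaway local repository; the following is a minimal sketch, assuming a reasonably recent Git (2.22 or later) is installed, with all paths and names purely illustrative:

```shell
#!/bin/sh
# Sketch: demonstrate a blobless partial clone (--filter=blob:none)
# against a local "server" repository in a temporary directory.
set -e
tmp=$(mktemp -d)

# Create a source repository containing a largish binary file.
git init -q "$tmp/src"
cd "$tmp/src"
git config uploadpack.allowFilter true   # let clients request partial-clone filters
dd if=/dev/zero of=big.bin bs=1024 count=512 2>/dev/null
echo "hello" > README.md
git add .
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial commit"

# Blobless clone: commits and trees are transferred up front,
# while blobs are fetched lazily, only when they are needed.
cd "$tmp"
git clone -q --filter=blob:none "file://$tmp/src" clone

# The full commit history is present even though blobs were filtered.
cd "$tmp/clone"
git rev-list --count HEAD
```

The checkout of the default branch still works, because Git fetches the blobs for HEAD on demand; only the historical blobs stay remote until requested.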

Read More