Security

Introducing auto-triage rules for Dependabot

Since the May beta release of our GitHub-curated Dependabot policies that detect and close false positive alerts, over 250k repositories have manually opted in, with more than 1 in 10 alerts resolved automatically on average. The impact so far: auto-dismissal of millions of alerts that would otherwise have demanded a developer’s attention to manually assess and triage. Starting today, you can create your own custom rules to control how Dependabot auto-dismisses and reopens alerts, so you can focus on the alerts that matter without worrying about the alerts that don’t. Today’s ship, our public beta of custom auto-triage rules, makes that engine available to everyone, so you can delegate specific decision-making tasks to Dependabot with your own custom rules.

Today’s release is part of a series of ships that make it easier to scale your security strategy, whether you’re an open source maintainer or an application developer on a centralized security team. Custom auto-triage rules for Dependabot are free for public repositories and available as part of GitHub Advanced Security for private repositories. Together with auto-triage presets and a renewed investment in alert metadata, custom auto-triage rules relieve developers of the overhead of alert management tasks so they can focus on creating great code.

What are auto-triage rules?

Auto-triage rules are a powerful tool to help you substantially reduce false positives and alert fatigue while better managing your alerts at scale. Rules contain criteria that match the targeted alerts, plus the decision that Dependabot will perform on your behalf.

What can you do with rules?

With auto-triage rules, you can proactively filter out false positives, snooze alerts until a patch is released, and, as rules apply to both future and current alerts, manage existing alerts in bulk.

What behaviors can Dependabot perform?

For any existing or future alerts that match a custom rule, Dependabot will perform the selected behavior accordingly. Our first public beta release covers ignore and snooze-until-patch functionality with repository-level rules. We will follow up soon with support for managing rules at the organization level. Both are managed via the auto-dismiss alert resolution, which provides visibility into automated decisions, integrates with existing reporting systems and workflows, and ensures that alerts can be reintroduced if alert metadata changes.

What alert criteria are supported by custom rules?

Custom rules can target alerts based on multiple criteria, including the following attributes as of today:

- severity: Alert severity, based on CVSS base score, across the following values: low, medium, high, and critical.
- scope: Scope of the dependency: development (devDependency) or runtime (production).
- package-name: Packages, listed by package name.
- cwe: CWEs, listed by CWE ID.
- ecosystem: Ecosystems, listed by ecosystem name.
- manifest: Manifest files, listed by manifest path.

Who can use this feature?

GitHub-curated presets, such as auto-dismissal of false positives, are free for everyone and on all repositories. Custom auto-triage rules are available for free on all public repositories, and available as a feature of GitHub Advanced Security for private repositories.

What’s next for Dependabot?

In addition to gathering your feedback during the public beta, we’re working to support additional alert metadata and enforcement options to expand the capabilities of custom rules.
We’re also working on new configurability options for Dependabot security updates to give you more control over remediation flows. Keep an eye on the GitHub Changelog for more! In the meantime, try out […]
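While the public beta manages rules through the repository settings UI, you can approximate what a rule does today with the Dependabot alerts REST API. A minimal sketch, assuming a token with the security_events scope; OWNER, REPO, and the alert number 42 are placeholders, and the filter values mirror the rule criteria listed above:

```sh
# List open low-severity alerts on development-scoped dependencies:
# the kind of alerts a custom auto-triage rule might target.
curl -s \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/dependabot/alerts?state=open&severity=low&scope=development"

# Dismiss one matching alert the way a rule would, recording a reason.
curl -s -X PATCH \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/dependabot/alerts/42" \
  -d '{"state":"dismissed","dismissed_reason":"tolerable_risk","dismissed_comment":"Dev-only dependency, low severity"}'
```

Unlike a real auto-triage rule, this dismissal is one-shot: it will not reopen the alert if its metadata later changes.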

Read More

Top 5 compliance features to leverage in GitLab

GitLab’s compliance management capabilities are designed to integrate compliance into development and deployment processes from the start. As a tenured compliance professional and member of our Security Compliance team here at GitLab, I can tell you from experience that it is always easier to design your processes to be secure and compliant from the start than it is to re-engineer existing processes to be compliant.

Why should you care about your GitLab instance being secure and compliant? In addition to reducing the risk of a breach and lowering costs, there are regulatory and compliance requirements to consider. Regulatory and compliance audits are unavoidable and can be time-consuming and stressful. However, GitLab has many easy-to-use, built-in features that may help fulfill your organization’s compliance requirements and make your environment more secure. Here at GitLab, these are features we use every day. The best part is, most of the features I’ll outline below are included as free features. Note: I’ll add an asterisk (*) next to any feature which is not available on our free tier. Here’s the tl;dr list:

1. Enable MFA

Enabling MFA is simple and reduces the risk of attacks by making it more difficult to gain access to accounts. MFA can be enforced for all users in your GitLab instance in the admin center. Alternatively, MFA can be configured for accounts individually. You can learn how to enable MFA in our GitLab documentation.

Compliance standards and GitLab controls for MFA

MFA relates to the following compliance standards:

- AICPA TSC CC6.1
- ISO 27001 2013 A9.2.3, A9.2.4, A.9.3.1, A9.4.3
- NIST 800-53 IA-5, IA-5(1), IA-2(1), IA-2(2)

Illustrative GitLab controls for MFA:

- IAC-02: GitLab Inc. has implemented mechanisms to uniquely identify and authenticate organizational users and processes acting on behalf of organizational users.
- IAC-06: GitLab Inc. has implemented automated mechanisms to enforce MFA for: remote network access; and/or non-console access to critical systems or systems that store, transmit and/or process sensitive data.

2. Review privileged access for critical projects

Undoubtedly, one of the biggest risks to your environment is logical access. To reduce the risk, we recommend administrators ensure access is restricted based on the principle of least privilege. Access should be monitored continuously, as access changes can occur multiple times daily in most organizations. In order to appropriately review access in your GitLab instance, it is important to first understand the access security structure within GitLab.

Breaking down the access security structure

Within GitLab, there are six different roles that can be assigned to users: Guest, Reporter, Developer, Maintainer, Owner, and Administrator. Privileged access within GitLab is considered to be the Administrator, Owner, and Maintainer roles. GitLab Administrators receive all permissions. Owners and Maintainers are considered administrative because these roles have permissions to perform highly sensitive actions, including but not limited to: managing merge settings; enabling or disabling branch protection; managing access to a project; managing access tokens; exporting a project; and deleting issues, merge requests, and projects. As privileged access poses the highest risk to your environment, these roles should be tightly controlled.
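To make that review concrete, here is a minimal sketch that lists privileged members of a project via the GitLab REST API; the hostname and project ID are placeholders, and access levels 40 and 50 correspond to Maintainer and Owner:

```sh
# List all project members, including inherited ones, and keep only
# Maintainers (access_level 40) and Owners (access_level 50).
curl -s \
  -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/42/members/all?per_page=100" \
  | jq -r '.[] | select(.access_level >= 40) | "\(.username)\t\(.access_level)"'
```

Note that instance Administrators do not appear in project membership lists unless they were explicitly added, so admin accounts should be reviewed separately in the admin area.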
Some best practices for ensuring access is restricted based on the principle of least privilege include:

- When privileged access is requested, ensure appropriate approvals are received prior to access being provisioned. Best practice is to obtain approvals from the data owner and the manager of the […]

Read More

GitLab’s commitment to enhanced application security in the modern DevOps world

With GitLab 14, we saw a deep emphasis on modernizing our DevOps capabilities. This modernization enabled enhanced application security and strengthened collaboration between developers and security professionals. We saw enhancements such as:

- a global rule registry and customization for policy requirements, with support for separation of duties
- a newly developed browser-based Dynamic Application Security Testing (DAST) scanner used to test and secure modern APIs and single-page applications
- more support for different languages using Semgrep
- new vulnerability management capabilities to increase visibility

With the GitLab 15 release, we can see how our commitment to enhancing application security across the board is stronger than ever. In this blog post, I will provide details on how GitLab is committed to enhancing not only security, but efficiency. Discover how GitLab 15 can help your team deliver secure software while maintaining compliance and automating manual processes. Save the date for our GitLab 15 launch event on June 23rd!

GitLab 15 security features

With every GitLab release, there are plenty of enhancements to our security tools, and GitLab 15 is no exception: it ships a boatload of security enhancements running across different stages of the software development lifecycle. I have created a video showing some of the coolest new security features in GitLab 15.

Scanners moved to GitLab Free Tier

A lot of our scanners were only part of GitLab Ultimate in the past. However, over time, certain scanners have been moved over to the GitLab Free Tier, enabling you to enhance the security of your application no matter what tier of GitLab you are using.

- SAST: introduced in 10.3, moved to Free in 13.3
- Container Scanning: introduced in 10.4, moved to Free in 15.0
- Secret Detection: introduced in 11.9, moved to Free in 13.3

Within the free tier, you are able to download the reports generated by the security scanners. This allows developers to see what vulnerabilities were detected within their source code and container images. However, there are benefits to upgrading to Ultimate, which are described below.

Benefits of upgrading to Ultimate

Some organizations have multiple groups and projects they are working on, as well as a security team which manages all the detected vulnerabilities. While having security scan reports ready for download is useful, it is not exactly scalable across an organization. This is where Ultimate assists in enhancing DevSecOps efficiency.

Scanners

While the GitLab Free Tier includes SAST, Secret Detection, and Container Scanning to find vulnerabilities in your source code, when you upgrade to Ultimate you are provided with even more scanners.

Developer Lifecycle

In Ultimate, there is enhanced functionality within the developer lifecycle. The merge request a developer creates will contain a security widget which displays a summary of the new security scan results. New results are determined by comparing the current findings against existing findings in the default branch. The results contain not only detailed information on the vulnerability and how it affects the system, but also solutions for mitigating or resolving the issue. These vulnerabilities are also actionable, meaning that a comment can be added in order to notify the security team so they may review, enhancing developer and appsec collaboration.
A confidential issue can also be created so that developers and security professionals can work together towards a resolution safely […]
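As a starting point for the free-tier scanners discussed above, here is a minimal sketch that wires the three maintained CI templates into a project; the heredoc simply writes a fresh .gitlab-ci.yml, so adapt it if your pipeline already exists:

```sh
# Enable the three free-tier scanners via GitLab's maintained templates.
cat > .gitlab-ci.yml <<'EOF'
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
EOF
```

SAST and secret detection run against the repository contents alone; container scanning additionally expects an image built and pushed by an earlier pipeline stage.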

Read More

Terraform as part of the software supply chain, Part 1 – Modules and Providers

When talking about Terraform security, there are many resources covering the security aspects of the infrastructure surrounding certain Terraform configurations. The security of Terraform itself, and the things which could go wrong when running it, has had very little coverage so far. Some previously published work I’m aware of includes:

“Terraform providers and modules used in your Terraform configuration will have full access to the variables and Terraform state within a workspace. Terraform Cloud cannot prevent malicious providers and modules from exfiltrating this sensitive data. We recommend only using trusted modules and providers within your Terraform configuration.”

The blog post you’re reading is part one of a three-part series examining the supply chain aspects of Terraform, and it aims to look at malicious Terraform modules and providers. I’ll also give recommendations on securing the process of running Terraform against modules and providers gone rogue. The next two blogs in the series will build upon these findings and cover more in-depth topics and vulnerabilities.

Provider security

Providers in Terraform are executable binaries, so if a provider turns malicious it’s certainly “game over” in the sense that it can do whatever the host OS it runs on allows. Providers need to have a signature which gets validated by Terraform upon installation of the provider. Since version 0.14, Terraform creates a dependency lock file which records checksums of the used providers in two different formats.

zh and h1 checksums

The first format, zh, is simply a SHA256 hash of the zip file which contains a provider for a specific OS/hardware platform combination. The h1 hash is a so-called “dirhash” of the provider’s directory. So if we look at the following lock file .terraform.lock.hcl, we can observe the two different types of hashes:

```hcl
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.

provider "registry.terraform.io/hashicorp/aws" {
  version = "4.11.0"
  hashes = [
    "h1:JTgGUEVVuuv82X0ePjDM73f+ZM+NfLwb/GGNAOM0CdE=",
    "zh:3e4634f4babcef402160ffb97f9f37e3e781313ceb7b7858fe4b7fc0e2e33e99",
    "zh:3ff647aa88e71419480e3f51a4b40e3b0e2d66482bea97c0b4e75f37aa5ad1f1",
    "zh:4680d16fbb85663034dc3677b402e9e78ab1d4040dd80603052817a96ec08911",
    "zh:5190d03f43f7ad56dae0a7f0441a0f5b2590f42f6e07a724fe11dd50c42a12e4",
    "zh:622426fcdbb927e7c198fe4b890a01a5aa312e462cd82ae1e302186eeac1d071",
    "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
    "zh:b0b766a835c79f8dd58b93d25df8f37749f33cca2297ac088d402d718baddd9c",
    "zh:b293cf26a02992b2167ed3f63711dc01221c4a5e2984b6c7c0c04a6155ab0526",
    "zh:ca8e1f5c58fc838edb5fe7528aec3f2fcbaeabf808add0f401aee5073b61f17f",
    "zh:e0d2ad2767c0134841d52394d180f8f3315c238949c8d11be39a214630e8d50e",
    "zh:ece0d11c35a8537b662287e00af4d27a27eb9558353b133674af90ec11c818d3",
    "zh:f7e1cd07ae883d3be01942dc2b0d516b9736a74e6037287ab19f616725c8f7e8",
  ]
}
```

The zh entries can also be found in the provider’s v4.11.0 release within the SHA256SUMS file. To understand the single h1 dirhash entry, we need to have a look at the provider’s directory.
In our Terraform project it is constructed like this:

```sh
$ ls .terraform/providers/registry.terraform.io/hashicorp/aws/4.11.0/linux_amd64/
terraform-provider-aws_v4.11.0_x5
$ cd .terraform/providers/registry.terraform.io/hashicorp/aws/4.11.0/linux_amd64/
$ sha256sum terraform-provider-aws_v4.11.0_x5
34c03613d15861d492c2d826c251580c58de232be6e50066cb0a0bb8c87b48de  terraform-provider-aws_v4.11.0_x5
$ sha256sum terraform-provider-aws_v4.11.0_x5 > /tmp/dirhash
$ sha256sum /tmp/dirhash
253806504555baebfcd97d1e3e30ccef77fe64cf8d7cbc1bfc618d00e33409d1  /tmp/dirhash
$ echo 253806504555baebfcd97d1e3e30ccef77fe64cf8d7cbc1bfc618d00e33409d1 | ruby -rbase64 -e 'puts Base64.encode64 [STDIN.read.chomp].pack("H*")'
JTgGUEVVuuv82X0ePjDM73f+ZM+NfLwb/GGNAOM0CdE=
```

The dirhash, called h1 in the lock file, is created from an alphabetically sorted list of sha256sum output lines (hash, two spaces, filename). Once this list is run through sha256sum again, the resulting hash is taken in its binary representation and then converted to Base64.

From an attacker’s perspective, the interesting part about the lock file is that it can contain multiple zh and h1 hashes per provider. It is also noteworthy that those two types don’t have to have any relationship. If we modify a downloaded provider’s content on disk, we can simply place the corresponding h1 hash next to any other h1 in the lock file. As there can be multiple entries, we would not break any legitimate installation and would just allow-list a modified provider directory on disk on top of what’s already allowed.

Lessons learned here

Put your .terraform.lock.hcl under version control (Terraform even suggests this on the […]
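Beyond committing the lock file, Terraform’s built-in providers lock command can populate it with registry-verified checksums for every platform your team runs Terraform on, not just the machine that happened to run terraform init. A minimal sketch; the platform list is illustrative:

```sh
# Record provider checksums for several platforms in .terraform.lock.hcl,
# so a provider tampered with on any one platform fails verification.
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_arm64 \
  -platform=windows_amd64
```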

Read More

GitLab extends Omnibus package signing key expiration by one year

GitLab uses a GPG key to sign all Omnibus packages created within the CI pipelines to ensure that the packages have not been tampered with. This key is separate from the repository metadata signing key used by package managers and from the GPG signing key for the GitLab Runner. The Omnibus package signing key is set to expire on July 1, 2022 and will be extended to expire on July 1, 2023 instead.

Why are we extending the deadline?

The Omnibus package signing key’s expiration is extended each year to comply with GitLab security policies and to limit the exposure should the key become compromised. The key’s expiration is extended instead of rotating to a new key to be less disruptive for users who verify package integrity prior to installing the package.

What do I need to do?

The only action that needs to be taken is to update your copy of the package signing key if you validate the signatures on the Omnibus packages that GitLab distributes. The package signing key is not the key that signs the repository metadata used by OS package managers like apt or yum. Unless you are specifically verifying the package signatures or have configured your package manager to verify the package signatures, there is no action needed on your part to continue installing Omnibus packages. More information concerning verification of the package signatures is available in the Omnibus documentation.

If you just need to refresh a copy of the public key, you can find it on any of the GPG keyservers by searching for support@gitlab.com or using the key ID of DBEF 8977 4DDB 9EB3 7D9F C3A0 3CFC F9BA F27E AB47. Alternatively, you could download it directly from packages.gitlab.com using the URL: https://packages.gitlab.com/gitlab/gitlab-ce/gpgkey/gitlab-gitlab-ce-3D645A26AB9FBD22.pub.gpg

What do I do if I still have problems?

Please open an issue in the omnibus-gitlab issue tracker.
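For readers refreshing the key, a minimal sketch using the download URL and fingerprint quoted above; the fingerprint check here is a manual comparison:

```sh
# Download the Omnibus package signing key and import it into GPG.
curl -fsSL "https://packages.gitlab.com/gitlab/gitlab-ce/gpgkey/gitlab-gitlab-ce-3D645A26AB9FBD22.pub.gpg" \
  | gpg --import

# Display the imported key so its fingerprint can be compared against
# DBEF 8977 4DDB 9EB3 7D9F C3A0 3CFC F9BA F27E AB47.
gpg --fingerprint support@gitlab.com
```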

Read More

How we run Red Team operations remotely

At GitLab, our Red Team conducts security exercises that emulate real-world threats. This gives us an opportunity to assess and improve our ability to deal with cyber attacks. These types of exercises require a lot of planning, which is traditionally done by getting folks from multiple departments into the same room at the same time to discuss hypothetical scenarios and expected outcomes. Then, actually conducting the attacks and validating the detection and response capabilities is, once again, traditionally done by people who are physically sitting in the same space.

Like many things at GitLab, we are not quite so traditional. Each member of our Red Team is separated from the others by a literal ocean, with about eight hours’ difference in local time between us. Our entire organization works remotely, and the various groups we need to involve in these exercises are spread across the world. We understand our approach is unique. However, more of the workforce is moving to remote work models, so we recently spent some time writing down what works for us when doing these types of complex exercises asynchronously across time zones. Keep reading to see what we came up with and how you can use it yourself.

Defining an asynchronous workflow

If there is one thing we’ve learned about working remotely, it’s that you need to write things down. In a traditional office, it’s possible to have a back-and-forth conversation in a matter of minutes. Conversations across time zones and departments, however, can take days when you’re not co-located. This is why we use our public handbook as a single source of truth for how we run the company, and why we decided to use this same spot to document how our Red Team will work to propose, plan, and execute operations.

Purple Team process

We’ve broken down the process of “Purple Teaming” into nine unique stages. Each of these stages is supported by a GitLab issue template that clearly explains what must be completed in order to move to the next stage. At GitLab, we use the term “Purple Teaming” to describe an exercise where emulated attacks are done in the open: Everyone involved knows what the attack is, when it is coming, and exactly what techniques will be involved. When we perform more traditional “Red Team” exercises where stealth is involved, we use roughly the same process, only with fewer active participants.

When we begin planning an operation, we open a new epic on gitlab.com. Inside that epic, we open nine new issues using the templates linked above; a sketch of this setup follows at the end of this excerpt. Everyone involved in the operation will have access to these issues, and everyone can clearly see what has been completed and what comes next.

While it may seem like a lot of stuffy process, this level of clarity and transparency gives us the freedom to focus more on the interesting work and less on figuring out who should be doing what next. Even better, the process is open source: Anyone with an idea for improvement can simply open a merge request to discuss. This includes you: We would love to hear from the community to continually improve this process. Dive deeper into our workflows […]
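The mechanics of that setup can be sketched with the GitLab REST API; this is an illustration rather than the Red Team’s actual tooling, and the hostname, group and project IDs, and template path are all placeholders:

```sh
# Open the planning epic for a new operation (epics require a paid tier).
curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/groups/123/epics" \
  --data-urlencode "title=Purple Team operation: example scenario"

# Open one stage issue from its template; repeat for each of the nine stages.
curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/456/issues" \
  --data-urlencode "title=Stage 1: Proposal" \
  --data-urlencode "description=$(cat .gitlab/issue_templates/proposal.md)"
```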

Read More

Updates regarding Rubygems ‘Unauthorized gem takeover for some gems’ vulnerability CVE-2022-29176

We want to share the actions we’ve taken in response to the critical Rubygems ‘Unauthorized gem takeover for some gems’ vulnerability (CVE-2022-29176). Upon becoming aware of the vulnerability within Rubygems.org, we immediately began our investigation and contacted Rubygems, who quickly patched the vulnerability. Our Security team tested the usage of gems within our product and across our company and confirmed that the gems GitLab uses from Rubygems.org were no longer vulnerable. At this time, no malicious activity, exploitation, or indicators of compromise have been identified within GitLab.com or customer data. Further, our team’s review of gems used in the GitLab product showed no indication of compromise or integrity violations. There is no action needed by GitLab.com or self-managed users.

Our teams are continuing to investigate and monitor this issue to help protect our products and customers. We will update this blog post and notify users via a GitLab security alert with any future, related updates.
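Although no action is needed on the GitLab side, readers who want to double-check their own Ruby applications can use the community bundler-audit tool; a minimal sketch, an assumption about your workflow rather than part of GitLab’s response:

```sh
# Install bundler-audit and scan the project's Gemfile.lock against
# the ruby-advisory-db, refreshing the advisory database first.
gem install bundler-audit
bundle-audit check --update
```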

Read More

Updates regarding Spring remote code execution vulnerabilities CVE-2022-22965 and CVE-2022-22963

We want to share the actions we’ve taken in response to the critical Spring remote code execution vulnerabilities (CVE-2022-22965 and CVE-2022-22963). Upon becoming aware of the vulnerabilities, we immediately mobilized our Security and Engineering teams to determine usage of this software component and its potential impact within our product, across our company, and within our third-party software landscape. At this time, no malicious activity, exploitation, or indicators of compromise have been identified on GitLab.com. Further, the Java components packaged in our product for both GitLab.com and self-managed instances do not use vulnerable Spring components, and thus are not vulnerable.

Our teams are continuing to investigate and monitor this issue to help protect our products and customers. We will update this blog post and notify users via a GitLab security alert with any future, related updates.
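For readers assessing their own Java services, a minimal sketch for surfacing Spring artifact versions in a Maven project; the fixed-version numbers come from the upstream advisories:

```sh
# List resolved dependencies and filter for Spring artifacts.
# CVE-2022-22965 is fixed in Spring Framework 5.3.18 / 5.2.20;
# CVE-2022-22963 is fixed in Spring Cloud Function 3.2.3 / 3.1.7.
mvn -q dependency:list | grep -i 'org\.springframework'
```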

Read More

5 essential reasons to use InterBase in 2020

InterBase in 2020 continues to be one of the hidden gems of the relational database world. From its inception in the early 1980s, through mainstream adoption and evolution under Borland, InterBase looks back at a track record that spans decades; at times defining the standard that all other databases were measured against. With Embarcadero acquiring the Borland development portfolio in 2008, InterBase has again been brought up to speed with the latest technological advances, even surpassing them with features like Change Views. Thanks to steady refactoring and evolution since Embarcadero took over, its performance and scope have seen radical gains. Once again InterBase is on the cutting edge, synonymous with performance, security, and platform diversity.

The optimizations invested in our gentle giant over the past eight years alone are too many to list. Embarcadero has done an amazing job of modernizing this much loved and, dare I say, archetypal relational database. At the same time, they have managed to retain the functionality that is quintessentially InterBase: features that set the product apart. For an old Delphi developer like myself, using InterBase in my production environment again is an emotional experience. InterBase was part of my university curriculum and used in my first commercial software development alongside Delphi. Familiar yet unmistakably modern, fresh yet mature and established.

I want to present five good reasons why InterBase should be your next database. Writing about a subject I am passionate about easily turns into a novel, which is why I am limiting the features to a modest five. Let’s jump in and look at why InterBase in 2020 should be your next database.

1: Platform Diversity

The world of technology has changed dramatically in a very short time. The way that technology evolves, be it software or hardware, is typically through sudden, unexpected leaps. The mobile revolution of 2007, spearheaded by Steve Jobs as he unveiled the iPhone at the Apple developer conference in San Francisco, was one such leap. Overnight, the criteria for software development were irrevocably changed. Fast forward to 2020, and two-thirds of the planet’s population are walking around with a proverbial super-computer in their pockets, each filled with applications ever-growing in complexity, and with a very real need for reliable data persistence.

Today business is conducted more and more on mobile devices, and with that, the ability to deploy software to different platforms, operating systems, and hardware is a necessity. Multi-platform computing is now the prerequisite that all developers, regardless of programming language, must base their strategy on. When you need multi-platform support, InterBase is a pioneer ahead of its time. Already in the late 80s, InterBase was available for a variety of computer systems: from large and powerful business machines running Unix, to more modest home computers like the Apollo or the Commodore Amiga. The targets of 2020 are very different, but InterBase remains the same versatile and platform-independent database system that it has always been. Today, it can be deployed to all leading platforms and operating systems: Windows, Linux, macOS, Android, and iOS. InterBase also supports heterogeneous OS connectivity across all supported platforms. The ability to use the same database on multiple architectures is by far my favorite feature. It saves time, reduces cost, and makes life significantly easier during maintenance.
2: Internet of Things

With the […]

Read More