From the blog

Manager of France’s .fr domain selects GitLab for its DevSecOps capabilities

Association Française pour le Nommage Internet en Coopération (Afnic) is a longstanding French nonprofit that manages .fr domain names. Chosen 20 years ago by the French State to operate the .fr country code top-level domain, Afnic’s motto is “reliability first.” Afnic uses GitLab, The One DevOps Platform, to help sustain that motto by modernizing its software development environment.

Afnic’s mission as the French national top-level domain registry is to bring together public authorities, Internet users, and domain name professionals to build a secure, stable Internet that is open to innovation and in which the French Internet community plays a leading role. An outage of such a digital service could prevent the provisioning of other services that rely on it, and could thus affect key economic and societal activities.

Afnic started using GitLab about four years ago to build and secure the brand-new version of its Shared Registry System (SRS). The SRS is a platform that manages domain names from initial registration through publication in the DNS database, along with every update during a domain’s life, including contacts, name servers, and DNSSEC keys, according to Richard Coffre, Afnic’s principal product manager. Since the project began, all of the underlying technologies have changed. Previously, Afnic’s team relied mainly on Java and Perl; today it uses Kubernetes, Angular, the latest version of Java, and Docker, among others. Security is paramount, and the team uses private clouds: Afnic has its own data centers in France and in colocation facilities all over the world.

Modernizing software development with automation and integration

Afnic selected GitLab to automate and integrate its deployment processes. Previously, most tasks were performed manually; now Afnic’s team follows a DevSecOps philosophy and governance model.
They wanted one DevOps platform with state-of-the-art CI/CD capabilities, the ability to quickly onboard new developers, and features to improve compliance and monitoring. Now, GitLab is one of the core components of Afnic’s systems. The company’s use of GitLab expanded as it deployed new versions of Java, Docker, and other technologies. “We wanted to take a big step to align our technology with the state of the market,” Coffre says, and after surveying the development team, the choice was GitLab. The team is integrating GitLab with Jira, which is providing a lot of value, he adds. Now, in addition to developers, Afnic’s database administrators and network administrators use GitLab. The team uses Docker for images, along with Ansible. Jira handles ticketing and is linked to GitLab, with Confluence serving as a wiki for documentation.

What GitLab brings to the table

Afnic’s goal is to increase automation, keep everything in the same place, and let anyone retrieve the proper version at any time. “That’s the strength of GitLab,” Coffre says. “That’s also why we chose it because it’s one of the leaders. Like many modern source code management systems, GitLab allows our developers to concurrently create source code. But it does it easily, giving us the possibility to do it safely, remembering our motto.” Previously, Afnic used only open source tools that it had to customize, which Coffre says was not efficient on a daily basis. To manage source code […]
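Pipelines like the one Afnic describes are defined in a .gitlab-ci.yml file at the root of the repository. The following is a minimal, hypothetical sketch of such a DevSecOps pipeline; the job names, images, and deployment target are illustrative, not Afnic’s actual configuration:

```yaml
# Hypothetical sketch of a DevSecOps pipeline, not Afnic's real configuration.
stages: [build, test, deploy]

include:
  - template: Security/SAST.gitlab-ci.yml   # GitLab's built-in static analysis scanning

build-image:
  stage: build
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # "srs" is a placeholder deployment name for illustration only
    - kubectl set image deployment/srs srs="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: production
```

The included SAST template is what folds security scanning into the same pipeline that builds and deploys, which is the “Sec” in the DevSecOps workflow described above.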

Read More

GitLab provides small business with a professional, mature DevOps platform

Blonk is an international leader in environmental and sustainability research for the agri-food sector. But as a small business without a QA team or a security team, its challenge was figuring out how to deliver professional software with only a few developers. Blonk used an external company to help set up what Bart Durlinger, product development manager, and software developer Pieter van de Vijver envisioned as its platform at the time. “They set up an environment on Amazon, a separate build server, a separate repository, and then some scripts in between to link it all together,” Durlinger recalls. “But when we decided to take more control, that was just too complex. We had too many different parts in many different places. We didn’t have the capacity at the time to really oversee how this should all work together.” That’s when the Blonk team started looking for platforms that offered a more integrated approach, with project management, CI/CD, repository, and version control features all in one place.

Mature, with a modern vision of software development

Blonk turned to GitLab after finding that the platform “had a lot of the things you need to have a professional delivery pipeline integrated into one solution,” says Durlinger. At the time, the consultancy was using GitHub, which was more expensive, he says. When Blonk started with GitLab, the platform was free, which was a big factor in its selection, van de Vijver says. “But it was also an up-and-coming startup with a vision of that CI/CD integration built into how you envisioned the whole service itself,” he says.
“GitHub was more of a repository that might provide you with those things, but it required more manual setup.” Blonk liked that GitLab was a mature and stable solution “but still new enough to have a vision of how software is approached nowadays, with easy setup and an integrated pipeline by default, and useful branching strategies by which you could support a multi-level, multi-stage deployment process easily,” van de Vijver says. At the time, van de Vijver was the only one at Blonk with a background as a software developer, and another bonus was his familiarity with all the tools in GitLab. “By using GitLab, we could hit the ground running and keep the scale small. You don’t have to worry about all kinds of CI/CD operations and integrations and the configuration of that, but use it just out of the box,” he says.

How Blonk is utilizing GitLab today

Currently, Blonk has 38 GitLab Premium licenses, about half of which are used by software developers. The rest are used by data scientists, consultants, project managers, and others, so the platform is used in different ways within the company. That also means there are different levels of software literacy, but that hasn’t been an issue. The software development team has been onboarding very junior developers over the past couple of months, and “never have I had questions of how to do stuff in GitLab, because the platform is very intuitive,” Durlinger says. The software development team has been integrated further into the core business, which also fits nicely with GitLab’s services, including the milestones Blonk uses as well as its repositories and project management strategies. “Also data scientists and methodology developers are now […]

Read More

GitLab extends Omnibus package signing key expiration by one year

GitLab uses a GPG key to sign all Omnibus packages created within the CI pipelines to ensure that the packages have not been tampered with. This key is separate from the repository metadata signing key used by package managers and from the GPG signing key for the GitLab Runner. The Omnibus package signing key was set to expire on July 1, 2022, and its expiration will instead be extended to July 1, 2023.

Why are we extending the deadline?

The Omnibus package signing key’s expiration is extended each year to comply with GitLab security policies and to limit the exposure should the key become compromised. The key’s expiration is extended, rather than rotating to a new key, to be less disruptive for users who verify package integrity before installing.

What do I need to do?

The only action needed is to update your copy of the package signing key if you validate the signatures on the Omnibus packages that GitLab distributes. The package signing key is not the key that signs the repository metadata used by OS package managers such as apt or yum. Unless you are specifically verifying package signatures, or have configured your package manager to verify them, no action is needed on your part to continue installing Omnibus packages. More information on verifying package signatures is available in the Omnibus documentation. If you just need to refresh a copy of the public key, you can find it on any of the GPG keyservers by searching for support@gitlab.com or by using the key ID DBEF 8977 4DDB 9EB3 7D9F C3A0 3CFC F9BA F27E AB47. Alternatively, you can download it directly from packages.gitlab.com using the URL: https://packages.gitlab.com/gitlab/gitlab-ce/gpgkey/gitlab-gitlab-ce-3D645A26AB9FBD22.pub.gpg

What do I do if I still have problems?

Please open an issue in the omnibus-gitlab issue tracker.
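For users who do verify signatures, the workflow is essentially two steps: refresh the public key, then check the package against it. Here is a minimal Python sketch wrapping the command-line tools; the keyserver choice and the use of `rpm --checksig` are assumptions for illustration, not steps taken from GitLab’s documentation:

```python
# Sketch: refresh GitLab's package signing key and verify a downloaded
# Omnibus package before installing it. The keyserver and the rpm-based
# check are illustrative assumptions; consult the Omnibus docs for the
# authoritative procedure.
import subprocess

# Key ID from the post, with spaces removed.
KEY_ID = "DBEF89774DDB9EB37D9FC3A03CFCF9BAF27EAB47"


def build_verification_commands(package_path: str) -> list[list[str]]:
    """Return the commands that refresh the signing key and check a package."""
    return [
        # Fetch (or refresh) GitLab's public package signing key.
        ["gpg", "--keyserver", "hkps://keyserver.ubuntu.com",
         "--recv-keys", KEY_ID],
        # Verify the package signature (RPM shown here; .deb users would
        # verify a detached signature with gpg instead).
        ["rpm", "--checksig", package_path],
    ]


def verify_package(package_path: str) -> None:
    """Run each verification command, raising if any of them fails."""
    for cmd in build_verification_commands(package_path):
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
```

A failed signature check makes `subprocess.run(..., check=True)` raise, so an install script calling `verify_package` stops before an untrusted package is installed.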

Read More

Cryptocurrency tracking in Delphi with FNC Chart

We, as developers, are always on the lookout for exciting new APIs and components that offer that little bit extra, or something completely new and mind-blowing. As component developers, it’s a daily quest to put new and exciting features into our components and offer them to our customers. The FNC framework now offers a lot of components for completing various tasks. Even while excited to create many new components, it’s often good to reflect on what has been done already and see where we can improve. TMS FNC Chart was the first FNC product and was an introduction to cross-platform, cross-framework, and cross-IDE development. At the time it was released, VCL and FMX were supported. Later we added Lazarus and TMS WEB Core support. More time was required to support multiple frameworks and operating systems and to iron out all the differences. We immediately had an idea to create more UI components, which resulted in the variety of FNC framework-based component sets we have today. Meanwhile, we decided it was time to go back to the beginning, to our very first FNC product, and see what could be improved. Today, we want to reveal some new and exciting features coming up in v2.0, as well as a small sample of what that means for you as a customer/developer.

TMS FNC Chart v2.0

Coming up in v2.0

Inherited types: TTMSFNCChart was the only component available, and the series types had to be changed there each time a new instance of TTMSFNCChart was used. In v2.0 we introduce inherited types, which means there will be a descendant class type for each series type, for example: TTMSFNCBarChart, TTMSFNCLineChart, TTMSFNCPieChart, … . Using this new set of classes presets the series type upon creation, newly added series will also be of the correct type, and a design-time preview will better resemble what the chosen type should represent.
Appearance & color themes: The chart displays various elements, such as a title, x-axis, and y-axis, and each of these can be customized with a lot of properties. Changing the overall look & feel of the chart can take quite some time. In v2.0 we introduce a global appearance, which applies the font name and color, and can up-scale all fonts in the chart in one go. Additionally, we wanted to make the chart more color-friendly and dynamic: v2.0 introduces a new custom color list, plus Excel-style and monochromatic colors. Data import: The chart can obviously visualize data. That data comes from various source types, and currently there are no helper methods of any kind to make this easy. In v2.0 it will be possible to load data from CSV, JSON, and predefined data arrays, with a lot of customization options. Database support: v2.0 also brings read-only database support. The TTMSFNCChartDatabaseAdapter will be available as a separate component and will dynamically recognize fields as series, with the flexibility to add further customization. Grid linking: The TMS FNC UI Pack includes the TTMSFNCGrid component. In v2.0 it will be possible to link the grid to the chart via the TTMSFNCChartGridAdapter component.

An example based on cryptocurrency

Cryptocurrency is a hot topic and, in one way or another, keeps us occupied, or interested at the very least. Looking up various known and […]

Read More

Faces of Unity – Sharlene Tan

I’d encourage others to keep an open mind and never stop learning. Back when I was a college student, I never imagined that I’d wind up in the video game industry, working with languages and the written word. As technology evolves, it’s hard to predict what jobs will be in demand 10 years down the road. I’m glad my career journey led me to where I am now. Can you share a few fun facts about yourself? I was born and raised in Singapore, but have lived in many different places: Austin, Houston, Dallas, Hakodate, Tokyo, Oita, and currently, Seattle. I enjoy translating Japanese song lyrics into English, and also really love karaoke. I’ve run into Jackie Chan twice – once in a hotel in Canada, and another time in South Africa. He was filming Who Am I? atop Table Mountain.

Read More

How To Get Cross Platform Apps To Connect To A MySQL Database

Whether you are working with small or large-scale databases, MySQL is probably one of the most popular database systems today. The webinar video below takes us back to CodeRage 2018, where Yilmaz Yoru discusses the process of creating a MySQL database and connecting to it using the MyDAC components through C++ Builder. MySQL is undeniably one of the most popular database servers and has been part of many Windows application development projects. Thanks to its free Community Edition, the server is relatively more accessible than other open-source database systems on the market.

How to connect to a MySQL database using MyDAC components

Normally, if you are using C++ Builder, you can connect to a MySQL database using the official FireDAC component, a powerful Universal Data Access library that provides a relatively easy-to-use access layer which supports, abstracts, and simplifies data access, with all the features needed to build real-world, high-load applications. However, there are also third-party components you can use, and this is where Devart’s MyDAC comes into play. MyDAC is a library of components that provides direct access to MySQL and MariaDB from Delphi and C++Builder on various operating systems, including Windows, Linux, macOS, iOS, and Android, for both 32-bit and 64-bit platforms. To use MyDAC with C++ Builder, you must first download a trial or purchased version from the Devart site and install it for both VCL and FireMonkey projects. In the video, Yilmaz demonstrates how to create a new MySQL database with a new user on a host server. He also encourages you to install MySQL Workbench for table creation. Everything in the webinar is explained through a C++ Builder example. MyDAC is a good component for connecting to a MySQL database on all platforms. It also supports the FireMonkey framework, which allows you to develop visually spectacular, high-performance desktop and mobile native applications.
To learn more about connecting MySQL databases using MyDAC components, feel free to watch the video below.

Read More

Three ways the current paradigm shifts in technology are shaping the future of industry

Digital twins, based on 4IR technologies, are a critical enabler for Web3 in industry. They are already in use, with varying levels of completeness, in many industries to help organizations understand system flows, anticipate maintenance needs, reduce operational costs, and enhance overall efficiency. “Ideally you want Web3 to revolutionize your end-to-end workflows, across the board, for every person in your organization. But I think it’s more the case that we will find pockets of traction, where somebody can get started despite the significant headwinds that are facing us.” – Matt Fleckenstein, Sr. Director, Product Management, Microsoft Mesh, Microsoft

As a leader in innovation, Vancouver Airport Authority (YVR) wanted to reinvent itself as a gateway for learning, innovation, and the movement of new ideas in industries beyond aviation. The result was a digital twin of its terminal and airfield on Sea Island. The digital twin of YVR’s facilities helps solve challenges such as training, optimization, testing, evaluating environmental impact, and planning for the future, all while enabling the airport to operate without interruption. Designed with a “people-first” mindset, YVR’s digital twin offers significant benefits to airport employees, as well as the community at large.

Read More

Unity and .NET, what’s next?

.NET Standard 2.1 support in Unity 2021 LTS enables us to start modernizing the Unity runtime in a number of ways. We are currently working on two improvements.

Improving the async/await programming model. Async/await is a fundamental programming approach for writing gameplay code that must wait for an asynchronous operation to complete without blocking the engine main loop. In 2011, before async/await was mainstream in .NET, Unity introduced asynchronous operations with iterator-based coroutines, but this approach is incompatible with async/await and can be less efficient. In the meantime, .NET Standard 2.1 has improved support for async/await in C# and .NET with the introduction of more efficient handling of async/await operations via ValueTask, and by allowing your own task-like system via AsyncMethodBuilder. We can now leverage these improvements, so we’re working on enabling the use of async/await with existing asynchronous operations in Unity (such as waiting for the next frame or waiting for a UnityWebRequest to complete). As a first step, we’re improving support for canceling pending asynchronous tasks when a MonoBehaviour is destroyed or when exiting Play mode, using cancellation tokens. We have also been working closely with our biggest community contributors, such as the author of UniTask, to ensure that they will be able to leverage these new capabilities.

Reducing memory allocations and copies by leveraging Span. Because Unity is a C++ engine with a C# scripting layer, a lot of data is exchanged between the two. This can be inefficient, since it often requires either copying data back and forth or allocating new managed objects. Span was introduced in C# 7.2 to improve such scenarios and is available by default in .NET Standard 2.1.
In recent years, you might have heard or read about the many significant performance improvements made to the .NET runtime thanks to Span (see the improvement details in .NET Core 2.1, .NET Core 3.0, and .NET 6). We want to leverage Span in Unity, since this will help reduce allocations and, consequently, garbage collection pauses, while improving the overall performance of many APIs.
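Unity’s scripting language is C#, but the core idea behind Span, referencing a slice of a buffer without copying it, can be illustrated in Python with memoryview as a rough analogy:

```python
# Rough analogy to C#'s Span<T>: a memoryview slices a buffer without copying,
# whereas slicing the underlying bytearray allocates a new copy.
data = bytearray(1_000_000)

view = memoryview(data)          # wraps the buffer; no bytes are copied
chunk = view[1000:2000]          # slicing a memoryview is also zero-copy
copied = bytes(data[1000:2000])  # slicing the bytearray itself copies bytes

chunk[0] = 255                   # writes through to the underlying buffer
assert data[1000] == 255         # the original buffer sees the write
assert copied[0] == 0            # the detached copy is unaffected
```

The zero-copy slice is what lets hot paths avoid both the allocation and the copy, which is exactly the saving Span brings to the C++/C# boundary described above.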

Read More

How we reduced 502 errors by caring about PID 1 in Kubernetes

This blog post and linked pages contain information related to upcoming products, features, and functionality. It is important to note that the information presented is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this blog post and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.

Our SRE on call was getting paged daily because one of our SLIs was burning through our SLOs for the GitLab Pages service. The issue was intermittent and short-lived, but enough to cause user-facing impact, which we weren’t comfortable with. It turned into alert fatigue: there wasn’t enough time for the SRE on call to investigate, and the alert wasn’t actionable since the service recovered on its own. We decided to open an investigation issue for these alerts. We had to find out what the problem was, since we were showing 502 errors to our users, and we needed a DRI who wasn’t on call to investigate.

What is even going on?

As an SRE at GitLab, you get to touch a lot of services you didn’t build yourself and interact with system dependencies you might not have touched before. There’s always detective work to do! When we looked at the GitLab Pages logs, we found that it was always returning ErrDomainDoesNotExist errors, which result in a 502 error for our users. GitLab Pages sends a request to GitLab Workhorse, specifically the /api/v4/internal/pages route. GitLab Workhorse is a Go service in front of our Ruby on Rails monolith, deployed as a sidecar inside the webservice pod, which runs Ruby on Rails using the Puma web server. We used the internal IP to correlate the GitLab Pages requests with GitLab Workhorse containers.
We looked at multiple requests and found that all the 502 requests had the following error attached to them: 502 Bad Gateway with dial tcp 127.0.0.1:8080: connect: connection refused. This means that GitLab Workhorse couldn’t connect to the Puma web server, so we needed to go another layer deeper. The Puma web server runs the Ruby on Rails monolith, which exposes the internal API endpoint, but Puma was never receiving these requests because it wasn’t running. This tells us that Kubernetes kept our pod in the Service even when Puma wasn’t responding, despite having readiness probes configured. Below is the request flow between GitLab Pages, GitLab Workhorse, and Puma/webservice, to make it clearer:

Attempt 1: Red herring

We shifted our focus to GitLab Workhorse and Puma to try to understand how GitLab Workhorse was returning 502 errors in the first place. We found some 502 Bad Gateway with dial tcp 127.0.0.1:8080: connect: connection refused errors during container startup. How could this be? With the readiness probe, the pod shouldn’t be added to the Endpoints until all readiness probes pass. We later found out that this was because of a polling mechanism we have for Geo, which runs in the background using a goroutine in GitLab Workhorse and pings Puma for Geo information. We don’t have Geo enabled on GitLab.com, so we simply disabled it to reduce […]
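For readers less familiar with readiness probes: a pod is only added to (and kept in) a Service’s endpoints while its probe passes. A minimal, illustrative sketch of the kind of configuration involved follows; the names, path, and thresholds here are hypothetical, not GitLab’s actual chart values:

```yaml
# Illustrative only - not GitLab's real webservice pod specification.
apiVersion: v1
kind: Pod
metadata:
  name: webservice
spec:
  containers:
    - name: puma
      image: example/webservice:latest
      ports:
        - containerPort: 8080
      readinessProbe:            # pod joins the Service endpoints only
        httpGet:                 # after this check passes...
          path: /-/readiness
          port: 8080
        periodSeconds: 5
        failureThreshold: 3      # ...and is removed after 3 failed checks
```

The puzzle in the investigation above is that a probe like this should have pulled the pod out of the endpoints once Puma stopped responding, which is why the team kept digging.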

Read More

Pull-based GitOps moving to GitLab Free tier

GitLab will include support for pull-based deployment in the platform’s Free tier in an upcoming release, giving users increased flexibility, security, scalability, and automation in cloud-native environments. With pull-based deployment, DevOps teams can use the GitLab agent for Kubernetes to automatically identify and enact application changes. “DevOps teams at all levels benefit from utilizing GitOps strategies such as pull-based deployment in their cloud-native environments. By offering this feature in GitLab’s Free tier, we can introduce more organizations to the power and utility of this secure and scalable functionality,” says Viktor Nagy, product manager of GitLab’s Configure Group. As an open-core company, GitLab is happy to contribute to the GitOps community and enable the adoption of best practices in the industry.

What is pull-based deployment?

Pull-based and push-based deployment are the two main approaches to GitOps, an operational framework that takes DevOps best practices used for application development, such as version control, collaboration, compliance, and CI/CD tooling, and applies them to infrastructure automation. GitOps enables operations teams to move as quickly as their application development counterparts by making use of automation and scalability, without sacrificing security. While push-based, or agentless, deployment relies on a CI/CD tool to push changes to the infrastructure environment, pull-based deployment uses an agent installed in a cluster to pull changes whenever there is a deviation from the desired configuration. In the pull-based approach, deployment targets are limited to Kubernetes, and an agent must be installed in each Kubernetes cluster. “As long as the GitLab agent for Kubernetes on your infrastructure has the necessary access rights in your cluster, you can configure everything automatically, reducing the DevOps workload and the opportunity to introduce errors,” Nagy says.

Pull-based deployment vs. push-based deployment

Push-based deployment and pull-based deployment each have their pros and cons. Here are the advantages and disadvantages of each GitOps practice:

Push-based deployment pros:
- ease of use
- well known as part of CI/CD
- more flexible, as deployment targets can be physical servers or virtual containers, not restricted to Kubernetes clusters

Push-based deployment cons:
- requires organizations to open their firewall to a cluster and grant admin access to external CI/CD
- requires organizations to adjust their CI/CD pipelines when they introduce new environments

Pull-based deployment pros:
- secure infrastructure: no need to open your firewall or grant admin access externally
- changes can be automatically detected and applied without human intervention
- easier scaling of identical clusters

Pull-based deployment cons:
- an agent needs to be installed in every cluster
- limited to Kubernetes only

How pull-based deployment impacts the Free-tier experience

Including support for pull-based deployments in GitLab’s Free tier provides a tremendous competitive advantage for smaller organizations, as they can now apply automation in a safe and scalable manner to their cloud-native infrastructure, including virtual containers and clusters. And, for organizations trying to get started quickly by minimizing the number of tools in their infrastructure ecosystem, this functionality is included in The One DevOps Platform, not as a point solution. “DevOps teams don’t have to continuously write code for new infrastructure elements – they can write the code once, within a single DevOps platform, and have the agent automatically find it, pull it, and apply it, as well as configuration changes,” Nagy says. “Also, with the availability of pull-based deployment in this introductory tier, newcomers to […]
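For context, the pull-based flow is driven by a configuration file kept in the repository alongside the manifests, conventionally at .gitlab/agents/<agent-name>/config.yaml. The sketch below is a minimal, hypothetical example assuming the agent has already been registered and installed in the cluster; the project path and glob are illustrative:

```yaml
# Hypothetical agent configuration; project path and glob are illustrative.
gitops:
  manifest_projects:
    - id: my-group/my-infrastructure-repo   # project holding the manifests
      paths:
        - glob: 'manifests/**/*.yaml'       # the agent watches these files
```

With a configuration like this, the agent running inside the cluster pulls matching manifests and reapplies them whenever the cluster state drifts from what is committed, which is the automatic drift correction the article describes.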

Read More