News

Enable SLO-as-Code with Nobl9 and GitLab

Nobl9 recently integrated with GitLab CI to provide a consistent mechanism for publishing Service Level Objective (SLO) definitions from GitLab to Nobl9. With this SLO-as-Code integration, DevOps teams can take action when their error budgets are burning too fast or are about to be exhausted. In today's systems, 100% uptime isn't realistic given the complex architectures and dependencies involved. SLOs enable you to define targets and have an error budget for tracking what's "good enough." For example, you can target uptime of 99.9%, 99%, or even 95%, because what truly matters is how much downtime or how many errors are acceptable before there is real customer impact.

Typically, when organizations think about SLO-as-Code, they must use separate products to ensure their SLO definitions are always in sync with whatever tool they are using. This usually means running command-line tools manually or building custom integrations within their code repositories. With this CI configuration, every time you build your repo, GitLab will call sloctl, our command-line tool, and push the SLO definition to Nobl9. Customers can continue using GitLab to version their SLO definitions and keep their SLOs consistent. This ensures your SLO definition will always be up to date with what's in Nobl9 and removes any discrepancies over what the latest SLO definition actually is. SREs, engineers, and anyone using the SLOs can still debate what the targets need to be, but there will always be a definitive source of truth in your code repository on what the current definition is.

Getting started

To set this up in GitLab, follow these steps:

1. Select Settings -> CI/CD, and click the Expand button next to Variables.

2. Add the following variables:

- CLIENT_ID
- CLIENT_SECRET
- ACCESS_TOKEN
- PROJECT
- SLOCTL_YML

Note: If you haven't done so already, you'll need to install sloctl. You can install the executable on your local machine by following the instructions in the user guide.
Once sloctl is installed, you can run the following command to retrieve your CLIENT_ID, CLIENT_SECRET, and ACCESS_TOKEN:

```shell
cat ~/.config/nobl9/config.toml
```

The PROJECT value is the name of the project inside Nobl9 that your SLO belongs to. The SLOCTL_YML value is the Nobl9 YAML file you want to push to Nobl9 on each change.

3. Create the CI/CD job to apply the YAML by going to CI/CD -> Jobs and clicking "Create CI/CD configuration file". Enter the following code in the .gitlab-ci.yml file:

```yaml
variables:
  CLIENT_ID: $NOBL9_CLIENT_ID
  CLIENT_SECRET: $NOBL9_CLIENT_SECRET
  ACCESS_TOKEN: $NOBL9_ACCESS_TOKEN
  PROJECT: $NOBL9_PROJECT
  SLOCTL_YML: $SLOCTL_YML

include:
  - project: 'nobl9/nobl9-ci-template'
    ref: main
    file: '/nobl9.gitlab-ci.yml'
```

4. Kick off a build. Any changes to the SLOCTL_YML file that you reference will now automatically be pushed to Nobl9 once the updates are committed.

By partnering with GitLab and providing a convenient CI script and a command-line tool for managing SLOs, Nobl9 has truly enabled SLO-as-Code. We encourage existing Nobl9 customers who use GitLab to give it a try. If you haven't experienced Nobl9 yet, you can sign up for a free 30-day trial at nobl9.com/signup to see all that it has to offer.

Quan To is Senior Director of Product Management, Jeremy Cooper is Senior Solutions Engineer, and Ian Bartholomew is SRE Manager at Nobl9.
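The article above mentions uptime targets of 99.9%, 99%, and 95%. As a purely illustrative aside (this snippet is mine, not part of the Nobl9 or GitLab tooling), the error-budget arithmetic behind those targets can be sketched in a few lines of Python:

```python
# Downtime allowed by an availability target over a given window.
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of downtime allowed before the error budget is exhausted."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo_target)

for target in (0.999, 0.99, 0.95):
    print(f"{target:.1%} -> {error_budget_minutes(target):.0f} min/month")
```

For a 30-day window, a 99.9% target leaves roughly 43 minutes of downtime before the budget is exhausted, which is why the choice of target matters so much.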

Read More

One DevOps platform can help you achieve DevSecOps

Application security testing (AST) is a fast-moving and important area for software development. DevOps methodologies have spurred the need to integrate testing within the developer’s workflow. GitLab believes the more ingrained AST is in the software factory, the more secure applications will be and the easier it will be for companies to meet compliance demands. We believe our strategic platform approach, where security and compliance are embedded in DevOps from planning to production, provides efficiency and value unmatched by traditional application security vendors.

Gartner® has named GitLab a Challenger in the 2022 Gartner Magic Quadrant™ for Application Security Testing. According to Gartner, “a major driver for the evolution of the AST market is the need to support enterprise DevSecOps and cloud-native application initiatives.”

“We are excited to see continued momentum for our unique approach that embeds security into the DevOps workflow,” says Hillary Benson, GitLab director of product management. This is the third year that GitLab has been recognized in the Gartner Magic Quadrant for Application Security Testing. “We believe that our recognition as a Challenger in the Magic Quadrant represents an evolving market understanding of the value of an approach that empowers and enables developers to find and fix vulnerabilities – and the simplicity of leveraging a DevOps platform to do so.”

You can read more about the results and download a copy of the report by visiting our commentary page. GitLab’s complete DevOps platform approach provides automation needed by DevOps, along with policy and vulnerability management needed by security professionals. GitLab’s Ultimate tier provides an integrated, vetted, and managed set of scanners to meet the security and compliance needs of modern-day application development and cloud-native environments.

A unique approach to AST

We continue to innovate in the application security space.
Let’s look at how we’re different from many of the more traditional stand-alone AST technologies. It’s these very differences that provide benefits achievable by using a single platform for DevOps and security. For example: We build comprehensive scans into the CI pipeline to enable a more interactive testing environment. This is a unique approach, as others in the category focus their offerings on instrumentation-based interactive AST. With GitLab, the developer gets a more complete view of the security flaws as they are created – when they are most efficiently resolved. Similarly, while analysts place emphasis on lightweight, spell-check-like SAST features, we have found that these features are less important to GitLab users, again because of our built-in approach.

A metaphor may be helpful here. We are all accustomed to saving documents frequently so edits are not lost. Developers do the same while editing software: changes are “committed” frequently to the code repository. Upon hitting the ‘commit’ button, GitLab performs a true SAST scan on the code changes, which gives developers instant and more complete feedback. And DevOps teams can choose to enable DAST scanning that uses GitLab’s review app feature to assess changes pre-merge. Similarly, dependencies, containers, infrastructure as code, and more can all be scanned at the push of the commit button.

In addition, GitLab is also keen on providing DevOps teams just-in-time education about vulnerabilities and fixes. Now, via partnerships with Kontra and Secure Code Warrior, GitLab provides developers with crisp training on how to mitigate the specific vulnerability they just created. This helps developers […]

Read More

Updates regarding Rubygems ‘Unauthorized gem takeover for some gems’ vulnerability CVE-2022-29176

We want to share the actions we’ve taken in response to the critical Rubygems ‘Unauthorized gem takeover for some gems’ vulnerability (CVE-2022-29176). Upon becoming aware of the vulnerability within Rubygems.org, we immediately began our investigation and contacted Rubygems, who quickly patched the vulnerability. Our Security team tested the usage of gems within our product and across our company and found that gems within GitLab from Rubygems.org were no longer vulnerable.

At this time, no malicious activity, exploitation, or indicators of compromise have been identified within GitLab.com or customer data. Further, our team’s review of gems used in the GitLab product showed no indication of compromise or integrity violations. There is no action needed by GitLab.com or self-managed users. Our teams are continuing to investigate and monitor this issue to help protect our products and customers. We will update this blog post and notify users via a GitLab security alert with any future, related updates.

Read More

Upgrading from Legacy Analytics to Unity Gaming Services Analytics

To put more resources into our upgraded Analytics platform, we will stop investing in Legacy Analytics, with the aim of sunsetting the product by the end of 2022. We advise creating new projects with the new Unity Gaming Services Analytics platform, and we also recommend using this guide to migrate existing projects over.

Transitioning a live game to a new Analytics solution can be difficult, so we’ve designed a data pipeline that lets you run Analytics and Legacy Analytics in parallel. The Core Events data (from July 2021 onwards) from your Legacy Analytics integration will be automatically imported into the new Analytics solution. Metrics such as DAU, MAU, session length, revenue, and others will be populated in the new Analytics solution, giving you a trial of the product ahead of implementing the SDK. Please note that this does not cover any Custom Events that you have defined; these will need to be redefined both in your game code and in the dashboard – see the tech docs guide for more information. Note: No duplication or double counting will occur; standard events triggered by the Analytics package will take precedence over imported data for each individual player.

Want to learn more about Unity Gaming Services Analytics? Register here for our free UGS Analytics bootcamp on May 17 at 12 PM EST / 9 AM PST and get a live overview of everything you need to know for your next project. If you have any concerns or questions, please contact our support team here or reach out to your client partner, and we’d be happy to help you through this transition.

Read More

DevOps in Education 2021 Survey results

In fall 2021 we launched our second annual DevOps in Education Survey. Over 460 respondents from all regions of the world shared insights on how DevOps and GitLab are transforming higher education.

Key findings

- One platform for the win: Respondents’ enthusiasm for teaching GitLab’s single DevOps platform increased 190% over 2020; survey takers also pointed to the way GitLab can tie culture to operations as key (up 189% year over year), and they also value student portfolio management (up 200%).
- CI/CD success: Academic institutions reported high rates of adoption of GitLab’s CI/CD features both within the classroom and in all other use cases.
- Flexibility is key: Deployment flexibility stands out again as a major advantage of GitLab at institutions of higher education. Security and authentication are the primary drivers.
- GitLab spreads the DevOps love: Multiple departments within an academic institution are reporting they’re now using GitLab, and 21% of respondents said the ability to install multiple instances across a campus was a GitLab advantage (up 6% from 2020).
- …and more spread = branching out: Because GitLab has one complete platform, higher-ed respondents report they’re expanding their DevOps footprint to include additional stages like Secure. The three most used stages in education continue to be Source Control Management, Plan, and Verify. Release and Package are also seeing nearly 30% adoption by respondents.
- Planning features: Educators find planning features such as multi-level epics, issue tracking, labels, and project management highly useful.

Why DevOps belongs in the classroom

The benefits of teaching or learning GitLab came through clearly in the survey. The fact that GitLab is a single DevOps tool was key for 58% of respondents, up from just 20% in 2020. What are the benefits of teaching or learning GitLab?
How GitLab in education works

Deployment flexibility is critical to universities because security and server access can be controlled (81%), all while integrating with user authentication systems (54%). The ability to host multiple instances per institution was also a factor for 21% of respondents, up 6% from last year – another sign that cross-campus adoption is growing. Advanced features (only available in the Ultimate tier) are used by 35% of respondents, which remained fairly consistent from 2020. Security features including container scanning, SAST, advanced security testing, custom DAST, and compliance management were among the most frequently mentioned. Multi-level epics and free guest users were commonly mentioned as well.

Use cases and DevOps stages

The most common use of GitLab in education was source control management, with 53% of respondents actively using it, followed by Verify (Continuous Integration) at 40%, Plan (issue tracking, labels) at 38%, Manage (authentication, compliance management) at 28%, Package at 29%, and Release (Continuous Delivery) at 29%. The top four tools other than GitLab used by respondents were GitHub (76%), GitHub Actions (24%), Jenkins (26%), and BitBucket (17%). Faculty respondents noted the value of bringing industry tools to the classroom. One wrote, “Thank you for the GitLab Program. It makes it possible for us to manage students’ software engineering projects in a modern development environment.”

Leveraging GitLab to boost skills

The 2021 survey asked an additional question regarding which specific skills are being taught with GitLab in the classroom. The three top skills taught with GitLab are: CI/CD (40%), collaboration and communication (36%), and application development and design (30%). Other […]

Read More

Learn Python with Pj! Part 4 – Dictionaries and Files

This is the fourth installment in the Learn Python with Pj! series. Make sure to read Part 1 – Getting started, Part 2 – Lists and loops, and Part 3 – Functions and strings.

I’ve learned a lot with Python so far, but when I learned dictionaries (sometimes shortened to dicts), I was really excited about what could be done. A dictionary in Python is a series of keys and values stored inside a single object. This is kind of like a super array; one that allows you to connect keys and values together in a single, easily accessible source. Creating dictionaries from lists can actually be very simple, too. In this blog, I’ll dig into how to create dictionaries and how to read and write files in code.

Dictionaries

Dictionaries in Python are indicated by using curly braces, or as I like to call them, mustaches. { } indicates that what you’re looking at isn’t a list at all, but a dictionary.

```python
shows_and_characters = {
    "Bojack Horseman": "Todd",
    "My Hero Academia": "Midoriya",
    "Ozark": "Ruth",
    "Arrested Development": "Tobias",
    "Derry Girls": "Sister Michael",
    "Tuca & Bertie": "Bertie"
}
```

This is a dictionary of my favorite TV shows and my favorite character in each show. In this example, the key is on the left and the value is on the right. To access a dictionary, you use a similar call to the one you would use for a list, except instead of an element number, you put the key. print(shows_and_characters["Ozark"]) would print Ruth to the console. Additionally, both the keys and values in this example are strings, but that’s not a requirement. Keys can be any immutable type, like strings, ints, floats, and tuples. Values don’t have this same restriction, so values can be a nested dictionary or a list, in addition to the types mentioned for keys. For instance, the following is a valid dictionary.
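Before we get to that example, here's a quick hypothetical aside of my own (not from the original post) showing the key rules in action: a tuple works as a key because it's immutable, while a list raises an error.

```python
# Keys can be any immutable type: strings, ints, floats, and tuples.
ratings = {
    ("Bojack Horseman", 1): 8.1,  # tuple key: (show, season)
    "Ozark": 8.5,                 # string key
    2022: "good year for TV",     # int key
}
print(ratings[("Bojack Horseman", 1)])  # 8.1

# Mutable types like lists are not hashable, so they can't be keys.
try:
    bad = {["Bojack Horseman"]: "Todd"}
except TypeError as error:
    print(error)  # unhashable type: 'list'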
```python
shows_with_lists = {
    "Bojack Horseman": ["Todd", "Princess Carolyn", "Judah", "Diane"],
    "My Hero Academia": ["Midoriya", "Shoto", "All Might", "Bakugo", "Kirishima"],
    "Ozark": ["Ruth", "Jonah", "Wyatt"],
    "Arrested Development": ["Tobias", "Gob", "Anne", "Maeby"],
    "Derry Girls": ["Sister Michael", "Orla", "Erin", "Claire", "James"],
    "Tuca & Bertie": ["Bertie", "Speckle", "Tuca", "Dakota"]
}
```

In this example, each value is a list. So if we tried to print the value for the key "Derry Girls", we would see ["Sister Michael", "Orla", "Erin", "Claire", "James"] printed to the console. However, if we wanted the last element in that value list, we’d write shows_with_lists["Derry Girls"][-1]. This would give us the last element in the list, which in this case is James.

Dictionaries can be written manually, or, if you have two lists, you can combine the dict() and zip() functions to turn the lists into a dictionary.

```python
list_of_shows = ["Bojack Horseman", "My Hero Academia", "Ozark",
                 "Arrested Development", "Derry Girls", "Tuca & Bertie"]
list_of_characters = [["Todd", "Princess Carolyn", "Judah", "Diane"],
                      ["Midoriya", "Shoto", "All Might", "Bakugo", "Kirishima"],
                      ["Ruth", "Jonah", "Wyatt"],
                      ["Tobias", "Gob", "Anne", "Maeby"],
                      ["Sister Michael", "Orla", "Erin", "Claire", "James"],
                      ["Bertie", "Speckle", "Tuca", "Dakota"]]

combined_shows_characters = dict(zip(list_of_shows, list_of_characters))
print(combined_shows_characters)
```

This is one way to create a dictionary. Another is called dictionary comprehension. This one is a little more work, but can be used in a variety of different ways, including applying a bit of logic to a single list to generate a dictionary from that original list. Here’s […]
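To give a flavor of the idea, here is a small dictionary-comprehension example of my own (not the one from the original post): it applies a bit of logic to a single list to build a dictionary from it.

```python
list_of_shows = ["Bojack Horseman", "My Hero Academia", "Ozark",
                 "Arrested Development", "Derry Girls", "Tuca & Bertie"]

# Map each show title to the length of its name.
title_lengths = {show: len(show) for show in list_of_shows}
print(title_lengths["Ozark"])  # 5

# Add a condition: keep only the shows with one-word titles.
one_word = {show: len(show) for show in list_of_shows if " " not in show}
print(one_word)  # {'Ozark': 5}
```

The part before `for` produces each key and value, and the optional `if` filters which items from the list end up in the dictionary.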

Read More

3 Steps To Finding The Perfect Android App Builder Software

Android Studio is indeed the go-to Android app builder software for creating Android applications with Java or Kotlin. Since applications built with Java and Kotlin provide full native app development and access to all available features, Android Studio really does provide long-term success. Moreover, the community around Java and Android Studio is vast, and if issues arise, you can almost always find a solution easily. But there are some cases where Android Studio with Java or Kotlin cannot be the solution.

Android is very likely the most popular operating system for mobile and portable devices, with the flexibility to deploy to practically any device that can handle it. It is projected to grow and dominate not just mobile, but also tablets, PCs, cars, set-top boxes, smartwatches, home appliances, and more. For these reasons, learning how to create Android applications is a must, but choosing the proper Android development framework and programming language is tricky given the size of the app development ecosystem. For instance, you can see an Android developer using Android Studio but also looking for other options to develop apps with more productivity. In this post, I’ll help you select the right Android app builder and the proper framework for your projects.

How to compare Android app builders and frameworks

It really depends on the company. For instance, start-ups use hybrid app development technologies mainly because they are free and open source, but in reality these do not provide much productivity or strong security for your code. Numerous Android app development frameworks share many identical characteristics, but their unique features can make creating specific types of projects more accessible. Moreover, the app builder environment also plays a significant role. For instance, one framework may be well suited to creating e-commerce apps, with easy-to-integrate solutions for third-party services, but lack good graphical user interface development.
Or it may require you to work with XAML to design the UI, which is not productive and not intuitive at all. Here are some criteria you should use to compare frameworks and Android app builders:

- Cross-platform capability
- Popularity and resourcefulness
- Easy and fast UI designing

Choosing a cross-platform framework where you can build native but cross-platform applications from one codebase is no news in the current state of technology. But not all frameworks and development platforms support this power with the ideal architecture. With the FireMonkey framework and its RAD Studio app development ecosystem, you get all the modern features and environment needed to build applications that run on major platforms like Android, iOS, Windows, macOS, and Linux. Additionally, you can create business-oriented web applications. The best thing about the RAD Studio development environment is that you can prototype and design applications 5x faster than with most other app builders. It gives you both a low-code app development platform and a traditional software engineering ecosystem.

Why should you select RAD Studio for your Android app development? First of all, I recommend you try out the Delphi Community Edition or the RAD Studio trial version to build various types of applications. When you install the IDE with its framework and libraries, you get a set of sample applications, and you can see how simple and fast it is to create different applications with the RAD Studio platform.

Official support for recent OS updates

The Delphi 11.1 release adds official support […]

Read More

Extend TMS WEB Core with JS Libraries with Andrew: Epic JSON Primer (part 1)

For the rest of this document, we’re going to be doing triple the work – showing the JavaScript version (JS), the TMS WEB Core wrapper version (WC), and the Delphi version (PAS) of each scenario, presented as a block of code suitable for inclusion (and tested!) within a TMS WEB Core project, which supports all three of these seamlessly. If you were working on a part of a project that used only Delphi VCL code, you could use just the PAS code equally well there. Or if you were working on a part of a project that used pure JavaScript, then the JS code within the asm … end blocks would work just as well in that environment. We’ll even cover a little bit about moving between these three environments.

To begin with, we’ll need to know how to define variables to reference JSON objects. We’ll also need to know how to supply JSON to them (sometimes referred to as serialization), and how to see what they contain (deserialization). For the sake of simplicity, we’re going to show our work using console.log(), which works in both Delphi and JavaScript in our TMS WEB Core environment. Simply substitute ShowMessage or some other equivalent debugging output if you’re working in any other environment.

But why do we have to define variables at all? Well, one of the key differences between JavaScript and Delphi is how data types are managed. Delphi is a strongly typed language, meaning that, at nearly every step, the data type assigned to a variable is known, usually in advance of its being used. And these data types tend to be rather specific in nature. A JSON Object is not necessarily interchangeable with a JSON Array, for example. JavaScript, on the other hand, is a weakly typed (some might even say non-typed) language. Data types are sort of an afterthought, and you can write all kinds of code without having to think about what kind of data is flowing through it. There are significant trade-offs to both approaches.
JSON is also very closely tied to the JavaScript language itself, and some aspects of the JavaScript language that have evolved over the years have led to significant improvements in using JSON in that environment. The overall result is that the JS code we’ll be showing tends to be very short, sometimes a little cryptic, but often very efficient. The Delphi equivalents often have to do a little more work to achieve the same thing, but not always less efficiently, as we shall see.

Using JS code in TMS WEB Core just involves wrapping it in an asm … end block. We’re not using any other libraries or supporting code to do our work here. The WC variations similarly will work without anything special in terms of the TMS WEB Core environment. The classes we’ll be using are there by default, collectively referred to as the TJS* classes. For Delphi, though, we do need the extra little step of adding WEBlib.JSON to the uses clause of our project (or Form, etc.). This brings in the TJSON* classes that we’ll be using.

Here then is our sample WebButtonClick procedure that we’ll use in nearly every example that follows. It will start with whatever JS, WC or PAS variables we need and then contain […]

Read More

Extend TMS WEB Core with JS Libraries with Andrew: Epic JSON Primer (part 2)

20: Relative Performance

I’ve often heard that while it is possible to use the TJSONObject variant of these approaches (the PAS code we’ve been working so diligently on), it is preferable instead to use the WC variant, as it will be better performing. We’ll put that to the test here with a bit of a contrived example. Often, readability and reusability of code are more important than straight-up performance, particularly for difficult code that isn’t executed frequently. But at the same time, situations certainly do come up where a bit of code is executed frequently in a tight loop, and squeezing out every bit of performance is important.

```pascal
procedure TForm1.WebButton1Click(Sender: TObject);
var
  WC_Object: TJSObject;
  PAS_Object: TJSONObject;
  ElapsedTime: TDateTime;
  i: Integer;
  Count: Integer;
begin
  ElapsedTime := Now;
  // JS Create 1,000,000 Objects
  asm
    var JS_Object = {};
    for (var i = 0; i
```

Read More

Setting the vision for Unity DevOps

Q: Why is Plastic SCM being prioritized over Collaborate going forward?
A: Collaborate was never designed to be a fully featured VCS solution, which Plastic SCM is. The Plastic SCM technology is also a better fit for Unity creators’ needs, since it was designed specifically for real-time 3D, with separate workflows for artists and programmers, and support for handling the large files and binaries common to RT3D development.

Q: What’s happening to the versions of Unity Teams bundled into Unity Editor subscription plans?
A: Starting May 5, 2022, new subscribers to Unity Pro and Enterprise will no longer receive any allocation of Unity Teams. You can take advantage of Plastic SCM’s cloud edition for version control, which is free for up to three users and 5 GB per month, with pay-as-you-go pricing beyond that. Cloud Build has pay-as-you-go pricing.

Q: What are Unity’s current DevOps offerings?
A: Currently, there are two separate components, each available for purchase separately – Plastic SCM for version control and Cloud Build for CI/CD.

Q: As an existing Cloud Build customer, will my pricing change?
A: No, it won’t change. As an existing Cloud Build user, you will continue to have access to your current pricing and capabilities for the foreseeable future and until we move all Unity products to Cloud Build 2.0. You will receive notice 60 days in advance of changes to your account prior to conversion to Cloud Build 2.0. Note that access to larger repositories and increased concurrency limits will be unavailable if you choose to keep the old Cloud Build pricing, along with many of our planned innovations.

Q: How does the new Cloud Build pricing work?
A: Cloud Build 2.0 pricing is completely metered: you only pay for what you use. Users are charged for build minutes based on the platform they are building for. For Windows the price is $0.02/min; for Mac the price is $0.07/min; and it’s $10 per build machine concurrency.
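As a rough back-of-the-envelope sketch of the metered model described above (the per-minute rates are the ones quoted in the answer; the monthly workload numbers are hypothetical):

```python
# Quoted Cloud Build 2.0 rates.
WINDOWS_RATE = 0.02   # $ per build minute (Windows)
MAC_RATE = 0.07       # $ per build minute (Mac)
CONCURRENCY_FEE = 10  # $ per build machine concurrency

# Hypothetical monthly workload.
windows_minutes = 300 * 8   # 300 Windows builds, ~8 minutes each
mac_minutes = 120 * 12      # 120 Mac builds, ~12 minutes each

total = (windows_minutes * WINDOWS_RATE
         + mac_minutes * MAC_RATE
         + CONCURRENCY_FEE)
print(f"Estimated monthly bill: ${total:.2f}")
```

Since billing is per minute, the bill scales directly with build time per platform, so reducing build duration or build frequency lowers cost proportionally.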
Q: Can I use version control in Unity, or do I need a separate client?
A: Unity Plastic SCM works in the Unity Editor, and it can also be accessed via a separate desktop client. In supported versions of the Editor, Plastic SCM users can check in, check out, lock files, view file history, and even create and switch branches, as well as choose to install a separate desktop client. For former Collaborate users, see this user’s guide to switching to Plastic in Unity. A list of supported versions for the in-Editor experience is available here.

Q: Can you use Cloud Build with Plastic SCM?
A: When setting up Cloud Build, you can choose to connect to Unity Plastic SCM as your source control. If you previously used Collaborate for this workflow, you will need to take action to connect Cloud Build to Plastic SCM. Follow this video guide here.

Read More