From the blog

Learn About Useful Shared Mutexes Locking In Modern C++

The concurrency support library is designed to solve the problems that arise with multi-threaded development in modern C++. This library includes built-in support for threads (std::thread), atomic operations (std::atomic), mutual exclusion (std::mutex), condition variables (std::condition_variable), and many other features. In addition to std::mutex, C++14 introduced std::shared_timed_mutex and C++17 added std::shared_mutex, both located in the <shared_mutex> header. In this post, we explain shared mutex locking in modern C++.

What is a mutex (mutual exclusion) in C++?

Mutual exclusion is a property of concurrency control: in programming, it is a locking mechanism that grants a thread exclusive access to a resource. This is mostly needed when we use shared data in multi-threaded and multi-tasking operations in parallel programming. In C++, we can use std::mutex to define a mutex variable that protects shared data from being accessed simultaneously by multiple threads. Here is an example of how we can use std::mutex with its lock() and unlock() methods:

  std::mutex mtx;

  mtx.lock();
  // do operations
  mtx.unlock();

What is a shared mutex in modern C++?

The shared mutex is a synchronization primitive declared in the <shared_mutex> header; C++14 introduced std::shared_timed_mutex, and C++17 added the plain std::shared_mutex class used in mutual exclusion operations between threads. The shared_mutex class is part of the thread support library, and it can be used to protect shared data when multiple threads try to access it. Here is how we can define a shared mutex by using std::shared_mutex:

  std::shared_mutex sharedmutex;

Is there an example about shared mutexes (std::shared_mutex)?

Here is a simple example of std::shared_mutex with its try_lock_shared() and unlock_shared() methods, which come with C++17.
  std::shared_mutex sharedmutex;

  // in a thread function
  if (sharedmutex.try_lock_shared())
  {
    // do operations
    sharedmutex.unlock_shared();
  }

How do we use the shared lock and unlock mutex methods?

A shared_mutex has lock() and unlock() methods, just as the mutex type does. In C++17 it also supports the additional methods lock_shared(), unlock_shared(), and try_lock_shared(). Briefly, these are:

lock_shared (C++17): blocks the calling thread until the thread obtains shared ownership of the mutex.
unlock_shared (C++17): releases the shared ownership of the mutex held by the calling thread.
try_lock_shared (C++17): tries to obtain shared ownership of the mutex without blocking; it returns true if it obtains ownership, or false if it cannot.

Is there a full example of how to use shared mutexes (std::shared_mutex) in C++?

Let's assume that we have a global val, we read data with a getv() function, we write data with a putv() function, and we run these functions in threads. Here is a full and simple example about shared mutexes:

  #include <iostream>
  #include <thread>
  #include <shared_mutex>
  #include <chrono>

  std::shared_mutex sharedmutex;

  int val = 500;

  void putv(int v)
  {
    sharedmutex.lock();
    std::this_thread::sleep_for(std::chrono::microseconds(2)); // some latency simulation
    val = v;
    sharedmutex.unlock();
  }

  void getv(int &v)
  {
    sharedmutex.lock_shared();
    std::this_thread::sleep_for(std::chrono::microseconds(2)); // some latency simulation
    v = val;
    sharedmutex.unlock_shared();
  }

  int main()
  {
    int readval;

    std::thread t1(getv, std::ref(readval));
    std::thread t2(putv, 100); […]

Read More

Survey Surfaces Benefits of Applying AI to FinOps

A survey of 200 enterprise IT decision-makers published this week found organizations that have infused artificial intelligence (AI) into financial operations (FinOps) workflows to reduce IT costs are 53% more likely to report cost savings of more than 20%. Conducted by the market research firm Foundry on behalf of Tangoe, a provider of tools for managing IT and telecommunications expenses, the survey found organizations that embraced FinOps without any AI capabilities averaged less than 10% in cost savings. The top drivers for adopting FinOps/cloud cost management programs are the need to increase cloud resource production and performance (70%), reduced budgets (60%), rising costs (58%), and simpler overall program management (50%), the survey found. Major benefits included productivity savings (46%), cost savings (43%) and reduced security risks (43%). Nearly two-thirds of respondents cited service utilization and right-sizing of services as another reason to embrace FinOps. FinOps describes a methodology for embedding programmatic controls within DevOps workflows to reduce costs. In the face of increased economic headwinds, IT leaders are looking to reduce cloud computing costs, but it’s turning out to be more challenging than many of them anticipated. Cloud infrastructure is typically provisioned by developers using infrastructure-as-code (IaC) tools with little to no supervision. The reason for this is that developers have long argued that waiting for an IT team to provision cloud infrastructure takes too long, and that they would be more productive if they simply provisioned cloud infrastructure themselves. However, after ten years of cloud computing, it’s become apparent there are a lot of wasted cloud infrastructure resources. Developers who don’t pay the monthly bills for cloud services tend to view available infrastructure resources as essentially infinite.
It’s usually not until someone from the finance department starts raising cost concerns that developers even become aware there might be an issue. The challenge is that adopting FinOps best practices is not quite as easy as it might seem. In fact, more than half (54%) of survey respondents cited challenges in building the right process and human support systems for FinOps into workflows that have been in place for years. Chris Ortbals, chief product officer for Tangoe, said the simplest path to FinOps is to rely on a software-as-a-service (SaaS) platform designed from the ground up to leverage AI to help IT teams manage cloud computing and telecommunications expenses both before and after applications are deployed. Each DevOps team will ultimately need to determine to what extent it will implement metrics that foster more efficient consumption of cloud computing resources. The more aware of those costs DevOps teams are, the more likely they are to make better decisions about which types of workloads should be run where and, just as importantly in the age of the cloud, at what time, given all the pricing options provided. Developers, of course, tend to jealously guard their prerogatives. Convincing them to give up their ability to provision cloud infrastructure on demand is going to be a challenge, at least until someone makes it plain how much all those cloud instances wind up costing the organization each and every month.

Read More

What Are The Amazing Containers In Modern C++?

Containers are powerful data storage arrays in modern C++, and they are very useful for iterating over and searching data with their many methods and properties. The C++ standard library defines four categories of containers. In this post, we explain containers in modern C++.

What is a container in modern C++?

A container is a holder object that stores a collection of data elements. Containers are implemented as class templates, which allows great flexibility in the element types supported: they can be used with int, float, double, etc., with struct types, and with other modern C++ types such as lambdas and template-generated types. Thus, the developer can create different data sets in memory; these can be static or dynamic, and they are safe and well optimized. Basically, a container manages the storage space for its elements and provides methods and properties to access and operate on them, either directly or through iterators. Containers are mostly dynamic data structures, and they are well optimized for memory management and performance. In C++, there are four main categories of containers:

Sequence Containers (vector, array, …)
Associative Containers (map, set, …)
Unordered Associative Containers (unordered_set, unordered_map, …)
Container Adapters (stack, queue, priority_queue)

Now, let’s see each of them.

What are sequence containers in modern C++?

In C++, the Sequence Containers are class templates that implement data structures (vector, array, …) whose elements can be accessed sequentially. They are not plain data types but class objects, so they come with the methods of their classes and are optimized for many modern C++ algorithms.
The sequence containers are:

std::array : a class template for a fixed-size contiguous array (a modern C array)
std::vector : a class template for a dynamic contiguous array (a modern dynamic C array)
std::deque : a class template for a double-ended queue
std::list : a class template for a doubly-linked list (a modern linked list)
std::forward_list : a class template for a singly-linked list (a modern linked list)

What are associative containers in modern C++?

Associative Containers are class templates that implement sorted data structures that can be searched quickly. They are sorted by keys, and they are roughly O(log n) complexity data structures. The associative containers are:

std::map : a class template for a collection of key-value pairs, sorted by keys, where keys are unique
std::set : a class template for a collection of unique keys, sorted by keys
std::multiset : a class template for a collection of keys, sorted by keys
std::multimap : a class template for a collection of key-value pairs, sorted by keys

What are unordered associative containers in modern C++?

Unordered Associative Containers are class templates that implement unsorted (hashed) data structures that can be searched quickly. They are O(1) amortized, O(n) worst-case complexity data structures. The unordered associative containers are:

std::unordered_set : a class template for […]

Read More

What Are The Differences Between Mutex And Shared Mutex In C++?

The concurrency support library in modern C++ is designed to help read and write data safely in threaded operations, which allows us to develop faster multi-threaded apps. This library includes built-in support for threads (std::thread), atomic operations (std::atomic), mutual exclusion (std::mutex), condition variables (std::condition_variable), and many other features. In addition to std::mutex, there is a shared mutex (std::shared_mutex, added in C++17; C++14 introduced the similar std::shared_timed_mutex) located in the <shared_mutex> header. In this post, we explain a frequently asked mutex question in modern C++: what are the differences between mutex and shared_mutex?

What is a mutex (std::mutex) in C++?

Mutual exclusion is a property of concurrency control: in programming, it is a locking mechanism that grants a thread exclusive access to a resource. This is mostly needed when we use shared data in multi-threaded and multi-tasking operations in parallel programming. In C++, we can use std::mutex to define a mutex variable that protects shared data from being accessed simultaneously by multiple threads. Here is an example of how we can use std::mutex with its lock() and unlock() methods:

  std::mutex mtx;

  mtx.lock();
  // do operations
  mtx.unlock();

Here are more details and examples about std::mutex.

What is a shared mutex (std::shared_mutex) in C++?

The shared mutex is declared in the <shared_mutex> header and used through the std::shared_mutex class in mutual exclusion operations between threads. The shared_mutex class is a part of the thread support library. It is a synchronization primitive that can be used to protect shared data when multiple threads try to access it. Here is a simple example of std::shared_mutex with its try_lock_shared() and unlock_shared() methods, which come with C++17.
  std::shared_mutex sharedmutex;

  // in a thread function
  if (sharedmutex.try_lock_shared())
  {
    // do operations
    sharedmutex.unlock_shared();
  }

Here are more details and a full example about shared_mutex.

What are the differences between std::mutex and std::shared_mutex?

While std::mutex guarantees exclusive access to some kind of critical resource, the shared_mutex class extends this feature with two levels of access: shared and exclusive. A shared_mutex can be used at the exclusive access level to prevent any other thread from acquiring the mutex, as with std::mutex; it does not matter whether the other thread is trying to acquire shared or exclusive access. A shared_mutex can also be used at the shared access level to allow multiple threads to acquire the mutex at the same time, but only in shared mode. In thread operations, exclusive access is not granted until all of the previous shared holders have returned the mutex, and as long as an exclusive request is waiting, new shared requests are queued to be granted after the exclusive access. For more information about the shared mutex feature, please see https://open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3659.html

C++ Builder is the easiest and fastest C and C++ IDE for building simple or professional applications on the Windows, macOS, iOS, and Android operating systems. It is also easy for beginners to learn, with its wide range of samples, tutorials, help files, and LSP support for code. RAD Studio’s C++ Builder version comes with the award-winning VCL framework for high-performance native Windows apps and the powerful FireMonkey (FMX) framework for cross-platform UIs. There is a free C++ Builder Community Edition for students, beginners, and startups; it can be downloaded from here. For professional developers, there are Professional, Architect, […]

Read More

Getting RCE in Chrome with incomplete object initialization in the Maglev compiler

In this post I’ll exploit CVE-2023-4069, a type confusion vulnerability that I reported in July 2023. The vulnerability—which allows remote code execution (RCE) in the renderer sandbox of Chrome by a single visit to a malicious site—is found in v8, the Javascript engine of Chrome. It was filed as bug 1465326 and subsequently fixed in version 115.0.5790.170/.171. Vulnerabilities like this are often the starting point for a “one-click” exploit, which compromises the victim’s device when they visit a malicious website. What’s more, renderer RCE in Chrome allows an attacker to compromise and execute arbitrary code in the Chrome renderer process. That being said, the renderer process has limited privilege and such a vulnerability needs to be chained with a second “sandbox escape” vulnerability (either another vulnerability in the Chrome browser process or one in the operating system) to compromise Chrome itself or the device. While many of the most powerful and sophisticated “one-click” attacks are highly targeted, and average users may be more at risk from less sophisticated attacks such as phishing, users should still keep Chrome up-to-date and enable automatic updates, as vulnerabilities in v8 can often be exploited relatively quickly. The current vulnerability, CVE-2023-4069, exists in the Maglev compiler, a new mid-tier JIT compiler in Chrome that optimizes Javascript functions based on previous knowledge of the input types. This kind of optimization is called speculative optimization and care must be taken to make sure that these assumptions on the inputs are still valid when the optimized code is used. The complexity of the JIT engine has led to many security issues in the past and has been a popular target for attackers.

Maglev compiler

The Maglev compiler is a mid-tier JIT compiler used by v8. Compared to the top-tier JIT compiler, TurboFan, Maglev generates less optimized code but with a faster compilation speed.
Having multiple JIT compilers is common in Javascript engines, the idea being that with multiple compiler tiers, you’ll find a more optimal tradeoff between compilation time and runtime optimization. Generally speaking, when a function is first run, slow bytecode is generated; as the function is run more often, it may get compiled into more optimized code, first by the lowest-tier JIT compiler. If the function gets used more often still, its optimization tier gets moved up, resulting in better runtime performance—but at the expense of a longer compilation time. The idea here is that for code that runs often, the runtime cost will likely outweigh the compile time cost. You can consult An Introduction to Speculative Optimization in v8 by Benedikt Meurer for more details of how the compilation process works. The Maglev compiler is enabled by default starting from version 114 of Chrome. Similar to TurboFan, it goes through the bytecode of a Javascript function, taking into account the feedback that was collected from previous runs, and transforms the bytecode into more optimized code. However, unlike TurboFan, which first transforms bytecodes into a “Sea of Nodes”, Maglev uses a different intermediate representation and first transforms bytecodes into SSA (Static Single-Assignment) nodes, which are declared in the file maglev-ir.h. At the time of writing, the compilation process of Maglev consists mainly of two phases of optimizations: the first phase involves building a graph from the SSA nodes, while the second phase consists of optimizing the representations […]

Read More

The Growing Impact of Generative AI on Low-Code/No-Code Development

No-code/low-code platforms, once a disruptor in the realm of software development, are now embracing the capabilities of generative AI to create even more dynamic experiences. This union of convenience and innovation redefines how users interact with their software. Imagine a scenario where crafting complex instructions like “Deploy endpoint protection to noncompliant devices” becomes as simple as conversing with your application. The fusion of generative AI and no-code/low-code platforms empowers users to shape their software’s behavior without delving into intricate technicalities. Users can input prompts such as “Generate a code snippet for converting date formats” or “Create a workflow that automates inventory updates.” By translating natural language into action, this approach streamlines development and fosters creativity.

An Amalgamation of Generative AI and No-Code/Low-Code

Beyond buzzwords, the amalgamation of generative AI with no-code/low-code platforms offers tangible benefits. The efficiency gains that occur when users can sidestep the need for manual configurations and directly communicate their intentions are both remarkable and unprecedented. Accessibility is enhanced, enabling non-technical individuals to actively participate in application development. Moreover, innovative use cases emerge, allowing organizations to streamline complex workflows with ease. As with any transformative technology, challenges emerge alongside benefits. Privacy concerns loom large when dealing with data input into generative AI models. Striking a balance between providing valuable insights and safeguarding sensitive information becomes paramount. Additionally, the inherently non-deterministic nature of generative AI can lead to varying outcomes, requiring careful consideration of use cases to ensure reliable results. As this collaboration matures, the landscape of software development is poised for significant change.
Conversational interfaces that empower users to dictate software behaviors will continue to evolve, reducing implementation and configuration overhead. Imagine a future where complex workflows are summoned with a simple request or applications are custom-built based on natural language blueprints. This shift will not only streamline development but also democratize technology, making it accessible to a broader audience. The integration of generative AI with no-code/low-code platforms allows users to express their creativity more freely. By enabling natural language prompts like “Design an app to manage inventory with automatic restocking” or “Build a workflow that offboards a user across Google, Slack, and Salesforce,” users can drive software behaviors without being constrained by technical jargon. This fusion redefines the efficiency of software interaction. Tasks that previously required meticulous configuration or coding can now be executed through simple prompts. Whether generating email templates, creating data transformation scripts, or orchestrating multi-step workflows, the convenience of natural language input eliminates barriers and accelerates results.

A Democratic Approach

Looking forward, the integration of generative AI in no-code/low-code platforms points toward a more democratic approach to software development. This convergence will enable a broader range of individuals to participate actively, regardless of their coding expertise. By simplifying the process and making it more inclusive, we’re shaping a future where software truly adapts to human intent. As businesses continue to harness the potential of generative AI and no-code/low-code platforms, adaptation and learning will be key. Embracing this transformation requires a shift in mindset and an understanding that software can be molded through conversations and prompts.
As technology matures, the barriers between user intent and software behavior will fade, ushering in an era where technological fluency is defined by our ability to communicate rather than code. Speculating on how this shift will impact the day-to-day […]

Read More

Coordinated Disclosure: 1-Click RCE on GNOME (CVE-2023-43641)

Today, in coordination with Ilya Lipnitskiy (the maintainer of libcue) and the distros mailing list, the GitHub Security Lab is disclosing CVE-2023-43641, a memory corruption vulnerability in libcue. We have also sent a text-only version of this blog post to the oss-security list. It’s quite likely that you have never heard of libcue before, and are wondering why it’s important. This situation is neatly illustrated by xkcd 2347. libcue is a library used for parsing cue sheets—a metadata format for describing the layout of the tracks on a CD. Cue sheets are often used in combination with the FLAC audio file format, which means that libcue is a dependency of some audio players, such as Audacious. But the reason why I decided to audit libcue for security vulnerabilities is that it’s used by tracker-miners: an application that’s included with GNOME—the default graphical desktop environment of many open source operating systems. The purpose of tracker-miners is to index the files in your home directory to make them easily searchable. For example, the index is used by the desktop’s search bar. The index is automatically updated when you add or modify a file in certain subdirectories of your home directory, in particular including ~/Downloads. To make a long story short, that means that inadvertently clicking a malicious link is all it takes for an attacker to exploit CVE-2023-43641 and get code execution on your computer. The video shows me clicking a link in a webpage, which causes a cue sheet to be downloaded. Because the file is saved to ~/Downloads, it is then automatically scanned by tracker-miners. And because it has a .cue filename extension, tracker-miners uses libcue to parse the file. The file exploits the vulnerability in libcue to gain code execution and pop a calculator. Cue sheets are just one of many file formats supported by tracker-miners. For example, it also includes scanners for HTML, JPEG, and PDF.
I am delaying publication of the proof of concept (PoC) used in the video, to give users time to install the patch. But if you’d like to test if your system is vulnerable, try downloading this file, which contains a much simpler version of the PoC that merely causes a (benign) crash. The offsets in the full PoC need to be tuned for different distributions. I have only done this for Ubuntu 23.04 and Fedora 38, the most recent releases of Ubuntu and Fedora at this time. In my testing, I have found that the PoC works very reliably when run on the correct distribution (and will trigger a SIGSEGV when run on the wrong distribution). I have not created PoCs for any other distributions, but I believe that all distributions that run GNOME are potentially exploitable.

The bug in libcue

libcue is quite a small project. It’s primarily a bison grammar for cue sheets, with a few data structures for storing the parsed data. A simple example of a cue sheet looks like this:

REM GENRE "Pop, dance pop"
REM DATE 1987
PERFORMER "Rick Astley"
TITLE "Whenever You Need Somebody"
FILE "Whenever You Need Somebody.mp3" MP3
  TRACK 01 AUDIO
    TITLE "Never Gonna Give You Up"
    PERFORMER "Rick Astley"
    SONGWRITER "Mike Stock, Matt Aitken, Pete Waterman"
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    TITLE "Whenever You Need Somebody"
    PERFORMER "Rick Astley"
    SONGWRITER "Mike Stock, […]

Read More

Three Important Posts About The Features Of C++14

Hello C++ Developers. As I write this post, the summer is over (if you live in the Northern hemisphere), in most countries the new educational year has started, and we wish good luck to all students. If you are a student and want to learn C++, we have a lot of educational posts for you. This week, we continue to explore features from the C++14 standard which are available in C++ Builder. We explain what the constexpr specifier is and what the relaxed constexpr restrictions in C++14 are. We explain variable templates in C++ and we teach how to use them in modern C++. In another post-pick, we explain what Aggregate Member Initialization is and we give very simple examples for you to try. Our educational LearnCPlusPlus.org site has a broad selection of new and unique posts with examples suitable for everyone from beginners to professionals alike. It is growing well and we have many new readers, thanks to your support! The site features a treasure-trove of posts that are great for learning the features of modern C++ compilers with very simple explanations and examples. RAD Studio’s C++ Builder, Delphi, and their free community editions, C++ Builder CE and Delphi CE, are powerful tools for modern application development.

Table of Contents

Where can I learn C++ and test these examples with a free C++ compiler?
How to use modern C++ with C++ Builder?
How to learn modern C++ for free using C++ Builder?
Do you want to know some news about C++ Builder 12?

Where can I learn C++ and test these examples with a free C++ compiler?

If you don’t know anything about C++ or the C++ Builder IDE, don’t worry: we have a lot of great, easy to understand examples on the LearnCPlusPlus.org website and they’re all completely free. Just visit the site and copy and paste any examples there into a new Console, VCL, or FMX project, depending on the type of post. We keep adding more C and C++ posts with sample code.
In today’s round-up of recent posts on LearnCPlusPlus.org, we have new articles with very simple examples that can be used with:

the free C++ Builder 11 CE Community Edition or a professional version of C++ Builder
the free BCC32C and BCC32X C++ compilers
the free Dev-C++

Read the FAQ notes on the CE license and then simply fill out the form to download C++ Builder 11 CE.

How to use modern C++ with C++ Builder?

In C++, the constexpr specifier is used to declare a function or variable whose value can be evaluated at compile time, which speeds up code at runtime. This useful feature had some restrictions in C++11; these were relaxed in C++14, and the change is known as the Relaxed Constexpr Restrictions. In the next post, we explain what the relaxed constexpr restrictions are in modern C++. Aggregate Member Initialization is another feature of C++, improved and modernized with C++11, C++14, and C++20. With this feature, objects can initialize an aggregate member from a braced-init-list. In the next post, we explain what aggregate member initialization is and what the changes to it were in the modern C++ standards. Templates are one of the great features of modern C++. They are a simple and very powerful statement in […]

Read More

GitHub Repository Rules are now generally available

Protected branches have been around for a while, and we’ve made numerous improvements over time. We’ve added new rules to protect multiple branches and introduced additional permissions. However, it’s still challenging to consistently protect branches and tags throughout organizations. Managing scripts, cron jobs, various API calls, or third-party tooling to maintain consistent branch protections is not only annoying but also time-consuming. As an engineer, you often won’t know the rules in place until you encounter a pull request.

It’s time for a new approach

We’re excited to announce the general availability of repository rules. Repository rules enable you to easily define branch protections in your public repositories. With flexible targeting options, you can protect multiple branch patterns using a single ruleset. Layering makes bypass scenarios dynamic; a GitHub App can skip status checks with no additional permissions, and administrators can bypass pull requests while still requiring the important CodeQL checks to run. In line with our mission to be the home for all developers, we have integrated GitHub Repository Rules to ensure that everyone collaborating on a repository knows the rules in play for them. An overview page provides visibility into the rules applicable to a branch. Relevant information about rule enforcement is available at multiple touchpoints on GitHub.com, Git, and the GitHub CLI. There are also helpful prompts on ensuring the responsible use of bypass permissions. Twilio has been using GitHub Repository Rules to balance developer experience and security. At Twilio, we value the autonomy of our engineering teams, including the ability to manage their own GitHub repositories. However, this autonomy makes compliance and security more challenging. We have successfully used GitHub Repository Rules to help us meet our compliance and security requirements while maintaining team autonomy.
– David Betts, Senior Engineering Manager // Twilio

GitHub Enterprise Cloud customers can enforce these rules across all or a subset of their repositories in an organization. No more tedious audits checking to see if a rule exists; now, you can ensure consistency in one location. If you’re not ready to commit to a ruleset, you can trial it in evaluate mode. Rule insights allow you to see what would happen if you dismissed stale reviews or enabled linear merge history. No more guessing and no more testing in “production.” Policy enforcement is a big reason Thomson Reuters has been an early adopter of repository rules across their organization. Compliance and security controls are fundamental to keeping applications safe. At Thomson Reuters, it’s important we properly enforce these policies. With repository rules, GitHub gives us the confidence to know we are enforcing our policies across an organization effectively, keeping our applications safe for end customers.

– Darren Trzynka, Senior Cloud Architect // Thomson Reuters

Speaking of consistency, repository rules can deliver that with new metadata rules. Branch names, commit messages, and author email addresses of commits can be governed to help ensure organizational standards. So, set all those protected tags to use SemVer and commit messages to the Emoji-Log standard. Let’s jump in with a few scenarios where repository rules can help level up your code integrity.

We’re just normal repositories

Typical rules for production repositories. Setting up repository rules can help maintain code quality, prevent mistakes, and improve collaboration. There are numerous decisions to make about the security goals of a repository, let […]

Read More

How to responsibly adopt GitHub Copilot with the GitHub Copilot Trust Center

First introduced as a technical preview in June 2021, GitHub Copilot quickly emerged as the world’s first at-scale generative AI coding tool when it became generally available in June 2022. Since then, it’s played a critical role in redefining the developer experience and underscoring the impact of developer productivity and satisfaction on business outcomes. In our latest survey, we found that 92% of U.S.-based developers are already using AI coding tools both in and outside of work, which shows that most companies are already using AI, whether they know it or not.

As the creators of the world’s most widely adopted generative AI coding tool, we want to empower other organizations to accelerate their innovation, while ensuring they have the transparency they need to understand and feel confident using GitHub Copilot. That’s why we’re launching the GitHub Copilot Trust Center. We often field questions about how GitHub Copilot protects user privacy and whether the code that GitHub Copilot suggests is secure. Those questions, and many others regarding security, privacy, compliance, and intellectual property, can be easily found and clearly answered on the GitHub Copilot Trust Center.

When developers use GitHub Copilot, they can augment their capabilities and tackle large, complex problems in a way they couldn’t before. By following good coding practices and taking advantage of GitHub Copilot’s built-in safeguards, they can feel confident in the code they’re contributing. As organizations take note of AI’s transformative potential, GitHub aims to share guidance on how best to use these tools, and bring greater transparency to how GitHub Copilot for Business works.

What you’ll find on the GitHub Copilot Trust Center

AI is here to stay, and it’s already transforming how developers approach their day-to-day work. But just like any disruptive technology throughout history, AI brings important questions around its use and implications.
To understand GitHub Copilot’s capabilities and proactively build policies that enable its use, organizations can reference the Copilot Trust Center to responsibly and effectively equip their developers with the AI pair programmer. Here are a few frequently asked questions to get you started:

What personal data is used by GitHub Copilot for Business and how?

Copilot for Business collects three kinds of personal data: user engagement data, prompts, and suggestions. User engagement data is information about events that are generated when interacting with a code editor. A prompt is a compilation of IDE code and relevant context (IDE comments and code in open files) that the GitHub Copilot extension sends to the AI model to generate suggestions. A suggestion is one or more lines of proposed code and other output returned to the GitHub Copilot extension after a prompt is received and processed by the GitHub Copilot model.

Copilot for Business uses the source code in your IDE only to generate a suggestion. It also performs several scans to identify and remove certain information within a prompt. Prompts are transmitted to the AI model only to generate suggestions in real time and are deleted once the suggestions are generated. Copilot for Business also does not use your code to train the Azure OpenAI model. GitHub Copilot for Individual users, however, can opt in and explicitly provide consent for their code to be used as training data. User engagement data is used to improve the performance of the Copilot Service; specifically, it’s used to […]

Read More