From the blog

Smarter, more efficient coding: GitHub Copilot goes beyond Codex with improved AI model

The magic of GitHub Copilot just got even better with an improved AI model and enhanced contextual filtering. These improvements give developers more tailored code suggestions that better align with their specific needs, and are available for both GitHub Copilot for Individuals and GitHub Copilot for Business. Read on to learn more about these exciting updates and how they can help you take your coding skills to the next level.

Improved AI model goes beyond Codex for even faster suggestions

The improved AI model behind GitHub Copilot goes beyond the previous OpenAI Codex model, offering even faster code suggestions to developers. It was developed through a collaboration between OpenAI, Microsoft Azure AI, and GitHub, and offers a 13% latency improvement over the previous model. This means that GitHub Copilot generates code suggestions for developers faster than ever, which promises to drive a substantial increase in overall productivity.

Enhanced contextual filtering for more tailored code suggestions

In addition to the improved AI model, we’ve implemented more sophisticated contextual filtering that takes into account a wider range of a developer’s context and usage patterns. With the update, GitHub Copilot filters prompts and suggestions more intelligently, so developers get more relevant code completions for their specific coding tasks. This has resulted in a 6% relative improvement in code acceptance rate, allowing developers to focus even more on the creative aspects of their work rather than getting bogged down in tedious coding tasks. The productivity gain also allows developers to tackle more ambitious projects and bring their ideas to life more quickly.

Unlock new levels of productivity and satisfaction with GitHub Copilot

The improved AI model and the new contextual filtering offer a 13% latency improvement and a 6% relative improvement in code acceptance rate, building upon the productivity gains developers have come to expect while using GitHub Copilot.
With these improvements, developers can expect to stay in the flow and work more efficiently than ever, leading to faster innovation with better code. They’ll also find more satisfaction in their work, given research showing that minimizing disruptions and staying in the flow have a tangible impact on developer happiness. At GitHub, we’re committed to continuing to improve the developer experience with GitHub Copilot. We have some exciting plans in the works and will continue to share news on our blog and Changelog. Whether you’re a seasoned pro or just starting out, GitHub Copilot can help you take your coding skills to the next level, and we can’t wait to see what you’ll build with it!

Read More

Introducing code referencing for GitHub Copilot

Make more informed decisions about the code you use. In the rare case where a GitHub Copilot suggestion matches public code, this update will show a list of repositories where that code appears and their licenses. Sign up for the private beta today.

Over the course of the last year, GitHub Copilot, the world’s first at-scale AI pair programmer trained on billions of lines of public code, has attracted more than 1 million developers and helped over 27,000 organizations build faster and more productively. During that time, many developers told us they want to see when GitHub Copilot’s suggestions match public code. Today, we’re announcing a private beta of GitHub Copilot with code referencing that includes an updated filter which detects and shows context of code suggestions matching public code on GitHub. When the filter is enabled, GitHub Copilot checks code suggestions, along with about 150 characters of surrounding code, against an index of all the public code on GitHub.com. Matches—along with information about every repository in which they appear—are displayed right in the editor. Developers can now choose whether to block suggestions containing matching code, or allow those suggestions with information about matches.

Why? Some want to learn from others’ work, others may want to take a dependency rather than introduce new app logic, and still others want to give or receive credit for similar work. Whatever the reason, it’s nice to know when similar code is out there. Let’s see how it works.

How GitHub Copilot code referencing works

With billions of files to index and a latency budget of only 10-20ms, it’s a miracle of engineering that this is even possible. Still, if there’s a match, a notification appears in the editor showing: (1) the matching code, (2) the repositories where that code appears, and (3) the license governing each repository.
Why code referencing matters

In our journey to create a code referencing tool, we discovered a few interesting things. First, our previous research suggests that matches occur in less than one percent of GitHub Copilot suggestions. But that one percent isn’t evenly distributed across all use cases. In the context of an existing application with surrounding code, we almost never see a match. But in an empty or nearly empty file, we see matches far more often. Suggestions are heavily biased toward the prompt so GitHub Copilot can provide suggestions tailor-made for your current task. That means, in an existing app with lots of context, you’ll get a suggestion customized for your code. But in an empty, or nearly empty file, there’s little to no context. So, you’re more likely to get a suggestion that matches public code.

We’ve also found that when suggestions match public code, those matches frequently appear in dozens, if not hundreds, of repositories. In some ways, this isn’t surprising because the models that power GitHub Copilot are akin to giant probability machines. A code fragment that appears in many repositories is more likely to be a “pattern” detected by the model—similar to the patterns we see elsewhere in public code. For example, research on Java projects finds that up to 11% of repositories may contain code that resembles solutions posted to Stack Overflow, and the vast majority of those snippets appear without attribution. Another study on Python found that many […]

Read More

A checklist and guide to get your repository collaboration-ready

Want the TL;DR, or you’ve already been using GitHub for a while? Skip to the end for a printable checklist that you can use to ensure that you’ve covered all aspects of making your repository collaboration-ready.

My daughter has a pair of pet gerbils. They’re awesome, but not the most complex creatures to care for. They need their cage cleaned occasionally, their food and water refilled, and may need a neighbor to check in on them if we’re away for a while. But someday, she may have a pet that requires more care and attention–a cat or dog perhaps, which needs to be played with and nurtured every day–so she’ll want to have a few good friends who know her pet and can be their companion whenever she’s away. And someday, she may even have a child of her own, making her connections to community and family ever more important. As the saying goes, it takes a village to raise a child.

So it goes with code projects. My colleagues and I often refer to our projects as “pets” or even “children” (sometimes jokingly, sometimes obsessively). We pour a lot of our own care and attention into them, but it can be easy to forget how important the community’s contributions can be to their success. In the world of software development, collaboration can make the difference between a brittle last-minute release and a reliable, maintainable, pain-free project. Whether you’ve been coding for a day or a decade, your colleagues are there to help strengthen your work. But they can only help if you’ve given them the tools to do so. Your primary responsibility as the creator or maintainer of a repository is to ensure that others can appropriately use, understand, and even contribute to the project. GitHub is here to support that mission, but ensuring that a repository is collaboration-ready takes a bit more effort than using git clone. So read on to learn the settings, content, and behaviors that will help you succeed.

1. Repository settings

The settings of your repository lay the foundation for collaboration. They determine who can see and contribute to your project, how contributions are reviewed, and what happens to those contributions once they are submitted. Properly used, they can foster an environment in which contributors across the globe will find, make use of, and help build your project. In a corporate setting, they help shift developers from a siloed way of thinking and building to a “search-first, collaborate-first” mindset. This practice, known as innersourcing, reduces redundant work and accelerates the whole company.

Visibility

You’re aiming to maximize contributions and reuse, but that doesn’t always mean making your repository public, especially in a corporate setting where information privacy is a consideration. You have several options available in the “Settings” tab of your repository. Public lets anyone in the world see and copy your code, and generally allows them to create issues or pull requests, so they can provide feedback about whether it works well, or even suggest (but not force) changes to improve it. This is generally great for personal projects containing no protected information (those tokens are all stored separately, right?) but only for certain “blessed” company projects. Internal is a special visibility level used by GitHub Enterprise, allowing anyone inside your organization to see […]

Read More

Introducing the new, Apple silicon powered M1 macOS larger runner for GitHub Actions

Today, GitHub is releasing a public beta for the new, Apple silicon powered M1 macOS larger runner for GitHub Actions.

Apple silicon powered M1 macOS larger runners

Apple developers require the latest chipset to take advantage of features in the latest versions of iOS and macOS. They also want increased performance by leveraging the on-chip GPU capabilities of the M1 processor. The M1 macOS runner comes with GPU hardware acceleration enabled by default: workloads are transferred from the CPU to the GPU for improved performance and efficiency. The runner is equipped with a 6-core CPU, 8-core GPU, 14 GB of RAM, and 14 GB of storage. It can reduce build times by up to 80% compared to the existing 3-core Intel standard runner, and up to 43% compared to the existing 12-core Intel runner.

How GitHub uses the M1 runner to build GitHub mobile for iOS

The GitHub mobile iOS team leverages the new M1 runner for 10k+ minutes every week to deliver updates of the GitHub iOS app to the Apple App Store. The transition from the 12-core Intel runner to the M1 runner resulted in a 44% build time improvement, from 42 minutes to 23 minutes. While the time spent testing the binary remained constant for single target runs, code compilation improved by 51%, along with UI tests improving by 55% across the entire GitHub mobile test suite. The transition to the M1 runner was seamless, with updating the YAML workflow label being the only requirement to access it. However, due to differences in UI rendering between M1 Macs and Intel Macs, the team had to re-record images for snapshot tests, which compare new UI images with recorded reference images on a pixel-by-pixel basis. The M1 runner has proven to be advantageous for iOS teams, as it provides access to the VM’s GPU and speeds up the App Store review process. Faster approval and publishing of apps can be achieved, which reduces the time spent on submitting to the Apple App Store.
How to use the runner

To try the new Apple silicon macOS larger runner, update the runs-on: key in your GitHub Actions workflow YAML file to target macos-latest-xlarge or macos-13-xlarge. The 12-core macOS larger runner is moving from xlarge to large, and is still available by updating the runs-on: key to macos-latest-large, macos-12-large, or macos-13-large. There is no sign-up required for the beta, and the runner is immediately available to all developers, teams, and enterprises.

New macOS runner pricing

As part of GitHub’s continued commitment to delivering the best developer experience, we are excited to share that we will be decreasing the price of our macOS larger runners. We understand the importance of achieving both cost-efficiency and high performance, and this price decrease reflects our dedication to supporting your success. With today’s launch, our macOS larger runners will be priced at $0.16/minute for XL and $0.12/minute for large. To learn more about per-job-minute runner pricing, check out the docs. Additionally, if you’re interested in using larger macOS runners and understanding the difference between them and larger Linux and Windows runners, you can find more details in the “About larger runners” section of our documentation.

What’s next?

You can track progress towards the general availability of macOS larger runners by following this […]
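Based on the labels above, a minimal workflow targeting the new runner might look like the following sketch (the job, step, and scheme names are illustrative, not from the article):

```yaml
name: ios-build
on: push

jobs:
  build:
    # Target the new Apple silicon M1 macOS larger runner (public beta).
    # Use macos-latest-large / macos-13-large for the 12-core Intel runner instead.
    runs-on: macos-latest-xlarge
    steps:
      - uses: actions/checkout@v4
      - name: Build app
        run: xcodebuild -scheme MyApp build   # hypothetical scheme name
```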

Read More

Microsoft kills Python 3.7 ¦ … and VBScript ¦ Exascaling ARM on Jupiter

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters. This week: VS Code drops support for Python 3.7, Windows drops VBScript, and Europe plans the fastest ARM supercomputer.

1. Python Extension for Visual Studio Code Kills 3.7

First up this week: Microsoft deprecates Python 3.7 support in Visual Studio Code’s Python extension. It’ll probably continue to work for a while, though (emphasis on the “probably”).

Analysis: Obsolete scripting language is obsolete

If you’re still using 3.7, why? It’s time to move on: 3.12 is the new hotness. Even 3.8 is living on borrowed time.

Priya Walia: Microsoft Bids Farewell To Python 3.7

“Growing influence of the Python language”
Python 3.7, despite reaching its end of life in June, remains a highly popular version among developers. … Microsoft expects the extension to continue functioning unofficially with Python 3.7 for the foreseeable future, but there are no guarantees that everything will work smoothly without the backing of official support. … Microsoft’s recent launch of Python scripting within Excel underscores the growing influence of the Python language across various domains. The move opens up new avenues for Python developers to work with data within the popular spreadsheet software. However, it’s not all smooth sailing, as recent security flaws in certain Python packages have posed challenges.

Python? Isn’t that a toy language? This Anonymous Coward says otherwise:
Ha, tell that to Instagram, or Spotify, or Nextdoor, or Disqus, or BitBucket, or DropBox, or Pinterest, or YouTube. Or to the data science field, or mathematicians, or the Artificial Intelligence crowd. … Our current production is running 3.10 but we’re looking forward to moving it to Python 3.11 (3.12 being a little too new) because [of] the speed increases of up to 60%.
… If you’re still somewhere pre-3.11, try to jump straight to 3.11.6. … The main improvements … are interpreter and compiler improvements to create faster bytecode for execution, sometimes new features to write code more efficiently, and the occasional fix to remove ambiguity. I’ve been running Python in production for four years now, migrating from 3.8 -> 3.9 -> 3.10 and soon to 3.11, and so far we have never had to make any changes to our codebase to work with a new update of the language.

And sodul says Python’s reputation for breaking backward compatibility is old news:
Most … code that was written for Python 3.7 will run just fine in 3.12. … We upgrade once a year and most issues we have are related to third-party SDKs that are too opinionated about their own dependencies. We do have breaking changes, but mostly we find pre-existing bugs that get uncovered thanks to better type annotation, which is vital in larger Python projects.

2. Windows Kills VBScript

Microsoft is also deprecating VBScript in the Windows client. It’ll probably continue to work for a while as an on-demand feature, though (emphasis on the “probably”).

Analysis: Obsolete scripting language is obsolete

If you’re still using VBScript, why? It’s time to move on: PowerShell is the new hotness—it’s even cross platform.

Sergiu Gatlan: Microsoft to kill off VBScript in Windows

“Malware campaigns”
VBScript (also known as Visual Basic Script or Microsoft Visual Basic Scripting Edition) is a programming language similar to Visual Basic or Visual Basic for Applications (VBA) and […]

Read More

What Is std::quoted Quoted String In Modern C++?

Sometimes we want to preserve a string’s formatting, especially when the string itself contains quote characters escaped with \". In C++14 and above, there is a std::quoted template that allows handling strings safely where they may contain spaces and special characters, and it keeps their formatting intact. In this post, we explain std::quoted quoted strings in modern C++.

What Is A Quoted String In Modern C++?

The std::quoted template is included in the <iomanip> header, and it is used to safely handle strings that may contain spaces and special characters (i.e. "Let's learn from \"LearnCPlusPlus.org!\""), keeping their formatting intact. Here is the syntax,

  std::quoted( s )

Here are the C++14 templates defined in <iomanip> where the string is used as input,

  template< class CharT >
  /*unspecified*/ quoted( const CharT* s,
                          CharT delim = CharT('"'), CharT escape = CharT('\\') );

or

  template< class CharT, class Traits, class Allocator >
  /*unspecified*/ quoted( const std::basic_string<CharT, Traits, Allocator>& s,
                          CharT delim = CharT('"'), CharT escape = CharT('\\') );

Here is the C++14 template defined in <iomanip> where the string is used as output,

  template< class CharT, class Traits, class Allocator >
  /*unspecified*/ quoted( std::basic_string<CharT, Traits, Allocator>& s,
                          CharT delim = CharT('"'), CharT escape = CharT('\\') );

Note that this feature uses std::basic_string, and it was improved in C++17 by the addition of std::basic_string_view support.

What is a std::quoted quoted string in modern C++? Here is an example that writes an input string into a stringstream and reads it back out:

  const std::string str = "I say \"LearnCPlusPlus!\"";
  std::stringstream sstr;
  sstr << std::quoted(str);       // write str with delimiters and escapes
  std::string str_out;
  sstr >> std::quoted(str_out);   // read it back into str_out, unescaped

Is there a full example about std::quoted quoted strings in modern C++? Here is a full example about std::quoted in modern C++.

  #include <iomanip>
  #include <iostream>
  #include <sstream>

  int main() {
      std::stringstream sstr;
      const std::string str = "Let's learn from \"LearnCPlusPlus.org!\" ";
      sstr […]

Read More

Structured Diagnostics in the New Problem Details Window

Sy Brand, October 11th, 2023

Massive compiler errors which seem impossible to navigate are the bane of many C++ developers’ lives. It’s up to tools to provide a better experience to help you comprehend diagnostics and understand how to fix the root issue. I wrote Concepts Error Messages for Humans to explore some of the design space, and now, thanks to the hard work of many folks working on Visual Studio, we have a better experience to share with you all. You can read about some of the work which has led up to these changes in Xiang Fan’s blog post on the future of C++ diagnostics in MSVC and Visual Studio.

In Visual Studio 2022 version 17.8 Preview 3, if you run a build using MSVC and an MSBuild project, entries in the Error List which have additional information available will show an icon in a new column named Details. If you click this button, a new Problem Details window will open up. By default this will be in the same place as the Error List, but if you move it around, Visual Studio will remember where you put it. This Problem Details window provides you with detailed, structured information about why a given problem occurred. If we look at this information we might think, okay, why could void pet(dog) not be called? If you click the arrow next to it, you can see why. In a similar way we can expand other arrows to find out more information about our errors. This example is produced from code which uses C++20 Concepts, and the Problem Details window gives you a way to understand the structure of Concepts errors.
For those who would like to play around with this example, the code required to produce these errors is:

  struct dog {};
  struct cat {};

  void pet(dog);
  void pet(cat);

  template <class T>
  concept has_member_pet = requires(T t) { t.pet(); };

  template <class T>
  concept has_default_pet = T::is_pettable;

  template <class T>
  concept pettable = has_member_pet<T> or has_default_pet<T>;

  void pet(pettable auto t);

  struct lizard {};

  int main() {
      pet(lizard{});
  }

Make sure you compile with /std:c++20 to enable Concepts support.

Output Window

As part of this work we have also made the Output Window visualize any hierarchical structure in the output diagnostics. For example, here is an excerpt produced by building the previous example:

  1>Source.cpp(18,6):
  1>or 'void pet(_T0)'
  1>Source.cpp(23,5):
  1>the associated constraints are not satisfied
  1>  Source.cpp(18,10):
  1>  the concept 'pettable' evaluated to false
  1>    Source.cpp(16,20):
  1>    the concept 'has_member_pet' evaluated to false
  1>      Source.cpp(10,44):
  1>      'pet': is not a member of 'lizard'
  1>        Source.cpp(20,8):
  1>        see declaration of 'lizard'
  1>    Source.cpp(16,41):
  1>    the concept 'has_default_pet' evaluated to false
  1>      Source.cpp(13,30):
  1>      'is_pettable': is not a member of 'lizard'
  1>        Source.cpp(20,8):
  1>        see declaration of 'lizard'

This change makes it much easier to scan large sets of diagnostics without getting lost.

Code Analysis

The Problem Details window is now also used for code analysis warnings which have associated Key Events. For example, consider this code which could potentially result in a use-after-move:

  #include <string>

  void eat_string(std::string&&);
  void use_string(std::string);

  void oh_no(bool should_eat, bool should_reset) {
      std::string my_string{ "meow" };
      bool did_reset{ false };
      if (should_eat) {
          eat_string(std::move(my_string));
      }
      if (should_reset) {
          did_reset = true;
          my_string = "the string is reset";
      }
      […]

Read More

What Is Decltype (auto) In Modern C++ And How To Use It?

The auto keyword arrived with the new features of C++11 and the standards above. In C++14, the decltype keyword can be combined with auto. In modern C++, the decltype(auto) type-specifier deduces types while keeping their references and cv-qualifiers, while auto does not. In this post, we explain what decltype(auto) is in modern C++ and how to use it.

What is auto in modern C++?

The auto keyword is used to define variable types automatically; it is a placeholder type specifier (an auto-typed variable), and it can also be used in a function declaration or a structured binding declaration.

What is decltype in modern C++?

The decltype keyword and operator represents the type of a given entity or expression. This feature is one of the C++11 features added to compilers (including BCC32 and other CLANG compilers). In a way you are saying “I am declaring this variable to be the same type as this other variable”.

How to use decltype (auto) in modern C++?

In C++14 and standards above, the decltype(auto) type-specifier deduces types while keeping their references and cv-qualifiers, while auto does not. Since C++14, here is the syntax,

  type_constraint (optional) decltype ( auto )

In this syntax, the deduced type is decltype(expr), where expr is the initializer or the operand of a return statement. Here is a simple example of how we can use it,

  decltype(auto) x = i;

Are there some simple examples about decltype (auto) in modern C++?
Here are some simple examples that show the difference between auto and decltype(auto).

In C++14 and above, we can use decltype(auto) with const int values as below,

  const int x = 4096;
  auto xa = x;            // xa : int
  decltype(auto) xb = x;  // xb : const int

In C++14 and above, we can use decltype(auto) with int& values as below,

  int y = 2048;
  int& y0 = y;             // y0 : int&
  auto ya = y0;            // ya : int
  decltype(auto) yb = y0;  // yb : int&

In C++14 and above, we can use decltype(auto) with int&& values as below,

  int&& z = 1024;
  auto zm = std::move(z);             // zm : int
  decltype(auto) zm2 = std::move(z);  // zm2 : int&&

In C++11 and above, we can use auto for return types,

  auto myf(const int& i) {
      return i; // auto return type : int
  }

In C++14 and above, we can use decltype(auto) for return types,

  decltype(auto) myf2(const int& i) {
      return i; // decltype(auto) return type : const int&
  }

Is there a full example about decltype (auto) in modern C++? Here is a full example that shows how you can use auto and decltype(auto) with different int types.

  #include <iostream>

  auto myf(const int& i) {
      return i; // auto […]

Read More

Introducing secret scanning validity checks for major cloud services

At GitHub, we launched secret scanning with the mission of eliminating all credential leaks. In support of this mission, this year we’ve made secret scanning and secret scanning push protection free on public repositories to help open source users detect and prevent secret leaks. We also shipped push protection metrics for GitHub Advanced Security customers to better understand trends across their organization. But a good security experience isn’t just about reducing noise and delivering high-confidence alerts–it should make your remediation efforts simpler and faster. A key component of remediation is assessing whether a token is active or not. To that end, we introduced validity checks for GitHub tokens earlier this year, which removes manual effort and friction from the process. You can see a token’s status within the UI, saving you time and allowing you to prioritize remediation efforts more efficiently. This is especially useful when you have to comb through hundreds or even thousands of alerts.

Today, we’re excited to announce that we have extended validity checks to select tokens from AWS, Microsoft, Google, and Slack. These account for some of the most common types of secrets detected across repositories on GitHub. This is just the beginning–we’ll continuously expand validation support to more tokens in our secret scanning partner program. You can keep up to date on our progress via our list of supported patterns.

How to get started

Enterprise or organization owners and repository administrators can activate validity checks by going to “Settings” and “Code security and analysis.” Scroll down to “Secret scanning” and check the box for “Automatically verify if a secret is valid by sending it to the relevant partner” to activate validity checks for non-GitHub tokens. Once the setting is enabled, you can see within alerts whether the token is active or not.
We perform checks periodically in the background, but you can also conduct a manual refresh by clicking ‘Verify secret’ in the top right corner. Validity checks are another piece of information at your disposal when investigating a secret scanning alert. We hope this feature will provide greater speed and efficiency in triaging alerts and remediation efforts. If you have feedback to share, please reach out to us in the Code Security community discussion.

Read More

GitHub Copilot Chat beta now available for all individuals

In July, we introduced a public beta of GitHub Copilot Chat, a pivotal component of our vision for the future of AI-powered software development, for all GitHub Copilot for Business users. Today, we’re thrilled to take the next step forward in our GitHub Copilot X journey by releasing a public beta of GitHub Copilot Chat for all GitHub Copilot individual users across Visual Studio and VS Code. Integrated together, GitHub Copilot Chat and the GitHub Copilot pair programmer form a powerful AI assistant capable of helping every developer build at the speed of their minds in the natural language of their choice. We believe this cohesion will form the new centerpiece of the software development experience, fundamentally reducing boilerplate work and designating natural language as a new universal programming language for every developer on the planet. Let’s jump in and take a deeper look into what this announcement means for individual developers and how you can get started.

How developers can access GitHub Copilot Chat beta

GitHub Copilot Chat beta has been enabled for all Copilot for Individuals users for free. Currently, GitHub Copilot Chat is supported in both the Visual Studio and Visual Studio Code editors. GitHub Copilot for Individuals users will also receive an email notification to guide them on the next steps. If you’re not already part of our beta program and would like to get started, please refer to our comprehensive getting started guide, which is conveniently linked in your email notification.

What GitHub Copilot Chat can do for you

Now, anyone signed up for GitHub Copilot for Individuals can access the powerful AI assistant that major enterprises are leveraging to turbocharge developer productivity and happiness. Teams of developers and individuals alike can use GitHub Copilot Chat to learn new languages or frameworks, troubleshoot bugs, or get answers to coding questions in simple, natural language outputs—all without leaving the IDE.
By reducing the need for context switching, it streamlines the development process, which helps developers maintain their focus and momentum. GitHub Copilot Chat also empowers individual contributors to suggest security patches, enhancing the overall security of open source projects—and we think that’s pretty exciting news for the developer community. Here are some of GitHub Copilot Chat’s other powerful features:

Real-time guidance. GitHub Copilot Chat can suggest best practices, tips, and solutions tailored to specific coding challenges—all in real time. Developers can use GitHub Copilot Chat to learn a new language or upskill at speed.

Code analysis. With GitHub Copilot Chat, you can break down complex concepts and get explanations of code snippets.

Fixing security issues. GitHub Copilot Chat can make suggestions for remediation and help reduce the number of vulnerabilities found during security scans.

Simple troubleshooting. Trying to debug code? GitHub Copilot Chat not only identifies issues, but also offers suggestions, explanations, and alternative approaches.

Democratizing software development for a new generation

Today, developers are no longer just people building software for technology companies. They’re an increasingly diverse and global group of folks across industries, who are tinkering with code, design, and documentation in their free time; contributing to open source projects; conducting scientific research; and more. Whether you are a young developer in Brazil learning how to execute a unit test for the first time or a professor in Germany who needs help documenting data, GitHub Copilot […]

Read More