From the blog

Introducing auto-triage rules for Dependabot

Since the May beta release of our GitHub-curated Dependabot policies that detect and close false positive alerts, over 250k repositories have manually opted in, with an average improvement of more than 1 in 10 alerts. The impact so far: auto-dismissal of millions of alerts that would otherwise have demanded a developer’s attention to manually assess and triage. Starting today, you can create your own custom rules to control how Dependabot auto-dismisses and reopens alerts, so you can focus on the alerts that matter without worrying about the alerts that don’t. Today’s ship—our public beta of custom auto-triage rules—makes that engine available to everyone, so you can delegate specific decision-making tasks to Dependabot with your own custom rules.

Today’s release is part of a series of ships that make it easier to scale your security strategy, whether you’re an open source maintainer or an application developer on a centralized security team. Custom auto-triage rules for Dependabot are free for public repositories and available as part of GitHub Advanced Security for private repositories. Together with auto-triage presets and a renewed investment in alert metadata, custom auto-triage rules relieve developers of the overhead of alert management tasks so they can focus on creating great code.

What are auto-triage rules?

Auto-triage rules are a powerful tool to help you substantially reduce false positives and alert fatigue while better managing your alerts at scale. Rules contain criteria that match the targeted alerts, plus the decision that Dependabot will perform on your behalf.

What can you do with rules?

With auto-triage rules, you can proactively filter out false positives, snooze alerts until a patch is released, and, as rules apply to both future and current alerts, manage existing alerts in bulk.

What behaviors can Dependabot perform?

For any existing or future alerts that match a custom rule, Dependabot will perform the selected behavior accordingly.
Our first public beta release covers ignore and snooze-until-patch functionality with repository-level rules. We will follow up soon with support for managing rules at the organization level. Both are managed via the auto-dismiss alert resolution, which provides visibility into automated decisions, integrates with existing reporting systems and workflows, and ensures that alerts can be reintroduced if alert metadata changes.

What alert criteria are supported by custom rules?

Custom rules can target alerts based on multiple criteria, including the following attributes as of today:

- severity: Alert severity, based on CVSS base score, across the following values: low, medium, high, and critical.
- scope: Scope of the dependency: development (devDependency) or runtime (production).
- package-name: Packages, listed by package name.
- cwe: CWEs, listed by CWE ID.
- ecosystem: Ecosystems, listed by ecosystem name.
- manifest: Manifest files, listed by manifest path.

Who can use this feature?

GitHub-curated presets, such as auto-dismissal of false positives, are free for everyone and on all repositories. Custom auto-triage rules are available for free on all public repositories, and available as a feature of GitHub Advanced Security for private repositories.

What’s next for Dependabot?

In addition to gathering your feedback during the public beta, we’re working to support additional alert metadata and enforcement options to expand the capabilities of custom rules. We’re also working on new configurability options for Dependabot security updates to give you more control over remediation flows. Keep an eye on the GitHub Changelog for more! In the meantime, try out […]

Read More

Announcing general availability of GitHub Advanced Security for Azure DevOps

We live in a world fully consumed by software. According to IDC, around 750 million applications will be shipped globally by 2025, meaning the feat of securing the world’s software is growing at an unprecedented rate at a time when digital trust has never been more important. At GitHub, we’re committed to empowering developers not only to create software, but to ship secure products. GitHub Advanced Security (GHAS) was built to minimize context switching, reduce tooling, and allow you to rapidly find and fix vulnerabilities at the speed of innovation. Our application security testing solutions are natively embedded in the developer workflow and empower DevSecOps teams to prioritize innovation and enhance developer productivity without sacrificing security.

But what does this look like in practice? Code scanning, our native SAST solution, surfaces the right alerts at the right time. When a security alert is triggered, it’s shown incrementally in the pull request. This is different from more traditional SAST tools, which may provide a long list of alerts to sort through when a scan is complete, lacking specific context. With this approach, users engage with almost 80% of the alerts surfaced by code scanning, leading to a real-time fix rate of around 50%. This is 3.8X more effective than third-party alerts, where the engagement rate is around 16% and the fix rate is around 13%.

Today, with the general availability of GitHub Advanced Security for Azure DevOps, we are bringing GHAS’s native security features to the Azure DevOps workflow, meaning Azure DevOps users can benefit from the same advantages seen by GitHub Enterprise users. To get started today, any Azure DevOps Project Collection Administrator (PCA) can enable GitHub Advanced Security protections through the Azure DevOps configuration settings.
Rapidly deploy and scale your security program

With general availability, we’ve added new functionality to help you quickly enable GitHub Advanced Security across your organization’s repositories. You can now choose to enable GHAS at the organization or project level, as well as on individual repositories. This should allow you to quickly deploy GHAS, when you want it, where you want it. When you enable GitHub Advanced Security for Azure DevOps, you’ll receive a prompt alerting you that this is a billable event and giving you an estimate of the number of committers. You can also now choose for Advanced Security to be automatically enabled for any future repositories you and your teams create.

View all your alerts in a single pane of glass

A key part of any successful application security program is a way to view all your alerts, across your organization, in a single pane of glass. This ensures you and your team have maximum visibility into your application security posture. We’ve taken this necessary feature and built on it through our partnership with Microsoft Defender for Cloud (MDC). Within MDC, you can now view not only all Advanced Security alerts across your Azure DevOps repositories, but alerts from GitHub as well. This functionality is available in the free tier of MDC, ensuring any team can take advantage of this powerful integration.

Getting started with GitHub Advanced Security for Azure DevOps

If you’re interested in getting started with GitHub Advanced Security for Azure DevOps, please see our documentation. Need more information? We will also be hosting a webinar […]

Read More

Passkeys are generally available

Passkeys are a new type of phishing-resistant sign-in credential that makes it easier to protect your GitHub account by reducing the use of passwords and other, more easily phished authentication methods. Since the launch of passkeys in beta in July, tens of thousands of developers have adopted them. Now, all users on GitHub.com can use passkeys to protect their account. This continues our commitment to securing all contributors with 2FA by the end of 2023 and strengthening security across the platform—without compromising user experience. Read on to learn how to enable passkeys on your GitHub account. And, for those of you also building authentication experiences, we share what we’ve learned about rolling out passkeys and making them easy to use, to help you integrate this powerful new credential method more quickly in your service.

Get started with passkeys

To register one or more passkeys on your account, head to your account security settings and click “Add a passkey.” If you already have security keys set up, you might see an “Upgrade” option next to those, if they are usable as passkeys. For more details about upgrading security keys to passkeys, read the passkeys docs.

What we’ve learned building support for passkeys

Passkeys are a relatively new credential type, and we’ve learned a lot about how to make them easy to use for large communities. Here we’ll share some of what we learned, and how we responded to that feedback—to help other authentication teams looking to adopt passkeys into their product.

Not every platform is passkey ready, but they can still be supported

We found that Linux and Firefox users struggled to use passkeys, as those platforms don’t yet have strong support for passkeys. As a result, we decided to enable cross-device registration of passkeys. That means you can register a passkey on your phone while you’re using your desktop.
The passkey lives on the phone, but users can connect it to their desktop and set up and authenticate through the desktop’s browser. This enables Linux and Firefox users to set up passkeys. At a technical level, this meant ignoring the result of the IsUserVerifyingPlatformAuthenticatorAvailable (IsUVPAA) call and instead relying on the presence of the WebAuthn APIs alone. As a result, both hardware keys (which don’t cause the IsUVPAA check to return true) and cross-device usage remain accessible even when the underlying combination of OS and browser doesn’t result in IsUVPAA returning true.

Make it easy to upgrade compatible security keys

There are a lot of security keys already registered on GitHub.com, many of them hardware keys. We had predicted that most hardware key owners wouldn’t want to upgrade to a passkey, due to threat models and lack of sync support. But a lot of users opted to upgrade a compatible security key to a passkey, so we invested in making it easier to do so. In the account security settings, a new “upgrade” arrow next to compatible security keys enables immediate upgrade. Note that not every compatible security key will have this button next to it, as we only started checking for compatibility in late February this year. We also learned that some browser, OS, and security key combinations don’t support the re-registration flow that we use for upgrades, and will error if GitHub attempts […]

Read More

The GitHub Security Lab’s journey to disclosing 500 CVEs in open source projects

When I stepped onto the scale this morning, I remembered that there are some numbers that feel awkward to celebrate, while others are worth celebrating! Recently, the GitHub Security Lab passed the milestone of 500 CVEs disclosed to open source projects.

What’s a CVE?

In short, it’s the record of a security vulnerability, under the CVE program, intended to inform impacted users. So, finding more vulnerabilities in open source shouldn’t be good news, right? Even as developer communities are getting better at keeping themselves secure, security issues may still slip through their defenses. This means that there will always be a need for security researchers, like the Security Lab, to discover and help fix them. If you’re not familiar with the Security Lab, we’re a team of security experts who work with the broader open source community to help fix security issues in their projects, with the goal of improving the overall security posture of open source. Our core activity is to audit open source projects (not only the ones hosted on GitHub) and help their maintainers fix the vulnerabilities we find, for free. This research is foundational for our other activities, such as education, improvement of our open source static analysis rules, and tooling. And now we are celebrating more than 500 CVEs disclosed.

How did we get here?

The history of the Security Lab dates back to Semmle, the company that created CodeQL and was later acquired by GitHub. 2017 was a pivotal year, as we realized how powerful our product could be for finding security vulnerabilities. Unlike many other static analysis tools, CodeQL efficiently codifies insecure patterns and responds urgently to new security threats at scale.
To showcase this capability, Semmle created a small security research team that used CodeQL to search for vulnerabilities in open source projects, and a web portal named LGTM.com where all open source projects could run CodeQL for free and be alerted of potential security flaws directly within their pull requests. This approach grew into an important company objective: find and fix vulnerabilities at scale in open source. This was a way of giving back to the open source community, just like any software company should. In September 2019, GitHub acquired Semmle, providing an ideal home for advancing the goal of improving open source security at scale. This led to the creation of the Security Lab, with a larger team and new initiatives, including curating the GitHub Advisory Database. The GitHub Advisory Database provides developers with the most accurate information about known security issues in their open source dependencies. GitHub also incorporated CodeQL as a foundation of code scanning and a core pillar of GitHub Advanced Security (GHAS), keeping it free for open source. Code scanning reached parity with LGTM.com in 2022. We have also expanded beyond CodeQL and now use a variety of tools in our audit activities, such as fuzzing. But CodeQL remains one of the most effective tools in our toolbox, because it enables us to conduct variant analysis at scale and allows us to share our knowledge of insecure patterns with the community, in the form of executable CodeQL queries.

The secret? Our maintainers-first approach

Not all reports get a CVE. CVE records are useful for informing downstream consumers, so when there is no downstream […]

Read More

Will ChatGPT Replace Human Software Developers? Probably Not

Since the release of ChatGPT, there has been a great deal of hype around generative AI and how companies can leverage it to cut costs and democratize software and application development. Naturally, with discussions of cost-cutting and democratization come whispers about what will happen to software developers. This is a real and valid concern, but software developers’ skills, expertise, and creativity are still very much needed. While generative AI and AI code generation tools like ChatGPT have shown some promise and potential benefits, they are still in their infancy—like many other innovative technological advancements. We also don’t know what scenarios they may present down the road or their true abilities when the technology matures. For instance, how will these tools integrate with other technologies? We don’t know what will happen when a ChatGPT-generated line of code breaks or needs to be changed. We don’t know whether they can provide a novel solution to a unique problem or what security threats they will present. Given these unknowns, technology executives should think twice about replacing experienced and creative technology talent, such as software developers, with AI code generators. Will ChatGPT create novel code to solve a unique problem never encountered before? Probably not.

A Tale as Old as Time (Or at Least a Decade)

The technology industry has been searching for and developing new ways to make certain software development tasks much easier and more streamlined for years. One example of this is low-code/no-code. The notion of simplifying application development and replacing software developers with laypeople (citizen developers) has been around for more than a decade now, as low-code and no-code solutions have grown more popular. These solutions have promised that companies don’t need technical talent to get their software and app projects off the ground.
However, if you look at the impact of these solutions today, their use can result in large amounts of technical debt and almost always requires the skill of experienced software developers. The reason? Building complex software and applications is extremely difficult; it’s an art. Low-code and no-code solutions have their rightful place and can make things easier if a company is looking to launch a simple app or static web page. These solutions can increase the pace of development and time-to-market and enable everyday people without any development skills to facilitate them. However, they are not actually a complete solution, and they often overlook aspects of development that a human software developer would typically address. Without a skilled expert involved, low-code/no-code platforms often can’t solve a unique problem a company has. So, how does this relate to AI code generators like ChatGPT? Here’s how.

A Similar Situation—With One Key Difference

When thinking about their place in the development process, AI code generators are not that different from low-code or no-code solutions. The thinking is that they will also enable non-technical individuals to create software and applications with ease. Yet, there is one key difference—they promise expertise, too. But is the expertise coming from the AI code generator or the person piloting it? The answer is simple: it is not from the code generator. There have been examples of companies and individuals that have tried using ChatGPT to build code, and they have appeared to be successful. However, without the input of the individuals using it, it never would […]

Read More

Google De-Recruits 100s of Recruiters ¦ ARM Valued at $45½B in IPO

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters. This week: Google fires hundreds of recruiters, and ARM gets a sky-high valuation.

1. Layoffs for the recruiters themselves

First up this week: Google’s hiring has slowed to such an extent that it has far too many in-house recruiters. Boo hoo?

Analysis: Don’t shed a tear at task shedding

I get it. Many reading this care little for the typical recruiter. All too often they seem like pointless brokers—adding no value to the process yet receiving huge bonuses. But this news is the latest indication that DevOps jobs are harder to come by.

Louise Matsakis has the scoop—“Google lays off hundreds on recruiting team … Hard decision”:

“Google is laying off hundreds of people across its global recruiting team as hiring at the tech giant continues to slow. … Workers who were laid off began learning their roles had been eliminated earlier today, according to posts on social media. … Google began reducing the speed of its hiring last year, after adding tens of thousands of workers in 2020 and 2021. … Google spokesperson Courtenay Mencini said, … ‘In order to continue our important work to ensure we operate efficiently, we’ve made the hard decision to reduce the size of our recruiting team.’”

Bring in the RecruitBot 4000. galaxytachyon explains:

“How likely is it that this is because of AI taking over the jobs? Sifting through resumes, contacting candidates, scheduling some interviews, connecting the hiring manager to the candidate, even getting some extra information from the candidate via email or phone calls are all things an LLM can efficiently do. They may actually do it even better than a regular human, since they might ‘know’ more about the role and the technical requirements than an average [recruiter].”

AI recruiters—and AI developers, too. Here’s Qbertino:

“I don’t expect those jobs to return. … After 23 years in IT I’m looking into a … career switch myself.
Our industry is fully industrialized. Custom coding is by now only for mostly totally broken legacy **** that will be replaced by SOA subscriptions within the next few years, and what’s still left to code will be mostly done by AI quite soon, I suspect. … Time to move on. It was an awesome ride, but we’ve now finally built the bots that will replace us. Nice. This will spell more wealth for everyone in the long run, even if we are out of cushy jobs with obscene salaries.”

When Google catches a cold, do other DevOps shops sneeze? Not in gijames1225’s experience:

“It’s weird being at a midsize company that has only accelerated hiring for engineers while the big players all go through these layoff cycles. The cynic in me sees them as token displays of fiscal responsibility being made for shareholders and a weird performativity of not wanting to be outdone by other tech giants. Another bit of me wonders about general productivity at these places if they can lay off so many people and nothing really appears to change (from a consumer perspective).”

All of which makes this Anonymous Coward wonder:

“I wonder what happens now to those who have threatened to quit or were reluctant to come in to physical offices.”

Meanwhile, u/saracenraider has questions: Do […]

Read More

CloudBees CEO: State of Software Development is a Disaster

CloudBees CEO Anuj Kapur told attendees at a DevOps World event today that, with developers spending only 30% of their time writing code, the current state of software development in enterprise IT organizations is a disaster. After more than 14 years of effort, the promise of DevOps—in terms of being able to accelerate the rate at which applications are deployed—remains largely academic, said Kapur. In fact, the effort to shift more responsibility for application security further left toward developers has only increased developers’ cognitive load and reduced the amount of time available to write code, he noted. However, with the rise of generative artificial intelligence (AI), an inflection point that will dramatically increase the velocity at which applications are built and deployed has arrived, said Kapur. The challenge will be achieving that goal without increasing the cognitive load on developers. That cognitive overload results in 70% of developers’ time not being productive, within organizations that often hire thousands of developers, he noted. Despite all the DevOps issues that need to be addressed, AI advances promise improvement. The overall DevOps market is still relatively young, given the current level of adoption, said Kapur. “We continue to believe the market is early,” he said. Today, CloudBees took the wraps off the first major update to the open source Jenkins continuous integration/continuous delivery (CI/CD) platform to be made in the past several years. At the same time, the company also unveiled a DevSecOps platform based on Kubernetes that is optimized for building and deploying cloud-native applications based on portable Tekton pipelines. That latter platform provides the foundation through which CloudBees will, in the months ahead, apply generative AI to software engineering to, for example, create unit tests on the fly and automate rollbacks.
In addition, DevSecOps capabilities will be extended all the way out to integrated development environments (IDEs) to reduce the cognitive load on developers. The overall goal is to reduce the number of manual processes that create bottlenecks and make it challenging to manage DevOps at scale. Criticism of the level of developer productivity enabled by DevOps compared to other development approaches needs to be tempered, said Tapabrata Pal, vice president of architecture for Fidelity Investments, because it still represents a significant advance. There is still clearly too much toil, but the issues that impede the pace at which developers can effectively write code tend to be more cultural than technical, he added. Organizations are not inclined to automatically trust the code created by developers, so, consequently, there is still a lot of friction in the DevOps process, noted Pal. In theory, advances in AI should reduce that friction, but it’s still early days in terms of the large language models (LLMs) that drive generative AI platforms and their ability to create reliable code, he added. That should improve as LLMs are trained specifically on high-quality code, but in the meantime, the pace at which substandard code might be generated could overwhelm DevOps processes until AI is applied there as well, said Pal. Thomas Haver, master software engineer for M&T Bank, added that while AI-assisted technologies will have a major impact, it’s not reasonable to expect large organizations to absorb them overnight. Patience will be required to ensure advances are made in ways that […]

Read More

JFrog swampUP: Addressing the Advent of AI

At JFrog swampUP 2023, the buzz was all about AI, especially with JFrog’s announcement of Machine Learning (ML) Model Management capabilities. These conversations around AI and ML reflected the technologies’ growing influence and importance in the DevOps world. How much of the generative AI conversation is just hype, though? And what does that mean for AI and ML as a whole? Alan Shimel, CEO of Techstrong Group, and I sat down with Stephen Chin, VP of DevRel at JFrog, to find out. As far as Chin is concerned, even as more companies create and leverage AI models, these models must be managed like any other software component. Chin said JFrog Artifactory acts as a staging ground to operationalize models using DevSecOps best practices. Algorithms and models will continue to grow in size and complexity, and they will require robust processes around deployment and management, just like any other software artifact. The key, Chin said, is to think of ML as just another development language and leverage tools that standardize and streamline working with it. DevOps workflows for AI/ML are still relatively immature compared to those for traditional enterprise applications, but Chin said JFrog’s new model management capabilities aim to provide the missing automation and governance, applying DevSecOps best practices for governance, security, and licensing compliance. Additionally, Chin noted that AI/ML have become essential for development teams to keep up with the explosive demand for code. At this point, AI has become table stakes. In the AI arms race, the winners are those who understand AI has become a vital development tool to enhance productivity. In terms of job security, the losers are the ones who can’t keep up with the volume of code. According to Chin, you are out of the running if you don’t embrace AI. Looking ahead, AI will not make developers obsolete, though – quite the opposite.
Given the quasi-unlimited appetite for new code, Chin emphasized that developers who embrace AI will have more work than ever. One way to think of it is that AI provides a new form of “outsourcing” to boost human productivity, much like previous waves of innovation in computer science. When it comes to security, there are still challenges that need to be addressed; code generated by today’s AI solutions still has significant drawbacks like potential data bias, lack of explainability and simple errors. In the long term, though, Chin believes AI itself will provide the solution to secure an exponentially growing codebase, given its superior scale. Just as AI will make individual developers more productive, it can also supercharge security teams – but it can also empower bad actors. The key will be continuing to democratize the benefits to even the playing field. AI promises to be a transformative technology on the scale of the Bronze Age or Quantum computing, Chin said, but the path forward will require humans and machines working together to ensure it’s used for good. It’s clear that the pace of innovation in AI and ML is rapidly accelerating. As these technologies become further democratized and integrated into developer workflows, they promise to transform how software is built and secured, Chin said. Companies must take advantage of this technology innovation by providing the pipelines and governance for this software revolution, he added. Chin believes the future will […]

Read More

Learn To Use Generalized Lambda Captures In C++

The lambda expression construct was introduced in C++11 and further developed in the C++14, C++17, and C++20 standards. The C++14 standard introduced the generalized lambda capture (also known as init capture), which lets you specify generalized captures and, among other things, makes move capture possible in lambdas. C++14 also improved lambda expressions with generic lambdas, whose parameters can be declared with auto. In this post, we explain how to use generalized lambda captures in modern C++.

What is a lambda in C++?

A lambda expression defines an anonymous function or a closure at the point where it is used. You can think of a lambda expression as an unnamed function (that's why it's called "anonymous"). Lambda expressions help make code cleaner and more concise, and they let you see behavior inline, where it's used, instead of referring to an external function. A lambda expression is an expression that returns a function object, and it is usually assigned to a variable declared with auto. The syntax of a lambda expression uses the punctuation series = [ ] ( ) { … }. If you are new to lambdas or want to know more about them, please check the two posts that we released before.

How to use a generic lambda in C++

Before C++14, lambda function parameters had to be declared with concrete types. C++14 introduced generic lambdas, which allow lambda parameters to be declared with the auto type specifier. The basic syntax of a lambda expression in C++ is:

Datatype Lambda Expression = [Capture Clause] (Parameter List) -> Return Type { Body }

We can define a generic lambda with the auto keyword as below.
auto add_things = []( auto a, auto b ) {
    return a + b;
};

We can use this lambda with int arguments,

int x = add_things( 10, 20 );

or we can use it with float arguments,

float f = add_things( 10.f, 20.f );

or we can use it with bool, char, double, long double, and other types. This is why it is called "generic": it is a general form that we can use with any type. Very useful and powerful.

What are generalized lambda captures in C++?

The C++14 standard introduced the generalized lambda capture (also known as init capture), which allows lambdas to capture expressions, rather than just variables. A generalized capture is a new and more general capture mechanism: it lets us name a data member in the closure class generated from the lambda and give an expression that initializes that data member. Among other things, this feature allows lambdas to store move-only types.

How to use generalized lambda captures in C++

Let's see how we can use generalized lambda captures in modern C++. std::move can be used to move objects into lambda captures, as below: struct […]

Read More

What Is Auto Return Type Deduction In C++?

C++11 allowed lambda functions to deduce their return type from the type of the expression given to the return statement. The C++14 standard extends return type deduction to all functions, templates, and lambdas, including functions whose bodies are more than a single return expression. In this post, we explain the auto keyword, what auto return type deduction is, and how we can use it in functions, templates, and lambdas, with some very simple examples.

What is the auto keyword in C++?

The auto keyword arrived with the C++11 standard. It can be used as a placeholder type specifier (an auto-typed variable), in a function declaration, or in a structured binding declaration, and it works with the later C++ standards such as C++14 and C++17. Here is a simple example of an auto-typed variable in C++:

unsigned long int L;
auto a = L; // a is automatically unsigned long int

Until C++11, the auto keyword was a storage class specifier (for automatic storage duration); the C++11 standard removed that meaning and repurposed the keyword.

What is auto return type deduction in C++?

Type inference, or deduction, is the automatic detection of the data type of an expression by the compiler. Auto return type deduction means the compiler deduces a function's return type from its return statements (for example, a function taking float parameters may still deduce an int return type if it returns an int expression). Since the C++14 standard, we can use auto return type deduction in functions, templates, and lambdas. Now let's see some simple examples that show how we can use them. To use return type deduction in a function, declare it with auto as the return type, without the trailing return type specifier that C++11 required. The feature works in ordinary functions with any parameter types.
Here is a simple example:

auto sq(int r)   // auto return type deduction in a function
{
  return r*r;
}

The feature also works when the function modifies its parameter. Here is a simple example:

auto inc(int r)  // auto return type deduction in a function
{
  return ++r;
}

The auto return type deduction feature can be used with references in functions:

auto& zero(int& r) // auto return type deduction with a reference
{
  r = 0;
  return r;
}

Auto return type deduction can be used with templates. Here is an example:

template <typename T>
auto template_sq(T t) // auto return type deduction in a template
{
  return (int)(t*t);  // deduced as int
}

The auto keyword is very useful in lambda declarations, and lambdas have return type deduction too. See a simple lambda example below:

auto lambda_sq = [](int r)  // return type deduction in a lambda
{
  return r*r;
};

Note that one of the important differences between lambdas and normal functions is that normal functions can refer to themselves by name, but lambdas cannot.

Is there a full example of auto return type deduction in C++?

Here is a full example of auto return type deduction in functions, a template, and a lambda. […]

Read More