From the blog

How Event-Driven Architectures Drive Real-Time Operations

People, events, the human brain, and indeed the whole world all operate in real-time, but businesses have struggled to keep up. With the help of event-driven architecture (EDA) and the open API economy, businesses can now do the same. After years of geopolitical events affecting how businesses operate, many businesses are starting to uncover real value by truly being able to operate in real-time. Whether in retail, manufacturing, energy and resources, or financial services, locating and responding to vital issues within a company's supply chains or product lines in real-time is key to success. Amazon's CTO, Dr. Werner Vogels, said that "the world is event driven" in his keynote speech at AWS re:Invent in December 2022, and new IDC research finds that nine out of 10 of the world's largest companies will deploy real-time intelligence driven by event-streaming technologies by 2025.

But What's the Secret Behind Such Success?

A recent IDC InfoBrief, sponsored by Solace, surveyed over 300 enterprise IT professionals in North America, Asia, and Europe, all of whom work for large companies implementing or considering EDA. The results are telling: an overwhelming 93% of respondents at companies that have deployed EDA across multiple use cases said EDA has either met or exceeded their expectations. Beyond the technical advantages of EDA, most businesses also see clear business benefits: 23% of respondents reported increased productivity, 22% reported better customer acquisition, and 18% saw revenues increase as a result of EDA efforts.

1. Get Support From the Top to Ensure Alignment Throughout

Expanding the footprint of EDA across the enterprise is a journey, and every journey starts by assembling those who are critical to its overall success.
Business sponsorship and engaging key stakeholders are vital, especially in the early days of EDA adoption: 56% of respondents in the early EDA stages cited this as a priority, at a point when ROI and business benefits may not yet be clear. The impact of well-aligned C-suite, operational, and technical teams also reflects business-level digital maturity. With 35% of respondents at an advanced stage of EDA rollout saying C-level support was critical, it comes as no surprise that respondents with higher levels of EDA maturity also have higher levels of overall digital maturity, including digital strategy and change management support.

2. Tackle Complexities Head-On With the Backing of IT

As EDA becomes more pervasive across an organization, demands on IT become more sophisticated, requiring a deepening of EDA skills in the IT organization, notably among DevOps teams, developers, and architects. More than one-third (36.1%) of respondents cited a lack of skills to execute EDA as a hurdle to adoption. Approaches to logging, governance, and oversight (cited by 30.7%) can also become increasingly challenging and must be thought through carefully. This is where EDA providers themselves need to step up and provide adequate training and a certification path for architects, DevOps engineers, and developers looking to gain the fundamental knowledge and skills to design and implement event-driven systems. This should include technical details such as understanding the various design patterns for EDA, microservices choreography versus orchestration, the saga pattern, and RESTful microservices. Education should also clearly define and demonstrate key concepts and tools for EDA success, such as the event portal, topic hierarchy best practices, and the event mesh. […]

Read More

Learn How to Use New And Delete Elisions In C++

The C++14 standard (and later) brings a lot of useful smart pointers and improvements to them. They help us avoid the mistake of incorrectly freeing the memory addressed by pointers. In modern C++, we still have the new and delete operators that are used to allocate and free objects in memory, and in this post, we explain how to use the new and delete operators. Can we still use new and delete in modern C++, or are they obsolete?

Allocating memory and freeing it safely is hard if you are programming a very big application. In modern C++, there are smart pointers, introduced with C++11, that help avoid the mistake of incorrectly freeing the memory addressed by pointers and that make it easy to define and manage pointers. The most used types of C++ smart pointer are unique_ptr, shared_ptr, and weak_ptr (auto_ptr was deprecated in C++11 and removed in C++17). Smart pointers are preferable to raw pointers in many different scenarios, but there is still a real need for the new and delete operators in C++14 and above. When we develop code that requires in-place construction, we need to use new and, possibly, delete operations. They are useful in a memory pool, in an allocator, in a tagged variant, or when writing a binary message into a buffer. Sometimes we also use new and delete for containers if we want to use raw pointers for storage. Modern C++ has a lot of modern choices for faster and safer memory operations; generally, developers choose unique_ptr/make_unique and make_shared rather than raw calls to new and delete. Even so, we still need the new and/or delete operators in some situations. How can we use them in C++?

What is the new operator in C++? The new operator in C++ denotes a request for memory allocation on the free store. If there is sufficient memory, a new expression initializes the memory and returns the address of the newly allocated and initialized storage to the pointer variable.
pointer_variable = new data_type;

Here, the data_type can be any built-in data type, i.e. basic C/C++ types or an array, or any class type, i.e. a class, structure, or union. Now, let's see how we can use the new operator. We can simply use auto and new to create a new pointer and space for its storage as below:

auto *ptr = new long int;

or, in the old style, you can still use the type in the definition instead of auto:

long int *ptr = new long int;

One of the great features of pointers is that we don't need to allocate them at the beginning; we can set them to NULL (or, better, nullptr in modern C++) and allocate them in a later step:

long int *ptr = NULL;
…
ptr = new long int;

What is the delete operator in C++? The delete operator in C++ is used to free dynamically allocated memory pointed to by a pointer variable. Here is the syntax of the delete operator:

delete pointer_variable;

We can use delete[] for pointer arrays; here is the syntax for them:

delete[] pointer_array;

Here is how we can use delete to delete a pointer, and this is how we can delete pointer arrays:

int *arr […]

Read More

3 strategies to expand your threat model and secure your supply chain

As GitHub's Chief Security Officer and SVP of Engineering, one of the most common discussions I have with other engineering and security leaders is the state of supply chain security. We all know it's been an interesting few years, and supply chain security has rocketed into the mainstream, but where should one start when it comes to securing the supply chain? There are many acronyms and security "solutions" out there. How can teams get the bigger security picture? I recently talked about this problem at the BlackHat CISO Summit and want to share a few prompts you can discuss with your teams and customers to broaden your perspective on supply chain security. These prompts can help open up your aperture for thinking about the breadth and complexity of supply chain security while realizing some quick wins you can achieve today, without any extra tooling or purchases.

Strategy #1: Understand and account for your build pipelines

The SolarWinds incident was a watershed moment that woke the world up to the threat of supply chain attacks. In that sophisticated attack, various organizations and government agencies were compromised through a trojanized build of SolarWinds' Orion platform, a widely used network management software suite. This incident showed us that the pipelines we use to produce software applications are just as important to secure as the application code itself. Build systems are production systems, period. They are extensions of your production environment and must be protected with the same level of rigor as your most sensitive operations. The problem is that many organizations don't know the full sprawl of their build systems and don't treat the ones they do know about as production systems. Ask yourself: what controls do you have in place for all your code and artifact systems? How many build systems do you have? How many tech stacks do you use?
As we saw with SolarWinds, we need to understand exactly what inputs are coming into the software artifacts we're producing and account for them in the build process.

Strategy #2: Require users to use 2FA

As an industry, we still struggle with basic security hygiene and controls, like adopting 2FA. At GitHub, security starts with the developer, and as such, we now require 2FA for all code contributors on GitHub.com. Empowering developers to prevent open source ecosystem attacks by better securing their accounts from theft or takeover is one of the most critical steps we can take to secure the supply chain. We made this decision after rolling out a 2FA requirement on the npm registry for high-impact package maintainers. By requiring 2FA on the accounts of code contributors, maintainers, and publishers, we're working to address one of the top long-standing security threats: phishing. While parts of the security industry love to focus on more exotic threats and more complex capabilities, the reality is we need to start with the basics. With 2FA, GitHub dramatically reduces the likelihood of account takeover of popular package maintainers on npm and of contributors on GitHub.com, and by extension, mitigates the risk to other developers who depend on that code. You should be using 2FA everywhere you can. We have resources that can help you easily set up 2FA for your account or require 2FA for your organization. This simple step will go a long way in preventing your accounts from being compromised […]

Read More

Why AIOps is Critical for Networks

Speaker 1: This is Techstrong TV.

Mitch Ashley: I have the great pleasure of being joined by Andrew Colby. Andrew is VP of AIOps at Vitria. Welcome, Andrew.

Andrew Colby: Good afternoon, Mitch. And thank you.

Mitch Ashley: It's a great topic. I'm excited to talk with you about it. We could go down the path of sharing war stories from our telco experience, which really could be about 10 episodes of a different show. But today in the telco environment, or just in the business environment in general (the economic conditions, competitive pressures, looking for areas where we can get more for less), there are a lot of different parameters that have shifted or changed or maybe tightened that we're currently working within. I'd love to get your perspective on that.

Andrew Colby: Certainly, and thank you. Yeah, I'd say we see cautious optimism. Obviously, I'm based in the US, in the DC metro area in Maryland. And in the US, the government entities and quasi-governmental entities have been tightening the economic structure in order to tame inflation. Fortunately, that has not dragged down our economy or had the potential recessionary effect that was feared, but people are still cautious, businesses are still cautious. That said, it's hard to hire people, and it's really hard to hire technical people. So a lot of companies are continuing to look at how to leverage technologies and automation to build efficiency so that they can do more with either the same number of people or re-task their people to higher-value purposes, and let the technology do some of the more menial and mundane tasks. And we can explore this a little bit, especially in these new, complex service delivery and network environments.
It's very difficult for me to imagine how an engineer who's gone through anywhere from two to eight years of college education is going to really be happy spending their days collecting a lot of data across network, container management, VM, and other infrastructure systems to figure out what's going on. I mean, really, that's where a lot of the automation provides a significant amount of value: to let the engineers do the smart, difficult things that we want humans to do.

Mitch Ashley: And a lot of pressures around mean time to recovery, even looking at resiliency: how do we stand up under a stressful situation, whether it be a security attack that might be going on or some unobserved condition that our systems and networks have never been under?

Andrew Colby: Oh, there's so much of that. So much is changing. It's not just a person like you or me behind a smartphone who can actually report that there's a problem; it's sensors and equipment that won't necessarily report right away, so problems need to be detected. So that's a whole other dimension that service providers and large enterprise IT organizations are under, which is to have this kind of real-time awareness of what's going on. Whether the service is real-time, like the video conference that we're on, or not, there really is a desire and expectation to have real-time awareness of the service delivery, to be able to detect what's going on, react to it, and address it before the user, whoever that is, the customer, the employee […]

Read More

What Is make_unique And How To Use With unique_ptr In C++

Allocating memory and freeing it safely is hard if you are programming a very big application. In modern C++, there are smart pointers that help avoid the mistake of incorrectly freeing the memory addressed by pointers. Smart pointers make it easier to define and manage pointers; they came with C++11. The most used types of C++ smart pointer are unique_ptr, shared_ptr, and weak_ptr (auto_ptr was deprecated in C++11 and removed in C++17). In C++14, there is make_unique (std::make_unique), which can be used to allocate memory for unique_ptr smart pointers.

What is a smart pointer, and what is unique_ptr in C++? In computer science, all data and operations during runtime are stored in the memory of our computation machines (computers, IoT devices, or other microdevices). This memory is generally RAM (random-access memory), which allows data items to be read or written in almost the same amount of time irrespective of the physical location of the data inside the memory. Allocating memory and freeing it safely is really hard if you are programming a very big application. C and C++ were notorious for making this problem even more difficult, since earlier coding styles and C++ standards had fewer ways of helping developers deal with pointers, and it was fairly easy to make the mistake of manipulating a pointer that had become invalid. In modern C++, there are smart pointers that help avoid the mistake of incorrectly freeing the memory addressed by pointers, a mistake that was often the source of bugs which could be difficult to track down. Smart pointers make it easier to manage pointers; they came with the release of the C++11 standard. std::unique_ptr is a smart pointer that owns and manages another object through a pointer; when the unique_ptr goes out of scope, C++ disposes of that object automatically. If you want safe, automatic memory management, std::unique_ptr is a good default choice.
Here are more details on how you can use unique_ptr in C++.

What is std::make_unique in C++? std::make_unique is a new feature that comes with C++14. It is used to allocate memory for unique_ptr smart pointers, thus managing the lifetime of dynamically allocated objects. The array form of std::make_unique constructs an array of the given dynamic size in memory, and the array elements are value-initialized. From C++14 to C++20, the std::make_unique function is declared as below,

template< class T, class... Args >
unique_ptr<T> make_unique( Args&&... args );

template< class T >
unique_ptr<T> make_unique( std::size_t size );

The std::make_unique function does not have an allocator-aware counterpart; a hypothetical allocator-aware version would have to return a unique_ptr with a custom deleter that holds an allocator object and invokes both destroy and deallocate in its operator().

How to use std::make_unique with std::unique_ptr in C++? Let's assume you have a simple class named myclass. We can create a unique pointer (a smart pointer) to this class as below,

std::unique_ptr<myclass> p = std::make_unique<myclass>();

Or it can be used with a struct as below:

struct st_xy {
    int x, y;
};

We can create a unique pointer as we illustrate in the example below.

std::unique_ptr<st_xy> xy = std::make_unique<st_xy>();

We can also use the auto keyword, and for the array form we can specify the number of elements like so:

auto ui = std::make_unique<int[]>(5);

What are the advantages of using std::make_unique in C++? The std::make_unique function is useful when allocating a smart pointer, because: We don't need to think about matching new/delete or new[]/delete[] pairs. When […]

Read More

How To Use std::exchange In C++

C++ is a very precise programming language that allows you to use memory operations in a wide variety of ways. Mastering efficient memory handling will improve your app's performance at run time and can result in faster applications with optimal memory usage. One of the many useful features that come with the C++14 standard is the std::exchange algorithm, defined in the <utility> header. In this post, we explain how to use std::exchange in C++.

What is std::exchange in C++? The std::exchange algorithm is defined in the <utility> header and is a standard library function that comes with C++14. The std::exchange algorithm assigns a new value to a given object and returns the old value of that object. Here is the template declaration (since C++14, until C++20):

template< class T, class U = T >
T exchange( T& object, U&& new_value );

How to use std::exchange in C++? We can use std::exchange to exchange variable values as below, and we can obtain the old value from the return value of std::exchange.

int x = 500;
int y = std::exchange(x, 333);

After these lines, x is now 333 and y is the old value of x, 500. We can use std::exchange to set the values of object lists (e.g. vectors) as shown below:

std::vector<int> vec;
std::exchange( vec, { 10, 20, 30, 40, 50 } );

In addition, we can copy the old values to another object list as we show in the following example:

std::vector<int> vec, vec0;
vec0 = std::exchange( vec, { 10, 20, 30, 40, 50 } );

We can use std::exchange with function pointers too. For example, assume we have a myf1() function and a function pointer myf; this is how we can make myf point to myf1:

void (*myf)();
std::exchange(myf, myf1); // myf is now myf1
myf();

Is there a full example of how to use std::exchange in C++? Here is a full example of how to use std::exchange in C++.

#include <iostream>
#include <utility>

void myf1() { std::cout […]

Read More

Open Sourcing IFC SDK for C++ Modules

GDR, October 3rd, 2023. Back in VS2019 version 16.10, we announced a complete implementation of C++ Modules (and, generally, of all C++20 features) across the MSVC compiler toolset, static analysis, IntelliSense, and debugger. Implementing Modules requires a principled intermediate representation of C++ source programs. Today, we are thrilled to announce the availability of the IFC SDK, a Microsoft implementation of the IFC Specification. This is an open-source project under the Apache 2.0 with LLVM exception license. The IFC Specification formalizes C++ programs as data amenable to semantics-based manipulation. We are open sourcing the IFC SDK in the hope of accelerating widespread use and implementation of C++ Modules across the C++ community and the C++ tools ecosystem.

What Is an IFC and What Is It Good For?

A popular compilation strategy for a C++ Module interface (or header unit) source file is to translate the source file, exactly once, into an internal representation suitable for reuse in other source files. That intermediate form is generally referred to as a Built Module Interface (BMI). An IFC file is an implementation of the BMI concept: a persistent representation of all information pertaining to the semantics of a C++ program. In addition to direct use by compilers to implement C++ Modules, an IFC can also be inspected by non-compiler tools for semantics-based exploration of C++ source files. Such tools are useful for exploring and understanding how C++ source-level constructs can be represented in a form suitable for C++ Modules implementation.

IFC SDK Scope

The IFC SDK is currently an experimental project. It focuses on providing datatypes and source code supporting the reading and writing of IFC files. This SDK is the same tested implementation used in the MSVC compiler front end to implement C++ Modules. Consequently, while the GitHub OSS repo is "experimental", the code is far from it.
Those datatypes can be memory-mapped directly for scalable and efficient processing. The project also features utilities for formatting and viewing IFC files. We welcome, and are actively looking for, contributions that fix gaps between the SDK and the Specification, and changes required to support C++ standards from C++20 upwards.

Call to Action

We encourage you to check out the IFC SDK repo, build it, experiment with it, and contribute. We would like to hear from you. If you're a C++ compiler writer interested in supporting the IFC file format, please reach out at the IFC Specification repo and share feedback, suggestions, or bug fixes. You can also reach us on Twitter (@VisualC), or via email at visualcpp@microsoft.com.

GDR, Software Architect, DevDiv

Read More

Getting RCE in Chrome with incorrect side effect in the JIT compiler

In this post, I'll explain how to exploit CVE-2023-3420, a type confusion vulnerability in V8 (the JavaScript engine of Chrome) that I reported in June 2023 as bug 1452137. The bug was fixed in version 114.0.5735.198/199. It allows remote code execution (RCE) in the renderer sandbox of Chrome by a single visit to a malicious site. Vulnerabilities like this are often the starting point for a "one-click" exploit, which compromises the victim's device when they visit a malicious website. A renderer RCE in Chrome allows an attacker to compromise and execute arbitrary code in the Chrome renderer process. The renderer process has limited privileges, though, so the attacker then needs to chain such a vulnerability with a second "sandbox escape" vulnerability (either another vulnerability in the Chrome browser process, or a vulnerability in the operating system) to compromise either Chrome itself or the device. For example, a chain consisting of a renderer RCE (CVE-2022-3723), a Chrome sandbox escape (CVE-2022-4135), and a kernel bug (CVE-2022-38181) was discovered to be exploited in the wild, as described in "Spyware vendors use 0-days and n-days against popular platforms" by Clement Lecigne of the Google Threat Analysis Group. While many of the most powerful and sophisticated "one-click" attacks are highly targeted, and average users may be more at risk from less sophisticated attacks such as phishing, users should still keep Chrome up-to-date and enable automatic updates, as vulnerabilities in V8 can often be exploited relatively quickly by analyzing patches once they are released. The current vulnerability exists in the JIT compiler in Chrome, which optimizes JavaScript functions based on previous knowledge of the input types (for example, number types, array types, etc.). This is called speculative optimization, and care must be taken to make sure that these assumptions about the inputs are still valid when the optimized code is used.
The complexity of the JIT engine has led to many security issues in the past, and it has been a popular target for attackers. The Phrack article "Exploiting Logic Bugs in JavaScript JIT Engines" by Samuel Groß is a very good introduction to the topic.

The JIT compiler in Chrome

The JIT compiler in Chrome's V8 JavaScript engine is called TurboFan. JavaScript functions in Chrome are optimized according to how often they are used. When a JavaScript function is first run, bytecode is generated by the interpreter. As the function is called repeatedly with different inputs, feedback about these inputs, such as their types (for example, are they integers, or objects, etc.), is collected. After the function has run enough times, TurboFan uses this feedback to compile optimized code for the function, making assumptions based on the feedback to optimize the bytecode. After this, the compiled optimized code is used to execute the function. If these assumptions become incorrect after the function is optimized (for example, new input is used with a type that is different from the feedback), then the function is deoptimized, and the slower bytecode is used again. Readers can consult, for example, "An Introduction to Speculative Optimization in V8" by Benedikt Meurer for more details of how the compilation process works. TurboFan itself is a well-studied subject and there is a vast amount of literature documenting its inner workings, so I'll only go through the background that is needed for this […]

Read More

Cybersecurity spotlight on bug bounty researcher @inspector-ambitious

As we kick off Cybersecurity Awareness Month, the GitHub bug bounty team is excited to spotlight one of the top performing security researchers who participates in the GitHub Security Bug Bounty Program, @inspector-ambitious! As home to over 100 million developers and 372 million repositories, GitHub maintains a strong dedication to ensuring the security and reliability of the code that powers daily development activities. GitHub’s Bug Bounty Program continues to play a pivotal role in advancing the security of the software ecosystem, empowering developers to create and build confidently on our platform and with our products. We firmly believe that the foundation of a successful bug bounty program is built on collaboration with skilled security researchers. Since its inception nine years ago, our bug bounty program has been a fundamental component of GitHub’s security strategy. This dedication is manifested through live hacking events, the revamp of our VIP bounty program, limited disclosures on HackerOne, expanding bounty targets, over $3.8 million in total rewards via HackerOne since 2016, and much more! As we continue to explore opportunities to make our program more exciting for the researchers to hack on, we also heard the feedback from our community and launched the GitHub Bug Bounty Merch Shop earlier this year, so now every submission can potentially also receive a swag bonus along with the bounty! To celebrate Cybersecurity Awareness Month this October, we’re interviewing one of the top contributing researchers to our bug bounty program and learning more about their methodology, techniques and experiences hacking on GitHub. @inspector-ambitious specializes in application-level bugs and has found some unique and intricate issues throughout their research. Despite the intricacy of their submissions, they skillfully outline easily understandable reproduction steps, effectively streamlining the investigation process and consequently reducing the triage time. 
Can you share some insights into your journey as a bug bounty researcher? What motivated you to start and what has kept you coming back to it? I’ve been passionate about cybersecurity since the age of 10. During the 1990s, I didn’t see it as a viable career option, so I decided to shift to programming around age 16. I dedicated myself to coding until just a few months ago, when we underwent a two-day offensive security training at work. The trainer suggested that I explore bug bounty programs. A couple of weeks later, I joined GitHub’s Bug Bounty Program and was immediately hooked. There is nothing as cute as an Octocat. What do you enjoy doing when you aren’t hacking? Trying to be a good husband and dad is my top priority. When I have time left (it’s not that often), I try to improve my knowledge of mindfulness and Stoic philosophy. How do you keep up with and learn about vulnerability trends? I listen to the Critical Thinking – Bug Bounty Podcast by Justin Gardner (Rhynorater) and Joel Margolis (teknogeek); it’s an amazing podcast. I also check Twitter/X from time to time. What are your favorite classes of bugs to research and why? I would say I have been focusing mostly on application-level logic errors so far since my skill set is still fairly limited as I’m newer to bug hunting. What tools or techniques do you find most effective for discovering security vulnerabilities? I use Kali Linux and VSCode for code review. I don’t […]

Read More

Build Reliable and Secure C++ programs — Microsoft Learn

Herb Sutter, October 2nd, 2023. "The world is built on C and C++" is no overstatement: C and C++ are foundational languages for our global society and are always among the world's top 10 most heavily used languages, now and for the foreseeable future. Visual Studio has always supported many programming languages, and we encourage new languages and experiments; diversity and innovation are healthy and help progress the state of the art in software engineering. In Visual Studio we also remain heavily invested long-term in providing the best C and C++ tools and in continuing to actively participate in ISO C++ standardization, because we know that our internal and external customers are relying on these languages for their success, now and for many years to come. As cyberattacks and cybercrimes increase, we have to defend our software that is under attack. Malicious actors target many attack vectors, one of which is memory safety vulnerabilities. C and C++ do not guarantee memory safety by default, but you can build reliable and secure software in C++ using additional libraries and tools. And regardless of the programming languages your project uses, you need to know how to defend against the many other attack vectors besides memory safety that bad actors are using daily to attack software written in all languages. To that end, we've just published a new document on Microsoft Learn to help our customers know and use the tools we provide in Visual Studio, Azure DevOps, and GitHub to ship reliable and secure software. Most of the advice applies to all languages, but this document has a specific focus on C and C++.
It was coauthored by many subject-matter experts in programming languages and software security from across Microsoft: Build reliable and secure C++ programs | Microsoft Learn This is a section-by-section companion to the United States government publication NISTIR 8397: Guidelines on Minimum Standards for Developer Verification of Software. NISTIR 8397 contains excellent guidance on how to build reliable and secure software in any programming language, arranged in 11 sections or topics. For each NISTIR 8397 section, this Learn document summarizes how to use Microsoft developer products for C++ and other languages to meet that section’s security needs, and provides guidance to get the most value in each area. Most of NISTIR 8397’s guidance applies to all software; for example, all software should protect its secrets, use the latest versions of tools, do automated testing, use CI/CD, verify its bill of materials, and so on. But for C++ memory safety specifically, see our Learn document’s information in sections 2.3, 2.5, and 2.9 for a detailed list of analyses (e.g., CodeQL, Binskim, /analyze), safe libraries (e.g., GSL, SafeInt), hardening compiler switches (e.g., /sdl, /GS, /guard, /W4, /WX, /Qspectre), and tools (e.g., Address Sanitizer, LibFuzzer) that your project can and should use regularly to harden your C++ programs. If your project uses C++ and you find items listed that you’re not using yet, don’t wait — start adding them to your project today! Safety and security are essential. We want to help all of our customers to know about and use the state-of-the-art tools we provide in Visual Studio, Azure DevOps, and GitHub for writing reliable and secure software in our device-and-cloud […]

Read More