News

Insider newsletter digest: How to use GitHub Copilot

This is abridged content from July 2023’s Insider newsletter. Like what you see? Sign up to receive complete, unabridged content in your inbox every month. Sign up now >

Welcome to our rebranded GitHub Insider newsletter with tips, technical guides, and best practices to help you boost your productivity and happiness. We heard your feedback and refreshed the newsletter experience. Now, each month, Insider will deliver deep dives into one of GitHub’s products and provide tips and tricks to take your development to the next level. This month, we’re delving into GitHub Copilot. According to our latest survey, 92% of developers are already using AI coding tools both in and outside of work, and AI could remove major disruptions, delays, and cognitive load that developers previously had to endure. So, we wanted to break down our guide to using GitHub Copilot and share some prompts, tips, and use cases. Here’s what you’ll find in this post:

• Three best practices for prompting GitHub Copilot
• Some additional tips for communicating with the AI pair programmer

Once you’ve installed the GitHub Copilot extension in your preferred IDE, it’s best to experiment with how to communicate with the AI pair programmer to get your desired results. Let’s get started.

Setting the stage with a high-level goal is useful when you’re starting with a blank file or empty codebase. It provides GitHub Copilot with the context of what you want to build or accomplish, and it primes the tool with a big-picture description of what you want it to generate before you jump in with the details.
/* Create a basic markdown editor in Next.js with the following features:
– Use React hooks
– Create state for markdown with default text “type markdown here”
– A text area where users can write markdown
– Show a live preview of the markdown text as I type
– Support for basic markdown syntax like headers, bold, italics
– Use the react-markdown npm package
– The markdown text and resulting HTML should be saved in the component’s state and updated in real time */

A detailed comment like the one above can prompt GitHub Copilot to generate a very simple, unstyled, but functional markdown editor in less than 30 seconds. Keep in mind, though, that outputs from a generative AI tool are non-deterministic, so the responses may vary.

A simple, specific ask goes a long way. Though this might result in shorter outputs from GitHub Copilot, it helps to break down the steps and logic that the AI pair programmer needs to follow to achieve your goal. Then, let GitHub Copilot generate the code after each step instead of asking it to generate a bunch of code all at once. Think of it as writing a recipe: you break the cooking process down into simple, succinct steps rather than writing a paragraph that describes the dish you want to make.

Learning from examples is not only useful for humans, but also for an AI pair programmer, so throw a bone or two to GitHub Copilot. Let’s say you want to extract the names from an array of data and store them in a new array. A prompt that doesn’t provide an example might look something like this: // Map through an array of arrays of objects to […]

Read More

PagerDuty Previews Generative AI Copilot for ITSM

Under an early access program, PagerDuty, Inc. is making available a tool that brings generative artificial intelligence (AI) capabilities to its IT service management (ITSM) platform. Jonathan Rende, senior vice president of products for PagerDuty, said PagerDuty Copilot for the PagerDuty Operations Cloud extends previous investments in machine learning algorithms the company has made as part of an ongoing effort to apply AI to ITSM. Designed to be invoked via Slack, PagerDuty Copilot makes use of multiple large language models (LLMs) to automate tasks ranging from summarizing IT incidents to creating code to automate workflows. PagerDuty plans to transparently swap LLMs in and out of its platform as AI advances continue to be made rapidly, noted Rende. PagerDuty Copilot provides a level of abstraction for invoking AI models, along with appropriate guardrails, that makes it simpler to manage IT operations without IT teams needing to have AI expertise, said Rende. The overall goal is to boost the productivity of IT operations teams by eliminating much of the drudgery and toil that conspire to make working in IT tedious, noted Rende. AI technologies are not likely to replace the need for IT personnel so much as they will enable IT teams to focus on tasks that add more value to the business, he added.

It’s now only a matter of time before generative AI capabilities are pervasively applied across both ITSM and DevOps workflows. Less clear is the impact those capabilities will have on the best practices currently relied on to manage those workflows as more tasks are automated. Ultimately, however, AI should make it easier for more organizations to embrace those best practices as the level of skill and expertise required to manage IT at scale is reduced. In addition, the whole concept of issuing tickets to manage tasks tracked by a central system of record may need to evolve simply because AI has automated requests for service.
There will naturally need to be some system of record for tracking requests. Still, ultimately that process will be managed via copilots rather than by a ticket created by an end user that is then tracked via an ITSM platform. Savvy IT teams, in the meantime, are already moving to determine which tasks and workflows will be automated in anticipation of AI becoming more widely embedded in ITSM and DevOps platforms. Roles and responsibilities within IT teams will inevitably evolve as AI increasingly automates mainly mundane tasks, such as creating reports that many IT professionals would rather not spend time writing. The biggest IT management platform challenge in the future might not necessarily be adjusting to AI as much as it will be orchestrating requests that are likely to be generated by multiple types of copilots that have been embedded into almost every application. One way or another, AI is about to transform how IT operations are managed. As disruptive as those advances will be, AI, more importantly, will also enable organizations to manage IT at levels of scale that were previously unimaginable.

Read More

How to Use Basic String and Unicode String in Modern C++

In programming, one of the most used variable types is the text string, and strings are often critical when storing and retrieving valuable data. It is important to store your data safely in its language and localization, and most programming languages have issues when storing text. In C++, there is the very old, well-known string type (an array of chars) and the modern std::basic_string types such as std::string and std::wstring. In addition to these modern string types, C++ Builder has another string feature, UnicodeString. In this post, we explain what a basic string and UnicodeString are in modern C++ and how to use them.

What are the string types in C++?

In general, there are three types of alphanumeric string declarations in C++:

• Array of chars (see Fundamental Types). chars are in ASCII form, which means each character has a 1-byte (8-bit) size (so there are 0 to 255 character values).
• Basic string (std::basic_string). The basic_string (std::basic_string and std::pmr::basic_string) is a class template that stores and manipulates sequences of character-like objects (char, wchar_t, …). A basic string can be used to define the string, wstring, u8string, u16string, and u32string data types.
• String or UnicodeString. The UnicodeString type is the default String type of RAD Studio, C++ Builder, and Delphi. It is in UTF-16 format, which means characters in UTF-16 may be 2 or 4 bytes. In C++ Builder and Delphi, the Char and PChar types are now WideChar and PWideChar, respectively. There is a good article about Unicode in RAD Studio.

In addition, there are some older string types that we used in C++ Builder and Delphi before:

• AnsiString. Previously, String was an alias for AnsiString. For RAD Studio, C++ Builder, and Delphi, the format of AnsiString has changed: CodePage and ElemSize fields have been added, which makes the format of AnsiString identical to that of the new UnicodeString.
• WideString. WideStrings were previously used for Unicode character data.
Its format is essentially the same as the Windows BSTR, and WideString is still appropriate for use in COM applications.

What is basic_string?

The basic_string (std::basic_string and std::pmr::basic_string) is a class template that stores and manipulates sequences of character-like objects (char, wchar_t, …). For example, std::string and std::wstring are data types defined by std::basic_string. In other words, basic_string is used to define different data types; a basic_string is not only a string, it is a class template for a general string type. A basic string can be used to define the string, wstring, u8string, u16string, and u32string data types. The basic_string class is dependent neither on the character type nor on the nature of operations on that type. The definitions of the operations are supplied via the Traits template parameter (i.e., a specialization of std::char_traits) or a compatible traits class. The basic_string stores its elements contiguously. Several string types for common character types are defined in terms of basic_string, as below:

• std::string = std::basic_string<char>
• std::wstring = std::basic_string<wchar_t>
• std::u8string = std::basic_string<char8_t> (C++20)
• std::u16string = std::basic_string<char16_t> (C++11)
• std::u32string = std::basic_string<char32_t> (C++11)

Several string types in the std::pmr namespace for common character types are provided by the basic string definitions too. Here are more details about basic string types and their literals. Note that you can use both std::basic_string (std::string, std::wstring, std::u16string, …) and UnicodeString in C++ Builder.

What is UnicodeString (String) in C++ Builder?

The Unicode standard for UnicodeString provides a unique number for every character (8, 16, or 32 bits) more […]

Read More

Coexisting With AI: The Future of Software Testing

If 2023 was the year of artificial intelligence (AI), then 2024 is going to be the year of human coexistence with the technology. Since the release of OpenAI’s ChatGPT in November 2022, there has been a steady stream of competing large language models (LLMs) and integrated applications for specific tasks, including content creation, image processing, and code production. It’s no longer a question of if AI will be adopted; we have moved on to the question of how best to bring this technology into our daily lives. These are my predictions for the software quality assurance testing industry for 2024.

Automated testing will become a necessity, not a choice. Developers will lean heavily on AI-powered copilot tools, producing more code faster. That means huge increases in the risk profile of every software release. In 2024, testers will respond by embracing AI-powered testing tools to keep up with developers using AI-powered tools and avoid becoming the bottleneck in the software development life cycle (SDLC).

The role of the tester will increase and evolve. While AI is helping software engineers and test automation engineers produce more code faster, it still requires the highly skilled eye of an experienced engineer to determine how good and usable the code or test is. In 2024, there will be high demand for skilled workers with specific domain knowledge who can parse the AI-generated output and determine whether it’s coherent and useful within the specific business application. Although this is necessary for developers and testers to start trusting what the AI generates, they should be wary of spending excessive amounts of time constructing AI prompts, as this can ultimately lead to decreased performance. For instance, a developer could easily spend most of their time validating the AI-generated output instead of testing the release that will be deployed to users.
Being able to distinguish between when to rely on AI and when to forego AI’s help will be key to streamlining the workflow. Eventually, we’re going to start seeing AI-powered testing tools for non-coders that focus on achieving repeatability, dependability and scalability so that testers can truly use AI as their primary testing tool and ultimately boost their productivity. The rise of protected, offline LLMs and the manual tester. As enterprise companies show signs they don’t trust public LLMs (e.g., ChatGPT, Bard, etc.) with their data and intellectual property (IP), they’re starting to build and deploy their own private LLMs behind secured firewalls. Fine-tuning those LLMs with domain-specific data (e.g., banking, health care, etc.) will require a great volume of testing. This promises a resurgence in the role of the manual tester as they will have an increasingly important role to play in that process since they possess deep domain knowledge that is increasingly scarce across enterprises. As we stand on the brink of 2024, it is evident that the synergy between artificial intelligence and human expertise will be the cornerstone of software quality engineering. Human testers must learn to harness the power of AI while contributing the irreplaceable nuance of human judgment. The year ahead promises to be one where human ingenuity collaborates with AI’s efficiency to ensure that the software we rely on is not only functional but also reliable and secure. There will likely be a concerted effort to refine these […]

Read More

The architecture of today’s LLM applications

We want to empower you to experiment with LLMs, build your own applications, and discover untapped problem spaces. That’s why we sat down with GitHub’s Alireza Goudarzi, a senior machine learning researcher, and Albert Ziegler, a principal machine learning engineer, to discuss the emerging architecture of today’s LLMs. In this post, we’ll cover five major steps to building your own LLM app, the emerging architecture of today’s LLM apps, and problem areas that you can start exploring today.

Five steps to building an LLM app

Building software with LLMs, or any machine learning (ML) model, is fundamentally different from building software without them. For one, rather than compiling source code into binary to run a series of commands, developers need to navigate datasets, embeddings, and parameter weights to generate consistent and accurate outputs. After all, LLM outputs are probabilistic and don’t produce the same predictable outcomes. Let’s break down, at a high level, the steps to build an LLM app today.

1. Focus on a single problem, first. The key? Find a problem that’s the right size: one that’s focused enough so you can quickly iterate and make progress, but also big enough so that the right solution will wow users. For instance, rather than trying to address all developer problems with AI, the GitHub Copilot team initially focused on one part of the software development lifecycle: coding functions in the IDE.

2. Choose the right LLM. You’re saving costs by building an LLM app with a pre-trained model, but how do you pick the right one? Here are some factors to consider:

Licensing. If you hope to eventually sell your LLM app, you’ll need to use a model that has an API licensed for commercial use. To get you started on your search, here’s a community-sourced list of open LLMs that are licensed for commercial use.

Model size.
The size of LLMs can range from 7 to 175 billion parameters, and some, like Ada, are even as small as 350 million parameters. Most LLMs (at the time of writing this post) range in size from 7 to 13 billion parameters. Conventional wisdom says that the more parameters a model has (variables that can be adjusted to improve a model’s output), the better the model is at learning new information and providing predictions. However, the improved performance of smaller models is challenging that belief. Smaller models are also usually faster and cheaper, so improvements to the quality of their predictions make them a viable contender compared to big-name models that might be out of scope for many apps. Looking for open source LLMs? Check out our developer’s guide to open source LLMs and generative AI, which includes a list of models like OpenLLaMA and Falcon-Series.

Model performance. Before you customize your LLM using techniques like fine-tuning and in-context learning (which we’ll cover below), evaluate how well, how fast, and how consistently the model generates your desired output. To measure model performance, you can use offline evaluations. What are offline evaluations? They’re tests that assess the model and ensure it meets a performance standard before advancing it to the next step of interacting with a human. These tests measure the latency, accuracy, and contextual relevance of a model’s outputs by asking it questions, to which there are […]

Read More

CircleCI Extends CI/CD Reach to AI Models

CircleCI this week revealed it is extending the reach of its continuous integration/continuous delivery (CI/CD) platform to make it simpler to incorporate artificial intelligence (AI) models into DevOps workflows. In addition to providing access to the latest generation of graphics processing units (GPUs) from NVIDIA via the Amazon Web Services (AWS) cloud, CircleCI has added inbound webhooks to access AI model curation services from providers such as Hugging Face, along with integrations with LangSmith, a debugging tool for generative AI applications, and the Amazon SageMaker service for building AI applications. CircleCI CEO Jim Rose said that while there is clearly a lot of enthusiasm for incorporating AI models into applications, the processes being used are still immature, especially in terms of automating workflows that include testing of probabilistic AI models. Most AI models are built by small teams of data scientists that create a software artifact that needs to be integrated within a DevOps workflow just like any other artifact, noted Rose. The challenge is that most data science teams have not yet defined a set of workflows for automating the delivery of those artifacts as part of a larger DevOps workflow, he added. DevOps teams will also need to adjust their version control-centric approach to managing applications to trigger pipelines that pull AI software artifacts existing outside of traditional software repositories. For example, the inbound webhooks provided by CircleCI now make it possible to automatically create a pipeline whenever an AI model residing on Hugging Face changes. It’s still early days as far as the deployment of AI models in production environments is concerned, but there is no doubt generative AI will have a major impact on how software is developed. AI models are a different class of software artifact: they are retrained rather than updated, a process that occurs intermittently.
As such, DevOps teams need to keep track of each time an AI model is being retrained to ensure applications are updated. At the same time, generative AI will also increase the pace at which other software artifacts are being created and deployed. Many of the manual tasks that today slow down the rate at which applications are built and deployed will be eliminated. That doesn’t mean there will be no need for software engineers, but it does mean the role they play in developing and deploying software is about to rapidly evolve. DevOps teams need to evaluate both how generative AI will impact the tasks they manage as well as the way the overall software development life cycle (SDLC) process needs to evolve. Each organization, as always, will need to decide for itself how best to achieve those goals depending on the use cases for AI,  but the changes that generative AI will bring about are now all but inevitable. The longer it takes to adjust, the harder it will become to overcome the cultural and technical challenges that will be encountered along the way.

Read More

When Is The CMath Mathematical Special Functions Header Used in Modern C++?

The math library in the C language is designed to be used in mathematical operations. From the first C language to the latest C++ Builder 12, there have been many changes and improvements in both hardware and software. We were able to use this math.h library in C++ applications. After the C++17 standard, this library was modernized as the cmath library: functions are declared in the <cmath> header, with <math.h> retained for compatibility reasons in modern C++. In this post, we explain what the math.h and cmath libraries are.

What is the math.h math library in C++?

In the early days of computers there was an FPU (Floating Point Unit) in addition to the CPU (Central Processing Unit). While CPUs were slower in floating-point operations (especially in trigonometric functions), FPUs were faster in those days. The math library in the C language is designed to be used in mathematical operations with these FPUs and CPUs. From the first C language to the latest CLANG C++ compiler, there have been many changes and improvements in both hardware and software. We were able to use this math.h library in C++ applications. The math library functions are declared in the math.h header file, which is part of the standard library of the C programming language. Most of the functions are trigonometric and basic math functions, and they mostly use floating-point numbers such as float, double, or long double variables. Trigonometric functions use radians in angular parameters, and all functions take doubles for floating-point arguments unless otherwise specified. In C++ (C++98, C++11, C++14), these C functions were used via the same <math.h> header. For example, if you want to use the sin(), cos(), tan(), exp(), log(), and pow() functions, you have to add the <math.h> header to C, C++11, and C++14 applications. Note that some mathematical library functions that operate on integers are instead specified in the <stdlib.h> header, such as abs, labs, div, and ldiv. Here is a simple C example using the sin function.
#include <stdio.h>
#include <math.h>

int main() {
    double x = sin(1.0);
    printf("%f\n", x);
    return 0;
}

What is the cmath mathematical special functions library in C++?

In C++11 and C++14, we were able to use the math.h library in C++ applications. After the C++17 standard, this library was modernized as the cmath library: functions are declared in the <cmath> header in modern C++, and <math.h> is an optional old header kept to support older code. The cmath mathematical special functions header <cmath> defines mathematical functions and symbols in the std namespace, and the previous math functions are also included; it may also define them in the global namespace. You have to add the std namespace with using namespace std; or you should use the std:: prefix for each math function. Some of the mathematical special functions were added to the C++17 cmath library header from the contents of the former international standard ISO/IEC 29124:2010, and the math.h functions were added too. These are only available in namespace std. If you do not use the namespace, you should add the std:: prefix to use these modern math functions. Here is a simple C++ example using the sin function.

#include <cmath>
#include <iostream>

int main() {
    double x = std::sin(1.0);
    std::cout << x << '\n';
    return 0;
}

What is the difference between math.h and cmath in modern C++?

The cmath mathematical special functions header <cmath> defines mathematical functions and symbols in the std namespace, and the previous math functions are also […]

Read More

DevOps Best Practices for Faster and More Reliable Software Delivery

Imagine a scenario where the teams creating and delivering software aren’t just passing information along but sitting together, brainstorming and solving problems in real time. That’s the core of DevOps. It’s not a one-click software solution; it’s teams working together to provide a reliable solution for seamless and faster software delivery. Take the example of an app or software update: users expect it to work seamlessly. The secret behind that seamless experience is often a well-structured DevOps strategy. DevOps isn’t just about speeding things up; it’s about balancing the need for speed with the need for stability. According to research, 99% of organizations witnessed a positive impact after implementing DevOps in their business delivery processes. They’re deploying updates far more frequently, their failure recovery is lightning-fast, and they see fewer issues when they launch new features.

Using DevOps for Efficient Software Delivery

DevOps is crucial for organizations looking to resolve the complexities of modern software delivery. It bridges the gap between ‘code complete’ and ‘code in production,’ ensuring that software isn’t just created but delivered swiftly and effectively to the end user. This approach not only accelerates time-to-market but also enhances product quality and customer satisfaction. By adopting continuous integration and continuous delivery (CI/CD), automation, and constant feedback, DevOps empowers teams to respond to market changes with agility and confidence. It’s about balancing the processes, people, and technology that work together to unlock higher efficiency, innovation, and success.

Implementing Continuous Integration and Continuous Deployment (CI/CD)

Continuous integration and continuous deployment (CI/CD) are core practices in the DevOps approach, designed to streamline and automate the steps in getting software from development to deployment.
CI/CD establishes a framework for development teams that supports frequent code changes while maintaining system stability and high-quality output. This method depends on automation to detect problems early, reduce manual errors and speed up the delivery process, ensuring that new features, updates and fixes are available to users quickly and reliably. Teams should follow several best practices:

• Commit to Version Control Rigorously: Every piece of code, from application code to configuration scripts, should be version-controlled. This ensures that any changes can be tracked, rolled back or branched out at any point, providing a solid foundation for collaborative development and deployment.
• Automate the Build for Consistency: Automation is the key to CI/CD. By automating the build process, one can ensure that the software can be reliably built at any time. This automation includes compiling code, running database migrations, and executing any necessary scripts to move from source code to a working program.
• Incorporate Comprehensive Automated Testing: A robust suite of automated tests, including unit, integration, acceptance, and regression tests, should be run against every build to catch bugs early. Automated tests act as a safety net that helps maintain code quality throughout the rapid pace of DevOps cycles.
• Replicate Production in Staging: A staging environment replicates your production environment and is crucial for pre-deployment testing. It should mimic production as closely as possible to surface any environment-specific issues that could otherwise cause unexpected behavior after release.
• Ensure Quick and Safe Rollbacks: The ability to roll back to a previous state quickly is essential. This safety measure minimizes downtime by swiftly reversing failed deployments or critical issues without going through a prolonged troubleshooting process during peak hours.
• Monitor Relentlessly […]

Read More

What Are The CMath Mathematical Special Functions in Modern C++?

In C++11 and C++14, we were able to use the math.h library in C++ applications. After the C++17 standard, this library modernized math operations as the cmath library. Functions are declared in the <cmath> header; for compatibility reasons, <math.h> is an optional alternative to support older code. In this post, we list most of these mathematical functions declared in the <cmath> header of modern C++.

What is the cmath mathematical functions library in C++?

The cmath mathematical special functions header <cmath> defines mathematical functions and symbols in the std namespace. It includes the previous math functions, and it may also define them in the global namespace. You have to add the std namespace with using namespace std; or you should use the std:: prefix for each math function. Some of the mathematical special functions were added to the C++17 cmath library header from the contents of the former international standard ISO/IEC 29124:2010, and the math.h functions were added too. These are only available in namespace std. If you do not use the namespace, you should add the std:: prefix to use these modern math functions. In general, they are mostly double functions; they can be slower, but they have more accurate results. For example, sin() uses double variables, sinf() uses float variables (same as C++11, faster, less accurate), while sinl() is used with long double variables (same as C++11, slower, but more accurate). Here is a simple C++ example using the sin function.

#include <cmath>
#include <iostream>

int main() {
    double d = std::sin(1.0);        // double
    float f = std::sinf(1.0f);       // float (C++11)
    long double l = std::sinl(1.0L); // long double (C++11)
    std::cout << d << ' ' << f << ' ' << l << '\n';
    return 0;
}

What are the CMath mathematical special functions in modern C++17?

There are many new modern mathematical special functions in the C++17 <cmath> header.
These include functions for associated Laguerre polynomials, elliptic integrals of the first kind, cylindrical Bessel functions (of the first kind), cylindrical Neumann functions, exponential integral functions, Hermite polynomials, Legendre polynomials, Laguerre polynomials, the Riemann zeta function, and some spherical functions. Here is a list of the cmath special functions, each with its double, float, and long double versions:

• Associated Laguerre polynomials: assoc_laguerre, assoc_laguerref, assoc_laguerrel
• Associated Legendre polynomials: assoc_legendre, assoc_legendref, assoc_legendrel
• Beta function: beta, betaf, betal
• Elliptic integral of the first kind (complete): comp_ellint_1, comp_ellint_1f, comp_ellint_1l
• Elliptic integral of the second kind (complete): comp_ellint_2, comp_ellint_2f, comp_ellint_2l
• Elliptic integral of the third kind (complete): comp_ellint_3, comp_ellint_3f, comp_ellint_3l
• Regular modified cylindrical Bessel functions: cyl_bessel_i, cyl_bessel_if, cyl_bessel_il
• Cylindrical Bessel functions (of the first kind): cyl_bessel_j, cyl_bessel_jf, cyl_bessel_jl
• Irregular modified cylindrical Bessel functions: cyl_bessel_k, cyl_bessel_kf, cyl_bessel_kl
• Cylindrical Neumann functions: cyl_neumann, cyl_neumannf, cyl_neumannl
• Elliptic integral of the first kind (incomplete): ellint_1, ellint_1f, ellint_1l
• Elliptic integral of the second kind (incomplete): ellint_2, ellint_2f, ellint_2l
• Elliptic integral of the third kind (incomplete): ellint_3, ellint_3f, ellint_3l
• Exponential integral: expint, expintf, expintl
• Hermite polynomials: hermite, hermitef, hermitel
• Legendre polynomials: legendre, legendref, legendrel
• Laguerre polynomials: laguerre, laguerref, laguerrel
• Riemann zeta function: riemann_zeta, riemann_zetaf, riemann_zetal
• Spherical associated Legendre functions: sph_legendre, sph_legendref, sph_legendrel
• Spherical Bessel functions (of the first kind): sph_bessel, sph_besself, sph_bessell
• Spherical Neumann functions: sph_neumann, sph_neumannf, sph_neumannl

Note that, by the C++20 standard, only the default names of the math functions are used; for example, laguerre() is used for the float, double, and long double versions. For more details about the changes in the C++17 standard, please see https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0226r1.pdf. C++ Builder is the easiest and fastest C and C++ compiler […]

Read More

What’s New in the vcpkg 2023.11.20 Release

Augustin Popa, November 28th, 2023

The 2023.11.20 release of the vcpkg package manager is available. This blog post summarizes changes from October 19th, 2023 to November 19th, 2023 for the Microsoft/vcpkg, Microsoft/vcpkg-tool, and Microsoft/vcpkg-docs GitHub repos. Some stats for this period:

• 34 new ports were added to the open-source registry. A port is a versioned recipe for building a package from source, such as a C or C++ library.
• 268 updates were made to existing ports. As always, we validate each change to a port by building all other ports that depend on or are depended on by the library being updated, for our nine main triplets.
• There are now 2,352 total libraries available in the vcpkg public registry.
• 22 contributors submitted PRs, issues, or participated in discussions in the main repo.
• The main vcpkg repo has over 5,800 forks and 20,200 stars on GitHub.

Key changes

This vcpkg update includes some bugfixes and documentation improvements, as well as a new community triplet. Notable changes for this release are summarized below.

Added mips64-linux community triplet

A community contributor has added a mips64-linux community triplet. MIPS stands for Microprocessor without Interlocked Pipelined Stages and is a “family of reduced instruction set computer (RISC) instruction set architectures (ISA)” (source: MIPS architecture on Wikipedia). As someone who took courses in university where we wrote code targeting MIPS, I thought this was pretty neat! Also, as implied by the triplet name, this support is specifically for 64-bit MIPS. PR: Microsoft/vcpkg#34392, Microsoft/vcpkg-tool#1226 (thanks @capric8416!)

Documentation changes

This month, our documentation changes at learn.microsoft.com/vcpkg include a glossary of terms and two new tutorials.
The first tutorial covers exporting compiled dependencies, which is useful when you want to share libraries across multiple projects in a portable manner, without requiring the end user to install vcpkg to receive them. The second tutorial describes how to update an open-source vcpkg dependency to a new version and submit the changes to the vcpkg repo.

Documentation changelog:

• Added Glossary of terms.
• Added Tutorial: Export compiled dependencies. Describes how to export compiled dependencies using the vcpkg export command.
• Added Tutorial: Update an existing vcpkg dependency.
• Updated the MSBuild integration article to describe properties for app-local DLL deployment.
• Fixed incorrect spelling of an “env” macro in a CMakePresets.json snippet (PR: Microsoft/vcpkg-docs#215, thanks @oraqlle!)
• Fixed a couple of links in the CMake integration page (PR: Microsoft/vcpkg-docs#212, thanks @randallpittman!)
• Other minor edits / typo fixes.

Bug fixes / performance improvements

• Fixed vcpkg activate failing when run with the --no-color switch in Visual Studio (PR: Microsoft/vcpkg-tool#1247).
• Fixed a crash when running “vcpkg add port sqlite3[core]” (PR: Microsoft/vcpkg-tool#1163, thanks @autoantwort!)
• Other minor bugfixes.

Total ports available for tested triplets

We are re-building our ports for arm64-windows and x64-windows due to an error that occurred in the last CI run. The numbers for these will be updated shortly.

Triplet / ports available:
• x64-windows: Building…
• x86-windows: 2,122
• x64-windows-static: 2,084
• x64-windows-static-md: 2,108
• arm64-windows: Building…
• x64-uwp: 1,217
• arm64-uwp: 1,184
• x64-linux: 2,158
• x64-osx: 2,050
• arm-neon-android: 1,496
• x64-android: 1,555
• arm64-android: 1,513

While vcpkg supports a much larger variety of target platforms and architectures, the list above is validated exhaustively to ensure updated ports don’t break other ports in the catalog.

Thank you to our contributors

vcpkg couldn’t be where it is today […]

Read More