News

What Is Heterogeneous Lookup In Associative Containers In C++?

Containers are data storage structures in modern C++, and they are very useful for iterating over and searching data with their many methods and properties. The C++ Standard Library defines four main container categories, and one of them is the associative containers, such as std::map and std::set. These class types provide the look-up method find(), which searches by a value of the key type. C++14 introduced the "Heterogeneous Lookup in Associative Containers" feature, which allows the lookup to be done via an arbitrary type, so long as the comparison operator can compare that type with the actual key type. In this post, we explain containers, associative containers, and heterogeneous lookup in associative containers.

What is a container in C++?

In C++, there are four main categories of containers:

- Sequence containers (vector, array, …)
- Associative containers (map, set, …)
- Unordered associative containers (unordered_set, unordered_map, …)
- Container adapters (stack, queue, priority_queue)

What are associative containers in C++?

Associative containers are class templates for sorted data structures that can be searched quickly. They are sorted by their keys, and searches are roughly O(log n). The associative containers are:

- std::map: a collection of key-value pairs; its keys are unique, and it is sorted by keys
- std::set: a collection of unique keys, sorted by keys
- std::multiset: a collection of keys (duplicates allowed), sorted by keys
- std::multimap: a collection of key-value pairs (duplicate keys allowed), sorted by keys

What is heterogeneous lookup in associative containers in C++?
The C++ Standard Library defines four associative container types. These class types provide the look-up method find(), which searches by a value of the key type. C++14 introduced the "Heterogeneous Lookup in Associative Containers" feature, which allows the lookup to be done by an arbitrary type, so long as the comparison operator can compare that type with the actual key type. Heterogeneous lookup lets us search std::map, std::set, and the other associative containers without first constructing a key object. For example, let's put some strings and values in a std::map (which is an associative container):

std::map<std::string, int> mymap {
  { "Hello Developers", 10 },
  { "Please Visit Us", 20 },
  { "LearnCPlusPlus.org", 30 }
};

We can use the find method of the map as shown below:

auto m = mymap.find(std::string("LearnCPlusPlus.org")); // m is an iterator

For heterogeneous lookup we can use std::less<> or other transparent comparators. Is there a full example of heterogeneous lookup in associative containers in C++? When we want to do heterogeneous lookup, all we have to do is use std::less<> (or another transparent comparator) as the comparison type and make sure the appropriate comparison operators exist. Here is an example:

std::map<std::string, int, std::less<>> mymap2 {
  { "Hello Developers", 10 },
  { "Please Visit Us", 20 },
  { "LearnCPlusPlus.org", 30 }
};

The interesting thing is that it is straightforward to enable, and we can use the find method like so:

auto m = mymap2.find("LearnCPlusPlus.org"); // m is an iterator

Note that here m […]

Read More

[Yukon Beta Blog] C++ and Visual Assist in RAD Studio 12.0

This blog post is based on a pre-release version of the RAD Studio software and it has been written with specific permission by Embarcadero. No feature is committed until the product GA release. RAD Studio 12 is just around the corner, with our release webinar this Thursday! Back in August, we gave a preview webinar of what is being worked on internally for C++, covering a preview of the updated Clang compiler and our initial integration of Visual Assist. The thing is, we held out a little. The preview was quite conservative. We didn’t share everything. In fact, we only shared about a third of what we’re shipping for Visual Assist. So what’s coming? In the August webinar, we shared that there are three feature areas in the initial implementation: code completion, refactoring, and navigation. So, here are a few teasers for what you’ll see on Thursday and what you’ll be able to use when you install C++Builder 12.0. Table of Contents Code completion Refactoring Navigation C++Builder and RAD Studio 12 are out any day now! Code completion What’s interesting about this image? Hint: It shows code completion doing something that it could never do before in C++Builder. Something pretty magical. There’s lots more to say about code completion, and Code Insight in general, but we’ll show it on Thursday! Let’s move right along… Refactoring In the preview webinar, we showed one refactoring, Rename. And a rename refactoring is really useful; it would be great to have just that in the first version! But here’s the Refactor menu in C++Builder 12.0: Rename and… three other items. That’s four items where we previewed one. So what are they? Well, the remaining three are not quite refactorings per se (they won’t rewrite or move around code) so much as a kind of code generation, doing or creating something useful for you. The first one is pretty amazing – magical, in fact. I smile every time I use it.
The third and fourth are smaller, and are two versions of the same feature (each operation is the inverse of the other). What could they be? Navigation In the preview webinar, we previewed Find References — an incredibly useful feature to find where any symbol (method, class, etc.) is used, or referred to, in your code — and Find Symbol, a feature to find any symbol anywhere in your project group or the headers it uses or… anywhere. But look at this. Four other menu items. The bottom two are small: a Delphi feature that happens to be one of the most-requested navigation features for C++Builder. (They’re two aspects of the same feature, one being the inverse of the other. And though small, we think those of you who’ve asked for it will love seeing it added. It’s not actually an existing Visual Assist feature; we added it into Visual Assist specially for C++Builder!) The top two? They’re bigger. What could they be? Well, one opens a dialog. A useful dialog. One, however, opens another menu. This menu: Look at it. Lots more functionality appears. Not only that, but there are submenus. What could they contain? This menu can be invoked another way too, by the way. And for what it’s worth, in the launch webinar I refer to this feature as the most […]

Read More

I first met Philippe Kahn and Turbo Pascal 40 years ago this month

In 1983, I was working for Softsel Computer Products (Softsel) in the product evaluation, support and training group. Softsel had a booth at the Fall 1983 COMDEX (Computer Dealer Expo) conference (November 28 to December 2) in the Las Vegas Convention Center. I sat at a pod in the booth to answer questions about Softsel and the products we distributed, and to talk with software and device manufacturers that might be looking to have their products distributed to computer stores. During the convention Philippe Kahn (PK) walked by the Softsel booth and stopped for a moment. I said hello to him (not knowing anything about him or his company). During the conversation, we talked about programming and developer tools. PK mentioned that he had a Pascal compiler that he was selling but that he was not looking to have it distributed (he was selling it direct to programmers using direct mail and an ad in Byte Magazine). Before he left the booth, he gave me two floppy disks containing copies of Turbo Pascal 1.0 (8″ CP/M-80 and 5.25″ PC-DOS). On one of my breaks, I took the floppy disks into a booth “office” that we had for meetings that also had an IBM PC. I was very excited to see what PK had, since I had learned Pascal in 1972 while I was a Computer Science major at Cal Poly San Luis Obispo. I put in the 5.25″ floppy disk and started the Turbo.com executable, and up popped a menu with a few options. I selected the editor, typed in a short command-line “Hello World” program, and tried to run it. Amazingly, it compiled blazingly fast and the app started up. I had to tell my co-worker and friend Spencer Leyton about this Pascal compiler and how important it was for the CP/M and PC programming world. From that day on, Spencer talked with PK to try to convince him to allow Softsel to distribute Turbo Pascal to its network of computer store accounts. While it took a while to convince PK, Spencer eventually got PK to agree to a distribution contract. Spencer went on to get a job at Borland.
I continued working at Softsel for a while, and eventually Spencer convinced PK to interview me for a job. My job interview was on PK’s racing sailboat in Monterey Bay. We had dinner afterwards at the Crow’s Nest restaurant at the Santa Cruz harbor. I went back to Los Angeles and was given a job offer. I accepted the offer and started in June of 1985 (a little less than 2 years after I first met PK). I enjoyed the privilege of working with Anders Hejlsberg and a talented global team of dedicated employees for more than three decades (and about 4 million air miles). It seems unreal that it’s been almost 40 years since I first met PK and first tried Turbo Pascal. It’s also been more than 50 years since I first tried the Pascal language while I was in college. Back then you could build programs for two platforms: PC-DOS and CP/M-80. Most amazingly, you can still create “textbook” Pascal applications with every release of the Turbo, Borland, Kylix and Delphi Pascal compilers. And, with Delphi, you can create modern applications that can run on desktops, web servers, clouds […]

Read More

MongoDB Allies With AWS to Generate Code Using Generative AI

MongoDB and Amazon Web Services (AWS) announced today that they have extended their existing alliance to provide examples of curated code to train the Amazon CodeWhisperer generative artificial intelligence (AI) tool. Amazon CodeWhisperer is a free tool that generates code suggestions based on natural-language comments or existing code found in integrated development environments (IDEs). Andrew Davidson, senior vice president of product for MongoDB, said developers who build applications on MongoDB databases will now receive suggestions that reflect MongoDB best practices. The overall goal is to increase the pace at which a Cambrian explosion of high-quality applications can be developed, he added. Generative AI is already fundamentally changing the way applications are developed. Instead of requiring a developer to create a level of abstraction to communicate with a machine, it’s now possible for machines to understand the language humans use to communicate with each other. Developers, via a natural language interface, will soon be asking generative AI platforms to not only surface suggestions but also test and debug applications. The challenge developers are encountering is that generative AI platforms such as ChatGPT are based on large language models (LLMs) that were trained using code of varying quality collected from across the web. As a result, the code suggested can contain vulnerabilities or may simply not be especially efficient, resulting in increased costs because more infrastructure resources are required. In addition, the suggestions surfaced can vary widely from one query to the next. As an alternative, AWS is looking to partner with organizations like MongoDB that have curated code to establish best practices that can be used to ensure better outcomes. These optimizations are available for C#, Go, Java, JavaScript and Python, the five most common programming languages used to build MongoDB applications.
In addition, Amazon CodeWhisperer enables built-in security scanning and a reference tracker that provides information about the origin of a code suggestion. There’s little doubt at this point that generative AI will improve developer productivity, especially for developers who have limited expertise. DevOps teams, however, may soon find themselves overwhelmed by the amount of code moving through their pipelines. The hope is AI technologies will also one day help software engineers find ways to manage that volume of code. On the plus side, the quality of that code should improve thanks to recommendations from LLMs that, for example, will identify vulnerabilities long before an application is deployed in a production environment. Like it or not, the generative AI genie is now out of the proverbial bottle. Just about every job function imaginable will be impacted to varying degrees. In the case of DevOps teams, the ultimate impact should involve less drudgery as many of the manual tasks that conspire to make managing DevOps workflows tedious are eliminated. In the meantime, organizations should pay closer attention to which LLMs are being used to create code. After all, regardless of whether a human or machine created it, that code still needs to be thoroughly tested before being deployed in production environments.

Read More

What Are Integral_constant And () Operator In C++?

Modern C++ has base class features that can be used with other modern features of C++. std::integral_constant is the base class for the C++ type traits in C++11, and in C++14 std::integral_constant gained an operator() overload that returns the constant value. In this post, we explain what integral_constant and the () operator are in C++14.

What is integral_constant in C++?

std::integral_constant is the base class for the C++ type traits in the <type_traits> header; it wraps a static constant of a specified type. The behavior of a program that adds specializations for std::integral_constant is undefined. Here is the definition in the <type_traits> header since C++11:

template <class T, T v>
struct integral_constant;

Here is a very simple example of how we can use std::integral_constant; in C++11 we can use ::value to retrieve its value:

typedef std::integral_constant<int, 5> five;

std::cout […]

Read More

Demystifying LLMs: How they can do things they weren’t trained to do

Large language models (LLMs) are revolutionizing the way we interact with software by combining deep learning techniques with powerful computational resources. While this technology is exciting, many are also concerned about how LLMs can generate false, outdated, or problematic information, and how they sometimes even hallucinate (generating information that doesn’t exist) so convincingly. Thankfully, we can immediately put one rumor to rest. According to Alireza Goudarzi, senior researcher of machine learning (ML) for GitHub Copilot: “LLMs are not trained to reason. They’re not trying to understand science, literature, code, or anything else. They’re simply trained to predict the next token in the text.” Let’s dive into how LLMs come to do the unexpected, and why. This blog post will provide comprehensive insights into LLMs, including their training methods and ethical considerations. Our goal is to help you gain a better understanding of LLM capabilities and how they’ve learned to master language, seemingly, without reasoning. What are large language models? LLMs are AI systems that are trained on massive amounts of text data, allowing them to generate human-like responses and understand natural language in a way that traditional ML models can’t. “These models use advanced techniques from the field of deep learning, which involves training deep neural networks with many layers to learn complex patterns and relationships,” explains John Berryman, a senior researcher of ML on the GitHub Copilot team. What sets LLMs apart is their proficiency at generalizing and understanding context. They’re not limited to pre-defined rules or patterns, but instead learn from large amounts of data to develop their own understanding of language. This allows them to generate coherent and contextually appropriate responses to a wide range of prompts and queries. 
And while LLMs can be incredibly powerful and flexible tools because of this, the ML methods used to train them, and the quality—or limitations—of their training data, can also lead to occasional lapses in generating accurate, useful, and trustworthy information. Deep learning The advent of modern ML practices, such as deep learning, has been a game-changer when it comes to unlocking the potential of LLMs. Unlike the earliest language models that relied on predefined rules and patterns, deep learning allows these models to create natural language outputs in a more human-like way. “The entire discipline of deep learning and neural networks—which underlies all of this—is ‘how simple can we make the rule and get as close to the behavior of a human brain as possible?’” says Goudarzi. By using neural networks with many layers, deep learning enables LLMs to analyze and learn complex patterns and relationships in language data. This means that these models can generate coherent and contextually appropriate responses, even in the face of complex sentence structures, idiomatic expressions, and subtle nuances in language. While the initial pre-training equips LLMs with a broad language understanding, fine-tuning is where they become versatile and adaptable. “When developers want these models to perform specific tasks, they provide task descriptions and examples (few-shot learning) or task descriptions alone (zero-shot learning). The model then fine-tunes its pre-trained weights based on this information,” says Goudarzi. This process helps it adapt to the specific task while retaining the knowledge it gained from its extensive pre-training. But even with deep learning’s multiple layers and attention mechanisms enabling LLMs to generate human-like text, it can […]

Read More

vcpkg 2023.10.19 Release: Export for Manifests, Documentation Improvements, and More…

Augustin Popa, November 3rd, 2023. The 2023.10.19 release of the vcpkg package manager is available. This blog post summarizes changes from August 10th, 2023 to October 19th, 2023 for the Microsoft/vcpkg, Microsoft/vcpkg-tool, and Microsoft/vcpkg-docs GitHub repos. Some stats for this period: 53 new ports were added to the open-source registry. If you are unfamiliar with the term ‘port’: ports are packages that are built from source and are typically C/C++ libraries. 729 updates were made to existing ports. As always, we validate each change to a port by building all other ports that depend on, or are depended on by, the library being updated, for our nine main triplets. There are now 2,318 total libraries available in the vcpkg public registry. 34 contributors submitted PRs, issues, or participated in discussions in the main repo. The main vcpkg repo has over 5,700 forks and 19,900 stars on GitHub. Key changes Notable changes for this release are summarized below. vcpkg export now supports manifest mode The vcpkg export command can be used to export built packages from the installed directory to a standalone SDK. A variety of formats are supported, including NuGet, a zip, or a raw directory. The SDK contains all prebuilt binaries for the selected packages, their transitive dependencies, and integration files such as the CMake toolchain or MSBuild props/targets. This command is useful for developers who want to export their dependencies to a portable format for their end users to consume, when those end users do not have vcpkg. Now, this command is supported for manifest-based (vcpkg.json) projects. Summary of changes: vcpkg export in manifest mode exports everything in the vcpkg_installed directory. In manifest mode, the export command emits an error and fails when port:triplet arguments are provided, as they are not allowed in manifest mode.
Added a guard to exit with an error message when the installed directory is empty; previously, it just failed silently. Made --output-dir mandatory in manifest mode. Documentation for vcpkg export in manifest mode: PR Microsoft/vcpkg-tool#1136. Implemented default triplet changes announced earlier this year In a previous blog post, we announced that we would be changing the default behavior for commands that accept a triplet as an option but are not provided one. This change is now live. The default triplet assumed is no longer always x86-windows but will instead be a triplet inferred from your CPU architecture and operating system. PR: Microsoft/vcpkg-tool#1180. Improvements to vcpkg help The documentation provided when running vcpkg help or vcpkg help <command> has been updated. This should make it easier to explore vcpkg without having to go to the documentation on Microsoft Learn, and it also improves the autocompletion (Tab) experience for the tool. Below are some screenshots of the new experience. Before (left) and after (right) when running vcpkg help (or not providing any commands to vcpkg): Summary of changes: We now show a more complete list of available commands and options. Available commands are now organized by category and sorted alphabetically. Added a link to our online vcpkg documentation. Cleaned up some of the wording. Before (left) and after (right) when running vcpkg help install: Summary of changes: Added a synopsis describing what the command does. In some cases, added additional examples for different usage […]

Read More

What Are The Useful Mutex, Shared Mutex and Tuple Features In Modern C++

Hello C++ Developers! The Embarcadero and Whole Tomato developer teams are working hard on the release of RAD Studio 12, and it seems we may (or may not) see an early release of the new C++ compiler before 2024. The new 64-bit Clang toolchain in RAD Studio 12 may include a new bcc64x C++ compiler. The news is amazing. Before it arrives, let’s keep learning some modern C++ features to warm up. This week, we have three more modern C++ features that can be used in C++ Builder. The concurrency support library in modern C++ is designed to let your programs read and write data safely in threaded operations, allowing us to develop faster multi-threaded code. There are differences between mutual exclusion (std::mutex) and shared mutexes (std::shared_mutex), and in another post we explain these. The tuple (std::tuple) was introduced in C++11 and improved in C++14. In the last post, we explain tuple addressing via type, a feature that comes with the C++14 standard. Our educational LearnCPlusPlus.org site has a broad selection of new and unique posts with examples suitable for everyone, from beginners to professionals alike. It is growing well, and we have many new readers, thanks to your support! The site features a treasure trove of posts that are great for learning the features of modern C++ compilers, with very simple explanations and examples. RAD Studio’s C++ Builder, Delphi, and their free community editions, C++ Builder CE and Delphi CE, are powerful tools for modern application development.

Table of Contents: Do you want to know some news about C++ Builder 12? How to use modern C++ with C++ Builder? How to learn modern C++ for free using C++ Builder? Where can I learn C++ and test these examples with a free C++ compiler?

Do you want to know some news about C++ Builder 12?

RAD Studio / C++ Builder 12 may come with a new C++ compiler: a Clang 15.x-based compiler for 64-bit Windows. If you want to discover what’s coming in RAD Studio 12, secure your spot from this link now: https://ow.ly/NZFQ50PVL13

Some of the technical features are:

- Uses Clang 15
- Uses LLVM’s lld as the linker
- Emits COFF64 object files (note this means Delphi can also emit COFF64 object files in 12.0: the compiler option “-jf:coff” is specified by our targets files when the “-JL” or “-JPHN[E]” options are specified)
- Emits PDB format debug info
- Uses the Itanium ABI (not the Microsoft ABI)
- Uses LLVM’s libc++ STL
- Uses UCRT for the C runtime
- Uses a mix of LLVM and MinGW for the C++ runtime
- Targets Win64
- Named bcc64x

Here are more details about it: Win64 Clang Toolchains in RAD Studio 12.

How to use modern C++ with C++ Builder?

In modern C++, the concurrency support library is designed to let programs read and write data safely in threaded operations, which allows us to develop faster multi-threaded apps. This library includes built-in support for threads (std::thread), atomic operations (std::atomic), mutual exclusion (std::mutex), condition variables (std::condition_variable), and many other features. In C++14, in addition to mutex, there is a shared mutex class located in the <shared_mutex> header. In the first post, we explain shared-mutex locking in modern C++. There are differences between mutual exclusion (std::mutex), which comes with C++11, and the shared mutexes (std::shared_timed_mutex in C++14, std::shared_mutex in C++17). In the next post, we explain a frequently asked mutex question in modern C++: what are […]

Read More

Atlassian Brings Generative AI to ITSM

Atlassian today added generative artificial intelligence (AI) capabilities to Jira Service Management, an IT service management (ITSM) platform built on top of Jira project management software already used widely by DevOps teams. Generative AI is at the core of a virtual agent that analyzes and understands intent, sentiment, context and profile information to personalize interactions. Based on the same natural language processing (NLP) engine that Atlassian is embedding across its portfolio, the virtual agent dynamically generates answers from sources such as knowledge base articles, onboarding guides and frequently asked questions (FAQs) documents. In addition, it can facilitate conversations with human experts any time additional expertise is required to respond to more complex inquiries. Atlassian is also extending the reach of Atlassian Intelligence, a generative AI solution launched earlier this year, to provide concise summaries of all conversations, knowledge base articles and other resolution paths recommended by previous agents that have handled similar issues. It will also help IT staff craft better responses and adjust their tone to be more professional or empathetic if needed. During setup, support teams can easily configure the virtual agent experiences to match how they deliver service without writing a single line of code. Edwin Wong, head of product for IT solutions at Atlassian, said these additions are part of a larger commitment Atlassian is making to unify the helpdesk experience. The company plans to leverage Atlassian Intelligence to coordinate routing of all employee requests to the right tools as it aggregates requests from multiple communications channels such as web portals, email, chat and from within third-party applications, he noted. The overall goal is to reduce the number of tickets generated by leveraging AI as much as possible to handle service requests in a way that costs less to implement and maintain, Wong said. 
In the longer term, Atlassian will also apply generative AI to enable organizations to automate IT asset management further, he added. There is little doubt at this juncture that AI will be pervasively applied across both ITSM and DevOps workflows. As those advances are made, it should also become easier to address issues that arrive either programmatically or by generating a ticket for a service request that is then processed by an ITSM platform such as Jira Service Management. Each organization will need to decide how quickly to incorporate AI into ITSM, but hopefully, the level of burnout experienced by IT personnel will be sharply reduced as more tasks are automated. Less clear is the impact AI will have on the size of IT teams required to provide those services, but for the foreseeable future, there will always be a need for some level of human supervision. In the meantime, IT teams should take an inventory of the processes that are likely to be automated by AI today with an eye toward restructuring teams as more tasks are automated. Ultimately, the goal should be to let machines handle the tasks they do best so humans can provide higher levels of service that deliver more value to the business.

Read More

What Are The Standard User-Defined Literals In C++14?

C++11 introduced new forms of literals, using modified syntax and semantics, to provide User-Defined Literals (UDLs), also known as extensible literals. While user code could define them, the C++11 standard library did not provide any. In C++14, the committee added some standard literals. In this post, we explain user-defined literal operators, and we explain some of the standard literals added in C++14.

What are the user-defined literal operators in C++?

Using user-defined literals, user-defined classes can provide new literal syntax; they are defined with operator"" to combine values with conversion operators. Below, we explain how to use user-defined literals in C++.

What are the standard user-defined literals in C++14?

In C++14, we have some standard user-defined literal operators that come with the standard library. These are literals for basic strings, for chrono types, and for complex number types. We can access these operators with any of:

using namespace std::literals;
using namespace std::string_literals;
using namespace std::literals::string_literals;

C++14 adds the following standard literals. For the string types there is an operator""s():

s : the std::basic_string types, for creating the various string types std::string, std::wstring, etc.
Here is how we can use it with auto:

auto str = "LearnCPlusPlus.org"s; // auto deduction to std::string

Suffixes for std::chrono::duration values:

h : hours for std::chrono::duration time intervals
min : minutes for std::chrono::duration time intervals
s : seconds for std::chrono::duration time intervals
ms : milliseconds for std::chrono::duration time intervals
us : microseconds for std::chrono::duration time intervals
ns : nanoseconds for std::chrono::duration time intervals

Here is how we can use them with auto:

auto durh = 24h;      // auto deduction to chrono::hours
auto durm = 60min;    // auto deduction to chrono::minutes
auto durs = 120s;     // auto deduction to chrono::seconds
auto durms = 1000ms;  // auto deduction to chrono::milliseconds
auto durns = 2000ns;  // auto deduction to chrono::nanoseconds

Suffixes for complex number literals:

i : imaginary part of std::complex<double>
if : imaginary part of std::complex<float>
il : imaginary part of std::complex<long double>

Here is how we can use them with auto:

auto zi  = 5i;   // auto deduction to std::complex<double>
auto zif = 7if;  // auto deduction to std::complex<float>
auto zil = 9il;  // auto deduction to std::complex<long double>

There are more definitions. Is there a full example of how to use standard user-defined literals in C++14? Here is a full example of standard user-defined literals in C++.
#include <iostream>
#include <string>
#include <chrono>
#include <complex>

using namespace std::literals;
using namespace std::string_literals;
// using namespace std::literals::string_literals;

int main()
{
  auto str = "LearnCPlusPlus.org"s; // auto deduction to std::string

  auto durh = 24h;      // auto deduction to chrono::hours
  auto durm = 60min;    // auto deduction to chrono::minutes
  auto durs = 120s;     // auto deduction to chrono::seconds
  auto durms = 1000ms;  // auto deduction to chrono::milliseconds
  auto durns = 2000ns;  // auto deduction to chrono::nanoseconds

  auto zi  = 5i;   // auto deduction to std::complex<double>
  auto zif = 7if;  // auto deduction to std::complex<float>
  auto zil = 9il;  // auto deduction to std::complex<long double>
}

For more information about the standard user-defined literals, please […]

Read More