News

What Are The Elementary String Conversions That Come With C++ 17?

In addition to the many beneficial features of C++17, elementary string conversions were introduced in that specification. std::to_chars() and std::from_chars() are defined in the <charconv> header and convert between numeric values and strings without considering locale-specific conversions. In this post, we explain the std::to_chars() and std::from_chars() functions that come with C++17.

What is std::to_chars()?

std::to_chars() is defined in the <charconv> header and is used to convert numeric values into a character string within a given valid range. std::to_chars() is designed to copy the numeric value into a string buffer, in a given specific format, with the only overhead being making sure that the buffer is big enough. You don't need to consider locale-specific conversions. Here is the syntax of std::to_chars() in C++17:

std::to_chars_result to_chars( char* first, char* last, /* numeric type */ value, int base = 10 );

Here is a simple example:

std::string str = "abcdefgh";
const int ival = 10001000;
const auto con = std::to_chars(str.data(), str.data() + str.size(), ival);

In floating-point number conversions, the std::chars_format flags can be used (i.e. std::chars_format::fixed, std::chars_format::scientific, std::chars_format::general, ...).

What is std::from_chars()?

std::from_chars() is defined in the <charconv> header and is used to convert string data in a given range to a value (string to int, string to float operations). If no string characters match the pattern, or if the obtained value is not representable in the target type, the value remains unchanged. std::from_chars() is a lightweight parser that performs no dynamic allocation, and you don't need to consider locale-specific conversions. Here is the syntax of std::from_chars() in C++17:

std::from_chars_result from_chars( const char* first, const char* last, /* numeric type */ &value, int base = 10 );

Here is a simple example:
std::string str = "10001000";
int vali;
auto [ptr, ec] = std::from_chars(str.data(), str.data() + str.size(), vali);

Is there a full example of the elementary string conversions that come with C++17?

Here is a full example of std::to_chars() and std::from_chars() in C++17.

#include <iostream>
#include <string>
#include <charconv>

int main()
{
    std::string str = "abcdefgh";

    // INT TO CHARS
    const int ival = 10001000;
    const auto con = std::to_chars(str.data(), str.data() + str.size(), ival);

    std::cout << str << '\n';

    // CHARS TO INT
    int vali;
    auto [ptr, ec] = std::from_chars(str.data(), str.data() + str.size(), vali);
    if (ec == std::errc())
        std::cout << vali << '\n';
}

Read More

From Reaction to Robots: Riding the AI Wave in 2024

As we navigate another year of consistent zero-day breaches, legislative pivots, the explosion of AI tooling and threat actors growing bolder and more desperate, it’s safe to say that getting comfortable with change is a requirement for thriving in the technology industry. We occupy a notoriously unpredictable space, but that’s half the fun. Compared to many other verticals, technology—especially cybersecurity—is relatively youthful, and the future should be something we can all look forward to blossoming in sophistication alongside the technology we swear to protect. So, what can we expect in the industry in 2024? We put our heads together, looked into our crystal ball, and these were the results: Government Regulations Around AI Will Turn the Industry Upside Down It was the talk of the conference circuit in 2023, with several high-profile presentations at Black Hat, DEF CON, Infosecurity Europe and many more warning of the explosive changes we can expect from AI implementation across every industry, especially cybersecurity. As tends to happen with low barriers to entry for such transformative technology, adoption has outpaced any official regulation or mandates at the government level. With significant movements in general cybersecurity guidelines and benchmarks around the world, including CISA’s Secure-by-Design and -Default principles in the U.S. and similar initiatives from the UK and Australian governments, it is essentially a foregone conclusion that regulations around AI use will be announced sooner rather than later. While much of the debate surrounding the mainstream use of AI tooling and LLMs has centered around copyright issues with training data, another perspective delves into how AI is best used in cybersecurity practices. 
When it comes to coding, perhaps AI's most human quality is that it shares our difficulty in displaying contextual security awareness, and this is deeply concerning as more developers adopt AI coding assistants in the construction of software. This has not gone unnoticed, and in a time of increased scrutiny for software vendors adopting security best practices, government-level intervention would certainly come as no surprise. … And Demand for AI/ML Coding Tools Will Create a Need for More Developers, not Less! Much has been written about the AI takeover, and for the better part of a year, we have been subject to a plethora of clickbait headlines that spell doom and destruction for just about every white-collar profession out there, and developers were not spared. After months of speculation and experimentation with LLMs in a coding context, we remain entirely unconvinced that development jobs are at collective risk. There is no doubt that AI/ML coding tools represent a new era of powerful assistive technology for developers, but they are trained on human-created input and data, and that has rendered the results far from perfect. Perhaps if every developer on the planet were a top-tier, security-minded engineer, we might see genuine cause for concern. However, just as the average adult driver vastly overestimates their ability (notice how everyone says they’re a great driver, and it’s always other people who lack skill? That’s a classic example of the Dunning-Kruger effect!), so too does the development community, especially when it comes to security best practices. According to one Stanford study into developer use of AI tooling, it is likely that unskilled developers using this technology will become dangerous. The study claimed that participants who had access to AI assistants […]

Read More

What Is The Class Template Variant (std::variant) in C++ 17?

In C++ Builder 12 and modern C++, std::variant is one of the powerful features that comes with C++17. std::variant is a discriminated union that lets us work with multiple data types. It represents a type-safe union and holds one of the types listed in its definition.

What is the class template std::variant in C++ 17?

std::variant is a class template defined in the <variant> header that represents a disjoint union (or discriminated union). A value of a variant contains one of an A, a B, or a C at any one time. It can be used as a multi-type variable; for example, a variable can be a float, an int, or a string. Here is the template definition since C++17:

template <class... Types>
class variant;

Here is a simple example that shows how we can use it.

std::variant<bool, int, float, std::string> myvar;
myvar = 100; // int

To get the value of a variant, we can use std::get. Here is how we can use it:

std::variant<bool, int, float, std::string> myvar2;
myvar2 = std::get<int>(myvar);

std::variant has many useful methods and helpers that can be used in modern C++, such as index, valueless_by_exception, emplace, swap, get_if, visit, variant_size, variant_size_v, variant_npos, monostate, std::hash, and the assignment and comparison operators (=, ==, !=, <, <=, >, >=).

Is there a full example about the class template variant in C++ 17?

Here is an example.

#include <iostream>
#include <string>
#include <variant>

int main()
{
    std::variant<bool, int, float, std::string> myvar;

    // myvar = true; // bool
    myvar = 100; // int

    if (std::holds_alternative<int>(myvar))
        std::cout << std::get<int>(myvar) << '\n';
}

Read More

#include Diagnostics in Visual Studio

Mryam Girmay, January 8th, 2024

We’re excited to announce that the #include Diagnostics feature is now available in Visual Studio 2022 17.9 Preview 2. This new feature allows you to better understand the behavior of #include directives by providing detailed information on each directive’s references and build time. Download Visual Studio Preview. To begin utilizing this feature, activate #include diagnostics by right-clicking in your editor to display the context menu. Then, navigate to the ‘Include Directives’ option and choose ‘Enable #include Diagnostics’.

#include References

This feature allows you to analyze the usage of #include directives in your code. It shows where and how often each directive is referenced, which can be particularly useful when dealing with a long list of directives. If you find an #include directive that is infrequently used but significantly impacts your compile time, this tool will help you identify it quickly so you can take the necessary steps to optimize your code. Once you enable #include diagnostics, you should see a line of text above each #include directive. This is the CodeLens feature in action. The text above the #include directive indicates the count of its references in your current file. Clicking this count opens a window listing these references. Selecting any reference from this list will direct you to its corresponding line of code in your project.

#include Build Time

This feature presents the build time for each #include directive. To activate this, you’ll need to run Build Insights by navigating to Build -> Run Build Insights. This action will generate the necessary build time data, allowing you to easily visualize and evaluate the build time for each #include directive by comparing its usage and compilation time. The information provided by #include Diagnostics can be used to optimize your #include directives and improve compilation time.
In addition to the information provided by the new #include diagnostics, you may also want to consider checking out our documentation on C++ modules as an alternative to #include to further improve compilation time.

Send us your feedback

Your feedback is invaluable to us as we strive to enhance your experience. Please feel free to leave your comments below. Alternatively, you can share your thoughts through the Visual Studio Developer Community. We’re also available on Twitter (@VisualC) and can be reached via email at visualcpp@microsoft.com. We look forward to hearing from you!

Mryam Girmay, Program Manager, C++

Read More

AI a Key Driver Behind HPE’s $14 Billion Deal for Juniper

Hewlett Packard Enterprise is looking to become a more significant player in the networking space through its planned $14 billion acquisition of Juniper, a deal that it hopes will make it a more formidable rival to longtime market leader Cisco Systems. The deal, announced Tuesday after the markets closed, is a big deal in the early days of the new year for a networking industry that has become central in an IT sector that is becoming more distributed and more cloud-native. During a virtual briefing with analysts and journalists this morning, HPE CEO Antonio Neri described an HPE centered around its networking business that has AI capabilities and its GreenLake edge-to-cloud platform of IT services at its foundation. “HPE will be a new company where networking will be the core foundation of everything we do,” Neri said. “We’re going to accelerate what we call an AI-driven agenda, and that will allow us to capture this massive inflection point.” Even when the deal closes – which is expected to happen later this year or in early 2025 – HPE will still likely be in third place in the global networking space behind Cisco and Huawei, but will have a stronger portfolio that will not only include greater AI capabilities but also a stronger presence in both the enterprise and telecom spaces. Once it closes, Juniper CEO Rami Rahim will lead the combined HPE networking business and report to Neri. Juniper’s Mist AI is at the Center Unsurprisingly, AI was a key component of the deal. In a research note, Will Townsend and Patrick Moorhead, analysts with Moor Insights and Strategy, wrote that their thinking after initial news reports about a possible deal circulated was that HPE likely was looking for a “strong AI anchor” for its portfolio of hardware, software, and GreenLake IT consumption services. “AI is hot, ignited by the attention being directed toward generative AI, the underlying large language models, and many promising use cases,” Townsend and Moorhead wrote. 
“One could argue that beyond the AIOps capability found in the HPE Aruba Networking portfolio today, HPE needs further AI depth to remain competitive and continue to grow its top-line revenue and profitability. Juniper could deliver on that front.” Rahim called AI “the biggest inflection since the dawn of the internet itself” and added that the combination of HPE and Juniper “will be able to bring the depth and the breadth of the portfolios necessary to capture the full market opportunity that AI presents in front of us.” AI in networking is a strength for Juniper, which in 2019 bought Mist Systems and its AI technologies, including the Marvis virtual network assistant, which the analysts wrote serves “as the tip of the spear for Juniper’s reinvigorated efforts within the enterprise for WLAN, LAN, WAN, and SD-WAN solutions.” “By all measures, the Mist acquisition has been a success, with Juniper growing its enterprise install base at a faster rate than its service provider business over the last 12 to 18 months,” Townsend and Moorhead wrote. AI and Networking AI will play an increasingly important role in networking going forward, from dynamically adjusting bandwidth and self-correcting in the network for maximum uptime to quickly finding root causes of problems and deploying virtual network assistants. In a blog post last month, Liz Centoni, […]

Read More

What Is The New std::sample Algorithm In C++ 17?

The C++17 standard brings us a lot of useful methods, templates, and algorithms. One of the great algorithms is std::sample, defined in the <algorithm> header, which samples at most n elements uniformly from a given range. In this post, we explain the std::sample algorithm and how we can use it with an mt19937 random generator.

What is the std::sample algorithm in C++ 17 and beyond?

The std::sample algorithm is defined in the <algorithm> header and samples at most n elements uniformly from a given range into an output iterator; the random numbers are produced by a random number generator function. Generally, std::mt19937{} is used as the random generator and std::random_device{}() is used to seed it. The std::sample algorithm is defined as a template algorithm in C++17 as shown below.

template <class PopulationIterator, class SampleIterator, class Distance, class URBG>
SampleIterator sample( PopulationIterator first_in, PopulationIterator last_in,
                       SampleIterator output, Distance n, URBG&& function );

Here, first_in and last_in are the iterators that define the range of the input, n is the number of elements to be sampled into the output iterator, and function is the random number generator function.

Is there a full example about the std::sample algorithm in C++ 17 and beyond?

Here is a full example about std::sample in C++ 17.

#include <iostream>
#include <string>
#include <vector>
#include <iterator>
#include <algorithm>
#include <random>

int main()
{
    std::vector<std::string> vec_in {"This", "LearnCPlusPlus.org", "is", "really", "amazing", "!"};
    std::vector<std::string> vec_out;

    std::sample(vec_in.begin(), vec_in.end(), std::back_inserter(vec_out),
                4, std::mt19937{std::random_device{}()});

    for (const auto& s : vec_out)
        std::cout << s << ' ';
    std::cout << '\n';
}

Read More

MSVC ARM64 optimizations in Visual Studio 2022 17.8 

Jiong Wang (ARM Ltd), Hongyon Suauthai (ARM), January 9th, 2024

Visual Studio 2022 17.8 has been released recently (download it here). While there is already a blog, “Visual Studio 17.8 now available!”, covering new features and improvements, we would like to share more information with you about what is new for the MSVC ARM64 backend. In the last couple of months, we have been improving code generation for the auto-vectorizer so that it can generate Neon instructions in more cases. We have also optimized instruction selection for a few scalar code-generation scenarios, for example short-circuit evaluation, comparison against an immediate, and smarter immediate splitting for logic instructions.

Auto-Vectorizer supports conversions between floating-point and integer

The following conversions between floating-point and integer types are common in real-world code. Now, they are all enabled in the ARM64 backend and hooked up with the auto-vectorizer.

From       To         Instruction
double     float      fcvtn
double     int64_t    fcvtzs
double     uint64_t   fcvtzu
float      double     fcvtl
float      int32_t    fcvtzs
float      uint32_t   fcvtzu
int64_t    double     scvtf
uint64_t   double     ucvtf
int32_t    float      scvtf
uint32_t   float      ucvtf

For example:

void test (double * __restrict a, unsigned long long * __restrict b)
{
    for (int i = 0; i < 2; i++) {
        a[i] = (double)b[i];
    }
}

In Visual Studio 2022 17.7, the code generation was the following, in which both the computing throughput and the load/store bandwidth utilization were suboptimal due to scalar instructions being used:

ldp   x9, x8, [x1]
ucvtf d17, x9
ucvtf d16, x8
stp   d17, d16, [x0]

In Visual Studio 2022 17.8.2, the code generation has been optimized into:

ldr   q16, [x1]
ucvtf v16.2d, v16.2d
str   q16, [x0]

A single pair of Q register load & store plus SIMD instructions are used now. The above example is a conversion between double and 64-bit integer, so both types are the same size.
There was another issue in the ARM64 backend preventing auto-vectorization of conversions between types of different sizes, and it has been fixed as well. MSVC also auto-vectorizes the following example now:

void test_df_to_sf (float * __restrict a, double * __restrict b, int * __restrict c)
{
    for (int i = 0; i < 4; i++) {
        a[i] = (float) b[i];
        c[i] = ((int)a[i]) […]

Read More

Best of 2023: Copilots For Everyone: Microsoft Brings Copilots to the Masses

As we close out 2023, we at DevOps.com wanted to highlight the most popular articles of the year. Following is the latest in our series of the Best of 2023. Microsoft has been doing a lot to extend the coding ‘copilot’ concept into new areas. And at its Build 2023 conference, Microsoft leadership unveiled new capabilities in Azure AI Studio that will empower individual developers to create copilots of their own. This news is exciting, as it will enable engineers to craft copilots that are more knowledgeable about specific domains. Below, we’ll cover some of the major points from the Microsoft Build keynote from Tuesday, May 23, 2023, and explore what the announcement means for developers. We’ll examine the copilot stack and consider why you might want to build copilots of your own. What is Copilot? A copilot is an artificial intelligence tool that assists you with cognitive tasks. To date, the idea of a copilot has been mostly associated with GitHub Copilot, which debuted in late 2021 to bring real-time auto-suggestions right into your code editor. “GitHub Copilot was the first solution that we built using the new transformational large language models developed by OpenAI, and Copilot provides an AI pair programmer that works with all popular programming languages and dramatically accelerates your productivity,” said Scott Guthrie, executive vice president at Microsoft. However, Microsoft recently launched Copilot X, powered by GPT-4 models. A newer feature also offers chat functionality with GitHub Copilot Chat to accept prompts in natural language. But the Copilot craze hasn’t stopped there—Microsoft is actively integrating Copilot into other areas, like Windows and even Microsoft 365. This means end users can write natural language prompts to spin up documents across the Microsoft suite of Word, Teams, PowerPoint and other applications. Microsoft has also built Dynamics 365 Copilot, Power Platform Copilot, Security Copilot, Nuance and Bing. 
With this momentum, it’s easy to imagine copilots for many other development environments. Having built out these copilots, Microsoft began to see commonalities between them. This led to the creation of a common framework for copilot construction built on Azure AI. At Build, Microsoft unveiled how developers can use this framework to build out their own copilots. Building Your Own Copilot Foundational AI models are powerful, but they can’t do everything. One limitation is that they often lack access to real-time context and private data. One way to get around this is by extending models through plugins with REST API endpoints to grab context for the tasks at hand. With Azure, this could be accomplished by building a ChatGPT plugin inside VS Code and GitHub Codespaces to help connect apps and data to AI. But you can also take this further by creating copilots of your own and even leveraging bespoke LLMs. Understanding The Azure Copilot Stack Part of the Azure OpenAI service is the new Azure AI Studio. This service enables developers to combine AI models like ChatGPT and GPT-4 with their own data. This could be used to build copilot experiences that are more intelligent and contextually aware. Users can tap into an open source LLM, Azure OpenAI or bring their own AI model. The next step is creating a “meta-prompt” that provides a role for how the copilot should function. So, what’s the process like? Well, first, you […]

Read More

Everything You Need To Know About The Copy Assignment Operator In C++ Classes

Classes and objects are part of object-oriented methods and typically provide features such as properties and methods. One of the great features of an object-oriented language like C++ is the copy assignment operator, used with operator= to create a new object from an existing one. In this post, we explain what a copy assignment operator is and its usage types, with some C++ examples.

What is a copy assignment operator in C++?

The copy assignment operator of a class is a non-template non-static member function that is declared with operator=. When you create a class or a type that is copy assignable (that you can copy with the = operator symbol), it must have a public copy assignment operator. Here is a simple syntax for the typical declaration of a copy assignment operator which is defaulted (since C++11):

class_name& class_name::operator= ( const class_name& ) = default;

Here is an example in a class.

myclass& operator=(const myclass& other) = default; // Copy Assignment Operator

Is there a simple example of using the copy assignment operator in C++?

The defaulted copy assignment operator is implicit in any class declaration. This means you don't need to declare it as above. Let's give a simple C++ example of the copy assignment operator with the default option; here is a simple class:

class myclass
{
public:
    std::string str;
};

Because this is the default in any class declaration, it is automatically declared. This class is the same as the one below.

class myclass
{
public:
    std::string str;

    myclass& operator=(const myclass& other) = default; // Copy Assignment Operator
};

And here is how you can use this "=" copy assignment operator with both class examples above.

myclass o1, o2;

o2 = o1; // Using the copy assignment operator

Now let's see the different usage types in C++:

1. Typical declaration of a copy assignment operator with swap
2. Typical declaration of a copy assignment operator (no swap)
3. Forced copy assignment operator
4. Avoiding implicit copy assignment
5. Implicitly-declared copy assignment operator
6. Deleted implicitly-declared copy assignment operator
7. Trivial copy assignment operator
8. Eligible copy assignment operator
9. Implicitly-defined copy assignment operator

C++ Builder is the easiest and fastest C and C++ compiler and IDE for building simple or professional applications on the Windows operating system. It is also easy for beginners to learn with its wide range of samples, tutorials, help files, and LSP support for code. RAD Studio's C++ Builder version comes with the award-winning VCL framework for high-performance native Windows apps and the powerful FireMonkey (FMX) framework for UIs. There is a free C++ Builder Community Edition for students, beginners, and startups; it can be downloaded from here. For professional developers, there are Professional, Architect, or Enterprise versions of C++ Builder, and there is a trial version you can download from here.

Read More

Skillsoft Survey Sees AI Driving Increased Need to Retrain IT Teams

More organizations than ever will need to invest in IT training as advances in artificial intelligence (AI) transform roles and responsibilities in the coming year. A survey of 2,740 IT decision-makers conducted by Skillsoft, a provider of an online training platform, finds two-thirds (66%) were already dealing with skills gaps in 2023. As AI becomes more pervasively applied to the management of IT, that skills gap is only going to widen given the limited pool of IT professionals that have any experience using AI to manage IT. Skillsoft CIO Orla Daly said it’s already apparent AI creates an imperative for training because there are simply not enough IT people with the requisite skills. In fact, the survey finds nearly half of IT decision-makers (45%) plan to close skills gaps by training their existing teams. That training is crucial because the primary reason IT staff change jobs is a lack of growth and development opportunities, noted Daly. While there is naturally a lot of consternation over the potential elimination of IT jobs, in the final analysis, AI will add more different types of jobs than it eliminates, adds Daly. Each of those jobs will require new skills that will need to be acquired and honed, she notes. “Training is the price of innovation,” said Daly. In the meantime, there is much interest in finding ways to automate existing IT processes to create more time for IT teams to experiment with AI technologies, said Daly. The report also finds well over half of IT decision-makers (56%) expect their IT budgets to increase to help pay for new platforms and tools, compared with only 12% expecting a decrease. It’s not clear to what degree AI will transform the management of IT, but it’s already apparent that many manual tasks involving, for example, generating reports are about to be automated.
The instant summarization capabilities that generative AI enables also promise to dramatically reduce the time required to onboard new members to an incident response team. Rather than having to allocate someone on the team to bring new members up to speed, each new member of the team will use queries framed in natural language to determine for themselves the extent of the crisis at hand. In addition, many tasks that today require expensive specialists to perform might become more accessible to a wider range of organizations as, for example, more DevOps processes are automated. That doesn’t necessarily mean that DevOps as an IT discipline disappears so much as it leads to the democratization of best DevOps practices. Each IT organization in the year ahead will need to determine to what degree to rely on AI to manage IT processes. It may take a while before IT teams have enough confidence in AI to rely on it to manage mission-critical applications, but many of the tasks that today conspire to make the management of IT tedious will undoubtedly fade away. The challenge and the opportunity now is to identify those tasks today with an eye toward revamping how IT might be managed in the age of AI tomorrow.

Read More