News

Three Important C++14 Features That You Can Use In C++ Builder 12

Hello C++ developers, Yilmaz here, community manager for LearnCPlusPlus.org. This week was another milestone for C++ developers: the new RAD Studio 12, C++ Builder 12, and Delphi 12 were released, packed full of great features, optimizations, and improvements. There was an amazing 2.5-hour presentation about RAD Studio 12 (I think it was the longest release presentation in the history of RAD Studio), so many new features, and some big changes on both the Delphi and C++ sides. One of the great features of C++ Builder 12 is the new Visual Assist (VA) with code completion, refactoring, navigation, and many other useful features for developers. The inclusion of an initial Clang C++ compiler is another big step, introducing a new 64-bit bcc64x Clang (15.x) compiler (version 7.60), which supports the C++11, C++14, and C++17 standards and partially supports C++20. There are many new features in the IDE, libraries, components, and compilers; please see below for details. I love the new logo designs too. They officially released all-new C++ Builder, Delphi, and RAD Studio logos here: https://www.embarcadero.com/news/logo This week we have 3 more post picks from LearnCPlusPlus.org that can be used with the new C++ Builder 12. The first post pick is about std::integral_constant and its operator () that comes with C++14. In the second post, we explain the standard user-defined literals in C++14, and in the third post, we explain containers, associative containers, and heterogeneous lookup in associative containers. Our educational LearnCPlusPlus.org site has a broad selection of new and unique posts with examples suitable for everyone from beginners to professionals alike. It is growing well, and we have many new readers, thanks to your support! The site features a treasure trove of posts that are great for learning the features of modern C++ compilers with very simple explanations and examples.
RAD Studio’s C++ Builder, Delphi, and their free community editions, C++ Builder CE and Delphi CE, are powerful tools for modern application development. Where can I learn C++ and test these examples with a free C++ compiler? If you don’t know anything about C++ or the C++ Builder IDE, don’t worry: we have a lot of great, easy-to-understand examples on the LearnCPlusPlus.org website and they’re all completely free. Just visit the site and copy and paste any example there into a new Console, VCL, or FMX project, depending on the type of post. We keep adding more C and C++ posts with sample code. In today’s round-up of recent posts on LearnCPlusPlus.org, we have new articles with very simple examples that can be used with the free C++ Builder 11 Community Edition (CE), a professional version of C++ Builder, the free BCC32C and BCC32X C++ compilers, or the free Dev-C++. Read the FAQ notes on the CE license and then simply fill out the form to download C++ Builder 11 CE. How can we use modern C++ with C++ Builder? Modern C++ has base class features that can be used with other modern features of C++. The std::integral_constant is the base class for the C++ type traits in C++11, and in C++14, std::integral_constant gained an operator () overload that returns the constant value. In the first post, we explain what std::integral_constant and its operator () are in C++14. C++11 introduced new forms of literals using modified syntax and semantics to provide user-defined literals (UDL), also known as extensible literals. While there was the ability to use them the standard library […]

Read More

GitHub Aims to Expand Copilot Scope and Reach in 2024

GitHub is gearing up to launch Copilot Workspace next year, a platform that will leverage generative artificial intelligence (AI) to automatically propose a plan for building an application based on natural language descriptions typed into the GitHub Issues project management software. Revealed at the GitHub Universe 2023 conference, Copilot Workspace will generate editable documents via a single click that can be used to create code that developers can then visually inspect, edit and validate. Any errors discovered by application developers or the Copilot Workspace platform can also be automatically fixed. In addition, summaries of the project can automatically be created and shared across an application development team. GitHub CEO Thomas Dohmke told conference attendees this “revolutionary” approach will enable developers to employ AI as a “second brain.” In the meantime, GitHub is making an enterprise edition of Copilot available that can be trained using code connected to a private repository to ensure intellectual property is protected. GitHub is also moving to integrate GitHub Copilot with third-party developer tools, online services and knowledge outside GitHub by collaborating with, for example, DataStax, LaunchDarkly, Postman, HashiCorp and Datadog. GitHub is moving to make the generative AI capabilities it provides accessible beyond text editors. Copilot Chat, starting next month, can be accessed via a mobile application to foster collaboration by explaining concepts, suggesting code based on your open files and windows, detecting security vulnerabilities and finding and fixing code errors. Copilot Chat, based on GPT-4, will also be accessible across the GitHub website in addition to integrated development environments (IDEs) such as JetBrains and via a command line interface (CLI). Generative AI is already having a massive impact on the rate at which applications are developed, but that code still needs to be reviewed.
ChatGPT is based on a general-purpose large language model (LLM) that is trained by pulling in code of varying quality from all across the web. As a result, code generated by the platform might contain vulnerabilities or be inefficient. In many cases, professional developers still prefer to write their own code. Of course, not every programming task requires the same level of coding expertise. In many instances, ChatGPT will generate, for example, a script that can be reused with confidence across a DevOps workflow. There is no shortage of mediocre developers who are now writing better code thanks to tools such as GitHub Copilot, and soon, domain-specific LLMs will make it possible to consistently write better code based on validated examples of code. The one thing that is certain is the volume of code written by machines is only going to increase. The challenge will be managing all the DevOps pipelines that will be needed to move increased volumes of code into a production environment. There is no doubt that AI will be applied to the management of DevOps pipelines, but for the moment, at least, the pace at which AI is being applied to writing code is already exceeding the ability of DevOps teams to manage it.

Read More

How Can We Use The is_final Type Trait In C++ 14?

In C++11, the final specifier is used for a virtual function or a class that cannot be overridden or derived from, but there was no way to check whether a class is final. In C++14, there is a std::is_final type trait that can be used to detect whether a class is marked final. In this post, we explain how we can use the std::is_final type trait in C++14 and C++17. What is the final specifier in modern C++? The final specifier (keyword) marks a virtual function that cannot be overridden, or a class that cannot be derived from. Regarding virtual overrides, C++11 tends to tighten the rules to detect some problems that often arise; to achieve this goal, C++11 introduced a new contextual keyword, the final specifier. If you want to learn more about it, see our earlier post. What is the std::is_final type trait in C++14? The std::is_final type trait (a UnaryTypeTrait) defined in the <type_traits> header detects whether a class is marked final. If the class is final, the member constant value equals true; if not, value is false. Here is the syntax (since C++14): template<class T> struct is_final; How can we use the std::is_final type trait in C++14? We can use the std::is_final type trait to check whether a class is marked final. Here is a simple example: class myclass final { }; if (std::is_final<myclass>::value) std::cout << "myclass is final";

Read More

Unreal Engine and C++ Game Development Made Easy with Visual Studio 2022

David Li, November 14th, 2023. Introduction: Creating amazing games just got easier. We are very happy to announce the latest Unreal Engine integrations and powerful C++ productivity features in Visual Studio 2022. Our team has been tirelessly working to incorporate your feedback and bring even more features that will enhance your game development experience, whether you work on Unreal Engine or a proprietary engine. In this blog, we will explore how you can leverage the new Unreal Engine Test Adapter, which helps to streamline your testing process without leaving the IDE. Then, we will also show you how you can code faster with Unreal Engine snippets and macro specifier suggestions, as well as view in-memory bitmaps. Next, we have included a range of core C++ productivity features and debugger enhancements that will benefit not only those working on Unreal Engine but also anyone who works on their own engines. Lastly, we will round out the blog with updates on C++ IntelliSense and debugger launch performance improvements. Most of these productivity features are available in Visual Studio 2022 version 17.8, while some are available in the latest previews. We are confident that these features will help you be more productive and enable you to create even more amazing games. Download Visual Studio 2022 17.8. Latest Unreal Engine Integrations: Setting Up Unreal Engine Integrations. Unreal Engine integrations will only show up when you are working on an Unreal Engine project. To ensure these features are active, double check that the “IDE support for Unreal Engine” component is enabled in the “Game development with C++” workload in the Visual Studio Installer. Some integrations such as Blueprints support and Test Adapter will require the free “Visual Studio Integration Tool” Unreal Engine plugin. Please see Visual Studio Tools for Unreal Engine for detailed setup instructions.
Unreal Engine Test Adapter. Special thanks to the folks at Rare who contributed tremendously to this feature. Streamline your testing process without leaving the IDE with the Unreal Engine Test Adapter. You can now discover, run, manage, and debug your Unreal Engine tests. In Visual Studio 2022 version 17.8, you will automatically see your Unreal Engine tests when you open Visual Studio. To see your tests, you can open Test Explorer with View > Test Explorer. The latest version of our free Visual Studio Tools for Unreal Engine plugin is required to use the Unreal Engine Test Adapter. In addition, ensure the “Unreal Engine Test Adapter” component in the “Game development with C++” workload is enabled in the Visual Studio Installer. Unreal Engine Code Snippets. Write code more efficiently with Unreal Engine code snippets. In Visual Studio 2022 version 17.8, you can find common Unreal Engine constructs as snippets in your member list. To begin, enter the name of any Unreal Engine construct, such as “uclass”. Then, press Tab or Enter to expand the snippet. We have also included exported versions of UCLASS (uclass, uclassexported), UINTERFACE (uinterface, uinterfaceexported), and USTRUCT (ustruct, ustructexported) for those working with exported APIs and plugins. In addition, we have included constructs such as SWidget (swidget), TActorRange (tactorrange), TObjectRange (tobjectrange), and WITH_EDITOR (witheditor) based on your feedback. List of supported snippets: uclass, uclassexported, uenum, ufunction, uinterface, uinterfaceexported, uproperty, ustruct, ustructexported, uelog, swidget, tactorrange, tobjectrange, witheditor […]

Read More

NetApp Extends Microsoft Alliance to Include CloudOps Tools

NetApp this week extended its alliance with Microsoft to now include its CloudOps portfolio of tools for optimizing cloud computing environments. Previously, the alliance between the two companies focused on data management but is now expanding to include tools to deploy workloads, improve performance and reduce costs using machine learning algorithms across both virtual machine instances and the Azure Kubernetes Service (AKS). Kevin McGrath, vice president of Spot by NetApp, said in more challenging economic times, there’s a lot more focus on programmatically reining in cloud costs using FinOps best practices within the context of a DevOps workflow. Organizations are also starting to create platform engineering teams to more efficiently manage DevOps workflows at scale across hybrid cloud computing environments, he added. For years, developers have been provisioning cloud infrastructure resources with little to no oversight. Unfortunately, developers are also prone to over-provision infrastructure resources to ensure maximum application availability. Many of those infrastructure resources never wind up being consumed by the application, so the cost of cloud computing winds up becoming inflated. IT leaders are also increasingly being required to make sure cloud costs are more predictable. Sudden spikes in consumption resulting in higher monthly bills are an unwelcome surprise to finance teams that are now required to manage costs more closely. Ongoing advances in artificial intelligence (AI) should make it easier to predict costs across highly dynamic cloud computing environments. Navigating all the pricing options that cloud service providers make available is challenging. IT teams need to clearly understand the attributes of each workload to ensure optimal usage of cloud infrastructure resources. Less clear is the degree to which IT teams are pitting cloud service providers against one another.
Pricing across the cloud services that most organizations use today is fairly consistent. Most organizations that deploy workloads in the cloud tend to run the bulk of them on the same service because they lack the internal expertise needed to manage multiple clouds equally well. There may be some workloads running on additional clouds, but enterprise licensing agreements reward customers for running more workloads on a cloud. The only way to really optimize cloud spending is to shift workloads to less expensive tiers of service that might only be available for a relatively limited amount of time. One way or another, the management of cloud computing is finally starting to mature. As the percentage of workloads that organizations have running in the cloud steadily increases, IT teams are becoming more adept at both maximizing application performance and the associated return on investment (ROI). Each IT organization will need to decide for itself how best to manage cloud computing environments as it continues to build and deploy cloud-native applications alongside legacy monolithic applications running on virtual machines, but NetApp is betting the need for tools such as CloudOps will increase as cloud computing environments become more complex. The challenge, as always, is finding and retaining the talent needed to manage cloud computing environments when every other organization is looking for that same expertise.

Read More

What Are The New Begin End Iterators In C++14?

Iterators are one of the most useful features of containers in modern C++. Mostly we use them with vectors, maps, strings, and other C++ containers. In C++11, the begin() and end() iterators define the start and the end of an iteration, and are mostly used in for loops. C++14 added new variants of the global std::begin and std::end functions, and in this post we explain these new begin/end iterators. What are the begin and end iterators in C++11 and beyond? In modern C++, containers are data storage arrays. They are very useful for iterating and searching data with their amazing methods and properties. An iterator is an object that points to an element in a range of elements (i.e. the characters of a string or the members of a vector). We can use iterators to move through the elements of this range using a set of operators, for example the ++, --, and * operators. Iteration can be done with begin/end iterators. The begin() method returns an iterator pointing to the first element in the vector. The end() method returns an iterator pointing to the theoretical element that follows the last element in the vector. Here is a simple example of how we can use begin/end iterators in a for loop: for (auto vi = vec.begin(); vi != vec.end(); vi++) std::cout << *vi << ' ';

Read More

How to Solve the GPU Shortage Problem With Automation

GPU instances have never been as precious and sought-after as they have been since generative AI captured the industry’s attention. Whether it’s due to broken supply chains or the sudden demand spike, one thing is clear: Getting a GPU-powered virtual machine is harder than ever, even if a team is fishing in the relatively large pond of the top three cloud providers. One analysis confirmed “a huge supply shortage of NVIDIA GPUs and networking equipment from Broadcom and NVIDIA due to a massive spike in demand.” Even the company behind the rise of generative AI, OpenAI, suffers from a lack of GPUs. And companies have started adopting rather unusual tactics to get their hands on these machines (like repurposing old video gaming chips). What can teams do when facing a quota issue and the cloud provider runs out of GPU-based instances? And once they somehow score the right instance, how can they make sure no GPUs go to waste? Automation is the answer. Teams can use it to accomplish two goals: find the best GPU instances for their needs and maximize their utilization to get more bang for their buck. Automation makes finding GPU instances easier. The three major cloud providers offer many types and sizes of GPU-powered instances. And they’re constantly rolling out new ones; an excellent example of that is AWS P5, launched in July 2023. To give a complete picture, here’s an overview of instance families with GPUs from AWS, Google Cloud and Microsoft Azure. AWS: P3, P4d, G3, G4 (this group includes G4dn and G4ad instances) and G5. Note: AWS also offers Inferentia machines optimized for deep learning inference apps and Trainium for deep learning training of 100B+ parameter models. Microsoft Azure: NCv3-series, NC T4_v3-series, ND A100 v4-series and NDm A100 v4-series. When picking instances manually, teams may easily miss out on opportunities to snatch up golden GPUs from the market.
Cloud automation solutions help them find a much larger supply of GPU instances with the right performance and cost parameters. Considering GPU spot instances: Spot instances offer significant discounts (even 90% off on-demand rates), but they come at a price. The potential interruptions make them a risky choice for important jobs. However, running some jobs on GPU spot instances is a good idea as they accelerate the training process, leading to savings. ML training usually takes a very long time, from hours to even weeks. If interruptions occur, the deep learning job must start over, resulting in significant data loss and higher costs. Automation can prevent that, allowing teams to get attractively priced GPUs still available on the market to cut training and inference expenses while reducing the risk of interruptions. In machine learning, checkpointing is an important practice that allows for the saving of model states at different intervals during training. This practice is especially beneficial in lengthy and resource-intensive training procedures, enabling the resumption of training from a checkpoint in case of interruptions rather than starting anew. Furthermore, checkpointing facilitates the evaluation of models at different stages of training, which can be enlightening for understanding the training dynamics. Zooming in on checkpointing: PyTorch, a popular ML framework, provides native functionalities for checkpointing models and optimizers during training. Additionally, higher-level libraries such as PyTorch Lightning abstract away much of the boilerplate code associated with training, evaluation, and checkpointing in PyTorch. Let’s take a […]

Read More

What Is Heterogeneous Lookup In Associative Containers In C++?

Containers are data storage arrays in modern C++ and they are very useful for iterating and searching data with their amazing methods and properties. The C++ Standard Library defines four different main container types, and one of them is the associative containers, such as std::map and std::set. These class types allow us to use the look-up method find() with a value of the key type. C++14 introduced the “Heterogeneous Lookup In Associative Containers” feature that allows the lookup to be done via an arbitrary type, so long as the comparison operator can compare that type with the actual key type. In this post, we explain containers, associative containers, and heterogeneous lookup in associative containers. What is a container in C++? Containers are data storage arrays in modern C++, and they are very useful for iterating and searching data with their amazing methods and properties. In C++, there are four main types of containers: Sequence Containers (vectors, arrays, …), Associative Containers (maps, sets, …), Unordered Associative Containers (unordered_set, unordered_map, …), and Container Adapters (stack, queue, priority_queue). If you want to know more about containers, here are more details about their types. What are associative containers in C++? Associative Containers are class templates of container types that can be used to implement sorted data structures where data can be quickly searched. They are sorted by keys. We can say they are about O(log n) complexity data structures. The associative containers are: std::map, a class template for a collection of key-value pairs, whose keys are unique and sorted; std::set, a class template for a collection of unique keys, sorted by keys; std::multiset, a class template for a collection of keys, sorted by keys; and std::multimap, a class template for a collection of key-value pairs, sorted by keys. What is heterogeneous lookup in associative containers in C++?
The C++ Standard Library defines four associative container types. These class types allow us to use the look-up method find() with a value of the key type. C++14 introduced the “Heterogeneous Lookup In Associative Containers” feature that allows the lookup to be done via an arbitrary type, so long as the comparison operator can compare that type with the actual key type. Heterogeneous lookup lets us use std::map, std::set, and other associative containers with such arbitrary lookup types. For example, let’s store some strings and values in a std::map (which is an associative container): std::map<std::string, int> mymap { { "Hello Developers", 10 }, { "Please Visit Us", 20 }, { "LearnCPlusPlus.org", 30 } }; We can use the find method of the map as shown below: auto m = mymap.find(std::string("LearnCPlusPlus.org")); // m is an iterator. For heterogeneous lookup we can use std::less<> or another transparent comparator. Is there a full example of heterogeneous lookup in associative containers in C++? When we want to do heterogeneous lookup, all we have to do is use std::less<> (or another transparent comparator) and implement the correct comparison methods. Here is an example: std::map<std::string, int, std::less<>> mymap2 { { "Hello Developers", 10 }, { "Please Visit Us", 20 }, { "LearnCPlusPlus.org", 30 } }; The interesting thing is, it is straightforward to enable, and we can use the find method like so: auto m = mymap2.find("LearnCPlusPlus.org"); // m is an iterator. Note that here m […]

Read More

[Yukon Beta Blog] C++ and Visual Assist in RAD Studio 12.0

This blog post is based on a pre-release version of the RAD Studio software and it has been written with specific permission by Embarcadero. No feature is committed until the product GA release. RAD Studio 12 is just around the corner, with our release webinar this Thursday! Back in August, we gave a preview webinar of what is being worked on internally for C++, covering a preview of the updated Clang compiler and our initial integration of Visual Assist. The thing is, we held out a little. The preview was quite conservative. We didn’t share everything. In fact, we only shared about a third of what we’re shipping for Visual Assist. So what’s coming? In the August webinar, we shared that there are three areas for features in the initial implementation: code completion, refactoring and navigation. So, here’s a few teasers for what you’ll see on Thursday and what you’ll be able to use when you install C++Builder 12.0. C++Builder and RAD Studio 12 are out any day now! Code completion: What’s interesting about this image? Hint: It shows code completion doing something that it could never do before in C++Builder. Something pretty magical. There is lots more to say about code completion, and Code Insight in general, but we’ll show it on Thursday! Let’s move right along… Refactoring: In the preview webinar, we showed one refactoring, Rename. And a rename refactoring is really useful; it would be great to have just that in the first version! But here’s the Refactor menu in C++Builder 12.0: Rename and… three other items. That’s four items when we previewed one. So what are they? Well, the remaining three are not quite refactorings per se (they won’t rewrite or move around code) so much as a kind of code generation, doing or creating something useful for you. The first one is pretty amazing – magical, in fact. I smile every time I use it.
The third and fourth are smaller, and are two versions of the same feature (each operation is the inverse of the other one). What could they be? Navigation: In the preview webinar, we previewed Find References — an incredibly useful feature to find where any symbol (method, class, etc.) is used, or referred to, in your code — and Find Symbol, a feature to find any symbol anywhere in your project group or the headers it uses or… anywhere. But look at this. Four other menu items. The bottom two are small: a Delphi feature that happens to be one of the most-requested navigation features for C++Builder. (They’re two aspects of the same feature, one being the inverse of the other. And though small, we think those of you who’ve asked for it will love seeing it added. It’s not actually an existing Visual Assist feature; we added it into Visual Assist specially for C++Builder!) The top two? They’re bigger. What could they be? Well, one opens a dialog. A useful dialog. The other, however, opens another menu. This menu: Look at it. Lots more functionality appears. Not only that, but there are submenus. What could they contain? This menu can be invoked another way too, by the way. And for what it’s worth, in the launch webinar I refer to this feature as the most […]

Read More

I first met Philippe Kahn and Turbo Pascal 40 years ago this month

In 1983, I was working for Softsel Computer Products (Softsel) in the product evaluation, support and training group. Softsel had a booth at the Fall 1983 COMDEX (Computer Dealer Expo) conference (November 28 to December 2) in the Las Vegas Convention Center. I sat at a pod in the booth to answer questions about Softsel and the products we distributed, and to talk with software and device manufacturers that might be looking to have their products distributed to computer stores. During the convention Philippe Kahn (PK) walked by the Softsel booth and stopped for a moment. I said hello to him (not knowing anything about him or his company). During the conversation, we talked about programming and developer tools. PK mentioned that he had a Pascal compiler that he was selling but that he was not looking to have it distributed (he was selling it direct to programmers using direct mail and an ad in Byte Magazine). Before he left the booth, he gave me two floppy disks containing copies of Turbo Pascal 1.0 (8" CP/M-80 and 5.25" PC-DOS). On one of my breaks, I took the floppy disks into a booth “office” that we had for meetings that also had an IBM PC. I was very excited to see what PK had, since I had learned Pascal in 1972 while I was a Computer Science major at Cal Poly San Luis Obispo. I put in the 5.25" floppy disk and started the Turbo.com executable, and up popped a menu with a few options. I selected the editor, typed in a short command line “Hello World” program and tried to run it. Amazingly, it compiled blazingly fast and the app started up. I had to tell my co-worker and friend Spencer Leyton about this Pascal compiler and how important it was for the CP/M and PC programming world. From that day on, Spencer talked with PK to try to convince him to allow Softsel to distribute Turbo Pascal to its network of computer store accounts. While it took a while to convince PK, Spencer eventually got PK to agree to a distribution contract. Spencer went on to get a job at Borland.
I continued working at Softsel for a while, and eventually Spencer convinced PK to interview me for a job. My job interview was on PK’s racing sailboat in Monterey Bay. We had dinner afterwards at the Crow’s Nest restaurant at the Santa Cruz harbor. I went back to Los Angeles and was given a job offer. I accepted the offer and started in June of 1985 (a little less than 2 years after I first met PK). I enjoyed the privilege of working with Anders Hejlsberg and a talented global team of dedicated employees for more than three decades (and about 4 million air miles). It seems unreal that it’s been almost 40 years since I first met PK and first tried Turbo Pascal. It’s also been more than 50 years since I first tried the Pascal language while I was in college. Back then you could build programs for two platforms: PC-DOS and CP/M-80. Most amazingly, you can still create “textbook” Pascal applications with every release of the Turbo, Borland, Kylix and Delphi Pascal compilers. And, with Delphi, you can create modern applications that can run on desktops, web servers, clouds […]

Read More