News

Grafana Labs Acquires Asserts.ai to Bring AI to Observability

At its ObservabilityCON event, Grafana Labs today announced it has acquired Asserts.ai to automate the configuration and customization of dashboards. In addition, the company is previewing an ability to apply artificial intelligence (AI) to incident management to make it simpler to surface the root cause of an issue. Sift is a diagnostic assistant in Grafana Cloud that automatically analyzes metrics, logs and tracing data, while Grafana Incident is a generative AI tool that summarizes incident timelines with a single click, creates metadata for dashboards and simplifies the writing of PromQL queries. Grafana Labs is also making generally available an Application Observability module for Grafana Cloud to provide a more holistic view of IT environments.

Finally, Grafana Beyla, an open source auto-instrumentation project that makes use of the extended Berkeley Packet Filter (eBPF), is now also generally available. That tool enables DevOps teams to collect telemetry data for an IT environment from a sandboxed environment running in the kernel of the operating system. That approach makes it simpler to automatically instrument an IT environment, but there are instances where DevOps teams managing complex applications will still need to collect telemetry data via the user space of an application.

Richi Hartmann, director of community for Grafana Labs, said collectively these additional capabilities will make it simpler to apply observability across increasingly complex IT environments. For example, the AI technologies developed by Asserts.ai will make it possible for DevOps teams to start sending data to Grafana Labs that will enable the cloud service to identify the applications and infrastructure being used. AI models will then be able to automatically generate a custom dashboard for that environment that DevOps teams can extend as they see fit, said Hartmann.
In general, machine learning algorithms and generative AI are starting to be more widely applied to observability. The ultimate goal is to automatically identify issues in ways that reduce the cognitive load required to manage complex IT environments, while also making it easier to launch queries that identify bottlenecks that could adversely impact application performance and availability. It’s not clear to what degree observability tools might eliminate the need for monitoring tools that track pre-defined metrics, but most DevOps teams will likely be using a mix of both for the foreseeable future. In the meantime, IT environments are only becoming more complex as various types of cloud-native applications are deployed alongside existing monolithic applications that are continuously being updated. The challenge is that the overall size of DevOps teams is not expanding, so there is a greater need for tools to streamline the management of DevOps workflows. AI will naturally play a larger role in enabling organizations to achieve that goal, but it’s not likely to replace the need for DevOps engineers, said Hartmann. At the same time, many DevOps teams will naturally gravitate toward organizations that make the tools they need to succeed available. Today, far too many manual tasks are increasing turnover as DevOps teams burn out. Organizations that want to hire and retain the best DevOps engineers will need to invest in AI. Of course, DevOps, at its core, has always been about ruthlessly automating as many manual tasks as possible. AI is only the latest in a series of advances that, over time, continue to make DevOps more accessible to IT professionals of […]

Read More

What Are The New Overloads For Ranges In C++14?

In C++14, there are useful new overloads of std::equal, std::mismatch, and std::is_permutation that take a pair of iterators for the second range. They can be used with the new C++ Builder 12, as well as with the 11.x and 10.x versions. In this post, we explain these new overloads that operate on full ranges of iterators.

What are the new overloads for ranges in C++14?

In C++14, there are new overloads of std::equal, std::mismatch, and std::is_permutation that take a pair of iterators for the second range, so we can pass them two full ranges, each with a beginning and an end. This means that a second range taken from, say, a std::list no longer has to be traversed up front just to determine its size. In C++11, std::equal, std::mismatch, and std::is_permutation had three parameters: First1 and Last1 for the first range, and First2 for the second range. In C++14, there is one more parameter you can use: Last2 for the second range.

What is the new overload for std::equal in C++14?

In C++11, std::equal is defined as below.

template <class _InIt1, class _InIt2>
inline bool equal(_InIt1 _First1, _InIt1 _Last1, _InIt2 _First2);

std::equal is defined in the <algorithm> header. It returns true if the range [First1, Last1) is equal to the range [First2, First2 + (Last1 - First1)), and it returns false if this is not satisfied.

std::string s = "abCba"; std::cout

Read More

Debug vcpkg portfiles in CMake script mode with Visual Studio Code

Ben McMorran, November 16th, 2023

We recently announced support for debugging the CMake language using the VS Code CMake Tools extension. Now in version 1.16 of the extension, you can fine-tune the debugger configuration using a launch.json file. This enables debugging in CMake script mode in addition to the existing debugging of CMake project generation.

CMake script mode is an alternative way of running CMake that does not generate a build system, but instead uses the CMake language as a general-purpose scripting language. While script mode is not widely used (although there are fun esoteric examples), it is the mechanism that powers vcpkg portfiles. vcpkg uses portfiles to know how to acquire, build, and install libraries. Through a combination of the new experimental "--x-cmake-debug" vcpkg option and CMake Tools, it's now possible to debug these portfiles in VS Code. This is helpful when adding a library to the vcpkg catalog.

Debugging vcpkg portfiles

As an example, we'll explore how to debug the vcpkg portfile for zlib. First, ensure you have CMake Tools 1.16 and CMake 3.27 installed. Next, clone the vcpkg repo and open it in VS Code.

> git clone https://github.com/microsoft/vcpkg
> cd vcpkg
> .\bootstrap-vcpkg.bat (or ./bootstrap-vcpkg.sh on Linux)
> code .

We'll use a tasks.json file to define a task that installs the zlib library with the --x-cmake-debug option, and a launch.json file to configure CMake Tools to attach to the zlib portfile as it's being run. In VS Code, create a new ".vscode" directory. Add the following tasks.json and launch.json files to this directory.
tasks.json

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Reinstall zlib",
            "type": "shell",
            "isBackground": true,
            "command": "& \"${workspaceFolder}/vcpkg.exe\" remove zlib; & \"${workspaceFolder}/vcpkg.exe\" install zlib --no-binarycaching --x-cmake-debug \\\\.\\pipe\\portfile_debugging",
            "problemMatcher": [
                {
                    "pattern": [
                        {
                            "regexp": "",
                            "file": 1,
                            "location": 2,
                            "message": 3
                        }
                    ],
                    "background": {
                        "activeOnStart": true,
                        "beginsPattern": ".",
                        "endsPattern": "Waiting for debugger client to connect"
                    }
                }
            ]
        }
    ]
}

The important parts of the configuration are the command, which removes zlib if it's already installed and then reinstalls it, and the endsPattern, which tells VS Code when it can consider the task complete. CMake will output the "Waiting for debugger client to connect" string as soon as it has finished initializing the debugger. The rest of the problemMatcher values are required to satisfy VS Code's launch configuration schema but are not used in this example.

launch.json

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "cmake",
            "request": "launch",
            "name": "Debug zlib portfile",
            "cmakeDebugType": "external",
            "pipeName": "\\\\.\\pipe\\portfile_debugging",
            "preLaunchTask": "Reinstall zlib"
        }
    ]
}

This CMake debugging configuration uses a cmakeDebugType of external, meaning the CMake Tools extension is not responsible for launching CMake (in this case vcpkg is launching CMake).
Notice how we use the same pipe name as the vcpkg install task. Finally, open ports/zlib/portfile.cmake and set a breakpoint on line 2 (the call to vcpkg_from_github). In the VS Code Run and Debug view, select the Debug zlib portfile configuration and click the play button to start debugging. VS Code will automatically run the vcpkg install task […]

Read More

Three Important C++14 Features That You Can Use In C++ Builder 12

Hello C++ Developers, Yilmaz here, community manager for LearnCPlusPlus.org. This week was another milestone for C++ developers: the new RAD Studio 12, C++ Builder 12, and Delphi 12 were released, packed full of great features, optimizations, and improvements. There was an amazing 2.5-hour presentation about RAD Studio 12 (I think it was the longest release presentation in the history of RAD Studio), covering many new features and some big changes on both the Delphi and C++ sides. One of the great features of C++ Builder 12 is the new Visual Assist (VA) with code completion, refactoring, navigation, and many other useful features for developers. Another big step is the inclusion of an initial Clang-based compiler: the new 64-bit bcc64x Clang (15.x) compiler (version 7.60), which supports the C++11, C++14, and C++17 standards, and partially C++20. There are many new features in the IDE, libraries, components, and compilers; please see below for details. I love the new logo designs too. They officially released all new C++ Builder, Delphi, and RAD Studio logos here: https://www.embarcadero.com/news/logo

This week we have three more post picks from LearnCPlusPlus.org that can be used with the new C++ Builder 12. The first post pick is about std::integral_constant and its () operator that comes with C++14. In the second post, we explain the standard user-defined literals in C++14, and in the third post, we explain containers, associative containers, and heterogeneous lookup in associative containers. Our educational LearnCPlusPlus.org site has a broad selection of new and unique posts with examples suitable for everyone from beginners to professionals. It is growing well thanks to you, and we have many new readers, thanks to your support! The site features a treasure trove of posts that are great for learning the features of modern C++ compilers, with very simple explanations and examples.
RAD Studio’s C++ Builder, Delphi, and their free community editions, C++ Builder CE and Delphi CE, are powerful tools for modern application development.

Where can I learn C++ and test these examples with a free C++ compiler?

If you don’t know anything about C++ or the C++ Builder IDE, don’t worry; we have a lot of great, easy-to-understand examples on the LearnCPlusPlus.org website, and they’re all completely free. Just visit the site and copy and paste any of the examples there into a new Console, VCL, or FMX project, depending on the type of post. We keep adding more C and C++ posts with sample code. In today’s round-up of recent posts on LearnCPlusPlus.org, we have new articles with very simple examples that can be used with the free C++ Builder 11 CE Community Edition, a professional version of C++ Builder, the free BCC32C and BCC32X C++ compilers, or the free Dev-C++. Read the FAQ notes on the CE license and then simply fill out the form to download C++ Builder 11 CE.

How to use modern C++ with C++ Builder?

Modern C++ has base class features that can be used with other modern features of C++. The std::integral_constant is the base class for the C++ type traits in C++11, and in C++14, std::integral_constant gained an operator () overload to return the constant value. In the first post, we explain what integral_constant and its () operator are in C++14. C++11 introduced new forms of literals using modified syntax and semantics to provide User-Defined Literals (UDL), also known as Extensible Literals. While there was the ability to use them the standard library […]

Read More

GitHub Aims to Expand Copilot Scope and Reach in 2024

GitHub is gearing up to launch Copilot Workspace next year, a platform that will leverage generative artificial intelligence (AI) to automatically propose a plan for building an application based on natural language descriptions typed into the GitHub Issues project management software. Revealed at the GitHub Universe 2023 conference, Copilot Workspace will generate editable documents via a single click that can be used to create code that developers can then visually inspect, edit and validate. Any errors discovered by application developers or the Copilot Workspace platform can also be automatically fixed. In addition, summaries of the project can automatically be created and shared across an application development team. GitHub CEO Thomas Dohmke told conference attendees this “revolutionary” approach will enable developers to employ AI as a “second brain.” In the meantime, GitHub is making an enterprise edition of Copilot available that can be trained using code connected to a private repository to ensure intellectual property is protected. GitHub is also moving to integrate GitHub Copilot with third-party developer tools, online services and knowledge outside GitHub by collaborating with, for example, DataStax, LaunchDarkly, Postman, HashiCorp and Datadog. GitHub is moving to make the generative AI capabilities it provides accessible beyond text editors. Copilot Chat, starting next month, can be accessed via a mobile application to foster collaboration by explaining concepts, suggesting code based on your open files and windows, detecting security vulnerabilities and finding and fixing code errors. Copilot Chat, based on GPT-4, will also be accessible across the GitHub website in addition to integrated development environments (IDEs) such as JetBrains and via a command-line interface (CLI). Generative AI is already having a massive impact on the rate at which applications are developed, but that code still needs to be reviewed.
ChatGPT is based on a general-purpose large language model (LLM) that is trained by pulling in code of varying quality from all across the web. As a result, code generated by the platform might contain vulnerabilities or be inefficient. In many cases, professional developers still prefer to write their own code. Of course, not every programming task requires the same level of coding expertise. In many instances, ChatGPT will generate, for example, a script that can be reused with confidence across a DevOps workflow. There is no shortage of mediocre developers who are now writing better code thanks to tools such as GitHub Copilot, and soon, domain-specific LLMs will make it possible to consistently write better code based on validated examples of code. The one thing that is certain is that the volume of code written by machines is only going to increase. The challenge will be managing all the DevOps pipelines that will be needed to move increased volumes of code into a production environment. There is no doubt that AI will be applied to the management of DevOps pipelines, but for the moment, at least, the pace at which AI is being applied to writing code is already exceeding the ability of DevOps teams to manage it.

Read More

How Can We Use The is_final Type Trait In C++14?

In C++11, the final specifier is used for a function that cannot be overridden in derived classes, or for a class that cannot be derived from, but there was no way to check whether a class is final. In C++14, there is a std::is_final type trait that can be used to detect whether a class is marked final. In this post, we explain how we can use the std::is_final type trait in C++14 and C++17.

What is the final specifier in modern C++?

The final specifier (keyword) specifies that a method cannot be overridden, or that a class cannot be derived from. Regarding virtual overrides, C++11 tends to tighten the rules to detect some problems that often arise. To achieve this goal, C++11 introduced a new contextual keyword, the final specifier.

What is the std::is_final type trait in C++14?

The std::is_final type trait (a UnaryTypeTrait), defined in the <type_traits> header, detects whether a class type is marked final. If the class is final, the member constant value equals true; if not, value is false. Here is the syntax (since C++14).

template <class T>
struct is_final;

How can we use the std::is_final type trait in C++14?

We can use the std::is_final type trait to check whether a class is marked final or not. Here is a simple example.

class myclass final { };

if (std::is_final<myclass>::value) std::cout

Read More

Unreal Engine and C++ Game Development Made Easy with Visual Studio 2022

David Li, November 14th, 2023

Introduction

Creating amazing games just got easier. We are very happy to announce the latest Unreal Engine integrations and powerful C++ productivity features in Visual Studio 2022. Our team has been tirelessly working to incorporate your feedback and bring even more features that will enhance your game development experience, whether you work on Unreal Engine or a proprietary engine. In this blog, we will explore how you can leverage the new Unreal Engine Test Adapter, which helps to streamline your testing process without leaving the IDE. Then, we will show you how you can code faster with Unreal Engine snippets and macro specifier suggestions, as well as view in-memory bitmaps. Next, we have included a range of core C++ productivity features and debugger enhancements that will benefit not only those working on Unreal Engine but also anyone who works on their own engine. Lastly, we will round out the blog with updates on C++ IntelliSense and debugger launch performance improvements. Most of these productivity features are available in Visual Studio 2022 version 17.8, while some are available in the latest previews. We are confident that these features will help you be more productive and enable you to create even more amazing games. Download Visual Studio 2022 17.8

Latest Unreal Engine Integrations

Setting Up Unreal Engine Integrations

Unreal Engine integrations will only show up when you are working on an Unreal Engine project. To ensure these features are active, double-check that the "IDE support for Unreal Engine" component is enabled in the "Game development with C++" workload in the Visual Studio Installer. Some integrations, such as Blueprints support and the Test Adapter, will require the free "Visual Studio Integration Tool" Unreal Engine plugin. Please see Visual Studio Tools for Unreal Engine for detailed setup instructions.
Unreal Engine Test Adapter

Special thanks to the folks at Rare who contributed tremendously to this feature. Streamline your testing process without leaving the IDE with the Unreal Engine Test Adapter. You can now discover, run, manage, and debug your Unreal Engine tests. In Visual Studio 2022 version 17.8, you will automatically see your Unreal Engine tests when you open Visual Studio. To see your tests, you can open Test Explorer with View > Test Explorer. The latest version of our free Visual Studio Tools for Unreal Engine plugin is required to use the Unreal Engine Test Adapter. In addition, ensure the "Unreal Engine Test Adapter" component in the "Game development with C++" workload is enabled in the Visual Studio Installer.

Unreal Engine Code Snippets

Write code more efficiently with Unreal Engine code snippets. In Visual Studio 2022 version 17.8, you can find common Unreal Engine constructs as snippets in your member list. To begin, enter the name of any Unreal Engine construct, such as "uclass". Then, press Tab or Enter to expand the snippet. We have also included exported versions of UCLASS (uclass, uclassexported), UINTERFACE (uinterface, uinterfaceexported), and USTRUCT (ustruct, ustructexported) for those working with exported APIs and plugins. In addition, we have included macros such as SWidget (swidget), TActorRange (tactorrange), TObjectRange (tobjectrange), and WITH_EDITOR (witheditor) based on your feedback.

List of Supported Snippets: uclass, uclassexported, uenum, ufunction, uinterface, uinterfaceexported, uproperty, ustruct, ustructexported, uelog, swidget, tactorrange, tobjectrange, witheditor […]

Read More

NetApp Extends Microsoft Alliance to Include CloudOps Tools

NetApp this week extended its alliance with Microsoft to now include its CloudOps portfolio of tools for optimizing cloud computing environments. Previously, the alliance between the two companies focused on data management, but it is now expanding to include tools to deploy workloads, improve performance and reduce costs using machine learning algorithms across both instances of virtual machines and the Azure Kubernetes Service (AKS). Kevin McGrath, vice president of Spot by NetApp, said in more challenging economic times, there’s a lot more focus on programmatically reining in cloud costs using FinOps best practices within the context of a DevOps workflow. Organizations are also starting to create platform engineering teams to more efficiently manage DevOps workflows at scale across hybrid cloud computing environments, he added. For years, developers have been provisioning cloud infrastructure resources with little to no oversight. Unfortunately, developers are also prone to over-provision infrastructure resources to ensure maximum application availability. Many of those infrastructure resources never wind up being consumed by the application, so the cost of cloud computing winds up becoming inflated. IT leaders are also increasingly required to make sure cloud costs are more predictable. Sudden spikes in consumption resulting in higher monthly bills are an unwelcome surprise to finance teams that are now required to manage costs more closely. Ongoing advances in artificial intelligence (AI) should make it easier to predict costs across highly dynamic cloud computing environments. Navigating all the pricing options that cloud service providers make available is challenging. IT teams need to clearly understand the attributes of each workload to ensure optimal usage of cloud infrastructure resources. Less clear is the degree to which IT teams are pitting cloud service providers against one another.
Pricing across the cloud services that most organizations use today is fairly consistent. Most organizations that deploy workloads in the cloud tend to run the bulk of them on the same service because they lack the internal expertise needed to manage multiple clouds equally well. There may be some workloads running on additional clouds, but enterprise licensing agreements reward customers for running more workloads on a single cloud. The only way to really optimize cloud spending is to shift workloads to less expensive tiers of service that might only be available for a relatively limited amount of time. One way or another, the management of cloud computing is finally starting to mature. As the percentage of workloads that organizations have running in the cloud steadily increases, IT teams are becoming more adept at both maximizing application performance and the associated return on investment (ROI). Each IT organization will need to decide for itself how best to manage cloud computing environments as it continues to build and deploy cloud-native applications alongside legacy monolithic applications running on virtual machines, but NetApp is betting the need for tools such as CloudOps will increase as cloud computing environments become more complex. The challenge, as always, is finding and retaining the talent needed to manage cloud computing environments when every other organization is looking for that same expertise.

Read More

What Are The New Begin End Iterators In C++14?

Iterators are one of the most useful features of containers in modern C++. We mostly use them with vectors, maps, strings, and other C++ containers. In C++11, the begin() and end() iterators are used to define the start and the end of an iteration, mostly in for loops. In C++14, there are new additions to the global std::begin / std::end functions, and in this post, we explain these new begin/end iterators.

What are the begin/end iterators in C++11 and beyond?

In modern C++, containers are data structures that store collections of elements. They are very useful for iterating over and searching data with their methods and properties. An iterator is an object that points to an element in a range of elements (i.e., the characters of a string or the members of a vector). We can use iterators to move through the elements of such a range using a set of operators, for example the ++, --, and * operators. Iteration can be done with the begin/end iterators: the begin() method returns an iterator pointing to the first element in the vector, and the end() method returns an iterator pointing to the theoretical element that follows the last element in the vector. Here is a simple example of how we can use the begin/end iterators in a for loop.

for (auto vi = vec.begin(); vi != vec.end(); vi++) std::cout

Read More

How to Solve the GPU Shortage Problem With Automation

GPU instances have never been as precious and sought-after as they have been since generative AI captured the industry’s attention. Whether it’s due to broken supply chains or the sudden demand spike, one thing is clear: getting a GPU-powered virtual machine is harder than ever, even if a team is fishing in the relatively large pond of the top three cloud providers. One analysis confirmed “a huge supply shortage of NVIDIA GPUs and networking equipment from Broadcom and NVIDIA due to a massive spike in demand.” Even the company behind the rise of generative AI, OpenAI, suffers from a lack of GPUs. And companies have started adopting rather unusual tactics to get their hands on these machines (like repurposing old video gaming chips). What can teams do when facing a quota issue and the cloud provider runs out of GPU-based instances? And once they somehow score the right instance, how can they make sure no GPUs go to waste? Automation is the answer. Teams can use it to accomplish two goals: find the best GPU instances for their needs, and maximize their utilization to get more bang for their buck.

Automation Makes Finding GPU Instances Easier

The three major cloud providers offer many types and sizes of GPU-powered instances, and they’re constantly rolling out new ones; an excellent example is AWS P5, launched in July 2023. To give a complete picture, here’s an overview of instance families with GPUs from AWS, Google Cloud and Microsoft Azure:

AWS: P3, P4d, G3, G4 (this group includes G4dn and G4ad instances), G5. Note: AWS also offers Inferentia machines optimized for deep learning inference apps and Trainium for deep learning training of 100B+ parameter models.

Google Cloud

Microsoft Azure: NCv3-series, NC T4_v3-series, ND A100 v4-series, NDm A100 v4-series

When picking instances manually, teams may easily miss out on opportunities to snatch up golden GPUs from the market.
Cloud automation solutions help them find a much larger supply of GPU instances with the right performance and cost parameters.

Considering GPU Spot Instances

Spot instances offer significant discounts, even 90% off on-demand rates, but they come at a price: the potential interruptions make them a risky choice for important jobs. However, running some jobs on GPU spot instances is a good idea, as the discounts translate into significant savings on training. ML training usually takes a very long time, from hours to even weeks. If interruptions occur, the deep learning job must start over, resulting in significant data loss and higher costs. Automation can prevent that, allowing teams to get attractively priced GPUs still available on the market to cut training and inference expenses while reducing the risk of interruptions. In machine learning, checkpointing is an important practice that allows for saving model states at different intervals during training. This practice is especially beneficial in lengthy and resource-intensive training procedures, enabling the resumption of training from a checkpoint in case of interruptions rather than starting anew. Furthermore, checkpointing facilitates the evaluation of models at different stages of training, which can be enlightening for understanding the training dynamics.

Zoom in on Checkpointing

PyTorch, a popular ML framework, provides native functionalities for checkpointing models and optimizers during training. Additionally, higher-level libraries such as PyTorch Lightning abstract away much of the boilerplate code associated with training, evaluation, and checkpointing in PyTorch. Let’s take a […]
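The checkpoint-and-resume pattern described above can be sketched in a framework-agnostic way. In this minimal example the function names are illustrative; in real PyTorch code you would use torch.save and torch.load on model and optimizer state_dicts, but the save-every-N-steps and resume-from-last-checkpoint logic is the same:

```python
import os
import pickle
import tempfile

def save_checkpoint(path, step, state):
    """Atomically persist training state so a spot interruption loses
    at most one checkpoint interval of work."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)  # rename is atomic, so readers never see a partial file

def load_checkpoint(path):
    """Return (step, state), or (0, None) when starting fresh."""
    if not os.path.exists(path):
        return 0, None
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["step"], ckpt["state"]

def train(total_steps, ckpt_path, every=10):
    """Run (or resume) a toy training loop, checkpointing every N steps."""
    step, state = load_checkpoint(ckpt_path)
    if state is None:
        state = {"loss_sum": 0.0}
    while step < total_steps:
        step += 1
        state["loss_sum"] += 1.0 / step  # stand-in for a real training step
        if step % every == 0 or step == total_steps:
            save_checkpoint(ckpt_path, step, state)
    return step, state
```

If the process is killed mid-run, calling train() again with the same checkpoint path picks up from the last saved step instead of step 0, which is exactly what makes spot instances tolerable for long training jobs.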

Read More