News

How Event-Driven Architectures Drive Real-Time Operations

People, events, the human brain—in fact, the whole world—operate in real-time, but businesses have struggled to keep up. With the help of event-driven architecture (EDA) and the open API economy, businesses can now do the same. After years of geopolitical events affecting how they operate, many businesses are starting to uncover real value by truly being able to operate in real-time. Whether in retail, manufacturing, energy and resources, or financial services, locating and responding to vital issues within a company’s supply chains or product lines in real-time is key to success. Amazon CTO Dr. Werner Vogels said that “the world is event driven” in his keynote at AWS re:Invent in December 2022, and new IDC research finds that nine out of 10 of the world’s largest companies will deploy real-time intelligence driven by event-streaming technologies by 2025.

But What’s the Secret Behind Such Success?

A recent IDC InfoBrief, sponsored by Solace, surveyed over 300 enterprise IT professionals in North America, Asia and Europe, all of whom work for large companies implementing or considering EDA. The results are quite telling: an overwhelming 93% of respondents at companies that have deployed EDA across multiple use cases said EDA has either met or exceeded their expectations. Beyond the technical advantages of EDA, most businesses also see clear business benefits: 23% of respondents reported increased productivity, 22% reported better customer acquisition, and 18% saw revenues increase as a result of EDA efforts.

1. Get Support From the Top to Ensure Alignment Throughout

Expanding the footprint of EDA across the enterprise is a journey, and every journey starts by assembling those who are critical to its overall success.
Business sponsorship and engaging key stakeholders are vital, especially in the early days of EDA adoption: 56% of respondents in the early EDA stages cited this as a priority, when ROI and business benefits may not be immediately clear. The impact of well-aligned C-suite, operational and technical teams reflects business-level digital maturity, too. With 35% of respondents at an advanced stage of EDA rollout saying C-level support was critical, it comes as no surprise that respondents with higher levels of EDA maturity also have higher levels of overall digital maturity, including digital strategy and change management support.

2. Tackle Complexities Head-On With the Backing of IT

As EDA becomes more pervasive across an organization, demands on IT become more sophisticated, requiring a deepening of EDA skills in the IT organization, notably among DevOps teams, developers and architects. More than one-third (36.1%) of respondents cited a lack of skills to execute EDA as a hurdle to adoption. Approaches to logging, governance and oversight (30.7%) can also become increasingly challenging and must be thought through carefully. This is where EDA providers themselves need to step up and provide adequate training and a certification path for architects, DevOps engineers and developers looking to gain the fundamental knowledge and skills to design and implement event-driven systems. This should include technical details such as understanding various design patterns for EDA, microservices choreography versus orchestration, the saga pattern and RESTful microservices. Education should also clearly define and demonstrate key concepts and tools for EDA success, such as event portals, topic hierarchy best practices and the event mesh. […]

Read More

Learn How to Use New And Delete Elisions In C++

The C++14 standard (and later) brings a lot of useful smart pointers and improvements to them. They help us avoid the mistake of incorrectly freeing the memory addressed by pointers. Modern C++ still has the new and delete operators, which are used to allocate and free objects in memory, and in this post we explain how to use them. Can we still use new and delete in modern C++, or are they obsolete? Allocating memory and freeing it safely is hard if you are programming a very big application. In modern C++, there are smart pointers that help avoid the mistake of incorrectly freeing the memory addressed by pointers. Smart pointers make it easy to define pointers; they came with C++11. The most used types of C++ smart pointer are unique_ptr, shared_ptr and weak_ptr (auto_ptr was deprecated in C++11 and removed in C++17). Smart pointers are preferable to raw pointers in many different scenarios, but there is still a need to use the new and delete operators in C++14 and above. When we develop code that requires in-place construction, we need to use new and, possibly, delete operations. They are useful in a memory pool, in an allocator, in a tagged variant, or when writing a binary message into a buffer. Sometimes we can use new and delete for some containers if we want to use raw pointers for storage. Modern C++ has a lot of choices for faster and safer memory operations; generally, developers choose unique_ptr/make_unique and make_shared rather than raw calls to new and delete. Even though we have a lot of standard smart pointers and facilities, we still need the new and/or delete operators in places. How can we use them in C++?

What is the new operator in C++?

The new operator in C++ denotes a request for memory allocation on the free store. If there is sufficient memory, the new operator initializes the memory and returns the address of the newly allocated and initialized memory to the pointer variable.
pointer_variable = new data_type;

Here, data_type can be any built-in data type, i.e., basic C/C++ types or arrays, or any class type, i.e., a class, structure or union. Now, let’s see how we can use the new operator. We can simply use auto and new to create a new pointer and space for its object as below:

auto *ptr = new long int;

or, in the old style, you can still use the type in the definition instead of auto:

long int *ptr = new long int;

One of the great features of pointers is that we don’t need to allocate them at the beginning; we can set them to nullptr (or NULL in older code) and allocate them in a later step:

long int *ptr = nullptr;
…
ptr = new long int;

What is the delete operator in C++?

The delete operator in C++ is used to free dynamically allocated memory pointed to by a pointer variable. Here is the syntax for the delete operator:

delete pointer_variable;

We can use delete[] for pointer arrays; here is the syntax:

delete[] pointer_array;

Here is how we can use delete to delete a pointer, and this is how we can delete pointer arrays:

int *arr […]

Read More

Why AIOps is Critical for Networks

Speaker 1: This is Techstrong TV. Mitch Ashley: It’s my great pleasure to be joined by Andrew Colby. Andrew is VP of AIOps at Vitria. Welcome, Andrew. Andrew Colby: Good afternoon, Mitch. And thank you. Mitch Ashley: It’s a great topic. I’m excited to talk with you about it. We could go down the share-war-stories-in-telco-experience path, which really could be about 10 episodes of a different show, but today in the telco environment, or just in the business environment in general, the economic conditions, competitive pressures, looking for areas where we can get more for less, there are a lot of different parameters that have shifted or changed or maybe tightened that we’re currently working within. I’d love to get your perspective on that. Andrew Colby: Certainly, and thank you. Yeah, I’d say we see cautious optimism. Obviously, I’m based in the US in the DC Metro area, Maryland. And in the US, the government entities and quasi-governmental entities have been tightening the economic structure in order to tame inflation. Fortunately, that has not driven our economy into the recessionary effect that was feared, but people are still cautious, businesses are still cautious. That said, it’s hard to hire people and it’s really hard to hire technical people. So a lot of companies are continuing to look toward how to leverage technologies and automation to build efficiency so that they can do more with either the same number of people or re-task their people to higher-value purposes, and let the technology do some of the more menial and mundane tasks. And we can explore this a little bit, especially in these new, complex service delivery and network environments.
It’s very difficult for me to imagine how an engineer who’s gone through anywhere from two to eight years of college education is going to really be happy spending their days collecting a lot of data across network, container management, VM and other infrastructure systems to figure out what’s going on. I mean, really, that’s where a lot of the automation provides a significant amount of value: to let the engineers do the smart, difficult things that we want humans to do. Mitch Ashley: And a lot of pressures around mean time to recovery, even looking at resiliency, how do we stand up under a stressful situation, whether it be a security attack that might be going on or some unobserved condition that our systems and networks have never been under? Andrew Colby: Oh, there’s so much of that. So much is changing. It’s not just a person like you or me behind a smartphone who can actually report that there’s a problem, but it’s sensors and equipment that won’t necessarily report right away, so it needs to be detected. So that’s a whole other additional dimension that service providers and large enterprise IT organizations are under, which is to be able to have this kind of real-time awareness of what’s going on. Whether the service is real-time, like the video conference that we’re on, or not, there really is a desire and expectation to have real-time awareness of the service delivery to be able to detect what’s going on, react to it, address it before the user, whoever that is, the customer, the employee […]

Read More

What Is make_unique And How To Use With unique_ptr In C++

Allocating memory and freeing it safely is hard if you are programming a very big application. In modern C++ there are smart pointers that help avoid the mistake of incorrectly freeing the memory addressed by pointers. Smart pointers make it easier to define and manage pointers; they came with C++11. The most used types of C++ smart pointer are unique_ptr, shared_ptr and weak_ptr (auto_ptr was deprecated in C++11 and removed in C++17). In C++14, there is make_unique (std::make_unique), which can be used to allocate memory for unique_ptr smart pointers.

What is a smart pointer and what is unique_ptr in C++?

In computer science, all data and operations during runtime are stored in the memory of our computation machines (computers, IoT devices or other microdevices). This memory is generally RAM (random-access memory), which allows data items to be read or written in almost the same amount of time irrespective of their physical location inside the memory. Allocating memory and freeing it safely is really hard if you are programming a very big application. C and C++ were notorious for making this problem even more difficult, since earlier coding styles and C++ standards had fewer ways of helping developers deal with pointers, and it was fairly easy to make the mistake of manipulating a pointer that had become invalid. In modern C++, smart pointers, introduced with the C++11 standard, help avoid the mistake of incorrectly freeing the memory addressed by pointers, which was often a source of bugs that could be difficult to track down. std::unique_ptr is a smart pointer that owns and manages another object through a pointer; when the unique_ptr goes out of scope, C++ disposes of that object automatically. If you want safe, exclusive ownership in your memory operations, use std::unique_ptr.
Here are more details on how you can use unique_ptr in C++.

What is std::make_unique in C++?

std::make_unique is a new feature that comes with C++14. It is used to allocate memory for unique_ptr smart pointers, thus managing the lifetime of dynamically allocated objects. The array form of std::make_unique constructs an array of the given dynamic size in memory, and the array elements are value-initialized. From C++14 until C++20, the std::make_unique function is declared as a template as below:

template< class T >
unique_ptr<T> make_unique( std::size_t size );

template< class T, class... Args >
unique_ptr<T> make_unique( Args&&... args );

The std::make_unique function does not have an allocator-aware counterpart; a hypothetical allocate_unique would have to invent a deleter type that contains an allocator object and invokes both destroy and deallocate in its operator().

How to use std::make_unique with std::unique_ptr in C++?

Let’s assume you have a simple class named myclass. We can create a unique pointer (a smart pointer) to this class as below:

std::unique_ptr<myclass> p = std::make_unique<myclass>();

Or it can be used with a struct as below:

struct st_xy {
    int x, y;
};

We can create a unique pointer as we illustrate in the example below:

std::unique_ptr<st_xy> xy = std::make_unique<st_xy>();

We can use the auto keyword, and we can define the number of array elements like so:

auto ui = std::make_unique<int[]>(5);

What are the advantages of using std::make_unique in C++?

The std::make_unique function is useful when allocating a smart pointer, because: We don’t need to think about matching new/delete or new[]/delete[] calls. When […]

Read More

How To Use std::exchange In C++

C++ is a very precise programming language that allows you to use memory operations in a wide variety of ways. Mastering efficient memory handling will improve your app’s performance at run time and can result in faster applications with optimal memory usage. One of the many useful features that come with the C++14 standard is the std::exchange algorithm, defined in the <utility> header. In this post, we explain how to use std::exchange in C++.

What is std::exchange in C++?

The std::exchange algorithm is defined in the <utility> header and comes with C++14. std::exchange assigns a new value to a given object and returns the old value of that object. Here is the template definition (since C++14, until C++20):

template< class T, class U = T >
T exchange( T& object, U&& new_value );

How to use std::exchange in C++?

We can use std::exchange to set a variable’s value as below, obtaining the old value from the return value of std::exchange:

int x = 500;
int y = std::exchange(x, 333);

After these lines, x is 333 and y is the old value of x, 500. We can use std::exchange to set the values of containers (e.g., vectors) as shown below:

std::vector<int> vec;
std::exchange( vec, { 10, 20, 30, 40, 50 } );

In addition, we can copy the old values to another container as we show in the following example:

std::vector<int> vec, vec0;
vec0 = std::exchange( vec, { 10, 20, 30, 40, 50 } );

We can use std::exchange with function pointers, too. For example, assume we have a myf1() function and a function pointer myf; this is how we can make myf refer to myf1:

void (*myf)();
std::exchange(myf, myf1); // myf is now myf1
myf();

Is there a full example of how to use std::exchange in C++?

Here is a full example of how to use std::exchange in C++:

#include <iostream>
#include <utility>

void myf1() { std::cout

Read More

Open Sourcing IFC SDK for C++ Modules

GDR | October 3rd, 2023

Back with VS2019 version 16.10, we announced a complete implementation of C++ Modules (and, generally, of all C++20 features) across the MSVC compiler toolset, static analysis, IntelliSense, and debugger. Implementing Modules requires a principled intermediate representation of C++ source programs. Today, we are thrilled to announce the availability of the IFC SDK, a Microsoft implementation of the IFC Specification. This is an open-source project under the Apache 2.0 license with LLVM exception. The IFC Specification formalizes C++ programs as data amenable to semantics-based manipulation. We are open sourcing the IFC SDK in the hope of accelerating widespread use and implementation of C++ Modules across the C++ community and the C++ tools ecosystem.

What Is an IFC and What Is It Good For?

A popular compilation strategy for a C++ Module interface (or header unit) source file is to translate the source file, exactly once, into an internal representation suitable for reuse in other source files. That intermediate form is generally referred to as a Built Module Interface (BMI). An IFC file is an implementation of the BMI concept: a persistent representation of all information pertaining to the semantics of a C++ program. In addition to direct use by compilers to implement C++ Modules, an IFC can also be inspected by non-compiler tools for semantics-based exploration of C++ source files. Such tools help explorers understand how C++ source-level constructs can be represented in a form suitable for a C++ Modules implementation.

IFC SDK Scope

The IFC SDK is currently an experimental project. It focuses on providing datatypes and source code supporting reading and writing IFC files. This SDK is the same tested implementation used in the MSVC compiler front-end to implement C++ Modules. Consequently, while the GitHub OSS repo is “experimental”, the code is far from it.
Those datatypes can be memory-mapped directly for scalable and efficient processing. The project also features utilities for formatting and viewing IFC files. We welcome, and are actively looking for, contributions that fix gaps between the SDK and the Specification, as well as changes required to support C++ standards from C++20 onward.

Call to Action

We encourage you to check out the IFC SDK repo, build it, experiment with it, and contribute. We would like to hear from you. If you’re a C++ compiler writer interested in supporting the IFC file format, please reach out at the IFC Specification repo and share feedback, suggestions, or bug fixes. You can also reach us on Twitter (@VisualC), or via email at visualcpp@microsoft.com.

GDR, Software Architect, DevDiv

Read More

Build Reliable and Secure C++ programs — Microsoft Learn

Herb Sutter | October 2nd, 2023

“The world is built on C and C++” is no overstatement: C and C++ are foundational languages for our global society, and they sit among the world’s top 10 most heavily used languages now and for the foreseeable future. Visual Studio has always supported many programming languages, and we encourage new languages and experiments; diversity and innovation are healthy and help progress the state of the art in software engineering. In Visual Studio we also remain heavily invested long-term in providing the best C and C++ tools and in continuing to actively participate in ISO C++ standardization, because we know that our internal and external customers are relying on these languages for their success, now and for many years to come. As cyberattacks and cybercrimes increase, we have to defend software that is under attack. Malicious actors target many attack vectors, one of which is memory safety vulnerabilities. C and C++ do not guarantee memory safety by default, but you can build reliable and secure software in C++ using additional libraries and tools. And regardless of the programming languages your project uses, you need to know how to defend against the many other attack vectors besides memory safety that bad actors use daily to attack software written in all languages. To that end, we’ve just published a new document on Microsoft Learn to help our customers know and use the tools we provide in Visual Studio, Azure DevOps, and GitHub to ship reliable and secure software. Most of the advice applies to all languages, but this document has a specific focus on C and C++.
It was coauthored by many subject-matter experts in programming languages and software security from across Microsoft: Build reliable and secure C++ programs | Microsoft Learn. It is a section-by-section companion to the United States government publication NISTIR 8397: Guidelines on Minimum Standards for Developer Verification of Software. NISTIR 8397 contains excellent guidance on how to build reliable and secure software in any programming language, arranged in 11 sections or topics. For each NISTIR 8397 section, the Learn document summarizes how to use Microsoft developer products for C++ and other languages to meet that section’s security needs, and provides guidance to get the most value in each area. Most of NISTIR 8397’s guidance applies to all software; for example, all software should protect its secrets, use the latest versions of tools, do automated testing, use CI/CD, verify its bill of materials, and so on. But for C++ memory safety specifically, see the Learn document’s sections 2.3, 2.5, and 2.9 for a detailed list of analyses (e.g., CodeQL, BinSkim, /analyze), safe libraries (e.g., GSL, SafeInt), hardening compiler switches (e.g., /sdl, /GS, /guard, /W4, /WX, /Qspectre), and tools (e.g., AddressSanitizer, libFuzzer) that your project can and should use regularly to harden your C++ programs. If your project uses C++ and you find items listed that you’re not using yet, don’t wait: start adding them to your project today! Safety and security are essential. We want to help all of our customers know about and use the state-of-the-art tools we provide in Visual Studio, Azure DevOps, and GitHub for writing reliable and secure software in our device-and-cloud […]

Read More

Senser Unveils AIOps Platform Using eBPF to Collect Data

Senser emerged from stealth this week to launch an artificial intelligence for IT operations (AIOps) platform that leverages extended Berkeley Packet Filter (eBPF) running in the kernel of Linux operating systems to collect data from IT environments. Fresh from raising $9.5 million in funding, Senser CEO Amir Krayden said the company’s namesake platform then applies machine learning algorithms to that data to identify issues that could lead to outages. Those insights are surfaced using graph technology to make it simpler to both observe IT environments and triage issues at scale, because the AIOps platform runs its collection processes at the kernel level rather than in user space. The approach provides IT teams with a more efficient and holistic approach to observability, at a level of scale legacy platforms can’t achieve, said Krayden. The use of machine learning algorithms also reduces the cognitive load on DevOps teams, because issues involving, for example, performance degradations are automatically surfaced, he added. In addition, the company is working toward adding generative AI capabilities to provide summaries that explain what IT events have occurred, noted Krayden. In effect, eBPF changes the way operating systems are used, because it enables networking, storage and observability software to run safely inside the kernel and scale to much higher levels of throughput than software running in user space. That’s especially critical for observability and AIOps platforms that need to dynamically process massive amounts of data in near-real-time. As the number of organizations running the latest versions of Linux continues to increase, more hands-on experience with eBPF will be gained. IT teams may not need to concern themselves with what is occurring in the kernel of the operating system, but they do need to understand how eBPF ultimately reduces the total cost of running IT at scale.
AI and graph technology, in combination with eBPF, will fundamentally change how IT is implemented and managed. The current complexity of application environments already exceeds the ability of IT teams to cost-effectively manage them at scale, so the need for a different approach is apparent. Many IT environments are already too complex for IT personnel to manage without the help of some form of AI. It’s not clear precisely how much of IT management AI will automate, but the need for humans to manage and supervise these environments is not likely to disappear any time soon. However, the level of scale at which an IT environment can be effectively managed is changing as AI makes it easier to identify issues and understand their impact. Too often today, there are simply too many dependencies within an IT environment to keep track of using legacy monitoring tools that only track a set of pre-defined metrics. It may be a while before AI is pervasively employed across IT environments, but it’s now more a question of when rather than if. The issue now is determining where the interface lies between the humans and the machines that are jointly managing IT environments.

Read More

Raspberry Pi 5: Faster, Better, Stronger — Spendier

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters. In a cheeky extra post this week: Everyone’s favorite single-board ARM computer, the Raspberry Pi, has a new generation coming soon. Compared to the ’4, RPi5 has double the performance, quadruple the base RAM and far more capable I/O.

Analysis: And you’ll even be able to buy one

The pandemic completely messed up the Raspberry Pi Foundation’s supply chains, meaning they had to focus on supplying companies who’d forward-bought the devices. This time, Eben Upton’s crew are trying to get back to their roots, promising—for the first couple of months—to sell RPi5s only to individuals.

What’s the story? Alaina Yee reports—“Raspberry Pi 5 just got announced”: “I can’t wait”

Forget the holiday pie, this is what I want on my table for Thanksgiving. … It looks totally badass. … Not only does the Raspberry Pi 5 appear ready to deliver a sizable step up in performance compared to its 2019 predecessor, but its new silicon was designed in-house. … The Raspberry Pi 5 is leaning hard into high-octane mini-computing. … You can expect the Raspberry Pi 5 to be about two to three times faster. Memory bandwidth also doubles. … And … a new official first-party operating system will be launching … in mid-October. Called Raspberry Pi OS, it’s based on the Linux Debian distro, as well as the Raspbian derivative that’s existed for years. … I can’t wait.

Speeds and feeds? Brad Linder’s got ’em—“Raspberry Pi 5 offers 2X the performance”: “4x ARM Cortex-A76”

The new Raspberry Pi 5 is a single-board computer that’s a major upgrade over the Raspberry Pi 4 … in just about every way. … At launch, there will be two configurations available: a model with 4GB of RAM that sells for $60 and an 8GB version priced at $80.
That means the starting model has twice as much RAM as a $35 Raspberry Pi 4. … At the heart of the new computer is a new … 16nm chip featuring 4x ARM Cortex-A76 CPU cores @ 2.4 GHz, 512KB per-core L2 cache, 2MB L3 cache, and VideoCore VII graphics with support for dual 4K/60Hz HDMI displays. [It] also features 32-bit LPDDR4X 4267MT/s memory … 2x micro HDMI (4K/60Hz), 2x USB 3.0 Type-A, 2x USB 2.0 Type-A, 1x Gigabit Ethernet with PoE support, 1x USB-C power input, 1x microSD card reader. … There are also two 4-lane MIPI interfaces.

Horse’s mouth? Eben Upton—“Introducing: Raspberry Pi 5!”: “We’re incredibly grateful”

Virtually every aspect of the platform has been upgraded, delivering a no-compromises user experience. … And it’s the first Raspberry Pi computer to feature silicon designed in‑house here in Cambridge, UK. … Broadcom’s VideoCore VII [is also] developed here. … Like all flagship Raspberry Pi products [it’s] built at the Sony UK Technology Centre in Pencoed, South Wales. We have been working with Sony since the launch of the first Raspberry Pi … in 2012, and we’re firm believers in the benefits of manufacturing our products within a few hours’ drive of our engineering design centre in Cambridge. … We expect the first units to ship by the end of October. … We’re incredibly grateful to the community of makers and hackers who make Raspberry Pi what it is. [So,] we’re going to ringfence all of the Raspberry Pi 5s we sell until at least the end of […]

Read More

What Are The Relaxed Constexpr Restrictions In C++?

In C++, the constexpr specifier is used to declare a function or variable whose value can be evaluated at compile time, which avoids computation at run time. This useful feature had some restrictions in C++11; these are relaxed in C++14, and the change is known as relaxed constexpr restrictions. In this post, we explain what the relaxed constexpr restrictions are in modern C++.

What is the constexpr specifier in modern C++?

The C++11 standard comes with the introduction of generalized constexpr functions. The constexpr specifier is applied to a function declaration and improves performance by allowing computations to be done at compile time rather than at run time. Return values of constexpr functions can be consumed by operations that require constant expressions, such as an integer template argument. Here is the syntax:

constexpr return_type function_name( parameters );

Here is an example of how to use constexpr in C++:

constexpr double sq(const double x)
{
    return ( x*x );
}

and it can be used as shown below:

constexpr double y = sq(13.3);

Note that here we used a constant value, 13.3, that is given in the code, so the result y will be calculated during compilation as 176.89. We can say that this line will effectively be compiled as shown below:

const double y = 176.89;

Is there a simple constexpr specifier example in modern C++?

Here we can sum up all of the above in an example:

#include <iostream>

// C++11, a simple constexpr example
constexpr double sq(const double x)
{
    return ( x*x );
}

int main()
{
    constexpr double y = sq(13.3);
    std::cout

Read More