News

Coexisting With AI: The Future of Software Testing

If 2023 was the year of artificial intelligence (AI), then 2024 is going to be the year of human coexistence with the technology. Since the release of OpenAI’s ChatGPT in November 2022, there has been a steady stream of competing large language models (LLMs) and integrated applications for specific tasks, including content generation, image processing and code production. It’s no longer a question of if AI will be adopted; we have moved on to the question of how best to bring this technology into our daily lives. These are my predictions for the software quality assurance testing industry for 2024.

Automated testing will become a necessity, not a choice. Developers will lean heavily on AI-powered copilot tools, producing more code faster. That means huge increases in the risk profile of every software release. In 2024, testers will respond by embracing AI-powered testing tools to keep up with developers using AI-powered tools and to avoid becoming the bottleneck in the software development life cycle (SDLC).

The role of the tester will increase and evolve. While AI is helping software engineers and test automation engineers produce more code faster, it still requires the highly skilled eye of an experienced engineer to determine how good and usable the code or test is. In 2024, there will be high demand for skilled workers with specific domain knowledge who can parse the AI-generated output and determine whether it is coherent and useful within the specific business application. Although this is necessary for developers and testers to start trusting what the AI generates, they should be wary of spending excessive time constructing AI prompts, as this can ultimately decrease performance. For instance, a developer could easily spend most of their time validating the AI-generated output instead of testing the release that will be deployed to users.
Being able to distinguish between when to rely on AI and when to forgo AI’s help will be key to streamlining the workflow. Eventually, we’re going to start seeing AI-powered testing tools for non-coders that focus on achieving repeatability, dependability and scalability so that testers can truly use AI as their primary testing tool and ultimately boost their productivity.

The rise of protected, offline LLMs and the manual tester. As enterprise companies show signs they don’t trust public LLMs (e.g., ChatGPT, Bard, etc.) with their data and intellectual property (IP), they’re starting to build and deploy their own private LLMs behind secured firewalls. Fine-tuning those LLMs with domain-specific data (e.g., banking, health care, etc.) will require a great volume of testing. This promises a resurgence in the role of the manual tester, who will have an increasingly important part to play in that process because they possess deep domain knowledge that is increasingly scarce across enterprises.

As we stand on the brink of 2024, it is evident that the synergy between artificial intelligence and human expertise will be the cornerstone of software quality engineering. Human testers must learn to harness the power of AI while contributing the irreplaceable nuance of human judgment. The year ahead promises to be one where human ingenuity collaborates with AI’s efficiency to ensure that the software we rely on is not only functional but also reliable and secure. There will likely be a concerted effort to refine these […]

Read More

The architecture of today’s LLM applications

We want to empower you to experiment with LLM models, build your own applications, and discover untapped problem spaces. That’s why we sat down with GitHub’s Alireza Goudarzi, a senior machine learning researcher, and Albert Ziegler, a principal machine learning engineer, to discuss the emerging architecture of today’s LLMs. In this post, we’ll cover five major steps to building your own LLM app, the emerging architecture of today’s LLM apps, and problem areas that you can start exploring today.

Five steps to building an LLM app

Building software with LLMs, or any machine learning (ML) model, is fundamentally different from building software without them. For one, rather than compiling source code into binary to run a series of commands, developers need to navigate datasets, embeddings, and parameter weights to generate consistent and accurate outputs. After all, LLM outputs are probabilistic and don’t produce the same predictable outcomes. Let’s break down, at a high level, the steps to build an LLM app today.

1. Focus on a single problem, first. The key? Find a problem that’s the right size: one that’s focused enough so you can quickly iterate and make progress, but also big enough so that the right solution will wow users. For instance, rather than trying to address all developer problems with AI, the GitHub Copilot team initially focused on one part of the software development lifecycle: coding functions in the IDE.

2. Choose the right LLM. You’re saving costs by building an LLM app with a pre-trained model, but how do you pick the right one? Here are some factors to consider:

Licensing. If you hope to eventually sell your LLM app, you’ll need to use a model that has an API licensed for commercial use. To get you started on your search, here’s a community-sourced list of open LLMs that are licensed for commercial use.

Model size.
The size of LLMs can range from 7 to 175 billion parameters, and some, like Ada, are even as small as 350 million parameters. Most LLMs (at the time of writing this post) range in size from 7 to 13 billion parameters. Conventional wisdom tells us that the more parameters a model has (variables that can be adjusted to improve a model’s output), the better the model is at learning new information and providing predictions. However, the improved performance of smaller models is challenging that belief. Smaller models are also usually faster and cheaper, so improvements to the quality of their predictions make them a viable contender compared to big-name models that might be out of scope for many apps. Looking for open source LLMs? Check out our developer’s guide to open source LLMs and generative AI, which includes a list of models like OpenLLaMA and Falcon-Series.

Model performance. Before you customize your LLM using techniques like fine-tuning and in-context learning (which we’ll cover below), evaluate how well, how fast, and how consistently the model generates your desired output. To measure model performance, you can use offline evaluations. What are offline evaluations? They’re tests that assess the model and ensure it meets a performance standard before advancing it to the next step of interacting with a human. These tests measure latency, accuracy, and contextual relevance of a model’s outputs by asking it questions, to which there are […]

Read More

CircleCI Extends CI/CD Reach to AI Models

CircleCI this week revealed it is extending the reach of its continuous integration/continuous delivery (CI/CD) platform to make it simpler to incorporate artificial intelligence (AI) models into DevOps workflows. In addition to providing access to the latest generation of graphics processing units (GPUs) from NVIDIA via the Amazon Web Services (AWS) cloud, CircleCI has added inbound webhooks to access AI model curation services from providers such as Hugging Face, along with integrations with LangSmith, a debugging tool for generative AI applications, and the Amazon SageMaker service for building AI applications. CircleCI CEO Jim Rose said that while there is clearly a lot of enthusiasm for incorporating AI models into applications, the processes being used are still immature, especially in terms of automating workflows that include testing of probabilistic AI models. Most AI models are built by small teams of data scientists who create a software artifact that needs to be integrated within a DevOps workflow just like any other artifact, noted Rose. The challenge is that most data science teams have not yet defined a set of workflows for automating the delivery of those artifacts as part of a larger DevOps workflow, he added. DevOps teams will also need to adjust their version control-centric approach to managing applications so that pipelines can be triggered to pull AI software artifacts that exist outside of traditional software repositories. For example, the inbound webhooks provided by CircleCI now make it possible to automatically create a pipeline whenever an AI model residing on Hugging Face changes. It’s still early days as far as the deployment of AI models in production environments is concerned, but there is no doubt generative AI will have a major impact on how software is developed. AI models are a different class of software artifact: they are retrained rather than updated, a process that occurs intermittently.
As such, DevOps teams need to keep track of each time an AI model is retrained to ensure applications are updated. At the same time, generative AI will also increase the pace at which other software artifacts are created and deployed. Many of the manual tasks that today slow down the rate at which applications are built and deployed will be eliminated. That doesn’t mean there will be no need for software engineers, but it does mean the role they play in developing and deploying software is about to rapidly evolve. DevOps teams need to evaluate both how generative AI will impact the tasks they manage and how the overall software development life cycle (SDLC) process needs to evolve. Each organization, as always, will need to decide for itself how best to achieve those goals depending on its use cases for AI, but the changes that generative AI will bring about are now all but inevitable. The longer it takes to adjust, the harder it will become to overcome the cultural and technical challenges that will be encountered along the way.

Read More

When Is The CMath Mathematical Special Functions Header Used in Modern C++?

The math library in the C language is designed to be used in mathematical operations. From the first C language to the latest C++ Builder 12, there have been many changes and improvements in both hardware and software. We were able to use this math.h library in C++ applications. After the C++17 standard, this library was modernized as the cmath library, and its functions are declared in the <cmath> header; the <math.h> header remains for compatibility reasons in modern C++. In this post, we explain what the math.h and cmath libraries are.

What is the math.h math library in C++?

In the early days of computers, there was an FPU (Floating Point Unit) in addition to the CPU (Central Processing Unit). While CPUs were slower at floating-point operations (especially trigonometric functions), FPUs were faster in those days. The math library in the C language is designed to be used in mathematical operations with these FPUs and CPUs. From the first C language to the latest CLANG C++ compiler, there have been many changes and improvements in both hardware and software.

The math library’s functions are declared in the math.h header file, and it is part of the standard library of the C programming language. Most of the functions are trigonometric and basic math functions, and they mostly use floating-point numbers such as float, double, or long double variables. Trigonometric functions take angles in radians, and all functions take doubles for floating-point arguments unless otherwise specified. In C++ (C++98, C++11, C++14), these C functions were used via the same <math.h> header. For example, if you want to use the sin(), cos(), tan(), exp(), log(), and pow() functions, you have to include the <math.h> header in C, C++11, and C++14 applications. Note that some mathematical library functions that operate on integers are instead specified in the <stdlib.h> header, such as abs, labs, div, and ldiv. Here is a simple C example using the sin function.
```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = sin(1.0);
    printf("%f\n", x);
    return 0;
}
```

What is the cmath mathematical special functions library in C++?

In C++11 and C++14, we were able to use the math.h library in C++ applications. After the C++17 standard, this library was modernized as the cmath library: functions are declared in the <cmath> header in modern C++, and <math.h> is an optional old header kept to support some older code. The CMath Mathematical Special Functions Header <cmath> defines mathematical functions and symbols in the std namespace, and the previous math functions are also included; it may also define them in the global namespace. You have to add the std namespace with using namespace std; or you should use the std:: prefix for each math function. Some of the mathematical special functions were added to the C++17 <cmath> header from the contents of the former international standard ISO/IEC 29124:2010, and the math.h functions were added too. These are only available in namespace std. If you do not use the namespace, you should add the std:: prefix to use these modern math functions. Here is a simple C++ example using the sin function.

```cpp
#include <iostream>
#include <cmath>

int main() {
    double x = std::sin(1.0);
    std::cout << x << '\n';
    return 0;
}
```

What is the difference between math.h and cmath in modern C++?

The CMath Mathematical Special Functions Header <cmath> defines mathematical functions and symbols in the std namespace, and previous math functions are also […]

Read More

DevOps Best Practices for Faster and More Reliable Software Delivery

Imagine a scenario where the teams creating the software and the teams delivering it aren’t just passing information along but sitting together, brainstorming and solving problems in real time. That’s the core of DevOps. It’s not a one-click software solution, but teams working together to provide a reliable solution for seamless and faster software delivery. Take the example of an app or software update: users expect it to work seamlessly. The secret behind that seamless experience is often a well-structured DevOps strategy. DevOps isn’t just about speeding things up; it’s about balancing the need for speed with the need for stability. According to research, 99% of organizations witnessed a positive impact after implementing DevOps in their business delivery processes. They’re deploying updates far more frequently, their failure recovery is lightning-fast, and they see fewer issues when they launch new features.

Using DevOps for Efficient Software Delivery

DevOps is crucial for organizations looking to resolve the complexities of modern software delivery. It bridges the gap between ‘code complete’ and ‘code in production,’ ensuring that software isn’t just created but delivered swiftly and effectively to the end user. This approach not only accelerates time-to-market but also enhances product quality and customer satisfaction. By adopting continuous integration and continuous delivery (CI/CD), automation, and constant feedback, DevOps empowers teams to respond to market changes with agility and confidence. It’s about balancing the processes, people and technology that work together to unlock higher efficiency, innovation and success.

Implementing Continuous Integration and Continuous Deployment (CI/CD)

Continuous integration and continuous deployment (CI/CD) are core practices in the DevOps approach, designed to streamline and automate the steps in getting software from development to deployment.
CI/CD establishes a framework for development teams that supports frequent code changes while maintaining system stability and high-quality output. This method depends on automation to detect problems early, reduce manual errors and speed up the delivery process, ensuring that new features, updates and fixes are available to users quickly and reliably. Teams should follow several best practices:

• Commit to Version Control Rigorously: Every piece of code, from application to configuration scripts, should be version-controlled. It ensures that any changes can be tracked, rolled back or branched out at any point, providing a solid foundation for collaborative development and deployment.
• Automate the Build for Consistency: Automation is the key to CI/CD. By automating the build process, one can ensure that the software can be reliably built at any time. This automation includes compiling code, running database migrations, and executing any necessary scripts to move from source code to a working program.
• Incorporate Comprehensive Automated Testing: A robust suite of automated tests, including unit, integration, acceptance, and regression tests, should be run against every build to catch bugs early. Automated tests act as a safety net that helps maintain code quality throughout the rapid pace of DevOps cycles.
• Replicate Production in Staging: A staging environment replicates your production environment and is crucial for pre-deployment testing. It should mimic production as closely as possible to surface any environment-specific issues that could otherwise cause unexpected behavior after release.
• Ensure Quick and Safe Rollbacks: The ability to roll back to a previous state quickly is essential. This safety measure minimizes downtime by swiftly reversing failed deployments or critical issues without going through a prolonged troubleshooting process during peak hours.
• Monitor Relentlessly […]

Read More

What Are The CMath Mathematical Special Functions in Modern C++?

In C++11 and C++14, we were able to use the math.h library in C++ applications. After the C++17 standard, math operations were modernized with the cmath library. Functions are declared in the <cmath> header; for compatibility reasons, <math.h> is an optional alternative to support older code. In this post, we list most of the mathematical functions declared in the <cmath> header of modern C++.

What is the cmath mathematical functions library in C++?

The CMath Mathematical Special Functions Header <cmath> defines mathematical functions and symbols in the std namespace. It includes the previous math functions. It may also define them in the global namespace. You have to add the std namespace with using namespace std; or you should use the std:: prefix for each math function. Some of the mathematical special functions were added to the C++17 <cmath> header from the contents of the former international standard ISO/IEC 29124:2010, and the math.h functions were added too. These are only available in namespace std. If you do not use the namespace, you should add the std:: prefix to use these modern math functions. In general, they are mostly double functions and can be slower, but they have more accurate results. For example, sin() uses double variables, sinf() uses float variables (same as C++11, faster, less accurate), while sinl() is used with long double variables (same as C++11, slower, but more accurate). Here is a simple C++ example using the sin function.

```cpp
#include <iostream>
#include <cmath>

int main() {
    double d = std::sin(1.0);        // double
    float f = std::sinf(1.0f);       // float (C++11)
    long double l = std::sinl(1.0L); // long double (C++11)
    std::cout << d << ' ' << f << ' ' << l << '\n';
    return 0;
}
```

What are the CMath mathematical special functions in modern C++17?

There are many new modern mathematical special functions in the C++17 <cmath> header.
These include functions for associated Laguerre polynomials, elliptic integrals of the first kind, cylindrical Bessel functions (of the first kind), cylindrical Neumann functions, exponential integral functions, Hermite polynomials, Legendre polynomials, Laguerre polynomials, the Riemann zeta function, and some spherical functions. Here is a list of the CMath special functions.

| Description | double | float | long double |
| --- | --- | --- | --- |
| Associated Laguerre polynomials | assoc_laguerre | assoc_laguerref | assoc_laguerrel |
| Associated Legendre polynomials | assoc_legendre | assoc_legendref | assoc_legendrel |
| Beta function | beta | betaf | betal |
| Elliptic integral of the first kind (complete) | comp_ellint_1 | comp_ellint_1f | comp_ellint_1l |
| Elliptic integral of the second kind (complete) | comp_ellint_2 | comp_ellint_2f | comp_ellint_2l |
| Elliptic integral of the third kind (complete) | comp_ellint_3 | comp_ellint_3f | comp_ellint_3l |
| Regular modified cylindrical Bessel functions | cyl_bessel_i | cyl_bessel_if | cyl_bessel_il |
| Cylindrical Bessel functions (of the first kind) | cyl_bessel_j | cyl_bessel_jf | cyl_bessel_jl |
| Irregular modified cylindrical Bessel functions | cyl_bessel_k | cyl_bessel_kf | cyl_bessel_kl |
| Cylindrical Neumann functions | cyl_neumann | cyl_neumannf | cyl_neumannl |
| Elliptic integral of the first kind (incomplete) | ellint_1 | ellint_1f | ellint_1l |
| Elliptic integral of the second kind (incomplete) | ellint_2 | ellint_2f | ellint_2l |
| Elliptic integral of the third kind (incomplete) | ellint_3 | ellint_3f | ellint_3l |
| Exponential integral | expint | expintf | expintl |
| Hermite polynomials | hermite | hermitef | hermitel |
| Legendre polynomials | legendre | legendref | legendrel |
| Laguerre polynomials | laguerre | laguerref | laguerrel |
| Riemann zeta function | riemann_zeta | riemann_zetaf | riemann_zetal |
| Spherical associated Legendre functions | sph_legendre | sph_legendref | sph_legendrel |
| Spherical Bessel functions (of the first kind) | sph_bessel | sph_besself | sph_bessell |
| Spherical Neumann functions | sph_neumann | sph_neumannf | sph_neumannl |

Note that, by the C++20 standard, only the default names of the math functions are used. For example, laguerre() is used for the float, double and long double versions. For more details about the changes in the C++17 standard, please see https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0226r1.pdf

C++ Builder is the easiest and fastest C and C++ compiler […]

Read More

What’s New in the vcpkg 2023.11.20 Release

Augustin Popa, November 28th, 2023

The 2023.11.20 release of the vcpkg package manager is available. This blog post summarizes changes from October 19th, 2023 to November 19th, 2023 for the Microsoft/vcpkg, Microsoft/vcpkg-tool, and Microsoft/vcpkg-docs GitHub repos. Some stats for this period: 34 new ports were added to the open-source registry. A port is a versioned recipe for building a package from source, such as a C or C++ library. 268 updates were made to existing ports. As always, we validate each change to a port by building all other ports that depend on or are depended on by the library being updated, for our nine main triplets. There are now 2,352 total libraries available in the vcpkg public registry. 22 contributors submitted PRs, issues, or participated in discussions in the main repo. The main vcpkg repo has over 5,800 forks and 20,200 stars on GitHub.

Key changes

This vcpkg update includes some bugfixes and documentation improvements, as well as a new community triplet. Notable changes for this release are summarized below.

Added mips64-linux community triplet

A community contributor has added a mips64-linux community triplet. MIPS stands for Microprocessor without Interlocked Pipelined Stages and is a “family of reduced instruction set computer (RISC) instruction set architectures (ISA)” (Source: MIPS architecture on Wikipedia). As someone who took courses in university where we wrote code targeting MIPS, I thought this was pretty neat! Also, as implied by the triplet name, this support is specifically for 64-bit MIPS. PR: Microsoft/vcpkg#34392, Microsoft/vcpkg-tool#1226 (thanks @capric8416!)

Documentation changes

This month, our documentation changes at learn.microsoft.com/vcpkg include a glossary of terms and two new tutorials.
The first tutorial covers exporting compiled dependencies, which is useful when you want to share libraries across multiple projects in a portable manner, without requiring the end user to install vcpkg to receive them. The second tutorial describes how to update an open-source vcpkg dependency to a new version and submit the changes to the vcpkg repo.

Documentation changelog:

• Added Glossary of terms.
• Added Tutorial: Export compiled dependencies. Describes how to export compiled dependencies using the vcpkg export command.
• Added Tutorial: Update an existing vcpkg dependency.
• Updated the MSBuild integration article to describe properties for app-local DLL deployment.
• Fixed incorrect spelling of an “env” macro in a CMakePresets.json snippet (PR: Microsoft/vcpkg-docs#215, thanks @oraqlle!)
• Fixed a couple of links in the CMake integration page (PR: Microsoft/vcpkg-docs#212, thanks @randallpittman!)
• Other minor edits / typo fixes.

Bug fixes / performance improvements

• Fixed vcpkg activate failing when run with the --no-color switch in Visual Studio (PR: Microsoft/vcpkg-tool#1247).
• Fixed a crash when running “vcpkg add port sqlite3[core]” (PR: Microsoft/vcpkg-tool#1163, thanks @autoantwort!)
• Other minor bugfixes.

Total ports available for tested triplets

We are re-building our ports for arm64-windows and x64-windows due to an error that occurred in the last CI run. The numbers for these will be updated shortly.

| triplet | ports available |
| --- | --- |
| x64-windows | Building… |
| x86-windows | 2,122 |
| x64-windows-static | 2,084 |
| x64-windows-static-md | 2,108 |
| arm64-windows | Building… |
| x64-uwp | 1,217 |
| arm64-uwp | 1,184 |
| x64-linux | 2,158 |
| x64-osx | 2,050 |
| arm-neon-android | 1,496 |
| x64-android | 1,555 |
| arm64-android | 1,513 |

While vcpkg supports a much larger variety of target platforms and architectures, the list above is validated exhaustively to ensure updated ports don’t break other ports in the catalog.

Thank you to our contributors

vcpkg couldn’t be where it is today […]

Read More

What Are The New Rules For Auto Deduction In C++ 17?

The features of C++17 were a big step in the history of C++ and brought a lot of new features. In C++17, there are new auto rules for direct-list-initialization. This feature changes the rules for auto deduction from a braced-init-list. In this post, we explain what these new rules for auto deduction are, with examples.

What is the auto keyword in C++?

The auto keyword arrived with the new features of C++11 and was improved in C++17. It can be used as a placeholder type specifier (an auto-typed variable), in a function declaration, or in a structured binding declaration. The auto keyword can be used with the other newer CLANG standards like C++14, C++17, etc.

What are the new rules for auto deduction in C++17?

In C++17, for copy-list-initialization, auto deduction will either deduce a std::initializer_list (if the types of the entries in the braced-init-list are all identical) or be ill-formed otherwise. Note that auto a = {1, 2, 3}, b = {1}; remains unchanged and deduces initializer_list. This change is intended as a defect resolution against C++14. Now, let’s see the examples below.

Auto deduction from braced-init-list, Rule #1, for direct-list-initialization: for a braced-init-list with only a single element, auto deduction will deduce from that entry. In the example below, there is a single member in the braced-init-list, and it is automatically deduced as an initializer_list of int members.

```cpp
#include <initializer_list>
#include <iostream>

int main() {
    auto a = { 30 }; // decltype(a) is std::initializer_list<int>
    for (auto i : a)
        std::cout << i << '\n';
    return 0;
}
```

Read More

Modern Examples For The New Modern C++ Builder 12

Hello C++ Developers, Yilmaz here from LearnCPlusPlus.org. This month, the new RAD Studio 12, the new C++ Builder 12, and the new Delphi 12 were released, packed full of great features, optimizations, and improvements. We’ve had some great positive and encouraging feedback, especially about one of the great features of C++ Builder 12: the new Visual Assist (VA) with code completion, refactoring, and very powerful navigation. Feedback on the CLANG C++ compiler preview is also very encouraging for the future of C++ Builder. It is another big step, introducing a new 64-bit bcc64x CLANG (15.x) compiler (Version 7.60), which supports C++11, C++14, C++17, and partially the C++20 standards. There were many new features in the IDE, libraries, components, and compilers on both the C++ Builder and Delphi sides. Please see below for details. This week we have three new post picks from LearnCPlusPlus.org that can be used with the new C++ Builder 12. The first post pick is about the std::is_final type trait, which can be used to detect whether a class or a method is marked as final or not. The second post is about the begin() and end() iterators that come with C++14 and are used to define the start and the end of an iteration. The third new post is about std::integer_sequence, a class template for a sequence of integers that is generated at compile time. It has been three years since we started adding posts to our educational LearnCPlusPlus.org site. It has a broad selection of new and unique posts with examples suitable for everyone, from beginners to professionals alike. It is growing well thanks to you, and we have many new readers, thanks to your support! The site features a treasure trove of posts that are great for learning the features of modern C++ compilers with very simple explanations and examples. RAD Studio’s C++ Builder, Delphi, and their free community editions, C++ Builder CE and Delphi CE, are powerful tools for modern application development.
Where can I learn C++ and test these examples with a free C++ compiler? If you don’t know anything about C++ or the C++ Builder IDE, don’t worry, we have a lot of great, easy-to-understand examples on the LearnCPlusPlus.org website and they’re all completely free. Just visit the site and copy and paste any examples there into a new Console, VCL, or FMX project, depending on the type of post. We keep adding more C and C++ posts with sample code. In today’s round-up of recent posts on LearnCPlusPlus.org, we have new articles with very simple examples that can be used with: the free C++ Builder 11 CE Community Edition, a professional version of C++ Builder, the free BCC32C and BCC32X C++ compilers, or the free Dev-C++. Read the FAQ notes on the CE license and then simply fill out the form to download C++ Builder 11 CE. How to use modern C++ with C++ Builder? In C++11, the final specifier is used for a function or a class that cannot be overridden by derived classes, but there was no way to check whether a class or method is final. In C++14, there is a std::is_final type trait that can be used to detect whether a class or a method is marked as final or not. In the first post, we explain how we can use the std::is_final type trait in C++14 and C++17. Iterators are one of the […]

Read More

New Relic Adds Ability to Monitor AI Models to APM Platform

New Relic today added the ability to monitor artificial intelligence (AI) models to its application performance management (APM) platform. Peter Pezaris, senior vice president for strategy and experience at New Relic, said that as next-generation applications are built and deployed, it’s apparent most of them will incorporate multiple AI models. New Relic is extending its APM platform to make it simpler to monitor the behavior of those AI models within the context of an application, he added. To achieve that goal, New Relic has added more than 50 integrations with frameworks and AI models to troubleshoot, compare and optimize different prompts and responses to address performance, cost, security and quality issues such as hallucinations, bias, toxicity and fairness. For example, response tracing for large language models (LLMs) can be applied using New Relic agent software to collect telemetry data that can be used to compare how different AI models are performing and responding to queries. Those results will give teams immediate insight into the models, applications and infrastructure being used, providing complete visibility across the entire AI stack, said Pezaris. That capability is going to prove crucial as developers use a mix of proprietary, open source and custom large language models (LLMs) alongside a range of other types of AI models to build and deploy applications, he added. Organizations are likely to find themselves managing hundreds of AI models that either they or a third party developed. The challenge, as always, is bringing order to a potentially chaotic process that, in addition to wasting resources, represents a significant risk to the business given the potential for regulatory fines to be levied, noted Pezaris. Each organization will need to determine for itself how best to construct workflows spanning data scientists, application developers, software engineers, cybersecurity teams and compliance specialists.
Before too long, organizations will find themselves managing hundreds of AI models that might be integrated into thousands of applications. New Relic is essentially making a case for extending an existing APM platform to address that challenge rather than requiring organizations to acquire, deploy and maintain additional platforms. Eventually, in addition to updating AI models, IT teams will find that those models are being regularly replaced as advances continue to be made at a fast and furious rate. Data science teams are now building AI models with significantly larger parameter counts, making previous generations of models obsolete before they can even be deployed in production environments. As a result, operationalizing AI is going to present DevOps teams with major challenges as they look both to tune application performance and to ensure the results being generated are accurate and consistent. That latter issue is especially critical in enterprise application environments where the results generated by an AI model can’t vary from one query to the next. It’s still early days in terms of how AI will be applied to applications, but as AI models join the pantheon of artifacts DevOps teams need to manage, application development and deployment are about to become much more complex to manage.

Read More