New Relic Adds Ability to Monitor AI Models to APM Platform
New Relic today added the ability to monitor artificial intelligence (AI) models to its application performance management (APM) platform.
Peter Pezaris, senior vice president for strategy and experience at New Relic, said that as next-generation applications are built and deployed, it is apparent most of them will incorporate multiple AI models. New Relic is extending its APM platform to make it simpler to monitor the behavior of those AI models within the context of an application, he added.
To achieve that goal, New Relic has added more than 50 integrations with AI frameworks and models that make it possible to troubleshoot, compare and optimize different prompts and responses, and to address performance, cost, security and quality issues such as hallucinations, bias, toxicity and fairness concerns.
For example, response tracing for large language models (LLMs) can be applied using New Relic agent software to collect telemetry data that can be used to compare how different AI models are performing and responding to queries. Those results provide immediate visibility into the models, applications and infrastructure being used, giving teams a complete view of the entire AI stack, said Pezaris.
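As a rough illustration of what that kind of instrumentation can look like, the sketch below wraps LLM calls in a New Relic background task and records the model name, prompt, response and latency as a custom event so runs against different models can be compared in the platform. It is a minimal sketch, not New Relic's documented AI-monitoring integration; the `call_llm` helper, the event type and the attribute names are hypothetical, and it assumes a standard `newrelic.ini` configuration file with a valid license key is present.

```python
import time

import newrelic.agent

# Assumes a standard newrelic.ini in the working directory; the AI-monitoring
# specifics below are illustrative, not New Relic's documented integration.
newrelic.agent.initialize("newrelic.ini")
application = newrelic.agent.register_application(timeout=10.0)


def call_llm(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to a proprietary, open source or custom LLM."""
    return f"response from {model}"


@newrelic.agent.background_task(name="llm-comparison")
def compare_models(prompt: str) -> None:
    # Send the same prompt to several models so their latency and responses
    # can be compared side by side in the APM platform.
    for model in ("model-a", "model-b"):
        start = time.time()
        response = call_llm(model, prompt)
        duration = time.time() - start

        # Custom event carrying the telemetry used for comparison; the event
        # type and attribute names are assumptions made for this sketch.
        newrelic.agent.record_custom_event(
            "LlmResponseSample",
            {
                "model": model,
                "prompt": prompt,
                "response": response,
                "duration_s": duration,
            },
            application=application,
        )


if __name__ == "__main__":
    compare_models("Summarize last week's error budget burn.")
    newrelic.agent.shutdown_agent(timeout=10.0)
```

In practice, New Relic's agents auto-instrument common LLM client libraries, so hand-rolled events like these would mainly be useful for custom or in-house models.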
That capability is going to prove crucial as developers use a mix of proprietary, open source and custom LLMs alongside a range of other types of AI models to build and deploy applications, he added.
Organizations are likely to find themselves managing hundreds of AI models that either they or a third party developed. The challenge, as always, is bringing order to a potentially chaotic process that not only wastes resources but also represents a significant risk to the business, given the potential for regulatory fines, noted Pezaris.
Each organization will need to determine for itself how best to construct workflows spanning data scientists, application developers, software engineers, cybersecurity teams and compliance specialists. Before too long, those hundreds of models might be integrated into thousands of applications. New Relic is essentially making the case for extending an existing APM platform to address that challenge rather than requiring organizations to acquire, deploy and maintain additional platforms.
Eventually, IT teams will find that, in addition to being updated, AI models are being regularly replaced as advances continue to be made at a fast and furious rate. Data science teams are now building AI models with significantly larger parameter counts that make previous generations of models obsolete before they can even be deployed in production environments.
As a result, operationalizing AI is going to present DevOps teams with major challenges as they look to both tune application performance and ensure the results being generated are accurate and consistent. That latter issue is especially critical in enterprise application environments where the results generated by an AI model can’t vary from one query to the next.
It’s still early days in terms of how AI will be applied to applications, but as AI models join the pantheon of artifacts DevOps teams need to manage, application development and deployment are about to become considerably more complex.