Grafana Labs Acquires Asserts.ai to Bring AI to Observability

At its ObservabilityCON event, Grafana Labs today announced it has acquired Asserts.ai to automate the configuration and customization of dashboards. The company is also previewing the ability to apply artificial intelligence (AI) to incident management to make it simpler to surface the root cause of an issue: Sift, a diagnostic assistant in Grafana Cloud that automatically analyzes metrics, logs and tracing data, and Grafana Incident, a generative AI tool that summarizes incident timelines with a single click, creates metadata for dashboards and simplifies the writing of PromQL queries.

Grafana Labs is also making generally available an Application Observability module for Grafana Cloud to provide a more holistic view of IT environments. Finally, Grafana Beyla, an open source auto-instrumentation project based on the extended Berkeley Packet Filter (eBPF), is now generally available. Beyla enables DevOps teams to collect telemetry data for an IT environment from a sandboxed environment running inside the operating system kernel. That approach makes it simpler to automatically instrument an IT environment, although teams managing complex applications will still need to collect some telemetry data from the user space of an application.

Richi Hartmann, director of community for Grafana Labs, said that collectively these capabilities will make it simpler to apply observability across increasingly complex IT environments. For example, the AI technologies developed by Asserts.ai will make it possible for DevOps teams to send data to Grafana Labs that enables the cloud service to identify the applications and infrastructure being used. AI models will then automatically generate a custom dashboard for that environment, which DevOps teams can extend as they see fit, said Hartmann.
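For the cases where eBPF-based auto-instrumentation does not fully cover an application, user-space instrumentation means the application records telemetry itself. A minimal, purely illustrative Python sketch of that pattern (the `record_latency` decorator and the in-memory `METRICS` store are hypothetical examples, not part of Grafana Beyla or any Grafana API; a real setup would export these measurements to a backend such as Prometheus or Grafana Cloud):

```python
import time
from collections import defaultdict
from functools import wraps

# Hypothetical in-memory metrics store; stands in for an exporter
# that would ship measurements to an observability backend.
METRICS = defaultdict(list)

def record_latency(name):
    """Decorator that records the wall-clock latency of each call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS[name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@record_latency("checkout.handle_request")
def handle_request(order_id):
    # Placeholder for real application logic.
    return f"processed {order_id}"

handle_request(42)
print(len(METRICS["checkout.handle_request"]))  # one latency sample recorded
```

The trade-off the article describes is visible here: this approach requires touching application code, whereas eBPF-based instrumentation observes the application from the kernel without any such changes.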
In general, machine learning algorithms and generative AI are starting to be more widely applied to observability. The ultimate goal is to automatically identify issues in ways that reduce the cognitive load required to manage complex IT environments, while also making it easier to launch queries that identify bottlenecks that could adversely impact application performance and availability. It’s not clear to what degree observability tools might eliminate the need for monitoring tools that track pre-defined metrics, but most DevOps teams will likely use a mix of both for the foreseeable future.

In the meantime, IT environments are only becoming more complex as various types of cloud-native applications are deployed alongside existing monolithic applications that are continuously being updated. The challenge is that the overall size of DevOps teams is not expanding, so there is a greater need for tools that streamline the management of DevOps workflows. AI will naturally play a larger role in enabling organizations to achieve that goal, but it is not likely to replace the need for DevOps engineers, said Hartmann.

At the same time, DevOps engineers will naturally gravitate toward organizations that make the tools they need to succeed available. Today, far too many manual tasks are increasing turnover as DevOps teams burn out, so organizations that want to hire and retain the best DevOps engineers will need to invest in AI. Of course, DevOps at its core has always been about ruthlessly automating as many manual tasks as possible. AI is only the latest in a series of advances that, over time, continue to make DevOps more accessible to IT professionals of […]

Read More

Survey Surfaces Benefits of Applying AI to FinOps

A survey of 200 enterprise IT decision-makers published this week found that organizations that have infused artificial intelligence (AI) into financial operations (FinOps) workflows to reduce IT costs are 53% more likely to report cost savings of more than 20%. Conducted by the market research firm Foundry on behalf of Tangoe, a provider of tools for managing IT and telecommunications expenses, the survey found that organizations that embraced FinOps without any AI capabilities averaged less than 10% in cost savings.

The top drivers for adopting FinOps and cloud cost management programs are the need to increase cloud resource production and performance (70%), reduce budgets (60%), rein in rising costs (58%) and simplify overall program management (50%), the survey found. Major benefits included productivity savings (46%), cost savings (43%) and reduced security risks (43%). Nearly two-thirds of respondents cited service utilization and right-sizing of services as another reason to embrace FinOps.

FinOps describes a methodology for embedding programmatic controls within DevOps workflows to reduce costs. In the face of increased economic headwinds, IT leaders are looking to reduce cloud computing costs, but that is turning out to be more challenging than many of them anticipated. Cloud infrastructure is typically provisioned by developers using infrastructure-as-code (IaC) tools with little to no supervision; developers have long argued that waiting for an IT team to provision cloud infrastructure takes too long and that they are more productive when they provision it themselves. However, after ten years of cloud computing, it has become apparent that a lot of cloud infrastructure resources are wasted. Developers who don’t pay the monthly bills for cloud services tend to view available infrastructure resources as essentially infinite.
It’s usually not until someone from the finance department starts raising cost concerns that developers even become aware there might be an issue. The challenge is that adopting FinOps best practices is not quite as easy as it might seem: more than half (54%) of survey respondents cited challenges in building the right processes and human support systems for FinOps into workflows that have been in place for years.

Chris Ortbals, chief product officer for Tangoe, said the simplest path to FinOps is to rely on a software-as-a-service (SaaS) platform designed from the ground up to leverage AI to help IT teams manage cloud computing and telecommunications expenses both before and after applications are deployed. Each DevOps team will ultimately need to determine to what degree it will implement metrics that foster more efficient consumption of cloud computing resources. The more aware of those costs DevOps teams are, the more likely they are to make better decisions about what types of workloads should run where and, just as importantly in the age of the cloud, at what time, given all the pricing options provided.

Developers, of course, tend to jealously guard their prerogatives. Convincing them to give up their ability to provision cloud infrastructure on demand is going to be a challenge, at least until someone makes it plain how much all those cloud instances wind up costing the organization each and every month.
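The kind of cost-awareness metric described above can start as simply as comparing what an instance is used for against what it costs each month. A purely illustrative Python sketch (the instance names, hourly rates and the 20% utilization threshold are invented for this example, not Tangoe's methodology or any survey data):

```python
# Hypothetical hourly rates and average utilization figures,
# invented purely for illustration.
HOURS_PER_MONTH = 730  # average hours in a calendar month

instances = [
    {"name": "web-1",   "hourly_rate": 0.10, "avg_cpu_util": 0.55},
    {"name": "batch-1", "hourly_rate": 0.40, "avg_cpu_util": 0.08},
    {"name": "db-1",    "hourly_rate": 0.25, "avg_cpu_util": 0.35},
]

def monthly_cost(instance):
    """Projected monthly spend for an always-on instance."""
    return instance["hourly_rate"] * HOURS_PER_MONTH

def right_sizing_candidates(instances, util_threshold=0.20):
    """Flag instances whose average utilization is below the threshold,
    most expensive first."""
    idle = [i for i in instances if i["avg_cpu_util"] < util_threshold]
    return sorted(idle, key=monthly_cost, reverse=True)

for inst in right_sizing_candidates(instances):
    print(f"{inst['name']}: ${monthly_cost(inst):.2f}/month "
          f"at {inst['avg_cpu_util']:.0%} average CPU")
```

Surfacing a report like this to the team that provisioned the instances, rather than only to finance, is exactly the cost awareness the FinOps approach aims for.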

Read More