FinOps

NetApp Extends Microsoft Alliance to Include CloudOps Tools

NetApp this week extended its alliance with Microsoft to include its CloudOps portfolio of tools for optimizing cloud computing environments. Previously, the alliance between the two companies focused on data management; it is now expanding to include tools that deploy workloads, improve performance and reduce costs using machine learning algorithms across both virtual machine instances and the Azure Kubernetes Service (AKS).

Kevin McGrath, vice president of Spot by NetApp, said that in more challenging economic times there is much more focus on programmatically reining in cloud costs using FinOps best practices within the context of a DevOps workflow. Organizations are also starting to create platform engineering teams to more efficiently manage DevOps workflows at scale across hybrid cloud computing environments, he added.

For years, developers have been provisioning cloud infrastructure resources with little to no oversight. Developers are also prone to over-provisioning infrastructure resources to ensure maximum application availability. Many of those resources are never consumed by the application, so the cost of cloud computing winds up inflated.

IT leaders are also increasingly required to make cloud costs more predictable. Sudden spikes in consumption that result in higher monthly bills are an unwelcome surprise to finance teams now required to manage costs more closely. Ongoing advances in artificial intelligence (AI) should make it easier to predict costs across highly dynamic cloud computing environments.

Navigating all the pricing options that cloud service providers make available is challenging. IT teams need a clear understanding of each workload's attributes to ensure optimal usage of cloud infrastructure resources. Less clear is the degree to which IT teams are pitting cloud service providers against one another.
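One simple form of the programmatic cost control described above is an automated check for billing anomalies. The sketch below is purely illustrative (hypothetical spend data and a fixed statistical threshold, not any NetApp or Spot product API) and flags days whose spend deviates sharply from the recent average:

```python
from statistics import mean, stdev

def flag_spend_spikes(daily_spend, sigma=3.0):
    """Flag days whose spend exceeds the mean by `sigma` standard
    deviations -- a crude stand-in for the machine-learning-based
    anomaly detection commercial FinOps tools provide."""
    mu = mean(daily_spend)
    sd = stdev(daily_spend)
    return [(day, cost) for day, cost in enumerate(daily_spend)
            if cost > mu + sigma * sd]

# 30 days of mostly steady spend with one runaway day
spend = [110.0] * 14 + [480.0] + [112.0] * 15
print(flag_spend_spikes(spend))  # [(14, 480.0)]
```

A check like this can run daily against billing exports and page the owning team before the monthly invoice arrives, which is exactly the kind of surprise finance teams want to avoid.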
Pricing across the cloud services most organizations use today is fairly consistent. Most organizations that deploy workloads in the cloud tend to run the bulk of them on the same service because they lack the internal expertise needed to manage multiple clouds equally well. Some workloads may run on additional clouds, but enterprise licensing agreements reward customers for concentrating workloads on a single cloud. Often the only way to meaningfully optimize cloud spending is to shift workloads to less expensive tiers of service that may only be available for a limited amount of time.

One way or another, the management of cloud computing is finally starting to mature. As the percentage of workloads organizations run in the cloud steadily increases, IT teams are becoming more adept at maximizing both application performance and the associated return on investment (ROI). Each IT organization will need to decide for itself how best to manage cloud computing environments as it continues to build and deploy cloud-native applications alongside legacy monolithic applications running on virtual machines, but NetApp is betting the need for tools such as CloudOps will increase as cloud computing environments become more complex. The challenge, as always, is finding and retaining the talent needed to manage those environments when every other organization is looking for the same expertise.
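The trade-off behind those cheaper, short-lived tiers can be made explicit in code. The sketch below uses made-up prices and tier names (not any provider's actual rates) to show the basic decision: interruptible capacity is the cheapest option, but only for workloads that can tolerate being reclaimed.

```python
# Illustrative tier selection with hypothetical prices. Spot-style
# capacity is cheapest but can be reclaimed by the provider, so only
# interruption-tolerant workloads (batch jobs, stateless replicas)
# should be placed on it.
TIERS = {
    "on_demand": {"hourly": 0.40, "interruptible": False},
    "reserved":  {"hourly": 0.25, "interruptible": False},
    "spot":      {"hourly": 0.12, "interruptible": True},
}

def cheapest_tier(tolerates_interruption: bool) -> str:
    """Return the cheapest tier the workload is eligible for."""
    eligible = {name: t for name, t in TIERS.items()
                if tolerates_interruption or not t["interruptible"]}
    return min(eligible, key=lambda name: eligible[name]["hourly"])

print(cheapest_tier(tolerates_interruption=True))   # spot
print(cheapest_tier(tolerates_interruption=False))  # reserved
```

In practice this classification is exactly the workload-attribute analysis the article describes: a team has to know whether a workload checkpoints its state before it can safely chase the cheaper tier.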


Survey Surfaces Benefits of Applying AI to FinOps

A survey of 200 enterprise IT decision-makers published this week found that organizations that have infused artificial intelligence (AI) into financial operations (FinOps) workflows to reduce IT costs are 53% more likely to report cost savings of more than 20%. Conducted by the market research firm Foundry on behalf of Tangoe, a provider of tools for managing IT and telecommunications expenses, the survey found that organizations that embraced FinOps without any AI capabilities averaged less than 10% in cost savings.

The top drivers for adopting FinOps/cloud cost management programs are the need to increase cloud resource production and performance (70%), reduced budgets (60%), rising costs (58%) and simpler overall program management (50%), the survey found. Major benefits included productivity savings (46%), cost savings (43%) and reduced security risks (43%). Nearly two-thirds of respondents cited service utilization and right-sizing of services as another reason to embrace FinOps.

FinOps describes a methodology for embedding programmatic controls within DevOps workflows to reduce costs. In the face of increased economic headwinds, IT leaders are looking to reduce cloud computing costs, but that is turning out to be more challenging than many of them anticipated.

Cloud infrastructure is typically provisioned by developers using infrastructure-as-code (IaC) tools with little to no supervision. Developers have long argued that waiting for an IT team to provision cloud infrastructure took too long and that they would be more productive if they provisioned it themselves. However, after ten years of cloud computing, it has become apparent that a lot of cloud infrastructure resources are wasted. Developers who don't pay the monthly bills for cloud services tend to view available infrastructure resources as essentially infinite.
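A minimal example of the programmatic controls FinOps embeds in DevOps workflows is a pre-provisioning budget check. The sketch below is hypothetical — the function name, rates and budget are invented for illustration and do not reflect Tangoe's platform or any real IaC tool's API — but it shows the idea: project the monthly cost of an IaC plan and reject it before anything is provisioned.

```python
# Hypothetical pre-provisioning guardrail: estimate the monthly cost
# of a requested set of instances and reject plans over budget.
HOURS_PER_MONTH = 730

# Made-up hourly rates for illustration
HOURLY_RATES = {"small": 0.05, "medium": 0.20, "large": 0.80}

def check_plan(requested, monthly_budget):
    """`requested` maps instance size -> count.

    Returns the projected monthly cost, or raises ValueError if the
    plan exceeds the budget (which a CI step would treat as a failure
    before ever running the actual provisioning command).
    """
    projected = round(sum(HOURLY_RATES[size] * count * HOURS_PER_MONTH
                          for size, count in requested.items()), 2)
    if projected > monthly_budget:
        raise ValueError(
            f"plan projects ${projected:,.2f}/month, "
            f"budget is ${monthly_budget:,.2f}")
    return projected

print(check_plan({"small": 4, "medium": 2}, monthly_budget=1000))  # 438.0
```

Wiring a check like this into the pipeline preserves the self-service provisioning developers value while giving finance the cost predictability it is asking for.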
It’s usually not until someone from the finance department starts raising cost concerns that developers become aware there might be an issue. The challenge is that adopting FinOps best practices is not quite as easy as it might seem. In fact, more than half (54%) of survey respondents cited challenges in building the right process and human support systems for FinOps into workflows that have been in place for years.

Chris Ortbals, chief product officer for Tangoe, said the simplest path to FinOps is to rely on a software-as-a-service (SaaS) platform designed from the ground up to leverage AI to help IT teams manage cloud computing and telecommunications expenses both before and after applications are deployed.

Each DevOps team will ultimately need to determine to what degree it will implement metrics that foster more efficient consumption of cloud computing resources. The more aware of those costs DevOps teams are, the more likely they are to make better decisions about which types of workloads should run where and, just as importantly in the age of the cloud, at what time, given all the pricing options provided. Developers, of course, tend to jealously guard their prerogatives. Convincing them to give up the ability to provision cloud infrastructure on demand is going to be a challenge, at least until someone makes plain how much all those cloud instances wind up costing the organization each and every month.
