
From Reaction to Robots: Riding the AI Wave in 2024

As we navigate another year of constant zero-day breaches, legislative pivots, an explosion of AI tooling and threat actors growing bolder and more desperate, it’s safe to say that getting comfortable with change is a requirement for thriving in the technology industry. We occupy a notoriously unpredictable space, but that’s half the fun. Compared to many other verticals, technology, and cybersecurity especially, is relatively youthful, and we can all look forward to the future, blossoming in sophistication alongside the technology we have sworn to protect. So, what can we expect in the industry in 2024? We put our heads together, looked into our crystal ball, and these were the results:

Government Regulations Around AI Will Turn the Industry Upside Down

It was the talk of the conference circuit in 2023, with several high-profile presentations at Black Hat, DEF CON, Infosecurity Europe and many more warning of the explosive changes we can expect from AI implementation across every industry, especially cybersecurity. As tends to happen with transformative technology that has a low barrier to entry, adoption has outpaced any official regulation or mandates at the government level. With significant movement in general cybersecurity guidelines and benchmarks around the world, including CISA’s Secure-by-Design and -Default principles in the U.S. and similar initiatives from the UK and Australian governments, it is essentially a foregone conclusion that regulations around AI use will be announced sooner rather than later. While much of the debate surrounding the mainstream use of AI tooling and LLMs has centered on copyright issues with training data, another perspective delves into how AI is best used in cybersecurity practices.
When it comes to coding, perhaps the most human quality of AI tooling is that it shares our difficulty in displaying contextual security awareness, a deeply concerning trait as more developers adopt AI coding assistants to build software. This has not gone unnoticed, and at a time of increased scrutiny of software vendors’ security practices, government-level intervention would not be a surprise.

… And Demand for AI/ML Coding Tools Will Create a Need for More Developers, Not Fewer!

Much has been written about the AI takeover, and for the better part of a year we have been subjected to a plethora of clickbait headlines spelling doom and destruction for just about every white-collar profession out there; developers were not spared. After months of speculation and experimentation with LLMs in a coding context, we remain entirely unconvinced that development jobs are at collective risk. There is no doubt that AI/ML coding tools represent a new era of powerful assistive technology for developers, but they are trained on human-created input and data, and that renders their results far from perfect. Perhaps if every developer on the planet were a top-tier, security-minded engineer, we might see genuine cause for concern. However, just as the average adult driver vastly overestimates their ability (notice how everyone says they’re a great driver, and it’s always other people who lack skill? That’s a classic example of the Dunning-Kruger effect!), so too does the development community, especially when it comes to security best practices. One Stanford study into developer use of AI tooling suggests that unskilled developers using this technology are likely to become dangerous. The study claimed that participants who had access to AI assistants […]
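To make that concern concrete, here is a minimal, hypothetical illustration of the kind of contextual security gap in question: an assistant trained on human code will happily suggest a string-interpolated SQL query, while a security-aware reviewer would insist on the parameterized version. The table and data below are invented for the example:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: untrusted input is interpolated directly into the SQL
    # string, a pattern AI assistants often reproduce from training data.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: a parameterized query lets the driver treat the input as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])

    payload = "x' OR '1'='1"  # classic injection payload
    print(len(find_user_unsafe(conn, payload)))  # matches every row
    print(len(find_user_safe(conn, payload)))    # matches no rows
```

Both functions return the same rows for honest input; only the malicious payload exposes the difference, which is exactly the contextual judgment today’s coding assistants struggle to apply unprompted.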

Read More

Skillsoft Survey Sees AI Driving Increased Need to Retrain IT Teams

More organizations than ever will need to invest in IT training as advances in artificial intelligence (AI) transform roles and responsibilities in the coming year. A survey of 2,740 IT decision-makers conducted by Skillsoft, a provider of an online training platform, finds two-thirds (66%) were already dealing with skills gaps in 2023. As AI becomes more pervasively applied to the management of IT, that skills gap is only going to widen, given the limited pool of IT professionals with any experience using AI to manage IT.

Skillsoft CIO Orla Daly said it’s already apparent that AI creates an imperative for training because there are simply not enough IT people with the requisite skills. In fact, the survey finds nearly half of IT decision-makers (45%) plan to close skills gaps by training their existing teams. That training is crucial because the primary reason IT staff change jobs is a lack of growth and development opportunities, noted Daly.

While there is naturally a lot of consternation over the potential elimination of IT jobs, in the final analysis AI will add more different types of jobs than it eliminates, added Daly. Each of those jobs will require new skills that will need to be acquired and honed, she noted. “Training is the price of innovation,” said Daly. In the meantime, there is much interest in finding ways to automate existing IT processes to create more time for IT teams to experiment with AI technologies, said Daly.

The report also finds well over half of IT decision-makers (56%) expect their IT budgets to increase to help pay for new platforms and tools, compared with only 12% expecting a decrease. It’s not clear to what degree AI will transform the management of IT, but it’s already apparent that many manual tasks involving, for example, generating reports are about to be automated.
The instant summarization capabilities that generative AI enables also promise to dramatically reduce the time required to onboard new members to an incident response team. Rather than having to allocate someone on the team to bring new members up to speed, each new member of the team will use queries framed in natural language to determine for themselves the extent of the crisis at hand. In addition, many tasks that today require expensive specialists to perform might become more accessible to a wider range of organizations as, for example, more DevOps processes are automated. That doesn’t necessarily mean DevOps as an IT discipline disappears so much as it leads to the democratization of DevOps best practices.

Each IT organization in the year ahead will need to determine to what degree to rely on AI to manage IT processes. It may take a while before IT teams have enough confidence in AI to rely on it to manage mission-critical applications, but many of the tasks that today conspire to make the management of IT tedious will undoubtedly fade away. The challenge, and the opportunity, now is to identify those tasks today with an eye toward revamping how IT might be managed in the age of AI tomorrow.

Read More

How to Solve the GPU Shortage Problem With Automation

GPU instances have never been as precious and sought-after as they have been since generative AI captured the industry’s attention. Whether it’s due to broken supply chains or the sudden demand spike, one thing is clear: Getting a GPU-powered virtual machine is harder than ever, even if a team is fishing in the relatively large pond of the top three cloud providers. One analysis confirmed “a huge supply shortage of NVIDIA GPUs and networking equipment from Broadcom and NVIDIA due to a massive spike in demand.” Even the company behind the rise of generative AI, OpenAI, suffers from a lack of GPUs. And companies have started adopting rather unusual tactics to get their hands on these machines (like repurposing old video gaming chips).

What can teams do when facing a quota issue and the cloud provider runs out of GPU-based instances? And once they somehow score the right instance, how can they make sure no GPUs go to waste? Automation is the answer. Teams can use it to accomplish two goals: Find the best GPU instances for their needs and maximize their utilization to get more bang for their buck.

Automation Makes Finding GPU Instances Easier

The three major cloud providers offer many types and sizes of GPU-powered instances, and they’re constantly rolling out new ones; an excellent example is AWS P5, launched in July 2023. To give a complete picture, here’s an overview of instance families with GPUs from AWS, Google Cloud and Microsoft Azure:

AWS
- P3
- P4d
- G3
- G4 (this group includes G4dn and G4ad instances)
- G5

Note: AWS also offers Inferentia machines optimized for deep learning inference apps and Trainium for deep learning training of 100B+ parameter models.

Google Cloud

Microsoft Azure
- NCv3-series
- NC T4_v3-series
- ND A100 v4-series
- NDm A100 v4-series

When picking instances manually, teams may easily miss out on opportunities to snatch up golden GPUs from the market.
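As a sketch of what such automation does under the hood, the toy selector below picks the lowest-cost instance that satisfies a GPU requirement from a static catalog. The instance names come from the families above, but the prices and specs are illustrative placeholders, not real quotes; a production tool would query live availability and pricing from each provider’s API instead:

```python
from dataclasses import dataclass

@dataclass
class GpuInstance:
    name: str
    provider: str
    gpus: int
    gpu_memory_gb: int
    hourly_usd: float  # placeholder prices for illustration only

# A toy multi-cloud catalog; real automation would build this dynamically.
CATALOG = [
    GpuInstance("p3.2xlarge", "aws", 1, 16, 3.06),
    GpuInstance("g5.xlarge", "aws", 1, 24, 1.01),
    GpuInstance("g4dn.xlarge", "aws", 1, 16, 0.53),
    GpuInstance("NC6s_v3", "azure", 1, 16, 3.06),
]

def cheapest_match(catalog, min_gpus, min_memory_gb):
    """Return the lowest-cost instance meeting the GPU requirements."""
    candidates = [i for i in catalog
                  if i.gpus >= min_gpus and i.gpu_memory_gb >= min_memory_gb]
    return min(candidates, key=lambda i: i.hourly_usd, default=None)

print(cheapest_match(CATALOG, 1, 16).name)  # → g4dn.xlarge
```

Scanning a catalog like this across every region and provider at once is exactly the kind of search a human picking instances by hand tends to get wrong.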
Cloud automation solutions help them find a much larger supply of GPU instances with the right performance and cost parameters.

Considering GPU Spot Instances

Spot instances offer significant discounts, even 90% off on-demand rates, but they come at a price: The potential interruptions make them a risky choice for important jobs. Still, running some jobs on GPU spot instances is a good idea, as the discounts can lead to substantial savings on training. ML training usually takes a very long time, from hours to even weeks, and if an interruption occurs, the deep learning job must start over, resulting in significant data loss and higher costs. Automation can prevent that, allowing teams to grab attractively priced GPUs still available on the market to cut training and inference expenses while reducing the risk of interruptions.

In machine learning, checkpointing is an important practice that allows for the saving of model states at different intervals during training. This practice is especially beneficial in lengthy and resource-intensive training procedures, enabling the resumption of training from a checkpoint in case of interruptions rather than starting anew. Furthermore, checkpointing facilitates the evaluation of models at different stages of training, which can be enlightening for understanding the training dynamics.

Zoom in on Checkpointing

PyTorch, a popular ML framework, provides native functionality for checkpointing models and optimizers during training. Additionally, higher-level libraries such as PyTorch Lightning abstract away much of the boilerplate code associated with training, evaluation and checkpointing in PyTorch. Let’s take a […]
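In PyTorch itself this pattern typically amounts to a `torch.save`/`torch.load` of the model and optimizer state dicts at an interval. The framework-agnostic sketch below, using only the standard library, shows the same save-and-resume logic that makes spot interruptions survivable; the training loop and its loss are stand-ins for a real job:

```python
import os
import pickle

CKPT = "train_ckpt.pkl"

def save_checkpoint(step, state, path=CKPT):
    # Write to a temp file and rename, so an interruption mid-save
    # cannot leave a corrupt checkpoint behind.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path=CKPT):
    # Resume from the last checkpoint if one exists, else start fresh.
    if not os.path.exists(path):
        return 0, {"loss": None}
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["step"], ckpt["state"]

def train(total_steps, stop_after=None):
    """Run (or resume) a fake training loop, checkpointing every step."""
    step, state = load_checkpoint()
    while step < total_steps:
        if stop_after is not None and step >= stop_after:
            return step  # simulate a spot-instance interruption
        state = {"loss": 1.0 / (step + 1)}  # stand-in for a real update
        step += 1
        save_checkpoint(step, state)
    return step
```

Calling `train(10, stop_after=4)` simulates an interruption at step 4; a later `train(10)` then resumes from the saved checkpoint instead of step 0, which is precisely what spot-instance automation relies on.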

Read More

Digital.ai Update Extends Scope and Reach of DevSecOps Platform

Digital.ai this week made generally available a Denali update to its DevSecOps platform that promises to make it simpler to integrate custom artificial intelligence (AI) models with the AI models developed by the company. At the same time, the company is adding self-guided workflows and templates to generate tests and implement DevSecOps best practices, along with integrations with HashiCorp Terraform, Azure Bicep, Azure Key Vault and AWS Secrets Manager. Finally, Digital.ai is adding an ARM Protection feature to better secure iOS applications without requiring embedded bitcode or integrations into the build system. DevOps teams can, via a single command, protect compiled applications locally, with support for obfuscation, run-time active protections and application monitoring, without uploading them to a third-party service.

Greg Ellis, general manager for application security at Digital.ai, said the overall goal is to make it simpler for software engineering teams to invoke capabilities embedded within the company’s DevSecOps platform. In the case of AI models, that means that instead of requiring DevOps teams to use only AI models developed by Digital.ai, the company is making it simpler for teams that adopt its platform to incorporate custom AI models as they see fit, part of an ongoing effort to democratize intelligence at scale, he noted. In general, it’s already apparent organizations will be employing heterogeneous approaches to incorporating AI models into DevOps workflows, said Ellis. The challenge now is moving beyond experimenting with AI to embedding it within DevOps workflows, he added. It’s already clear developers are using generative AI to develop code at increasingly faster rates. The challenge is to manage that accelerated pace of development when many organizations are already struggling to manage existing DevOps workflows at scale.
Hopefully, AI technologies will also one day help software engineers find ways to manage that volume of code moving across their DevOps pipelines. In the meantime, organizations will also need to better define where the machine learning operations (MLOps) workflows that data scientists use to build AI models end and where DevOps workflows that will be used to embed AI models into applications begin. As is often the case when it comes to emerging technologies, cultural issues are just as challenging as the implementation hurdles that need to be overcome. At this point, like it or not, the generative AI genie is out of the proverbial bottle. Just about every job function imaginable will be impacted to varying degrees. In the case of DevOps teams, the ultimate impact should involve less drudgery as many of the manual tasks that conspire to make managing DevOps workflows tedious are eliminated. Less clear is to what degree AI may drive organizations that have already embraced DevOps to adopt an alternative platform, but savvy DevOps teams are, at the very least, starting to map out which processes are about to be automated so they can have more time to focus on issues that add more value to the business.

Read More

Survey Surfaces Benefits of Applying AI to FinOps

A survey of 200 enterprise IT decision-makers published this week found organizations that have infused artificial intelligence (AI) into financial operations (FinOps) workflows to reduce IT costs are 53% more likely to report cost savings of more than 20%. Conducted by the market research firm Foundry on behalf of Tangoe, a provider of tools for managing IT and telecommunications expenses, the survey found organizations that embraced FinOps without any AI capabilities averaged less than 10% in cost savings.

The top three drivers for adopting FinOps/cloud cost management programs are the need to increase cloud resource production and performance (70%), reduced budgets (60%) and rising costs (58%), and simpler overall program management (50%), the survey found. Major benefits included productivity savings (46%), cost savings (43%) and reduced security risks (43%). Nearly two-thirds of respondents cited service utilization and right-sizing of services as another reason to embrace FinOps. FinOps describes a methodology for embedding programmatic controls within DevOps workflows to reduce costs.

In the face of increased economic headwinds, IT leaders are looking to reduce cloud computing costs, but it’s turning out to be more challenging than many of them anticipated. Cloud infrastructure is typically provisioned by developers using infrastructure-as-code (IaC) tools with little to no supervision. Developers have long argued that waiting for an IT team to provision cloud infrastructure took too long and that they would be more productive if they simply provisioned it themselves. However, after ten years of cloud computing, it’s become apparent there is a lot of wasted cloud infrastructure. Developers who don’t pay the monthly bills for cloud services tend to view available infrastructure resources as essentially infinite.
It’s usually not until someone from the finance department starts raising cost concerns that developers even become aware there might be an issue. The challenge is that adopting FinOps best practices is not quite as easy as it might seem. In fact, more than half (54%) of survey respondents cited challenges in building the right process and human support systems for FinOps into workflows that have been in place for years. Chris Ortbals, chief product officer for Tangoe, said the simplest path to FinOps is to rely on a software-as-a-service (SaaS) platform designed from the ground up to leverage AI to help IT teams manage cloud computing and telecommunications expenses both before and after applications are deployed.

Each DevOps team will ultimately need to determine to what degree it will implement metrics that foster more efficient consumption of cloud computing resources. The more aware of those costs DevOps teams are, the more likely they are to make better decisions about what types of workloads should be run where and, just as importantly in the age of the cloud, at what time, given all the pricing options provided. Developers, of course, tend to jealously guard their prerogatives. Convincing them to give up the ability to provision cloud infrastructure on demand is going to be a challenge, at least until someone makes plain how much all those cloud instances wind up costing the organization each and every month.
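As a hypothetical sketch of the kind of cost-awareness report that makes those monthly bills plain, the snippet below ranks instances by estimated wasted spend. The fleet, hourly rates and utilization figures are invented for illustration; a real FinOps tool would pull them from billing and monitoring APIs:

```python
def monthly_cost_report(instances, hours_per_month=730):
    """Rank instances by wasted spend, highest first.

    Each instance is (name, hourly_usd, avg_utilization in 0..1);
    wasted spend is the cost attributable to idle capacity.
    """
    report = []
    for name, hourly_usd, utilization in instances:
        cost = hourly_usd * hours_per_month
        wasted = cost * (1 - utilization)
        report.append((name, round(cost, 2), round(wasted, 2)))
    return sorted(report, key=lambda r: r[2], reverse=True)

# Hypothetical fleet: names, rates and utilization are made up.
fleet = [
    ("ci-runner", 0.40, 0.85),
    ("dev-sandbox", 0.40, 0.05),  # mostly idle: a prime FinOps target
    ("staging-db", 0.75, 0.30),
]
for name, cost, wasted in monthly_cost_report(fleet):
    print(f"{name}: ${cost}/mo, ${wasted} wasted")
```

Even a crude report like this surfaces the mostly idle sandboxes that developers, who never see the invoice, would otherwise leave running indefinitely.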

Read More

Google De-Recruits 100s of Recruiters ¦ ARM Valued at $45½B in IPO

Welcome to The Long View, where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters. This week: Google fires hundreds of recruiters, and ARM gets a sky-high valuation.

1. Layoffs for the Recruiters Themselves

First up this week: Google’s hiring has slowed to such an extent that it has far too many in-house recruiters. Boo hoo?

Analysis: Don’t shed a tear at task shedding

I get it. Many reading this care little for the typical recruiter. All too often they seem like pointless brokers, adding no value to the process yet receiving huge bonuses. But this news is the latest indication that DevOps jobs are harder to come by. Louise Matsakis has the scoop: Google lays off hundreds on recruiting team:

“Hard decision”: Google is laying off hundreds of people across its global recruiting team as hiring at the tech giant continues to slow. … Workers who were laid off began learning their roles had been eliminated earlier today, according to posts on social media. … Google began reducing the speed of its hiring last year, after adding tens of thousands of workers in 2020 and 2021. … Google spokesperson Courtenay Mencini said, … “In order to continue our important work to ensure we operate efficiently, we’ve made the hard decision to reduce the size of our recruiting team.”

Bring in the RecruitBot 4000. galaxytachyon explains: How likely is it that this is because of AI taking over the jobs? Sifting through resumes, contacting candidates, scheduling interviews, connecting the hiring manager to the candidate, even getting extra information from the candidate via email or phone calls: These are all things an LLM can do efficiently. They may actually do it even better than a regular human, since they might “know” more about the role and the technical requirements than an average [recruiter].

AI recruiters, and AI developers, too. Here’s Qbertino: I don’t expect those jobs to return. … After 23 years in IT I’m looking into a … career switch myself.
Our industry is fully industrialized; custom coding is by now mostly for totally broken legacy **** that will be replaced by SOA subscriptions within the next few years, and what’s still left to code will mostly be done by AI quite soon, I suspect. … Time to move on. It was an awesome ride, but we’ve now finally built the bots that will replace us. Nice. This will spell more wealth for everyone in the long run, even if we are out of cushy jobs with obscene salaries.

When Google catches a cold, do other DevOps shops sneeze? Not in gijames1225’s experience: It’s weird being at a midsize company that has only accelerated hiring for engineers while the big players all go through these layoff cycles. The cynic in me sees these as token displays of fiscal responsibility made for shareholders, and a weird performativity of not wanting to be outdone by other tech giants. Another bit of me wonders about general productivity at these places if they can lay off so many people and nothing really appears to change (from a consumer perspective).

All of which makes this Anonymous Coward wonder: I wonder what happens now to those who have threatened to quit or were reluctant to come in to physical offices. Meanwhile, u/saracenraider has questions: Do […]

Read More