Tagged With: ai

SmartBear Acquires Reflect to Gain Generative AI-Based Testing Tool

SmartBear this week revealed it has acquired Reflect, a provider of a no-code testing platform for web applications that leverages generative artificial intelligence (AI) to create and execute tests. Madhup Mishra, senior vice president of product marketing at SmartBear, said the platform Reflect created will initially be incorporated into the company's existing Test Hub platform before Reflect's generative AI capabilities are added to other platforms.

Reflect provides a natural language interface for creating tests and is designed to invoke multiple large language models (LLMs). It can also infer the intent of a test to determine which elements to exercise, regardless of whether, for example, a button has been moved from one part of a user interface to another, said Mishra. Test step definitions, once approved, can also be automatically executed using scripts generated by the platform.

SmartBear has no plans to build its own LLMs, said Mishra. Rather, the company is focusing its efforts on providing the tools and prompt engineering techniques needed to effectively operationalize them, he added.

Reflect is the tenth acquisition SmartBear has made as part of an effort to provide lightweight hubs for testing, the building of application programming interfaces (APIs) and the analysis of application performance and user experience. Last year, the company acquired Stoplight to gain API governance capabilities. Rather than building a single integrated platform, the company is focused on providing access to lightweight hubs that are simpler to invoke, deploy and maintain, said Mishra. The overall goal is to meet IT teams where they are rather than requiring them to adopt an entirely new monolithic platform that forces organizations to rip and replace every tool they already have in place, he said.

There is little doubt at this point that generative AI will have a profound impact on application testing in a way that should ultimately improve application quality. As the time required to create tests drops, more tests will be run. Today, it's all too common for tests not to be conducted as thoroughly as they should be, simply because a developer lacked the expertise to create one or, with a delivery deadline looming, ran out of time.

Naturally, the rise of generative AI will also change how testing processes are managed. It's not clear how far left generative AI will push responsibility for testing applications, but as more tests are created and run, they will need to be integrated into DevOps workflows. Of course, testing is only one element of a DevOps workflow that is about to be transformed by generative AI. DevOps teams should already be identifying manual tasks that can be automated using generative AI as part of an effort to further automate workflows that, despite commitments to automation, still require too much time to execute and manage. Once identified, DevOps teams can then get a head start on redefining roles and responsibilities as generative AI is increasingly operationalized across those workflows.
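To make the intent-driven workflow Mishra describes concrete, here is a minimal, hypothetical sketch of resolving a natural-language test step against a live page at run time. It is not Reflect's implementation; the model name, URL and prompt are illustrative assumptions, and it presumes the openai and playwright Python packages are installed with an API key in the environment.

```python
# Hypothetical sketch of intent-based test execution; NOT Reflect's code.
# An LLM maps a natural-language step to a selector at run time, so the
# test can survive an element moving elsewhere in the UI.
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def resolve_step_to_selector(step: str, page_html: str) -> str:
    """Ask an LLM to map a natural-language test step to a CSS selector."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Given HTML and a test step, reply with only a CSS "
                        "selector for the element the step refers to."},
            # Truncate the HTML to keep the prompt within the context window.
            {"role": "user",
             "content": f"Step: {step}\n\nHTML:\n{page_html[:8000]}"},
        ],
    )
    return response.choices[0].message.content.strip()

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")  # placeholder URL
    selector = resolve_step_to_selector("Click the sign-in button",
                                        page.content())
    page.click(selector)
    browser.close()
```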

Read More

2024 Infrastructure Tech Predictions

Ganesh Srinivasan, partner at Venrock, co-authored this article. 2023 was a rollercoaster like none other; from the death of the modern data stack sprawl to the birth of generative AI, we are only at the beginning of a new era in the 'art of the possible.' We guarantee 2024 won't be a disappointment. With a new year approaching, it's the perfect time for us to examine what we anticipate being the biggest developments in the year ahead. Here is what we think is going to happen in 2024:

1. OpenAI's Reign Challenged

Given the emerging learnings in core neural network architectures that led to the transformer and to OpenAI's dominance, it is likely that OpenAI's imminent release of GPT-5 will be surpassed in specific performance benchmarks by a new entrant on the back of more efficient architectures, improved multimodal capabilities, better contextual understanding of the world and enhanced transfer learning. These new models will be built on emerging research in spatial networks, graph structures and combinations of various neural networks that will lead to more efficient, versatile and powerful capabilities.

2. Apple: The New Leader in Generative AI

One of the most important players in the generative AI space is only starting to show its cards. 2024 will be the year Apple launches its first set of generative AI capabilities, unlocking the true potential of an AI-on-the-edge, closed architecture with full access to your personal data – showing that Apple is actually the most important company in the generative AI race.

3. Building for Client-First

The last decade has reflected a shift away from fat clients to server-side rendering and compute. But the world is shifting back to the client. Mobile-first experiences will be required to work in offline mode. Real-time experiences require ultra-low-latency transactions. LLMs will increasingly be required to run on the device to increase performance and reduce costs.

4. Death of Data Infrastructure Sprawl

The rapid growth of the data infrastructure needs of enterprises has led to an increasing sprawl of point solutions, from data catalogs, data governance, reverse extract, transform, load (ETL) and Airflow alternatives to vector databases and yet another lakehouse. The pendulum will swing back to unified platforms and fewer silos to bring down the total cost of ownership and operating overhead going into 2024.

5. Approaching the AI Winter

Generative AI in 2023 could best be characterized as the 'art of the possible,' with 2024 being the true test of whether prototypes convert into production use cases. With the peak of the hype cycle likely here, 2024 will bring the trough of disillusionment, where enterprises discover where generative AI can create margin-positive impact and where the costs outweigh the benefits.

6. The Misinformation Threat

While image and video diffusion models have unlocked a new era for digital creation and artistic expression, there is no doubt that their dark side has yet to take its toll. With a presidential election in the wings, diffusion models will emerge as the next major weapon of choice for political disinformation.

7. AI's Real-World Breakthrough

Coming out of the 'field of dreams' era for AI, 2024 will represent a breakthrough for commercial use cases in AI, particularly in the physical world. Using AI for physical-world modalities will unlock our ability to […]

Read More

From Reaction to Robots: Riding the AI Wave in 2024

As we navigate another year of consistent zero-day breaches, legislative pivots, the explosion of AI tooling and threat actors growing bolder and more desperate, it's safe to say that getting comfortable with change is a requirement for thriving in the technology industry. We occupy a notoriously unpredictable space, but that's half the fun. Compared to many other verticals, technology – especially cybersecurity – is relatively youthful, and we can all look forward to watching it blossom in sophistication alongside the technology we are sworn to protect. So, what can we expect in the industry in 2024? We put our heads together, looked into our crystal ball, and these were the results:

Government Regulations Around AI Will Turn the Industry Upside Down

It was the talk of the conference circuit in 2023, with several high-profile presentations at Black Hat, DEF CON, Infosecurity Europe and many more warning of the explosive changes we can expect from AI implementation across every industry, especially cybersecurity. As tends to happen with low barriers to entry for such transformative technology, adoption has outpaced any official regulation or mandates at the government level. With significant movement in general cybersecurity guidelines and benchmarks around the world, including CISA's Secure-by-Design and -Default principles in the U.S. and similar initiatives from the UK and Australian governments, it is essentially a foregone conclusion that regulations around AI use will be announced sooner rather than later. While much of the debate surrounding the mainstream use of AI tooling and LLMs has centered on copyright issues with training data, another perspective considers how AI is best used in cybersecurity practices. When it comes to coding, perhaps AI's most human quality is that it struggles, much as developers do, to display contextual security awareness, a factor that is deeply concerning as more developers adopt AI coding assistants to build software. This has not gone unnoticed, and in a time of increased scrutiny of software vendors' security practices, government-level intervention would come as no surprise.

… And Demand for AI/ML Coding Tools Will Create a Need for More Developers, Not Fewer!

Much has been written about the AI takeover, and for the better part of a year, we have been subject to a plethora of clickbait headlines that spell doom and destruction for just about every white-collar profession out there; developers were not spared. After months of speculation and experimentation with LLMs in a coding context, we remain entirely unconvinced that development jobs are at collective risk. There is no doubt that AI/ML coding tools represent a new era of powerful assistive technology for developers, but they are trained on human-created input and data, and that has rendered the results far from perfect. Perhaps if every developer on the planet were a top-tier, security-minded engineer, we might see genuine cause for concern. However, just as the average adult driver vastly overestimates their ability (notice how everyone says they're a great driver, and it's always other people who lack skill? That's a classic example of the Dunning-Kruger effect!), so too does the development community, especially when it comes to security best practices. According to one Stanford study of developer use of AI tooling, it is likely that unskilled developers using this technology will become dangerous.
The study claimed that participants who had access to AI assistants […]

Read More

AI a Key Driver Behind HPE’s $14 Billion Deal for Juniper

Hewlett Packard Enterprise is looking to become a more significant player in the networking space through its planned $14 billion acquisition of Juniper, a deal that it hopes will make it a more formidable rival to longtime market leader Cisco Systems. The acquisition, announced Tuesday after the markets closed, is a major move in the early days of the new year for a networking industry that has become central to an IT sector growing more distributed and more cloud-native.

During a virtual briefing with analysts and journalists this morning, HPE CEO Antonio Neri described an HPE centered on its networking business, with AI capabilities and its GreenLake edge-to-cloud platform of IT services at its foundation. "HPE will be a new company where networking will be the core foundation of everything we do," Neri said. "We're going to accelerate what we call an AI-driven agenda, and that will allow us to capture this massive inflection point."

Even after the deal closes – which is expected to happen later this year or in early 2025 – HPE will still likely be in third place in the global networking space behind Cisco and Huawei, but it will have a stronger portfolio that includes not only greater AI capabilities but also a stronger presence in both the enterprise and telecom spaces. Once the deal closes, Juniper CEO Rami Rahim will lead the combined HPE networking business and report to Neri.

Juniper's Mist AI is at the Center

Unsurprisingly, AI was a key component of the deal. In a research note, Will Townsend and Patrick Moorhead, analysts with Moor Insights and Strategy, wrote that their thinking after initial news reports about a possible deal circulated was that HPE likely was looking for a "strong AI anchor" for its portfolio of hardware, software and GreenLake IT consumption services. "AI is hot, ignited by the attention being directed toward generative AI, the underlying large language models, and many promising use cases," Townsend and Moorhead wrote. "One could argue that beyond the AIOps capability found in the HPE Aruba Networking portfolio today, HPE needs further AI depth to remain competitive and continue to grow its top-line revenue and profitability. Juniper could deliver on that front."

Rahim called AI "the biggest inflection since the dawn of the internet itself" and added that the combination of HPE and Juniper "will be able to bring the depth and the breadth of the portfolios necessary to capture the full market opportunity that AI presents in front of us." AI in networking is a strength for Juniper, which in 2019 bought Mist Systems and its AI technologies, including the Marvis virtual network assistant, which the analysts wrote serves "as the tip of the spear for Juniper's reinvigorated efforts within the enterprise for WLAN, LAN, WAN, and SD-WAN solutions." "By all measures, the Mist acquisition has been a success, with Juniper growing its enterprise install base at a faster rate than its service provider business over the last 12 to 18 months," Townsend and Moorhead wrote.

AI and Networking

AI will play an increasingly important role in networking going forward, from dynamically adjusting bandwidth and self-correcting the network for maximum uptime to quickly finding the root causes of problems and deploying virtual network assistants. In a blog post last month, Liz Centoni, […]

Read More

Best of 2023: Copilots For Everyone: Microsoft Brings Copilots to the Masses

As we close out 2023, we at DevOps.com wanted to highlight the most popular articles of the year. Following is the latest in our series of the Best of 2023.

Microsoft has been doing a lot to extend the coding 'copilot' concept into new areas. And at its Build 2023 conference, Microsoft leadership unveiled new capabilities in Azure AI Studio that will empower individual developers to create copilots of their own. This news is exciting, as it will enable engineers to craft copilots that are more knowledgeable about specific domains. Below, we'll cover some of the major points from the Microsoft Build keynote from Tuesday, May 23, 2023, and explore what the announcement means for developers. We'll examine the copilot stack and consider why you might want to build copilots of your own.

What is Copilot?

A copilot is an artificial intelligence tool that assists you with cognitive tasks. To date, the idea of a copilot has been mostly associated with GitHub Copilot, which debuted in 2021 to bring real-time auto-suggestions right into your code editor. "GitHub Copilot was the first solution that we built using the new transformational large language models developed by OpenAI, and Copilot provides an AI pair programmer that works with all popular programming languages and dramatically accelerates your productivity," said Scott Guthrie, executive vice president at Microsoft. Microsoft has since launched GitHub Copilot X, powered by GPT-4 models, and a newer feature, GitHub Copilot Chat, accepts prompts in natural language.

But the Copilot craze hasn't stopped there – Microsoft is actively integrating Copilot into other areas, like Windows and even Microsoft 365. This means end users can write natural language prompts to spin up documents across the Microsoft suite of Word, Teams, PowerPoint and other applications. Microsoft has also built Dynamics 365 Copilot, Power Platform Copilot and Security Copilot, along with copilot experiences for Nuance and Bing. With this momentum, it's easy to imagine copilots for many other development environments. Having built out these copilots, Microsoft began to see commonalities among them. This led to the creation of a common framework for copilot construction built on Azure AI. At Build, Microsoft unveiled how developers can use this framework to build out their own copilots.

Building Your Own Copilot

Foundational AI models are powerful, but they can't do everything. One limitation is that they often lack access to real-time context and private data. One way to get around this is by extending models through plugins with REST API endpoints that grab context for the task at hand. With Azure, this could be accomplished by building a ChatGPT plugin inside VS Code and GitHub Codespaces to help connect apps and data to AI. But you can also take this further by creating copilots of your own and even leveraging bespoke LLMs.

Understanding the Azure Copilot Stack

Part of the Azure OpenAI service is the new Azure AI Studio. This service enables developers to combine AI models like ChatGPT and GPT-4 with their own data, which could be used to build copilot experiences that are more intelligent and contextually aware. Users can tap into an open source LLM, use Azure OpenAI or bring their own AI model. The next step is creating a "meta-prompt" that provides a role for how the copilot should function. So, what's the process like? Well, first, you […]
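As a rough illustration of that meta-prompt step (a sketch, not Microsoft's reference code), the snippet below pins a copilot to a role and grounds it in private context through the Azure OpenAI chat API; the endpoint, API version, deployment name and Contoso billing domain are all placeholder assumptions.

```python
# Illustrative meta-prompt sketch against the Azure OpenAI service; the
# deployment name, API version and domain below are placeholders.
import os
from openai import AzureOpenAI  # assumes the openai package, v1 or later

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder API version
)

# The meta-prompt assigns the copilot its role and its guardrails.
META_PROMPT = (
    "You are a support copilot for Contoso's billing product. "
    "Answer only from the provided context; if unsure, say so."
)

def ask_copilot(question: str, context: str) -> str:
    """Combine the meta-prompt, private context and the user's question."""
    response = client.chat.completions.create(
        model="my-gpt4-deployment",  # placeholder deployment name
        messages=[
            {"role": "system", "content": META_PROMPT},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_copilot("How do I download an invoice?",
                  "Invoices live under Billing > History."))
```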

Read More

Skillsoft Survey Sees AI Driving Increased Need to Retrain IT Teams

More organizations than ever will need to invest in IT training as advances in artificial intelligence (AI) transform roles and responsibilities in the coming year. A survey of 2,740 IT decision-makers conducted by Skillsoft, a provider of an online training platform, finds two-thirds (66%) were already dealing with skills gaps in 2023. As AI becomes more pervasively applied to the management of IT, that skills gap is only going to widen, given the limited pool of IT professionals who have any experience using AI to manage IT.

Skillsoft CIO Orla Daly said it's already apparent AI creates an imperative for training because there are simply not enough IT people with the requisite skills. In fact, the survey finds nearly half of IT decision-makers (45%) plan to close skills gaps by training their existing teams. That training is crucial because the primary reason IT staff change jobs is a lack of growth and development opportunities, noted Daly. While there is naturally a lot of consternation over the potential elimination of IT jobs, in the final analysis AI will create more new types of jobs than it eliminates, added Daly. Each of those jobs will require new skills that will need to be acquired and honed, she noted. "Training is the price of innovation," said Daly.

In the meantime, there is much interest in finding ways to automate existing IT processes to create more time for IT teams to experiment with AI technologies, said Daly. The report also finds well over half of IT decision-makers (56%) expect their IT budgets to increase to help pay for new platforms and tools, compared with only 12% expecting a decrease.

It's not clear to what degree AI will transform the management of IT, but it's already apparent that many manual tasks involving, for example, generating reports are about to be automated. The instant summarization capabilities that generative AI enables also promise to dramatically reduce the time required to onboard new members to an incident response team. Rather than having to allocate someone on the team to bring new members up to speed, each new member of the team will use queries framed in natural language to determine for themselves the extent of the crisis at hand. In addition, many tasks that today require expensive specialists to perform might become more accessible to a wider range of organizations as, for example, more DevOps processes are automated. That doesn't necessarily mean that DevOps as an IT discipline disappears so much as it leads to the democratization of DevOps best practices.

Each IT organization in the year ahead will need to determine to what degree to rely on AI to manage IT processes. It may take a while before IT teams have enough confidence in AI to rely on it to manage mission-critical applications, but many of the tasks that today conspire to make the management of IT tedious will undoubtedly fade away. The challenge and the opportunity now is to identify those tasks today with an eye toward revamping how IT might be managed in the age of AI tomorrow.
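As a purely illustrative sketch of that onboarding scenario (no vendor tool is implied), a new responder might query an incident timeline in plain English; the model name and timeline entries below are invented for the example.

```python
# Illustrative sketch of natural-language incident onboarding; the model
# name and timeline data are invented for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def brief_new_responder(timeline: list[str], question: str) -> str:
    """Summarize an incident timeline in response to a plain-English query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You brief engineers joining an active incident. "
                        "Be concise and reference timeline entries."},
            {"role": "user",
             "content": "Timeline:\n" + "\n".join(timeline)
                        + f"\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

timeline = [
    "14:02 checkout latency p99 jumped from 300 ms to 4 s",
    "14:05 pods in payments-svc crash-looping after deploy 1842",
    "14:11 rollback of deploy 1842 started",
]
print(brief_new_responder(timeline, "What is the current blast radius?"))
```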

Read More

Best of 2023: Will ChatGPT Replace Developers?

As we close out 2023, we at DevOps.com wanted to highlight the most popular articles of the year. Following is the latest in our series of the Best of 2023.

AI is buzzing again thanks to the recent release of ChatGPT, a natural language chatbot that people are using to write emails, poems, song lyrics and college essays. Early adopters have even used it to write Python code, as well as to reverse engineer shellcode and rewrite it in C. ChatGPT has sparked hope among people eager for the arrival of practical applications of AI, but it also raises the question of whether it will displace writers and developers in the same way robots and computers have replaced some cashiers, assembly-line workers and, perhaps in the future, taxi drivers. It's hard to say how sophisticated AI text-creation capabilities will become as the technology ingests more and more examples of our online writing. But I see it having very limited capabilities for programming. If anything, it could end up being just another tool in the developer's kit, one that handles tasks that don't take the critical thinking skills software engineers bring to the table.

ChatGPT has impressed a lot of people because it does a good job of simulating human conversation and sounding knowledgeable. Developed by OpenAI, the creator of the popular text-to-image AI engine DALL-E, it is powered by a large language model trained on voluminous amounts of text scraped from the internet, including code repositories. It uses algorithms to analyze the text, and humans fine-tune the training of the system to respond to user questions with full sentences that sound like they were written by a human.

But ChatGPT has flaws – and the same limitations that hamper its use for writing content also render it unreliable for creating code. Because it's based on data, not human intelligence, its sentences can sound coherent but fail to provide critically informed responses. It also repurposes offensive content like hate speech. Answers may sound reasonable but can be highly inaccurate. For example, when asked which of two numbers, 1,000 and 1,062, is larger, ChatGPT will confidently deliver a fully reasoned explanation that 1,000 is larger. OpenAI's website provides an example of using ChatGPT to help debug code. The responses are generated from prior code and lack the capability to replicate human-based QA, which means the tool can generate code that contains errors and bugs. OpenAI acknowledged that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers." This is why it should not be used directly in the production of any programs.

The lack of reliability is already creating problems for the developer community. Stack Overflow, a question-and-answer website coders use to write and troubleshoot code, temporarily banned its use, saying there was such a huge volume of responses generated by ChatGPT that it couldn't keep up with quality control, which is done by humans: "Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers." Coding errors aside, because ChatGPT – like all machine learning tools – is trained on data that suits its outcome (in this case, a textual nature), it lacks the ability to understand the […]
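To make the "plausible-sounding but incorrect" failure mode concrete, here is a hypothetical example (not taken from OpenAI's site) of the kind of code a model can produce: it reads fluently, runs without error and passes casual review, yet gets a well-known edge case wrong.

```python
# Hypothetical example of plausible-looking generated code with a latent
# bug, illustrating why human QA remains necessary.

def is_leap_year(year: int) -> bool:
    """Return True if the given year is a leap year."""
    # BUG: the divisible-by-4 check short-circuits before the century
    # exceptions are applied, so 1900 incorrectly returns True here.
    # Generated code often gets exactly this kind of edge case wrong
    # while presenting the answer with complete confidence.
    if year % 4 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 400 == 0

print(is_leap_year(2024))  # True, correct
print(is_leap_year(1900))  # True, wrong: 1900 was not a leap year
```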

Read More

ScienceLogic Unveils Revamped AIOps Platform

Via an early access program, ScienceLogic this week made available an update, dubbed Hollywood, to its artificial intelligence for IT operations (AIOps) platform that, among other capabilities, provides root cause analysis that can be invoked via a natural language interface. For the first time, this release of the ScienceLogic SL1 platform incorporates predictive and generative AI technologies the company gained with the acquisition of Zebrium in 2022.

Michael Nappi, chief product officer for ScienceLogic, said Zebrium provided ScienceLogic with an AI model that surfaces insights in a way that is simpler for IT teams to understand and act on. In addition to recommending automated workflows to run, the platform gives IT teams the option to run them automatically when, for example, there is an IT incident that an existing workflow has been defined to address, he added. Each organization will need to determine its level of comfort in automating those processes based on the level of potential risk to the business, noted Nappi.

In addition, the SL1 user interface displays IT operational information at the business service level to provide IT teams with a guided experience that makes it simpler to prioritize tasks based on their relevance to the business, noted Nappi. There is also now an SL1 toolkit that DevOps teams can use to build or customize PowerPacks templates for monitoring specific processes and services. Finally, SL1 is now integrated with Slack and WebEx to streamline collaboration across IT teams; previously, the platform only supported Microsoft Teams. In effect, SL1 now provides IT teams with a cockpit through which they can invoke AI to autonomously manage a wide range of tasks, said Nappi.

It's not clear to what degree advances in AI might democratize the management of IT, but it is clear the level of expertise required to manage complex IT environments is declining. The overall goal is to reduce dependency on IT professionals, such as software engineers, who are hard to find and retain. The rate of change being made to complex IT environments is now also occurring faster than IT teams can track without the aid of AI, noted Nappi. ScienceLogic, in the longer term, is working toward making it possible to interact with a chatbot in real-time to enable IT teams to meet that challenge, he noted.

Each IT organization will need to decide for itself how heavily to rely on AI to manage IT functions, but in time, many IT professionals are not going to want to work for organizations that don't provide some level of AI to reduce the toil they regularly encounter today. Rather than having to manually perform a series of monotonous tasks, AI should enable IT professionals to act more like supervisors of an IT environment. It may be a while before that aspiration is fully realized, but in the meantime, IT teams would be well-advised to start identifying which tasks will soon be automated using AI, because roles and responsibilities will evolve. The challenge and the opportunity now is to determine how IT teams can add more value to the business as those transitions occur.

Read More

Coexisting With AI: The Future of Software Testing

If 2023 was the year of artificial intelligence (AI), then 2024 is going to be the year of human coexistence with the technology. Since the release of OpenAI's ChatGPT in November 2022, there has been a steady stream of competing large language models (LLMs) and integrated applications for specific tasks, including content, image processing and code production. It's no longer a question of if AI will be adopted; we have moved on to the question of how best to bring this technology into our daily lives. These are my predictions for the software quality assurance testing industry for 2024.

Automated testing will become a necessity, not a choice.

Developers will lean heavily on AI-powered copilot tools, producing more code faster. That means huge increases in the risk profile of every software release. In 2024, testers will respond by embracing AI-powered testing tools to keep up with developers using AI-powered tools and to avoid becoming the bottleneck in the software development life cycle (SDLC).

The role of the tester will increase and evolve.

While AI is helping software engineers and test automation engineers produce more code faster, it still requires the highly skilled eye of an experienced engineer to determine how good and usable the code or test is. In 2024, there will be high demand for skilled workers with specific domain knowledge who can parse the AI-generated output and determine whether it is coherent and useful within the specific business application. Although this is necessary for developers and testers to start trusting what the AI generates, they should be wary of spending inordinate amounts of time constructing AI prompts, as this can ultimately lead to decreased performance. For instance, a developer could easily spend most of their time validating the AI-generated output instead of testing the release that will be deployed to users. Being able to distinguish between when to rely on AI and when to forgo its help will be key to streamlining the workflow. Eventually, we're going to start seeing AI-powered testing tools for non-coders that focus on achieving repeatability, dependability and scalability so that testers can truly use AI as their primary testing tool and ultimately boost their productivity.

The rise of protected, offline LLMs and the manual tester.

As enterprise companies show signs they don't trust public LLMs (e.g., ChatGPT, Bard) with their data and intellectual property (IP), they're starting to build and deploy their own private LLMs behind secured firewalls. Fine-tuning those LLMs with domain-specific data (e.g., banking, health care) will require a great volume of testing. This promises a resurgence of the manual tester, who will play an increasingly important part in that process thanks to deep domain knowledge that is growing scarce across enterprises.

As we stand on the brink of 2024, it is evident that the synergy between artificial intelligence and human expertise will be the cornerstone of software quality engineering. Human testers must learn to harness the power of AI while contributing the irreplaceable nuance of human judgment. The year ahead promises to be one where human ingenuity collaborates with AI's efficiency to ensure that the software we rely on is not only functional but also reliable and secure. There will likely be a concerted effort to refine these […]

Read More

CircleCI Extends CI/CD Reach to AI Models

CircleCI this week revealed it is extending the reach of its continuous integration/continuous delivery (CI/CD) platform to make it simpler to incorporate artificial intelligence (AI) models into DevOps workflows. In addition to providing access to the latest generation of graphics processing units (GPUs) from NVIDIA via the Amazon Web Services (AWS) cloud, CircleCI has added inbound webhooks for accessing AI model curation services from providers such as Hugging Face, along with integrations with LangSmith, a debugging tool for generative AI applications, and the Amazon SageMaker service for building AI applications.

CircleCI CEO Jim Rose said that while there is clearly a lot of enthusiasm for incorporating AI models into applications, the processes being used are still immature, especially in terms of automating workflows that include testing of probabilistic AI models. Most AI models are built by small teams of data scientists that create a software artifact that needs to be integrated within a DevOps workflow just like any other artifact, noted Rose. The challenge is that most data science teams have not yet defined a set of workflows for automating the delivery of those artifacts as part of a larger DevOps workflow, he added. DevOps teams will also need to adjust their version control-centric approach to managing applications so that pipelines are triggered to pull AI software artifacts that exist outside of traditional software repositories. For example, the inbound webhooks provided by CircleCI now make it possible to automatically create a pipeline whenever an AI model residing on Hugging Face changes.

It's still early days as far as the deployment of AI models in production environments is concerned, but there is no doubt generative AI will have a major impact on how software is developed. AI models are a different class of software artifact: They are retrained rather than updated, a process that occurs intermittently. As such, DevOps teams need to keep track of each time an AI model is retrained to ensure applications are updated. At the same time, generative AI will also increase the pace at which other software artifacts are created and deployed. Many of the manual tasks that today slow down the rate at which applications are built and deployed will be eliminated. That doesn't mean there will be no need for software engineers, but it does mean the role they play in developing and deploying software is about to rapidly evolve.

DevOps teams need to evaluate both how generative AI will impact the tasks they manage and how the overall software development life cycle (SDLC) process needs to evolve. Each organization, as always, will need to decide for itself how best to achieve those goals depending on its use cases for AI, but the changes that generative AI will bring about are now all but inevitable. The longer it takes to adjust, the harder it will become to overcome the cultural and technical challenges that will be encountered along the way.
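To show the shape of that pattern, here is a rough sketch that watches a Hugging Face model for retraining and kicks off a CircleCI pipeline when it changes. CircleCI's inbound webhooks do this push-style without polling, so the polling loop below is only a stand-in; the model ID, project slug and token names are placeholder assumptions, and the API details should be verified against current CircleCI and huggingface_hub documentation.

```python
# Rough polling stand-in for the webhook-driven pattern described above;
# model ID, project slug and token names are placeholders.
import os
import time
import requests
from huggingface_hub import HfApi

MODEL_ID = "my-org/churn-model"   # hypothetical model repository
PROJECT = "gh/my-org/my-app"      # hypothetical CircleCI project slug
TRIGGER_URL = f"https://circleci.com/api/v2/project/{PROJECT}/pipeline"

def trigger_pipeline() -> None:
    """Start a CircleCI pipeline on the main branch via the v2 API."""
    resp = requests.post(
        TRIGGER_URL,
        headers={"Circle-Token": os.environ["CIRCLE_TOKEN"]},
        json={"branch": "main"},
    )
    resp.raise_for_status()

api = HfApi()
# last_modified is the timestamp huggingface_hub exposes on ModelInfo.
last_seen = api.model_info(MODEL_ID).last_modified
while True:
    time.sleep(300)  # poll every five minutes
    current = api.model_info(MODEL_ID).last_modified
    if current != last_seen:  # the model was retrained and re-pushed
        last_seen = current
        trigger_pipeline()
```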

Read More