
SmartBear Acquires Reflect to Gain Generative AI-Based Testing Tool

SmartBear this week revealed it has acquired Reflect, a provider of a no-code testing platform for web applications that leverages generative artificial intelligence to create and execute tests. Madhup Mishra, senior vice president of product marketing at SmartBear, said the platform Reflect created will initially be incorporated into the company’s existing Test Hub platform before Reflect’s generative AI capabilities are added to other platforms.

Reflect provides a natural language interface for creating tests and is designed to invoke multiple large language models (LLMs). It can also infer the intent of a test to determine which elements to exercise, regardless of whether, for example, a button has been moved from one part of a user interface to another, said Mishra. Test step definitions, once approved, can also be automatically executed using scripts generated by the platform.

SmartBear has no plans to build its own LLMs, said Mishra. Rather, the company is focusing its efforts on providing the tools and prompt engineering techniques needed to operationalize them effectively, he added.

Reflect is the tenth acquisition SmartBear has made as part of an effort to provide lightweight hubs that address testing, the building of application programming interfaces (APIs) and the analysis of application performance and user experience. Last year, the company acquired Stoplight to gain API governance capabilities. Rather than building a single integrated platform, the company is focused on providing access to lightweight hubs that are simpler to invoke, deploy and maintain, said Mishra. The overall goal is to meet IT teams where they are, rather than requiring them to adopt an entirely new monolithic platform that forces organizations to rip and replace every tool they already have in place, he said.
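To illustrate the intent-based matching Mishra describes, a test runner can resolve a UI element by what it is for rather than by where it sits on the page. The following is a minimal, hypothetical sketch, not Reflect’s actual API: the element dictionaries and the `find_by_intent` helper are assumptions, and simple string similarity stands in for an LLM.

```python
from difflib import SequenceMatcher


def find_by_intent(elements, intent, threshold=0.6):
    """Return the element whose label best matches the stated intent,
    ignoring where the element sits in the layout."""
    best, best_score = None, threshold
    for element in elements:
        score = SequenceMatcher(
            None, element["label"].lower(), intent.lower()
        ).ratio()
        if score > best_score:
            best, best_score = element, score
    return best


# The same button, moved from the header to the footer between releases,
# still resolves because matching is by label similarity, not position.
page_v1 = [{"label": "Submit order", "region": "header"}]
page_v2 = [{"label": "Submit order", "region": "footer"}]

print(find_by_intent(page_v1, "submit")["region"])  # header
print(find_by_intent(page_v2, "submit")["region"])  # footer
```

A production tool would of course weigh far richer signals (element role, surrounding text, visual context), but the design point is the same: tests keyed to intent survive layout changes that break tests keyed to fixed selectors.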
There is little doubt at this point that generative AI will have a profound impact on application testing in a way that should ultimately improve the quality of applications. As the time required to create tests drops, more tests will be run. Today, it’s all too common for tests not to be conducted as thoroughly as they should be, simply because a developer lacked the expertise to create one or, with a delivery deadline looming, ran out of time.

Naturally, the rise of generative AI will also change how testing processes are managed. It’s not clear how far left generative AI will push responsibility for testing applications, but as more tests are created and run, they will need to be integrated into DevOps workflows. Of course, testing is only one element of a DevOps workflow that is about to be transformed by generative AI. DevOps teams should already be identifying manual tasks that can be automated using generative AI as part of an effort to further streamline workflows that, despite commitments to automation, still require too much time to execute and manage. Once those tasks are identified, DevOps teams can get a head start on redefining roles and responsibilities as generative AI is increasingly operationalized across those workflows.


Coexisting With AI: The Future of Software Testing

If 2023 was the year of artificial intelligence (AI), then 2024 is going to be the year of human coexistence with the technology. Since the release of OpenAI’s ChatGPT in November 2022, there has been a steady stream of competing large language models (LLMs) and integrated applications for specific tasks, including content creation, image processing and code production. It’s no longer a question of if AI will be adopted; we have moved on to the question of how best to bring this technology into our daily lives. These are my predictions for the software quality assurance testing industry for 2024.

Automated testing will become a necessity, not a choice. Developers will lean heavily on AI-powered copilot tools, producing more code faster. That means huge increases in the risk profile of every software release. In 2024, testers will respond by embracing AI-powered testing tools to keep up with developers using AI-powered tools and avoid becoming the bottleneck in the software development life cycle (SDLC).

The role of the tester will increase and evolve. While AI is helping software engineers and test automation engineers produce more code faster, it still requires the highly skilled eye of an experienced engineer to determine how good and usable the code or test is. In 2024, there will be high demand for skilled workers with specific domain knowledge who can parse AI-generated output and determine whether it’s coherent and useful within the specific business application. Although this scrutiny is necessary for developers and testers to start trusting what the AI generates, they should be wary of spending excessive time constructing AI prompts, as this can ultimately reduce productivity. For instance, a developer could easily spend most of their time validating the AI-generated output instead of testing the release that will be deployed to users.
Being able to distinguish between when to rely on AI and when to forgo its help will be key to streamlining the workflow. Eventually, we’re going to start seeing AI-powered testing tools for non-coders that focus on achieving repeatability, dependability and scalability so that testers can truly use AI as their primary testing tool and ultimately boost their productivity.

The rise of protected, offline LLMs and the manual tester. As enterprise companies show signs they don’t trust public LLMs (e.g., ChatGPT, Bard, etc.) with their data and intellectual property (IP), they’re starting to build and deploy their own private LLMs behind secured firewalls. Fine-tuning those LLMs with domain-specific data (e.g., banking, health care, etc.) will require a great volume of testing. This promises a resurgence of the manual tester, who will play an increasingly important part in that process thanks to deep domain knowledge that is scarce across enterprises.

As we stand on the brink of 2024, it is evident that the synergy between artificial intelligence and human expertise will be the cornerstone of software quality engineering. Human testers must learn to harness the power of AI while contributing the irreplaceable nuance of human judgment. The year ahead promises to be one where human ingenuity collaborates with AI’s efficiency to ensure that the software we rely on is not only functional but also reliable and secure. There will likely be a concerted effort to refine these […]
