SmartBear Acquires Reflect to Gain Generative AI-Based Testing Tool

SmartBear this week revealed it has acquired Reflect, a provider of a no-code testing platform for web applications that leverages generative artificial intelligence to create and execute tests.

Madhup Mishra, senior vice president of product marketing at SmartBear, said the platform Reflect created will initially be incorporated into the company’s existing Test Hub platform before Reflect’s generative AI capabilities are added to other platforms.

Reflect provides a natural language interface for creating tests and is designed to invoke any of multiple large language models (LLMs) behind the scenes. The platform can also infer the intent of a test to determine which elements to exercise, so a test still works if, for example, a button has been moved from one part of the user interface to another, said Mishra. Once approved, test step definitions can also be executed automatically using scripts generated by the platform.
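The intent-based approach described above can be illustrated with a minimal sketch. The function and field names below are hypothetical, not Reflect's actual API; the point is simply that an element is resolved by what it means to the user (its label) rather than by where it sits in the page.

```python
# Hypothetical sketch of intent-based element location: match on visible
# text or accessibility label instead of position or DOM path, so a moved
# button is still found. `find_by_intent` is illustrative, not a real API.

def find_by_intent(dom, intent):
    """Return the first element whose visible text or accessibility
    label contains the intent string, regardless of position."""
    for el in dom:
        haystack = (el.get("text", "") + " " + el.get("aria_label", "")).lower()
        if intent.lower() in haystack:
            return el
    return None

# The "Submit" button is found whether it lives in the header or footer:
dom = [
    {"tag": "div", "text": "Header"},
    {"tag": "button", "text": "Submit order", "aria_label": "submit"},
]
button = find_by_intent(dom, "submit")
```

A position-based locator (e.g., "the second button in the footer") would break the moment the layout changed; matching on intent is what lets the test survive UI rearrangement.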

SmartBear has no plans to build its own LLMs, said Mishra. Rather, the company is focusing its efforts on providing the tools and prompt engineering techniques needed to effectively operationalize them, he added.
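Prompt engineering of the kind Mishra describes typically means wrapping a plain-English test description in a structured template before handing it to whichever LLM backend is configured. The template and function names below are illustrative assumptions, not SmartBear's actual implementation.

```python
# Hypothetical sketch of a prompt-engineering layer for test generation:
# a fixed template constrains whatever LLM is plugged in to emit
# executable, element-label-based test steps.

PROMPT_TEMPLATE = """You are a QA engineer. Convert the following test
description into numbered, executable test steps.

Test description: {description}

Rules:
- One action per step.
- Reference UI elements by their visible label, not their position.
"""

def build_test_prompt(description: str) -> str:
    """Fill the template; the result is the string sent to the LLM."""
    return PROMPT_TEMPLATE.format(description=description)

prompt = build_test_prompt("Log in and verify the dashboard loads")
```

Keeping the template in the tooling, rather than in each LLM, is what makes the models interchangeable: the operational knowledge lives in the prompt, not the model.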

Reflect is the tenth acquisition SmartBear has made as part of an effort to provide lightweight hubs for testing, building application programming interfaces (APIs) and analyzing application performance and user experience. Last year, the company acquired Stoplight to gain API governance capabilities.

Rather than building a single integrated platform, the company is focused on providing access to lightweight hubs that are simpler to invoke, deploy and maintain, said Mishra. The overall goal is to meet IT teams where they are rather than requiring them to adopt an entirely new monolithic platform that forces organizations to rip and replace every tool they already have in place, he said.

There is little doubt at this point that generative AI will have a profound impact on application testing in a way that should ultimately improve application quality. As the time required to create tests drops, more tests will be run. Today, it's all too common for tests not to be conducted as thoroughly as they should be because a developer either lacked the expertise to create them or, with a delivery deadline looming, simply ran out of time.

Naturally, the rise of generative AI will also change how testing processes are managed. It’s not clear how far left generative AI will push responsibility for testing applications, but as more tests are created and run, they will need to be integrated into DevOps workflows.

Of course, testing is only one element of a DevOps workflow that is about to be transformed by generative AI. DevOps teams should already be identifying manual tasks that can be automated using generative AI as part of an effort to further automate workflows that, despite commitments to automation, still require too much time to execute and manage. Once identified, DevOps teams can then get a head start on redefining roles and responsibilities as generative AI is increasingly operationalized across those workflows.