Coexisting With AI: The Future of Software Testing
If 2023 was the year of artificial intelligence (AI), then 2024 is going to be the year of human coexistence with the technology. Since the release of OpenAI’s ChatGPT in November 2022, there has been a steady stream of competing large language models (LLMs) and integrated applications for specific tasks, including content creation, image processing and code generation. It’s no longer a question of if AI will be adopted; we have moved on to the question of how best to bring this technology into our daily lives. These are my predictions for the software quality assurance testing industry for 2024.
Automated testing will become a necessity, not a choice.
Developers will lean heavily on AI-powered copilot tools, producing more code faster. That means a sharp increase in the risk profile of every software release. In 2024, testers will respond by adopting AI-powered testing tools of their own so they can keep pace with developers and avoid becoming the bottleneck in the software development life cycle (SDLC).
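To make that concrete, here is a minimal sketch of what AI-assisted test generation might look like in practice. It is an illustration only: the model name, prompt and OpenAI client usage are assumptions, not a reference to any particular vendor's testing product, and the generated tests are treated as a draft for a human to review.

```python
# Hypothetical sketch: asking an LLM to draft unit tests for a function,
# then treating the output as a draft that a tester reviews before it
# enters the suite. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

SOURCE_UNDER_TEST = '''
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
'''

def draft_tests(source: str) -> str:
    """Ask the model for pytest-style tests covering normal and edge cases."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write concise pytest tests. Output code only."},
            {"role": "user",
             "content": f"Write pytest tests for this function:\n{source}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The generated tests are a starting point; a tester still reviews them
    # for coverage gaps and incorrect assumptions before merging.
    print(draft_tests(SOURCE_UNDER_TEST))
```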
The role of the tester will expand and evolve.
While AI is helping software engineers and test automation engineers produce more code faster, it still takes the highly skilled eye of an experienced engineer to judge whether the code or test is sound and usable. In 2024, there will be high demand for skilled workers with specific domain knowledge who can parse AI-generated output and determine whether it is coherent and useful within the specific business application. This scrutiny is necessary for developers and testers to start trusting what the AI generates, but they should be wary of spending excessive time crafting AI prompts, as that can ultimately hurt productivity. For instance, a developer could easily spend most of their time validating AI-generated output instead of testing the release that will actually be deployed to users.
Being able to distinguish when to rely on AI and when to forgo its help will be key to streamlining the workflow. Eventually, we’re going to see AI-powered testing tools for non-coders that focus on repeatability, dependability and scalability so that testers can truly use AI as their primary testing tool and ultimately boost their productivity.
The rise of protected, offline LLMs and the manual tester.
As enterprise companies show signs they don’t trust public LLMs (e.g., ChatGPT, Bard) with their data and intellectual property (IP), they’re starting to build and deploy their own private LLMs behind secured firewalls. Fine-tuning those LLMs with domain-specific data (e.g., banking, health care) will require a great deal of testing. This promises a resurgence of the manual tester, who will have an increasingly important role to play in that process: manual testers possess the deep domain knowledge that is increasingly scarce across enterprises.
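As a rough illustration of what that testing could look like, the sketch below checks a privately hosted, fine-tuned model against cases written by domain experts. The endpoint URL, payload shape and pass criterion are all assumptions for the sake of the example, since every in-house deployment will differ.

```python
# Hypothetical sketch: regression-testing a privately hosted, fine-tuned LLM
# against cases curated by domain experts (here, banking). The endpoint,
# payload format and scoring rule are illustrative assumptions.
import requests

PRIVATE_LLM_URL = "https://llm.internal.example.com/v1/generate"  # assumed

# Each case pairs a prompt with terms a correct answer must mention,
# written by a manual tester with deep domain knowledge.
DOMAIN_CASES = [
    {
        "prompt": "What is the penalty for withdrawing a fixed deposit early?",
        "must_mention": ["penalty", "interest"],
    },
    {
        "prompt": "Can a minor open a savings account on their own?",
        "must_mention": ["guardian"],
    },
]

def ask_model(prompt: str) -> str:
    """Send a prompt to the in-house model and return its text response."""
    resp = requests.post(PRIVATE_LLM_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]

def run_suite(cases: list[dict]) -> None:
    """Flag any answer that misses a term the domain expert expects to see."""
    for case in cases:
        answer = ask_model(case["prompt"]).lower()
        missing = [t for t in case["must_mention"] if t not in answer]
        status = "PASS" if not missing else f"FAIL (missing: {missing})"
        print(f"{status}: {case['prompt']}")

if __name__ == "__main__":
    run_suite(DOMAIN_CASES)
```

A keyword check like this is deliberately crude; the point is that the expected answers come from human domain expertise, which is exactly where manual testers add value.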
As we stand on the cusp of 2024, it is evident that the synergy between artificial intelligence and human expertise will be the cornerstone of software quality engineering. Human testers must learn to harness the power of AI while contributing the irreplaceable nuance of human judgment. The year ahead promises to be one in which human ingenuity collaborates with AI’s efficiency to ensure that the software we rely on is not only functional but also reliable and secure. There will likely be a concerted effort to refine these collaborations, ensuring that AI serves as a reliable support rather than an overpromised solution.