The last 12 months have been hectic in almost every industry. Many organizations have struggled to plan their next steps, adjust processes to meet new customer expectations, and keep pace in a highly competitive software marketplace.
It is more important than ever to have the latest insights and trends at your fingertips, especially in the software testing and quality assurance (QA) world.
To give your organization the data it needs, TestMonitor conducted its annual poll of the software development community to learn how teams approach user acceptance testing (UAT), better understand common hurdles and outcomes, and collect the latest tips and best practices you can use to refine your QA processes for a strong 2023 and beyond.
The following is a summary of those results as well as the key takeaways and lessons learned.
Overview of Survey Respondents
Throughout the fall of 2022, TestMonitor polled professionals from around the world and across the software development industry, including project managers, QA testers, engineers, developers, and more.
There were respondents from a range of industries, including IT services, government, and healthcare as well as entertainment, financial services, and telecommunications. Overall, the top three industry groups represented include:
Survey respondents represented organizations from around the world with a diverse portfolio of testing responsibilities, including:
When asked about the ultimate aim of UAT testing at their organization, 60 percent of respondents said they use it to build the best product for their audience, 22 percent use it to collect end-user feedback, and 18 percent use it to help ensure compliance.
The Structure and Approach of UAT Testing
Even with so much buzz about automation and artificial intelligence, automated testing is used for functional testing less than half as often (34 percent) as manual testing (86 percent). In fact, the share of respondents using automated testing for functional testing dropped from 49 percent to 34 percent. What is clear, however, is that both forms of testing, manual and automated, remain common practices across the industry.
When it comes to non-functional testing, 36 percent of respondents manually run scripts for usability, load, and security tests, while 24 percent run these tests with automated testing tools.
Testing frequency was more evenly spread among respondents to the 2022 survey than in 2021, when 66 percent of UAT tests were performed at least once a month rather than on weekly or yearly cycles:
Similar to the results from the 2021 survey, respondents in 2022 confirmed a wide variability in how organizations use software to enable UAT.
In particular, 44 percent of respondents use specialized test management tools, such as testing and automation applications, while 20 percent lean solely on other generic tools, such as word processing and spreadsheet software. About 32 percent of respondents noted that their team uses both types of software to facilitate UAT.
Responses to the question about which aspect of running UAT was the most time-consuming showed that the preparation phase of “designing tests” took the longest (46 percent). “Planning tests” and “analyzing test results” were selected by 10 percent and 14 percent, respectively. The remaining 30 percent of respondents stated that “running tests” is the most time-consuming aspect of their quality assurance process.
The UAT Testing Team Composition
Not every software testing effort is able to include end users in UAT, and this shows in the responses. When asked whether end users are involved in UAT efforts, 40 percent said “yes, always,” 26 percent said “sometimes,” and 10 percent said “no.”
When asked how many end users are involved in their UAT efforts, the responses were:
UAT Testing Outcomes
Nearly one-third (32 percent) of respondents shared that the most challenging part of UAT testing was “designing tests,” while 26 percent selected “planning tests.” “Running tests” was the biggest challenge for 22 percent of respondents, and “analyzing results” was the biggest hurdle for 20 percent.
While all testing teams identified challenges to their UAT testing efforts, more than half (52 percent) of respondents believe that the overall quality of their approach is effective, rating it at least a 7 out of 10.
Forty-four percent of respondents rated their UAT testing as a 6 or worse in terms of effectiveness, and 10 percent selected 4 or worse.
Testing Team Working Model
The 2022 survey expanded the number of questions about where organizations perform their UAT testing, how this compared to the past, and how they expect it to change in the future.
Before the pandemic, 70 percent of UAT testing teams worked “mostly or primarily” in the office while 24 percent said that they had a hybrid model of in-office and remote work. Only 6 percent worked mostly remote before the pandemic.
In the last year, as expected, the results flipped dramatically. In 2022, the share of teams working a hybrid model jumped to 70 percent, while 14 percent worked mostly remote. Only 16 percent of teams worked mostly or primarily in the office.
Like many aspects of operations, UAT testing wasn’t immune to the effects of a shifting economic outlook. Forty-six percent of respondents reported minimal to no impacts on their team, while 28 percent noted a reduction in their testing budgets. Only 18 percent noted an increase in access to tools, resources, and budgets.
Best Practices Shared by Survey Respondents
In addition to sharing their perspectives on the logistics, quality, and structure of their UAT testing, respondents were also asked to share best practices.
The feedback focused on four main themes:
Bringing It All Together
The underlying industry trends from this year’s UAT Survey were quite clear: an increase in hybrid and remote work, a focus on tester engagement, and teams putting more focus on designing and planning tests. Although these may be common challenges, the right test management tool helps organizations overcome all of these hurdles with an intuitive, cloud-based platform.
TestMonitor is specifically designed to make manual, time-consuming processes simpler and smoother. This includes facilitating test case design, storage, and test-run sequencing, as well as enabling collaboration down the hall or around the world. All of these benefits let your team focus more time and energy on building a better product for your customers.
Ultimately, we hope you use these survey results to better understand quality assurance trends across the larger community and refine your own internal testing approach.