Despite the popularity of Agile, UAT is as essential as ever
Is User Acceptance Testing (UAT) still necessary? According to René Ceelen and Derk-Jan de Grood, the answer is: yes, without a doubt. UAT provides information and insights that are essential for the actual validation and acceptance of a solution. That information is indispensable for organisations and teams that aim to maximise their impact with minimal resources.
This article was originally published on AGConnect.
Within agile-style software development, the distinction between different types of testing has faded, because testing preferably takes place within the sprints. As a result, User Acceptance Testing may seem obsolete. That is not the case.
UAT remains a type of testing that adds significant value, both to agile and to more traditional software development methods, as well as to the implementation of existing (ERP) software. UAT enables an optimal user experience and ensures that the delivered solution actually benefits users in their work. As an addition to the test process, UAT has clear business value: listening carefully to your users pays off. Moreover, you can involve users at an early stage of development and let them take part in validating the solution, for example during the UAT. Users know best what they need, have the most experience using the solution itself, and are not burdened by IT knowledge that might colour their feedback.
The User Acceptance Test
The UAT is an acceptance test that checks the quality of a system within the context of an organisation. End users test a solution to determine to what extent it is ready for use, usually by simulating daily usage: user-friendliness, work processes, links with other systems and the like are tested as if the system were already in production. Besides capturing users' experiences with the combination of a new system and (new) work processes, UAT also measures the level of acceptance of all the components before a new system is taken into production. There is little doubt about the importance of UAT, as shown by the clause in the terms and conditions of Nederland ICT: “If parties have not agreed on an acceptance test, the client accepts the quality of software as delivered (‘as is, where is’) and received by the client.” Even so, cutbacks on UAT are common.
Strength in numbers
Enterprise-size systems such as ERP systems have many different users, all working with different system configurations and settings. Within the test process, you naturally map the configurations and test scenarios you expect to cover. Because of the high number of variables in system configurations, enabling users to provide feedback yields a wealth of information on how the system is used and received. This specific feedback leads to a better system that adequately supports the work processes. Moreover, users who are involved in the test and development phases are more likely to accept the solution once it is delivered. Drawing on the experience of a large number of users therefore not only leads to a better system, it also improves acceptance by its future users. That is the strength that numbers offer.
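Mapping configurations to test scenarios can be as simple as enumerating the combinations of the configuration dimensions. The sketch below is a minimal illustration in Python; the dimensions (roles, modules, locales) and their values are hypothetical examples, not part of any specific ERP system:

```python
from itertools import product

# Hypothetical configuration dimensions for an ERP roll-out;
# in practice these come from your own system inventory.
roles = ["purchaser", "controller", "warehouse"]
modules = ["invoicing", "inventory"]
locales = ["nl-NL", "en-GB"]

# Every combination of role, module and locale is a candidate
# scenario to cover in the UAT.
scenarios = [
    {"role": r, "module": m, "locale": loc}
    for r, m, loc in product(roles, modules, locales)
]

print(len(scenarios))  # 3 roles x 2 modules x 2 locales = 12 scenarios
```

Even this toy example shows how quickly the scenario count grows, which is exactly why a large group of end users is needed to reach meaningful coverage.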
Validating test results
The test results of individual users have to be analysed in the right context. In a UAT, end users are asked for their feedback, and although they have a great deal of knowledge, they do not have the same skills as professional testers. They can hold up progress simply because they do not want change, or nitpick on every single detail.
This reluctant attitude intensifies when users have to adopt new ways of working, when processes change, or when a first version does not yet have all the features that were discussed and promised. Users tend to focus on features that matter greatly to them, but in the grand scheme of things their concerns are often irrelevant in the acceptance phase. It must be noted, however, that many seemingly small issues can point to a bigger problem.
UAT with large groups delivers many test results. This volume of feedback allows you to quantify all those individual experiences. You can quickly spot incidental feedback and see whether feedback is backed up (or contradicted) by the experiences of other end users. During the analysis of test results, we use a three-point measurement system, which allows us to compare isolated input with at least three other user experiences. To do this, you have to collect and compare results from multiple testers for each test step. You can do this with Microsoft Word or Microsoft Excel forms; they are easy to use, but also labour-intensive and an administrative burden. Fortunately, there are tools available that help you collect and compare results efficiently, while also providing the means to communicate effectively and take action.
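One way to make the three-point rule concrete is a small aggregation script that counts how many distinct testers reported the same issue in the same test step. This is a minimal sketch in Python; the tester names, test steps and issues are all hypothetical:

```python
from collections import defaultdict

# Hypothetical UAT feedback: (tester, test_step, issue) records.
feedback = [
    ("anna",  "create-order",    "save button unresponsive"),
    ("bram",  "create-order",    "save button unresponsive"),
    ("carla", "create-order",    "save button unresponsive"),
    ("daan",  "create-order",    "wrong font in order form"),
    ("anna",  "approve-invoice", "missing VAT field"),
]

# Count the distinct testers behind each (step, issue) pair.
reporters = defaultdict(set)
for tester, step, issue in feedback:
    reporters[(step, issue)].add(tester)

# Three-point rule: an issue confirmed by at least three testers
# is treated as structural; everything else is incidental feedback.
structural = {k for k, v in reporters.items() if len(v) >= 3}
incidental = {k for k, v in reporters.items() if len(v) < 3}

print(structural)  # {('create-order', 'save button unresponsive')}
```

In practice a test management tool would do this aggregation for you, but the principle stays the same: isolated observations are only promoted to defects once enough independent testers confirm them.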
Communicating beyond input
It is crucial to explain to users what is done with their test results. Users who make time in their schedule to test may rightly expect their input and feedback to be taken seriously. This may seem perfectly reasonable, but we have noticed that this crucial part of the communication is quite often skipped entirely. Explain to users which of their input is used to improve the system, and why the rest is not. This creates much goodwill, even when their (valid) points and remarks are not resolved. Thanks to the three-point measurement system mentioned earlier, we can explain that, although we understand their issue and its context, their problem does not match the bigger picture and is therefore not solved immediately.
After evaluation, the results from the UAT become part of the backlog (agile) or are registered as issues that need to be resolved. There is more than one system that helps you register everything where it belongs, such as Jira or DevOps. Tools such as TestMonitor and Testlink are available for test registration with integrated issue management.
Research shows that most results reported by users are not even IT-related. 65% of the issues raised in a typical ERP implementation are process-related and are the responsibility of the organisation rather than the software provider. Test results that stem from changed work processes, however much debate they stir up, do not effectively change the system at all. When that happens, it is probably wise for a team leader to explain the new processes and their necessity and relevance.
Not all reports are meant for developers. Make sure that reports suit the right audience. Jira and TFS are often viewed as tools solely for developers, and not every business unit manager is thrilled about that. It would therefore be wise to consider a more accessible and user-friendly solution if that helps to assess and process UAT results more efficiently.
A UAT reveals information that is relevant to the actual validation and acceptance of the solution. That information is of vital importance to organisations that try to maximise their impact with minimal resources. End users can be excellent testers, provided they are given a good structure and the right tools. With the right tooling, you can test with large groups of users and leverage their knowledge and experience, giving you high test coverage. It also enables you to compare test results from different users. The advantage: you can build a better system with a higher chance of acceptance.
Derk-Jan de Grood is a senior test consultant and agile advisor for Valori. As a trainer, consultant and agile coach he is involved in optimisations, operational test management and agile transformations. Derk-Jan is the author of multiple successful titles like TestGoal, Grip on IT and the anniversary publication of the Dutch Test Society regarding future trends in testing. In 2016, he published ‘Agile in the real world’, a book about Scrum. Derk-Jan won multiple awards, including the prestigious European Testing Excellence Award in 2014.
René Ceelen is the director and owner of TestMonitor, a consultancy firm specialised in ERP test management. He is a researcher at the Institute for Computing and Information Science at Radboud University Nijmegen.