User Acceptance Testing – The Biggest Hurdle for Enterprise Search Software

In the software world, the final stage of the development process is often the most critical. User Acceptance Testing (UAT), as this stage is called, requires that actual users test the software in real-world scenarios to confirm its functionality. There are different approaches to evaluating a software product's usability; what matters is that each of the software's functions is tested live and that the teams involved agree on an acceptance threshold beforehand. For very large-scale tests, tools such as Hewlett-Packard's Quality Center can help with management, and detailed guides are available to ease the process of standard User Acceptance Testing.

When it comes to testing enterprise search software, the rules and procedures of standard UAT cannot simply be applied, because search applications differ fundamentally from other types of software. Some of these differences include:

It is almost impossible to determine the true quality of a search application through UAT. The individual searcher judges the results subjectively, based on his or her understanding, education, experience, and so on. Since individual testers differ considerably in all of these respects, it is very difficult to pin down what exactly a good search result looks like.
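This subjectivity can be made visible with a minimal sketch. The query strings and graded scores below are purely illustrative: several testers grade the same result for the same query, and the spread of their scores quantifies how much they disagree.

```python
# Hypothetical graded relevance judgments (0 = irrelevant .. 3 = perfect)
# for one top result per query, one score per tester.
from statistics import mean, stdev

judgments = {
    "quarterly revenue report": [3, 2, 1, 3, 0],  # testers disagree strongly
    "vacation policy":          [2, 2, 3, 2, 2],  # testers largely agree
}

for query, scores in judgments.items():
    # A high spread means the testers cannot even agree on what "good" is.
    print(f"{query}: mean={mean(scores):.1f}, spread={stdev(scores):.2f}")
```

When the spread is large, no single acceptance threshold will satisfy every tester, which is exactly the problem the paragraph above describes.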

Unlike other software applications, which are designed to help the user perform specific tasks, enterprise search software simply offers the possibility of a search. With no defined process, it is difficult to develop a test script.

The nature of search invites the use of filters to modify a query and the results the software returns. Different users will apply different filters, in the most varied combinations, according to their needs; this creates yet another problem for testing search software.

Test scripts for search applications present yet another problem. Whenever the application fails to meet the user's required quality level, the user is expected to take notes, and that is a source of trouble in this context. If a user notes every action taken while testing the system, confusion, or at least an inefficient process, is almost guaranteed, because the user will keep losing track of the work itself. If the user instead writes the notes only after completing the tasks, details will be missed, because it is impossible for an average user to remember every query, the sequence in which the queries were made, which filters were used, and so on.
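One way around the note-taking problem is to record the testers' actions automatically rather than by hand. The sketch below assumes a hypothetical `run_search` entry point standing in for whatever the real application exposes; it logs each query, its filters, and a timestamp before delegating to the backend.

```python
import time

SEARCH_LOG = []

def run_search(query, filters=None):
    """Hypothetical search entry point; records the action, then delegates."""
    entry = {"ts": time.time(), "query": query, "filters": filters or {}}
    SEARCH_LOG.append(entry)
    # ... call the actual search backend here (omitted in this sketch) ...
    return []

# The tester just searches; the exact sequence of queries and filters
# reconstructs itself from the log afterwards.
run_search("expense policy", {"department": "finance"})
run_search("expense policy 2015")

for entry in SEARCH_LOG:
    print(entry["query"], entry["filters"])
```

With such a log, the tester only has to note a judgment ("the third result was wrong"), not the mechanics of how the query was built.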

Getting it Right
The way out of the UAT dilemma for an enterprise search application is to take the long-term approach of continual evaluation, reassessment, and improvement. This means having test procedures in place from the very beginning; they are then continually assessed and, where necessary, improved during the designated testing time-frame. These assessments do not end there; rather, they continue, so that in effect the application is being improved continuously.
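Continual evaluation can be sketched concretely: keep a fixed set of judged queries, re-run them after each tuning round, and track a simple quality metric over time. Precision at 5 is used here only as one common example; the query, document IDs, and judgments are invented for illustration.

```python
def precision_at_k(results, relevant, k=5):
    """Fraction of the top-k results that the testers judged relevant."""
    return sum(1 for doc in results[:k] if doc in relevant) / k

# Relevant document ids per query (would come from tester judgments).
judged = {"onboarding checklist": {"d1", "d4", "d9"}}

# Result lists from two successive tuning rounds of the search engine.
round_1 = {"onboarding checklist": ["d7", "d1", "d2", "d3", "d8"]}
round_2 = {"onboarding checklist": ["d1", "d4", "d7", "d9", "d2"]}

for label, results in [("round 1", round_1), ("round 2", round_2)]:
    scores = [precision_at_k(r, judged[q]) for q, r in results.items()]
    print(label, sum(scores) / len(scores))
```

Charting this number round after round is what makes "no more noticeable improvements" an observable condition rather than a gut feeling.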

It may make sense to assign one team member permanently to design and run the testing; this helps immensely in keeping the project manageable. Other factors to consider include the test documents to be used, the collection of data from a wide range of users, which the project manager must classify accordingly, and the preparation and review of search logs. A relevancy-tuning application such as Quepid may also improve the efficiency and success of the project.
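The search-log review mentioned above can start very simply: count the most frequent queries and flag those that returned no results at all. The log format below is an assumption; real applications will record something richer, but the analysis pattern is the same.

```python
from collections import Counter

# Hypothetical search-log entries: one dict per query execution.
log = [
    {"query": "travel policy",   "hits": 12},
    {"query": "travel policy",   "hits": 12},
    {"query": "sick leave form", "hits": 0},
    {"query": "org chart",       "hits": 3},
]

# The queries users run most often deserve the most tuning attention.
top_queries = Counter(e["query"] for e in log).most_common(3)

# Zero-result queries point at content gaps or indexing problems.
zero_result = sorted({e["query"] for e in log if e["hits"] == 0})

print("top:", top_queries)
print("no results:", zero_result)
```

Both lists feed directly back into the continual-evaluation loop: frequent queries become judged test queries, and zero-result queries become defects to investigate.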

In the end, it will again take a concerted effort to agree on an acceptance threshold, because of the various unique aspects that may be incorporated into the testing process. What matters here is collecting and analyzing data continuously, which makes it possible to chart testing results and improvements over time. With that information, a point will eventually arrive at which the teams involved can agree on a handover, once no further noticeable improvements have been recorded. It may not be possible to forecast in advance when that point will be reached, but deeper analysis of the data may yield a better estimate.
