Evaluating and selecting software test automation tools: Synthesizing empirical evidence from practitioners
Thesis event information
Date and time of the thesis defence
Place of the thesis defence
Linnanmaa, L10, https://oulu.zoom.us/s/61688824555
Topic of the dissertation
Evaluating and selecting software test automation tools: Synthesizing empirical evidence from practitioners
Doctoral candidate
Master of Science Päivi Raulamo-Jurvanen
Faculty and unit
University of Oulu Graduate School, Faculty of Information Technology and Electrical Engineering, Empirical Software Engineering in Software, Systems and Services (M3S)
Subject of study
Information processing science
Opponent
Professor Kari Smolander, LUT University, Lappeenranta
Custos
Professor Mika Mäntylä, University of Oulu
Empirical research supporting tool selection for test automation
The key finding of this dissertation is that practitioners in the software industry have reached an apparent consensus on the criteria that matter in tool evaluation. However, aligned guidelines and systematic processes for evaluating and selecting the right tool(s) are lacking. The findings highlight how different tool evaluation criteria are interconnected and how context can affect them. In the software industry, test automation is an investment whose value to software development typically becomes visible only after a delay. Nevertheless, finding, evaluating, and selecting the right tool(s) is problematic and may be essential to the success of a business. Furthermore, there is minimal empirical evidence available for assessing experiential knowledge about these tools and the processes for selecting them.
The goal of this dissertation was to review and classify the state of the practice of tool evaluation and selection for software test automation among software practitioners, and to contribute empirically validated evidence to the process using a mixed-methods approach. The key lesson learned from the research is that empirical evidence obtained through data triangulation is valuable for identifying false claims, substantiating facts about the criteria, and revealing possible problems and misconceptions in the process of evaluating and selecting tools. Academic research enables a more comprehensive understanding of the phenomenon by synthesizing practitioner viewpoints through complementary methods.