How Do Usability Defects Differ from Non-Usability Defects? A Case Study on Open Source Projects

Nor Shahida Mohamad Yusop, John Grundy, Jean-Guy Schneider, Rajesh Vasa

Abstract


Usability is a subjective software quality attribute, and usability defects are often considered less critical to fix. One reason is that vague defect descriptions fail to convince developers of the validity of usability issues. Producing a comprehensive usability defect description can be challenging, especially when it comes to reporting relevant and important information. Prior research on improving defect report comprehension has focused either on defects in general or on other aspects of software quality improvement, such as triaging defect reports, metrics and prediction, and automatic defect detection and fixing. In this paper, we study 2,241 usability and non-usability defects from three open-source projects: Mozilla Thunderbird, Firefox for Android, and Eclipse Platform. We examine the presence of eight defect attributes (steps to reproduce, impact, software context, expected output, actual output, assumed cause, solution proposal, and supplementary information) and use various statistical tests to answer our research questions. In general, we found that usability defects are resolved more slowly than non-usability defects, even when compared with non-usability defect reports that contain less information. In terms of report content, usability defects often contain output details and software context, whereas non-usability defects are more commonly explained using supplementary information such as stack traces and error logs. Our findings extend the body of knowledge on software defect reporting, particularly in understanding the characteristics of usability defects, and may help software development practitioners improve their defect reporting practice.

Keywords


defect report; open-source; software repository mining; software defect repository; usability defects.



DOI: http://dx.doi.org/10.18517/ijaseit.10.1.10225



Published by INSIGHT - Indonesian Society for Knowledge and Human Development