Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy
Soft computing-based estimation by analogy is an attractive research domain for the software engineering community, and a considerable number of models have been proposed in this area. Researchers therefore need to compare these models to identify the best one for software development effort estimation. This study found that most prior work uses the mean magnitude of relative error (MMRE) and the percentage of predictions (PRED) to compare estimation models, even though these accuracy statistics have drawn substantial criticism from well-known authors: MMRE in particular has been shown to be an unbalanced, biased, and therefore inappropriate measure for identifying the best among competing estimation models. Nevertheless, domain researchers continue to adopt MMRE and PRED in their evaluation criteria, usually on the grounds that they are "widely used," which is not a valid reason. This study identified that these measures persist because no practical replacement for MMRE and PRED has been provided so far. As a remedy, the approach of partitioning a large dataset into subsamples was tried in this paper using an estimation by analogy (EBA) model. One small and one large dataset were considered: Desharnais and ISBSG Release 11, respectively. The ISBSG dataset, which is large relative to Desharnais, was partitioned into subsamples. The results suggest that when a large dataset is partitioned, MMRE produces the same or nearly the same results as it does on the small dataset. This indicates that MMRE can be trusted as a performance metric if large datasets are partitioned into subsamples.
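The measures discussed above have standard definitions: for a project with actual effort a and predicted effort p, the magnitude of relative error is MRE = |a − p| / a; MMRE is the mean MRE over all projects, and PRED(l) is the fraction of projects with MRE ≤ l (commonly l = 0.25). A minimal sketch of these measures and of the paper's subsample-partitioning idea is given below; the function names and the equal-sized chunking scheme are illustrative assumptions, not the paper's exact procedure.

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """PRED(l): fraction of projects whose MRE is at most `level` (commonly 0.25)."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= level)
    return hits / len(actual)

def mmre_per_subsample(actual, predicted, n_parts):
    """Partition a large dataset into roughly equal subsamples (illustrative
    chunking, not the paper's exact scheme) and report MMRE per subsample."""
    pairs = list(zip(actual, predicted))
    size = -(-len(pairs) // n_parts)  # ceiling division
    chunks = [pairs[i:i + size] for i in range(0, len(pairs), size)]
    return [mmre([a for a, _ in c], [p for _, p in c]) for c in chunks]
```

Comparing the per-subsample MMRE values of a large dataset against the MMRE of a small dataset is the kind of check the study performs on ISBSG Release 11 versus Desharnais.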
Published by INSIGHT - Indonesian Society for Knowledge and Human Development