Cross-Project Just-in-Time Bug Prediction for Mobile Apps: An Empirical Assessment

Catolino, G.; Di Nucci, D.; Ferrucci, F.
2019-01-01

Abstract

Bug Prediction is an activity aimed at identifying defect-prone source code entities, allowing developers to focus their testing effort on specific areas of a software system. Recently, the research community proposed Just-in-Time (JIT) Bug Prediction with the goal of detecting bugs at commit level. While this topic has been extensively investigated in the context of traditional systems, to the best of our knowledge only a few preliminary studies have assessed the performance of the technique in a mobile environment, by applying the metrics proposed by Kamei et al. in a within-project scenario. The results of these studies highlighted that there is still room for improvement. In this paper, we addressed this problem to understand (i) which of Kamei et al.'s metrics are useful in the mobile context, (ii) whether different classifiers impact the performance of cross-project JIT bug prediction models, and (iii) whether the application of ensemble techniques improves the capabilities of the models. To carry out the experiment, we first applied a feature selection technique, i.e., InfoGain, to select relevant features and avoid multicollinearity in the models. Then, we assessed and compared the performance of four well-known classifiers and four ensemble techniques. Our empirical study involved 14 apps and 42,543 commits extracted from the COMMIT GURU platform. The results show that Naive Bayes achieves the best performance among the considered classifiers and in some cases outperforms well-known ensemble techniques.
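As a rough illustration of the evaluation setup described in the abstract, the sketch below shows a cross-project (leave-one-app-out) experiment that combines InfoGain-style feature selection, approximated here with scikit-learn's mutual information scorer, with a Naive Bayes classifier over Kamei et al.'s change metrics. This is not the authors' implementation: the dataset file, column names, and the number of retained features are assumptions made only for illustration.

```python
# Minimal sketch (assumptions, not the paper's pipeline) of cross-project
# JIT bug prediction: InfoGain-like feature selection via mutual information,
# a Naive Bayes classifier, and a leave-one-app-out evaluation.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score, matthews_corrcoef

# Hypothetical dataset: one row per commit with Kamei et al. change metrics,
# an "app" identifier, and a binary "buggy" label.
data = pd.read_csv("commits.csv")
metrics = ["ns", "nd", "nf", "entropy", "la", "ld", "lt",
           "fix", "ndev", "age", "nuc", "exp", "rexp", "sexp"]

for target_app in data["app"].unique():
    train = data[data["app"] != target_app]   # all other apps (cross-project)
    test = data[data["app"] == target_app]    # held-out app under prediction

    model = Pipeline([
        # Keep the most informative metrics, mimicking InfoGain filtering.
        ("select", SelectKBest(mutual_info_classif, k=8)),
        ("clf", GaussianNB()),
    ])
    model.fit(train[metrics], train["buggy"])
    pred = model.predict(test[metrics])
    print(target_app,
          "F1=%.2f" % f1_score(test["buggy"], pred),
          "MCC=%.2f" % matthews_corrcoef(test["buggy"], pred))
```

The same loop can be repeated with other classifiers or ensemble learners in place of GaussianNB to reproduce the kind of comparison the abstract describes.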
Year: 2019
ISBN: 978-1-7281-3395-9
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4735641
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 43
  • Web of Science (ISI): 39