The development of correct and effective software defect prediction (SDP) models is one of the utmost needs of the software industry. Statistics of many defect-related open-source datasets depict the class imbalance problem in object-oriented projects. Models trained on imbalanced data lead to inaccurate future predictions owing to biased learning and ineffective defect prediction. In addition, a large number of software metrics degrades model performance. This study aims at (1) identifying useful software metrics using correlation feature selection, (2) an extensive comparative analysis of 10 resampling methods for generating effective machine learning models from imbalanced data, (3) inclusion of stable performance evaluators (AUC, GMean, and Balance), and (4) statistical validation of the results. The impact of the 10 resampling methods is analyzed on the selected features of 12 object-oriented Apache datasets using 15 machine learning techniques. The performances of the developed models are evaluated using AUC, GMean, Balance, and sensitivity. The statistical results advocate the use of resampling methods to improve SDP, with random oversampling showing the best predictive capability among the developed defect prediction models. The study thus provides a guideline for identifying metrics that are influential for SDP.
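A minimal sketch of the kind of pipeline the study describes: random oversampling of the minority (defective) class, followed by evaluation with AUC, GMean, Balance, and sensitivity. The synthetic dataset and logistic regression model here are stand-ins (assumptions, not the paper's Apache data or its 15 techniques); GMean and Balance are computed from their standard SDP definitions, GMean = sqrt(TPR * TNR) and Balance = 1 - sqrt((0 - pf)^2 + (1 - pd)^2) / sqrt(2).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced dataset standing in for a defect dataset (~10% defective).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Random oversampling: duplicate minority-class samples until classes balance.
rng = np.random.default_rng(42)
minority = np.where(y_tr == 1)[0]
majority = np.where(y_tr == 0)[0]
extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
idx = np.concatenate([majority, minority, extra])
X_bal, y_bal = X_tr[idx], y_tr[idx]

model = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
y_pred = model.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
pd_ = tp / (tp + fn)                              # sensitivity (probability of detection)
pf = fp / (fp + tn)                               # false-alarm rate
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
gmean = np.sqrt(pd_ * (1 - pf))                   # sqrt(TPR * TNR)
balance = 1 - np.sqrt((0 - pf) ** 2 + (1 - pd_) ** 2) / np.sqrt(2)
print(f"AUC={auc:.3f} GMean={gmean:.3f} Balance={balance:.3f} Sensitivity={pd_:.3f}")
```

Oversampling is applied only to the training split, so the test set keeps its original imbalance and the reported metrics reflect realistic deployment conditions.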