Effort-Aware Just-in-Time Bug Prediction for Mobile Apps Via Cross-Triplet Deep Feature Embedding

Abstract: Just-in-time (JIT) bug prediction is an effective quality assurance activity that identifies whether a code commit will introduce bugs into a mobile app, aiming to provide prompt feedback to practitioners for priority review. Since collecting sufficient labeled bug data is not always feasible for some mobile apps, one possible approach is to leverage cross-app models. In this work, we propose a new cross-triplet deep feature embedding method, called CDFE, for the cross-app JIT bug prediction task. The CDFE method incorporates a state-of-the-art cross-triplet loss function into a deep neural network to learn high-level feature representations for the cross-app data. This loss function is adapted to the cross-app feature learning task: it learns a new feature space that shortens the distance between commit instances with the same label and enlarges the distance between commit instances with different labels. In addition, the loss function assigns higher weights to losses caused by cross-app instance pairs than to those caused by intra-app instance pairs, aiming to narrow the discrepancy of cross-app bug data. We evaluate our CDFE method on a benchmark bug dataset from 19 mobile apps with two effort-aware indicators. The experimental results on 342 cross-app pairs show that our proposed CDFE method performs better than 14 baseline methods.
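The weighting idea in the abstract can be sketched as follows. This is a minimal pairwise illustration, not the paper's exact loss: the true CDFE loss operates on triplets inside a deep network, and the `margin`, `w_cross`, and `w_intra` values here are illustrative assumptions.

```python
import numpy as np

def cross_triplet_style_loss(emb, labels, apps, margin=1.0, w_cross=2.0, w_intra=1.0):
    """Sketch of the CDFE idea over embedding pairs: same-label pairs are
    pulled together, different-label pairs are pushed apart by `margin`,
    and pairs drawn from different apps are weighted more heavily so the
    cross-app discrepancy dominates the objective."""
    n = len(labels)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(emb[i] - emb[j])
            w = w_cross if apps[i] != apps[j] else w_intra   # cross-app pairs count more
            if labels[i] == labels[j]:
                total += w * d ** 2                          # shorten same-label distance
            else:
                total += w * max(0.0, margin - d) ** 2       # enlarge different-label distance
    return total
```

For example, two identical same-label embeddings contribute nothing, while a close same-label pair from two different apps is penalized twice as heavily as the same pair within one app.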
 EXISTING SYSTEM :
• Most of the metrics used for traditional systems have been adapted to the mobile context. However, it is still unclear to what extent these metrics are effective.
• In particular, in the first study they analysed the performance of JIT bug prediction models on 11 software systems; they built a logistic regression model and validated it using 10-fold cross-validation.
• They conducted a large-scale empirical study to compare their approach with 43 state-of-the-art supervised and unsupervised methods under three commonly used performance evaluation scenarios: cross-validation, cross-project validation, and time-wise cross-validation.
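The validation protocol from the second point above can be sketched in a few lines with scikit-learn. The data here is a synthetic stand-in for commit metrics, not the study's dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # stand-in commit metrics
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "bug-inducing" labels

# Logistic regression validated with stratified 10-fold cross-validation
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(round(scores.mean(), 2))                   # mean CV accuracy
```

Stratified folds keep the buggy/clean ratio stable across folds, which matters because JIT bug data is typically imbalanced.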
 DISADVANTAGE :
• The defects induced by changes are often hard to track, difficult to resolve, and cause issues for developers.
• This problem led the research community to come up with techniques to identify and predict possible upcoming defects in software, so that conflicts can be resolved in a timely manner and debugging effort can be saved.
• They tackled the mentioned problem by developing a fine-grained prediction approach for version control systems, showing that fine-grained predictions perform better than coarse-grained ones.
• We interpreted this as a classification problem and proposed a methodology for effort-aware just-in-time prediction using a fusion-based method.
 PROPOSED SYSTEM :
• In the inner layer, they combine Decision Tree and Bagging to build a Random Forest model, while in the outer layer, they use random under-sampling to train many different Random Forest models and ensemble them once more using stacking.
• Indeed, they proposed a Just-in-Time quality assurance technique as a more practical alternative to traditional bug prediction techniques, able to provide defect feedback at commit time.
• Afterwards, they proposed a two-layer ensemble learning approach for Just-in-Time defect prediction.
• In this context, we will exploit the concept of local bug prediction, a technique that has already been successfully applied to improve the effectiveness of traditional bug prediction.
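The outer under-sampling layer described in the first point can be sketched as below. This is a simplified illustration, not the cited work's implementation: it combines the under-sampled forests by probability averaging rather than full stacking, and the sample counts are arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def undersampled_rf_ensemble(X, y, n_models=5, seed=0):
    """Outer layer of the two-layer idea: train several Random Forests,
    each on a balanced random under-sample of the majority class."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    models = []
    for k in range(n_models):
        sampled = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sampled])
        rf = RandomForestClassifier(n_estimators=50, random_state=k)
        rf.fit(X[idx], y[idx])
        models.append(rf)
    return models

def ensemble_predict(models, X):
    """Combine the forests by averaging class-1 probabilities
    (a simpler stand-in for the stacking step in the cited work)."""
    proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (proba >= 0.5).astype(int)
```

Each forest sees a balanced view of the data, so the combined model is less biased toward the clean-commit majority class.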
 ADVANTAGE :
• This is done in order to achieve maximum performance, build user trust, and enhance the overall quality of the product.
• Their assumption was largely correct, as the unsupervised methodology provided good performance without extra cost, but the false-prediction error was not resolved.
• As XGBoost is an implementation of gradient-boosted decision trees designed for speed and performance, it provides a memory-efficient solution for classification tasks.
• However, the performance of these learning mechanisms is highly dependent on the data used to train the model.
• Specifically, our results show that, considering the performance of a single classifier, Random Forest and XGBoost perform better than the other state-of-the-art methods.
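The abstract evaluates models with effort-aware indicators. One indicator commonly used in JIT defect prediction is recall within a fixed fraction of inspection effort (e.g., 20% of total changed lines); the paper's exact two indicators are not detailed here, so the following is an illustrative sketch:

```python
import numpy as np

def recall_at_effort(scores, churn, is_buggy, effort=0.2):
    """Effort-aware indicator sketch: inspect commits in descending
    prediction-score order until `effort` fraction of the total churn
    (changed LOC) is spent, then report the fraction of buggy commits
    caught within that budget."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    churn = np.asarray(churn, dtype=float)[order]
    buggy = np.asarray(is_buggy)[order]
    budget = effort * churn.sum()          # inspection budget in LOC
    inspected = np.cumsum(churn) <= budget # commits that fit in the budget
    return buggy[inspected].sum() / max(buggy.sum(), 1)
```

A model that ranks small, buggy commits first scores well here even if its plain classification accuracy is mediocre, which is why effort-aware indicators are preferred for prioritizing code review.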
