doi: 10.17586/2226-1494-2024-24-2-222-229


An optimized deep learning method for software defect prediction using Whale Optimization Algorithm

A. Aliyu Aihong, B. Imam Ya’u, U. Ali, A. Ahmad, M. Abdulrahman Lawal


Article in English

For citation:
Aliyu Aihong A., Imam Ya’u B., Ali U., Ahmad A., Abdulrahman Lawal M. An optimized deep learning method for software defect prediction using Whale Optimization Algorithm. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, vol. 24, no. 2, pp. 222-229. doi: 10.17586/2226-1494-2024-24-2-222-229


Abstract
The goal of this study is to predict software defects using Long Short-Term Memory (LSTM). The proposed system is an LSTM trained with the Whale Optimization Algorithm to reduce training time while improving the efficacy and detection rate of the deep learning model. MATLAB 2022a was used to develop the enhanced LSTM model. The study relied on 19 open-source software defect datasets obtained from the tera-PROMISE data collection. However, to compare the model's performance with other traditional approaches, the scope of this study is limited to five (5) of the most highly ranked benchmark datasets (DO1, DO2, DO3, DO4, and DO5). The experimental results reveal that the quality of the training and testing data has a significant impact on fault prediction accuracy, as is evident across the DO1 to DO5 datasets. Furthermore, all three deep learning algorithms tested in this study achieved their highest accuracy on the DO2 dataset. On the DO2 software defect data, the proposed method outperformed the two classical Convolutional Neural Network algorithms of Li and Nevendra, which attained accuracies of 0.922 and 0.942, respectively.
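The abstract describes training an LSTM with the Whale Optimization Algorithm (WOA) rather than relying on gradient-based training alone. The sketch below is a minimal Python/NumPy illustration of the standard WOA search loop (encircling, bubble-net spiral, and random-search phases); it is not the authors' MATLAB 2022a implementation. The parameter bounds and the quadratic fitness function are assumptions used only to keep the example self-contained: in the study, the fitness of a candidate would instead be the validation error of an LSTM trained with those parameters on a tera-PROMISE defect dataset.

import numpy as np

# Hypothetical search space: [hidden units, learning rate, dropout rate].
# The paper does not list the exact parameters WOA tunes, so these bounds are assumptions.
LOWER = np.array([16.0, 1e-4, 0.0])
UPPER = np.array([256.0, 1e-1, 0.5])

def fitness(position):
    # Placeholder objective: in the paper this would be the validation error of an
    # LSTM trained with the candidate parameters on a defect dataset.
    target = np.array([128.0, 1e-2, 0.2])          # hypothetical optimum
    scale = UPPER - LOWER
    return float(np.sum(((position - target) / scale) ** 2))

def woa(n_whales=20, n_iter=50, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = LOWER.size
    # Random initial population inside the bounds
    pop = LOWER + rng.random((n_whales, dim)) * (UPPER - LOWER)
    scores = np.array([fitness(x) for x in pop])
    best = pop[scores.argmin()].copy()
    best_score = scores.min()

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                 # a decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(), rng.random()
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            if rng.random() < 0.5:
                if abs(A) < 1.0:                   # exploitation: encircle the best whale
                    D = np.abs(C * best - pop[i])
                    pop[i] = best - A * D
                else:                              # exploration: move toward a random whale
                    rand = pop[rng.integers(n_whales)]
                    D = np.abs(C * rand - pop[i])
                    pop[i] = rand - A * D
            else:                                  # bubble-net spiral update around the best whale
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - pop[i])
                pop[i] = D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
            pop[i] = np.clip(pop[i], LOWER, UPPER)
            s = fitness(pop[i])
            if s < best_score:
                best_score, best = s, pop[i].copy()
    return best, best_score

if __name__ == "__main__":
    params, score = woa()
    print("best parameters:", params, "fitness:", score)

In the setting described by the paper, each fitness evaluation would train and validate an LSTM, which dominates the runtime; the WOA bookkeeping itself is inexpensive, which is consistent with the stated goal of reducing overall training effort.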

Keywords: deep learning, software defect prediction, whale optimization algorithms, long short-term memory, machine learning, optimization algorithm

Acknowledgements. This study is funded by the Abubakar Tafawa Balewa University (ATBU) in Bauchi, Nigeria.

References
  1. Wunsch A., Liesch T., Broda S. Groundwater level forecasting with artificial neural networks: a comparison of long short-term memory (LSTM), convolutional neural networks (CNNs), and non-linear autoregressive networks with exogenous input (NARX). Hydrology and Earth System Sciences, 2021, vol. 25, no. 3, pp. 1671–1687. https://doi.org/10.5194/hess-25-1671-2021
  2. Conneau A., Schwenk H., Barrault L., Lecun Y. Very deep convolutional networks for text classification. Proc. of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Vol. 1, Long Papers, 2017, pp. 1107–1116. https://doi.org/10.18653/v1/e17-1104
  3. Aljarah I., Faris H., Mirjalili S. Optimizing connection weights in neural networks using the whale optimization algorithm. Soft Computing, 2018, vol. 22, no. 1, pp. 1–15. https://doi.org/10.1007/s00500-016-2442-1
  4. Lipton Z.C., Berkowitz J., Elkan Ch. A critical review of recurrent neural networks for sequence learning. arXiv, 2015, arXiv:1506.00019. https://doi.org/10.48550/arXiv.1506.00019
  5. Xu Z., Li S., Xu J., Liu J., Luo X., Zhang Y., Zhang T., Keung J., Tang Y. LDFR: Learning deep feature representation for software defect prediction. Journal of Systems and Software, 2019, vol. 158, pp. 110402. https://doi.org/10.1016/j.jss.2019.110402
  6. Li Z., Jing X.Y., Zhu X. Progress on approaches to software defect prediction. IET Software, 2018, vol. 12, no. 3, pp. 161–175. https://doi.org/10.1049/iet-sen.2017.0148
  7. Dos Santos G.E., Figueiredo E. Failure of one, fall of many: An exploratory study of software features for defect prediction. Proc. of the IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM), 2020, pp. 98–109. https://doi.org/10.1109/SCAM51674.2020.00016
  8. Zain Z.M., Sakri S., Ismail N.H.A., Parizi R.M. Software defect prediction harnessing on multi 1-dimensional convolutional neural network structure. Computers, Materials and Continua, 2022, vol. 71, no. 1, pp. 1521. https://doi.org/10.32604/cmc.2022.022085
  9. Chen L., Fang B., Shang Z., Tang Y. Tackling class overlap and imbalance problems in software defect prediction. Software Quality Journal, 2018, vol. 26, no. 1, pp. 97–125. https://doi.org/10.1007/s11219-016-9342-6
  10. Nevendra M., Singh P. Software defect prediction using deep learning. Acta Polytechnica Hungarica, 2021, vol. 18, no. 10, pp. 173–189. https://doi.org/10.12700/aph.18.10.2021.10.9
  11. Ahmad A., Musa K.I., Zambuk F.U., Lawal M.A. Optimizing connection weights in a Long Short-Term Memory (LSTM) using Whale Optimization Algorithm (WOA): A review. Journal of Science, Technology and Education, 2022, vol. 10, no. 3, pp. 362–373.
  12. Hochreiter S., Schmidhuber J. Long short-term memory. Neural Computation, 1997, vol. 9, no. 8, pp. 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License