doi: 10.17586/2226-1494-2023-23-6-1187-1197


Using topological data analysis for building Bayesian neural networks

A. S. Vatyan, N. F. Gusarova, D. A. Dobrenko, K. S. Pankova, I. V. Tomilov


Article in Russian

For citation:
Vatian A.S., Gusarova N.F., Dobrenko D.A., Pankova K.S., Tomilov I.V. Using topological data analysis for building Bayesian neural networks. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2023, vol. 23, no. 6, pp. 1187–1197 (in Russian). doi: 10.17586/2226-1494-2023-23-6-1187-1197


Abstract
For the first time, a simplified approach to constructing Bayesian neural networks is proposed that combines computational efficiency with the ability to analyze the learning process. The approach is based on Bayesianization of a deterministic neural network by randomizing parameters only at the interface level, i.e., a Bayesian neural network is formed from a given network by replacing its parameters with probability distributions whose means are the parameters of the original model. The efficiency metrics of the neural network constructed within this approach, and of a Bayesian neural network constructed through variational inference, were evaluated using topological data analysis methods. The Bayesianization procedure is implemented through graded variation of the randomization intensity. As baselines, two neural networks with identical structure were used: a deterministic and a classical Bayesian network. The input of each neural network was the original data of two datasets, both without noise and with added Gaussian noise. The zeroth and first persistent homologies were computed for the embeddings of the resulting neural networks on each layer. Classification quality was assessed with the accuracy metric. It is shown that, in all four scenarios, the barcodes for the embeddings on each layer of the Bayesianized neural network lie between the corresponding barcodes of the deterministic and Bayesian neural networks for both the zeroth and first persistent homologies; the deterministic neural network provides the lower bound and the Bayesian neural network the upper bound. Thus, the structure of data associations within a Bayesianized neural network is inherited from the deterministic model but acquires the properties of a Bayesian one.
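The Bayesianization step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the choice of a Gaussian distribution, and the use of a standard deviation proportional to the weight magnitude are all assumptions; the source only specifies that distributions are centered at the original parameters and that the randomization intensity is varied gradually.

```python
import numpy as np

def bayesianize(weights, intensity, n_samples=100, rng=None):
    """Replace deterministic weights with samples from Gaussians centered
    at the trained values; `intensity` grades the randomization strength."""
    rng = np.random.default_rng(rng)
    scale = intensity * np.abs(weights)  # spread grows with |w| (assumed scheme)
    # Each sample is one stochastic realization of the Bayesianized layer.
    return rng.normal(loc=weights, scale=scale,
                      size=(n_samples,) + weights.shape)

# With intensity 0 the layer stays deterministic; larger values move its
# behavior toward that of a fully Bayesian layer.
w = np.array([0.5, -1.2, 2.0])
samples = bayesianize(w, intensity=0.1, n_samples=1000, rng=0)
print(np.allclose(samples.mean(axis=0), w, atol=0.05))  # sample means stay near w
```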
It has been experimentally established that there is a relationship between the normalized persistent entropy computed on neural network embeddings and the accuracy of the neural network. The topology of the embeddings on the middle layer of the model turned out to be the most informative for predicting accuracy. The proposed approach can be used to simplify the construction of a Bayesian neural network from an already trained deterministic one, which opens up the possibility of increasing the accuracy of an existing neural network without ensembling it with additional classifiers. It also becomes possible to proactively evaluate the effectiveness of the generated neural network on simplified data without running it on a real dataset, which reduces the resource intensity of its development.
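The normalized persistent entropy used above can be computed from a barcode as follows. This is a sketch of the standard definition (Shannon entropy of the bar-lifetime distribution, divided by the logarithm of the number of bars), assuming the barcode is given as finite (birth, death) pairs; it is not the authors' code.

```python
import numpy as np

def normalized_persistent_entropy(barcode):
    """Normalized persistent entropy of a barcode given as (birth, death)
    pairs; infinite bars are assumed truncated to finite deaths beforehand."""
    lengths = np.array([death - birth for birth, death in barcode], dtype=float)
    lengths = lengths[lengths > 0]            # ignore zero-length bars
    p = lengths / lengths.sum()               # lifetime distribution
    entropy = -np.sum(p * np.log(p))          # Shannon entropy of lifetimes
    # Dividing by log(n) maps the value into [0, 1].
    return entropy / np.log(len(p)) if len(p) > 1 else 0.0

# A barcode with equal lifetimes has maximal normalized entropy (≈ 1.0).
print(normalized_persistent_entropy([(0, 1), (0, 1), (0, 1)]))
```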

Keywords: Bayesian neural networks, persistent homology, normalized persistent entropy, embedding, barcode

Acknowledgements. The work is supported by Grant RSF 23-11-00346.




This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright 2001-2024 © Scientific and Technical Journal of Information Technologies, Mechanics and Optics. All rights reserved.
