REALIZATION OF A RADIAL BASIS FUNCTION NEURAL NETWORK ON A MASSIVELY PARALLEL GPU
A parallel CUDA implementation of an RBFNN training algorithm is proposed. The algorithm is based on sequential correction of the centers, widths, and weights, with the weight correction performed by minimizing a quadratic functional using the conjugate gradient method. Comparisons of RBFNN training times on different CPUs and GPUs are given, demonstrating the efficiency of the parallel implementation.
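The weight-correction step described above can be illustrated with a minimal sketch. This is not the paper's CUDA code: it is a host-side NumPy version, under the assumption that the output weights w minimize the quadratic functional E(w) = ½ wᵀAw − bᵀw with A = ΦᵀΦ and b = Φᵀy, where Φ is the Gaussian RBF design matrix. The data, centers, widths, and function names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's implementation): Gaussian RBF output
# weights found by minimizing the quadratic functional
#   E(w) = 1/2 w^T A w - b^T w,  A = Phi^T Phi, b = Phi^T y,
# with the conjugate gradient method, as the abstract describes.

def rbf_design_matrix(x, centers, widths):
    """Phi[i, j] = exp(-(x_i - c_j)^2 / (2 * s_j^2))."""
    d = x[:, None] - centers[None, :]
    return np.exp(-d**2 / (2.0 * widths[None, :]**2))

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Minimize 1/2 w^T A w - b^T w for symmetric positive-definite A."""
    w = np.zeros_like(b)
    r = b - A @ w              # residual = negative gradient
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact line search along p
        w += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate direction update
        rs = rs_new
    return w

# Tiny synthetic regression problem (illustrative only).
x = np.linspace(-1.0, 1.0, 40)
y = np.sin(np.pi * x)
centers = np.linspace(-1.0, 1.0, 7)
widths = np.full(7, 0.4)

Phi = rbf_design_matrix(x, centers, widths)
A = Phi.T @ Phi + 1e-8 * np.eye(7)  # small ridge term for conditioning
b = Phi.T @ y
w = conjugate_gradient(A, b)
err = np.max(np.abs(Phi @ w - y))   # fit error of the trained output layer
```

In the GPU version of the paper, the expensive pieces of this loop — the matrix-vector products and the dot products — are exactly the operations that parallelize well across CUDA threads, which is what makes the conjugate gradient method a natural fit for this realization.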
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License