REALIZATION OF THE RADIAL BASIS NEURAL NETWORK ON A MASSIVELY PARALLEL GPU

N. Matveyeva



Abstract

A parallel CUDA implementation of the RBFNN training algorithm is proposed. The algorithm is based on the sequential correction of centers, widths, and weights, with the weights corrected by minimizing a quadratic functional using the conjugate gradient method. Comparisons of RBFNN training times on different CPUs and GPUs are given, demonstrating the efficiency of the parallel implementation.


Keywords: Graphics Processing Unit, parallelism, CUDA, massively parallel architecture, radial basis neural network, RBFNN, differential equation


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License