Second Order Learning Algorithm for Back Propagation Neural Networks

Nazri Mohd Nawi, Noorhamreeza Abdul Hamid, Noor Azah Samsudin, Mohd Amin Mohd Yunus, Mohd Firdaus Ab Aziz

Abstract


Training of artificial neural networks (ANN) is normally a time-consuming task due to the iterative search imposed by the implicit nonlinearity of the network behavior. In this work, an improvement to ‘batch-mode’ offline training methods, whether gradient based or gradient free, is proposed. The new procedure computes and improves the search direction along the negative gradient by introducing a ‘gain’ value into the activation functions and calculating the gradient of the error with respect to the weights as well as the ‘gain’ values when minimizing the error function. The main advantage of this new procedure is that it is easy to incorporate into other faster optimization algorithms such as the conjugate gradient method and the Quasi-Newton method. The performance of the proposed method, implemented within the conjugate gradient and Quasi-Newton methods, is demonstrated by comparing the simulation results with the neural network toolbox on the chosen benchmark. The results show that the proposed method considerably improves the convergence rate and significantly speeds up the learning process of the standard back propagation algorithm because of its new, more efficient search direction.
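
The listing below is not the authors' implementation; it is a minimal sketch (Python/NumPy) of the core idea stated in the abstract: a sigmoid activation with an adjustable ‘gain’ parameter, where batch back propagation computes the gradient of the error with respect to both the weights and the gains and updates them together. The function name train_bp_with_gain, the network size, and the plain gradient descent update are assumptions for illustration; the paper instead feeds these gradients into conjugate gradient and Quasi-Newton search directions.

import numpy as np

def sigmoid(net, gain):
    # Logistic activation with an adjustable 'gain' c: a = 1 / (1 + exp(-c * net))
    return 1.0 / (1.0 + np.exp(-gain * net))

def train_bp_with_gain(X, T, hidden=4, lr=0.5, epochs=5000, seed=0):
    """Batch back propagation that learns weights, biases and gain values."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, n_out))
    b2 = np.zeros(n_out)
    g1, g2 = 1.0, 1.0                                 # gains, trained like the weights

    for _ in range(epochs):
        # Forward pass (batch mode)
        net_h = X @ W1 + b1
        h = sigmoid(net_h, g1)
        net_o = h @ W2 + b2
        y = sigmoid(net_o, g2)

        err = y - T                                   # dE/dy for E = 0.5 * sum((y - T)^2)

        # Backward pass: gradients w.r.t. weights AND gains.
        # For a = sigmoid(c * net): da/dnet = c*a*(1-a), da/dc = net*a*(1-a)
        delta_o = err * g2 * y * (1.0 - y)            # dE/dnet_o
        dW2, db2 = h.T @ delta_o, delta_o.sum(axis=0)
        dg2 = np.sum(err * y * (1.0 - y) * net_o)     # dE/dg2

        back_h = delta_o @ W2.T                       # dE/dh
        delta_h = back_h * g1 * h * (1.0 - h)         # dE/dnet_h
        dW1, db1 = X.T @ delta_h, delta_h.sum(axis=0)
        dg1 = np.sum(back_h * h * (1.0 - h) * net_h)  # dE/dg1

        # Plain gradient descent step; the paper plugs the same gradients into
        # conjugate gradient / Quasi-Newton search directions instead.
        W1 -= lr * dW1;  b1 -= lr * db1
        W2 -= lr * dW2;  b2 -= lr * db2
        g1 -= lr * dg1;  g2 -= lr * dg2

    return W1, b1, W2, b2, g1, g2

# Usage on the XOR problem, a common back propagation benchmark (assumed here)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2, g1, g2 = train_bp_with_gain(X, T)
print(sigmoid(sigmoid(X @ W1 + b1, g1) @ W2 + b2, g2).round(2))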


Keywords


Back propagation algorithm; gradient descent; activation function; second order method; search direction


DOI: http://dx.doi.org/10.18517/ijaseit.7.4.1956


