An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

Nazri Mohd Nawi, Faridah Hamzah, Norhamreeza Abdul Hamid, Muhammad Zubair Rehman, Mohammad Aamir, Azizul Ramli Azhar

Abstract


Back Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks. However, BP learns slowly and sometimes gets trapped in local minima. These problems arise mainly from a constant, non-optimal learning rate (a fixed step size): its value is set once before training begins and applied unchanged to every training pattern, from the input layer through to the output layer. This fixed learning rate often leads the steepest-descent search towards failure. Therefore, to overcome these limitations of BP, this paper introduces an improved back propagation gradient descent with adaptive learning rate (BPGD-AL) that changes the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
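
To make the idea concrete, the sketch below trains a small feed-forward network on XOR with plain gradient descent while revising the learning rate every epoch. The adaptation rule shown (grow the step while the error falls, shrink it when the error rises, often called the "bold driver" heuristic) is an illustrative assumption, not the paper's exact BPGD-AL update; the network size, dataset, and the constants 1.05 and 0.7 are likewise arbitrary choices.

```python
# Minimal sketch of back propagation with an adaptive learning rate.
# NOTE: the "bold driver" adaptation rule below is an assumption for
# illustration only; it is not the exact BPGD-AL rule from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a standard BP benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 units (arbitrary size for the sketch).
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 0.5           # initial learning rate, adapted during training
prev_loss = np.inf

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = float(np.mean((out - y) ** 2))

    # Backward pass: gradients of the mean squared error.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Adaptive learning rate: reward progress, punish overshoot.
    if loss < prev_loss:
        lr *= 1.05     # error fell: take slightly bigger steps
    else:
        lr *= 0.7      # error rose: back off sharply
    prev_loss = loss

    # Gradient descent update with the current (local) step size.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.5f}, final learning rate: {lr:.4f}")
```

The point this heuristic shares with the abstract is that, instead of committing to a single fixed step size before training starts, the step size is revised locally from the observed error trend as learning proceeds.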

Keywords


Back Propagation; classification; momentum; adaptive learning rate; local minima; gradient descent



DOI: http://dx.doi.org/10.18517/ijaseit.7.5.2972



