Analysis Of Gradient Descent In Back Propagation Neural Network For Maximum And Efficient Utilization

Authors

  • Aatif Jamshed
  • Bhawna Mallick
  • Rajendra Kumar Bharti

Keywords

ANN, GD, Backpropagation, Deep Learning, Calculus

Abstract

Back-propagation is a well-established training algorithm for multi-layer perceptron neural networks, although it has traditionally been applied with success mainly to small network architectures and small tasks. In this paper we highlight the important libraries for implementing neural networks. We then describe the feed-forward process in a neural network and how it is optimized by propagating errors backward through the previous layers to update the weights. We explain how gradient descent is computed and discuss the role and importance of activation functions in neural networks, and we provide deeper insight into the PyTorch and Keras libraries. The paper gives an overview of the fundamental theory of the backpropagation neural network, including architectural design, performance assessment, function approximation and learning capabilities, and other aspects of the architecture. The survey combines previously published material with some new findings, such as a formulation of the backpropagation network design that allows it to be regarded as a legitimate neural network. The article also examines gradient descent in depth, covering the role of cost functions as a barometer within machine learning, the different forms of gradient descent, and learning rates.
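As a concrete illustration of the process the abstract describes, the sketch below shows one feed-forward pass, one backpropagation pass, and one manual gradient-descent weight update in PyTorch. It is not the paper's code: the network shape, the random data, and the learning rate are hypothetical choices made only for illustration.

```python
import torch

# Minimal sketch of a single gradient-descent step (illustrative only):
# a tiny one-hidden-layer network trained on made-up data.
torch.manual_seed(0)

x = torch.randn(8, 4)   # 8 samples, 4 features (hypothetical data)
y = torch.randn(8, 1)   # hypothetical targets

w1 = torch.randn(4, 16, requires_grad=True)   # input -> hidden weights
w2 = torch.randn(16, 1, requires_grad=True)   # hidden -> output weights
lr = 0.01                                     # learning rate (assumed value)

# Feed-forward pass through a sigmoid activation function
hidden = torch.sigmoid(x @ w1)
pred = hidden @ w2

# The cost function acts as the "barometer": mean squared error here
loss = ((pred - y) ** 2).mean()

# Backpropagation: autograd computes dloss/dw for every weight tensor
loss.backward()

# Gradient-descent update: move each weight against its gradient,
# then clear the accumulated gradients for the next iteration
with torch.no_grad():
    w1 -= lr * w1.grad
    w2 -= lr * w2.grad
    w1.grad.zero_()
    w2.grad.zero_()
```

In practice the manual update block is usually replaced by an optimizer such as `torch.optim.SGD`, which performs the same weight update; the explicit form is shown here only to make the gradient-descent step visible.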

Published

2022-04-30