Deep Learning Backward Propagation

  • By Sagar Gade
  • January 6, 2024
  • Deep Learning

In this blog, we will discuss Deep Learning Backward Propagation in more detail.

  • Forward Propagation is based on random weights. If we use random weights, the cost/error/loss will be high.
  • In order to reduce this error/cost/loss, we need to optimize the random weights. There are two weights we have used so far: Wxh and Why.
  • We can optimize these two weights by using the Gradient Descent technique.
  • As a result of forward propagation, we are in the output layer. We will now backpropagate the network from the output layer to the input layer and calculate the gradient of the cost function with respect to all the weights between the output layer and the input layer.
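The steps above can be sketched in code. This is a minimal, hypothetical example (not from the original post): a one-hidden-layer network with weights Wxh (input to hidden) and Why (hidden to output), where a forward pass is followed by a backward pass that computes the gradient of a squared-error cost with respect to both weights, and Gradient Descent updates them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(X, y, Wxh, Why):
    # mean squared error of the forward pass
    return float(np.mean((sigmoid(sigmoid(X @ Wxh) @ Why) - y) ** 2))

rng = np.random.default_rng(0)
X = rng.random((4, 2))     # 4 training samples, 2 features (toy data)
y = rng.random((4, 1))     # toy targets
Wxh = rng.random((2, 3))   # random weights: input -> hidden
Why = rng.random((3, 1))   # random weights: hidden -> output
lr = 0.5                   # learning rate (assumed value)

loss_before = cost(X, y, Wxh, Why)

for _ in range(200):
    # forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(X @ Wxh)
    y_hat = sigmoid(h @ Why)

    # backward pass: gradients of the cost w.r.t. Why, then Wxh
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # error at the output layer
    grad_Why = h.T @ d_out
    d_hidden = (d_out @ Why.T) * h * (1 - h)    # error propagated to hidden layer
    grad_Wxh = X.T @ d_hidden

    # Gradient Descent update of both weights
    Why -= lr * grad_Why
    Wxh -= lr * grad_Wxh

loss_after = cost(X, y, Wxh, Why)
```

After a few hundred updates, the cost computed from the optimized weights is lower than the cost obtained with the initial random weights, which is exactly why backpropagation is applied after forward propagation.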

For Free, Demo classes Call: 7507414653

Registration Link: Click Here!

Backward Propagation


Terminologies 

  • Forward Pass: implies the forward propagating from the input layer to the output layer. 
  • Backward Pass: implies the backpropagation from the output layer to the input layer. 
  • Epoch: specifies the number of times the neural network sees our whole training data. So, we can say that one epoch is equal to one forward pass and one backward pass for all the training samples. 
  • Batch Size: specifies the number of training samples we use in one forward pass and one backward pass.
  • Number of Iterations: implies the number of passes, where one pass = one forward pass + one backward pass.
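The terms above can be illustrated with a plain mini-batch loop. This is a hypothetical counting sketch (no real model; the sample counts are made up for illustration): each inner step stands for one forward pass plus one backward pass on a batch, i.e. one iteration, and one full sweep over the data is one epoch.

```python
n_samples = 12   # size of the whole training set (assumed toy value)
batch_size = 4   # samples per forward + backward pass (assumed toy value)
n_epochs = 2     # how many times the network sees the whole training data

iterations = 0
for epoch in range(n_epochs):
    # one epoch: the network sees all n_samples once
    for start in range(0, n_samples, batch_size):
        batch = list(range(start, min(start + batch_size, n_samples)))
        # one forward pass + one backward pass on this batch = one iteration
        iterations += 1

iterations_per_epoch = iterations // n_epochs
```

With 12 samples and a batch size of 4, each epoch takes 3 iterations, so 2 epochs take 6 iterations in total.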


Say we have 12,000 training samples and our batch size is 6,000. It will take us two iterations to complete one epoch. That is, in the first iteration, we pass the first 6,000 samples and perform the forward and backward pass; in the second iteration, we pass the next 6,000 training samples and perform the forward and backward pass. After two iterations, our neural network will have seen the whole 12,000 training samples, which makes it one epoch.
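The arithmetic in the example above is just the dataset size divided by the batch size:

```python
train_samples = 12000            # whole training set, from the example
batch_size = 6000                # samples per forward + backward pass
iterations_per_epoch = train_samples // batch_size  # iterations to complete one epoch
```

Here `iterations_per_epoch` comes out to 2, matching the worked example.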

Do visit our channel to learn more: Click Here

Author:

Sagar Gade

Call the Trainer and Book your free demo Class For Deep Learning Call now!!!

© Copyright 2021 | SevenMentor Pvt Ltd.
