Back-propagation

Back-propagation is an algorithm for computing the gradient of the loss function with respect to the weights of a neural network. It is the third step in the training process, after the forward pass and the computation of the loss.

The key mathematical tool is the chain rule from calculus. The chain rule gives the derivative of a composite function, and a neural network can be viewed as one large composite function: each layer applies a function to the output of the previous layer. A small sketch of the chain rule in action follows below.
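
To make this concrete, here is a small Python sketch (not from the original post) for a one-weight "network": loss(w) = (sigmoid(w * x) - y)^2. The input, target and weight values are made up for illustration; the point is that the derivative of the composite function falls out of the chain rule, and a finite-difference check confirms it.

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x, y, w = 1.5, 0.0, 0.8          # example input, target and weight (arbitrary values)

    z = w * x                        # inner function
    a = sigmoid(z)                   # outer function
    loss = (a - y) ** 2

    # Chain rule: dloss/dw = dloss/da * da/dz * dz/dw
    dloss_da = 2.0 * (a - y)
    da_dz = a * (1.0 - a)
    dz_dw = x
    dloss_dw = dloss_da * da_dz * dz_dw

    # Numerical check with a small finite difference
    eps = 1e-6
    numeric = ((sigmoid((w + eps) * x) - y) ** 2
               - (sigmoid((w - eps) * x) - y) ** 2) / (2 * eps)
    print(dloss_dw, numeric)         # the two values should agree closely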

Back-propagation computes these gradients very efficiently because it avoids redundant calculation of intermediate terms: each intermediate derivative is computed once and reused. The gradient is computed layer by layer, starting at the output layer and working backwards to the input layer.
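
The following sketch (again not from the original post, and using NumPy with made-up shapes, a sigmoid hidden layer and a squared-error loss as assumptions) shows this layer-by-layer flow: the forward pass caches intermediate values, and the backward pass reuses them and the already-computed output-layer error while moving back towards the input layer.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(3,))        # input
    y = np.array([1.0])              # target
    W1 = rng.normal(size=(4, 3))     # hidden-layer weights
    W2 = rng.normal(size=(1, 4))     # output-layer weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Forward pass: cache the intermediate values, they are reused below.
    z1 = W1 @ x
    a1 = sigmoid(z1)
    z2 = W2 @ a1                     # network output
    loss = 0.5 * np.sum((z2 - y) ** 2)

    # Backward pass: start at the output layer and move towards the input layer.
    delta2 = z2 - y                              # dloss/dz2
    grad_W2 = np.outer(delta2, a1)               # dloss/dW2, reuses the cached a1

    delta1 = (W2.T @ delta2) * a1 * (1.0 - a1)   # dloss/dz1, reuses delta2 instead of recomputing it
    grad_W1 = np.outer(delta1, x)                # dloss/dW1

    print(grad_W1.shape, grad_W2.shape)

Note how the output-layer error delta2 is computed once and then reused when forming delta1; with many layers, this reuse is exactly what saves the redundant work mentioned above.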
