He Initialization
Also known as Kaiming initialization, this technique was developed for neural networks that use the ReLU activation function. ReLU is a non-linear function that clips all negative inputs to zero, so its outputs have a non-zero, positive mean and carry roughly half the variance of their inputs. When we apply Xavier initialization to deep networks using ReLU, that lost variance compounds layer by layer: the activations shrink toward zero and the signal effectively vanishes with depth. He initialization compensates by doubling the variance of the weights, drawing them from a distribution with variance 2/n, where n is the layer's fan-in.
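
A minimal NumPy sketch (not from the original text) illustrates the effect, assuming a stack of fully connected ReLU layers and comparing Xavier-style scaling (variance 1/n) against He-style scaling (variance 2/n):

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # He / Kaiming initialization: std = sqrt(2 / fan_in) doubles the weight
    # variance to offset the half of the signal that ReLU discards.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def xavier_init(fan_in, fan_out):
    # Xavier-style initialization: std = sqrt(1 / fan_in).
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

# Push the same random input through 30 ReLU layers under each scheme.
x = rng.normal(size=(1024, 256))
h_he, h_xavier = x.copy(), x.copy()
for _ in range(30):
    h_he = np.maximum(h_he @ he_init(256, 256), 0.0)            # ReLU layer
    h_xavier = np.maximum(h_xavier @ xavier_init(256, 256), 0.0)

print("activation std after 30 layers, He:    ", h_he.std())      # stays on the order of the input scale
print("activation std after 30 layers, Xavier:", h_xavier.std())  # shrinks toward zero
```

Under these assumptions the Xavier-initialized activations lose roughly half their variance at every ReLU layer, while the He-initialized ones keep a roughly constant scale, which is exactly the behavior the factor of 2 is designed to achieve.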