ML0027 Leaky ReLU

What are the benefits of the Leaky ReLU activation function?

Answer

Leaky ReLU modifies the standard ReLU by allowing a small, non-zero gradient for negative inputs. Its formula is typically written as:
{\large \text{Leaky ReLU}(x) = \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0 \end{cases}}
where \alpha is a small positive constant (commonly 0.01).

Advantages of Leaky ReLU:
1. Addresses the dying ReLU problem: By having a small non-zero slope for negative inputs, Leaky ReLU allows a small gradient to flow even when the neuron is not active in the positive region. This prevents neurons from getting stuck in a permanently inactive state and potentially helps them recover during training.
2. Retains the benefits of ReLU for positive inputs: It keeps the linearity and non-saturating behavior of ReLU for positive values, preserving cheap computation and healthy gradient propagation.
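The two properties above can be seen directly in a minimal NumPy sketch of the forward pass and its gradient (the default slope `alpha=0.01` here is a common choice, not something fixed by the definition):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Identity for non-negative inputs, small slope alpha for negatives
    return np.where(x >= 0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.01):
    # Derivative is 1 for x >= 0 and alpha for x < 0 -- never exactly zero,
    # so some gradient always flows back, avoiding the dying-ReLU problem
    return np.where(x >= 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(leaky_relu(x))       # negatives scaled by alpha; non-negatives unchanged
print(leaky_relu_grad(x))  # alpha for negatives, 1 for non-negatives
```

Note that unlike standard ReLU, the gradient on the negative side is `alpha` rather than 0, which is what lets an inactive neuron keep receiving weight updates.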

