This activation function began appearing in the context of visual feature extraction in hierarchical neural networks in the late 1960s. It was later argued to have strong biological motivations and mathematical justifications. In 2011 it was shown to enable better training of deeper networks than the activation functions widely used before then, e.g., the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. As of 2017, the rectifier is the most popular activation function for deep neural networks.
Rectifying activation functions were used to separate specific excitation from unspecific inhibition in the neural abstraction pyramid, which was trained in a supervised way to learn several computer vision tasks. In 2011, the use of the rectifier as a non-linearity was shown to enable training deep supervised neural networks without unsupervised pre-training. Compared to the sigmoid and similar activation functions, rectified linear units allow faster and more effective training of deep neural architectures on large and complex datasets.
Leaky ReLUs allow a small, positive gradient when the unit is not active:

f(x) = \begin{cases} x & \text{if } x > 0, \\ 0.01x & \text{otherwise.} \end{cases}
Parametric ReLUs (PReLUs) take this idea further by making the coefficient of leakage a parameter that is learned along with the other neural-network parameters:

f(x) = \begin{cases} x & \text{if } x > 0, \\ a x & \text{otherwise.} \end{cases}
Note that for a ≤ 1, this is equivalent to

f(x) = \max(x, ax)
and thus has a relation to "maxout" networks.
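As a minimal sketch (not any library's official implementation), both variants can be written in a few lines of NumPy; the default leak coefficient and the test values below are illustrative assumptions:

import numpy as np

def leaky_relu(x, a=0.01):
    # Leaky ReLU: pass positive inputs through, scale negative inputs by a small fixed slope a.
    return np.where(x > 0, x, a * x)

def prelu(x, a):
    # Parametric ReLU: same form, but a is learned with the other network parameters.
    # For a <= 1 this equals max(x, a*x), which is how it relates to maxout units.
    return np.maximum(x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu(x))        # [-0.02  -0.005  0.     1.5  ]
print(prelu(x, a=0.25))     # [-0.5   -0.125  0.     1.5  ]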
This activation function is illustrated in the figure at the start of this article.
The SiLU (sigmoid linear unit) is another smooth approximation, first introduced in the GELU paper:

f(x) = x \cdot \sigma(x),

where \sigma(x) is the logistic sigmoid.
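Continuing the NumPy sketch above, the SiLU is simply the input multiplied by the logistic sigmoid (the helper name silu is an illustrative choice, not a reference to a specific library):

def silu(x):
    # SiLU / swish: x * sigmoid(x); smooth, and slightly negative for small negative inputs.
    return x / (1.0 + np.exp(-x))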
A smooth approximation to the rectifier is the analytic function

f(x) = \ln(1 + e^{x}),

which is called the softplus or SmoothReLU function.
A sharpness parameter k may be included:

f(x) = \frac{\ln(1 + e^{kx})}{k}.
The derivative of softplus is the logistic function. Starting from the parametric version,

f'(x) = \frac{e^{kx}}{1 + e^{kx}} = \frac{1}{1 + e^{-kx}}.
The multivariable generalization of the single-variable softplus is LogSumExp with the first argument set to zero:

\operatorname{LSE}_0^{+}(x_1, \dots, x_n) := \operatorname{LSE}(0, x_1, \dots, x_n) = \ln\left(1 + e^{x_1} + \cdots + e^{x_n}\right).
The LogSumExp function is

\operatorname{LSE}(x_1, \dots, x_n) = \ln\left(e^{x_1} + \cdots + e^{x_n}\right),
and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
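A small sketch of these two functions, written with the usual max-shift trick for numerical stability (the trick and the helper names are illustrative, not taken from the text above):

def softplus(x, k=1.0):
    # ln(1 + exp(k*x)) / k, rewritten as max(x, 0) + ln(1 + exp(-k*|x|)) / k
    # so that large |x| does not overflow the exponential.
    return np.maximum(x, 0.0) + np.log1p(np.exp(-k * np.abs(x))) / k

def logsumexp(xs):
    # ln(exp(x_1) + ... + exp(x_n)), shifted by the maximum for stability.
    m = np.max(xs)
    return m + np.log(np.sum(np.exp(xs - m)))

# softplus(x) coincides with LogSumExp(0, x) up to floating-point error:
print(softplus(3.0), logsumexp(np.array([0.0, 3.0])))   # both about 3.0486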
Exponential linear units (ELUs) try to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs:

f(x) = \begin{cases} x & \text{if } x > 0, \\ a\left(e^{x} - 1\right) & \text{otherwise,} \end{cases}

where a is a hyper-parameter to be tuned, and a \ge 0 is a constraint.
The ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the form f(x) = \max(-a, x), given the same interpretation of a.
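As a brief sketch under the same notation (the default a = 1.0 is an arbitrary illustrative choice):

def elu(x, a=1.0):
    # ELU: identity for positive inputs, a*(exp(x) - 1) for negative inputs,
    # so outputs can go below zero and mean activations are pulled toward zero.
    return np.where(x > 0, x, a * np.expm1(np.minimum(x, 0.0)))

def shifted_relu(x, a=1.0):
    # SReLU: a ReLU whose floor is -a instead of 0; the ELU is its smooth counterpart.
    return np.maximum(x, -a)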
Rectifier and softplus activation functions. The second one is a smooth version of the first.
Since the sigmoid h has a positive first derivative, its primitive, which we call softplus, is convex.