Module activation


The activation module: common neural-network activation functions, applied element-wise or along a given dimension of a tensor.

Functions

gelu
Applies the Gaussian Error Linear Units function as described in the paper Gaussian Error Linear Units (GELUs).
hard_sigmoid
Applies the hard sigmoid function.
leaky_relu
Applies the leaky rectified linear unit function.
log_sigmoid
Applies the log sigmoid function.
log_softmax
Applies the log softmax function on the input tensor along the given dimension.
mish
Applies the Mish function as described in the paper Mish: A Self Regularized Non-Monotonic Neural Activation Function.
prelu
Applies the Parametric ReLU (PReLU) activation function as described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification: PReLU(x) = max(0, x) + α * min(0, x). The input tensor is assumed to be of shape [batch_size, channels, …] and alpha of shape [channels] or [1] (see the usage sketch after this list).
quiet_softmax
Applies the “quiet softmax” function on the input tensor along the given dimension. This function is similar to the softmax function, but it allows for “no selection”, e.g., all outputs can tend to zero.
relu
Applies the rectified linear unit function as described in the paper Deep Learning using Rectified Linear Units (ReLU).
sigmoid
Applies the sigmoid function.
silu
Applies the SiLU (sigmoid linear unit) function.
softmax
Applies the softmax function on the input tensor along the given dimension.
softmin
Applies the softmin function on the input tensor along the given dimension.
softplus
Applies the softplus function.
tanh
Applies the tanh function.
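
Usage sketch

A minimal sketch of how functions from this module compose, assuming it is the Burn crate's burn::tensor::activation module; the helper functions feed_forward and parametric_relu are hypothetical, and exact tensor constructor and parameter signatures may vary between versions.

```rust
use burn::tensor::activation::{gelu, prelu, relu, softmax};
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

/// Hypothetical helper combining a few activations from this module.
/// `input` is assumed to be of shape [batch_size, features].
fn feed_forward<B: Backend>(input: Tensor<B, 2>) -> Tensor<B, 2> {
    // Smooth non-linearity (GELU), followed by hard rectification (ReLU).
    let hidden = relu(gelu(input));
    // Normalize along the last dimension (dim = 1) into a probability distribution.
    softmax(hidden, 1)
}

/// Hypothetical PReLU example following the shape convention above:
/// `input` has shape [batch_size, channels, ...] and `alpha` holds one
/// learnable slope per channel (shape [channels] or [1]).
fn parametric_relu<B: Backend>(input: Tensor<B, 3>, alpha: Tensor<B, 1>) -> Tensor<B, 3> {
    // PReLU(x) = max(0, x) + alpha * min(0, x)
    prelu(input, alpha)
}
```

Note that these are free functions taking tensors by value, so they chain directly without an intermediate module or layer object.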