Module burn::tensor::activation
The activation module.
Functions
- gelu: Applies the Gaussian Error Linear Units function as described in the paper Gaussian Error Linear Units (GELUs).
- hard_sigmoid: Applies the hard sigmoid function.
- leaky_relu: Applies the leaky rectified linear unit function.
- log_sigmoid: Applies the log sigmoid function.
- log_softmax: Applies the log softmax function on the input tensor along the given dimension.
- mish: Applies the Mish function as described in the paper Mish: A Self Regularized Non-Monotonic Neural Activation Function.
- prelu: Applies the Parametric ReLU activation function as described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. PReLU(x) = max(0, x) + α * min(0, x). The input tensor is assumed to be of shape [batch_size, channels, …]; alpha is assumed to be of shape [channels] or [1].
- quiet_softmax: Applies the “quiet softmax” function on the input tensor along the given dimension. This function is similar to the softmax function, but it allows for “no selection”, e.g., all outputs can tend to zero.
- relu: Applies the rectified linear unit function as described in the paper Deep Learning using Rectified Linear Units (ReLU).
- sigmoid: Applies the sigmoid function.
- silu: Applies the SiLU function.
- softmax: Applies the softmax function on the input tensor along the given dimension (see the usage sketch after this list).
- softmin: Applies the softmin function on the input tensor along the given dimension.
- softplus: Applies the softplus function.
- tanh: Applies the tanh function.
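The snippet below is a minimal usage sketch showing how these free functions compose: relu followed by softmax over the class dimension of a 2-D logits tensor. The classify_logits helper name is illustrative and not part of the crate, and exact import paths or signatures may differ slightly between burn versions.

```rust
use burn::tensor::{backend::Backend, Tensor};
use burn::tensor::activation::{relu, softmax};

// Hypothetical helper (not part of burn): turn raw logits of shape
// [batch_size, num_classes] into per-row probabilities.
fn classify_logits<B: Backend>(logits: Tensor<B, 2>) -> Tensor<B, 2> {
    // relu zeroes out negative elements of the tensor.
    let hidden = relu(logits);
    // softmax normalizes along dimension 1 so each row sums to 1.
    softmax(hidden, 1)
}
```

Functions that take extra parameters, such as prelu with its alpha tensor, follow the same pattern with additional arguments.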