Trait burn::tensor::ops::ActivationOps

pub trait ActivationOps<B>
where
    B: Backend,
{
    // Provided methods
    fn leaky_relu(tensor: <B as Backend>::FloatTensorPrimitive, negative_slope: <B as Backend>::FloatElem) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn relu(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn relu_backward(output: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn gelu(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn prelu(tensor: <B as Backend>::FloatTensorPrimitive, alpha: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn gelu_backward(x: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn sigmoid(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn sigmoid_backward(output: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn hard_sigmoid(tensor: <B as Backend>::FloatTensorPrimitive, alpha: <B as Backend>::FloatElem, beta: <B as Backend>::FloatElem) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn log_sigmoid(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
    fn log_sigmoid_backward(x: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive { ... }
}

Activation function operations.

This trait lets backend implementations override activation functions for better performance.

Provided Methods§

fn leaky_relu(tensor: <B as Backend>::FloatTensorPrimitive, negative_slope: <B as Backend>::FloatElem) -> <B as Backend>::FloatTensorPrimitive

Applies the LeakyReLU activation function.

§Arguments
  • tensor - The tensor.
  • negative_slope - The slope by which values smaller than 0 are multiplied.
§Returns

The output tensor.
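
For intuition, here is a minimal scalar sketch of the elementwise formula LeakyReLU computes. The helper name is illustrative only and is not part of the Burn API.

// Illustrative scalar sketch of LeakyReLU, not Burn's tensor implementation:
// f(x) = x if x >= 0, negative_slope * x otherwise.
fn leaky_relu_scalar(x: f32, negative_slope: f32) -> f32 {
    if x >= 0.0 { x } else { negative_slope * x }
}

fn main() {
    assert_eq!(leaky_relu_scalar(2.0, 0.01), 2.0);
    assert_eq!(leaky_relu_scalar(-4.0, 0.5), -2.0);
}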

fn relu(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the ReLU activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
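
ReLU simply zeroes out negative values. A standalone scalar sketch of that rule, not Burn's tensor code:

// ReLU: f(x) = max(x, 0), applied elementwise.
fn relu_scalar(x: f32) -> f32 {
    x.max(0.0)
}

fn main() {
    assert_eq!(relu_scalar(3.5), 3.5);
    assert_eq!(relu_scalar(-1.25), 0.0);
}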

fn relu_backward(output: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the ReLU activation function backward.

§Arguments
  • output - The output tensor.
  • grad - The gradient.
§Returns

The gradient.
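
The backward rule passes the incoming gradient through wherever the forward output is positive and blocks it elsewhere. A hedged scalar sketch with an illustrative helper name:

// ReLU backward: since output > 0 exactly where the input was positive,
// grad_in = grad where output > 0, and 0 elsewhere. Illustrative only.
fn relu_backward_scalar(output: f32, grad: f32) -> f32 {
    if output > 0.0 { grad } else { 0.0 }
}

fn main() {
    assert_eq!(relu_backward_scalar(2.0, 0.5), 0.5);
    assert_eq!(relu_backward_scalar(0.0, 0.5), 0.0);
}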

fn gelu(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the Gelu activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
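
Below is a scalar sketch using the common tanh approximation of GELU. Whether a given backend uses the exact erf form or this approximation is backend dependent, so treat the formula as illustrative.

// GELU, tanh approximation (an assumption, not necessarily what a backend computes):
// gelu(x) ~ 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
fn gelu_scalar(x: f32) -> f32 {
    let c = (2.0 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x.powi(3))).tanh())
}

fn main() {
    // Close to the identity for large positive inputs, close to 0 for large negative ones.
    println!("gelu(3.0)  ~ {}", gelu_scalar(3.0));
    println!("gelu(-3.0) ~ {}", gelu_scalar(-3.0));
}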

fn prelu(tensor: <B as Backend>::FloatTensorPrimitive, alpha: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the PReLU activation function.

§Arguments
  • tensor - The input tensor.
  • alpha - The weight tensor of learned negative slopes.
§Returns

The output tensor.
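
PReLU behaves like LeakyReLU except that the negative slope is a learned weight. In the sketch below a single scalar alpha stands in for the weight tensor; the helper is illustrative only.

// PReLU: positive inputs pass through, negative inputs are scaled by the learned alpha.
fn prelu_scalar(x: f32, alpha: f32) -> f32 {
    if x >= 0.0 { x } else { alpha * x }
}

fn main() {
    assert_eq!(prelu_scalar(1.5, 0.25), 1.5);
    assert_eq!(prelu_scalar(-2.0, 0.25), -0.5);
}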

fn gelu_backward(x: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the Gelu activation function backward.

§Arguments
  • x - The tensor.
  • grad - The gradient.
§Returns

The gradient.
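
The backward pass scales the incoming gradient by the derivative of GELU at the forward input x. The sketch below estimates that derivative numerically for illustration; it is not Burn's analytic implementation.

// Same tanh approximation of GELU as above (an assumption for illustration).
fn gelu_scalar(x: f32) -> f32 {
    let c = (2.0 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x.powi(3))).tanh())
}

// grad_in = grad * gelu'(x); the derivative is estimated with a central difference.
fn gelu_backward_scalar(x: f32, grad: f32) -> f32 {
    let eps = 1e-3;
    let dgelu = (gelu_scalar(x + eps) - gelu_scalar(x - eps)) / (2.0 * eps);
    grad * dgelu
}

fn main() {
    // For large positive x the derivative approaches 1, so the gradient passes through.
    println!("{}", gelu_backward_scalar(4.0, 1.0));
}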

fn sigmoid(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the Sigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
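
A scalar sketch of the logistic sigmoid formula, for reference only:

// sigma(x) = 1 / (1 + e^(-x))
fn sigmoid_scalar(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

fn main() {
    assert_eq!(sigmoid_scalar(0.0), 0.5);
    println!("sigmoid(4.0) ~ {}", sigmoid_scalar(4.0)); // close to 1
}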

fn sigmoid_backward(output: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the Sigmoid activation function backward.

§Arguments
  • output - The output tensor of the sigmoid function.
  • grad - The gradient.
§Returns

The gradient.
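
Because sigma'(x) = sigma(x) * (1 - sigma(x)), the backward pass can be computed from the forward output alone. An illustrative scalar sketch:

// grad_in = grad * output * (1 - output). Illustrative, not Burn's tensor code.
fn sigmoid_backward_scalar(output: f32, grad: f32) -> f32 {
    grad * output * (1.0 - output)
}

fn main() {
    // The derivative peaks at output = 0.5 (i.e. at x = 0), where it equals 0.25.
    assert_eq!(sigmoid_backward_scalar(0.5, 1.0), 0.25);
}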

fn hard_sigmoid(tensor: <B as Backend>::FloatTensorPrimitive, alpha: <B as Backend>::FloatElem, beta: <B as Backend>::FloatElem) -> <B as Backend>::FloatTensorPrimitive

Applies the hard Sigmoid activation function.

§Arguments
  • tensor - The tensor.
  • alpha - The slope by which the tensor is multiplied.
  • beta - The offset that is added to the scaled tensor.
§Returns

The output tensor.
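
Hard sigmoid is a piecewise-linear approximation of the sigmoid, clamp(alpha * x + beta, 0, 1). A scalar sketch assuming that formula; the alpha and beta values in the example are commonly used choices, not Burn defaults.

// Hard sigmoid sketch: clamp(alpha * x + beta, 0, 1). Illustrative only.
fn hard_sigmoid_scalar(x: f32, alpha: f32, beta: f32) -> f32 {
    (alpha * x + beta).clamp(0.0, 1.0)
}

fn main() {
    // alpha = 0.2 and beta = 0.5 are typical values, assumed here for illustration.
    assert_eq!(hard_sigmoid_scalar(0.0, 0.2, 0.5), 0.5);
    assert_eq!(hard_sigmoid_scalar(10.0, 0.2, 0.5), 1.0);
    assert_eq!(hard_sigmoid_scalar(-10.0, 0.2, 0.5), 0.0);
}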

fn log_sigmoid(tensor: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the LogSigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
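
LogSigmoid computes log(sigma(x)). The sketch below uses a numerically stable rewrite, min(x, 0) - ln(1 + e^(-|x|)); the exact stabilization strategy a backend uses may differ.

// log(sigma(x)) = -ln(1 + e^(-x)), written in a form that avoids overflow
// for large negative inputs. Illustrative only.
fn log_sigmoid_scalar(x: f32) -> f32 {
    x.min(0.0) - (-x.abs()).exp().ln_1p()
}

fn main() {
    println!("log_sigmoid(0.0)    ~ {}", log_sigmoid_scalar(0.0));    // ~ -ln 2
    println!("log_sigmoid(-100.0) ~ {}", log_sigmoid_scalar(-100.0)); // ~ -100
}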

fn log_sigmoid_backward(x: <B as Backend>::FloatTensorPrimitive, grad: <B as Backend>::FloatTensorPrimitive) -> <B as Backend>::FloatTensorPrimitive

Applies the LogSigmoid activation function backward.

§Arguments
  • x - The input tensor.
  • grad - The gradient.
§Returns

The output gradient.
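
Since d/dx log(sigma(x)) = sigma(-x), the backward pass multiplies the incoming gradient by sigma(-x). An illustrative scalar sketch:

// grad_in = grad * sigma(-x) = grad / (1 + e^x). Illustrative only.
fn log_sigmoid_backward_scalar(x: f32, grad: f32) -> f32 {
    grad * (1.0 / (1.0 + x.exp()))
}

fn main() {
    // At x = 0 the derivative is 0.5.
    assert_eq!(log_sigmoid_backward_scalar(0.0, 1.0), 0.5);
}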

Dyn Compatibility§

This trait is not dyn compatible.

In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.

Implementations on Foreign Types§

impl<B> ActivationOps<Fusion<B>> for Fusion<B>
where B: FusionBackend,

impl<R> ActivationOps<BackendRouter<R>> for BackendRouter<R>
where R: RunnerChannel,

Implementors§

impl<B, C> ActivationOps<Autodiff<B, C>> for Autodiff<B, C>

impl<E, I, Q> ActivationOps<NdArray<E, I, Q>> for NdArray<E, I, Q>
where E: FloatNdArrayElement, I: IntNdArrayElement, Q: QuantElement,

impl<E, Q> ActivationOps<LibTorch<E, Q>> for LibTorch<E, Q>
where E: TchElement, Q: QuantElement,

impl<F, I> ActivationOps<Candle<F, I>> for Candle<F, I>
where F: FloatCandleElement, I: IntCandleElement,

impl<R, F, I, BT> ActivationOps<JitBackend<R, F, I, BT>> for JitBackend<R, F, I, BT>
where R: JitRuntime, F: FloatElement, I: IntElement, BT: BoolElement,