Trait burn::tensor::ops::ActivationOps

pub trait ActivationOps<B>
where B: Backend,
{
    // Provided methods
    fn leaky_relu<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>, negative_slope: <B as Backend>::FloatElem) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn relu<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn relu_backward<const D: usize>(output: <B as Backend>::FloatTensorPrimitive<D>, grad: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn gelu<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn prelu<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>, alpha: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn gelu_backward<const D: usize>(x: <B as Backend>::FloatTensorPrimitive<D>, grad: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn sigmoid<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn sigmoid_backward<const D: usize>(output: <B as Backend>::FloatTensorPrimitive<D>, grad: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn hard_sigmoid<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>, alpha: <B as Backend>::FloatElem, beta: <B as Backend>::FloatElem) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn log_sigmoid<const D: usize>(tensor: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
    fn log_sigmoid_backward<const D: usize>(x: <B as Backend>::FloatTensorPrimitive<D>, grad: <B as Backend>::FloatTensorPrimitive<D>) -> <B as Backend>::FloatTensorPrimitive<D> { ... }
}

Activation function operations.

This trait lets backend implementations override activation functions for better performance.

Provided Methods§

fn leaky_relu<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
    negative_slope: <B as Backend>::FloatElem,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the LeakyReLU activation function.

§Arguments
  • tensor - The tensor.
  • negative_slope - The slope by which values smaller than 0 are multiplied.
§Returns

The output tensor.
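As a point of reference, the element-wise behavior matches the scalar sketch below (an illustration of the math with a hypothetical helper, not the backend implementation):

fn leaky_relu_scalar(x: f32, negative_slope: f32) -> f32 {
    // Illustrative reference: identity for non-negative inputs,
    // scaled by `negative_slope` otherwise.
    if x >= 0.0 { x } else { negative_slope * x }
}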

fn relu<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the ReLU activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
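Element-wise, ReLU zeroes out negative values; an illustrative scalar sketch (not the backend implementation):

fn relu_scalar(x: f32) -> f32 {
    // Illustrative reference: max(x, 0).
    x.max(0.0)
}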

fn relu_backward<const D: usize>(
    output: <B as Backend>::FloatTensorPrimitive<D>,
    grad: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Computes the backward pass of the ReLU activation function.

§Arguments
  • output - The output tensor of the ReLU function.
  • grad - The gradient.
§Returns

The gradient.
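Because ReLU's output is positive exactly where its input is positive, the backward pass can be computed from the saved output alone; an illustrative scalar sketch:

fn relu_backward_scalar(output: f32, grad: f32) -> f32 {
    // Illustrative reference: pass the gradient through where the
    // activation was positive, zero it elsewhere.
    if output > 0.0 { grad } else { 0.0 }
}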

fn gelu<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the Gelu activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
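GELU is defined element-wise as x * Φ(x), with Φ the standard normal CDF. The sketch below uses the common tanh approximation purely for illustration; the trait does not prescribe whether a backend uses the exact erf-based form or an approximation:

fn gelu_scalar_tanh_approx(x: f32) -> f32 {
    // Illustrative reference: 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))).
    let c = (2.0 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044_715 * x * x * x)).tanh())
}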

fn prelu<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
    alpha: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the PReLU activation function.

§Arguments
  • tensor - The input tensor.
  • alpha - The weight tensor.
§Returns

The output tensor.
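PReLU behaves like LeakyReLU except that the negative slope comes from the learned alpha tensor; an illustrative scalar sketch for a single element and its corresponding alpha value:

fn prelu_scalar(x: f32, alpha: f32) -> f32 {
    // Illustrative reference: identity for non-negative inputs,
    // scaled by the learned `alpha` otherwise.
    if x >= 0.0 { x } else { alpha * x }
}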

fn gelu_backward<const D: usize>(
    x: <B as Backend>::FloatTensorPrimitive<D>,
    grad: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Computes the backward pass of the Gelu activation function.

§Arguments
  • x - The tensor.
  • grad - The gradient.
§Returns

The gradient.
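For reference, the gradient follows from the product rule on x * Φ(x): d/dx GELU(x) = Φ(x) + x * φ(x), where Φ and φ are the standard normal CDF and PDF, so the returned tensor is grad scaled element-wise by that derivative. Backends that implement the tanh approximation differentiate that approximation instead.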

fn sigmoid<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the Sigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
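Element-wise, the sigmoid maps inputs into (0, 1); an illustrative scalar sketch:

fn sigmoid_scalar(x: f32) -> f32 {
    // Illustrative reference: 1 / (1 + e^(-x)).
    1.0 / (1.0 + (-x).exp())
}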

fn sigmoid_backward<const D: usize>(
    output: <B as Backend>::FloatTensorPrimitive<D>,
    grad: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Computes the backward pass of the Sigmoid activation function.

§Arguments
  • output - The output tensor of the sigmoid function.
  • grad - The gradient.
§Returns

The gradient.
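Since dσ/dx = σ(x) * (1 - σ(x)), the backward pass needs only the saved forward output; an illustrative scalar sketch:

fn sigmoid_backward_scalar(output: f32, grad: f32) -> f32 {
    // Illustrative reference: grad * sigmoid(x) * (1 - sigmoid(x)),
    // expressed through the saved output.
    grad * output * (1.0 - output)
}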

fn hard_sigmoid<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
    alpha: <B as Backend>::FloatElem,
    beta: <B as Backend>::FloatElem,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the hard Sigmoid activation function.

§Arguments
  • tensor - The tensor.
  • alpha - The alpha value that the tensor is multiplied with.
  • beta - The beta value that is added to the tensor.
§Returns

The output tensor.
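Hard sigmoid is a piecewise-linear stand-in for the sigmoid: element-wise it computes clamp(alpha * x + beta, 0, 1). An illustrative scalar sketch:

fn hard_sigmoid_scalar(x: f32, alpha: f32, beta: f32) -> f32 {
    // Illustrative reference: linear ramp clamped to [0, 1].
    (alpha * x + beta).clamp(0.0, 1.0)
}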

fn log_sigmoid<const D: usize>(
    tensor: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Applies the LogSigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
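LogSigmoid computes ln(σ(x)) = -ln(1 + e^(-x)). The sketch below uses a numerically stable rearrangement, min(x, 0) - ln(1 + e^(-|x|)), which is one common way to avoid overflow for large negative inputs; the trait does not mandate a particular formulation:

fn log_sigmoid_scalar(x: f32) -> f32 {
    // Illustrative reference: stable form of ln(sigmoid(x)).
    x.min(0.0) - (-x.abs()).exp().ln_1p()
}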

fn log_sigmoid_backward<const D: usize>(
    x: <B as Backend>::FloatTensorPrimitive<D>,
    grad: <B as Backend>::FloatTensorPrimitive<D>,
) -> <B as Backend>::FloatTensorPrimitive<D>

Computes the backward pass of the LogSigmoid activation function.

§Arguments
  • x - The input tensor.
  • grad - The gradient.
§Returns

The output gradient.
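Since d/dx ln(σ(x)) = 1 - σ(x) = σ(-x), the backward pass scales the incoming gradient by σ(-x); an illustrative scalar sketch:

fn log_sigmoid_backward_scalar(x: f32, grad: f32) -> f32 {
    // Illustrative reference: grad * sigmoid(-x) = grad / (1 + e^x).
    grad / (1.0 + x.exp())
}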

Object Safety§

This trait is not object safe.

Implementations on Foreign Types§

impl<B> ActivationOps<Fusion<B>> for Fusion<B>
where B: FusionBackend,

Implementors§

impl<B, C> ActivationOps<Autodiff<B, C>> for Autodiff<B, C>

impl<E, Q> ActivationOps<LibTorch<E, Q>> for LibTorch<E, Q>
where E: TchElement, Q: QuantElement,

impl<E, Q> ActivationOps<NdArray<E, Q>> for NdArray<E, Q>
where E: FloatNdArrayElement, Q: QuantElement,

impl<F, I> ActivationOps<Candle<F, I>> for Candle<F, I>
where F: FloatCandleElement, I: IntCandleElement,

impl<R, F, I> ActivationOps<JitBackend<R, F, I>> for JitBackend<R, F, I>
where R: JitRuntime, F: FloatElement, I: IntElement,