Struct SmoothL1Loss

pub struct SmoothL1Loss {
    pub beta: f32,
}
Computes the Smooth L1 Loss between predictions and targets.
This loss function uses L2 loss for small errors (below beta) and L1 loss for large errors (above beta), providing robustness to outliers while maintaining smooth gradients near |x - y| = 0.
§Mathematical Definition
For predictions x and targets y, the element-wise loss is:
- L_i = 0.5 * (x_i - y_i)² / beta, if |x_i - y_i| < beta
- L_i = |x_i - y_i| - 0.5 * beta, otherwise
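For intuition, the piecewise rule can be sketched as a scalar function. This is a standalone illustration of the formula above, not Burn's tensor implementation, which applies the same rule element-wise:

```rust
/// Smooth L1 loss for a single prediction/target pair
/// (scalar sketch of the element-wise definition above).
fn smooth_l1(x: f32, y: f32, beta: f32) -> f32 {
    let diff = (x - y).abs();
    if diff < beta {
        // Quadratic (L2-like) branch for small errors
        0.5 * diff * diff / beta
    } else {
        // Linear (L1-like) branch for large errors
        diff - 0.5 * beta
    }
}

fn main() {
    // Small error (|x - y| = 0.5 < beta = 1.0): quadratic branch
    println!("{}", smooth_l1(1.5, 1.0, 1.0)); // 0.5 * 0.25 / 1.0 = 0.125
    // Large error (|x - y| = 2.0 >= beta = 1.0): linear branch
    println!("{}", smooth_l1(3.0, 1.0, 1.0)); // 2.0 - 0.5 = 1.5
}
```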
§Notes
Smooth L1 loss is closely related to HuberLoss, since it is equivalent to HuberLoss scaled by 1/beta:

SmoothL1(x, y, beta) = Huber(x, y, beta) / beta

This leads to the following differences:

- As beta approaches 0, Smooth L1 loss converges to L1Loss, while HuberLoss converges to 0. When beta = 0, Smooth L1 loss is equivalent to L1 loss; however, the beta parameter in Burn must be positive, so L1Loss should be used in place of beta = 0.
- As beta approaches positive infinity, Smooth L1 loss converges to a constant 0 loss, while HuberLoss converges to L2Loss.
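The scaling identity can be checked numerically with standalone scalar versions of both losses (an illustration using the textbook definitions, not Burn's tensor code):

```rust
/// Huber loss for a single pair (standard definition with threshold delta).
fn huber(x: f32, y: f32, delta: f32) -> f32 {
    let diff = (x - y).abs();
    if diff < delta {
        0.5 * diff * diff
    } else {
        delta * (diff - 0.5 * delta)
    }
}

/// Smooth L1 loss for a single pair.
fn smooth_l1(x: f32, y: f32, beta: f32) -> f32 {
    let diff = (x - y).abs();
    if diff < beta {
        0.5 * diff * diff / beta
    } else {
        diff - 0.5 * beta
    }
}

fn main() {
    let beta = 2.0;
    // Both branches of the piecewise definition are exercised here.
    for &(x, y) in &[(0.5f32, 0.0), (3.0, 0.0), (-4.0, 1.0)] {
        // SmoothL1(x, y, beta) == Huber(x, y, beta) / beta
        assert!((smooth_l1(x, y, beta) - huber(x, y, beta) / beta).abs() < 1e-6);
    }
    println!("identity holds");
}
```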
§Example
use burn_nn::loss::{SmoothL1LossConfig, Reduction};
use burn::tensor::Tensor;

// Create Smooth L1 loss with the default beta = 1.0
let smooth_l1 = SmoothL1LossConfig::new().init();
let predictions: Tensor<Backend, 2> = /* model output */;
let targets: Tensor<Backend, 2> = /* ground truth */;

// Compute element-wise loss without reduction
let element_wise = smooth_l1.forward(predictions.clone(), targets.clone());

// Compute loss with mean reduction
let loss = smooth_l1.forward_with_reduction(predictions.clone(), targets.clone(), Reduction::Mean);

// Per-image loss: reduce over C, H, W → [batch, 1, 1, 1]
let loss_per_image = smooth_l1.forward_reduce_dims(predictions, targets, &[1, 2, 3]);

Fields§
beta: f32

Specifies the threshold at which to change between L1 and L2 loss. The value must be positive. Default: 1.0
Implementations§

impl SmoothL1Loss

pub fn forward<const D: usize, B>(
    &self,
    predictions: Tensor<B, D>,
    targets: Tensor<B, D>,
) -> Tensor<B, D>
where
    B: Backend,
Computes the element-wise smooth L1 loss without reduction.
§Arguments
- predictions - The model’s predicted values.
- targets - The ground truth target values.
§Returns
A tensor of the same shape as the inputs, containing the smooth L1 loss for each element.
§Shapes
- predictions: [...dims] - Any shape
- targets: [...dims] - Must match predictions shape
- output: [...dims] - Same shape as inputs
pub fn forward_with_reduction<const D: usize, B>(
    &self,
    predictions: Tensor<B, D>,
    targets: Tensor<B, D>,
    reduction: Reduction,
) -> Tensor<B, 1>
where
    B: Backend,
Computes the smooth L1 loss with reduction.
§Arguments
- predictions - The model’s predicted values.
- targets - The ground truth target values.
- reduction - Specifies how to reduce the element-wise losses:
  - Reduction::Mean or Reduction::Auto: Returns the mean of all element-wise losses.
  - Reduction::Sum: Returns the sum of all element-wise losses.
§Returns
A scalar tensor containing the reduced loss value.
§Shapes
- predictions: [...dims] - Any shape
- targets: [...dims] - Must match predictions shape
- output: [1] - Scalar loss value
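The two reduction modes correspond to the sum and the mean of the element-wise losses. A standalone sketch on plain slices (not Burn's tensor API) shows the relationship:

```rust
/// Smooth L1 loss for a single pair (scalar sketch of the element-wise formula).
fn smooth_l1(x: f32, y: f32, beta: f32) -> f32 {
    let diff = (x - y).abs();
    if diff < beta {
        0.5 * diff * diff / beta
    } else {
        diff - 0.5 * beta
    }
}

fn main() {
    let beta = 1.0;
    let predictions = [0.5f32, 2.0, -1.0];
    let targets = [0.0f32, 0.0, 0.0];

    // Element-wise losses, as forward() would compute them.
    let losses: Vec<f32> = predictions
        .iter()
        .zip(&targets)
        .map(|(&x, &y)| smooth_l1(x, y, beta))
        .collect();
    // losses = [0.125, 1.5, 0.5]

    let sum: f32 = losses.iter().sum(); // Reduction::Sum
    let mean = sum / losses.len() as f32; // Reduction::Mean (and Reduction::Auto)
    println!("sum = {sum}, mean = {mean}");
}
```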
pub fn forward_reduce_dims<const D: usize, B>(
    &self,
    predictions: Tensor<B, D>,
    targets: Tensor<B, D>,
    dims: &[usize],
) -> Tensor<B, D>
where
    B: Backend,
Computes the smooth L1 loss with reduction over specified dimensions.
Calculates element-wise smooth L1 loss, then takes the mean over the specified dimensions. Useful for per-sample or per-channel losses.
Dimensions can be provided in any order. They are sorted internally and reduced from highest to lowest to ensure indices remain valid.
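A shape-only sketch illustrates this behavior: each reduced dimension is kept with size 1 (keep-dim mean), and processing the sorted dims from highest to lowest keeps earlier indices valid. This is an illustration of the documented semantics, not Burn's actual implementation:

```rust
/// Shape-only sketch of the reduction performed by forward_reduce_dims:
/// each listed dimension collapses to size 1 while the rank is preserved.
fn reduced_shape(shape: &[usize], dims: &[usize]) -> Vec<usize> {
    let mut out = shape.to_vec();
    let mut dims = dims.to_vec();
    dims.sort_unstable();
    // Reduce from highest to lowest index; since the rank never
    // changes with keep-dim reductions, earlier indices stay valid.
    for &d in dims.iter().rev() {
        out[d] = 1;
    }
    out
}

fn main() {
    // [batch, C, H, W] reduced over C, H, W → [batch, 1, 1, 1];
    // note the dims are given out of order.
    println!("{:?}", reduced_shape(&[8, 3, 32, 32], &[3, 1, 2]));
}
```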
§Arguments
- predictions - The model’s predicted values.
- targets - The ground truth target values.
- dims - Dimensions to reduce over.
§Returns
A tensor with the specified dimensions reduced to size 1.
§Example
// Consider an image tensor with shape [batch, C, H, W]
let smooth_l1 = SmoothL1LossConfig::new().init();
// Per-image loss: reduce over C, H, W → [batch, 1, 1, 1]
let loss_per_image = smooth_l1.forward_reduce_dims(predictions, targets, &[1, 2, 3]);

Trait Implementations§
impl<B> AutodiffModule<B> for SmoothL1Loss
where
    B: AutodiffBackend,

type InnerModule = SmoothL1Loss

fn valid(&self) -> <SmoothL1Loss as AutodiffModule<B>>::InnerModule

fn from_inner(
    module: <SmoothL1Loss as AutodiffModule<B>>::InnerModule,
) -> SmoothL1Loss
impl Clone for SmoothL1Loss

fn clone(&self) -> SmoothL1Loss

fn clone_from(&mut self, source: &Self)

impl Debug for SmoothL1Loss
impl Display for SmoothL1Loss
impl<B> Module<B> for SmoothL1Loss
where
    B: Backend,

type Record = EmptyRecord

fn visit<V>(&self, _visitor: &mut V)
where
    V: ModuleVisitor<B>,

fn map<M>(self, _mapper: &mut M) -> SmoothL1Loss
where
    M: ModuleMapper<B>,

fn load_record(
    self,
    _record: <SmoothL1Loss as Module<B>>::Record,
) -> SmoothL1Loss

fn into_record(self) -> <SmoothL1Loss as Module<B>>::Record

fn to_device(self, _: &<B as BackendTypes>::Device) -> SmoothL1Loss

fn fork(self, _: &<B as BackendTypes>::Device) -> SmoothL1Loss

fn collect_devices(
    &self,
    devices: Vec<<B as BackendTypes>::Device>,
) -> Vec<<B as BackendTypes>::Device>

fn devices(&self) -> Vec<<B as BackendTypes>::Device>

fn train<AB>(self) -> Self::TrainModule
where
    AB: AutodiffBackend<InnerBackend = B>,
    Self: HasAutodiffModule<AB>,

fn num_params(&self) -> usize

fn save_file<FR, PB>(
    self,
    file_path: PB,
    recorder: &FR,
) -> Result<(), RecorderError>

fn load_file<FR, PB>(
    self,
    file_path: PB,
    recorder: &FR,
    device: &<B as BackendTypes>::Device,
) -> Result<Self, RecorderError>

fn quantize_weights(self, quantizer: &mut Quantizer) -> Self
impl ModuleDisplay for SmoothL1Loss

fn format(&self, passed_settings: DisplaySettings) -> String

fn custom_settings(&self) -> Option<DisplaySettings>

impl ModuleDisplayDefault for SmoothL1Loss
Auto Trait Implementations§
impl Freeze for SmoothL1Loss
impl RefUnwindSafe for SmoothL1Loss
impl Send for SmoothL1Loss
impl Sync for SmoothL1Loss
impl Unpin for SmoothL1Loss
impl UnwindSafe for SmoothL1Loss
Blanket Implementations§

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

impl<C> CloneExpand for C
where
    C: Clone,

fn __expand_clone_method(&self, _scope: &mut Scope) -> C

impl<T> CloneToUninit for T
where
    T: Clone,

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true, and into a Right variant otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, and into a Right variant otherwise.

impl<T> Pointable for T

impl<T> ToCompactString for T
where
    T: Display,

fn try_to_compact_string(&self) -> Result<CompactString, ToCompactStringError>

fn to_compact_string(&self) -> CompactString