Overview
This release marks the debut of our CubeCL integration, which brings cross-platform GPU
programming capabilities directly to Rust. With CubeCL now supporting both CUDA and
WebGPU, Burn benefits from a new CUDA backend that can be enabled using the cuda-jit
feature. Please note that this backend is still considered experimental, and some
operations, particularly those related to vision, may experience issues.
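As a rough sketch of how the new backend can be opted into, the cuda-jit feature flag mentioned above would be enabled in Cargo.toml (the exact version string here is illustrative):

```toml
[dependencies]
# Enable the experimental CUDA backend via the cuda-jit feature
burn = { version = "0.14.0", features = ["cuda-jit"] }
```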
Additionally, this release features significant enhancements to ONNX support, including
bug fixes, new operators, and improvements in code generation.
As always, it also includes numerous bug fixes, performance enhancements, new tensor
operations, and improved documentation.
Burn 0.14.0 introduces a new tensor data format that significantly improves serialization
and deserialization speeds, along with Quantization, a new Beta feature in this release.
The format is not compatible with previous versions of Burn, but you can migrate
previously saved records using this guide.
Module & Tensor
• Add 0-dim tensor checks for creation ops and validate TensorData shape w/ num values
#2137@laggui
• Fix bug: Filling tensor containing f32::NEG_INFINITY will result in NaN for burn-ndarray
#2095@antimora
• Enhance slice operation to support more range variation
#1989@antimora
• Print module - implement module display for remaining modules (part2)
#1933@antimora
• Add seq start position when applying RoPE encoding
#1796@laggui
Bug Fixes
• Fix indices dim check in gather_update_outputs
#2149@laggui
• Bug/Remove Squeeze Panic for Multiple Dimensions
#2035@agelas
ONNX Support
• Add 1d and 2d modules for interpolate with scaling (also fix ONNX Resize op)
#2081@antimora
• Improve pickle (CandleTensor) conversions to NestedValue
#1944@antimora
• Feat: Implement ONNX RandomUniform + RandomNormal in burn-import
#1806@hexd0t
• [ONNX] Add not op and extend cast support to tensors
#1634@laggui
Bug Fixes
• Fix checks_channels_div_groups condition and ONNX conv import with groups
#2051@laggui
• Fix ONNX and PyTorch import section links in burn book
#1681@laggui
Enhancements
• Add scientific notation formatting for small metric values
#2136@laggui
• Consistent sync/async handling, allow more functions to be async for wasm.
#1936@ArthurBrussee
Refactoring
• Move HandleContainer and Tensor Ops descriptions from burn-fusion to burn-tensor
#1654@syl20bnr
Documentation & Examples
• Remove mention of example in backend section of the book
#2014@syl20bnr
• Fix image-classification-web + autotune flag usage
#2011@laggui
• update ARCHITECTURE.md links to project architecture section in contributor book
#1759@benbaarber
• Add hidden code snippets to guide example in Burn book [redo]
#1742@jwric
• Add info about enabling debugging for new contributors
#1719@AntBlo
CubeCL
• Remove CubeCL GELU kernel example reference (moved to CubeCL repo)
#2150@laggui
• Rename revision key to rev for cubecl dependencies in Cargo.toml
#2086@syl20bnr
• Fix cubecl version in Cargo.toml to correctly fetch the version tag
@syl20bnr
• Cube: variable reusability + refactor in cube macros
#1885@louisfd
• Cube: Vectorization + simple matmul implementation
#1866@louisfd
• [Refactor - Breaking] Refactor cube operations with better names & Support subgroup
operations
#1839@nathanielsimard
• Cube: support method call + prettier tensor metadata
#1829@louisfd
• Cube: first ported kernel + comptime support + variable reuse + cleanup
#1797@louisfd
Miscellaneous
• remove lto linker option to make build successful
#2123@tiruka
• Modify contributing md scripts to solve conflicts between doc and scripts
#2107@tiruka
• Bump Rust minimum version to 1.79
@syl20bnr
• Added parameter trust_remote_code to hf dataset call.
#2013@Haislich
• Set DEFAULT_MAX_TASKS to 1 when running tests
@syl20bnr
• LearnerBuilder "with_checkpointing_strategy" should use builder pattern
#1841@Icekey
• Add configurable application logger to learner builder
#1774@jwric
• Add Clone trait to the `OptimizerAdaptor` and Clone implementations to the optimizers
#1770@getumen
• Refactor: replace trait TemplateKernel by existing trait JitKernel
#1737@sebhtml
• Refactor element type to be decoupled from runtime
#1693@laggui
Bug Fixes
• Modified MNIST image link in the Hugging Face dataset
#2134@tiruka
• Fix warnings when using `record-backward-compat`
#1977@laggui
• Fix `DataSerialize` conversion for elements of the same type
#1832@laggui
• Fix inverted epoch - iteration counts in valid progress
#1699@laggui