// burn/lib.rs
#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]

//! # Burn
//!
//! Burn is a comprehensive dynamic deep learning framework built in Rust,
//! with extreme flexibility, compute efficiency, and portability as its primary goals.
//!
//! ## Performance
//!
//! Because we believe the goal of a deep learning framework is to convert computation
//! into useful intelligence, we have made performance a core pillar of Burn.
//! We strive to achieve top efficiency by leveraging multiple optimization techniques:
//!
//! - Automatic kernel fusion
//! - Asynchronous execution
//! - Thread-safe building blocks
//! - Intelligent memory management
//! - Automatic kernel selection
//! - Hardware-specific features
//! - Custom backend extension
//!
//! ## Training & Inference
//!
//! The whole deep learning workflow is made easy with Burn: you can monitor training progress
//! on an ergonomic dashboard and run inference everywhere, from embedded devices to large GPU clusters.
//!
//! Burn was built from the ground up with both training and inference in mind. Compared to frameworks
//! like PyTorch, Burn simplifies the transition from training to deployment, eliminating the need
//! for code changes.
//!
//! ## Backends
//!
//! Burn strives to be as fast as possible on as many hardware platforms as possible, with robust implementations.
//! We believe this flexibility is crucial for modern needs, where you may train your models in the cloud
//! and then deploy them on customer hardware, which varies from user to user.
//!
//! Compared to other frameworks, Burn takes a very different approach to supporting many backends.
//! By design, most code is generic over the `Backend` trait, which allows us to build Burn with swappable backends.
//! This makes it possible to compose backends, augmenting them with additional functionality such as
//! autodifferentiation and automatic kernel fusion.
//!
//! - WGPU (WebGPU): Cross-platform GPU backend
//! - Candle: Backend using the Candle bindings
//! - LibTorch: Backend using the LibTorch bindings
//! - NdArray: Backend using the NdArray primitive as data structure
//! - Autodiff: Backend decorator that brings backpropagation to any backend
//! - Fusion: Backend decorator that brings kernel fusion to backends that support it
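//!
//! The decorator pattern behind `Autodiff` and `Fusion` can be sketched with stand-in
//! types (hypothetical, not Burn's actual API): a decorator is itself a backend, generic
//! over the backend it wraps, so decorators nest and compose freely at the type level.
//!
//! ```rust
//! // Hypothetical stand-ins illustrating the decorator idea, not Burn's real types.
//! trait Backend {
//!     fn name() -> String;
//! }
//!
//! struct NdArray;
//! impl Backend for NdArray {
//!     fn name() -> String { "ndarray".into() }
//! }
//!
//! // A decorator is itself a backend, generic over the backend it wraps.
//! struct Autodiff<B: Backend>(core::marker::PhantomData<B>);
//! impl<B: Backend> Backend for Autodiff<B> {
//!     fn name() -> String { format!("autodiff<{}>", B::name()) }
//! }
//!
//! // Swapping the inner backend requires changing only this alias.
//! type MyBackend = Autodiff<NdArray>;
//! assert_eq!(MyBackend::name(), "autodiff<ndarray>");
//! ```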
//!
//! ## Quantization (Beta)
//!
//! Quantization techniques perform computations and store tensors in lower-precision data types, such as 8-bit
//! integers, instead of floating-point precision. There are multiple approaches to quantizing a deep learning model.
//! In most cases, the model is trained in floating-point precision and later converted to the lower-precision data
//! type; this is called post-training quantization (PTQ). On the other hand, quantization-aware training (QAT)
//! models the effects of quantization during training. Quantization errors are thus modeled in the forward and
//! backward passes, which helps the model learn representations that are more robust to the reduction in precision.
//!
//! Quantization support in Burn is currently in active development. It supports the following modes on some backends:
//! - Static per-tensor quantization to signed 8-bit integer (`i8`)
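//!
//! As a concrete illustration, static per-tensor quantization maps a floating-point tensor
//! to `i8` using a single scale and zero-point derived from the tensor's value range. The
//! sketch below shows the generic affine quantization formula, not Burn's internal
//! implementation:
//!
//! ```rust
//! // Affine quantization: q = clamp(round(x / scale) + zero_point, -128, 127)
//! let (min, max) = (-1.5f32, 3.0f32); // observed value range of the tensor
//! let scale = (max - min) / 255.0; // i8 covers 256 distinct values
//! let zero_point = (-128.0 - min / scale).round() as i32;
//!
//! let quantize = |x: f32| -> i8 {
//!     ((x / scale).round() as i32 + zero_point).clamp(-128, 127) as i8
//! };
//! let dequantize = |q: i8| -> f32 { (q as i32 - zero_point) as f32 * scale };
//!
//! // The range endpoints map to the ends of the i8 range...
//! assert_eq!(quantize(min), -128);
//! assert_eq!(quantize(max), 127);
//! // ...and round-tripping loses at most half a quantization step.
//! let x = 1.0f32;
//! assert!((dequantize(quantize(x)) - x).abs() <= scale / 2.0);
//! ```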
//!
//! ## Feature Flags
//!
//! The following feature flags are available.
//! By default, the feature `std` is activated.
//!
//! - Training
//!   - `train`: Enables features `dataset` and `autodiff`, and provides a training environment
//!   - `tui`: Includes a text UI with progress bars and plots
//!   - `metrics`: Includes system info metrics (CPU/GPU usage, etc.)
//! - Dataset
//!   - `dataset`: Includes a datasets library
//!   - `audio`: Enables audio datasets (SpeechCommandsDataset)
//!   - `sqlite`: Stores datasets in a SQLite database
//!   - `sqlite_bundled`: Uses a bundled version of SQLite
//!   - `vision`: Enables vision datasets (MnistDataset)
//! - Backends
//!   - `wgpu`: Makes available the WGPU backend
//!   - `webgpu`: Makes available the `wgpu` backend with the WebGPU Shading Language (WGSL) compiler
//!   - `vulkan`: Makes available the `wgpu` backend with the alternative SPIR-V compiler
//!   - `cuda`: Makes available the CUDA backend
//!   - `rocm`: Makes available the ROCm backend
//!   - `candle`: Makes available the Candle backend
//!   - `tch`: Makes available the LibTorch backend
//!   - `ndarray`: Makes available the NdArray backend
//! - Backend specifications
//!   - `accelerate`: If supported, Accelerate will be used
//!   - `blas-netlib`: If supported, Netlib BLAS will be used
//!   - `openblas`: If supported, OpenBLAS will be used
//!   - `openblas-system`: If supported, the OpenBLAS installed on the system will be used
//!   - `autotune`: Enables running benchmarks to select the best kernel in backends that support it
//!   - `fusion`: Enables operation fusion in backends that support it
//! - Backend decorators
//!   - `autodiff`: Makes available the Autodiff backend
//! - Others
//!   - `std`: Activates the standard library (deactivate for no_std)
//!   - `server`: Enables the remote server
//!   - `network`: Enables network utilities (currently, only a file downloader with progress bar)
//!   - `experimental-named-tensor`: Enables named tensors (experimental)
//!
//! You can also check the details in sub-crates [`burn-core`](https://docs.rs/burn-core) and [`burn-train`](https://docs.rs/burn-train).
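//!
//! For example, to set up a training project on the WGPU backend with kernel fusion and
//! autotuning, you might enable features like this (an illustrative sketch; the version
//! number is a placeholder, so pin to the release your project actually uses):
//!
//! ```toml
//! [dependencies]
//! # version shown is illustrative only
//! burn = { version = "0.18", features = ["train", "wgpu", "fusion", "autotune"] }
//! ```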

pub use burn_core::*;

/// Train module
#[cfg(feature = "train")]
pub mod train {
    pub use burn_train::*;
}

/// Backend module.
pub mod backend;

#[cfg(feature = "server")]
pub use burn_remote::server;