# Getting Started
Burn is a deep learning framework written in the Rust programming language, so it goes without saying that one must understand the basic notions of Rust. Reading the first chapters of the Rust Book is recommended, but don't worry if you're just starting out. We'll try to provide as much context, and as many references to external resources, as required. Just look out for the 🦀 Rust Note indicators.
## Installing Rust
For installation instructions, please refer to the installation page. It explains in detail the most convenient way to install Rust on your computer, which is the very first thing to do to start using Burn.
## Creating a Burn application
Once Rust is correctly installed, create a new Rust application with Cargo, Rust's build system and package manager. It is automatically installed with Rust.
🦀 Cargo Cheat Sheet
Cargo is a very useful tool to manage Rust projects because it handles a lot of tasks. More precisely, it is used to compile your code, download the libraries/packages your code depends on, and build said libraries.
Below is a quick cheat sheet of the main `cargo` commands you might use throughout this guide.
| Command | Description |
|---|---|
| `cargo new` *path* | Create a new Cargo package in the given directory. |
| `cargo add` *crate* | Add dependencies to the Cargo.toml manifest file. |
| `cargo build` | Compile the local package and all of its dependencies (in debug mode, use `-r` for release). |
| `cargo check` | Check the local package for compilation errors (much faster). |
| `cargo run` | Run the local package binary. |
For more information, check out Hello, Cargo! in the Rust Book.
In the directory of your choice, run the following:
```console
cargo new my_burn_app
```
This will initialize the `my_burn_app` project directory with a `Cargo.toml` file and a `src` directory with an auto-generated `main.rs` file inside. Head inside the directory to check:
```console
cd my_burn_app
```
Then, add Burn as a dependency:
```console
cargo add burn --features wgpu
```
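This command appends a dependency entry to your `Cargo.toml` manifest. As a rough sketch of what to expect (the exact version is whatever release is current when you run the command):

```toml
[dependencies]
# The version below is illustrative; `cargo add` pins the latest release.
burn = { version = "0.13.0", features = ["wgpu"] }
```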
Finally, compile the local package by executing the following:
```console
cargo build
```
That's it, you're ready to start! You have a project configured with Burn and the WGPU backend, which allows you to execute low-level operations on any platform using the GPU.
## Writing a code snippet
The `src/main.rs` file was automatically generated by Cargo, so let's replace its content with the following:
```rust
use burn::tensor::Tensor;
use burn::backend::Wgpu;

// Type alias for the backend to use.
type Backend = Wgpu;

fn main() {
    let device = Default::default();
    // Creation of two tensors, the first with explicit values and the second
    // one with ones, with the same shape as the first.
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::<Backend, 2>::ones_like(&tensor_1);

    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
```
🦀 Use Declarations
To bring any of the Burn modules or items into scope, a `use` declaration is added. In the example above, we wanted to bring the `Tensor` struct and the `Wgpu` backend into scope with the following:

```rust
use burn::tensor::Tensor;
use burn::backend::Wgpu;
```
This is pretty self-explanatory in this case. But the same declaration could be written as a shortcut to simultaneously bind multiple paths with a common prefix:

```rust
use burn::{tensor::Tensor, backend::Wgpu};
```
In this example, the common prefix is pretty short and there are only two items to bind locally. Therefore, the first usage with two `use` declarations might be preferred. But know that both examples are valid. For more details on the `use` keyword, take a look at this section of the Rust Book or the Rust reference.
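The same grouping works anywhere in Rust, not just with Burn's paths. Here is a small self-contained sketch using only standard-library items (the `unique_values` helper is purely illustrative):

```rust
// Grouped `use` declaration: binds both items with one statement.
use std::collections::{HashMap, HashSet};

// Equivalent to the two separate declarations:
// use std::collections::HashMap;
// use std::collections::HashSet;

fn unique_values(map: &HashMap<&str, i32>) -> HashSet<i32> {
    // Collect the map's values into a set, dropping duplicates.
    map.values().copied().collect()
}

fn main() {
    let mut map = HashMap::new();
    map.insert("a", 1);
    map.insert("b", 1);
    println!("{}", unique_values(&map).len()); // prints "1"
}
```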
🦀 Generic Data Types
If you're new to Rust, you're probably wondering why we had to use `Tensor::<Backend, 2>::...`. That's because the `Tensor` struct is generic over multiple concrete data types. More specifically, a `Tensor` can be defined using three generic parameters: the backend, the number of dimensions (rank) and the data type (defaults to `Float`). Here, we only specify the backend and number of dimensions since a `Float` tensor is used by default. For more details on the `Tensor` struct, take a look at this section.
Most of the time when generics are involved, the compiler can infer the generic parameters automatically. In this case, the compiler needs a little help. This can usually be done in one of two ways: providing a type annotation or binding the generic parameter via the turbofish `::<>` syntax. In the example above we used the so-called turbofish syntax, but we could have used type annotations instead like this:
```rust
let tensor_1: Tensor<Backend, 2> = Tensor::from_data([[2., 3.], [4., 5.]], &device);
let tensor_2 = Tensor::ones_like(&tensor_1);
```
You probably noticed that we provided a type annotation for the first tensor only and yet this example still works. That's because the compiler (correctly) inferred that `tensor_2` had the same generic parameters. The same could have been done in the original example, but specifying the parameters for both is more explicit.
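These two options exist for any generic type in Rust, not just Burn's `Tensor`. A minimal standard-library sketch of the same idea:

```rust
fn main() {
    // Turbofish: bind the generic parameter explicitly on the call.
    let v1 = Vec::<i32>::new();

    // Type annotation: let the compiler infer the parameter from the binding.
    let v2: Vec<i32> = Vec::new();

    // `collect` is a common case where the compiler needs one of the two forms:
    let squares = (1..=3).map(|x| x * x).collect::<Vec<i32>>();

    println!("{} {} {:?}", v1.len(), v2.len(), squares); // prints "0 0 [1, 4, 9]"
}
```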
By running `cargo run`, you should now see the result of the addition:
```
Tensor {
  data:
[[3.0, 4.0],
 [5.0, 6.0]],
  shape:  [2, 2],
  device:  BestAvailable,
  backend:  "wgpu",
  kind:  "Float",
  dtype:  "f32",
}
```
While the previous example is somewhat trivial, the upcoming basic workflow section will walk you through a much more relevant example for deep learning applications.
## Using prelude
Burn comes with a variety of things in its core library. When creating a new model or using an existing one for inference, you may need to import every single component you use, which can be a little verbose.
To address it, a `prelude` module is provided, allowing you to easily import commonly used structs and macros as a group:
```rust
use burn::prelude::*;
```
which is equivalent to:
```rust
use burn::{
    config::Config,
    module::Module,
    nn,
    tensor::{
        backend::Backend, Bool, Device, ElementConversion, Float, Int, Shape, Tensor,
        TensorData,
    },
};
```
For the sake of simplicity, the subsequent chapters of this book will all use this form of importing, except in the Building Blocks chapter, where explicit importing helps users grasp the usage of particular structures and macros.