Getting Started

Burn is a deep learning framework written in the Rust programming language, so it helps to understand Rust's basic notions. Reading the first chapters of the Rust Book is recommended, but don't worry if you're just starting out: we'll try to provide as much context and as many references to external resources as required. Just look out for the 🦀 Rust Note indicators.

Installing Rust

For installation instructions, please refer to the installation page. It explains in detail the most convenient way to install Rust on your computer, which is the very first thing to do before using Burn.

Creating a Burn application

Once Rust is correctly installed, create a new Rust application with Cargo, Rust's build system and package manager, which is installed automatically alongside Rust.

🦀 Cargo Cheat Sheet

Cargo is a very useful tool for managing Rust projects because it handles so many tasks. More precisely, it is used to compile your code, download the libraries/packages your code depends on, and build those libraries.

Below is a quick cheat sheet of the main cargo commands you might use throughout this guide.

Command             Description
cargo new <path>    Create a new Cargo package in the given directory.
cargo add <crate>   Add a dependency to the Cargo.toml manifest file.
cargo build         Compile the local package and all of its dependencies (in debug mode; use -r for release).
cargo check         Check the local package for compilation errors (much faster than a full build).
cargo run           Run the local package binary.
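
For example, assuming a Unix-like shell, a first session with a new (hypothetical) project might chain a few of these commands:

cargo new hello_cargo
cd hello_cargo
cargo check
cargo run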

For more information, check out Hello, Cargo! in the Rust Book.


In the directory of your choice, run the following:

cargo new my_burn_app

This will initialize the my_burn_app project directory with a Cargo.toml file and a src directory containing an auto-generated main.rs file. Head inside the directory to check:

cd my_burn_app
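
Cargo generates a minimal layout (it may also initialize a Git repository, omitted here):

my_burn_app
├── Cargo.toml
└── src
    └── main.rs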

Then, add Burn as a dependency:

cargo add burn --features wgpu
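
If you are curious, cargo add records the dependency in the Cargo.toml manifest. The entry should look roughly like the sketch below; the wildcard version is a placeholder, as cargo add writes the latest published release number:

[dependencies]
# Placeholder version; `cargo add` pins the latest release here.
burn = { version = "*", features = ["wgpu"] }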

Finally, compile the local package by executing the following:

cargo build

That's it, you're ready to start! You have a project configured with Burn and the WGPU backend, which allows you to execute low-level operations on any platform using the GPU.
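
Under the hood, the backend addresses a device. If you later want to request one explicitly rather than relying on a default, here is a minimal sketch, assuming the WgpuDevice type re-exported at burn::backend::wgpu (names may vary across Burn versions):

use burn::backend::wgpu::WgpuDevice;

// Explicitly request the Wgpu backend's default device; for this backend,
// this is what Default::default() would also resolve to.
let device = WgpuDevice::default();

// Hypothetical alternative: target the first discrete GPU. The variant name
// is an assumption and may differ between Burn versions.
// let device = WgpuDevice::DiscreteGpu(0);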

Writing a code snippet

The src/main.rs file was automatically generated by Cargo, so let's replace its content with the following:

use burn::tensor::Tensor;
use burn::backend::Wgpu;

// Type alias for the backend to use.
type Backend = Wgpu;

fn main() {
    let device = Default::default();
    // Creation of two tensors, the first with explicit values and the second one with ones, with the same shape as the first
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::<Backend, 2>::ones_like(&tensor_1);

    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
🦀 Use Declarations

To bring any Burn module or item into scope, a use declaration is added.

In the example above, we wanted to bring the Tensor struct and the Wgpu backend into scope with the following:

use burn::tensor::Tensor;
use burn::backend::Wgpu;

This is pretty self-explanatory in this case. But the same declarations could be written as a shortcut that simultaneously binds multiple paths sharing a common prefix:

use burn::{tensor::Tensor, backend::Wgpu};

In this example, the common prefix is pretty short and there are only two items to bind locally. Therefore, the first usage with two use declarations might be preferred. But know that both examples are valid. For more details on the use keyword, take a look at this section of the Rust Book or the Rust reference.


🦀 Generic Data Types

If you're new to Rust, you're probably wondering why we had to use Tensor::<Backend, 2>::.... That's because the Tensor struct is generic over multiple concrete data types. More specifically, the Tensor struct has 3 generic parameters: the backend, the number of dimensions (rank) and the data type (which defaults to Float). Here, we only specify the backend and the number of dimensions since a Float tensor is used by default. For more details on the Tensor struct, take a look at this section.
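
For instance, spelling out all three parameters yields a tensor of a different kind. A minimal sketch, assuming the Int kind exported at burn::tensor and reusing the device from the snippet above:

use burn::tensor::Int;

// Backend, rank and data type all made explicit: a 2-dimensional integer tensor.
let int_tensor = Tensor::<Backend, 2, Int>::from_data([[1, 2], [3, 4]], &device);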

Most of the time when generics are involved, the compiler can infer the generic parameters automatically. In this case, the compiler needs a little help. This can usually be done in one of two ways: providing a type annotation or binding the generic parameter via the turbofish ::<> syntax. In the example above, we used the so-called turbofish syntax, but we could have used type annotations instead, like so:

let tensor_1: Tensor<Backend, 2> = Tensor::from_data([[2., 3.], [4., 5.]], &device);
let tensor_2 = Tensor::ones_like(&tensor_1);

You probably noticed that we provided a type annotation for the first tensor only, and yet this example still works: the compiler (correctly) inferred that tensor_2 has the same generic parameters. The same could have been done in the original example, but specifying the parameters for both is more explicit.


By running cargo run, you should now see the result of the addition:

Tensor {
  data:
[[3.0, 4.0],
 [5.0, 6.0]],
  shape:  [2, 2],
  device:  BestAvailable,
  backend:  "wgpu",
  kind:  "Float",
  dtype:  "f32",
}

While the previous example is somewhat trivial, the upcoming basic workflow section will walk you through a much more relevant example for deep learning applications.

Running examples

Many additional Burn examples are available in the examples directory. To run one, please refer to the example's README.md for the specific command to execute.

Note that some examples use the datasets library by HuggingFace to download the datasets required in the examples. This is a Python library, which means that you will need to install Python before running these examples. This requirement will be clearly indicated in the example's README when applicable.
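
If you already have Python set up, installing the library itself is typically a one-liner:

pip install datasets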