Burn

Deep Learning Framework

Flexibility · Portability · Performance

What is Burn?

A next-generation Deep Learning Framework

Burn emphasizes performance, flexibility, and portability for both training and inference. Developed in Rust, it is designed to empower machine learning engineers and researchers across industry and academia.

Key Features

Burn ships with everything you need to build and train your models

Burn is not just a Rust-based clone of PyTorch or TensorFlow. It embodies a fresh perspective, carefully balancing trade-offs in key areas to enable remarkable flexibility, top-tier performance, and a smooth developer experience.

Blazingly Fast

We believe that the essence of a deep learning framework lies in transforming computation into valuable intelligence, so we've placed performance at the heart of Burn.

Flexible

Maximizes composability for unparalleled flexibility, enabling the realization of even the most ambitious ideas without compromising on reliability and efficiency.

Portable

Abstracts the backend implementation for unmatched portability across all hardware devices, enabling development on your laptop GPU, model training in the cloud, and inference on embedded devices.
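
To make this concrete, here is a minimal sketch of backend-agnostic code, assuming the `Backend` trait and `Tensor` type from `burn::tensor`; the function is written once, and the concrete backend is chosen by whoever calls it, whether that is a laptop GPU, a cloud node, or an embedded target.

```rust
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

/// Mean squared error, written once for every backend.
/// The concrete backend `B` (CPU, GPU, WebAssembly, ...) is picked by the caller.
fn mse<B: Backend>(prediction: Tensor<B, 2>, target: Tensor<B, 2>) -> Tensor<B, 1> {
    let diff = prediction - target;
    (diff.clone() * diff).mean()
}
```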

Reliable

Leverages Rust's memory safety and concurrency features to deliver high performance without compromising on security.

Intuitive

Features a clearly defined and extensively documented modeling API.
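
As a hedged sketch of that modeling API (layer and config names follow recent `burn::nn` releases and can differ between versions), a model is an ordinary Rust struct that derives `Module`:

```rust
use burn::module::Module;
use burn::nn::{Linear, LinearConfig};
use burn::tensor::{activation::relu, backend::Backend, Tensor};

/// A two-layer perceptron; the sizes (784 -> 128 -> 10) are illustrative.
#[derive(Module, Debug)]
struct Mlp<B: Backend> {
    fc1: Linear<B>,
    fc2: Linear<B>,
}

impl<B: Backend> Mlp<B> {
    fn new(device: &B::Device) -> Self {
        Self {
            fc1: LinearConfig::new(784, 128).init(device),
            fc2: LinearConfig::new(128, 10).init(device),
        }
    }

    fn forward(&self, x: Tensor<B, 2>) -> Tensor<B, 2> {
        self.fc2.forward(relu(self.fc1.forward(x)))
    }
}
```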

Dynamic

Includes a dynamic computational graph with a custom just-in-time compiler for enhanced flexibility and efficiency.
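
A brief illustration of what "dynamic" means here: the forward pass is plain Rust, so ordinary control flow, such as a loop whose length is only known at runtime, shapes the computation on every call. The `depth` parameter below is purely illustrative, not part of Burn's API.

```rust
use burn::tensor::{activation::relu, backend::Backend, Tensor};

/// Applies the same elementwise block `depth` times; the operations recorded
/// for the backend follow whatever control flow actually ran.
fn repeated_block<B: Backend>(mut x: Tensor<B, 2>, depth: usize) -> Tensor<B, 2> {
    for _ in 0..depth {
        x = relu(x.mul_scalar(0.9).add_scalar(0.1));
    }
    x
}
```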

CPU / GPU Backends

Offers multiple backend implementations with support for both CPU and GPU.
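
For illustration, and assuming the matching Cargo features are enabled, switching hardware comes down to naming a different backend type; the `Cpu` and `Gpu` aliases below are just labels for this sketch.

```rust
use burn::tensor::Tensor;

// CPU backend built on ndarray (requires the `ndarray` feature).
#[allow(dead_code)] // kept to show the CPU alternative
type Cpu = burn::backend::NdArray;
// Cross-platform GPU backend built on wgpu (requires the `wgpu` feature).
type Gpu = burn::backend::Wgpu;

fn main() {
    let device = Default::default();
    // Swap `Gpu` for `Cpu` and nothing else in the program has to change.
    let x = Tensor::<Gpu, 2>::ones([2, 3], &device).add_scalar(1.0);
    println!("{x}");
}
```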

Batteries Included

Provides comprehensive support for logging, metrics, and checkpointing during model training.

Community Driven

Open-source from the outset, backed by a vibrant and dedicated community.

What for?

Whatever you're up to.

Looking to develop an AI application? Burn enables you to train and deploy your model on mobile devices, browsers, desktops, embedded systems, or large GPU clusters. Engaged in fundamental research? There's no need to master GPU programming to optimize your code; we handle that for you, so you can concentrate on your research.

Spend less time debugging, more time building.

Burn adheres to the best practices of the Rust programming language, renowned for its reliability, speeding up development and helping you reach your goals sooner.

Crafted from scratch using Rust

Burn introduces a unique architecture based on tensor operation streams, fully optimized at runtime and auto-tuned for your hardware by a just-in-time compiler.

This approach relies on Rust's ownership rules to precisely track tensor usage, a feature unattainable without a robust type system. It enables you to create highly dynamic models with the performance of an extensively optimized static graph.
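
A small sketch of how ownership carries that information: tensor operations take their operands by value, so a tensor that is moved into an operation is known to be unused afterwards, while an explicit `clone` signals that it is still needed. Whether the backend actually reuses the freed buffer in place is an internal detail; only the move-versus-`clone` distinction appears in user code.

```rust
use burn::tensor::{backend::Backend, Tensor};

fn step<B: Backend>(x: Tensor<B, 2>) -> Tensor<B, 2> {
    // `x` is needed twice, so one explicit clone keeps it alive...
    let y = x.clone().mul_scalar(2.0);
    // ...while this final use moves `x`, telling Burn nothing else reads it,
    // which lets the backend reuse its buffer if it chooses to.
    y + x
}
```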

Stay connected

Join our community! We'd love to keep you in the loop with our newsletter.
