Blazingly Fast
We believe that the essence of a deep learning framework lies in transforming computation into valuable intelligence, so we've placed performance at the heart of Burn.
Flexibility · Portability · Performance
Burn emphasizes performance, flexibility, and portability for both training and inference. Developed in Rust, it is designed to empower machine learning engineers and researchers across industry and academia.
Burn is not just a Rust-based clone of PyTorch or TensorFlow. It embodies a fresh perspective, carefully balancing trade-offs in key areas to enable remarkable flexibility, top-tier performance, and a smooth developer experience.
Maximizes composability for unparalleled flexibility, enabling the realization of even the most ambitious ideas without compromising on reliability and efficiency.
Abstracts the backend implementation for unmatched portability across all hardware devices, enabling development on your laptop GPU, model training in the cloud, and inference on embedded devices.
Leverages Rust's memory safety and concurrency features for high performance without sacrificing safety.
Features a clearly defined and extensively documented modeling API.
Includes a dynamic computational graph with a custom just-in-time compiler for enhanced flexibility and efficiency.
Offers multiple backend implementations with support for both CPU and GPU.
Provides comprehensive support for logging, metrics, and checkpointing during model training.
Open-source from the outset, backed by a vibrant and dedicated community.
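The backend abstraction above can be pictured as model code written once against a trait, with each hardware target supplying its own implementation. The sketch below is illustrative only; the `Backend` trait and `CpuBackend` type are hypothetical names, not Burn's actual API.

```rust
// Illustrative sketch of backend abstraction in Rust.
// The trait and type names here are hypothetical, not Burn's real API.

trait Backend {
    fn name(&self) -> &'static str;
    fn add(&self, a: &[f32], b: &[f32]) -> Vec<f32>;
}

struct CpuBackend;

impl Backend for CpuBackend {
    fn name(&self) -> &'static str {
        "cpu"
    }
    fn add(&self, a: &[f32], b: &[f32]) -> Vec<f32> {
        // Element-wise addition on the CPU; a GPU backend would
        // dispatch the same operation to a compute kernel instead.
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }
}

// Model code is written once, generic over the backend, so the same
// function can run on a laptop GPU, a cloud cluster, or an embedded
// device simply by swapping the type parameter.
fn forward<B: Backend>(backend: &B, a: &[f32], b: &[f32]) -> Vec<f32> {
    backend.add(a, b)
}

fn main() {
    let out = forward(&CpuBackend, &[1.0, 2.0], &[3.0, 4.0]);
    println!("backend = {}, out = {:?}", CpuBackend.name(), out);
}
```

Because the generic bound is resolved at compile time, this kind of dispatch adds no runtime overhead, which is one reason the approach suits both training and inference targets.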
Looking to develop an AI application? Burn enables you to train and deploy your model on mobile devices, browsers, desktops, embedded systems, or large GPU clusters. Engaged in fundamental research? There's no need to master GPU programming to optimize your code; we handle that for you, so you can concentrate on your research.
Burn follows the best practices of the Rust programming language, whose reliability speeds up development and helps you reach your goals faster.
Burn introduces a unique architecture based on tensor operation streams, fully optimized at runtime and auto-tuned for your hardware by a just-in-time compiler.
This approach relies on Rust's ownership rules to precisely track tensor usage, a feature unattainable without a robust type system. It enables you to create highly dynamic models with the performance of an extensively optimized static graph.
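The idea of using ownership to track tensor usage can be sketched in plain Rust: when an operation takes its input by value, the compiler guarantees this is the value's last use, so the buffer can be mutated in place rather than copied. The `Tensor` type below is a hypothetical illustration of that principle, not Burn's actual tensor API.

```rust
// Illustrative sketch: ownership tells us when a buffer can be reused.
// This `Tensor` type is hypothetical, not Burn's real API.

struct Tensor {
    data: Vec<f32>,
}

impl Tensor {
    fn new(data: Vec<f32>) -> Self {
        Tensor { data }
    }

    // Taking `self` by value means the caller can no longer use the
    // input tensor, so its buffer can safely be updated in place --
    // no copy and no runtime reference counting required.
    fn relu(mut self) -> Tensor {
        for x in &mut self.data {
            if *x < 0.0 {
                *x = 0.0;
            }
        }
        self
    }
}

fn main() {
    let t = Tensor::new(vec![-1.0, 2.0, -3.0]);
    // `t` is moved here; attempting to use it afterwards would be a
    // compile-time error, which is exactly what makes the in-place
    // optimization safe.
    let out = t.relu();
    println!("{:?}", out.data);
}
```

This is what the text means by a feature "unattainable without a robust type system": the decision to reuse memory is proven safe by the compiler, so dynamic models can still enjoy optimizations usually reserved for static graphs.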