Blog


Optimal Performance without Static Graphs by Fusing Tensor Operation Streams

Tue Mar 19 2024
Nathaniel Simard

This post explores Burn's tensor operation stream strategy, optimizing models through an eager API by creating custom kernels with fused operations. Our custom GELU experiment reveals a remarkable improvement of up to 78 times on our WGPU backend.
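To give a flavor of the approach, here is a minimal sketch of GELU written as a chain of ordinary eager tensor operations; it is this recorded stream of operations that the backend can fuse into a single custom kernel. The sketch assumes Burn's tensor API (method names such as `erf`, `div_scalar`, and `mul_scalar` may differ between versions) and is not the exact code from the post.

```rust
use burn::tensor::{backend::Backend, Tensor};

// GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2))), written with plain eager ops.
// Each call below looks like a separate kernel, but the recorded operation
// stream lets the backend fuse the whole chain into one custom kernel.
fn gelu_from_primitives<B: Backend, const D: usize>(x: Tensor<B, D>) -> Tensor<B, D> {
    let inner = x.clone().div_scalar(core::f32::consts::SQRT_2).erf().add_scalar(1.0);
    x.mul(inner).mul_scalar(0.5)
}
```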


Autotune for GPU Kernels: Ensuring Consistent Peak Performance

Fri Dec 15 2023
Louis Fortier-Dubois

Crafting high-performance GPU kernels for common deep learning operations, such as matrix multiplication (matmul) and reduction, requires finesse. The speed of these kernels varies with input shapes and the GPU device in use, so the fastest one can change with the context. In Burn, Autotune automates kernel selection dynamically, letting you write a plethora of kernel variations with confidence that the best-performing one will be executed in every situation.
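The core idea can be sketched in a few lines of plain Rust: benchmark each candidate kernel once per input shape, cache the winner, and dispatch straight to it on later calls. This is a deliberately simplified illustration, not Burn's actual Autotune API; the `Autotuner` type and slice-based "kernels" below are hypothetical stand-ins.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// A candidate "kernel": here just a boxed function over a slice.
type Kernel = Box<dyn Fn(&[f32]) -> f32>;

// Benchmarks every candidate once per input size, caches the winner, and
// dispatches directly to it on every later call with the same size.
struct Autotuner {
    candidates: Vec<Kernel>,
    cache: HashMap<usize, usize>, // input length -> index of fastest kernel
}

impl Autotuner {
    fn run(&mut self, input: &[f32]) -> f32 {
        let key = input.len();
        if !self.cache.contains_key(&key) {
            let (mut best, mut best_time) = (0, Duration::MAX);
            for (i, kernel) in self.candidates.iter().enumerate() {
                let start = Instant::now();
                let _ = kernel(input);
                let elapsed = start.elapsed();
                if elapsed < best_time {
                    best = i;
                    best_time = elapsed;
                }
            }
            self.cache.insert(key, best);
        }
        (self.candidates[self.cache[&key]])(input)
    }
}

fn main() {
    let mut tuner = Autotuner {
        candidates: vec![
            Box::new(|x: &[f32]| x.iter().sum::<f32>()),
            Box::new(|x: &[f32]| x.chunks(64).map(|c| c.iter().sum::<f32>()).sum::<f32>()),
        ],
        cache: HashMap::new(),
    };
    println!("{}", tuner.run(&vec![1.0f32; 4096]));
}
```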


Creating High Performance Asynchronous Backends With Burn-Compute

Tue Nov 07 2023
Louis Fortier-Dubois

Developing new high-performance deep learning backends in Burn has become remarkably easy: they can be readily enhanced with asynchronous computation, intelligent memory management, and autotuning. The Burn-Compute crate lays the architectural foundation for in-house backends, equipping them with these advanced capabilities to maximize efficiency.
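The asynchronous part boils down to a client/server split: the calling thread only records work, while a separate execution context drains it. Below is a generic sketch of that pattern using standard library channels and a worker thread; it is illustrative only and does not use burn-compute's actual types.

```rust
use std::sync::mpsc;
use std::thread;

// A toy "server task": in a real backend this would be a GPU kernel launch.
enum Task {
    Compute { input: Vec<f32>, respond_to: mpsc::Sender<f32> },
    Shutdown,
}

fn main() {
    let (client, server) = mpsc::channel::<Task>();

    // The server owns execution: it drains tasks on its own thread, so the
    // caller can keep enqueueing work without waiting for results.
    let worker = thread::spawn(move || {
        for task in server {
            match task {
                Task::Compute { input, respond_to } => {
                    let result = input.iter().sum::<f32>();
                    let _ = respond_to.send(result);
                }
                Task::Shutdown => break,
            }
        }
    });

    // Client side: enqueue work asynchronously, read the result only when needed.
    let (tx, rx) = mpsc::channel();
    client.send(Task::Compute { input: vec![1.0; 8], respond_to: tx }).unwrap();
    client.send(Task::Shutdown).unwrap();
    println!("sum = {}", rx.recv().unwrap());
    worker.join().unwrap();
}
```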


Burn's New Cross-Platform GPU Backend

Tue Jul 25 2023
Nathaniel Simard, Louis Fortier-Dubois

Introducing Burn's new cross-platform GPU backend built using WGPU. Burn now supports running deep learning models on a wide variety of hardware configurations, leveraging graphics APIs such as Vulkan, DirectX 11/12, Metal, OpenGL, and WebGPU. We discuss possible applications across various domains and offer a glimpse into the promising future of the framework.


Reduced Memory Usage: Burn's Rusty Approach to Tensor Handling

Tue Mar 21 2023
Nathaniel Simard

The latest release of Burn includes significant changes to its memory management strategy: tensor-allocated memory can now be reused far more often. Overall, these changes significantly reduce memory usage, especially on CPU compared to PyTorch.
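The trick, in spirit, is ownership: when an operation receives a tensor whose buffer has no other owners, it can mutate the buffer in place instead of allocating a new one. Here is a toy sketch of that decision using Arc; it is illustrative only and not Burn's internal code.

```rust
use std::sync::Arc;

// A tiny tensor-like buffer wrapper. When the caller holds the only reference,
// the buffer can be mutated in place instead of allocating a fresh one.
struct Buffer {
    data: Arc<Vec<f32>>,
}

impl Buffer {
    // Adds a scalar, reusing the allocation whenever ownership allows it.
    fn add_scalar(mut self, value: f32) -> Buffer {
        if Arc::strong_count(&self.data) == 1 {
            // Unique owner: mutate in place, no new allocation.
            let data = Arc::get_mut(&mut self.data).expect("unique owner");
            for x in data.iter_mut() {
                *x += value;
            }
            self
        } else {
            // Shared elsewhere: fall back to a copy so other views stay unchanged.
            Buffer {
                data: Arc::new(self.data.iter().map(|x| x + value).collect()),
            }
        }
    }
}

fn main() {
    let a = Buffer { data: Arc::new(vec![1.0, 2.0, 3.0]) };
    // `a` is consumed and was the only owner, so its buffer is reused in place.
    let b = a.add_scalar(1.0);
    println!("{:?}", b.data);
}
```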


A Case for Rust in Deep Learning

Sat Feb 11 2023
Nathaniel Simard

In this blog post, we'll explore the case for Rust in deep learning and why it may be a better option than Python. With its ability to handle complexity through safe and concurrent abstractions, Rust has the potential to tackle this field's biggest challenges in a way that Python cannot.
