Ignite.jl: A brighter way to train neural networks

07/26/2023, 2:00 PM — 2:30 PM UTC
Online talks and posters

Abstract:

We introduce Ignite.jl, a package that streamlines neural network training and validation by replacing traditional for/while loops and callback functions with a powerful and flexible event-based pipeline. The key feature of Ignite.jl is the separation of the training step from the training loop, which makes it easy to incorporate events such as artifact saving, metric logging, and model validation into a training pipeline without modifying existing code.

Description:

We introduce Ignite.jl (https://github.com/jondeuce/Ignite.jl), a package for simplifying neural network training and validation loops in Julia. Based on the popular Python library ignite, Ignite.jl provides a high-level, flexible, event-based approach to training and evaluating neural networks in Julia.

The traditional way of training a neural network typically involves using for/while loops and callback functions. However, this approach can become complex and difficult to maintain as the number of events increases and the training pipeline becomes more intricate. Ignite.jl addresses this issue by providing a simple yet flexible engine and event system that allows for the easy composition of training pipelines with various events such as artifact saving, metric logging, and model validation.

Event-based training abstracts away the traditional training loop, replacing it with three components, sketched in code below:

  1. a training engine which executes a single training step,
  2. a data loader iterator which provides data batches, and
  3. events and corresponding handlers which are attached to the engine and are configured to fire at specific points during training.
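Concretely, a minimal pipeline might look like the following sketch, loosely adapted from the Ignite.jl README. The toy model, data, and helper names such as `train_step` are our own, and exact keyword arguments (e.g. `max_epochs`) should be checked against the package documentation.

    using Ignite
    using Flux, Zygote, Optimisers, MLUtils

    # Toy model and optimizer for a 1D regression problem
    model = Chain(Dense(1 => 32, tanh), Dense(32 => 1))
    optim = Optimisers.setup(Optimisers.Adam(1f-3), model)

    # (2) Data loader iterator: provides batches of (input, target) pairs
    x = rand(Float32, 1, 1024)
    y = @. 2x - x^3
    train_loader = DataLoader((x, y); batchsize = 64, shuffle = true)

    # (1) Training engine: wraps a *single* training step; epoch/iteration
    # bookkeeping and event firing are handled by the engine itself
    function train_step(engine, batch)
        xb, yb = batch
        loss, grads = Zygote.withgradient(m -> sum(abs2, m(xb) .- yb), model)
        global optim, model = Optimisers.update!(optim, model, grads[1])
        return Dict("loss" => loss) # stored in engine.state.output
    end
    trainer = Engine(train_step)

    # (3) Event handler: log the latest loss at the end of every epoch
    add_event_handler!(trainer, EPOCH_COMPLETED()) do engine
        @info "epoch $(engine.state.epoch): loss = $(engine.state.output["loss"])"
    end

    # Run the pipeline
    Ignite.run!(trainer, train_loader; max_epochs = 10)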

Event handlers are much more flexible than traditional callbacks: they can be any callable; multiple handlers can be attached to a single event; multiple events can trigger the same handler; and custom events can be defined to fire at user-specified points during training. This makes it easy to add functionality to your training pipeline while minimizing the need to modify existing code.
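As an illustration, continuing the sketch above, the same handler can be reused across several events, and one event can carry several handlers. The event names and the `every` filter mirror the Python library; the precise API should be confirmed against the Ignite.jl docs.

    # Handlers are plain callables; several handlers can share one event
    log_iter(engine) = @info "iteration $(engine.state.iteration) complete"
    add_event_handler!(log_iter, trainer, ITERATION_COMPLETED(every = 100))

    # ... and one handler can be attached to several events
    report(engine) = @info "progress report at epoch $(engine.state.epoch)"
    add_event_handler!(report, trainer, EPOCH_COMPLETED(every = 5))
    add_event_handler!(report, trainer, COMPLETED())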

To illustrate the benefits of using Ignite.jl, we provide a minimal working example in the README.md file that demonstrates how to train a simple neural network while periodically logging evaluation metrics. In particular, we highlight that periodic model and optimizer saving can be added with no changes to the rest of the training pipeline – simply add another event. This example illustrates the benefit of separating the training step from the training loop and how simple it is to customize the training pipeline.
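For instance, continuing the sketch above, periodic checkpointing reduces to one extra handler; the use of BSON.jl for serialization here is our illustrative choice, not something the package prescribes.

    using BSON: bson

    # Checkpointing touches nothing else in the pipeline:
    # just attach one more handler to the existing trainer
    add_event_handler!(trainer, EPOCH_COMPLETED(every = 10)) do engine
        bson("checkpoint.bson"; model, optim) # save model and optimizer state
        @info "saved checkpoint at epoch $(engine.state.epoch)"
    end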

In summary, Ignite.jl is a simple and flexible package that improves the process of training neural networks through events and handlers. It allows training pipelines to be modified easily without changing existing code, and it is well documented and thoroughly tested. We believe Ignite.jl will be a valuable addition to the Julia machine learning ecosystem.
