Nonlinear programming on the GPU

07/29/2021, 4:30 PM to 5:00 PM UTC
JuMP Track


To date, most nonlinear optimization modeling tools and solvers have targeted CPU architectures. However, with the emergence of heterogeneous computing architectures, leveraging massively parallel accelerators has become crucial for performance in nonlinear optimization. As part of the Exascale Computing Project ExaSGD, we are studying how to run nonlinear optimization algorithms efficiently at exascale using GPU accelerators.


This talk walks through our recent development efforts. The parallel layout of GPUs requires running as many operations as possible in batch mode, in a massively parallel fashion. We will detail how we have adapted automatic differentiation, linear algebra, and optimization solvers to a batched setting, and present the different challenges we have addressed. These efforts have led to several prototypes, each addressing a specific issue on the GPU: ExaPF for batch automatic differentiation, ExaTron as a batch optimization solver, and ProxAL for distributed parallelism. Many research opportunities remain for the nonlinear optimization community: how can we leverage the new automatic differentiation backends developed by the machine learning community for optimization purposes? How can we exploit the Julia language to develop a vectorized nonlinear optimization modeler targeting massively parallel accelerators?
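As a loose illustration of the batch-mode differentiation idea mentioned above (a minimal sketch in JAX, not the ExaPF implementation, and with a toy quadratic objective rather than a power-grid model), vectorizing a gradient over many scenarios lets all evaluations run as one fused kernel on the accelerator:

```python
import jax
import jax.numpy as jnp

# Toy per-scenario objective (hypothetical example for illustration only).
def objective(x, p):
    return jnp.sum((x - p) ** 2)

# Gradient with respect to the decision variable x.
grad_fn = jax.grad(objective, argnums=0)

# vmap batches the gradient over 1000 scenario parameters p at once,
# instead of looping over scenarios one by one.
batched_grad = jax.vmap(grad_fn, in_axes=(None, 0))

x = jnp.ones(4)
ps = jnp.arange(4000.0).reshape(1000, 4)
grads = batched_grad(x, ps)  # one gradient per scenario, shape (1000, 4)
```

On a GPU backend, the batched call dispatches a single vectorized kernel rather than 1000 scalar gradient evaluations, which is the kind of restructuring the talk describes.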
