ExaTron.jl: a scalable GPU-MPI-based batch solver for small NLPs

07/29/2021, 1:30 PM - 2:00 PM UTC
Blue

Abstract:

We introduce ExaTron.jl, a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. We present ExaTron.jl's architecture, kernel design principles, and implementation details, with experimental results comparing different design choices. We demonstrate linear scaling of ExaTron.jl's parallel computational performance on the Summit supercomputer at Oak Ridge National Laboratory.

Description:

We introduce ExaTron.jl, a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. Its algorithm is a trust-region Newton method for bound-constrained nonconvex nonlinear problems. In contrast to existing work in the literature, it runs entirely on GPUs and requires no data transfers between CPU and GPU during the solve, which eliminates one of the main performance bottlenecks in memory-bound situations. We present ExaTron.jl's architecture, kernel design principles, and implementation details, with experimental results comparing different design choices. We have implemented an ADMM algorithm for alternating current optimal power flow, in which tens of thousands of small nonconvex nonlinear subproblems are solved by ExaTron.jl. We demonstrate linear scaling of ExaTron.jl's parallel computational performance on the Summit supercomputer at Oak Ridge National Laboratory.
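To give a feel for the per-instance computation described above, the sketch below shows a minimal projected trust-region Newton loop for a small box-constrained problem in plain Julia. This is not ExaTron.jl's actual API: the function tron_like_solve and the test problem are illustrative assumptions only, written to mirror the kind of algorithm the talk covers.

using LinearAlgebra

# Clip a point to the box [l, u] (elementwise).
project(x, l, u) = clamp.(x, l, u)

# Hypothetical sketch of a trust-region Newton iteration for
#   min f(x)  s.t.  l <= x <= u
# f: objective, g!: in-place gradient, h!: in-place dense Hessian.
function tron_like_solve(f, g!, h!, x0, l, u; maxiter = 100, tol = 1e-8)
    n = length(x0)
    x = project(x0, l, u)
    gvec = zeros(n)
    H = zeros(n, n)
    Δ = 1.0                                   # trust-region radius
    for _ in 1:maxiter
        g!(gvec, x)
        h!(H, x)
        # Projected-gradient stationarity check on the box.
        norm(project(x .- gvec, l, u) .- x) < tol && break
        # Regularized Newton direction, truncated to the trust region.
        d = -((H + 1e-8I) \ gvec)
        if norm(d) > Δ
            d .*= Δ / norm(d)
        end
        # Projected trial step and actual vs. predicted reduction.
        s = project(x .+ d, l, u) .- x
        ared = f(x) - f(x .+ s)
        pred = -(dot(gvec, s) + 0.5 * dot(s, H * s))
        ρ = ared / max(pred, eps())
        if ρ > 0.1                            # accept the step
            x = x .+ s
        end
        Δ = ρ > 0.75 ? 2Δ : (ρ < 0.25 ? Δ / 2 : Δ)   # update the radius
    end
    return x
end

# Tiny illustrative test problem over the box [-0.5, 0.5]^4.
f(x)      = 0.5 * sum(abs2, x) + 0.1 * sum(x .^ 4) - sum(x)
g!(gv, x) = (gv .= x .+ 0.4 .* x .^ 3 .- 1.0)
h!(H, x)  = (H .= Diagonal(1.0 .+ 1.2 .* x .^ 2))

xstar = tron_like_solve(f, g!, h!, zeros(4), fill(-0.5, 4), fill(0.5, 4))

In the setting described in the talk, many such small solves are executed concurrently on the GPU, one per ADMM subproblem, which is what makes the batch formulation attractive for alternating current optimal power flow.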
