Linear programming by first-order methods

07/30/2021, 7:00 PM–7:30 PM UTC
JuMP Track


We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplication.


PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP compares favorably with SCS on medium-sized instances when solving to both moderate and high accuracy. Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP outperforms a commercial LP solver. The prototype of PDLP is written in Julia and available at
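To illustrate the core iteration the abstract refers to, here is a minimal sketch of vanilla PDHG applied to the saddle-point form of an equality-constrained LP, min c'x s.t. Ax = b, x >= 0. This is not PDLP itself (the function name `pdhg_lp`, the step-size choice, and the iteration count are illustrative assumptions); PDLP layers preconditioning, presolving, adaptive step sizes, and restarts on top of this basic scheme. Note that each iteration touches A only through matrix-vector products.

```julia
using LinearAlgebra

# Sketch of vanilla PDHG for the LP  min c'x  s.t.  Ax = b, x >= 0,
# via the saddle point  min_{x>=0} max_y  c'x - y'(Ax - b).
# Illustrative only; PDLP adds preconditioning, adaptive steps,
# and adaptive restarts on top of this iteration.
function pdhg_lp(c, A, b; iters=20_000)
    m, n = size(A)
    x = zeros(n)
    y = zeros(m)
    # Step sizes chosen so that τ*σ*‖A‖² < 1, which is sufficient
    # for PDHG to converge (Chambolle and Pock, 2011).
    τ = σ = 0.9 / opnorm(A)
    for _ in 1:iters
        # Primal gradient step on c - A'y, projected onto x >= 0.
        x_new = max.(x .- τ .* (c .- A' * y), 0.0)
        # Dual ascent step using the extrapolated point 2*x_new - x.
        y += σ .* (b .- A * (2 .* x_new .- x))
        x = x_new
    end
    return x, y
end

# Tiny example:  min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.
# The optimum is x = (1, 0) with dual y = 1.
c = [1.0, 2.0]
A = [1.0 1.0]
b = [1.0]
x, y = pdhg_lp(c, A, b)
```

The only per-iteration work involving A is `A' * y` and `A * (...)`, which is what makes this family of methods attractive for very large sparse instances.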
