We present the key ideas for finding first-order critical points of multi-objective optimization problems with nonlinear objectives and constraints. A gradient-based trust-region algorithm is modified to employ local, derivative-free surrogate models instead of exact derivatives, and a so-called Filter ensures convergence towards feasibility. We show results of a prototype implementation in Julia, relying heavily on JuMP and suitable LP or QP solvers, which confirm that the surrogates reduce the number of expensive function evaluations.
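For context, a common way to make "first-order critical" precise in the smooth multi-objective setting (following the standard steepest-descent formulation; the abstract itself does not spell this out) is via the criticality measure

```latex
\omega(x) \;=\; - \min_{\lVert d \rVert \le 1} \; \max_{i = 1, \dots, k} \nabla f_i(x)^{\top} d ,
```

so that $\omega(x) \ge 0$ always holds, and $x$ is first-order (Pareto-)critical if and only if $\omega(x) = 0$, i.e., no direction $d$ is a descent direction for all objectives $f_1, \dots, f_k$ simultaneously. The inner problem is exactly the kind of LP or QP subproblem (depending on the chosen norm) mentioned above.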
The presentation is aimed at anyone with a basic understanding of gradient-based optimization or with an interest in trust-region methods and surrogate modeling, because many of the concepts in our multi-objective setting are generalizations of well-known single-objective counterparts. As in the single-objective case, optimization problems with multiple objectives and nonlinear constraint functions arise in a multitude of areas in the natural sciences, engineering, and economics. Trust-region ideas allow for a relatively straightforward surrogate modeling of the objective functions when designing derivative-free descent algorithms, but they cannot handle "difficult" constraint functions without modification. However, the composite-step approach and the Filter mechanism can be adapted from single-objective optimization, enabling the same modeling techniques for the constraints as well. It then becomes possible to use suitable "fully linear" models, such as linear polynomials or radial basis function models, to achieve convergence to first-order critical points without any gradient queries for the true functions.

In our preprint (https://arxiv.org/abs/2208.12094), we refer to a prototype implementation in a Pluto notebook. This proof of concept uses JuMP and COSMO to find the inexact steps. Moreover, we currently use NLopt for the nonlinear restoration phase and Makie for plotting. Our main goal at the moment is to package the algorithm and integrate it with the SciML ecosystem for nonlinear optimization. We hope to benefit from existing solutions, such as automatic differentiation and symbolic manipulation. For example, we are currently investigating the possibility of modeling only subcomponents of a symbolically defined function. Of course, we hope to be able to provide a glimpse of this, too.
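To illustrate the kind of "fully linear" model mentioned above, here is a minimal, self-contained Julia sketch of a radial basis function surrogate with a linear polynomial tail, fitted by interpolation through sample points. All names and the cubic kernel choice are illustrative assumptions for this sketch, not code from the actual prototype:

```julia
# Sketch: an RBF surrogate s(x) = Σ_j λ_j φ(‖x - x_j‖) + c + g'x,
# one common kind of "fully linear" model (names here are illustrative).
using LinearAlgebra

function fit_rbf(X::Vector{Vector{Float64}}, y::Vector{Float64}; φ = r -> r^3)
    n, d = length(X), length(first(X))
    Φ = [φ(norm(xi - xj)) for xi in X, xj in X]        # kernel matrix
    P = hcat(ones(n), reduce(vcat, (x' for x in X)))   # linear tail [1  x']
    # Solve the standard saddle-point system for λ and β = (c, g):
    A = [Φ P; P' zeros(d + 1, d + 1)]
    sol = A \ vcat(y, zeros(d + 1))
    λ, β = sol[1:n], sol[n+1:end]
    return x -> sum(λ[j] * φ(norm(x - X[j])) for j in 1:n) +
                β[1] + dot(β[2:end], x)
end

# Usage: model f(x) = x₁² + x₂² from five sample points near the origin.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]]
y = [sum(abs2, x) for x in X]
s = fit_rbf(X, y)
@show s([0.5, 0.2])   # interpolation: reproduces y at the sample points
```

In a trust-region framework, such a model would be rebuilt or updated from points inside the current trust region, and the fully linear property (a first-order Taylor-like error bound relative to the trust-region radius) is what makes the convergence theory go through.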