1. 16:20 — An inexact restoration direct multisearch filter approach to constrained optimization

Direct Multisearch (DMS) is a class of methods for multiobjective derivative-free optimization with a well-established convergence analysis and competitive computational implementations, often used as a benchmark for new algorithms or in practical applications. From a theoretical point of view, DMS was developed for continuous optimization with general constraints, using an extreme barrier approach in which only feasible points are evaluated. In this work, we propose the integration of an inexact restoration filter approach into DMS to address optimization problems with general constraints. As in any filter approach, violation of the relaxable constraints is treated as an additional objective to be minimized. The inexact restoration approach attempts to recover feasibility when the poll center is infeasible. Under mild assumptions, we prove that the resulting DMS-FILTER-IR algorithm generates feasible and/or infeasible subsequences of iterates that converge either to a Pareto stationary point, in the feasible case, or to a Pareto stationary point of the problem that considers only the unrelaxable constraints, potentially serving as a Pareto stationary point of the original problem, in the infeasible case. We will detail the proposed algorithm, provide theoretical convergence results, and report numerical experiments demonstrating the good performance of this approach on multiobjective problems with general constraints.
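
To make the filter mechanism concrete, here is a minimal Python sketch (not the authors' DMS-FILTER-IR implementation; the names violation, dominates, and filter_accepts are illustrative) of the dominance-based acceptance test of a filter in which the constraint violation is treated as one extra objective:

```python
import numpy as np

def violation(c_ineq):
    """Aggregate violation h(x) of relaxable constraints c(x) <= 0."""
    return float(np.linalg.norm(np.maximum(0.0, c_ineq)))

def dominates(a, b):
    """True if vector a dominates vector b (componentwise <=, one strict)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def filter_accepts(filter_entries, f_trial, h_trial):
    """Accept a trial point if no stored (h, f) pair dominates it.

    The filter treats the violation h as an additional objective next to
    the original objective vector f, so acceptance is plain Pareto
    dominance in the extended (h, f_1, ..., f_m) space.
    """
    trial = np.concatenate(([h_trial], np.atleast_1d(f_trial)))
    return not any(
        dominates(np.concatenate(([h], np.atleast_1d(f))), trial)
        for h, f in filter_entries
    )
```

In the algorithm described above, a restoration phase would additionally be invoked whenever the poll center has positive violation, attempting to reduce h before polling resumes.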

2. 16:50 — A comparison of search strategies for multiobjective direct search algorithms

The last decade has seen the development of new convergence-based derivative-free multiobjective algorithms, most of them extensions of reliable single-objective methods. Direct search algorithms are iterative procedures that alternate between a search step and a poll step. The poll step, a local exploration around the current incumbent, guarantees their convergence. The search step is optional; however, well-designed search strategies can significantly improve the performance of these methods. Search strategies have been extensively studied for single-objective optimization, but their development for multiobjective optimization is in its infancy. In this talk, we propose several extensions of search strategies to multiobjective optimization; one is based on quadratic models. Numerical experiments illustrate the performance of these new approaches.
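
As a rough illustration of the search/poll mechanics described above, here is a simplified single-incumbent sketch in Python. It is only a sketch under strong assumptions: DMS-type methods actually maintain a whole list of nondominated points, and the search_step argument is a hypothetical hook where, e.g., a quadratic-model-based strategy could be plugged in.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimization)."""
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def direct_search(f, x0, alpha=1.0, n_iter=100, search_step=None):
    """Single-incumbent skeleton of a direct search iteration.

    f maps a point to a vector of objective values.  Each iteration first
    tries the optional search step; only if that fails does it poll the
    coordinate directions around the incumbent.  The step size expands on
    success and contracts otherwise, which is what drives convergence.
    """
    x, fx = np.asarray(x0, float), np.asarray(f(x0), float)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    for _ in range(n_iter):
        success = False
        trials = [search_step(x, alpha)] if search_step else []
        trials += [x + alpha * d for d in dirs]
        for y in trials:
            fy = np.asarray(f(y), float)
            if dominates(fy, fx):  # trial point improves the incumbent
                x, fx, success = y, fy, True
                break
        alpha = 2.0 * alpha if success else 0.5 * alpha
    return x, fx
```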

3. 17:20 — Gradient-based and derivative-free multiobjective optimization via iterative single-objective optimization: MO-BFGS and MO-SLSQP

Many multiobjective solvers approach the simultaneous optimization of two or more objective functions by iteratively solving varying single-objective formulations (such as a weighted sum of the objective values). These scalar formulations of the multiobjective problem are typically independent of any solutions found so far and are not easy to set in practice (in particular for non-linear objective functions). Evolutionary multiobjective optimization (EMO) algorithms, by contrast, typically "evolve" a fixed set of $\mu$ solutions (their "population") in parallel towards the Pareto set, exploiting information about already explored search space regions through set-dominance and diversity criteria and without the need for a priori information from a human.
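
For illustration, a minimal sketch of the classical weighted-sum scalarization just mentioned, using SciPy's SLSQP; the function weighted_sum_front and the bi-objective test problem are hypothetical, not part of the talk.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_sum_front(f, x0, weight_list):
    """One single-objective solve per fixed weight vector.

    The weights must be fixed a priori, independently of any solutions
    already found, which is the drawback noted above: for non-linear
    objectives, evenly spaced weights rarely give evenly spread
    Pareto-optimal points.
    """
    return [f(minimize(lambda x, w=w: float(np.dot(w, f(x))),
                       x0, method="SLSQP").x)
            for w in weight_list]

# Hypothetical bi-objective test problem: two shifted quadratics.
f = lambda x: np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])
front = weighted_sum_front(f, np.zeros(2),
                           [(t, 1.0 - t) for t in np.linspace(0.1, 0.9, 5)])
```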

In this talk, I will discuss another possibility: iteratively optimizing a set-based scalarizing function that changes dynamically with the growing set of found solutions and that is based on the hypervolume indicator. This allows the use of efficient single-objective solvers without the need to specify (human) preferences as in classical scalarizing approaches. In numerical experiments with SLSQP and L-BFGS as single-objective solvers, we observe how the new iterative approach performs in both gradient-based and derivative-free mode on a variety of test functions and compare the two resulting algorithms, MO-SLSQP and MO-BFGS, to a multitude of previously run algorithms.
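
The following Python sketch conveys the underlying idea under simplifying assumptions (two objectives, a greedy one-point-at-a-time loop; hypervolume_2d and mo_solve are hypothetical names); the algorithms presented in the talk differ in their details.

```python
import numpy as np
from scipy.optimize import minimize

def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D point set w.r.t. reference ref (minimization)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1]):
        if f2 < prev_f2:  # skip points dominated by earlier ones
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def mo_solve(f, starts, ref):
    """Greedy set-based scalarization: each inner solve maximizes the
    hypervolume improvement of one new point over the current archive,
    so the scalar function changes as the archive grows."""
    archive = []
    for x0 in starts:
        hv_old = hypervolume_2d(archive, ref)
        neg_hvi = lambda x: -(hypervolume_2d(archive + [tuple(f(x))], ref)
                              - hv_old)
        res = minimize(neg_hvi, x0, method="SLSQP")
        archive.append(tuple(f(res.x)))
    return archive

# Example usage with a hypothetical bi-objective problem:
f = lambda x: np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])
pareto = mo_solve(f, [np.random.randn(2) for _ in range(5)], ref=(10.0, 10.0))
```

Note that the hypervolume improvement is flat wherever the candidate point is dominated by the archive, which a practical implementation has to address; the sketch only shows how the scalar function evolves with the growing solution set.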

This is joint work with Anne Auger, Nikolaus Hansen, and Baptiste Plaquevent-Jourdain.