16:20 — Improving the MADS search step with a quadratic programming solver

Our presentation will discuss improvements in solving constrained blackbox optimization (BBO) problems with the mesh adaptive direct search (MADS) algorithm implemented in the Nomad software package. MADS alternates between Search and Poll steps to generate trial points. The presentation focuses on the quadratic model search, which uses a subset of the previously obtained blackbox evaluations to build models of the functions and solves an optimization sub-problem. This sub-problem can itself be solved with MADS, but depending on the problem at hand the results are not satisfactory. In particular, on the COCO platform BBOB-constrained Sphere problem in 10 dimensions with one linear inequality constraint, which has a single known optimum, MADS converges very slowly to the sub-problem optimum. This work focuses on combining a Quadratic Programming (QP) solver with MADS for the sub-problem. We will present the candidate QP solvers and their implementation in Nomad, along with results showing the benefits of combining a QP solver with MADS.
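In essence, the Search sub-problem minimizes a quadratic model of the objective subject to models of the constraints. Below is a minimal sketch of such a sub-problem, assuming a quadratic objective model and the single linear inequality of the Sphere example; SciPy's SLSQP is used purely as a stand-in for the candidate QP solvers discussed in the talk, and all numerical data are illustrative.

import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic model m(x) = g^T x + 0.5 x^T H x of the blackbox
# objective, built from previously evaluated points (values are illustrative).
n = 10
rng = np.random.default_rng(0)
H = np.eye(n)                        # model Hessian (sphere-like)
g = rng.standard_normal(n)           # model gradient
a, b = rng.standard_normal(n), 1.0   # one linear inequality: a^T x <= b

model = lambda x: g @ x + 0.5 * x @ H @ x
model_grad = lambda x: g + H @ x

# SLSQP 'ineq' constraints require fun(x) >= 0, hence b - a^T x >= 0.
res = minimize(model, np.zeros(n), jac=model_grad, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: b - a @ x,
                             "jac": lambda x: -a}])
print(res.x)   # candidate returned to the MADS Search step for true evaluation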

16:50 — Advancing derivative-free optimization: exploring logarithmic barrier and exterior penalty in direct search

This research explores an approach to mathematical optimization problems with nonlinear objective functions and constraints, particularly in situations where derivatives are unavailable. A Sequential Penalty Approach is introduced, incorporating a Logarithmic Barrier for managing inequality constraints and an Exterior Penalty for equality constraints.
Traditionally, optimization problems without access to derivative information pose significant challenges. Our proposed method aims to overcome these hurdles by introducing a sequential penalty framework that effectively addresses both inequality and equality constraints. The Logarithmic Barrier ensures smooth handling of inequality constraints, while the Exterior Penalty accommodates equality constraints, providing a comprehensive solution for a wide range of nonlinear optimization challenges.
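One standard way to combine the two terms in a single merit function, which may differ in detail from the exact form used by the authors, is

\[
P(x;\mu_k) \;=\; f(x) \;-\; \mu_k \sum_{i \in I} \log\bigl(-g_i(x)\bigr) \;+\; \frac{1}{\mu_k} \sum_{j \in E} h_j(x)^2,
\]

where g_i(x) <= 0 are the inequality constraints, h_j(x) = 0 the equality constraints, and \mu_k > 0 a penalty parameter driven to zero across the sequence of subproblems.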
This strategy, proposed for the first time in a derivative-free setting in the framework of linesearch methods, is adapted and incorporated into the SID-PSM algorithm, a generalized pattern search method, allowing it to handle general nonlinear constraints effectively. Under reasonable smoothness assumptions on the functions, convergence is established without any convexity assumptions.
Empirical validation is conducted through experimentation on a diverse set of optimization problems. The results corroborate not only the efficacy of the proposed approach but also its practical utility. The direct search algorithm, when complemented by the Logarithmic Barrier and Exterior Penalty, consistently exhibits promising numerical results, demonstrating the method's potential.
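As a rough illustration of the outer sequential loop on a toy problem, the sketch below uses SciPy's derivative-free Nelder-Mead as a stand-in for the SID-PSM pattern search; the problem data and the reduction factor for the parameter are purely hypothetical.

import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative): minimize f subject to g(x) <= 0 and h(x) = 0.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = lambda x: x[0] ** 2 - x[1]          # inequality: g(x) <= 0
h = lambda x: x[0] + x[1] - 2           # equality:   h(x) = 0

def merit(x, mu):
    if g(x) >= 0:                       # log barrier requires strict feasibility
        return np.inf
    return f(x) - mu * np.log(-g(x)) + h(x) ** 2 / mu

x = np.array([0.5, 1.0])                # strictly feasible start for g
mu = 1.0
for _ in range(8):                      # outer sequential-penalty loop
    # Inner derivative-free solve; Nelder-Mead stands in for SID-PSM here.
    x = minimize(lambda z: merit(z, mu), x, method="Nelder-Mead").x
    mu *= 0.1                           # drive the barrier/penalty parameter to 0
print(x)                                # approaches the solution (1, 1) here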

17:20 — A hybrid direct search and projected simplex gradient method for convex constrained optimization

In this work, we propose a new Derivative-free Optimization (DFO) approach for solving convex constrained minimization problems. The feasible set is assumed to be the nonempty intersection of a finite collection of closed convex sets, such that the projection onto each individual set is simple and inexpensive to compute. The proposed iterative approach alternates between steps of Directional Direct Search (DDS), using adequate poll directions, and steps of a Spectral Projected Gradient (SPG) method in which the true gradient is replaced by a simplex gradient. The simplex gradient is computed at no extra cost in function evaluations, by reusing previously evaluated points. In the SPG steps, if the convex feasible set is simple, a direct projection is computed; if it is the intersection of finitely many simple convex sets, Dykstra's alternating projection method is applied. Convergence properties are established under standard assumptions usually associated with DFO methods. Preliminary numerical experiments illustrate the performance of the proposed algorithm, in particular by comparing it with a classical DDS method. Our results indicate that the hybrid algorithm is a robust and effective approach for derivative-free convex constrained optimization.
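For illustration, the two key ingredients of the SPG step, a least-squares simplex gradient built from reused points and Dykstra's alternating projection, might be sketched as follows; the feasible set, the fixed step length used in place of the spectral one, and all problem data are hypothetical.

import numpy as np

def simplex_gradient(x, fx, Y, fY):
    # Least-squares simplex gradient: solve (y_i - x) . g ~ f(y_i) - f(x)
    # using previously evaluated points Y (one point per row).
    S = Y - x
    d = fY - fx
    g, *_ = np.linalg.lstsq(S, d, rcond=None)
    return g

def dykstra(x, projections, iters=200):
    # Dykstra's alternating projection onto an intersection of convex sets,
    # given the projection operator for each individual set.
    p = [np.zeros_like(x) for _ in projections]
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj(x + p[i])
            p[i] = x + p[i] - y       # update the correction term for set i
            x = y
    return x

# Illustrative feasible set: box [0,1]^2 intersected with halfspace a^T x <= b.
proj_box = lambda z: np.clip(z, 0.0, 1.0)
a, b = np.array([1.0, 1.0]), 1.0
proj_half = lambda z: z if a @ z <= b else z - (a @ z - b) * a / (a @ a)

f = lambda z: (z[0] - 2) ** 2 + (z[1] - 2) ** 2   # toy smooth objective
x = np.array([0.2, 0.3])
Y = x + 0.05 * np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # reused points
g = simplex_gradient(x, f(x), Y, np.array([f(y) for y in Y]))
x_new = dykstra(x - 0.5 * g, [proj_box, proj_half])  # one projected step
print(x_new)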