14:00 — The cosine measure relative to a subspace

The cosine measure is a tool that quantifies how well a set of directions covers the space $\mathbb{R}^n$. In this talk, we extend the concept by defining the cosine measure relative to a subspace. This novel definition may be useful for subspace decomposition optimization methods. We propose a deterministic algorithm to compute it and discuss the situation in which the set of directions is infinite.
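For context, the classical cosine measure of a finite set $D \subset \mathbb{R}^n$ of nonzero directions (the quantity being generalized here; the subspace-relative definition itself is introduced in the talk) is
$$
\operatorname{cm}(D) \;=\; \min_{\substack{u \in \mathbb{R}^n \\ \|u\| = 1}} \; \max_{d \in D} \; \frac{u^{\top} d}{\|d\|},
$$
so that $\operatorname{cm}(D) > 0$ exactly when $D$ is a positive spanning set of $\mathbb{R}^n$.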

14:30 — A (forgotten) subspace framework for large-scale derivative-free optimization

We re-introduce a subspace framework for solving problems with $10^4$ variables without using derivatives. Studied in Sec. 5.3 of [Zhang, On Derivative-Free Optimization Methods (in Chinese), PhD thesis, CAS, 2012] and presented at ICCOPT 2013 (Lisbon), it remains nearly unknown to the community. It was implemented by Zhang in MATLAB in 2011, ported to Modula-3 by Nystroem (Intel) in 2017, and included in cm3 in 2019 (https://github.com/modula3/cm3/blob/master/caltech-other/newuoa/src/NewUOAs.m3).

15:00 — Parallel versions of the mesh adaptive direct search algorithm

This presentation surveys the different parallel variants of the mesh adaptive direct search (MADS) algorithm for constrained blackbox optimization. These problems can incur high computational costs due to a possibly large number of variables and the multi-modality of the search space. In addition, the potentially time-intensive nature and time heterogeneity of the blackboxes defining the problem prompt the need for efficient implementations. With the increasing use of high-performance computing, parallelism emerges as a practical way to reduce computation time. The reviewed methods employ different levels of parallelism and distinct parallel strategies to tackle each of the aspects outlined above. The presentation details the practical implementations, provides computational results, and offers insights into the advantages and limitations of each parallel MADS method.
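As a minimal sketch of one common strategy, namely evaluating the poll points of a single iteration concurrently (not any of the specific variants surveyed in the talk), a thread-pool version of the poll step might look as follows; the blackbox function and direction set are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def parallel_poll(blackbox, x, directions, step, n_workers=4):
    """Evaluate all poll points x + step*d concurrently and return the best one.

    `blackbox` is assumed to be an expensive, possibly time-heterogeneous
    function of a single point; `directions` is a positive spanning set.
    """
    poll_points = [x + step * d for d in directions]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        values = list(pool.map(blackbox, poll_points))
    best = int(np.argmin(values))
    return poll_points[best], values[best]

# Illustrative usage with a cheap stand-in for the blackbox.
if __name__ == "__main__":
    f = lambda z: float(np.sum(z ** 2))          # placeholder blackbox
    x0 = np.array([1.0, -2.0])
    D = [np.array(d, dtype=float) for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
    print(parallel_poll(f, x0, D, step=0.5))
```

In practice, a process pool or an MPI-based evaluation layer would typically replace the thread pool when each blackbox call launches a separate executable.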

15:30 — Worst case complexity bounds for linesearch-type derivative-free algorithms

In this talk we are concerned with derivative-free methods for the solution of unconstrained black-box optimization problems. We show that derivative-free algorithms based on a linesearch-type extrapolation technique with sufficient decrease have the same worst-case complexity proved for direct-search methods. Furthermore, thanks to the linesearch approach with sufficient decrease, they also have the property that the number of iterations (in the worst case) for which the norm of the gradient is above a given threshold $\epsilon$ is of the order of $\epsilon^{-2}$. This last property considerably enriches the worst-case analysis of derivative-free algorithms and, to the best of our knowledge, is new in this context. It is worth noticing that the property characterizes the behavior of the derivative-free algorithm better than the usual complexity result. Indeed, typical complexity results give the number of iterations required to drive the norm of the gradient below a prefixed tolerance for the first time. If we let the method run, the norm of the gradient might well rise above the tolerance again. The property we prove in this paper indicates that the total number of iterations with a gradient norm above a specified tolerance is bounded by a quantity of the order of $\epsilon^{-2}$.
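In symbols, the property described above can be paraphrased (with $f$ the objective, $x_k$ the iterates, and $c$ a constant independent of $\epsilon$; this is our restatement of the abstract, not a result reproduced from the paper) as
$$
\#\bigl\{ k \in \mathbb{N} : \|\nabla f(x_k)\| > \epsilon \bigr\} \;\le\; c\,\epsilon^{-2},
$$
which is stronger than the usual bound on the first iteration index $k$ at which $\|\nabla f(x_k)\| \le \epsilon$.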