14:00 — Stochastic derivative-free optimization algorithms using random subspace strategies

We propose trust-region and direct-search frameworks for large-scale stochastic derivative-free optimization by introducing the STARS and StoDARS algorithms. STARS achieves scalability by minimizing random models that approximate the objective in low-dimensional affine subspaces, thereby significantly reducing the per-iteration cost in function evaluations; StoDARS achieves the same goal by exploring the decision space through poll directions generated in random subspaces. These subspaces and their dimension are chosen via Johnson-Lindenstrauss transforms, such as those obtained from Haar-distributed orthogonal random matrices. The quality of the subspaces and poll sets, as well as the accuracy of the estimates and models used by the algorithms, are required to hold with sufficiently high, but fixed, probabilities. Convergence and complexity results are obtained for both methods using martingale theory. In particular, by leveraging the ability of StoDARS to generate a dense set of poll directions, its almost sure convergence to Clarke stationary points is established. Moreover, the second-order analysis of the well-known mesh adaptive direct-search algorithms, based on a second-order extension of the Rademacher-theorem-based definition of the Clarke subdifferential (the so-called generalized Hessian), is carried over to the StoDARS framework, making it, to the best of our knowledge, the first such analysis in a stochastic direct-search setting.
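
Both methods draw their subspaces via Johnson-Lindenstrauss-type transforms such as Haar-distributed orthogonal matrices. As a rough illustration of the StoDARS-style polling idea (a minimal sketch under our own assumptions, not the authors' code; stochastic estimates and the full algorithmic machinery are omitted), one can sample such a matrix with scipy and poll along a few of its columns:

```python
# Minimal illustration, not the authors' STARS/StoDARS implementations: draw a
# random p-dimensional subspace of R^n from a Haar-distributed orthogonal matrix
# and poll along +/- its leading columns. Function and parameter names are ours.
import numpy as np
from scipy.stats import ortho_group


def poll_in_random_subspace(f, x, step, subspace_dim, rng):
    """One poll step over +/- columns of a random orthonormal basis."""
    n = x.size
    Q = ortho_group.rvs(dim=n, random_state=rng)   # Haar-distributed orthogonal matrix
    basis = Q[:, :subspace_dim]                    # orthonormal basis of a random subspace
    directions = np.hstack([basis, -basis])        # poll set: +/- basis vectors
    f_x = f(x)
    for d in directions.T:
        trial = x + step * d
        if f(trial) < f_x:                         # simple decrease test (stochastic estimates omitted)
            return trial, True
    return x, False


# Toy usage: a smooth quadratic in R^50, polled in a 3-dimensional random subspace.
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
x, step = rng.standard_normal(50), 1.0
for _ in range(100):
    x, success = poll_in_random_subspace(f, x, step, subspace_dim=3, rng=rng)
    step = 2.0 * step if success else 0.5 * step
print(f(x))
```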

14:30 — Non-convergence analysis of probabilistic direct search

Direct search is a popular method in derivative-free optimization. Probabilistic direct search has attracted increasing attention in recent years due to both its practical success and its theoretical appeal. Under certain conditions, it provably converges at the same global rate as its deterministic counterpart, while its cost per iteration is much lower, leading to significant advantages in practice. However, a fundamental question has lacked a systematic theoretical investigation: when does probabilistic direct search fail to converge? We answer this question by establishing a non-convergence theory of probabilistic direct search. We prove that probabilistic direct search fails to converge if the searching set is probabilistically ascent. Our theory not only deepens our understanding of the algorithm's behavior, but also clarifies the limits of reducing the per-iteration cost through randomization, and hence provides guidance for practical implementations of probabilistic direct search.
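
To make the object of study concrete, the following is a bare-bones sketch of a probabilistic direct-search iteration with a randomly drawn polling set and a sufficient-decrease test. It is our own illustration under assumed choices (the forcing function alpha^2, the expansion/contraction factors), not the paper's algorithm; the non-convergence theory concerns precisely the probabilistic descent/ascent properties of such random polling sets.

```python
# Sketch of probabilistic direct search with a random polling set (illustrative only).
import numpy as np


def probabilistic_direct_search(f, x0, alpha0=1.0, num_dirs=2, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    x, alpha = np.asarray(x0, dtype=float), alpha0
    for _ in range(max_iter):
        # Random polling set: a few unit directions drawn uniformly on the sphere.
        D = rng.standard_normal((num_dirs, x.size))
        D /= np.linalg.norm(D, axis=1, keepdims=True)
        f_x, success = f(x), False
        for d in D:
            if f(x + alpha * d) < f_x - alpha ** 2:   # sufficient decrease (forcing function alpha^2)
                x, success = x + alpha * d, True
                break
        alpha = 2.0 * alpha if success else 0.5 * alpha
    return x


print(probabilistic_direct_search(lambda x: np.sum(x ** 2), np.ones(10)))
```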

15:00 — Full-low evaluation methods for bound and linearly constrained derivative-free optimization

Derivative-free optimization consists in finding the best value of an objective function without relying on derivatives. To tackle such problems, one may build approximate derivatives, using for instance finite-difference estimates. One may also design algorithmic strategies that explore the space and seek improvement over the current point. The first type of strategy often performs well on smooth problems, but at the expense of more function evaluations. The second type is cheaper and typically handles non-smoothness or noise in the objective better. Recently, full-low evaluation methods have been proposed as a hybrid class of DFO algorithms that combine both strategies, respectively denoted Full-Eval and Low-Eval. In the unconstrained case, these methods showed promising numerical performance.
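
As a concrete example of the kind of approximate-derivative ingredient used by Full-Eval steps (a generic sketch under our own naming, not the specific estimator of the talk), a forward finite-difference gradient costs n extra function evaluations:

```python
# Forward finite-difference gradient estimate: a generic Full-Eval-style building block.
import numpy as np


def forward_difference_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x with forward differences (n evaluations)."""
    x = np.asarray(x, dtype=float)
    f_x = f(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f_x) / h
    return grad


# Toy check on a quadratic: the estimate should be close to 2*x.
x = np.array([1.0, -2.0, 0.5])
print(forward_difference_gradient(lambda z: np.sum(z ** 2), x))
```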

In this talk, we present an extension of the full-low evaluation framework to bound and linearly constrained derivative-free optimization problems. We derive convergence results for an instance of this framework that combines finite-difference quasi-Newton steps with probabilistic direct-search steps. The former are projected onto the feasible set, while the latter are defined within tangent cones identified from nearby active constraints. We also showcase the practical advantages of our instance over algorithms that rely solely on Full-Eval or Low-Eval iterations, on standard linearly constrained problems that we adapt to introduce noisy evaluations as well as non-smoothness.
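
The sketch below illustrates one hybrid iteration restricted, for simplicity, to bound constraints and with a plain projected finite-difference gradient step standing in for the quasi-Newton step; the tangent-cone machinery for general linear constraints is not reproduced. All names and update rules are illustrative assumptions rather than the talk's exact algorithm.

```python
# One simplified full-low style iteration on a box: projected gradient-type step
# (Full-Eval role), with a feasible coordinate poll as fallback (Low-Eval role).
import numpy as np


def project_box(x, lb, ub):
    """Projection onto the box {lb <= x <= ub}."""
    return np.minimum(np.maximum(x, lb), ub)


def fd_gradient(f, x, h=1e-6):
    f_x, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f_x) / h
    return g


def full_low_step(f, x, lb, ub, step):
    """Try a projected finite-difference gradient step, then a feasible coordinate poll."""
    f_x = f(x)
    trial = project_box(x - step * fd_gradient(f, x), lb, ub)   # Full-Eval-style step
    if f(trial) < f_x:
        return trial, 2.0 * step
    for i in range(x.size):                                     # Low-Eval poll along +/- coordinates
        for sign in (1.0, -1.0):
            e = np.zeros_like(x)
            e[i] = sign * step
            cand = project_box(x + e, lb, ub)
            if f(cand) < f_x:
                return cand, step
    return x, 0.5 * step


# Toy usage: minimize a shifted quadratic over the box [0, 1]^5.
f = lambda z: np.sum((z - 2.0) ** 2)
x, lb, ub, step = np.zeros(5), np.zeros(5), np.ones(5), 0.5
for _ in range(50):
    x, step = full_low_step(f, x, lb, ub, step)
print(x)   # should approach the upper bound ub = 1
```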

15:30 — Expected decrease for derivative-free algorithms using random subspaces

Derivative-free algorithms seek the minimum of a given function based only on function values queried at appropriate points. Their performance is known to worsen as the problem dimension increases. Recent advances in high-dimensional derivative-free techniques have tackled this issue by working in low-dimensional subspaces drawn at random in an iterative fashion. In this talk, we present an analysis of derivative-free algorithms (both direct-search and model-based approaches) that employ random subspaces, aimed at understanding the expected decrease achieved per function evaluation. Our results leverage linear local approximations of smooth functions. Although the quantities of interest involve multidimensional integrals with no closed-form expression, a relative comparison across subspace dimensions suggests that a low dimension is preferable. Numerical computation of these quantities confirms the benefit of operating in low-dimensional subspaces.
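
For concreteness, here is an illustrative sketch (our own assumptions for names and step-size rules, not the speakers' code) of direct search restricted to a p-dimensional random subspace drawn at each iteration, so that an iteration spends 2p rather than 2n function evaluations; the expected decrease per such evaluation is the quantity analyzed in the talk.

```python
# Direct search in a random p-dimensional subspace: each iteration polls 2p directions
# obtained from a scaled Gaussian sketch matrix instead of 2n full-space directions.
import numpy as np


def random_subspace_direct_search(f, x0, p=2, alpha0=1.0, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x, alpha = np.asarray(x0, dtype=float), alpha0
    n = x.size
    for _ in range(max_iter):
        P = rng.standard_normal((p, n)) / np.sqrt(p)     # random subspace (sketch) matrix
        f_x, success = f(x), False
        for row in np.vstack([P, -P]):                   # 2p poll directions in the subspace
            if f(x + alpha * row) < f_x:
                x, success = x + alpha * row, True
                break
        alpha = 2.0 * alpha if success else 0.5 * alpha
    return x


print(random_subspace_direct_search(lambda z: np.sum(z ** 2), np.ones(100), p=2))
```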