08:30 — A derivative-free approach to partitioned optimization

We introduce the "partitioned optimization" framework to solve some large-scale derivative-free optimization problems.
A typical use case is an objective function with an explicit structure consisting of a full-dimensional component that is simple to minimize and a challenging nonsmooth component that depends on only a few variables (we call these the nonsmooth variables).
We partition the variable space into a continuum of hyperplanes, each consisting of the subspace obtained by fixing all the nonsmooth variables to a given set of values.
As a result, the restriction of the objective function to each hyperplane is smooth, so a dedicated algorithm can compute a minimizer.
Then, we use a derivative-free algorithm to identify the hyperplane on which the minimum of the restricted objective function is lowest.
The "partitioned optimization" framework formalizes this strategy and generalizes its applicability to a broad class of large-scale derivative-free problems.
Our framework is applicable to any problem in which, first, the difficult components have an explicit small-dimensional dependency and, second, the remaining components are accessible and can be minimized at an acceptable cost.
Some numerical examples are provided to highlight a gain in performance.
In particular, we apply the framework to solve a class of discontinuous optimal control problems on which usual strategies from the literature fail.
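
The following is a minimal sketch of the two-level strategy described in this abstract, under stated assumptions: the objective f(y, z) is a hypothetical placeholder that is nonsmooth in the few variables y and smooth in the remaining variables z, and off-the-shelf scipy routines stand in for both the inner smooth solver and the outer derivative-free solver; it is not the authors' implementation.

    import numpy as np
    from scipy.optimize import minimize

    def f(y, z):
        # Hypothetical objective: nonsmooth in y (few variables), smooth in z.
        return np.abs(y).sum() + np.sum((z - y[0]) ** 2)

    def inner_min(y, z0):
        """Minimize z -> f(y, z) on the hyperplane where y is fixed (smooth problem)."""
        res = minimize(lambda z: f(y, z), z0, method="BFGS")
        return res.fun, res.x

    def outer_obj(y, z0):
        """Minimum of the restriction of f to the hyperplane defined by y."""
        val, _ = inner_min(y, z0)
        return val

    z0 = np.zeros(50)                      # many smooth variables
    y0 = np.array([1.0, -2.0])             # few nonsmooth variables
    # Derivative-free outer search over the nonsmooth variables only.
    res = minimize(lambda y: outer_obj(y, z0), y0, method="Nelder-Mead")
    print(res.x, res.fun)

The outer solver only ever sees a function of the few nonsmooth variables, which is what keeps the derivative-free part small-dimensional even when z is large.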

09:00 — Blockwise direct search method

Direct search methods are a class of derivative-free optimization algorithms that proceed only by comparing objective function values. We introduce a new direct-search framework called blockwise direct search (BDS), which divides the set of search directions into blocks. At each iteration, each block has its own step size. We develop this framework into a solver that is open-source and easy to use. In numerical experiments, we observe that BDS is competitive with model-based methods in some cases. In addition, our solver is efficient and stable under noise without introducing specific techniques. BDS can also be used to tune its own hyperparameters to improve performance.
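
A minimal sketch of the blockwise idea, under stated assumptions: the search directions are the plus/minus coordinate vectors split into blocks, and each block keeps its own step size, expanded after a successful poll and shrunk otherwise. This is only an illustration of the principle, not the BDS solver presented in the talk.

    import numpy as np

    def blockwise_direct_search(f, x0, n_blocks=2, alpha0=1.0, max_iter=200,
                                expand=2.0, shrink=0.5, tol=1e-8):
        x = np.asarray(x0, dtype=float)
        n = x.size
        blocks = np.array_split(np.arange(n), n_blocks)  # partition of the coordinates
        alphas = np.full(n_blocks, alpha0)               # one step size per block
        fx = f(x)
        for _ in range(max_iter):
            if alphas.max() < tol:
                break
            for b, idx in enumerate(blocks):
                improved = False
                for i in idx:
                    for sign in (+1.0, -1.0):
                        trial = x.copy()
                        trial[i] += sign * alphas[b]
                        ft = f(trial)
                        if ft < fx:                      # simple decrease accepted
                            x, fx, improved = trial, ft, True
                alphas[b] *= expand if improved else shrink
            # each block's step size evolves independently of the others
        return x, fx

    x, fx = blockwise_direct_search(lambda v: np.sum((v - 1.0) ** 2), np.zeros(6))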

09:30 — Using concurrent objective evaluations in derivative-free optimization

We consider the case where concurrent evaluations of an objective function can be performed. We are particularly interested in scenarios where the objective function is expensive to evaluate, with each evaluation possibly using considerable parallel computing resources or relying on evaluating a real-world system or experiment. As such, the number of parameters being optimized is relatively small, as is the number of concurrent evaluations that can be performed. We conduct extensive numerical tests to study the use of such concurrency in both direct-search and model-based trust-region methods, and we investigate how increasing levels of concurrency affect the observed efficiency of these algorithms. Numerical tests are run on a collection of synthetic benchmark problems and real-world simulations. Additionally, we propose a set of guidelines for choosing between direct-search and model-based methods based on the problem characteristics and the available computational resources. The use of parallel computing frameworks in our approach showcases a practical and scalable solution for complex simulation optimization problems. Application problems include beamline design and operation in high-energy physics as well as the design of stellarator and tokamak fusion power plants.
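
One common way such concurrency enters a direct-search method is to evaluate all poll points of an iteration in parallel before accepting the best one. The sketch below illustrates this under stated assumptions: the objective, the poll set (plus/minus coordinate directions), and the number of workers are placeholders, not the setups studied in the talk.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def expensive_objective(x):
        # Stand-in for an expensive simulation or experiment.
        return float(np.sum((x - 0.5) ** 2))

    def concurrent_poll_step(f, x, alpha, max_workers=4):
        directions = np.vstack([np.eye(x.size), -np.eye(x.size)])
        candidates = [x + alpha * d for d in directions]
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            values = list(pool.map(f, candidates))       # concurrent evaluations
        best = int(np.argmin(values))
        fx = f(x)
        return (candidates[best], values[best]) if values[best] < fx else (x, fx)

    if __name__ == "__main__":
        x, fx = concurrent_poll_step(expensive_objective, np.zeros(3), alpha=0.25)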

10:00 — A direct-search method for decentralized derivative-free optimization

Derivative-free algorithms are particularly useful for tackling optimization problems in which the function to optimize arises from complex, expensive procedures that prevent the application of classical, derivative-based algorithms. In data-driven systems, this can manifest as calculations performed on data stored across different compute nodes, which significantly complicates the optimization process.

In this talk, we will present a direct-search framework for minimizing a sum of functions that are dispatched across a network of communicating agents. The proposed method combines the use of positive spanning sets, a standard feature of direct-search techniques, with step-size sequences that are amenable to decentralized optimization. We establish global convergence of the proposed algorithm and provide a numerical comparison illustrating the benefit of our approach over strategies that explicitly attempt to approximate derivatives.
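
A minimal sketch of a decentralized direct-search step, under stated assumptions: each agent averages its iterate with its neighbours through a mixing matrix (here, full averaging over a complete graph), then polls its private function along a positive spanning set with a prescribed decreasing step-size sequence. The network, local functions, and step sizes are placeholders and do not reproduce the method presented in the talk.

    import numpy as np

    def local_f(i, x):
        # Agent i's private objective (placeholder).
        return np.sum((x - i) ** 2)

    def decentralized_direct_search(n_agents=4, dim=3, n_iters=100):
        W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing matrix
        X = np.zeros((n_agents, dim))                      # one iterate per agent
        D = np.vstack([np.eye(dim), -np.eye(dim)])         # positive spanning set
        for k in range(n_iters):
            alpha = 1.0 / (k + 1)                          # prescribed decreasing step size
            X = W @ X                                      # consensus averaging with neighbours
            for i in range(n_agents):
                trials = X[i] + alpha * D
                vals = [local_f(i, t) for t in trials]
                j = int(np.argmin(vals))
                if vals[j] < local_f(i, X[i]):             # simple decrease on the local function
                    X[i] = trials[j]
        return X.mean(axis=0)

    x_bar = decentralized_direct_search()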