08:30 — Multifidelity Monte Carlo estimates for accelerating stochastic optimization with trust regions

In recent years, multifidelity stochastic optimization problems have gained significant attention, as they arise in a wide variety of application areas including virtual biofuel engineering, the design of autonomous laboratory experiments, controlling the energy demand of buildings, multi-scale modeling for scale-up challenges, and many more. Multifidelity stochastic optimization can reduce the computational workload by exploiting information from lower-fidelity models during the optimization of the high-fidelity target. However, persistent challenges remain, most notably determining the sample sizes needed to obtain accurate function estimates for the optimization task while operating within a limited computational budget across the hierarchy of multifidelity models. In this talk, we present our ongoing work on designing a sample-efficient trust-region algorithm that uses multifidelity Monte Carlo estimates with the aim of achieving almost sure convergence to a first-order critical point. We will show how the proposed sampling method enables us to optimally adjust the fidelity of the function value estimates, thus minimizing the computational resource requirements. We will also present preliminary results on benchmark problems and compare them with state-of-the-art stochastic optimization approaches.
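To make the flavor of the underlying estimator concrete, the sketch below shows a generic two-fidelity control-variate Monte Carlo estimate of an expected objective value. It illustrates the general technique only, not the algorithm of the talk; the model functions, sampler, and sample sizes are placeholder assumptions.

```python
import numpy as np

def mfmc_estimate(f_hi, f_lo, sampler, n_hi, n_lo, rng):
    """Two-fidelity control-variate Monte Carlo estimate of E[f_hi(X)].

    The cheap model f_lo is evaluated on all n_lo samples; the expensive
    model f_hi only on the first n_hi of them (2 <= n_hi <= n_lo).
    """
    xs = [sampler(rng) for _ in range(n_lo)]
    lo_all = np.array([f_lo(x) for x in xs])
    hi_sub = np.array([f_hi(x) for x in xs[:n_hi]])
    lo_sub = lo_all[:n_hi]

    # Control-variate weight fitted from the paired high/low evaluations.
    cov = np.cov(hi_sub, lo_sub, ddof=1)
    alpha = cov[0, 1] / cov[1, 1]

    # High-fidelity sample mean plus a low-fidelity correction term.
    return hi_sub.mean() + alpha * (lo_all.mean() - lo_sub.mean())

# Toy usage with placeholder models and a standard normal input.
rng = np.random.default_rng(0)
estimate = mfmc_estimate(
    f_hi=lambda x: np.sin(x) + 0.05 * x**2,   # stand-in for an expensive model
    f_lo=lambda x: np.sin(x),                 # cheap, correlated surrogate
    sampler=lambda r: r.normal(),
    n_hi=20, n_lo=2000, rng=rng,
)
```

Sharing the first n_hi samples between the two models is what lets the low-fidelity mean act as a control variate; the weight alpha is fitted from the paired evaluations.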

09:00 — Hierarchically constrained multi-fidelity blackbox optimization

This work introduces a novel multi-fidelity blackbox optimization algorithm designed to alleviate the resource-intensive task of evaluating infeasible points. The algorithm is an intermediary component bridging a direct search solver and a blackbox, reducing the computation time per evaluation while preserving the efficiency and convergence properties of the chosen solver. This is made possible by assessing feasibility through a broad range of fidelities, leveraging information from cost-effective evaluations before committing to a full computation. These feasibility estimations are generated through a hierarchical evaluation of constraints, tailored to the multi-fidelity nature of the blackbox problem and defined by a biadjacency matrix, for which we propose a construction. A series of computational tests using the NOMAD solver on the solar family of blackbox problems is conducted to validate the approach. The results show a significant improvement in solution quality when an initial feasible starting point is known in advance of the optimization process. When this condition is not met, the outcomes are contingent upon certain properties of the blackbox.
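As a rough sketch of this kind of hierarchical screening (not the paper's implementation), the snippet below checks a candidate point against constraint subsets of increasing evaluation cost, with the level-to-constraint assignment given by an assumed binary biadjacency matrix, and rejects the point as soon as a cheap level reveals infeasibility.

```python
import numpy as np

def hierarchical_feasibility(x, constraints, level_costs, biadjacency):
    """Screen point x through constraint subsets of increasing cost.

    constraints : list of callables c_j with c_j(x) <= 0 meaning feasible.
    level_costs : cost of evaluating each fidelity level (1-D array).
    biadjacency : binary matrix, biadjacency[k, j] == 1 if constraint j
                  is checked at fidelity level k.
    """
    spent = 0.0
    for k in np.argsort(level_costs):            # cheapest level first
        spent += level_costs[k]
        active = np.flatnonzero(biadjacency[k])
        if any(constraints[j](x) > 0.0 for j in active):
            return False, spent                  # infeasible: stop early
    return True, spent                           # passed every level

# Hypothetical example: two constraints, a cheap level and a full level.
cons = [lambda x: x[0] - 1.0,         # x0 <= 1
        lambda x: x[0] + x[1] - 1.5]  # x0 + x1 <= 1.5
B = np.array([[1, 0],                 # cheap level checks constraint 0 only
              [1, 1]])                # full level checks both
feasible, cost = hierarchical_feasibility(np.array([2.0, 0.0]), cons,
                                          np.array([0.1, 1.0]), B)
```

In this toy case the point violates the cheap constraint, so it is rejected after spending only the cost of the first level.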

09:30 — Zeroth-order bilevel optimization

Studying bilevel optimization problems has attracted much interest over the past few years, as they arise in several application domains such as automated machine learning and hyperparameter tuning. The main challenge in designing algorithms for bilevel problems is estimating the gradient of the upper-level function, which itself depends on the gradient of the optimal solution of the lower-level problem. This estimation involves calculating blocks of the Hessian matrix of the lower-level problem and their inverses, which is computationally expensive. To overcome this computational challenge, fully first-order and partial zeroth-order methods have been developed, but they still require some unbiased gradient estimators.
In this talk, we present a fully zeroth-order stochastic approximation algorithm for solving stochastic bilevel optimization problems assuming that neither the upper/lower loss functions, nor their unbiased gradient estimates are available.
To do so, we first generalize the Gaussian convolution technique to functions with two block variables and establish all corresponding relationships between such functions and their smoothed Gaussian approximations. Using an independent smoothing parameter for each block gives us the flexibility to exploit zeroth-order derivative estimates over just one block. Using these results, we estimate the first- and second-order derivatives, present our zeroth-order double-loop algorithm, and establish its non-asymptotic convergence analysis.
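To illustrate the kind of single-block zeroth-order estimator that Gaussian smoothing yields (a generic sketch under assumed names, not the estimator analyzed in the talk), the following computes a forward-difference Gaussian-smoothing gradient estimate with respect to one block of variables only, using its own smoothing parameter.

```python
import numpy as np

def zo_block_gradient(F, x, y, mu_x, n_samples, rng):
    """Zeroth-order estimate of the gradient of F(x, y) w.r.t. the x-block.

    Each sample perturbs only x along a Gaussian direction u and uses the
    forward difference (F(x + mu_x * u, y) - F(x, y)) / mu_x * u; the
    y-block would keep its own, independent smoothing parameter.
    """
    g = np.zeros_like(x)
    f0 = F(x, y)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (F(x + mu_x * u, y) - f0) / mu_x * u
    return g / n_samples

# Toy usage: F(x, y) = ||x||^2 + x . y, so grad_x F = 2x + y.
rng = np.random.default_rng(1)
x0, y0 = np.array([1.0, -2.0]), np.array([0.5, 0.5])
g_hat = zo_block_gradient(lambda x, y: x @ x + x @ y, x0, y0,
                          mu_x=1e-3, n_samples=5000, rng=rng)
```

The estimate is unbiased for the gradient of the Gaussian-smoothed function, so for small mu_x and enough samples it tracks the true block gradient.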

10:00 — Inexact direct-search methods for bilevel optimization problems

In this work, we introduce new direct-search schemes for the solution of bilevel optimization (BO) problems. Our methods rely on a fixed-accuracy blackbox oracle for the lower-level problem and deal with both smooth and potentially nonsmooth true objectives. We thus give the first analysis in the literature of direct-search schemes in these settings, providing convergence guarantees to approximate stationary points as well as complexity bounds in the smooth case. We also propose the first adaptation of mesh adaptive direct-search schemes for BO. Preliminary numerical results on a standard set of bilevel optimization problems show the effectiveness of our new approaches.
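For intuition, here is a minimal sketch of one poll step of a directional direct-search method applied to the bilevel objective x -> f_upper(x, y*(x)) when y*(x) is only available through a fixed-accuracy oracle. The interface lower_oracle(x, eps), the coordinate poll directions, and the acceptance margin are all illustrative assumptions, not the schemes proposed in the talk.

```python
import numpy as np

def direct_search_step(f_upper, lower_oracle, x, step, eps, forcing=1e-4):
    """One poll step of a direct-search method on x -> f_upper(x, y*(x)).

    lower_oracle(x, eps) returns an eps-accurate approximation of the
    lower-level solution y*(x).  A trial point is accepted only if it
    improves the incumbent by a forcing term plus a margin for the oracle
    inexactness, so noise from the inexact oracle cannot fake a descent.
    """
    f_inc = f_upper(x, lower_oracle(x, eps))
    margin = forcing * step**2 + 2.0 * eps   # assumes objective error ~ eps
    for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
        x_trial = x + step * d
        f_trial = f_upper(x_trial, lower_oracle(x_trial, eps))
        if f_trial < f_inc - margin:
            return x_trial, step             # successful poll: keep step size
    return x, step / 2.0                     # unsuccessful poll: shrink step

# Toy usage: a lower level whose (exact) solution is y*(x) = x.
x_new, step_new = direct_search_step(
    lambda x, y: (x - 1.0) @ (x - 1.0) + y @ y,  # toy upper-level objective
    lambda x, eps: x.copy(),                     # toy oracle for y*(x)
    np.zeros(2), step=0.5, eps=1e-3)
```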