14:00 — Optimality Conditions with Probabilistic State Constraints

In this talk, we discuss optimization problems subject to random state constraints, where we distinguish between the chance-constrained case and the almost sure formulation. We highlight some of the difficulties in the infinite-dimensional setting, which is of interest in physics-based models where a control belonging to a Banach space acts on a system described by a partial differential equation (PDE) with random inputs or parameters. We study the setting in which the obtained state should be bounded uniformly over the physical domain with high probability, or even probability one. We apply our results to a model with a random elliptic PDE, where the randomness is induced by the right-hand side. For the chance-constrained setting, this structure allows us to obtain an explicit representation for the Clarke subdifferential of the probability function using the spherical radial decomposition of Gaussian random vectors. This representation is used for the numerical solution in a discretize-then-optimize approach. For the almost sure setting, we use a Moreau-Yosida regularization and solve a sequence of regularized problems in an optimize-then-discretize approach. The solutions are compared, providing insights for the development of further algorithms.
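For intuition, the following is a minimal sketch, not the talk's model, of how the spherical radial decomposition can be used to estimate a probability function P[g(u, ξ) ≤ 0] for a Gaussian vector ξ: directions are sampled uniformly on the sphere, the feasible radius along each direction is found by bisection, and the chi distribution supplies the radial mass. The constraint g, the covariance factor L, and all parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi

def probability_srd(g, u, L, num_dirs=2000, r_max=50.0, tol=1e-8, seed=0):
    """Estimate P[g(u, xi) <= 0] for xi ~ N(0, Sigma) with Sigma = L @ L.T,
    using the spherical radial decomposition xi = r * (L @ v), where r follows
    a chi distribution with d degrees of freedom and v is uniform on the sphere.

    Assumes g(u, 0) <= 0 and that, along each direction v, the feasible radii
    {r >= 0 : g(u, r * L @ v) <= 0} form an interval [0, rho(v)].
    """
    rng = np.random.default_rng(seed)
    d = L.shape[0]
    chi_d = chi(df=d)
    total = 0.0
    for _ in range(num_dirs):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)              # uniformly distributed direction
        if g(u, r_max * (L @ v)) <= 0.0:    # ray feasible up to the cutoff radius
            total += chi_d.cdf(r_max)
            continue
        lo, hi = 0.0, r_max                 # bisection for the boundary radius rho(v)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if g(u, mid * (L @ v)) <= 0.0:
                lo = mid
            else:
                hi = mid
        total += chi_d.cdf(lo)              # chi mass of the feasible radial interval
    return total / num_dirs

# Toy usage: probability that a linear "state" stays below a bound.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
L = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 0.5]]))
g = lambda u, xi: np.max(A @ (u + xi)) - 1.0    # state-constraint residual
print(probability_srd(g, np.array([-0.5, -0.5]), L, num_dirs=500))
```

In such a setting the gradient (or Clarke subgradient) of the probability function can be assembled from the same directional quantities, which is what makes the representation useful for a discretize-then-optimize solver.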

14:30 — Convergence rates for ensemble-based solutions to optimal control of uncertain dynamical systems

We consider optimal control problems governed by nonlinear ordinary differential equations with uncertain inputs. We approximate the optimal control problem using the sample average approximation, yielding optimal control problems with ensembles of deterministic dynamical systems. Utilizing techniques developed for deriving metric entropy bounds for suprema of sub-Gaussian processes, we derive nonasymptotic Monte Carlo-type convergence rates for the ensemble-based solutions. Moreover, we explore the challenges of extending our approach to establish convergence rates for optimal control under uncertainty governed by time-dependent partial differential equations. We illustrate our theoretical framework on a catalyst mixing problem under uncertainty.
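As a rough illustration of the sample average approximation step, with a toy scalar ODE, piecewise-constant controls, and forward Euler time stepping (none of which are taken from the talk), one might proceed as follows: the random input is frozen to a finite sample set, and a single control is optimized against the whole ensemble of deterministic dynamics.

```python
import numpy as np
from scipy.optimize import minimize

# Toy uncertain dynamics: dx/dt = -xi * x + u(t), x(0) = 1, with an uncertain
# decay rate xi; minimize  E[0.5 * x(T)^2] + 0.5 * alpha * ||u||^2.
# The expectation is replaced by a sample average over a frozen set of draws,
# giving an ensemble of deterministic ODEs that share one control.

T, n_steps, alpha = 1.0, 25, 1e-2
dt = T / n_steps
rng = np.random.default_rng(0)
xi = rng.lognormal(mean=0.0, sigma=0.5, size=64)        # frozen sample set

def terminal_states(u):
    """Forward Euler for the whole ensemble at once (vectorized over xi)."""
    x = np.ones_like(xi)
    for k in range(n_steps):
        x = x + dt * (-xi * x + u[k])
    return x

def saa_objective(u):
    """Sample average of the terminal cost plus a control penalty."""
    return 0.5 * np.mean(terminal_states(u) ** 2) + 0.5 * alpha * dt * np.sum(u**2)

result = minimize(saa_objective, np.zeros(n_steps), method="L-BFGS-B")
print("SAA optimal value:", result.fun)
```

The convergence rates in the talk quantify how the minimizer of such a sampled problem approaches the minimizer of the underlying expectation as the sample size grows.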

15:00 — Adaptive Surrogate Modeling for Trajectory Optimization with Model Inexactness

In many applications one must compute optimal trajectories from imperfect knowledge of the dynamics. For example, solving trajectory optimization problems for hypersonic vehicles requires the computation of lift and drag coefficients at many flight configurations. Determining these coefficients requires expensive high-fidelity computations using detailed representations of the hypersonic vehicle. This talk proposes the use of computationally inexpensive adaptive Gaussian process models constructed from high-fidelity samples to approximate the components of the dynamics that are expensive to evaluate. To reduce the effect of model errors on the optimal trajectory, the current Gaussian process model is updated as needed at the cost of evaluating the components of the dynamics at a small number of additional sample points. First, the optimal control problem is solved using the mean of the current Gaussian process model to represent the dynamics. Next, sensitivity analysis is combined with properties of the Gaussian process model, such as its variance, to determine whether the Gaussian process model needs to be updated and, if so, at which samples the dynamics should be evaluated to update it. This talk outlines our current model refinement procedure and demonstrates its performance on a trajectory optimization problem for a hypersonic vehicle with lift and drag models that are known, but expensive to evaluate.
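A simplified sketch of such a refinement loop, with a made-up one-dimensional "drag" model and a scalar design variable standing in for the full trajectory optimization, might look as follows; all functions, parameters, and tolerances below are placeholders rather than the talk's hypersonic-vehicle model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_drag(a):                      # stand-in for a high-fidelity solver
    return 0.1 + 0.05 * a**2 + 0.02 * np.sin(5.0 * a)

X = np.array([[-2.0], [0.0], [2.0]])        # initial high-fidelity samples
y = expensive_drag(X.ravel())
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)

for it in range(10):
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6,
                                  normalize_y=True).fit(X, y)

    # Step 1: "trajectory optimization" with the GP mean as the drag model.
    objective = lambda a: gp.predict(np.array([[a]]))[0] + 0.01 * (a - 1.0)**2
    a_opt = minimize_scalar(objective, bounds=(-2.0, 2.0), method="bounded").x

    # Step 2: use the GP predictive standard deviation at the current solution
    # to decide whether the surrogate is trustworthy there.
    _, std = gp.predict(np.array([[a_opt]]), return_std=True)
    if std[0] < 1e-3:
        break

    # Step 3: refine the surrogate with one more expensive evaluation.
    X = np.vstack([X, [[a_opt]]])
    y = np.append(y, expensive_drag(a_opt))

print("design variable:", a_opt, "refinements:", len(X) - 3)
```

The point of the loop is that expensive evaluations are spent only where the surrogate's uncertainty actually affects the computed solution, rather than on a dense sampling of the whole flight envelope.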

15:30 — Empirical estimators for risk-neutral composite optimal control with applications to bang-bang control

We consider risk-neutral composite optimal control problems where the objective functional consists of
an expectation functional involving the solution operator of a parametrized PDE and a nonsmooth but convex regularizer. Our particular interest lies in cases in which the latter lacks strong convexity, e.g., "bang-bang-off" regularizers given by the sum of a sparsifying term and additional box constraints. For the practical realization, the expectation term is treated by a Monte Carlo sample-based approach. We study the asymptotic consistency of this ansatz and derive nonasymptotic sample size estimates. Our analyses leverage problem structures commonly encountered in PDE-constrained optimization, including compact embeddings. Our theoretical results are confirmed by extensive numerical results for both linear and bilinear control problems.
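As a toy illustration of the empirical (sample-based) treatment together with a "bang-bang-off" regularizer, consider minimizing a Monte Carlo average of quadratic tracking terms plus an L1 term under box constraints via a proximal gradient method. The operators and parameters below are illustrative stand-ins, not a PDE discretization from the talk.

```python
import numpy as np

# Sample-average composite problem:
#   min_u  (1/N) sum_i 0.5*||A_i u - b||^2  +  beta*||u||_1,   -u_max <= u <= u_max
# The A_i stand in for sampled (discretized) solution operators; the nonsmooth
# part is handled exactly by its proximal operator.

rng = np.random.default_rng(1)
n, N, beta, u_max = 40, 32, 0.05, 1.0
A_samples = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(N)]
b = np.sin(np.linspace(0.0, np.pi, n))                   # target "state"

def smooth_grad(u):
    """Gradient of the sample average (1/N) sum 0.5*||A_i u - b||^2."""
    return sum(A.T @ (A @ u - b) for A in A_samples) / N

def prox(u, step):
    """Prox of step*(beta*||.||_1 + indicator of [-u_max, u_max]^n):
    soft-thresholding followed by projection onto the box."""
    u = np.sign(u) * np.maximum(np.abs(u) - step * beta, 0.0)
    return np.clip(u, -u_max, u_max)

L = max(np.linalg.norm(A, 2) ** 2 for A in A_samples)    # crude Lipschitz bound
u = np.zeros(n)
for _ in range(500):
    u = prox(u - (1.0 / L) * smooth_grad(u), 1.0 / L)

print("active (nonzero) controls:", np.count_nonzero(np.abs(u) > 1e-8))
print("controls at the bounds   :", np.count_nonzero(np.abs(u) > u_max - 1e-8))
```

The combination of the sparsifying term and the box constraints is what produces the bang-bang-off structure: optimal controls tend to sit either at zero or at one of the bounds, and the sample size estimates in the talk quantify how reliably the sampled problem reproduces that structure.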
For the practical realization, the expectation term is treated by a Monte Carlo sample-based approach. We study the asymptotic consistency of this ansatz and derive nonasymptotic sample size estimates. Our analyses leverage problem structures commonly encountered in PDE-constrained optimization problems, including compact embeddings. Our theoretical results are confirmed by extensive numerical results for both, linear and bilinear, control problems.