1 — 08:30 — Sensor path planning for optimal experimental design
In scenarios where static sensors are impractical, mobile sensors are used to gather data for solving inverse problems. We optimize the path that such a mobile sensor takes for various optimal experimental design objectives applied to Bayesian inverse problems. We demonstrate our approach on a reaction-advection-diffusion equation that models the spread of a pollutant, discretized with a finite element method. We impose constraints on the observation path, such as a system of differential-algebraic equations governing the movement of the sensor and obstacle-avoidance constraints.
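To make the setup concrete, here is a minimal sketch of D-optimal path planning in a linearized Gaussian setting. Everything in it is a hypothetical stand-in, not the speakers' formulation: the unknown field is a small set of Gaussian bumps, the sensor dynamics are reduced to a bounded step length rather than a full DAE, and a single circular obstacle replaces general obstacle-avoidance constraints.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical linearized Bayesian inverse problem on the unit square:
# the unknown field is parameterized by a few Gaussian bumps.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(9, 2))     # basis-function centers
sigma_basis, sigma_noise = 0.15, 0.05
prior_prec = np.eye(len(centers))                # identity prior precision

def obs_row(p):
    """Linearized observation operator: basis functions evaluated at point p."""
    return np.exp(-np.sum((centers - p) ** 2, axis=1) / (2 * sigma_basis**2))

n_wp = 12                                        # waypoints along the path

def neg_log_det(z):
    """D-optimal objective: maximize log det of the posterior precision
    assembled from one observation per waypoint."""
    G = np.array([obs_row(p) for p in z.reshape(n_wp, 2)])
    H = prior_prec + (G.T @ G) / sigma_noise**2
    return -np.linalg.slogdet(H)[1]

# Circular obstacle: waypoints must stay outside radius 0.15 around (0.5, 0.5).
def obstacle(z):
    return np.linalg.norm(z.reshape(n_wp, 2) - [0.5, 0.5], axis=1) - 0.15

# Crude kinematic surrogate for the sensor dynamics: bounded step length.
def step_length(z):
    return 0.2 - np.linalg.norm(np.diff(z.reshape(n_wp, 2), axis=0), axis=1)

z0 = np.linspace([0.1, 0.1], [0.9, 0.9], n_wp).ravel()
z0 += 0.01 * rng.standard_normal(z0.size)        # nudge off the obstacle center
res = minimize(neg_log_det, z0, method="SLSQP",
               bounds=[(0.0, 1.0)] * z0.size,
               constraints=[NonlinearConstraint(obstacle, 0.0, np.inf),
                            NonlinearConstraint(step_length, 0.0, np.inf)])
print("optimized information gain (log det):", -res.fun)
```

In the talk's setting the observation operator would come from the PDE forward map and the dynamics from a DAE solver; the structure of the resulting path-optimization problem is the same.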
2 — 09:00 — Low-rank Gradient Flow – a First Order Algorithm for Non-convex Optimization
We introduce a novel class of first-order methods for unconstrained optimization, called low-rank gradient flows (LRGFs). The idea behind these methods is to construct at every optimization step a low-rank quadratic surrogate for the cost function, followed by an analytic solve for the gradient flow on the surrogate model; the optimization step concludes with a line search on the curve representing the gradient flow. These steps condense into a very simple formula for the gradient flow, at a cost per step comparable to that of a nonlinear conjugate gradient algorithm. The fact that the line search is conducted along a curve distinguishes LRGF from other first-order optimization methods, where the line search is conducted along a search direction, that is, a straight line. This may also help LRGF better navigate the geometry of the cost function, allowing the method to avoid local minima more often than other first-order methods, as shown by numerical experiments. For higher-dimensional problems the convergence can be accelerated using a multilevel strategy based on reduced-order models.
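A sketch of what one such step could look like, under assumptions that are mine rather than the authors': with a low-rank surrogate Hessian H = V diag(lam) V^T, the gradient flow y'(t) = -(g + H y), y(0) = 0 has the closed-form solution y(t) = -t g_perp - V ((1 - exp(-lam t)) / lam) V^T g, and the line search becomes a scalar minimization over the flow time t. The rank-one secant surrogate below is an invented illustration, not the construction from the talk.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lrgf_step(f, grad, x, V, lam):
    """One LRGF-style step: analytic gradient flow on a low-rank quadratic
    surrogate with Hessian V diag(lam) V.T, then a line search in the flow
    time t along the resulting curve."""
    g = grad(x)
    c = V.T @ g                        # component of g in the low-rank subspace
    g_perp = g - V @ c                 # component orthogonal to it
    def curve(t):                      # closed-form solution of the gradient flow
        return x - t * g_perp - V @ ((1.0 - np.exp(-lam * t)) / lam * c)
    res = minimize_scalar(lambda t: f(curve(t)), bounds=(0.0, 10.0), method="bounded")
    return curve(res.x)

# Hypothetical usage on the Rosenbrock function with a rank-one secant surrogate.
def f(x):    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
def grad(x): return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                              200*(x[1] - x[0]**2)])

x, x_old = np.array([-1.2, 1.0]), None
for k in range(100):
    if x_old is None:
        V, lam = np.zeros((2, 1)), np.array([1.0])     # first step: plain gradient flow
    else:
        s, dg = x - x_old, grad(x) - grad(x_old)
        V = (s / np.linalg.norm(s)).reshape(-1, 1)     # secant direction
        lam = np.array([max(dg @ s / (s @ s), 1e-8)])  # secant curvature estimate
    x_old, x = x, lrgf_step(f, grad, x, V, lam)
print("final iterate:", x, "f:", f(x))
```

For small t the curve leaves x along the steepest-descent direction, while for large t it damps the components with high surrogate curvature; the line search picks the best point along this bend rather than along a straight ray.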
3 — 09:30 — Optimal Experimental Design for Universal Differential Equations
We are interested in using universal differential equations to model complex dynamical processes, e.g., in chemical engineering or medicine. Universal differential equations, or hybrid models, combine the advantages of first-principles models and data-driven approaches. Often the dynamical process is only partially known, i.e., it may consist of subprocesses that are not yet fully understood or are difficult to describe from first principles. For these unknown or uncertain subprocesses, universal approximators, such as neural networks, can be embedded in the model and trained on data.
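The structure of such a hybrid right-hand side is easy to illustrate. The sketch below is hypothetical (the reactor balances, inflow parameters, and untrained network weights are all invented for illustration): the known first-principles terms are written out explicitly, and the unknown reaction-rate subprocess is replaced by a small neural network.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical hybrid model: a stirred-tank reactor whose reaction-rate
# subprocess is unknown and replaced by a small neural network.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 2)) * 0.5, np.zeros(8)   # trainable weights
W2, b2 = rng.standard_normal((1, 8)) * 0.5, np.zeros(1)   # (shown untrained)

def nn_rate(c, T):
    """Universal approximator standing in for the unknown rate law r(c, T)."""
    h = np.tanh(W1 @ np.array([c, T]) + b1)
    return (W2 @ h + b2)[0]

def rhs(t, y):
    """Known mass/energy balances (first principles) plus the learned term."""
    c, T = y
    q, c_in, T_in = 0.5, 1.0, 300.0    # known inflow parameters
    r = nn_rate(c, T)                  # unknown subprocess -> neural network
    dc = q * (c_in - c) - r            # species balance
    dT = q * (T_in - T) + 10.0 * r     # energy balance with reaction heat
    return [dc, dT]

sol = solve_ivp(rhs, (0.0, 5.0), [0.2, 310.0])
print("state at t=5:", sol.y[:, -1])
```

Training then amounts to fitting the weights W1, b1, W2, b2 so that the simulated trajectories match measured data, which is where experimental design enters.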
Since this training procedure requires experimental data, which are usually costly to collect, it is necessary to carefully design the experimental setup in advance. This amounts to deciding when to measure and how to stimulate the dynamical process in order to collect data with high information content. This is addressed by the concept of optimal experimental design (OED), leading to a challenging, specifically structured optimal control problem. We formulate the OED problem for universal differential equations, where the number of parameters to be estimated can be very large, and discuss the properties of the resulting optimization problem. We also propose different methods to reduce the complexity resulting from the large number of parameters.
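As a toy illustration of the "when to measure" question, the sketch below compares two candidate measurement schedules by an A-optimality criterion, trace(F^-1), computed from a finite-difference Fisher information matrix. The three-parameter model is a hypothetical stand-in; in the UDE setting the parameter vector would contain the network weights, which is exactly where the large-scale difficulties and the proposed complexity reductions come in.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical model dy/dt = f(y, theta); in a UDE, theta would hold the
# network weights, here just three scalars to keep the sketch small.
def simulate(theta, t_meas):
    rhs = lambda t, y: [-theta[0] * y[0] + theta[1],
                        theta[0] * y[0] - theta[2] * y[1]]
    sol = solve_ivp(rhs, (0.0, t_meas[-1]), [1.0, 0.0], t_eval=t_meas)
    return sol.y.ravel()               # stacked observations at t_meas

def fisher_information(theta, t_meas, sigma=0.05, h=1e-6):
    """FIM via finite-difference sensitivities S = d(observations)/d(theta)."""
    base = simulate(theta, t_meas)
    S = np.empty((base.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy(); tp[j] += h
        S[:, j] = (simulate(tp, t_meas) - base) / h
    return S.T @ S / sigma**2

theta = np.array([1.0, 0.3, 0.5])
for t_meas in (np.array([0.5, 1.0, 1.5]), np.array([0.5, 2.0, 4.0])):
    F = fisher_information(theta, t_meas)
    print(t_meas, "A-criterion trace(F^-1) =", np.trace(np.linalg.inv(F)))
```

The schedule with the smaller A-criterion promises more informative data; the OED optimal control problem optimizes over such schedules (and stimuli) continuously rather than comparing a fixed shortlist.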
4 — 10:00 — Lazified Generalized Conditional Gradient
The primal-dual active point strategy (PDAP) is known to achieve asymptotically linear convergence rates on the task of minimizing the sum of a smooth, convex loss function and a non-smooth convex regularizer over a Banach space, possibly with PDE constraints. This approach outperforms other non-accelerated conditional gradient methods at the cost of requiring solutions to two finite-dimensional optimization problems in each iteration. One of these problems is smooth and possibly non-convex, while the other is convex and non-smoothly regularized. In the present work, we investigate the properties of a lazified approach, where the subproblems are not required to be solved exactly. We demonstrate that in this setting there exist practical error tolerances on the subproblems such that asymptotic linear convergence is preserved.
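The lazification idea itself is easiest to see in the classical finite-dimensional conditional gradient setting. The sketch below is not PDAP (no non-smooth regularizer, no Banach-space machinery); it is a lazy Frank-Wolfe variant in the spirit of Braun, Pokutta, and Zink, where cached atoms are accepted whenever they already meet an accuracy threshold and the exact linear minimization oracle is called only on a cache miss.

```python
import numpy as np

def lmo_l1(grad, tau=1.0):
    """Exact linear minimization oracle over the l1-ball of radius tau."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -tau * np.sign(grad[i])
    return v

def lazy_cg(grad_f, x0, L, iters=300, K=2.0):
    """Lazified conditional gradient: a cached atom is accepted as soon as it
    meets the current threshold phi/K; the exact (expensive) oracle is called
    only on a cache miss, and phi is halved when even the exact oracle cannot
    certify enough progress."""
    x, cache = x0.copy(), []
    g = grad_f(x)
    phi = g @ (x - lmo_l1(g)) / 2.0            # initial dual-gap estimate
    for _ in range(iters):
        g = grad_f(x)
        v = next((a for a in cache if g @ (x - a) >= phi / K), None)
        if v is None:                           # cache miss: ask the exact LMO
            v = lmo_l1(g)
            if g @ (x - v) < phi / K:           # no atom is good enough:
                phi /= 2.0                      # tighten the threshold, stay put
                continue
            cache.append(v)
        gamma = min(1.0, g @ (x - v) / (L * np.sum((x - v) ** 2)))
        x = x + gamma * (v - x)                 # short-step update toward v
    return x

# Hypothetical test problem: least squares over the l1-ball.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[0] = 0.8
b = A @ x_true
L = np.linalg.norm(A.T @ A, 2)                  # gradient Lipschitz constant
grad_f = lambda x: A.T @ (A @ x - b)
x = lazy_cg(grad_f, np.zeros(10), L)
print("dual gap:", grad_f(x) @ (x - lmo_l1(grad_f(x))))
```

The talk's contribution concerns the harder setting where the two PDAP subproblems are solved inexactly; the sketch only conveys the shared principle that controlled inexactness in the oracle can preserve the convergence rate.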