1. 14:00 — Dynamic optimal combination of forecasting models

In many real-world forecasting problems, several forecasting models are available, and their relative performance may depend on the current state of the system under study. Prediction accuracy can often be improved by combining these models in a way that reflects the system conditions; the optimal model combination must therefore be updated dynamically according to the available data and the forecasting target.

We review various approaches for reducing prediction error, as measured by a given performance indicator such as the root mean-squared error, and formulate a general mathematical programming framework that dynamically finds the optimal combination of forecasting models with respect to the chosen indicator. We then analyze the mathematical properties of the resulting optimization problem and the techniques required to solve it. Since the data distribution is expected to evolve over time, we further discuss how to efficiently construct and update the training and validation samples.
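
As a rough illustration of what such a combination can look like in practice, the sketch below computes convex combination weights for several models by minimizing RMSE on a recent validation window. This is only an assumed, simplified instance of the general framework described in the talk; the names `optimal_weights`, `forecasts`, and `y_true` are hypothetical.

```python
# Minimal sketch (illustrative, not the talk's framework): convex combination
# weights for K forecasting models chosen to minimize RMSE on a window of data.
import numpy as np
from scipy.optimize import minimize

def optimal_weights(forecasts: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    """forecasts: (T, K) matrix of model predictions; y_true: (T,) observations."""
    T, K = forecasts.shape

    def rmse(w):
        return np.sqrt(np.mean((forecasts @ w - y_true) ** 2))

    # Weights constrained to the probability simplex (nonnegative, sum to one).
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, 1.0)] * K
    w0 = np.full(K, 1.0 / K)
    res = minimize(rmse, w0, bounds=bounds, constraints=constraints, method="SLSQP")
    return res.x

# Dynamic use: re-solve on a sliding window as new observations arrive,
# so the combination tracks the current state of the system.
```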

We illustrate the proposed methodology using short-term electricity demand forecasting, relying on real data and machine-learning models provided by Hydro-Québec.

2. 14:30 — On the global convergence of a decomposition algorithm for nonconvex two-stage problems

We discuss the global convergence of a decomposition algorithm for nonlinear, nonconvex, continuous two-stage problems. To render the second-stage response function smooth, a smoothing technique based on a barrier function is used, so that standard second-order nonlinear optimization methods can be applied to both the first-stage and the second-stage problems. The challenge in this context is the existence of local minima and other stationary points in the subproblem besides a global minimum. We introduce a new definition of a local minimum and show how global convergence to such points can be proven.
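
To fix ideas, the sketch below shows log-barrier smoothing of a toy second-stage value function, so that it becomes differentiable in the first-stage variable. The functions `q` and `g` and the helper `smoothed_recourse` are hypothetical placeholders and do not reproduce the talk's algorithm.

```python
# Minimal sketch (illustrative): smooth the second-stage value function
#   Q(x) = min_y { q(x, y) : g(x, y) <= 0 }
# with a log-barrier term, giving Q_mu(x) = min_y  q(x, y) - mu * log(-g(x, y)).
import numpy as np
from scipy.optimize import minimize

def q(x, y):          # toy second-stage objective
    return (y - x) ** 2 + 0.1 * y ** 4

def g(x, y):          # toy inequality constraint, feasible when g(x, y) <= 0
    return y - 1.0 - 0.5 * x

def smoothed_recourse(x, mu=1e-2, y0=0.0):
    def barrier_obj(y):
        gy = g(x, y[0])
        if gy >= 0:                      # outside the barrier's domain
            return np.inf
        return q(x, y[0]) - mu * np.log(-gy)
    res = minimize(barrier_obj, x0=np.array([y0]), method="Nelder-Mead")
    return res.fun

# First stage: minimize f(x) + Q_mu(x) with a standard NLP solver while
# driving mu toward zero; local minima of the nonconvex subproblem remain
# the central difficulty the talk addresses.
```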

3. 15:00 — Modified Line Search Sequential Quadratic Methods for Equality-Constrained Optimization with Unified Global and Local Convergence Guarantees

In this paper, we propose a method grounded in the line search sequential quadratic programming paradigm for solving general nonlinear equality-constrained optimization problems. The method employs a carefully designed modified line search strategy that uses second-order information of both the objective and constraint functions, as required, to mitigate the Maratos effect. In contrast to classical line search sequential quadratic programming methods, our proposed method is endowed with both global convergence and local superlinear convergence guarantees. Moreover, we extend the method and analysis to the setting in which the constraint functions are deterministic but the objective function is stochastic or can be represented as a finite sum. We also design and implement a practical, inexact, matrix-free variant of the method. Finally, numerical results illustrate the efficiency and efficacy of the method.
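
For context, the sketch below shows one iteration of a classical line-search SQP method with an l1 merit function, the textbook baseline from which the proposed modified line search departs; it does not include the second-order correction described in the talk. The callables `grad_f`, `hess_L`, `c`, and `jac_c` are assumed placeholders.

```python
# Minimal sketch (illustrative baseline, not the proposed method):
# one line-search SQP iteration for  min f(x)  s.t.  c(x) = 0.
import numpy as np

def sqp_step(x, lam, f, grad_f, hess_L, c, jac_c, rho=10.0, tau=0.5, eta=1e-4):
    g, H = grad_f(x), hess_L(x, lam)          # gradient and Lagrangian Hessian
    cx, A = c(x), jac_c(x)
    n, m = x.size, cx.size

    # Solve the KKT system for the primal step d and new multiplier estimate.
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, cx]))
    d, lam_new = sol[:n], sol[n:]

    # Backtracking line search on the l1 merit function
    #   phi(x) = f(x) + rho * ||c(x)||_1.
    merit = lambda z: f(z) + rho * np.linalg.norm(c(z), 1)
    dphi = g @ d - rho * np.linalg.norm(cx, 1)   # directional derivative bound
    alpha = 1.0
    for _ in range(30):                          # safeguarded backtracking
        if merit(x + alpha * d) <= merit(x) + eta * alpha * dphi:
            break
        alpha *= tau
    return x + alpha * d, lam_new

# Near a solution this merit function can reject the unit step (the Maratos
# effect), which is what the modified line search in the talk is designed to avoid.
```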

4. 15:30 — Comparing stochastic oracles arising in robust gradient estimation

We will discuss several robust gradient estimators, such as the median of means, and the properties of the stochastic oracles based on these estimators. We will demonstrate how these properties fit into the convergence analysis of stochastic optimization methods that rely on such oracles, and we will compare their performance with that of the standard minibatch stochastic gradient.
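
A minimal sketch of one such robust oracle is given below: a median-of-means gradient estimate computed coordinate-wise from per-sample gradients. The function name and the choice of the number of groups are illustrative assumptions, not the talk's exact construction.

```python
# Minimal sketch (illustrative): median-of-means gradient oracle.
# Split a minibatch of per-sample gradients into k groups, average within
# each group, then take the coordinate-wise median of the group means.
import numpy as np

def median_of_means_gradient(per_sample_grads: np.ndarray, k: int = 5) -> np.ndarray:
    """per_sample_grads: (n, d) array of per-sample gradients."""
    n = per_sample_grads.shape[0]
    groups = np.array_split(np.random.permutation(n), k)
    group_means = np.stack([per_sample_grads[idx].mean(axis=0) for idx in groups])
    return np.median(group_means, axis=0)   # robust to heavy-tailed noise / outliers

# Plugged into SGD in place of the plain minibatch mean, such an oracle trades
# a small bias for much better tail behavior of the gradient estimate.
```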