1. 08:30 — Differentially Private Distributed Optimization with Constraints

In distributed optimization (DO), multiple agents collaborate to minimize a global objective function by iteratively exchanging intermediate local solutions. Although locally stored data are never transmitted directly, they can be reconstructed from these exchanges. To address this issue, differential privacy (DP) has been integrated into DO, ensuring the privacy of each individual's data within the dataset. In a differentially private distributed optimization (DPDO) framework, noisy local solutions are repeatedly communicated to safeguard data privacy, which may degrade solution quality due to the trade-off between privacy and accuracy. In this talk, we first demonstrate that the existing DP mechanism exacerbates this trade-off when constraints are present. We then introduce a DP mechanism that significantly improves the trade-off. We present the relationship between the proposed mechanism and the existing one, together with our privacy and convergence analyses. Finally, we showcase the performance of our approach numerically, illustrating its efficacy in applications such as federated learning and distributed optimal power flow.
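As a rough illustration of the setting, the sketch below shows how output perturbation is commonly layered onto a constrained, consensus-based distributed gradient step: each agent adds Gaussian noise to the iterate it shares before mixing, then projects its update onto its constraint set. The toy data, the box constraints, the mixing matrix `W`, the step size, and the noise scale `sigma` are assumptions for illustration only; this is not the mechanism proposed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 agents, each with a local quadratic objective f_i(x) = 0.5*||A_i x - b_i||^2.
# All data, the noise scale, and the step size are illustrative assumptions.
n_agents, dim = 3, 5
A = [rng.standard_normal((8, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(8) for _ in range(n_agents)]
W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic mixing matrix

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto a simple box constraint set (stand-in for the problem's constraints)."""
    return np.clip(x, lo, hi)

x = np.zeros((n_agents, dim))        # local iterates
sigma, step = 0.1, 0.05              # Gaussian noise scale and step size (assumed)

for t in range(200):
    # Each agent perturbs the iterate it shares, so neighbours only ever see noisy solutions.
    shared = x + sigma * rng.standard_normal(x.shape)
    mixed = W @ shared                                  # consensus averaging of noisy iterates
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    x = np.stack([project_box(mixed[i] - step * grads[i]) for i in range(n_agents)])

print("consensus estimate:", x.mean(axis=0))
```

The projection step only marks where injected noise and the constraint set interact, which is the interaction the talk analyses; the improved mechanism itself is not reproduced here.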

2. 09:00 — Uniform Convergence of Kernel Estimators

Estimating a conditional expectation is a central task in a variety of optimization topics, including stochastic optimization and optimal control. The kernel estimator associated with kernel ridge regression is a suitable tool for this purpose, converging to the target function with respect to the $L^2$-norm. However, to relate the optimizer of the estimated objective to the optimizer of the true objective, uniform convergence of the estimator is crucial.
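As a concrete baseline, the following sketch fits a kernel ridge regression estimator to noisy samples of a toy conditional expectation and reports a grid-based proxy for the uniform error. The data-generating function, the Gaussian kernel, and the regularization parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)

# Toy regression data: y = E[Y | X = x] + noise, with E[Y | X = x] = sin(3x).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)

# Kernel ridge regression estimator of the conditional expectation;
# the Gaussian (RBF) kernel and the regularization parameter alpha are assumptions.
krr = KernelRidge(kernel="rbf", gamma=5.0, alpha=1e-2).fit(X, y)

grid = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
sup_error = np.max(np.abs(krr.predict(grid) - np.sin(3.0 * grid[:, 0])))
print(f"empirical sup-norm error on the grid: {sup_error:.3f}")
```

A maximum over a finite grid is only a proxy for the $\|\cdot\|_\infty$ error; the talk concerns genuine uniform-convergence guarantees rather than such empirical checks.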

In this talk, we relate uniform convergence of kernel estimators to convergence in $L^2$ and to smoothness conditions on the corresponding target function, stated in terms of the decay rate of the associated basis coefficients. We provide an interpolation bound for the uniform norm, along with applications.
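To fix ideas, one common way such conditions are stated (the precise assumptions, constants, and the bound presented in the talk may differ) is the following: if the target function expands as $f = \sum_j c_j e_j$ in a basis with $\sup_j \|e_j\|_\infty \le B$, then
\[
\|f\|_\infty \;\le\; B \sum_j |c_j|,
\]
so sufficiently fast decay of the coefficients $c_j$ upgrades $L^2$ control to uniform control. Interpolation-type bounds of the form
\[
\|f\|_\infty \;\le\; C\, \|f\|_{L^2}^{1-\theta}\, \|f\|_{\mathcal{H}}^{\theta}, \qquad \theta \in (0,1),
\]
with $\mathcal{H}$ the reproducing kernel Hilbert space, make this trade-off quantitative.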

3. 09:30 — An Interval-Based Stochastic Unit Commitment Model Under Demand Uncertainty

The unit commitment problem seeks a minimum-cost on/off status and generation amount for each generator while satisfying electricity demand and operational requirements. In this talk, we address the stochastic unit commitment problem in which the uncertain demand follows a known probability distribution and the prediction error is independent across periods. Two-stage stochastic optimization models with a finite number of demand scenarios are widely used in this setting: the on/off status is decided in the first stage and the generation amount in the second stage. However, these models often suffer from an excessive computational burden as the number of scenarios grows, and operating according to their solutions may be ineffective when the demand is realized sequentially. To overcome these drawbacks, we propose an interval-based stochastic unit commitment model, in which a range of generation amounts, called an “interval”, is determined in advance for reliable operation. Furthermore, the proposed model avoids the use of a large number of scenarios by employing upper and lower bounds on the expected second-stage cost. We show the effectiveness and efficiency of the proposed model through computational experiments in the context of actual operation.
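For contrast with the interval-based formulation, the sketch below writes down the scenario-based two-stage baseline described above, reduced to a single period with two generators and three equiprobable demand scenarios, using PuLP. All data and the single-period simplification are illustrative assumptions.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

# Toy data (assumed): two generators, one period, three equiprobable demand scenarios.
gens = ["g1", "g2"]
pmin = {"g1": 20, "g2": 10}
pmax = {"g1": 100, "g2": 60}
commit_cost = {"g1": 50, "g2": 30}       # fixed cost of switching a unit on
fuel_cost = {"g1": 2.0, "g2": 3.5}       # cost per unit of generation
scenarios = {"low": 60, "mid": 90, "high": 130}
prob_s = {s: 1.0 / len(scenarios) for s in scenarios}

model = LpProblem("two_stage_uc", LpMinimize)

# First stage: on/off status, identical across scenarios.
u = {g: LpVariable(f"u_{g}", cat="Binary") for g in gens}
# Second stage: one generation variable per generator and scenario.
p = {(g, s): LpVariable(f"p_{g}_{s}", lowBound=0) for g in gens for s in scenarios}

# Objective: commitment cost plus expected generation cost.
model += lpSum(commit_cost[g] * u[g] for g in gens) + lpSum(
    prob_s[s] * fuel_cost[g] * p[g, s] for g in gens for s in scenarios
)

for s, demand in scenarios.items():
    model += lpSum(p[g, s] for g in gens) >= demand          # meet demand in every scenario
    for g in gens:
        model += p[g, s] <= pmax[g] * u[g]                   # generate only if committed
        model += p[g, s] >= pmin[g] * u[g]                   # minimum stable output

model.solve(PULP_CBC_CMD(msg=False))
print({g: int(u[g].value()) for g in gens})
print({(g, s): p[g, s].value() for g in gens for s in scenarios})
```

Note that the second stage carries one generation variable per generator and scenario, which is exactly what grows with the scenario count; the interval-based model proposed in the talk instead fixes a generation range in advance and works with bounds on the expected second-stage cost.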

4. 10:00 — Integrated energy system planning under short-term and long-term uncertainty: Modelling and algorithms

We propose the REORIENT (REnewable resOuRce Investment for the ENergy Transition) model for energy system planning, with two novelties: (1) integrated capacity expansion, retrofit, and abandonment planning, and (2) multi-horizon stochastic mixed-integer linear programming with both short-term and long-term uncertainty. We apply the model to the European energy system, considering: (a) investment in new hydrogen infrastructure, (b) capacity expansion of the European power system, (c) retrofitting oil and gas infrastructure in the North Sea region for hydrogen production and distribution, as well as abandoning existing infrastructure, and (d) long-term and short-term uncertainty. We utilise the structure of multi-horizon stochastic programming and propose enhanced Benders decomposition methods to solve the model efficiently: (1) stabilising Adaptive Benders with the level set method and adaptively selecting the subproblems to solve in each iteration for more accurate information, (2) a centred-point stabilisation approach for when the level-method problem is hard to solve, and (3) dynamic level set management that adjusts the level in each iteration to improve the robustness of the algorithm.
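To make the stabilisation idea concrete, the sketch below shows a generic level-method step for a Benders-type master problem: given the current cuts and bounds, it projects the stability centre onto the level set. The cut data, the box feasible set, the fixed level parameter `lam`, and the cvxpy formulation are assumptions for illustration; the adaptive subproblem selection, centred-point stabilisation, and dynamic level set management described above are not reproduced here.

```python
import numpy as np
import cvxpy as cp

def level_step(cuts, c_first, x_center, lower_bound, upper_bound, lam=0.5,
               box=(0.0, 10.0)):
    """One level-method step for a Benders-type master problem.

    `cuts` is a list of (g, h) pairs encoding cuts theta >= g @ x + h on the
    subproblem value function, and c_first is the first-stage cost vector.
    The box feasible set, the fixed lam, and the quadratic stabiliser are
    illustrative assumptions, not the enhanced algorithm from the talk."""
    n = x_center.size
    x = cp.Variable(n)
    theta = cp.Variable()
    level = lower_bound + lam * (upper_bound - lower_bound)

    constraints = [x >= box[0], x <= box[1]]
    constraints += [theta >= g @ x + h for (g, h) in cuts]
    constraints += [c_first @ x + theta <= level]     # stay inside the level set

    # Stabilisation: move as little as possible from the current stability centre.
    problem = cp.Problem(cp.Minimize(cp.sum_squares(x - x_center)), constraints)
    problem.solve()
    return x.value, problem.status

# Tiny usage example with two hand-made cuts (assumed data).
cuts = [(np.array([1.0, 0.5]), -2.0), (np.array([-0.5, 1.0]), 1.0)]
x_new, status = level_step(cuts, c_first=np.array([0.5, 0.5]),
                           x_center=np.array([2.0, 2.0]),
                           lower_bound=0.0, upper_bound=6.0)
print(status, x_new)
```

In the actual algorithm, the level is adjusted dynamically and the subproblems solved per iteration are selected adaptively; the sketch shows only the stabilised projection step.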

We first conduct a sensitivity analysis on the retrofitting costs of oil and gas infrastructure. We then compare the REORIENT model with a conventional investment planning model in terms of costs and investment decisions. Finally, four algorithms are implemented to solve LP instances with up to 1 billion variables and 4.5 billion constraints, and two algorithms are implemented for highly degenerate MILP instances. The results show that: (1) when the retrofitting cost is below 20\% of the cost of building new infrastructure, retrofitting is economical for most of the existing pipelines, (2) compared with a traditional investment planning model, the REORIENT model yields a 24\% lower investment cost in the North Sea region, and (3) for a 1.00\% convergence tolerance, the enhanced Benders is up to 6.8 times faster than the reference algorithm on MILP instances, and up to 113.7 times faster than standard Benders and 2.14 times faster than unstabilised Adaptive Benders on LP instances. For a 0.10\% convergence tolerance, the enhanced Benders is up to 45.5 times faster than standard Benders on LP instances, while unstabilised Adaptive Benders cannot solve the largest instance to tolerance due to severe oscillation. Finally, dynamic level set management makes the algorithms more robust and is particularly helpful for solving large problems.