1 — 08:30 — Steepest-edge simplex algorithms for quadratic programming
In the field of optimization, one of the algorithms used to solve linear programming (LP) problems is the simplex method, proposed by G. B. Dantzig in 1947. The simplex method was extended to quadratic programming (QP) problems in the 1950s and 1960s, but the QP variant has never been as popular as the LP simplex method. The QP simplex method has the advantage that the linear-algebra tools used in the LP simplex method (such as LU factorization and its updates) can be reused without modification, allowing for an efficient implementation that fully exploits sparsity. In this talk, we give an overview of the QP simplex method and discuss an extension of the steepest-edge pricing rule to the QP setting.
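As background for the pricing rule mentioned above, the classical steepest-edge rule for the LP simplex method can be sketched as follows; the notation (basis set $\mathcal{B}$, reduced costs $\bar{c}_j$, edge weights $\gamma_j$) is the standard one from the literature, not taken from the abstract, and the talk's QP extension may differ in detail:

```latex
% Steepest-edge pricing for the LP simplex method (standard form; assumed notation):
% B is the current basis matrix, a_j the j-th column of A, and \bar{c}_j the reduced cost.
% The entering index maximizes the objective decrease per unit length of the edge direction:
j^{*} \in \arg\max_{j \notin \mathcal{B}} \; \frac{\bar{c}_j^{\,2}}{\gamma_j},
\qquad
\gamma_j \;=\; \|\eta_j\|^2 \;=\; 1 + \|B^{-1} a_j\|^2,
```

where $\eta_j$ is the edge direction associated with nonbasic variable $j$. In practice the weights $\gamma_j$ are updated recursively after each pivot rather than recomputed, which is what makes the rule competitive with cheaper pricing schemes.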
2 — 09:00 — ** CANCELLED ** Enhancing Data-Driven Distributionally Robust Optimization: Jointly Incorporating First-Order Moment and Wasserstein Distance
Distribution distance and moment information are two widely used ways of characterizing a distribution. In this research, we study distributionally robust optimization problems whose ambiguity sets incorporate both the Wasserstein distance and first-order moment information. These ambiguity sets exploit the sample data more effectively, thereby mitigating conservativeness. To harness the finite-sum and asymmetric structure inherent in such problems, we propose a novel variance-reduced asymmetric primal-dual algorithm for efficiently solving the associated dual problem. Additionally, for a specific class of problems, we exploit the regularization structure to obtain stronger theoretical results. Through extensive numerical experiments on three problem domains (machine learning, portfolio selection, and the newsvendor problem), we compare our proposed model against an ambiguity set based solely on the Wasserstein distance. The results show a substantial improvement in out-of-sample performance from incorporating first-order information.
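To make the construction concrete, an ambiguity set of the kind described above can be written in the following illustrative form; the symbols ($\widehat{P}_N$, $\widehat{\mu}_N$, radii $\theta$, $\delta$) are assumed notation and the paper's exact set may differ:

```latex
% Illustrative ambiguity set combining a Wasserstein ball with a
% first-order moment constraint (assumed notation, not from the abstract):
\mathcal{P} \;=\; \Big\{\, Q \in \mathcal{M}(\Xi) \;:\;
W_1\big(Q, \widehat{P}_N\big) \le \theta, \;\;
\big\| \mathbb{E}_{Q}[\xi] - \widehat{\mu}_N \big\| \le \delta \,\Big\},
```

where $\widehat{P}_N$ is the empirical distribution of $N$ samples, $\widehat{\mu}_N$ the sample mean, and $\theta, \delta \ge 0$ are radii calibrated from the data. The resulting distributionally robust problem is then $\min_x \max_{Q \in \mathcal{P}} \mathbb{E}_{Q}[\ell(x,\xi)]$; setting $\delta = \infty$ recovers the pure Wasserstein ambiguity set used as the baseline in the comparison.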