1 — 08:30 — Robust Portfolio Optimization Meets Arbitrage Pricing Theory

Robust portfolio optimization models are crucial for mitigating the impact of significant forecasting errors in expected asset returns. However, existing approaches often overlook a fundamental characteristic of financial markets: the absence of arbitrage opportunities. This paper presents a novel portfolio optimization model that integrates the classical mean-variance approach, the Fama–French factor model, and the Arbitrage Pricing Theory within a robust optimization framework. Our model uses return statistics to shape the boundaries of the uncertainty set, and refines its representation by explicitly incorporating the no-arbitrage condition. As a result, we obtain a non-convex uncertainty set and face a trilevel optimization problem. To address these challenges, we propose a cutting-plane algorithm. Numerical experiments across different datasets and transaction cost levels demonstrate that the proposed model consistently outperforms benchmark models in cumulative returns and risk-adjusted metrics.
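A minimal sketch of the cutting-plane idea is below, assuming a simplified ellipsoidal uncertainty set around the estimated mean returns; the paper's actual set additionally encodes the no-arbitrage condition and is non-convex, and the data here is illustrative. The master problem is a max-min linear program over the cuts collected so far, and the separation step finds the worst-case mean return vector for the current weights, which for an ellipsoid has a closed form.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (hypothetical): 5 assets, estimated means and covariance.
rng = np.random.default_rng(0)
n = 5
mu_hat = rng.uniform(0.02, 0.10, size=n)
B = rng.normal(size=(n, n))
Sigma = B @ B.T / n + 0.01 * np.eye(n)   # positive definite covariance
kappa = 0.5                              # radius of the uncertainty set

cuts = [mu_hat.copy()]                   # worst-case mean vectors found so far
for _ in range(100):
    # Master LP: max t  s.t.  mu_k . w >= t for all cuts, w on the simplex.
    m = len(cuts)
    c = np.zeros(n + 1); c[-1] = -1.0                     # minimize -t
    A_ub = np.hstack([-np.array(cuts), np.ones((m, 1))])  # t - mu_k.w <= 0
    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    w, t = res.x[:n], res.x[-1]

    # Separation: over {mu : (mu - mu_hat)' Sigma^-1 (mu - mu_hat) <= kappa^2},
    # the minimizer of mu.w is available in closed form.
    risk = np.sqrt(w @ Sigma @ w)
    mu_worst = mu_hat - kappa * (Sigma @ w) / risk
    if mu_worst @ w >= t - 1e-9:         # no violated cut: robust optimum found
        break
    cuts.append(mu_worst)

print("robust weights:", np.round(w, 3), " worst-case return:", round(t, 4))
```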

2 — 09:00 — Data-Driven Robust Sequential Search

Models of sequential search arise in a variety of problems, including organizations hiring new employees, firms developing novel technologies, and individuals seeking housing or other investment opportunities. The classical papers in the sequential search literature rely on the assumption that the decision maker (DM) fully knows, or has some prior belief about, the underlying distribution of alternative values. In many applications, however, the DM explores alternatives with limited (if any) prior information on this distribution. As motivating examples, consider a recent graduate searching for a first job with no knowledge of the market wage distribution, or a seller searching for the best price among several buyers without access to the distribution of buyers' valuations or any historical data for guidance. In such settings, it is natural to assume that the alternative value distribution is not known in advance. Since the decision rules prescribed in the sequential search literature no longer apply, new decision rules are needed to facilitate search under limited distributional information. Motivated by these considerations, we study a generalization of the classical Pandora's problem in which the alternative value distribution is unknown prior to the search. In each period, the DM can either explore an alternative at a cost to continue the search, or select and acquire any previously explored alternative to end the search. The goal is to find a stopping rule that maximizes the worst-case ratio of the expected reward relative to that of an oracle with full knowledge of the alternative value distribution. We consider several variations of this problem. For each variation, we design simple policies that perform competitively with the oracle policy, and we develop upper bounds on the relative performance of any feasible policy with respect to the oracle benchmark. These upper bounds show that our policies are nearly optimal.
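As a toy illustration of the gap between knowing and not knowing the value distribution, the sketch below simulates search with recall over i.i.d. Uniform(0,1) values with inspection cost c. The oracle uses the classical reservation-value rule (stop once the best value seen exceeds z*, where E[(X - z*)^+] = c), while the distribution-free searcher uses a simple secretary-style learn-then-commit rule. Both the rule and the parameters are illustrative stand-ins, not the policies from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
c, N, trials = 0.02, 20, 20000
z_star = 1.0 - np.sqrt(2 * c)   # solves E[(X - z)^+] = (1 - z)^2 / 2 = c for U(0,1)

def oracle(vals):
    # Knows the distribution: stop once best-so-far reaches the reservation value.
    best = 0.0
    for i, v in enumerate(vals, start=1):
        best = max(best, v)
        if best >= z_star:
            return best - c * i
    return best - c * len(vals)

def learn_then_commit(vals, frac=0.3):
    # Distribution-free (illustrative): observe a learning phase, then take the
    # first value beating it; with recall, fall back to the best explored value.
    k = max(1, int(frac * len(vals)))
    benchmark = max(vals[:k])
    for i, v in enumerate(vals[k:], start=k + 1):
        if v >= benchmark:
            return v - c * i
    return max(vals) - c * len(vals)

samples = rng.random((trials, N))
print("oracle avg reward:  ", np.mean([oracle(s) for s in samples]).round(4))
print("adaptive avg reward:", np.mean([learn_then_commit(s) for s in samples]).round(4))
```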

3 — 09:30 — Missing Data in Data-Driven Optimization

Missing data is a common issue in practical data-driven stochastic programming. State-of-the-art approaches first estimate the missing values and then separately solve the resulting stochastic program. Accurate estimation of missing values is typically out of reach, however, as it requires large amounts of data and sophisticated statistical methods. This talk therefore proposes an integrated approach: a distributionally robust optimization (DRO) framework that simultaneously handles the missing data and the data-driven stochastic optimization by hedging against the uncertainty in the missing values. We construct several classes of ambiguity sets for our DRO model using incomplete data sets, the maximum likelihood estimation method, and different metrics. We prove statistical consistency and finite-sample guarantees for the corresponding models and provide tractable reformulations for different scenarios. We perform computational studies on a multi-item inventory control problem and on portfolio optimization using synthetic and real-world data, validating that our method outperforms traditional estimate-then-optimize approaches.
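A deliberately small sketch of the hedging idea on a newsvendor toy problem. Instead of the ambiguity sets from the talk, missing demand entries are replaced here by a finite family of candidate completions (observed-data quantiles, an assumption made purely for illustration), and the order quantity is chosen to minimize the worst-case empirical cost over that family.

```python
import numpy as np

rng = np.random.default_rng(2)
demand = rng.normal(100, 20, size=30)
mask = rng.random(30) < 0.2              # ~20% of entries missing
obs = demand.copy(); obs[mask] = np.nan

# Candidate completions: fill every missing entry with an observed-data
# quantile (a crude illustrative stand-in for the talk's ambiguity sets).
completions = []
for q in np.quantile(obs[~np.isnan(obs)], [0.1, 0.5, 0.9]):
    d = obs.copy(); d[np.isnan(d)] = q
    completions.append(d)

price, cost = 5.0, 3.0
def avg_cost(order, d):                  # negative empirical profit
    return cost * order - price * np.minimum(order, d).mean()

# Hedge: pick the order minimizing the worst case over all completions.
orders = np.linspace(50, 150, 201)
worst = [max(avg_cost(o, d) for d in completions) for o in orders]
print("robust order quantity:", orders[int(np.argmin(worst))])
```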

4 — 10:00 — Pivotal Sampling for Online Bayesian Matching

We study the polynomial-time approximability of the optimal online stochastic bipartite matching algorithm, a question initiated by Papadimitriou et al. (EC'21). Here, nodes on one side of the graph are given upfront, while at each timestep an online node and its edge weights are drawn from a time-dependent distribution. The optimal algorithm is PSPACE-hard to approximate within some universal constant.

We provide a state-of-the-art 0.678-approximate algorithm. Building on our algorithms and the recent black-box reduction of Banihashem et al. (SODA'24), we obtain polynomial-time (pricing-based) truthful mechanisms that 0.678-approximate the social welfare of the optimal online allocation for bipartite matching markets. Our online allocation algorithm relies on the classic pivotal sampling algorithm (Srinivasan FOCS'01, Gandhi et al. J.ACM'06), along with careful discarding to obtain strong negative correlations between offline nodes. Our analysis requires new tail expectation bounds for weighted sums of negatively correlated Bernoulli variables.
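For reference, a minimal sketch of the pivotal sampling (dependent rounding) primitive the abstract refers to: it rounds a vector of marginals p to a 0/1 vector while preserving each marginal E[x_i] = p_i, keeping the total sum (essentially) fixed, and inducing negative correlation between coordinates. This is the generic primitive from Srinivasan (FOCS'01), not the paper's full allocation algorithm with its discarding step.

```python
import numpy as np

def pivotal_sampling(p, rng, eps=1e-12):
    """Round marginals p in [0,1]^n to {0,1}^n with E[x_i] = p_i,
    the sum preserved up to one coordinate, and negative correlation."""
    x = np.asarray(p, dtype=float).copy()
    frac = [i for i in range(len(x)) if eps < x[i] < 1 - eps]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        a = min(1 - x[i], x[j])          # move pushing x[i] up, x[j] down
        b = min(x[i], 1 - x[j])          # move pushing x[i] down, x[j] up
        if rng.random() < b / (a + b):   # probabilities chosen so E[x] is unchanged
            x[i] += a; x[j] -= a
        else:
            x[i] -= b; x[j] += b
        frac = [k for k in frac if eps < x[k] < 1 - eps]
    for k in frac:                       # at most one fractional entry remains
        x[k] = 1.0 if rng.random() < x[k] else 0.0
    return np.rint(x).astype(int)

rng = np.random.default_rng(3)
p = np.array([0.3, 0.7, 0.5, 0.5])
samples = np.array([pivotal_sampling(p, rng) for _ in range(50000)])
print("empirical marginals:", samples.mean(axis=0))          # should be close to p
print("sum range:", samples.sum(axis=1).min(), "to", samples.sum(axis=1).max())
```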