Online Allocation and Pricing: Constant Regret via Bellman Inequalities

9/17/20 | 4:15pm | Online only

Itai Gurvich


Abstract: In this talk, I will discuss a family of dynamic resource allocation problems that includes, as specific instances, network revenue management and dynamic pricing. I will show that the value of information, or regret (the expected performance gap between the best sequential algorithm and its offline counterpart, a "prophet" equipped with more information), is bounded by a constant, irrespective of the horizon length or the initial resource budget. This constant regret is achieved by appealingly tractable algorithms that re-solve a linear program at every period and carefully translate the LP solution into dynamic actions. Since the offline value is an upper bound on that of the optimal online policy, these tractable policies have a constant optimality gap. Our results follow from a framework that, starting from the Bellman equations of the offline decision maker, derives the online algorithm's LP-based actions as suitable "guesses," based on the online algorithm's smaller information set, of the offline optimal actions.

Based on joint work with Alberto Vera and Sid Banerjee.
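To make the "re-solve an LP at every period" idea concrete, here is a minimal sketch on a toy single-resource revenue-management instance (all names, parameters, and the acceptance rule are illustrative assumptions, not the policy from the talk). With one resource, the deterministic LP relaxation is a fractional knapsack and can be solved greedily; the policy re-solves it each period with the current remaining budget and horizon, and accepts an arriving request only if the LP allocates positive mass to its class.

```python
import random

def solve_lp(b, expected_arrivals, rewards):
    """Deterministic LP relaxation for a single resource:
    maximize sum_j r_j x_j  s.t.  sum_j x_j <= b,  0 <= x_j <= expected_arrivals[j].
    With one resource this is a fractional knapsack, solved greedily by reward."""
    x = [0.0] * len(rewards)
    cap = float(b)
    for j in sorted(range(len(rewards)), key=lambda j: -rewards[j]):
        x[j] = min(expected_arrivals[j], cap)
        cap -= x[j]
    return x

def resolving_policy(T, b, arrival_probs, rewards, rng):
    """Illustrative resolving heuristic: at each period, re-solve the LP
    with the remaining budget and horizon, and accept the arriving
    request of class j iff the LP gives class j a positive allocation."""
    revenue = 0.0
    for t in range(T):
        if b == 0:
            break
        # Sample at most one arrival this period.
        u, j, acc = rng.random(), None, 0.0
        for k, p in enumerate(arrival_probs):
            acc += p
            if u < acc:
                j = k
                break
        if j is None:
            continue  # no arrival this period
        remaining = T - t  # periods left, including the current one
        expected = [p * remaining for p in arrival_probs]
        x = solve_lp(b, expected, rewards)
        if x[j] > 1e-9:  # LP wants to serve class j -> accept
            revenue += rewards[j]
            b -= 1
    return revenue

rng = random.Random(0)
rev = resolving_policy(T=100, b=20, arrival_probs=[0.3, 0.4],
                       rewards=[10.0, 4.0], rng=rng)
```

The "carefully translate the LP solution into dynamic actions" step is exactly the acceptance rule above; the paper's actual translation is more refined, but the re-solving structure is the same.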

Bio: Itai Gurvich is a Professor at Cornell Tech and in the Operations Research and Information Engineering Department at Cornell University. He earned a Ph.D. from the Decision, Risk and Operations department at Columbia University’s Graduate School of Business. He spent 8 years teaching at the Kellogg School of Management at Northwestern University. His research interests include performance analysis and optimization of processing networks, the theory of stochastic-process approximation and the application of operations research and statistical tools to healthcare processes.
