ORC IAP Seminar 2019

1/30/19 | 9:30am-4:45pm | 32-141

Machine Learning and Operations Research 

Video of the seminar can be found here: https://www.youtube.com/playlist?list=PL6nqfd-VvqxHtR7RD3jsdT8iFMSiAUcCn


Description: Machine learning techniques are only as good as the data they are built on; optimization and OR models are needed to address issues such as robustness, interpretability, and unobserved data. This year's Operations Research Center IAP Seminar will discuss how these topics are being addressed by both researchers and practitioners.

If you have any questions, please contact us via email: orc_iapcoordinators@mit.edu.


Date: Wednesday, January 30th, 2019
Time: 9:30am-4:45pm 
Place: 32-141

Schedule (timing of talks subject to change):


9:30am-10:00am

COFFEE AND REFRESHMENTS


10:00am-10:45am

Negin Golrezaei

Assistant Professor, MIT 

Title
Dynamic Incentive-Aware Learning: Robust Pricing in Contextual Auctions
Abstract
Motivated by pricing in ad exchange markets, we consider the problem of robust learning of reserve prices against strategic buyers in repeated contextual second-price auctions. Buyers’ valuations for an item depend on the context that describes the item. However, the seller is not aware of the relationship between the context and buyers’ valuations, i.e., buyers’ preferences. The seller’s goal is to design a learning policy that sets reserve prices by observing past sales data, and her objective is to minimize her regret for revenue, where the regret is computed against a clairvoyant policy that knows buyers’ heterogeneous preferences. Given the seller’s goal, utility-maximizing buyers have an incentive to bid untruthfully in order to manipulate the seller’s learning policy. We propose two learning policies that are robust to such strategic behavior. These policies use the outcomes of the auctions, rather than the submitted bids, to estimate the preferences while controlling the long-term effect of the outcome of each auction on future reserve prices. The first policy, called Contextual Robust Pricing (CORP), is designed for the setting where the market noise distribution is known to the seller and achieves a T-period regret of O(d log(Td) log(T)), where d is the dimension of the contextual information. The second policy, a variant of the first called Stable CORP (SCORP), is tailored to the setting where the market noise distribution is unknown to the seller and belongs to an ambiguity set. We show that the SCORP policy has a T-period regret of O(d log(Td) T^(2/3)).
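
To make the setting concrete, here is a toy simulation of a single round of a contextual second-price auction with a reserve price. It only illustrates the environment the talk studies; the CORP and SCORP policies themselves are not reproduced, and the dimensions, distributions, and variable names below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def second_price_with_reserve(bids, reserve):
    """One round of a second-price auction with a reserve price.
    Returns (winner index or None, price paid); the winner pays the
    larger of the reserve and the second-highest bid."""
    order = np.argsort(bids)[::-1]
    top, runner_up = order[0], order[1]
    if bids[top] < reserve:
        return None, 0.0                      # reserve not met: item unsold
    return top, max(reserve, bids[runner_up])

# Toy contextual valuations: v_i = <theta_i, x> + noise, with theta_i
# (the buyers' preferences) unknown to the seller.
d = 3                                         # dimension of the context
x = rng.normal(size=d)                        # context describing the item
theta = rng.uniform(0, 1, size=(2, d))        # two buyers' hidden preference vectors
values = theta @ x + rng.normal(scale=0.1, size=2)

# With truthful bids, learning preferences from bids would be easy; strategic
# buyers may shade their bids to lower future reserves, which is why the talk's
# policies learn from auction outcomes rather than from the submitted bids.
bids = values                                 # truthful bidding, for illustration only
winner, price = second_price_with_reserve(bids, reserve=0.5)
print(winner, price)
```
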
Bio

Negin Golrezaei is an Assistant Professor of Operations Management at the MIT Sloan School of Management. Her current research interests are in the areas of machine learning, statistical learning theory, mechanism design, and optimization algorithms with applications to revenue management, pricing, and online markets. Before joining MIT, Negin spent a year as a postdoctoral fellow at Google Research in New York, where she worked with the Market Algorithms team to develop, design, and test new mechanisms and algorithms for online marketplaces. She is the recipient of the 2017 George B. Dantzig Dissertation Award, the INFORMS Revenue Management and Pricing Section Dissertation Prize, the University of Southern California (USC) Ph.D. Achievement Award (2017), the USC CAMS Graduate Student Prize for excellence in research with a substantial mathematical component (2017), and the USC Provost's Ph.D. Fellowship (2011). Negin received her BSc (2007) and MSc (2009) degrees in electrical engineering from the Sharif University of Technology, Iran, and a Ph.D. (2017) in operations research from USC.

 


11:00am-11:45am

Nathan Kallus

Assistant Professor, Cornell University

Title
Learning to Personalize from Observational Data Under Unobserved Confounding
Abstract

Recent work on counterfactual learning from observational data aims to leverage large-scale data -- much larger than any experiment can ever be -- to learn individual-level causal effects for personalized interventions. The hope is to transform electronic medical records into personalized treatment regimes, transactional records into personalized pricing strategies, and click- and "like"-streams into personalized advertising campaigns. Motivated by the richness of the data, existing approaches (including my own) make the simplifying assumption that there are no unobserved confounders: unobserved variables that affect both treatment and outcome and would induce non-causal correlations that cannot be accounted for. However, all observational data, which lack experimental manipulation, no matter how rich, will inevitably be subject to some level of unobserved confounding, and assuming otherwise can lead to personalized treatment policies that seek to exploit individual-level effects that are not really there, intervene where it is not necessary, and in fact do net harm rather than net good relative to current, non-personalized practices. The question is then how to use such powerfully rich data to safely improve upon current practices. In this talk, I will present a novel approach to the problem that calibrates policy learning to realistic violations of the unverifiable assumption of unconfoundedness. Our framework for confounding-robust policy improvement optimizes the minimax regret of a candidate policy against a baseline standard-of-care policy over an uncertainty set for propensity weights motivated by sensitivity analysis in causal inference. By establishing a finite-sample generalization bound, we prove that our robust policy, when applied in practice, is (almost) guaranteed to do no worse than the baseline and to improve upon it when possible. We characterize the adversarial optimization subproblem and use efficient algorithmic solutions to optimize over policy spaces such as hyperplanes, score cards, and decision trees. We assess our methods on a large clinical trial of acute ischaemic stroke treatment, demonstrating that hidden confounding can hinder existing approaches and lead to overeager intervention and unwarranted harm, while our robust approach guarantees safety and focuses on well-evidenced improvement, a necessity for making personalized treatment policies learned from observational data usable in practice.
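
A schematic version of the minimax-regret objective described above may help fix ideas; the notation is ours and is only meant as a sketch of the framework, not the talk's exact formulation.

```latex
% \Pi: policy class;  \pi_0: baseline standard-of-care policy;
% \mathcal{U}: uncertainty set of propensity weights consistent with the
% assumed degree of hidden confounding (from a sensitivity analysis);
% \hat{V}_W: weighted estimate of a policy's value from observational data
% under weights W.
\[
  \hat{\pi} \;\in\; \arg\min_{\pi \in \Pi}\;
  \max_{W \in \mathcal{U}}\;
  \Big( \hat{V}_W(\pi_0) - \hat{V}_W(\pi) \Big)
\]
% If the optimal worst-case regret is nonpositive, the learned policy is
% estimated to do no worse than the baseline under every plausible weighting.
```
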

Bio

Nathan is an Assistant Professor in the School of Operations Research and Information Engineering and Cornell Tech at Cornell University. Nathan's research revolves around data-driven decision making, the interplay of optimization and statistics in decision making and in inference, and the analytical capacities and challenges of observational data. Nathan holds a PhD in Operations Research from MIT as well as a BA in Mathematics and a BS in Computer Science, both from UC Berkeley. Before coming to Cornell, Nathan was a Visiting Scholar at USC's Department of Data Sciences and Operations and a Postdoctoral Associate in MIT's Operations Research and Statistics group.

 


12:00pm-1:30pm

LUNCH BREAK (not provided)


1:30pm-3:00pm 

PANEL DISCUSSION

Bala Chandran

Co-founder and CEO, Lumo

Bio

Bala Chandran is the Co-founder and CEO of Lumo, a Boston-based startup that forecasts flight delays. He is equal parts aviation geek and data nerd, and has a PhD from Berkeley, where he worked on network optimization algorithms.

 

Virginia Goodwin

Technical Staff, MIT Lincoln Laboratory

Bio

Ms. Goodwin is a member of the technical staff in the Homeland Protection Systems Group in the Homeland Protection and Air Traffic Control Division of MIT Lincoln Laboratory. She joined the Laboratory in 2004, in the Air, Missile, & Maritime Defense Technology Division. Virginia’s work focuses on implementing novel computer vision and machine learning algorithms for decision support across multiple sensor modalities. She received her bachelor’s degree in physics from Wellesley College in 2004 and her master’s degree in electrical engineering from Harvard University in 2010.

 

Kermit Threatte

Director, Wayfair

Bio

Kermit Threatte is a Director on Wayfair’s Operations Research team and leads projects to optimize warehouse efficiency and order fulfillment in both North America and Europe. Before joining Wayfair, Kermit spent 17+ years consulting for Analytics Operations Engineering and McKinsey & Company. Consulting for so many years provided the opportunity to work across a broad spectrum of problems and industries, including supply chain optimization, machine learning for both operations and marketing, and a host of stochastic problems from inventory management to call center optimization. Kermit received an SM from the Operations Research Center in 2001 and still lives in the Boston area, taking advantage of the great outdoor activities New England has to offer, including hiking, cycling, skiing, and snowshoeing.

Submit questions here.


3:00pm-3:45pm

Caroline Uhler

Associate Professor, MIT 

Title
Using Interventional Data for Causal Inference
Abstract
Large-scale interventional datasets are becoming available in various fields, most prominently in genomics and advertising. The availability of such data motivates the development of a causal inference framework that is based on observational and interventional data. We first characterize the causal relationships that are identifiable from interventional data. In particular, we show that imperfect interventions, which only modify (i.e., without necessarily eliminating) the dependencies between targeted variables and their causes, provide the same causal information as perfect interventions, despite being less invasive. Second, we present the first provably consistent algorithm for learning a causal network from a mix of observational and interventional data. We end by discussing applications of this causal inference framework to the estimation of gene regulatory networks.
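
As a toy illustration of the extra information interventions provide, the sketch below simulates a two-variable system whose ground truth is X → Y. Observational correlation alone cannot orient this edge, but the lack of response of X under an intervention on Y can; the simulation, sample sizes, and variable names are our own and are not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Ground truth: X -> Y. From observational data alone, X -> Y and Y -> X can
# produce the same correlation structure, so the edge cannot be oriented.
x_obs = rng.normal(size=n)
y_obs = 2.0 * x_obs + rng.normal(size=n)

# Intervention on Y: Y is set by external assignment, independently of X.
x_int = rng.normal(size=n)
y_int = rng.normal(size=n)

print(np.corrcoef(x_obs, y_obs)[0, 1])  # strong observational correlation
print(np.corrcoef(x_int, y_int)[0, 1])  # ~0 under do(Y): X does not respond to Y,
                                         # consistent with X -> Y rather than Y -> X
```
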
Bio

Caroline Uhler joined the MIT faculty in 2015 and is currently the Henry L. and Grace Doherty Associate Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. She holds an MSc in mathematics, a BSc in biology, and an MEd in high school mathematics education, all from the University of Zurich. She obtained her PhD in statistics, with a designated emphasis in computational and genomic biology, from the University of California, Berkeley in 2011. She then spent a semester as a research fellow in the program on "Theoretical Foundations of Big Data Analysis" at the Simons Institute at UC Berkeley, held postdoctoral positions at the Institute for Mathematics and its Applications at the University of Minnesota and at ETH Zurich, and spent three years as an assistant professor at IST Austria. She is a Sloan Research Fellow and an elected member of the International Statistical Institute, and she received an NSF CAREER Award, a Sofja Kovalevskaja Award from the Humboldt Foundation, and a START Award from the Austrian Science Foundation. Her research focuses on mathematical statistics and computational biology, in particular on graphical models and causal inference with applications to gene regulation.

 


4:00pm-4:45pm

Bartolomeo Stellato

Postdoctoral Associate, MIT Operations Research Center

Title
The Voice of Optimization
Abstract

We present a new way to see optimization problems. Using machine learning techniques, we are able to predict the strategy behind the optimal solution of any continuous or mixed-integer convex optimization problem as a function of its key parameters. The benefits of our approach are interpretability and speed. We use interpretable machine learning algorithms such as optimal classification trees (OCTs) to gain insights into the relationship between the problem parameters and the optimal solution. In this way, optimization is no longer a black box, and we can understand it. In addition, once we train the predictor, we can solve optimization problems at very high speed. This aspect is also relevant for non-interpretable machine learning methods such as neural networks (NNs), since they can be evaluated very efficiently after the training phase.

We show on several realistic examples that the accuracy of our approach is in the 90%-100% range, and that even when the predictions are not correct, the degree of suboptimality or infeasibility is very low. We also benchmark the computation time, beating state-of-the-art solvers by multiple orders of magnitude.

Therefore, our method provides, on the one hand, a novel and insightful understanding of the optimal strategies for solving a broad class of continuous and mixed-integer optimization problems and, on the other hand, a powerful computational tool for solving online optimization problems at very high speed.
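
A minimal sketch of the offline/online pipeline described above, on a toy parametric problem where the "strategy" is simply which bound constraint is active at the optimum. We use scikit-learn's standard decision tree as a stand-in for the optimal classification trees mentioned in the abstract, and the toy problem, labels, and function names are our own.

```python
# Toy parametric problem: minimize (x - theta)^2 subject to 0 <= x <= 1.
# The "strategy" is which constraint is active at the optimum (lower bound,
# upper bound, or none); given the strategy, the solution is immediate.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def solve(theta):
    """Offline solve: return (optimal x, strategy label)."""
    if theta < 0.0:
        return 0.0, "lower_active"
    if theta > 1.0:
        return 1.0, "upper_active"
    return theta, "interior"

def recover(theta, strategy):
    """Online solve: given the predicted strategy, recover the solution."""
    return {"lower_active": 0.0, "upper_active": 1.0, "interior": theta}[strategy]

# Offline phase: solve the problem for many sampled parameters, record strategies,
# and train a classifier mapping parameters to strategies.
rng = np.random.default_rng(0)
thetas = rng.uniform(-2, 3, size=2000)
labels = [solve(t)[1] for t in thetas]
tree = DecisionTreeClassifier(max_depth=3).fit(thetas.reshape(-1, 1), labels)

# Online phase: predict the strategy for a new parameter and recover the solution.
theta_new = 1.7
strategy = tree.predict([[theta_new]])[0]
print(strategy, recover(theta_new, strategy))   # expect: upper_active 1.0
```

For a general convex or mixed-integer problem, the predicted strategy would instead identify information such as the tight constraints or the integer variable values, so that the online step reduces to solving a much smaller problem.
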

Bio
Bartolomeo Stellato is a Postdoctoral Associate at the Operations Research Center under the supervision of Prof. Dimitris Bertsimas. He obtained a D.Phil. (Ph.D.) in Engineering Science (2017) from the University of Oxford under the supervision of Prof. Paul Goulart as part of the Marie Curie EU project TEMPO. He received a B.Sc. degree in Automation Engineering (2012) from Politecnico di Milano and an M.Sc. in Robotics, Systems and Control (2014) from ETH Zürich. His research focuses on the interplay between machine learning and optimization. He is also interested in fast numerical methods for online optimization and optimal control. In 2016, he visited Prof. Stephen Boyd’s group at Stanford University, where he developed the OSQP solver, now widely used in academia and industry with tens of thousands of downloads per month. He is the recipient of the IEEE Transactions on Power Electronics First Prize Paper Award.

A PDF of the schedule can be found here.
