Robust Certified Machine Learning and Stochastic Optimization

3/2/23 | 4:15pm | E25-111


Amine Bennouna

PhD Student (ORC Best Student Paper Award Winner)
MIT


Abstract:

We study the design of data-driven decision-making and machine learning methods that enjoy guaranteed out-of-sample performance. Our objective is to identify the best-performing method that meets a desired level of robustness.

We start by examining the classical setting of independent and identically distributed (IID) data. We demonstrate the existence of optimal robust methods that undergo a phase transition based on the desired level of robustness. The optimal method can be interpreted as a Kullback-Leibler distributionally robust formulation for strong guarantees and a variance-penalized formulation for moderate guarantees.
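For concreteness, a hedged sketch of the two regimes the abstract alludes to (the notation is illustrative, not taken from the speaker's paper): with empirical distribution $\hat{P}_n$ over $n$ samples and loss $\ell(\theta, z)$, the two limiting formulations might read

```latex
% KL distributionally robust formulation (strong-guarantee regime):
\min_{\theta} \; \sup_{Q \,:\, D_{\mathrm{KL}}(Q \,\|\, \hat{P}_n) \le r}
  \; \mathbb{E}_{Q}\!\big[\ell(\theta, Z)\big]

% Variance-penalized formulation (moderate-guarantee regime),
% with a penalty weight c controlling the strength of the guarantee:
\min_{\theta} \; \mathbb{E}_{\hat{P}_n}\!\big[\ell(\theta, Z)\big]
  + c \,\sqrt{\tfrac{1}{n}\,\mathrm{Var}_{\hat{P}_n}\!\big(\ell(\theta, Z)\big)}
```

The phase transition referred to above is between these two forms: as the desired robustness level varies, the optimal method switches between the worst-case (KL ball) and the variance-penalized objective.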

We then consider the more realistic scenario where data points are corrupted by noise or misspecified. Existing robust methods often overlook generalization and protect only against corruption, failing to provide guaranteed out-of-sample performance. We design a holistic approach that not only protects against corruption but also ensures strong generalization. This is achieved through a combination of Kullback-Leibler and Lévy-Prokhorov ambiguity sets in the framework of distributionally robust optimization. Finally, we demonstrate the effectiveness of our methods in training neural networks, resulting in novel robust networks with state-of-the-art performance.
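A hedged sketch of how such a combined ambiguity set might look (illustrative notation only, not the paper's exact construction): the Lévy-Prokhorov ball absorbs corruption and misspecification, while the Kullback-Leibler ball accounts for statistical (sampling) error,

```latex
% Combined ambiguity set around the empirical distribution \hat{P}_n,
% with corruption radius \varepsilon and statistical radius r:
\mathcal{U} = \Big\{ Q \;:\; \exists\, Q' \ \text{with} \
  \mathrm{LP}(Q', \hat{P}_n) \le \varepsilon
  \ \text{and}\ D_{\mathrm{KL}}(Q \,\|\, Q') \le r \Big\}

% Resulting distributionally robust training problem:
\min_{\theta} \; \sup_{Q \in \mathcal{U}} \; \mathbb{E}_{Q}\!\big[\ell(\theta, Z)\big]
```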

Bio: Amine Bennouna is a fourth-year Ph.D. student at the Operations Research Center at MIT, advised by Professor Bart Van Parys. Prior to joining MIT, he received a Bachelor of Science and a Master of Science in Applied Mathematics from École Polytechnique. His research interests lie at the intersection of machine learning and optimization, with a focus on developing novel, robust, and reliable machine learning and data-driven decision-making methods.

Event Time: 

3/2/23 - 16:15