Beyond Worst-Case Adversaries in Machine Learning

3/25/21 | 4:15pm | Online only

Nika Haghtalab

Assistant Professor
UC Berkeley


Abstract: The widespread application of machine learning in practice necessitates the design of robust learning algorithms that can withstand unpredictable and even adversarial environments. The holy grail of robust algorithm design is to provide performance guarantees that do not diminish in the presence of all-powerful adversaries who know the algorithm and adapt to its decisions. However, for most fundamental problems in machine learning and optimization, such guarantees are impossible. This indicates that new models are required to provide rigorous guidance on the design of robust algorithms. This talk goes beyond worst-case analysis and presents such models and perspectives for fundamental problems in machine learning and optimization. I will also present general-purpose techniques that lead to strong robustness guarantees in practical domains and for a wide range of applications, such as online learning, differential privacy, discrepancy theory, and learning-augmented algorithm design.

Bio: Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. She works broadly on the theoretical aspects of machine learning and algorithmic economics, with a special focus on developing foundations for machine learning that account for social and strategic interactions. Prior to Berkeley, she was an assistant professor in the CS department of Cornell University in 2019-2020 and a postdoctoral researcher at Microsoft Research, New England in 2018-2019. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University in 2018. Her thesis, titled Foundation of Machine Learning, by the People, for the People, received the CMU School of Computer Science Dissertation Award and a SIGecom Dissertation Honorable Mention Award.

Event Time: 3/25/2021 - 16:15