Optimization with Limited Memory?

2/16/23 | 4:15pm | E25-111

Greg Valiant

Associate Professor
Stanford

Abstract: In many natural high-dimensional optimization settings, there are significant gaps between the amount of data or number of iterations required by algorithms whose memory usage scales linearly with the dimension and those required by more complex, memory-intensive algorithms. Do some problems inherently require super-linear (or even quadratic) memory to be solved efficiently? In this talk, we will survey the landscape of fundamental tradeoffs between the amount of available memory and the amount of data or queries to the function being optimized. This will include a discussion of our recent work showing that for optimizing a convex function, given the ability to query the function's value and gradient at chosen points, a significantly super-linear amount of memory is required to achieve the optimal rate of convergence (a rate achieved by algorithms using more than quadratic memory). Throughout, the emphasis will be on the many open problems in this area.
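For orientation, a standard reference point (background facts about the two extremes, not results from the talk): to minimize a 1-Lipschitz convex function over the unit ball in R^d to accuracy \epsilon using value/gradient queries, gradient descent stores essentially only the current iterate, while cutting-plane methods (e.g., center-of-gravity or Vaidya-type schemes) maintain a description of the remaining feasible region. Roughly:

  \text{gradient descent:} \quad O(d)\ \text{memory}, \quad O(1/\epsilon^2)\ \text{queries}
  \text{cutting-plane methods:} \quad O(d^2)\ \text{memory}, \quad O(d \log(1/\epsilon))\ \text{queries}

In the regime \epsilon = 1/\mathrm{poly}(d), the gap in query counts is polynomial in d, and the question surveyed in the talk is how much memory is needed to close it.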

Bio: Gregory Valiant is an Associate Professor in Stanford's Computer Science Department, working at the intersection of Algorithms, Machine Learning, and Information Theory. One of the main themes in his work is the design of efficient algorithms for inferring information about complex distributions given limited amounts of data, or given limits on other resources such as computation time, available memory, communication, or the quality of the available data.

Event Time: February 16, 2023 - 16:15