Robust Stochastic Optimization with proxBoost
Posted: 2019-12-18
Posted by: Xiaoni Tan
Speaker: Junyu Zhang (University of Minnesota Twin Cities)
Time: 2019-12-20, 15:00–16:00
Venue: Lecture Hall, Jiayibing Building, Jingchunyuan 82, BICMR
Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation.
More nuanced high-probability guarantees are rare, and typically either rely on "light-tail" noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high-confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.
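To make the two ingredients concrete, here is a minimal Python sketch of a proxBoost-style driver. Everything below is illustrative rather than the procedure analyzed in the talk: the `inner_solver` interface, the halving regularization schedule, and the trial counts are assumed placeholders, and the toy solver at the end is just crude SGD on a quadratic with heavy-tailed noise.

```python
import numpy as np

def robust_select(points):
    # Robust distance estimation: keep the trial point whose median
    # distance to the other trials is smallest.  If more than half of
    # the independent trials land near the subproblem minimizer (which
    # Markov's inequality gives from an in-expectation guarantee), the
    # selected point is also near it, except with probability decaying
    # exponentially in the number of trials.
    pts = np.asarray(points)
    pairwise = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return pts[np.argmin(np.median(pairwise, axis=1))]

def prox_boost(inner_solver, x0, reg0=64.0, rounds=7, trials=20):
    # Hypothetical proxBoost-style driver (illustrative parameters).
    # `inner_solver(center, reg)` stands for any stochastic method that
    # approximately minimizes, in expectation, the proximal subproblem
    #     f(x) + (reg / 2) * ||x - center||^2.
    # Each round runs independent trials, boosts the in-expectation
    # guarantee to high confidence via robust_select, and recenters the
    # next subproblem at the selected point.
    center = np.asarray(x0, dtype=float)
    reg = reg0
    for _ in range(rounds):
        runs = [inner_solver(center, reg) for _ in range(trials)]
        center = robust_select(runs)
        reg /= 2.0  # assumed geometric schedule: heavy damping first,
                    # relaxed toward the strong-convexity scale
    return center

# Toy usage on f(x) = ||x||^2 / 2 with heavy-tailed (Student-t) gradient noise.
rng = np.random.default_rng(0)

def noisy_prox_solver(center, reg, steps=50):
    x = center.copy()
    for t in range(steps):
        grad = x + reg * (x - center) + rng.standard_t(df=3, size=x.shape)
        x -= grad / ((1.0 + reg) * (t + 1))
    return x

print(prox_boost(noisy_prox_solver, x0=5.0 * np.ones(10)))
```

In this sketch the confidence boosting comes entirely from `robust_select`: taking on the order of log(1/delta) trials per round drives the failure probability of each round below delta, which is the logarithmic overhead in the confidence level, while the number of proximal rounds needed to relax the regularization scales polylogarithmically with the condition number.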