Stochastic Methods for Convex Optimization with "Difficult" Constraints
Speaker: Mengdi Wang, Assistant Professor of Operations Research, Princeton University
Time: Friday, July 4, 2014, 10:30am - 11:30am
Venue: Room 1114, Science Bldg 1
Abstract: Convex optimization problems involving large-scale data or expected values are challenging, especially when these difficulties are associated with the constraint set. We propose random algorithms for such problems, focusing on special structures that lend themselves to sampling: for example, when the constraint is the intersection of many simpler sets, involves a system of inequalities, or involves expected values, and/or when the objective function is itself an expected value. We propose a class of new methods that combine elements of successive projection, stochastic gradient descent, and the proximal point algorithm. This class of methods also contains many known algorithms as special cases. We use a unified analytic framework to prove their almost sure convergence and to establish rates of convergence. Our framework allows the random algorithms to be applied with various sampling schemes (e.g., i.i.d., Markov, sequential), which are suitable for applications involving distributed implementation, large data sets, computing equilibria, or statistical learning.
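To give a flavor of the approach, the following is a minimal sketch (not the speaker's actual algorithm) of one member of this family: at each iteration, take a stochastic gradient step and then project onto a single constraint set sampled i.i.d. from the intersection. The function names, stepsize rule, and the halfspace constraint representation are illustrative assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the halfspace {y : a @ y <= b}."""
    violation = a @ x - b
    if violation <= 0:
        return x  # already feasible for this constraint
    return x - (violation / (a @ a)) * a

def random_projection_sgd(grad, halfspaces, x0, steps=20000, seed=0):
    """Illustrative sketch: alternate a stochastic-gradient-style step
    with projection onto ONE randomly sampled constraint, rather than
    projecting onto the full (expensive) intersection each iteration.

    grad       -- callable returning a (possibly noisy) gradient of f
    halfspaces -- list of (a, b) pairs encoding constraints a @ x <= b
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        alpha = 1.0 / k ** 0.75          # diminishing stepsize (assumed rule)
        x = x - alpha * grad(x)          # gradient (or sampled gradient) step
        idx = rng.integers(len(halfspaces))  # i.i.d. constraint sampling
        a, b = halfspaces[idx]
        x = project_halfspace(x, np.asarray(a, float), float(b))
    return x

# Toy usage: minimize ||x - (2, 2)||^2 subject to x1 <= 1 and x2 <= 1;
# the solution is (1, 1).
c = np.array([2.0, 2.0])
x_star = random_projection_sgd(lambda x: 2 * (x - c),
                               [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0)],
                               x0=[0.0, 0.0])
```

Because only one sampled constraint is enforced per iteration, the iterate is only approximately feasible along the way; under diminishing stepsizes, methods of this type can be shown to converge to the constrained optimum, which is the kind of result the unified framework in the talk addresses.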