Optimized gradient method for smooth convex minimization

Donghwan Kim

University of Michigan


First-order algorithms are widely used to solve large-scale optimization problems in fields such as signal and image processing, machine learning, and communications, since their computational cost per iteration depends only mildly on the problem dimension. Among first-order methods, Nesterov's fast gradient method (FGM) has been celebrated in many applications because it is computationally efficient and achieves the optimal rate O(1/N^2) for decreasing the cost function, where N denotes the number of iterations. In search of the best-performing first-order methods, this talk presents a new first-order method, named the optimized gradient method (OGM), that is computationally as efficient as FGM and has a worst-case cost-function convergence bound that is half that of FGM and that was recently shown to be optimal for large-dimensional smooth convex problems. In addition, this talk presents another new algorithm, OGM-OG (optimized over gradient), obtained by optimizing the step coefficients with respect to the rate of decrease of the gradient norm, whereas the original OGM was optimized with respect to the cost-function decrease.
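
The sketch below contrasts FGM with OGM on a toy least-squares problem, using the update rules as they are commonly stated in the literature; the specific coefficients, problem data, and function names here are illustrative assumptions, not transcribed from the talk.

```python
# Minimal sketch (assumed standard forms of FGM and OGM, not the talk's exact presentation):
# both methods take one gradient step per iteration; OGM adds an extra momentum-like term
# and modifies the final step, which is what tightens its worst-case cost-function bound.
import numpy as np

def fgm(grad, L, x0, N):
    """Nesterov's fast gradient method: O(1/N^2) cost-function rate."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(N):
        y_new = x - grad(x) / L                        # gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        x = y_new + ((t - 1.0) / t_new) * (y_new - y)  # momentum step
        y, t = y_new, t_new
    return y

def ogm(grad, L, x0, N):
    """Optimized gradient method: same per-iteration cost as FGM."""
    x, y, th = x0.copy(), x0.copy(), 1.0
    for k in range(N):
        y_new = x - grad(x) / L                        # gradient step
        if k < N - 1:
            th_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * th**2))
        else:                                          # modified coefficient on the last step
            th_new = 0.5 * (1.0 + np.sqrt(1.0 + 8.0 * th**2))
        # OGM's extra term (th/th_new)*(y_new - x) distinguishes it from FGM
        x = (y_new
             + ((th - 1.0) / th_new) * (y_new - y)
             + (th / th_new) * (y_new - x))
        y, th = y_new, th_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 40))
    b = rng.standard_normal(60)
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
    grad = lambda x: A.T @ (A @ x - b)
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    x0 = np.zeros(40)
    fstar = f(np.linalg.lstsq(A, b, rcond=None)[0])
    for N in (10, 50, 200):
        print(N, f(fgm(grad, L, x0, N)) - fstar, f(ogm(grad, L, x0, N)) - fstar)
```

On such a smooth convex problem, both methods exhibit the O(1/N^2) decrease of the cost function, with OGM's worst-case guarantee being roughly a factor of two smaller; observed behavior on any particular instance can of course differ from the worst-case bounds.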
