arXiv:2212.09413

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis

Published on Dec 19, 2022
Authors:

Abstract

Gradient descent methods and their accelerated and stochastic variants are explained mathematically, with a focus on a unified convergence analysis and on variance-reduced stochastic gradient schemes.

AI-generated summary

In this book chapter, we briefly describe the main components of the gradient descent method and its accelerated and stochastic variants. We aim to explain these components from a mathematical point of view, covering both theoretical and practical aspects, but at an elementary level. We first focus on basic variants of the gradient descent method and then extend our view to recent variants, especially variance-reduced variants of stochastic gradient descent (SGD). Our approach relies on revealing the structures present in the problem and the assumptions imposed on the objective function. Our convergence analysis unifies several known results and relies on a general but elementary recursive expression. We illustrate this analysis on several common schemes.
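The page does not reproduce the chapter's unifying recursive expression, so the following is an illustration only, not necessarily the chapter's exact form: the standard recursion of this type for an $L$-smooth objective $f$, driven by the plain gradient descent update, is

$$
x_{k+1} = x_k - \eta \nabla f(x_k), \qquad 0 < \eta \le \frac{1}{L}
\quad\Longrightarrow\quad
f(x_{k+1}) \le f(x_k) - \frac{\eta}{2}\,\bigl\|\nabla f(x_k)\bigr\|^2 .
$$

Telescoping this inequality over $k = 0, \dots, K-1$ gives $\min_{0 \le k < K} \|\nabla f(x_k)\|^2 \le \frac{2\,(f(x_0) - f^\star)}{\eta K}$, the familiar $\mathcal{O}(1/K)$ rate that recursions of this kind deliver.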
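To make the two families of methods named in the summary concrete, here is a minimal sketch in Python of plain gradient descent alongside one well-known variance-reduced SGD scheme (an SVRG-style estimator). The function names and the least-squares example are illustrative assumptions, not the chapter's own code.

```python
import numpy as np

def gradient_descent(grad_f, x0, step, iters):
    """Plain gradient descent: x_{k+1} = x_k - step * grad_f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad_f(x)
    return x

def svrg(grad_fi, n, x0, step, epochs, inner_iters, seed=0):
    """SVRG-style variance-reduced SGD for f(x) = (1/n) * sum_i f_i(x).

    grad_fi(i, x) returns the gradient of the i-th component f_i at x.
    (Illustrative sketch; not the chapter's implementation.)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = np.mean([grad_fi(i, snapshot) for i in range(n)], axis=0)
        for _ in range(inner_iters):
            i = rng.integers(n)
            # Unbiased, variance-reduced gradient estimator.
            g = grad_fi(i, x) - grad_fi(i, snapshot) + full_grad
            x = x - step * g
    return x

# Illustrative use: least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_fi = lambda i, x: A[i] * (A[i] @ x - b[i])
x_hat = svrg(grad_fi, n=100, x0=np.zeros(5), step=0.01,
             epochs=20, inner_iters=100)
```

The estimator `g` is unbiased for the full gradient, and its variance shrinks as `x` approaches the snapshot; that structural property is what lets variance-reduced schemes keep a constant step size where plain SGD must decay it.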
