
On the rates of convergence of Parallelized Averaged Stochastic Gradient Algorithms

By Antoine Godichon and Sofiane Saadane

Abstract

The growing interest in high-dimensional and functional data analysis has led, over the last decade, to substantial research and a considerable number of new techniques. Parallelized algorithms, which distribute the data across different machines and process them there, are a good way to deal with large samples taking values in high-dimensional spaces. We introduce here a parallelized averaged stochastic gradient algorithm, which processes the data efficiently and recursively, without requiring the distribution of the data across the machines to be uniform. The rate of convergence in quadratic mean as well as the asymptotic normality of the parallelized estimates are given, for strongly and locally strongly convex objectives.
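The abstract describes the scheme only at a high level. Below is a minimal Python sketch of one plausible instantiation, assuming a least-squares objective, a Robbins-Monro step size gamma_n = c * n^(-alpha) with alpha in (1/2, 1), Polyak-Ruppert averaging on each machine, and a sample-size-weighted combination of the local averages; the function names and the combination rule are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np

def averaged_sgd(X, y, alpha=0.66, c=1.0):
    """Polyak-Ruppert averaged SGD on one machine, for the illustrative
    least-squares objective f(theta) = E[(y - x^T theta)^2] / 2."""
    n, d = X.shape
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for i in range(n):
        gamma = c / (i + 1) ** alpha                 # step size gamma_n = c * n^(-alpha)
        grad = (X[i] @ theta - y[i]) * X[i]          # stochastic gradient at sample i
        theta = theta - gamma * grad                 # Robbins-Monro update
        theta_bar += (theta - theta_bar) / (i + 1)   # running average of the iterates
    return theta_bar, n

def parallelized_estimate(shards):
    """Combine the per-machine averaged estimates, weighted by shard size,
    so a non-uniform distribution of the data across machines is handled."""
    results = [averaged_sgd(X, y) for X, y in shards]
    weights = np.array([n for _, n in results], dtype=float)
    thetas = np.array([t for t, _ in results])
    return weights @ thetas / weights.sum()

# Usage: simulate a linear model and split it unevenly across 3 "machines".
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((10000, 3))
y = X @ theta_star + 0.1 * rng.standard_normal(10000)
shards = [(X[:1000], y[:1000]), (X[1000:4000], y[1000:4000]), (X[4000:], y[4000:])]
print(parallelized_estimate(shards))  # close to theta_star
```

Weighting the local averages by the number of samples each machine saw is what makes the deliberately uneven split above harmless; an unweighted mean would over-represent the small shard.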

Topics: Stochastic Gradient Descent, Averaging, Distributed estimation, Central Limit Theorem, Asynchronous parallel optimization, Statistics [stat]
Publisher: HAL CCSD
Year: 2017
OAI identifier: oai:HAL:hal-01620943v1
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • https://hal.archives-ouvertes.... (external link)

