A line-search based SGD algorithm with adaptive importance sampling

Abstract

Stochastic gradient methods are widely used in supervised learning with big data. In this context, importance sampling-based algorithms have been proposed to reduce the variance of the stochastic gradient by introducing practical strategies that approximate the optimal sampling distribution, which is otherwise only theoretically accessible. In this paper, we propose a scheme that combines stochastic gradient descent and adaptive importance sampling with automatic step-size selection based on a stochastic Armijo-type line-search. This approach makes the method robust to the choice of the initial step-size, which would otherwise require a tuning phase that is computationally expensive, or even impractical, in certain big-data scenarios. Moreover, we introduce several mini-batch variants to accelerate the original scheme in practice. Finally, numerical experiments on real datasets validate the proposed method on supervised classification problems.
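The paper itself is not reproduced on this page, so the following is only a minimal sketch of the general recipe the abstract describes: mini-batch SGD in which samples are drawn from an adaptive distribution (here, proportional to recently observed per-sample gradient norms, a common practical proxy), gradients are reweighted by 1/(n p_i) to keep the estimate unbiased, and the step-size is chosen by a stochastic Armijo backtracking test on the same mini-batch. The logistic-regression objective, the score-update rule, and all constants are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_sample_loss_grad(w, Xb, yb, lam):
    """Per-sample logistic losses and gradients (labels in {-1, +1})."""
    z = yb * (Xb @ w)
    losses = np.logaddexp(0.0, -z) + 0.5 * lam * (w @ w)
    s = -yb * np.exp(-np.logaddexp(0.0, z))   # -y_i / (1 + exp(z_i)), computed stably
    grads = Xb * s[:, None] + lam * w          # one gradient row per sample
    return losses, grads

def armijo_is_sgd(X, y, iters=300, batch=32, alpha0=1.0, c=1e-4, shrink=0.5, lam=1e-3):
    n, d = X.shape
    w = np.zeros(d)
    scores = np.ones(n)                        # adaptive sampling scores, start uniform
    for _ in range(iters):
        p = scores / scores.sum()
        idx = rng.choice(n, size=batch, p=p)
        iw = 1.0 / (n * p[idx])                # importance weights keep the estimate unbiased
        losses, grads = per_sample_loss_grad(w, X[idx], y[idx], lam)
        f0 = np.mean(iw * losses)
        g = (grads * iw[:, None]).mean(axis=0)
        # Backtracking Armijo test on the same mini-batch estimate of the loss.
        alpha, gn2 = alpha0, g @ g
        while alpha > 1e-8:
            trial_losses, _ = per_sample_loss_grad(w - alpha * g, X[idx], y[idx], lam)
            if np.mean(iw * trial_losses) <= f0 - c * alpha * gn2:
                break
            alpha *= shrink
        w -= alpha * g
        # Refresh the scores of visited samples with their gradient norms
        # (a common practical proxy for the optimal sampling distribution).
        scores[idx] = np.linalg.norm(grads, axis=1) + 1e-12
    return w

# Tiny synthetic demo: linearly separable two-class data.
X = rng.normal(size=(500, 10))
y = np.sign(X @ rng.normal(size=10))
w_hat = armijo_is_sgd(X, y)
print("train accuracy:", np.mean(np.sign(X @ w_hat) == y))
```

Note the role of the line-search: rather than tuning a fixed step-size, each iteration starts from alpha0 and backtracks until the mini-batch loss decreases sufficiently, which is what makes the scheme robust to the initial step-size choice highlighted in the abstract.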
