
    Accelerated Methods for α-Weakly-Quasi-Convex Problems

    Many problems encountered in training neural networks are non-convex. However, some of them satisfy conditions that are weaker than convexity but still sufficient to guarantee the convergence of some first-order methods. In our work we show that several previously known first-order methods retain their convergence rates under these weaker conditions.
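
    For context, one condition of this kind is α-weak quasi-convexity. The abstract does not state the exact definition, so the formulation below is an assumption based on standard usage of the term in the optimization literature:

        % Assumed standard definition (not quoted from the abstract):
        % f is \alpha-weakly quasi-convex, \alpha \in (0, 1], with respect to a minimizer x^* if
        \[
            \langle \nabla f(x),\, x - x^* \rangle \;\ge\; \alpha \bigl( f(x) - f(x^*) \bigr)
            \qquad \text{for all } x.
        \]
        % Convexity implies this inequality with \alpha = 1, so the condition is strictly weaker.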

    On the Approximation of Toeplitz Operators for Nonparametric H∞-norm Estimation

    Given a stable SISO LTI system G, we investigate the problem of estimating the H∞-norm of G, denoted ||G||_∞, when G is only accessible via noisy observations. Wahlberg et al. recently proposed a nonparametric algorithm based on the power method for estimating the top eigenvalue of a matrix. In particular, by applying a clever time-reversal trick, Wahlberg et al. implement the power method on the top-left n×n corner T_n of the Toeplitz (convolution) operator associated with G. In this paper, we prove sharp non-asymptotic bounds on the length n needed so that ||T_n|| is an ε-additive approximation of ||G||_∞. Furthermore, in the process of demonstrating the sharpness of our bounds, we construct a simple family of finite impulse response (FIR) filters for which the number of timesteps needed by the power method is arbitrarily worse than the number of timesteps needed by parametric FIR identification via least-squares to achieve the same ε-additive approximation.
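
    As a rough illustration of the quantity being approximated, the sketch below builds the top-left n×n Toeplitz corner T_n from the impulse response of a hypothetical FIR filter and estimates ||T_n|| by plain power iteration. This is a noiseless sketch of the general idea only, not the noisy, time-reversal-based procedure of Wahlberg et al.:

        # Minimal sketch (assumptions: g is the known impulse response of a
        # hypothetical FIR filter; plain power iteration, no noise handling).
        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_corner(g, n):
            """Top-left n x n corner of the causal convolution operator of the filter g."""
            col = np.zeros(n)
            m = min(n, len(g))
            col[:m] = g[:m]                      # first column: impulse response coefficients
            row = np.zeros(n)
            row[0] = g[0]                        # causal system: zeros above the diagonal
            return toeplitz(col, row)

        def power_method_norm(T, iters=200, seed=0):
            """Estimate ||T|| (largest singular value) by power iteration on T^T T."""
            rng = np.random.default_rng(seed)
            v = rng.standard_normal(T.shape[1])
            v /= np.linalg.norm(v)
            for _ in range(iters):
                v = T.T @ (T @ v)
                v /= np.linalg.norm(v)
            return np.linalg.norm(T @ v)

        g = np.array([1.0, 0.5, 0.25, 0.125])    # hypothetical FIR impulse response
        for n in (4, 16, 64):
            Tn = toeplitz_corner(g, n)
            print(n, power_method_norm(Tn))      # ||T_n|| increases toward ||G||_inf as n grows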

    Training Deep Networks without Learning Rates Through Coin Betting

    Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
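
    To make the coin-betting reduction concrete, the sketch below implements the basic Krichevsky-Trofimov coin-betting update for a one-dimensional convex loss with gradients bounded by 1. It illustrates the general reduction only; it is not the deep-network variant proposed in the paper, and the toy objective is an assumption for the usage example:

        # Minimal sketch of coin betting as optimization (assumptions: 1-D convex
        # loss, |gradient| <= 1; basic KT bettor, not the paper's deep-network variant).
        import numpy as np

        def coin_betting_minimize(grad, w0=0.0, epsilon=1.0, steps=1000):
            """Krichevsky-Trofimov coin betting; no learning rate appears anywhere."""
            wealth = epsilon                     # initial capital of the bettor
            coin_sum = 0.0                       # running sum of coin outcomes c_t = -g_t
            iterates = []
            for t in range(1, steps + 1):
                beta = coin_sum / t              # KT betting fraction, always in (-1, 1)
                w = w0 + beta * wealth           # signed bet = current iterate
                iterates.append(w)
                g = grad(w)                      # (sub)gradient feedback at the bet
                c = -g                           # coin outcome: rewards betting against the gradient
                wealth += c * (w - w0)           # capital grows or shrinks with the bet's payoff
                coin_sum += c
            return np.mean(iterates)             # the average iterate carries the convergence guarantee

        # Usage on a toy 1-Lipschitz objective f(w) = |w - 3|:
        print(coin_betting_minimize(lambda w: np.sign(w - 3.0)))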

    Learning Linear Dynamical Systems via Spectral Filtering

    We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: we carefully overparameterize the class of LDSs by a polylogarithmic factor, in exchange for convexity of the loss functions. From this arises a polynomial-time algorithm with a near-optimal regret guarantee and an analogous sample complexity bound for agnostic learning. Our algorithm is based on a novel filtering technique, which may be of independent interest: we convolve the time series with the eigenvectors of a certain Hankel matrix. Comment: Published as a conference paper at NIPS 2017.
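
    The filtering step can be sketched as follows: build a fixed Hankel matrix, take its top eigenvectors as convolution filters, and convolve the input series with them to obtain features for a convex regression. The entry formula Z_ij = 2 / ((i+j)^3 - (i+j)) is taken here as the paper's construction but should be treated as an assumption, as should the toy sizes; the online learner over these features is omitted:

        # Minimal sketch of the spectral-filtering features (entry formula and
        # toy parameters are assumptions; the online regression step is omitted).
        import numpy as np

        def hankel_filters(T, k):
            """Top-k eigenvectors of the T x T Hankel matrix, used as convolution filters."""
            idx = np.arange(1, T + 1)
            s = idx[:, None] + idx[None, :]
            Z = 2.0 / (s ** 3 - s)               # assumed entries Z_ij = 2 / ((i+j)^3 - (i+j))
            eigvals, eigvecs = np.linalg.eigh(Z) # symmetric matrix, real eigendecomposition
            return eigvals[-k:], eigvecs[:, -k:] # largest k eigenpairs

        def filtered_features(x, filters):
            """Convolve a scalar input series with each filter over a sliding window."""
            T, k = filters.shape
            feats = np.zeros((len(x), k))
            for t in range(len(x)):
                window = x[max(0, t - T + 1): t + 1][::-1]   # most recent T inputs, newest first
                feats[t, :] = window @ filters[:len(window), :]
            return feats

        x = np.sin(0.3 * np.arange(200))         # toy input sequence
        sigma, phi = hankel_filters(T=32, k=8)   # toy sizes, chosen for illustration
        features = filtered_features(x, phi)     # the online learner would regress outputs on these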