
    Non-stationary Stochastic Optimization

    We consider a non-stationary variant of a sequential stochastic optimization problem, in which the underlying cost functions may change along the horizon. We propose a measure, termed variation budget, that controls the extent of said change, and study how restrictions on this budget impact achievable performance. We identify sharp conditions under which it is possible to achieve long-run-average optimality and more refined performance measures such as rate optimality that fully characterize the complexity of such problems. In doing so, we also establish a strong connection between two rather disparate strands of literature: adversarial online convex optimization, and the more traditional stochastic approximation paradigm (couched in a non-stationary setting). This connection is the key to deriving well-performing policies in the latter by leveraging the structure of optimal policies in the former. Finally, tight bounds on the minimax regret allow us to quantify the "price of non-stationarity," which mathematically captures the added complexity embedded in a temporally changing environment versus a stationary one.
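
    A minimal sketch of the kind of policy such results point to, under illustrative assumptions: a toy quadratic cost whose minimizer drifts with total variation roughly V_T, and a stochastic gradient policy that is restarted in blocks so that stale information from the changing environment is discarded. The cost family, noise level, and block length (one choice consistent with a V_T^{1/3} T^{2/3} regret target) are assumptions for illustration, not the paper's construction.

```python
# Restarted stochastic gradient policy on a drifting toy cost -- a sketch only.
import numpy as np

rng = np.random.default_rng(0)

T = 10_000                           # horizon
V_T = 5.0                            # assumed variation budget of the cost sequence
Delta = int((T / V_T) ** (2 / 3))    # block length consistent with a V_T^{1/3} T^{2/3} regret target

# Drifting minimizers theta_t with total variation roughly V_T (toy construction).
steps = rng.uniform(-1, 1, size=T)
steps *= V_T / np.abs(steps).sum()
theta = np.cumsum(steps)

def noisy_grad(x, t):
    """Noisy gradient of the toy cost f_t(x) = (x - theta_t)^2."""
    return 2 * (x - theta[t]) + rng.normal(scale=0.5)

x, regret = 0.0, 0.0
for t in range(T):
    if t % Delta == 0:               # restart: discard everything learned in the previous block
        x, tau = 0.0, 0
    regret += (x - theta[t]) ** 2    # excess cost incurred at the current iterate
    tau += 1
    x -= (1.0 / np.sqrt(tau)) * noisy_grad(x, t)   # stochastic gradient step within the block

print(f"block length {Delta}, average regret {regret / T:.4f}")
```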

    Non-stationary stochastic optimization of an oscillating water column

    A non-stationary stochastic optimization methodology is applied to an OWC (oscillating water column) to find the design that maximizes the wave energy extraction. Different temporal cycles are considered in the optimization problem to represent the long-term variability of the wave climate at the site. The results of the non-stationary stochastic optimization problem are compared against those obtained by a stationary stochastic optimization problem. The comparative analysis reveals that the proposed non-stationary optimization provides designs with a better fit to reality. However, the stationarity assumption can be adequate when looking at the averaged system response.
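
    A toy sketch (not the paper's model) of the contrast the abstract draws: tuning a single design parameter against one averaged sea state (the stationary view) versus against the expected output over sea states drawn from different temporal cycles (the non-stationary view). The power curve, the two seasonal wave-height distributions, and the "damping" parameter are hypothetical placeholders.

```python
# Stationary vs. non-stationary design optimization on a toy OWC power model -- a sketch only.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Hypothetical wave-height samples (metres) for two temporal cycles of the wave climate.
winter = rng.weibull(2.0, 5_000) * 3.0
summer = rng.weibull(2.0, 5_000) * 1.2
sea_states = np.concatenate([winter, summer])

def power(damping, hs):
    """Toy power curve: extraction peaks when the damping matches the sea state."""
    return hs**2 * np.exp(-((damping - hs) ** 2))

# Stationary formulation: optimize against the long-term averaged sea state.
avg_hs = sea_states.mean()
stationary = minimize_scalar(lambda d: -power(d, avg_hs), bounds=(0.0, 10.0), method="bounded")

# Non-stationary formulation: optimize the expected power over both cycles.
non_stationary = minimize_scalar(lambda d: -power(d, sea_states).mean(), bounds=(0.0, 10.0), method="bounded")

for name, res in [("stationary", stationary), ("non-stationary", non_stationary)]:
    d = res.x
    print(f"{name:15s} damping = {d:.2f}  expected power = {power(d, sea_states).mean():.3f}")
```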

    On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Optimization

    Extrapolation is a well-known technique for solving convex optimization problems and variational inequalities, and has recently attracted some attention in non-convex optimization. Several recent works have empirically shown its success in some machine learning tasks. However, it has not been analyzed for non-convex minimization, and a gap remains between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation converge faster than their counterparts without extrapolation.
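
    A minimal sketch of an extrapolated gradient step on a smooth non-convex toy objective: the gradient is evaluated at a lookahead point y = x + beta * (x - x_prev) rather than at x. This is one common form of extrapolation and may differ in detail from the scheme analyzed in the paper; the toy function, step size, and beta are illustrative assumptions.

```python
# Gradient descent with an extrapolated gradient evaluation -- a sketch only.
import numpy as np

def grad(v):
    """Gradient of the smooth non-convex toy f(v) = 0.5*||v||^2 + 3*sum(cos(v))."""
    return v - 3 * np.sin(v)

def run(beta, eta=0.1, iters=500):
    x = np.array([2.5, -1.0, 0.3])
    x_prev = x.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)   # extrapolated (lookahead) point
        g = grad(y)                   # gradient taken at y rather than at x
        x_prev, x = x, x - eta * g
    return np.linalg.norm(grad(x))    # size of the gradient at the final iterate

print("plain GD           : ||grad f|| =", run(beta=0.0))
print("GD + extrapolation : ||grad f|| =", run(beta=0.3))
```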

    Variance Reduction for Faster Non-Convex Optimization

    We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain full gradient descent, which converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent, which converges in O(1/ε^2) iterations for objectives that are a sum of smooth functions. We provide the first improvement in this line of research. Our result is based on the variance reduction trick recently introduced to convex optimization, as well as a brand new analysis of variance reduction that is suitable for non-convex optimization. For objectives that are a sum of smooth functions, our first-order minibatch stochastic method converges at an O(1/ε) rate and is faster than full gradient descent by a factor of Ω(n^{1/3}). We demonstrate the effectiveness of our methods on empirical risk minimization with non-convex loss functions and on training neural nets.
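
    A minimal sketch of the SVRG-style variance-reduction idea for non-convex finite sums: keep a snapshot point and its full gradient, and correct each minibatch stochastic gradient by the difference of component gradients at the snapshot, which keeps the estimate unbiased while shrinking its variance near the snapshot. The toy objective (least squares with a non-convex regularizer), step size, and epoch length are illustrative assumptions, not the paper's algorithmic constants or experiments.

```python
# SVRG-style variance-reduced minibatch method on a toy non-convex finite sum -- a sketch only.
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam = 0.1

def comp_grad(w, idx):
    """Average gradient over components f_i(w) = (a_i.w - b_i)^2 + lam * sum_j w_j^2 / (1 + w_j^2)."""
    r = A[idx] @ w - b[idx]
    data = 2 * A[idx].T @ r / len(idx)
    reg = lam * 2 * w / (1 + w**2) ** 2
    return data + reg

w = np.zeros(d)
eta, epochs, batch = 0.01, 30, 10
for _ in range(epochs):
    w_snap = w.copy()
    full_grad = comp_grad(w_snap, np.arange(n))          # full gradient at the snapshot
    for _ in range(n // batch):
        idx = rng.integers(0, n, size=batch)
        # Variance-reduced estimate: unbiased for the full gradient, with variance that
        # shrinks as w approaches the snapshot.
        g = comp_grad(w, idx) - comp_grad(w_snap, idx) + full_grad
        w -= eta * g

print("||grad f|| at final iterate:", np.linalg.norm(comp_grad(w, np.arange(n))))
```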