
    Bounds for deterministic and stochastic dynamical systems using sum-of-squares optimization

    We describe methods for proving upper and lower bounds on infinite-time averages in deterministic dynamical systems and on stationary expectations in stochastic systems. The dynamics and the quantities to be bounded are assumed to be polynomial functions of the state variables. The methods are computer-assisted, using sum-of-squares polynomials to formulate sufficient conditions that can be checked by semidefinite programming. In the deterministic case, we seek tight bounds that apply to particular local attractors. An obstacle to proving such bounds is that they do not hold globally; they are generally violated by trajectories starting outside the local basin of attraction. We describe two closely related ways past this obstacle: one that requires knowing a subset of the basin of attraction, and another that considers the zero-noise limit of the corresponding stochastic system. The bounding methods are illustrated using the van der Pol oscillator. We bound deterministic averages on the attracting limit cycle above and below to within 1%, which requires a lower bound that does not hold for the unstable fixed point at the origin. We obtain similarly tight upper and lower bounds on stochastic expectations for a range of noise amplitudes. Limitations of our methods for certain types of deterministic systems are discussed, along with prospects for improvement.

    Comment: 25 pages; Added new Section 7.2; Added references; Corrected typos; Submitted to SIAD
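    To make the bounding idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' code) for the 1-D system dx/dt = x - x^3 with Phi(x) = x^2. An auxiliary polynomial V(x) and a bound U are sought such that S(x) = U - Phi(x) - f(x) V'(x) is a sum of squares; since the time average of dV/dt vanishes on bounded trajectories, any such certificate gives <Phi> <= U. Requiring S to equal z^T Q z with Q positive semidefinite turns the search into a semidefinite program, written here with cvxpy.

```python
# Minimal sketch of the SOS bounding idea for dx/dt = f(x) = x - x^3, Phi(x) = x^2.
# Auxiliary polynomial V(x) = c2*x^2 + c4*x^4; certificate S(x) = U - Phi - f*V' >= 0.
import cvxpy as cp

U  = cp.Variable()          # candidate upper bound on the time average of x^2
c2 = cp.Variable()          # coefficients of the auxiliary polynomial V
c4 = cp.Variable()
Q  = cp.Variable((4, 4), symmetric=True)   # Gram matrix for the basis z = [1, x, x^2, x^3]

# Coefficients of S(x) = U - x^2 - (x - x^3)*(2*c2*x + 4*c4*x^3), degrees 0..6.
s = [U, 0, -1 - 2 * c2, 0, 2 * c2 - 4 * c4, 0, 4 * c4]

# S(x) = z^T Q z: match the coefficient of x^k for every degree k, with Q PSD.
constraints = [Q >> 0]
for k in range(7):
    constraints.append(
        sum(Q[i, j] for i in range(4) for j in range(4) if i + j == k) == s[k]
    )

prob = cp.Problem(cp.Minimize(U), constraints)
prob.solve()
print("upper bound on <x^2>:", U.value)
```

    For this toy system the solver should return U close to 1, attained by V = x^2/2, since then S(x) = (1 - x^2)^2 is itself a square. The papers' examples (e.g. the van der Pol oscillator) follow the same pattern in more variables and higher polynomial degree.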

    On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Optimization

    Extrapolation is a well-known technique for solving convex optimization problems and variational inequalities, and it has recently attracted attention for non-convex optimization. Several recent works have empirically shown its success in some machine learning tasks. However, it has not been analyzed for non-convex minimization, and a gap remains between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation converge faster than their counterparts without extrapolation.
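    For illustration, the sketch below shows one common form of the extrapolation step (a hypothetical NumPy example, not necessarily the paper's exact algorithm): the gradient is evaluated at an extrapolated point y_t = x_t + gamma * (x_t - x_{t-1}) before the step is taken, which is what distinguishes the scheme from plain gradient descent.

```python
# Hypothetical sketch of gradient descent with extrapolation for a smooth
# (possibly non-convex) objective, given access to its gradient.
import numpy as np

def gd_extrapolation(grad, x0, eta=0.1, gamma=0.5, iters=1000, tol=1e-6):
    """Seek an approximate first-order stationary point (||grad|| <= tol)."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + gamma * (x - x_prev)       # extrapolation step
        x_prev, x = x, x - eta * grad(y)   # gradient step from x, using the gradient at y
        if np.linalg.norm(grad(x)) < tol:  # approximate first-order stationarity
            break
    return x

# Example on a simple non-convex objective f(x) = x^4/4 - x^2/2 (stationary points 0, +-1).
f_grad = lambda x: x**3 - x
print(gd_extrapolation(f_grad, np.array([3.0])))   # settles near a stationary point
```

    A stochastic variant would replace grad(y) with a minibatch estimate of the gradient; the paper's bounds concern the number of such (stochastic) gradient evaluations needed to reach an approximate stationary point.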