9,202 research outputs found

    New advances in H∞ control and filtering for nonlinear systems

    The main objective of this special issue is to summarise recent advances in H∞ control and filtering for nonlinear systems, including time-delay, hybrid and stochastic systems. The published papers provide new ideas and approaches, clearly indicating the advances made in problem statements, methodologies or applications with respect to existing results. The special issue also includes papers focusing on advanced and non-traditional methods, presenting considerable novelty in theoretical background or experimental setup. Some papers present applications to newly emerging fields, such as network-based control and estimation.
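    For readers new to the area, the sketch below illustrates the classical γ-level test that underlies much H∞ analysis: for a stable linear system G(s) = C(sI − A)⁻¹B with D = 0, a level γ exceeds ‖G‖∞ exactly when an associated Hamiltonian matrix has no eigenvalues on the imaginary axis, so the norm can be located by bisection. This is a generic textbook construction offered as background, not code from any paper in the issue; the function name and tolerances are illustrative assumptions.

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """H-infinity norm of G(s) = C (sI - A)^{-1} B, with A Hurwitz and D = 0,
    by bisection on the Hamiltonian eigenvalue test (bounded-real lemma).
    Illustrative sketch; names and tolerances are assumptions."""
    def gamma_is_upper_bound(gamma):
        # gamma > ||G||_inf  iff  H_gamma has no imaginary-axis eigenvalues
        H = np.block([[A,                  (B @ B.T) / gamma],
                      [-(C.T @ C) / gamma, -A.T]])
        return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-8)

    lo, hi = tol, 1.0
    while not gamma_is_upper_bound(hi):  # grow hi until it exceeds the norm
        hi *= 2.0
    while hi - lo > tol:                 # bisect down to the norm
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if not gamma_is_upper_bound(mid) else (lo, mid)
    return 0.5 * (lo + hi)

# Example: G(s) = 1/(s + 1) has H-infinity norm 1.
print(hinf_norm(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])))
```

    The nonlinear, time-delay and stochastic settings surveyed in the issue replace this eigenvalue test with Hamilton–Jacobi inequalities or linear matrix inequalities, but the γ-level question being answered is the same.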

    Geodesics on Calabi-Yau manifolds and winding states in nonlinear sigma models

    We conjecture that a non-flat D-real-dimensional compact Calabi-Yau manifold, such as a quintic hypersurface with D=6 or a K3 manifold with D=4, has locally length-minimizing closed geodesics, and that the number of these with length less than L grows asymptotically as L^D. We also outline the physical arguments behind this conjecture, which involve the claim that all states in a nonlinear sigma model can be identified as "momentum" and "winding" states in the large-volume limit. (Comment: minor corrections, 43 pages; to appear in the Mathematical Physics section of Frontiers in Physics, Dec 16, 201)
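    In symbols, the counting conjecture stated in the abstract reads as follows (my transcription; c denotes an unspecified positive constant):

```latex
% N(L): number of locally length-minimizing closed geodesics of length at most L
% on a non-flat compact Calabi-Yau manifold of real dimension D
\[
  N(L) \;=\; \#\bigl\{\, \gamma \text{ closed, locally length-minimizing} : \ell(\gamma) \le L \,\bigr\}
  \;\sim\; c\, L^{D}, \qquad L \to \infty .
\]
```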

    Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information

    We consider variants of trust-region and cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds for achieving ε-approximate second-order optimality, which are shown to be tight. Our Hessian approximation conditions constitute a major relaxation over existing ones in the literature. Consequently, we are able to show that such mild conditions allow the approximate Hessian to be constructed through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods. (Comment: 32 pages)
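    To make the setup concrete, here is a minimal sketch of one sub-sampled trust-region step for finite-sum minimization. It is an illustration under assumed details, not the authors' implementation: the Hessian is estimated from a uniform sub-sample, and the trust-region sub-problem is solved exactly by eigendecomposition plus a one-dimensional root search (the paper allows approximate sub-problem solutions, and the so-called hard case is ignored here for brevity).

```python
import numpy as np

def subsampled_hessian(hess_i, n, w, batch, rng):
    """Uniform sub-sampling: H_S = (1/|S|) * sum_{i in S} hess_i(i, w),
    where hess_i(i, w) returns the Hessian of the i-th component function."""
    S = rng.choice(n, size=batch, replace=False)
    return sum(hess_i(i, w) for i in S) / batch

def trust_region_step(g, H, delta):
    """Minimize g's + 0.5*s'Hs subject to ||s|| <= delta (hard case ignored)."""
    lam, Q = np.linalg.eigh(H)
    gq = Q.T @ g
    snorm = lambda mu: np.linalg.norm(gq / (lam + mu))
    if lam[0] > 0 and snorm(0.0) <= delta:
        return Q @ (-gq / lam)              # interior (Newton) step
    lo = max(0.0, -lam[0]) + 1e-12          # shift making H + mu*I pos. definite
    hi = lo + 1.0
    while snorm(hi) > delta:                # bracket the boundary solution
        hi *= 2.0
    for _ in range(100):                    # bisect on ||s(mu)|| = delta
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if snorm(mid) > delta else (lo, mid)
    return Q @ (-gq / (lam + 0.5 * (lo + hi)))
```

    A full method would then compare the actual to the model-predicted decrease to accept or reject the step and to grow or shrink delta, as in standard trust-region frameworks.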

    Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study

    While first-order optimization methods such as stochastic gradient descent (SGD) are popular in machine learning (ML), they come with well-known deficiencies, including relatively slow convergence, sensitivity to the settings of hyper-parameters such as the learning rate, stagnation at high training errors, and difficulty in escaping flat regions and saddle points. These issues are particularly acute in highly non-convex settings such as those arising in neural networks. Motivated by this, there has been recent interest in second-order methods that aim to alleviate these shortcomings by capturing curvature information. In this paper, we report detailed empirical evaluations of a class of Newton-type methods, namely sub-sampled variants of trust-region (TR) and adaptive regularization with cubics (ARC) algorithms, for non-convex ML problems. In doing so, we demonstrate that these methods not only can be computationally competitive with hand-tuned SGD with momentum, obtaining comparable or better generalization performance, but are also highly robust to hyper-parameter settings. Further, in contrast to SGD with momentum, we show that the manner in which these Newton-type methods employ curvature information allows them to seamlessly escape flat regions and saddle points. (Comment: 21 pages, 11 figures; restructured the paper and added experiments)
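    On the ARC side, the cubic-regularized sub-problem admits a similarly compact characterization: the global minimizer of g's + (1/2)s'Hs + (σ/3)‖s‖³ satisfies s = −(H + νI)⁻¹g with ν = σ‖s‖ and H + νI positive semidefinite, so ν can be found by a one-dimensional root search. The sketch below is an illustration in the same style as the trust-region step above, not the paper's code, and again ignores the hard case.

```python
import numpy as np

def cubic_reg_step(g, H, sigma):
    """Global minimizer of g's + 0.5*s'Hs + (sigma/3)*||s||^3 (hard case ignored).
    Optimality: s = -(H + nu*I)^{-1} g with nu = sigma*||s|| and H + nu*I >= 0."""
    lam, Q = np.linalg.eigh(H)
    gq = Q.T @ g
    snorm = lambda nu: np.linalg.norm(gq / (lam + nu))
    # phi(nu) = ||s(nu)|| - nu/sigma is strictly decreasing, so bisection applies
    lo = max(0.0, -lam[0]) + 1e-12
    hi = lo + 1.0
    while snorm(hi) > hi / sigma:           # bracket the root of phi
        hi *= 2.0
    for _ in range(100):                    # bisect on phi(nu) = 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if snorm(mid) > mid / sigma else (lo, mid)
    nu = 0.5 * (lo + hi)
    return Q @ (-gq / (lam + nu))
```

    In the sub-sampled setting the paper studies, H would be a sub-sampled Hessian, and sigma would be adapted across iterations according to how well the cubic model predicts the actual decrease.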