    Fourth Order Gradient Symplectic Integrator Methods for Solving the Time-Dependent Schrödinger Equation

    We show that splitting the operator $\mathrm{e}^{\epsilon(T+V)}$ to fourth order with purely positive coefficients produces excellent algorithms for solving the time-dependent Schrödinger equation. These algorithms require knowing the potential and the gradient of the potential. One fourth-order algorithm requires only four fast Fourier transforms per iteration. In a one-dimensional scattering problem, the fourth-order error coefficients of these new algorithms are roughly 500 times smaller than those of fourth-order algorithms with negative coefficients, such as those based on the traditional Ruth-Forest symplectic integrator. These algorithms can reproduce the converged results of conventional second- or fourth-order algorithms using time steps 5 to 10 times as large. Iterating these positive-coefficient algorithms to sixth order also produces better-converged results than iterating the Ruth-Forest algorithm to sixth order or using Yoshida's sixth-order algorithm A directly.
    Comment: 11 pages, 2 figures, submitted to J. Chem. Phys.
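    To make the splitting concrete, below is a minimal sketch of one propagation step in the style of Chin's forward 4A factorization, $\mathrm{e}^{\epsilon H} \approx \mathrm{e}^{\frac{\epsilon}{6}V}\,\mathrm{e}^{\frac{\epsilon}{2}T}\,\mathrm{e}^{\frac{2\epsilon}{3}\tilde V}\,\mathrm{e}^{\frac{\epsilon}{2}T}\,\mathrm{e}^{\frac{\epsilon}{6}V}$ with $\tilde V = V + \frac{\epsilon^2}{48}[V,[T,V]]$ and $[V,[T,V]] = (\hbar^2/m)\,|\nabla V|^2$. The specific coefficients, the $\epsilon^2/48$ gradient correction, and the helper name chin4a_step are assumptions for illustration, not taken verbatim from the paper; the two kinetic sub-steps account for exactly the four FFTs per iteration mentioned above.

```python
import numpy as np

def chin4a_step(psi, k, V, gradV_sq, eps, m=1.0, hbar=1.0):
    """One fourth-order, positive-coefficient split-operator step for the TDSE.

    psi      : wavefunction on a uniform grid
    k        : FFT wavenumbers, e.g. 2*np.pi*np.fft.fftfreq(N, d=dx)
    V        : potential on the grid
    gradV_sq : |dV/dx|^2 on the grid, e.g. np.gradient(V, dx)**2
    eps      : time step
    """
    # Gradient-corrected mid-point potential (assumed 4A form).
    V_mid = V + (eps**2 / 48.0) * (hbar**2 / m) * gradV_sq

    # Kinetic half-step phase in momentum space: exp(-i (eps/2) T / hbar).
    kin_half = np.exp(-1j * (eps / 2.0) * hbar * k**2 / (2.0 * m))

    psi = np.exp(-1j * eps * V / (6.0 * hbar)) * psi            # exp(-i eps V / 6 hbar)
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))               # kinetic half-step (2 FFTs)
    psi = np.exp(-1j * 2.0 * eps * V_mid / (3.0 * hbar)) * psi  # corrected mid-step
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))               # kinetic half-step (2 FFTs)
    psi = np.exp(-1j * eps * V / (6.0 * hbar)) * psi            # exp(-i eps V / 6 hbar)
    return psi
```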

    Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent

    The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms that directly minimize the 0/1 loss for perceptrons, and we prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. These advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and can achieve the lowest test error on many complex data sets when coupled with AdaBoost.
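    As a rough illustration of the idea (not the authors' exact procedure), a "coordinate" step can be taken along a randomly drawn direction, keeping the offset that most reduces the empirical 0/1 loss. The fixed grid of candidate step sizes below stands in for an exact line search; for the 0/1 loss, the loss along a direction changes only at finitely many breakpoints, so an exact search is possible.

```python
import numpy as np

def zero_one_loss(w, X, y):
    """Fraction of misclassified points for a linear perceptron w (bias folded into X)."""
    return np.mean(np.sign(X @ w) != y)

def random_coordinate_descent(X, y, n_iters=1000, seed=0):
    """Sketch of random coordinate descent on the 0/1 loss.

    X : (n, d) data with a constant 1 column appended for the bias.
    y : labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w[-1] = 1e-3  # avoid the all-zero weight vector
    best = zero_one_loss(w, X, y)
    ts = np.linspace(-2.0, 2.0, 41)         # simplified candidate step sizes
    for _ in range(n_iters):
        u = rng.standard_normal(d)          # random search direction
        losses = [zero_one_loss(w + t * u, X, y) for t in ts]
        i = int(np.argmin(losses))
        if losses[i] < best:                # accept only improving steps
            best = losses[i]
            w = w + ts[i] * u
    return w, best
```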

    Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case

    The analysis in Part I revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization when gradient noise is present. These algorithms are used when the risk functions are non-smooth and involve non-differentiable components. They have long been regarded as slowly converging methods. However, Part I revealed that the rate of convergence becomes linear for stochastic optimization problems, with the error iterate converging at an exponential rate $\alpha^i$ to within an $O(\mu)$-neighborhood of the optimizer, for some $\alpha \in (0,1)$ and small step-size $\mu$. The conclusion was established under weaker assumptions than the prior literature and, moreover, several important problems (such as LASSO, SVM, and total variation) were shown to satisfy these weaker assumptions automatically (but not the conditions previously used in the literature). These results revealed that subgradient learning methods have more favorable behavior than originally thought when used to enable continuous adaptation and learning. The results of Part I were exclusive to single-agent adaptation. The purpose of the current Part II is to examine the implications of these discoveries when a collection of networked agents employs subgradient learning as its cooperative mechanism. The analysis will show that, despite the coupled dynamics that arise in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within $O(\mu)$ of the optimizer.
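    A hedged sketch of the kind of cooperative scheme the abstract refers to: each agent takes a stochastic subgradient step on its local risk and then averages with its neighbors (an adapt-then-combine diffusion update with a doubly-stochastic combination matrix A). The regularized hinge loss and the specific combination rule below are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def diffusion_subgradient(data, A, mu=0.01, n_iters=2000, lam=0.1, seed=0):
    """Adapt-then-combine diffusion with stochastic subgradients.

    data : list of (X_k, y_k) per agent, labels in {-1, +1}.
    A    : (K, K) doubly-stochastic combination matrix; A[l, k] > 0 only
           if agents l and k are neighbors.
    Uses a regularized hinge loss as an example of a non-smooth risk.
    """
    rng = np.random.default_rng(seed)
    K = len(data)
    d = data[0][0].shape[1]
    W = np.zeros((K, d))                 # one iterate per agent
    for _ in range(n_iters):
        Psi = np.empty_like(W)
        for k, (Xk, yk) in enumerate(data):
            i = rng.integers(len(yk))    # sample one point (gradient noise)
            x, y = Xk[i], yk[i]
            # Subgradient of lam/2 ||w||^2 + max(0, 1 - y w.x)
            g = lam * W[k]
            if y * (x @ W[k]) < 1.0:
                g -= y * x
            Psi[k] = W[k] - mu * g       # adapt: constant step-size mu
        W = A.T @ Psi                    # combine: average over neighbors
    return W
```

    With a constant step-size $\mu$, each agent's iterate would be expected to settle within an $O(\mu)$-neighborhood of the common optimizer rather than converge exactly, matching the behavior described above.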

    FE tool for drape modelling and resin pocket prediction of fully embedded optical fiber sensor system

    This work highlights some of the achievements of the EU FP7 SmartFiber project, which aims to develop a fully embeddable optical fiber sensor system, including the interrogator chip. The focus is on resolving issues holding back the industrial uptake of optical sensing technology. The first section discusses the development of a placement head for the automated lay-down of an optical sensor line (including the SmartFiber interrogator system) during composite manufacturing. The second section turns to the occurrence of resin pockets surrounding inclusions such as the SmartFiber interrogator. A computationally efficient FE approach is presented that is capable of accurately predicting resin pocket geometries. Both small (i.e. optical fiber sensors) and large (i.e. the SmartFiber interrogator) inclusions are considered, and the FE predictions are validated against experimental observations.