    Singular and non-singular eigenvectors for the Gaudin model

    We present a method to construct a basis of singular and non-singular common eigenvectors for Gaudin Hamiltonians in a tensor product module of the Lie algebra sl(2). The subset of singular vectors is completely described by analogy with covariant differential operators. The relation between singular eigenvectors and the Bethe Ansatz is discussed. In each weight subspace the set of singular eigenvectors is completed to a basis by a family of non-singular eigenvectors. We also discuss the generalization of this method to the case of an arbitrary Lie algebra. Comment: 19 pages.
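
    For context, a standard form of the commuting Gaudin Hamiltonians on an N-fold tensor product of sl(2) modules (not quoted in the abstract; the inhomogeneity parameters z_i below are part of this sketch, not of the summary above) is

        \[
          H_i \;=\; \sum_{j \neq i} \frac{\vec{S}_i \cdot \vec{S}_j}{z_i - z_j},
          \qquad i = 1, \dots, N,
        \]

    where \vec{S}_i acts as the sl(2) generators on the i-th tensor factor and the z_i are pairwise distinct parameters. The H_i commute with each other and with the total sl(2) action, which is why singular (highest-weight) vectors can be sought as common eigenvectors within each weight subspace.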

    Stabilization of Unstable Procedures: The Recursive Projection Method

    Fixed-point iterative procedures for solving nonlinear parameter-dependent problems can converge for some interval of parameter values and diverge as the parameter changes. The Recursive Projection Method (RPM), which stabilizes such procedures by computing a projection onto the unstable subspace, is presented. On this subspace a Newton or special Newton iteration is performed, and the fixed-point iteration is used on the complement. As continuation in the parameter proceeds, the projection is efficiently updated, possibly increasing or decreasing the dimension of the unstable subspace. The method is extremely effective when the dimension of the unstable subspace is small compared to the dimension of the system. Convergence proofs are given, and pseudo-arclength continuation on the unstable subspace is introduced to allow continuation past folds. Examples are presented for an important application of the RPM in which a “black-box” time integration scheme is stabilized, enabling it to compute unstable steady states. The RPM can also be used to accelerate iterative procedures when slow convergence is due to a few slowly decaying modes.
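
    The following is a minimal sketch (not the paper's code) of a single RPM step, assuming the fixed-point map F and an orthonormal basis Z of the low-dimensional unstable subspace are given; the identification and updating of that basis, and the pseudo-arclength continuation described above, are omitted.

        import numpy as np

        def rpm_step(F, u, Z, eps=1e-6):
            # One RPM iteration: Newton on the subspace spanned by the columns
            # of Z (n x m, orthonormal), plain fixed-point iteration on its
            # orthogonal complement.
            Fu = F(u)
            m = Z.shape[1]
            # Finite-difference approximation of the small m x m Jacobian
            # Z^T F_u Z of the projected map.
            H = np.empty((m, m))
            for j in range(m):
                H[:, j] = Z.T @ (F(u + eps * Z[:, j]) - Fu) / eps
            # Newton correction for the coordinates of u in the unstable subspace.
            dp = np.linalg.solve(np.eye(m) - H, Z.T @ (Fu - u))
            p_new = Z @ (Z.T @ u + dp)      # Newton update on span(Z)
            q_new = Fu - Z @ (Z.T @ Fu)     # fixed-point update on the complement
            return p_new + q_new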

    Constructive updating/downdating of oblique projectors: a generalization of the Gram-Schmidt process

    A generalization of the Gram-Schmidt procedure is achieved by providing equations for updating and downdating oblique projectors. The work is motivated by the problem of adaptive signal representation outside the orthogonal basis setting. The proposed techniques are shown to be relevant to the problem of discriminating signals produced by different phenomena when the order of the signal model needs to be adjusted. Comment: As it will appear in Journal of Physics A: Mathematical and Theoretical (2007).
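
    The paper's updating/downdating equations are not reproduced in the abstract; the sketch below only shows the standard oblique projector they operate on, namely the projector onto span(V) along the orthogonal complement of span(W) (the matrix names V and W are assumptions of this sketch).

        import numpy as np

        def oblique_projector(V, W):
            # Projector onto span(V) along the orthogonal complement of span(W);
            # requires W^H V to be invertible. Reduces to the orthogonal
            # projector V (V^H V)^{-1} V^H when W = V.
            return V @ np.linalg.solve(W.conj().T @ V, W.conj().T)

        # Basic sanity check: E is idempotent and acts as the identity on span(V).
        rng = np.random.default_rng(0)
        V = rng.standard_normal((6, 3))
        W = rng.standard_normal((6, 3))
        E = oblique_projector(V, W)
        assert np.allclose(E @ E, E) and np.allclose(E @ V, V)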

    Superadiabatic transitions in quantum molecular dynamics

    We study the dynamics of a molecule’s nuclear wave function near an avoided crossing of two electronic energy levels for one nuclear degree of freedom. We derive the general form of the Schrödinger equation in the nth superadiabatic representation for all n ∈ ℕ. Using these results, we obtain closed formulas for the time development of the component of the wave function in an initially unoccupied energy subspace when a wave packet travels through the transition region. In the optimal superadiabatic representation, which we define, this component builds up monotonically. Finally, we give an explicit formula for the transition wave function away from the avoided crossing, which is in excellent agreement with high-precision numerical calculations.
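
    As background (no formulas are given in the abstract), a generic two-level avoided-crossing Hamiltonian for one nuclear coordinate x and its adiabatic energies are

        \[
          H(x) = \begin{pmatrix} X(x) & Z(x) \\ Z(x) & -X(x) \end{pmatrix},
          \qquad
          E_{\pm}(x) = \pm \sqrt{X(x)^2 + Z(x)^2},
        \]

    so the gap 2\sqrt{X^2 + Z^2} remains strictly positive at the avoided crossing. Superadiabatic representations are obtained from the adiabatic one by successive near-identity unitary transformations that reduce the residual off-diagonal coupling order by order in the semiclassical parameter.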

    A note on incremental POD algorithms for continuous time data

    In our earlier work [Fareed et al., Comput. Math. Appl. 75 (2018), no. 6, 1942-1960], we developed an incremental approach to compute the proper orthogonal decomposition (POD) of PDE simulation data. Specifically, we developed an incremental algorithm for the SVD with respect to a weighted inner product for the discrete time POD computations. For continuous time data, we used an approximate approach to arrive at a discrete time POD problem and then applied the incremental SVD algorithm. In this note, we analyze the continuous time case with simulation data that is piecewise constant in time, such that each data snapshot is expanded in a finite collection of basis elements of a Hilbert space. We first show that the POD is determined by the SVD of two different data matrices with respect to weighted inner products. Next, we develop incremental algorithms for approximating the two matrix SVDs with respect to the different weighted inner products. Finally, we show that neither approximate SVD is more accurate than the other; specifically, we show that the incremental algorithms return equivalent results.
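
    A minimal sketch of the kind of one-snapshot incremental SVD update such algorithms build on, written for the Euclidean inner product; the weighted-inner-product and piecewise-constant-in-time details analyzed in the note are not reproduced here, and the function name isvd_update is a placeholder.

        import numpy as np

        def isvd_update(U, S, V, c):
            # Append one snapshot column c to a rank-k factorization U diag(S) V^T,
            # so that [A, c] = U_new diag(S_new) V_new^T (up to rounding).
            d = U.T @ c                     # coefficients of c in the current basis
            r = c - U @ d                   # residual orthogonal to span(U)
            rho = np.linalg.norm(r)
            j = r / rho if rho > 1e-12 else np.zeros_like(c)
            # Small (k+1) x (k+1) core matrix; its SVD yields the updated factors.
            K = np.block([[np.diag(S), d[:, None]],
                          [np.zeros((1, len(S))), np.array([[rho]])]])
            Uk, Sk, Vk_t = np.linalg.svd(K)
            U_new = np.hstack([U, j[:, None]]) @ Uk
            V_new = np.block([[V, np.zeros((V.shape[0], 1))],
                              [np.zeros((1, V.shape[1])), np.ones((1, 1))]]) @ Vk_t.T
            return U_new, Sk, V_new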