9 research outputs found

    Alternatives to the Rayleigh quotient for the quadratic eigenvalue problem

    Get PDF
    We consider the quadratic eigenvalue problem λ²Ax + λBx + Cx = 0. Suppose that u is an approximation to an eigenvector x (for instance, obtained by a subspace method), and that we want to determine an approximation to the corresponding eigenvalue λ. The usual approach is to impose the Galerkin condition r(θ, u) = (θ²A + θB + C)u ⊥ u, from which it follows that θ must be one of the two solutions to the quadratic equation (u*Au)θ² + (u*Bu)θ + (u*Cu) = 0. An unnatural aspect is that if u = x, the second solution has in general no meaning. When u is not very accurate, it may not be clear which solution is the best. Moreover, when the discriminant of the equation is small, the solutions may be very sensitive to perturbations in u. In this paper we therefore examine alternative approximations to λ. We compare the approaches theoretically and by numerical experiments. The methods are extended to approximations from subspaces and to the polynomial eigenvalue problem.
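
    As a rough sketch of the Galerkin step described in this abstract (not of the alternatives the paper proposes), the short Python snippet below forms the quadratic (u*Au)θ² + (u*Bu)θ + (u*Cu) = 0 for given A, B, C and an approximate eigenvector u, and then chooses between the two roots by the naive heuristic of comparing residual norms; the random test data and the residual-based selection rule are assumptions for illustration only.

    import numpy as np

    def galerkin_candidates(A, B, C, u):
        # The two Galerkin candidates: roots of (u*Au) t^2 + (u*Bu) t + (u*Cu) = 0.
        a = u.conj() @ A @ u
        b = u.conj() @ B @ u
        c = u.conj() @ C @ u
        return np.roots([a, b, c])

    def pick_by_residual(A, B, C, u, candidates):
        # Naive selection rule (an assumption, not the paper's proposal): keep the
        # candidate t with the smaller residual norm ||(t^2 A + t B + C) u||.
        res = [np.linalg.norm(t**2 * (A @ u) + t * (B @ u) + C @ u) for t in candidates]
        return candidates[int(np.argmin(res))]

    rng = np.random.default_rng(0)      # small random test problem, illustration only
    n = 5
    A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)

    cands = galerkin_candidates(A, B, C, u)
    print("candidates:", cands)
    print("chosen:    ", pick_by_residual(A, B, C, u, cands))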

    A Novel Zeroing Neural Network for Solving Time-Varying Quadratic Matrix Equations against Linear Noises

    Get PDF
    Solving quadratic matrix equations is a fundamental problem that arises in optimal control. However, noise acting on the coefficients of a quadratic matrix equation can degrade the accuracy of the solution. To solve the time-varying quadratic matrix equation problem under linear noise, a new error-processing design formula is proposed and a resulting novel zeroing neural network model is developed. The new design formula incorporates second-order error processing, and the double-integration-enhanced zeroing neural network (DIEZNN) model is proposed for solving time-varying quadratic matrix equations subject to linear noise. Compared with the original zeroing neural network (OZNN), finite-time zeroing neural network (FTZNN), and integration-enhanced zeroing neural network (IEZNN) models, the DIEZNN model is superior under linear noise: when the existing models solve a time-varying quadratic matrix equation in the presence of linear noise, their residual error remains large because of the noise, which ultimately causes the solution to fail. The newly proposed DIEZNN model guarantees a valid solution to the time-varying quadratic matrix equation regardless of the magnitude of the linear noise. In addition, theoretical analysis proves that the neural state of the DIEZNN model converges to the theoretical solution even under linear noise. Computer simulation results further substantiate the superiority of the DIEZNN model in solving time-varying quadratic matrix equations under linear noise.
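
    The abstract does not state the DIEZNN design formula itself, so the following Python sketch only illustrates the general idea of a double-integration-enhanced zeroing dynamic on a simple time-varying scalar quadratic equation a(t)x² + b(t)x + c(t) = 0 perturbed by ramp ("linear") noise; the equation, gains, and noise model are assumptions and do not reproduce the paper's matrix-equation setting or its exact formula.

    import numpy as np

    dt, T = 1e-3, 10.0
    k1, k2, k3 = 10.0, 30.0, 30.0            # feedback gains (assumed values)

    # Time-varying coefficients (chosen so a real root exists for all t) and their derivatives.
    def a(t):  return 2.0 + np.sin(t)
    def b(t):  return 3.0 + np.cos(t)
    def c(t):  return -np.sin(t) - 2.0
    def da(t): return np.cos(t)
    def db(t): return -np.sin(t)
    def dc(t): return -np.cos(t)

    x = 0.5                                  # initial state near the positive root
    I1 = I2 = 0.0                            # running single and double integrals of the error
    for k in range(int(T / dt)):
        t = k * dt
        E = a(t) * x**2 + b(t) * x + c(t)    # error function to be driven to zero
        I1 += E * dt
        I2 += I1 * dt
        # Double-integration-enhanced design: dE/dt = -k1*E - k2*(integral of E) - k3*(double integral),
        # here perturbed by a ramp noise term entering the error dynamics.
        dE = -k1 * E - k2 * I1 - k3 * I2 + 0.5 * t
        # Since dE/dt = (2*a*x + b)*dx/dt + (a'*x^2 + b'*x + c'), solve for dx/dt.
        dx = (dE - (da(t) * x**2 + db(t) * x + dc(t))) / (2.0 * a(t) * x + b(t))
        x += dx * dt

    print("final residual:", abs(a(T) * x**2 + b(T) * x + c(T)))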

    Roots of bivariate polynomial systems via determinantal representations

    Get PDF
    We give two determinantal representations for a bivariate polynomial. They may be used to compute the zeros of a system of two of these polynomials via the eigenvalues of a two-parameter eigenvalue problem. The first determinantal representation is suitable for polynomials with scalar or matrix coefficients, and consists of matrices with asymptotic order n²/4, where n is the degree of the polynomial. The second representation is useful for scalar polynomials and has asymptotic order n²/6. The resulting method to compute the roots of a system of two bivariate polynomials is competitive with some existing methods for polynomials up to degree 10, as well as for polynomials with a small number of terms.
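
    To illustrate the second stage mentioned in this abstract, the Python sketch below takes hand-made determinantal representations pᵢ(x, y) = det(Aᵢ + x·Bᵢ + y·Cᵢ) of two small polynomials (a circle and a line, not the constructions proposed in the paper), forms the operator determinants of the associated two-parameter eigenvalue problem, and reads the common roots off generalized eigenvalue problems; all matrices below are assumptions made for this toy example.

    import numpy as np
    from scipy.linalg import eig

    # p1(x, y) = x^2 + y^2 - 1 = det(A1 + x*B1 + y*C1)   (hand-made 2x2 representation)
    A1 = np.array([[0., -1.], [-1., 0.]])
    B1 = np.array([[-1., 0.], [0., -1.]])
    C1 = np.array([[0., -1.], [1., 0.]])

    # p2(x, y) = y - 2x = det(A2 + x*B2 + y*C2)           (trivial 1x1 representation)
    A2 = np.array([[0.]])
    B2 = np.array([[-2.]])
    C2 = np.array([[1.]])

    # Operator determinants of the two-parameter problem (A_i + x*B_i + y*C_i) u_i = 0.
    D0 = np.kron(B1, C2) - np.kron(C1, B2)
    D1 = np.kron(C1, A2) - np.kron(A1, C2)
    D2 = np.kron(A1, B2) - np.kron(B1, A2)

    # x-coordinates of the roots: D1 z = x D0 z; y is recovered from the same eigenvectors.
    xs, Z = eig(D1, D0)
    for xval, z in zip(xs, Z.T):
        yval = (z.conj() @ np.linalg.solve(D0, D2 @ z)) / (z.conj() @ z)
        print("root: x = %+.6f, y = %+.6f" % (xval.real, yval.real))

    This toy example should print the two intersections of the unit circle with the line y = 2x, approximately (±0.4472, ±0.8944).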

    Alternatives to the Rayleigh quotient for the quadratic eigenvalue problem

    Get PDF
    We consider the quadratic eigenvalue problem λ²Ax + λBx + Cx = 0. Suppose that u is an approximation to an eigenvector x (for instance, obtained by a subspace method) and that we want to determine an approximation to the corresponding eigenvalue λ. The usual approach is to impose the Galerkin condition r(θ, u) = (θ²A + θB + C)u ⊥ u, from which it follows that θ must be one of the two solutions to the quadratic equation (u*Au)θ² + (u*Bu)θ + (u*Cu) = 0. An unnatural aspect is that if u = x, the second solution has in general no meaning. When u is not very accurate, it may not be clear which solution is the best. Moreover, when the discriminant of the equation is small, the solutions may be very sensitive to perturbations in u. In this paper we therefore examine alternative approximations to λ. We compare the approaches theoretically and by numerical experiments. The methods are extended to approximations from subspaces and to the polynomial eigenvalue problem.

    Alternatives to the Rayleigh quotient for the quadratic eigenvalue problem

    No full text
    Abstract. We consider the quadratic eigenvalue problem λ²Ax + λBx + Cx = 0. Suppose that u is an approximation to an eigenvector x (for instance, obtained by a subspace method) and that we want to determine an approximation to the corresponding eigenvalue λ. The usual approach is to impose the Galerkin condition r(θ, u) = (θ²A + θB + C)u ⊥ u, from which it follows that θ must be one of the two solutions to the quadratic equation (u*Au)θ² + (u*Bu)θ + (u*Cu) = 0. An unnatural aspect is that if u = x, the second solution has in general no meaning. When u is not very accurate, it may not be clear which solution is the best. Moreover, when the discriminant of the equation is small, the solutions may be very sensitive to perturbations in u. In this paper we therefore examine alternative approximations to λ. We compare the approaches theoretically and by numerical experiments. The methods are extended to approximations from subspaces and to the polynomial eigenvalue problem.