
    The Lyapunov matrix equation. Matrix analysis from a computational perspective

    Decay properties of the solution $X$ to the Lyapunov matrix equation $AX + XA^T = D$ are investigated. Their exploitation in the understanding of the properties of the matrix equation, and in the development of new numerical solution strategies when $D$ is not low rank but possibly sparse, is also briefly discussed.
    Comment: This work is a contribution to the seminar series "Topics in Mathematics" of the PhD Program of the Mathematics Department, Università di Bologna, Italy.
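
    The decay phenomenon can be observed directly on a small instance. The following is a minimal NumPy/SciPy sketch (not the solution strategies discussed in the work): it solves $AX + XA^T = D$ for a tridiagonal $A$ and a sparse (here diagonal) $D$ with a dense Bartels-Stewart solver, then prints how the entries of $X$ decay away from the main diagonal.

```python
# Minimal sketch (not the paper's method): observe the off-diagonal decay of the
# solution X of A X + X A^T = D when A is banded (here tridiagonal) and D is sparse.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 200
# A: symmetric negative-definite tridiagonal matrix (1D discrete Laplacian).
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# D: sparse (here diagonal) right-hand side, not low rank.
D = np.diag(np.sign(np.sin(np.arange(1, n + 1))))

# Dense solve of A X + X A^T = D via Bartels-Stewart.
X = solve_continuous_lyapunov(A, D)

# Report how quickly |X_ij| decays away from the main diagonal.
for k in (0, 5, 10, 20, 40):
    print(f"max |X_ij| on off-diagonal {k:3d}: {np.max(np.abs(np.diag(X, k))):.2e}")
```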

    Approximation of functions of large matrices with Kronecker structure

    We consider the numerical approximation of $f(\mathcal{A})b$, where $b\in\mathbb{R}^{N}$ and $\mathcal{A}$ is the sum of Kronecker products, that is $\mathcal{A}=M_2 \otimes I + I \otimes M_1\in\mathbb{R}^{N\times N}$. Here $f$ is a regular function such that $f(\mathcal{A})$ is well defined. We derive a computational strategy that significantly lowers the memory requirements and computational effort of the standard approximations, with special emphasis on the exponential function, for which the new procedure becomes particularly advantageous. Our findings are illustrated by numerical experiments with typical functions used in applications.
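
    For $f=\exp$ the source of the savings is the identity $\exp(M_2\otimes I + I\otimes M_1)\,\mathrm{vec}(B) = \mathrm{vec}\big(\exp(M_1)\,B\,\exp(M_2)^T\big)$ (with column-stacked vec), so only the small factors ever need to be exponentiated. A minimal sketch verifying this identity on a small example (it illustrates the structure being exploited, not the Krylov-based procedure derived in the paper):

```python
# Minimal sketch of the Kronecker-structure identity that underlies the memory
# savings for f = exp (not the paper's approximation procedure):
#   exp(M2 (x) I + I (x) M1) vec(B) = vec( exp(M1) B exp(M2)^T ).
import numpy as np
from scipy.linalg import expm

n1, n2 = 30, 25
rng = np.random.default_rng(0)
M1 = -np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
M2 = -np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B = rng.standard_normal((n1, n2))
b = B.ravel(order="F")                     # b = vec(B), column-stacked

# Naive approach: form the N x N matrix (N = n1*n2) and exponentiate it.
A = np.kron(M2, np.eye(n1)) + np.kron(np.eye(n2), M1)
y_full = expm(A) @ b

# Structured approach: only n1 x n1 and n2 x n2 exponentials are needed.
Y = expm(M1) @ B @ expm(M2).T
y_struct = Y.ravel(order="F")

print("relative difference:", np.linalg.norm(y_full - y_struct) / np.linalg.norm(y_full))
```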

    Matrix-equation-based strategies for convection-diffusion equations

    We are interested in the numerical solution of nonsymmetric linear systems arising from the discretization of convection-diffusion partial differential equations with separable coefficients and dominant convection. Preconditioners based on the matrix equation formulation of the problem are proposed, which naturally approximate the original discretized problem. For certain types of convection coefficients, we show that the explicit solution of the matrix equation can effectively replace the linear system solution. Numerical experiments with data stemming from two- and three-dimensional problems are reported, illustrating the potential of the proposed methodology.
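
    As an illustration of the matrix equation formulation (with constant convection coefficients, an assumption made here only for brevity), a centered finite-difference discretization of $-\varepsilon\Delta u + w\,u_x + v\,u_y = f$ on a square grid can be written as a Sylvester equation $A_1 U + U A_2^T = F$ and solved directly, instead of solving the equivalent Kronecker-form linear system:

```python
# Minimal sketch (constant convection coefficients assumed for brevity): the 2D
# convection-diffusion discretization as a Sylvester matrix equation A1 U + U A2^T = F.
import numpy as np
from scipy.linalg import solve_sylvester

n, eps, w, v = 40, 1e-2, 1.0, 0.5
h = 1.0 / (n + 1)
T = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -d^2/dx^2
C = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * h)                # d/dx (centered)
A1 = eps * T + w * C     # acts along x (rows of U)
A2 = eps * T + v * C     # acts along y (columns of U)
F = np.ones((n, n))

# Matrix-equation solve: A1 U + U A2^T = F.
U = solve_sylvester(A1, A2.T, F)

# Check against the Kronecker-form linear system (I (x) A1 + A2 (x) I) vec(U) = vec(F).
K = np.kron(np.eye(n), A1) + np.kron(A2, np.eye(n))
u_vec = np.linalg.solve(K, F.ravel(order="F"))
print("relative difference:", np.linalg.norm(u_vec - U.ravel(order="F")) / np.linalg.norm(u_vec))
```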

    Inexact Arnoldi residual estimates and decay properties for functions of non-Hermitian matrices

    We derive a priori residual-type bounds for the Arnoldi approximation of a matrix function, together with a strategy for setting the iteration accuracies in the inexact Arnoldi approximation of matrix functions. Such results are based on the decay behavior of the entries of functions of banded matrices. Specifically, we use a priori decay bounds for the entries of functions of banded non-Hermitian matrices obtained by means of Faber polynomial series. Numerical experiments illustrate the quality of the results.
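
    For reference, the (exact) Arnoldi approximation whose residual is being bounded has the form $f(A)b \approx \|b\|\, V_m f(H_m) e_1$. A minimal sketch with $f=\exp$ and a banded non-Hermitian $A$ (the inexact variant and the decay bounds themselves are not reproduced here):

```python
# Minimal sketch (not the paper's inexact strategy or bounds): the standard Arnoldi
# approximation f(A) b ~= ||b|| * V_m f(H_m) e_1, here with f = exp and a banded,
# non-Hermitian A.
import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    """m steps of the Arnoldi process; returns V (n x (m+1)) and H ((m+1) x m)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

n, m = 400, 30
A = -2 * np.eye(n) + np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)   # banded, non-Hermitian
b = np.ones(n)

V, H = arnoldi(A, b, m)
fm = np.linalg.norm(b) * V[:, :m] @ expm(H[:m, :m])[:, 0]     # Arnoldi approximation
exact = expm(A) @ b
print("relative error:", np.linalg.norm(fm - exact) / np.linalg.norm(exact))
```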

    Order reduction methods for solving large-scale differential matrix Riccati equations

    We consider the numerical solution of large-scale symmetric differential matrix Riccati equations. Under certain hypotheses on the data, reduced order methods have recently arisen as a promising class of solution strategies, by forming low-rank approximations to the sought-after solution at selected timesteps. We show that great computational and memory savings are obtained by a reduction process onto rational Krylov subspaces, as opposed to current approaches. By specifically addressing the solution of the reduced differential equation and reliable stopping criteria, we are able to obtain accurate final approximations at low memory and computational cost. This is achieved by employing a two-phase strategy that separately enhances the accuracy of the algebraic approximation and of the time integration. The new method allows us to numerically solve much larger problems than currently possible in the literature. Numerical experiments on benchmark problems illustrate the effectiveness of the procedure with respect to existing solvers.
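
    The projection idea can be sketched as follows; note that this toy version uses a plain block Krylov basis and a generic ODE integrator, whereas the paper advocates rational Krylov subspaces together with a two-phase treatment of the reduced differential equation and the time integration.

```python
# Minimal sketch of the projection idea only (the paper uses *rational* Krylov bases
# and a two-phase strategy; a plain block Krylov basis and a generic integrator are
# used here for brevity). DRE:  X' = A^T X + X A - X B B^T X + C^T C,  X(0) = 0.
import numpy as np
from scipy.integrate import solve_ivp

n, p, k = 100, 2, 4                      # state dim, outputs, Krylov blocks
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))

# Orthonormal basis of the block Krylov space span{C^T, A^T C^T, ..., (A^T)^{k-1} C^T}.
blocks = [C.T]
for _ in range(k - 1):
    blocks.append(A.T @ blocks[-1])
V, _ = np.linalg.qr(np.hstack(blocks))
m = V.shape[1]

# Reduced data and reduced differential Riccati equation for Y(t) (m x m).
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V
def dre(_, y):
    Y = y.reshape(m, m)
    dY = Ar.T @ Y + Y @ Ar - Y @ Br @ Br.T @ Y + Cr.T @ Cr
    return dY.ravel()

sol = solve_ivp(dre, (0.0, 1.0), np.zeros(m * m), rtol=1e-8, atol=1e-10)
Y_T = sol.y[:, -1].reshape(m, m)
X_T = V @ Y_T @ V.T                      # low-rank approximation at the final time
print("approximate rank of X(1):", np.linalg.matrix_rank(X_T, tol=1e-8))
```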

    Reward sharpens orientation coding independently of attention

    Reward improves performance. Is this due to modulations of the output modules of the neural systems, or are there mechanisms favoring more 'generous' inputs? Some recent studies have included V1 in the circuitry of reward-based modulations, but the effects of reward can easily be confused with effects of attention. Here we address this issue with a psychophysical dual task that controls attention while orientation sensitivity is measured for targets associated with different levels of reward. We found that different reward rates improve orientation discrimination and sharpen the internal response distributions. The data are affected neither by changes in attentional load nor by dissociating the feature of the reward cue from the feature relevant for the task. This suggests that reward may act independently of attention by modulating the activity of early sensory stages, perhaps V1, through an SNR improvement of task-relevant channels. Reward acts like attention, but through separate channels.

    Numerical methods for large-scale Lyapunov equations with symmetric banded data

    The numerical solution of large-scale Lyapunov matrix equations with symmetric banded data has so far received little attention in the rich literature on Lyapunov equations. We aim to contribute to this open problem by introducing two efficient solution methods, which respectively address the cases of well-conditioned and ill-conditioned coefficient matrices. The proposed approaches conveniently exploit the possibly hidden structure of the solution matrix so as to deliver memory- and computation-saving approximate solutions. Numerical experiments are reported to illustrate the potential of the described methods.
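
    As a point of reference for the well-conditioned case, a matrix-oriented conjugate gradient applied to the operator $L(X)=AX+XA$ (symmetric positive definite in the Frobenius inner product whenever $A$ is) already requires only banded matrix products per iteration; the methods introduced in the paper go further by also exploiting the structure of the solution itself. A minimal sketch of such a matrix-oriented CG (not the paper's algorithms):

```python
# Minimal sketch (not the methods proposed in the paper): a matrix-oriented conjugate
# gradient for  A X + X A = D  with A symmetric positive definite and banded. Each
# iteration needs only banded matrix products.
import numpy as np

def lyap_cg(A, D, tol=1e-10, maxit=500):
    """CG in the Frobenius inner product for the SPD operator L(X) = A X + X A."""
    X = np.zeros_like(D)
    R = D - (A @ X + X @ A)
    P = R.copy()
    rho = np.sum(R * R)
    for _ in range(maxit):
        Q = A @ P + P @ A                 # operator application: two banded products
        alpha = rho / np.sum(P * Q)
        X += alpha * P
        R -= alpha * Q
        rho_new = np.sum(R * R)
        if np.sqrt(rho_new) <= tol * np.linalg.norm(D):
            break
        P = R + (rho_new / rho) * P
        rho = rho_new
    return X

n = 300
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # SPD, tridiagonal
D = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)   # symmetric banded data
X = lyap_cg(A, D)
print("residual norm:", np.linalg.norm(A @ X + X @ A - D))
```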