
    Beyond the Spectral Theorem: Spectrally Decomposing Arbitrary Functions of Nondiagonalizable Operators

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the linear operator techniques that one would then use often simply fail, since the operators cannot be diagonalized. This curse is well known. It also occurs for finite-dimensional linear operators. We circumvent it by developing a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. It extends the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics are relevant, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator. In particular, we show that the Drazin inverse, previously defined only axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus, and we give a general method to construct it. We provide new formulae for constructing projection operators and delineate the relations between projection operators, eigenvectors, and generalized eigenvectors. By way of illustrating its application, we explore several rather distinct examples. Comment: 29 pages, 4 figures, expanded historical citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/bst.ht
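    The Drazin inverse mentioned in the abstract can also be computed numerically without the functional calculus, via the standard identity A^D = A^l pinv(A^(2l+1)) A^l, valid for any l at least the index of A (pinv is the Moore-Penrose pseudoinverse). A minimal sketch; the function name and the choice l = n are ours, not from the paper:

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l pinv(A^(2l+1)) A^l.

    Any l >= index(A) works; l = n (the matrix dimension) is always a
    safe, if wasteful, choice, since the index never exceeds n.
    """
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    core = np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1))
    return Al @ core @ Al

# A singular matrix: ordinary inversion fails, but the Drazin inverse
# exists and inverts A on its non-nilpotent part.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])
AD = drazin(A)  # -> [[0.5, 0], [0, 0]]
```

The result satisfies the three axioms that previously defined the Drazin inverse: A^D A A^D = A^D, A A^D = A^D A, and A^(k+1) A^D = A^k for k the index of A.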

    Graphical Markov models, unifying results and their interpretation

    Graphical Markov models combine conditional independence constraints with graphical representations of stepwise data-generating processes. The models started to be formulated about 40 years ago and vigorous development is ongoing. Longitudinal observational studies as well as intervention studies are best modeled via a subclass called regression graph models and, especially, traceable regressions. Regression graphs include two types of undirected graphs and directed acyclic graphs in ordered sequences of joint responses. Response components may correspond to discrete or continuous random variables and may depend exclusively on variables which have been generated earlier. These aspects are essential when causal hypotheses are the motivation for the planning of empirical studies. To turn the graphs into useful tools for tracing developmental pathways and for predicting structure in alternative models, the generated distributions have to mimic some properties of joint Gaussian distributions. Here, relevant results concerning these aspects are spelled out and illustrated by examples. With regression graph models, it becomes feasible, for the first time, to derive structural effects of (1) ignoring some of the variables, (2) selecting subpopulations via fixed levels of some other variables, or (3) changing the order in which the variables might get generated. Thus, the most important future applications of these models will aim at the best possible integration of knowledge from related studies. Comment: 34 pages, 11 figures, 1 table
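    The structural reasoning described above — reading off which independences survive after conditioning on (i.e., "selecting subpopulations via") some variables — can be illustrated with the classical d-separation test on a directed acyclic graph. A minimal sketch using the moralization criterion (ancestral subgraph, "marry" co-parents, drop directions, delete the conditioning set, test connectivity); the dictionary representation and names are our illustrative choices, not from the paper:

```python
def d_separated(parents, x, y, z):
    """True if x and y are d-separated given the list z in a DAG.

    `parents` maps each node to the set of its parents.
    """
    # 1. ancestral closure of {x, y} and the conditioning set
    anc, stack = set(), [x, y, *z]
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents.get(v, ()))
    # 2. moralize: undirected parent-child edges plus married co-parents
    und = {v: set() for v in anc}
    for v in anc:
        ps = [p for p in parents.get(v, ()) if p in anc]
        for p in ps:
            und[v].add(p)
            und[p].add(v)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                und[p].add(q)
                und[q].add(p)
    # 3. delete the conditioning set and check whether x still reaches y
    seen, stack = set(z), [x]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        if v == y:
            return False
        seen.add(v)
        stack.extend(und[v])
    return True

# Collider A -> C <- B: A and B are marginally independent, but
# conditioning on the joint response C induces dependence.
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
```

Here `d_separated(dag, "A", "B", [])` holds, while `d_separated(dag, "A", "B", ["C"])` does not — the selection effect (2) in the abstract.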

    The Energy Operator for a Model with a Multiparametric Infinite Statistics

    In this paper we consider the energy operator (a free Hamiltonian), in the second-quantized approach, for the multiparameter quon algebras $a_i a_j^\dagger - q_{ij} a_j^\dagger a_i = \delta_{ij}$, $i, j \in I$, with $(q_{ij})_{i,j \in I}$ any hermitian matrix of deformation parameters. We obtain an elegant formula for normally ordered (sometimes called Wick-ordered) series expansions of number operators (which determine a free Hamiltonian). As a main result (see Theorem 1) we prove that the number operators are given, with respect to a basis formed by "generalized Lie elements", by certain normally ordered quadratic expressions with coefficients given precisely by the entries of the inverses of Gram matrices of multiparticle weight spaces. (This settles a conjecture of two of the authors (S.M. and A.P.), stated in [8].) These Gram matrices are hermitian generalizations of Varchenko's matrices, associated to a quantum (symmetric) bilinear form of diagonal arrangements of hyperplanes (see [12]). The solution of the inversion problem of such matrices in [9] (Theorem 2.2.17) leads to an effective formula for the number operators studied in this paper. The one-parameter case, in the monomial basis, was studied by Zagier [15], Stanciu [11] and Møller [6]. Comment: 24 pages. Accepted in J. Phys. A. Math. Ge
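    The flavor of the main result can be seen in the smallest nontrivial case. For two distinct modes, the commutation relations give the two-particle basis {a_1†a_2†|0>, a_2†a_1†|0>} the hermitian Gram matrix [[1, q_12], [conj(q_12), 1]], and the entries of its inverse supply the quadratic number-operator coefficients. A numerical sketch under that two-mode computation (our worked example, with an arbitrary sample value of q_12):

```python
import numpy as np

# Two-particle Gram matrix for distinct modes 1, 2 of the quon algebra
# a_i a_j^† - q_ij a_j^† a_i = δ_ij, from the vacuum overlaps:
# norms are 1 and <0| a_2 a_1 a_2^† a_1^† |0> = q_12.
q12 = 0.3 + 0.4j  # sample deformation parameter with |q12| < 1
G = np.array([[1.0, q12],
              [np.conj(q12), 1.0]])

# The inverse Gram matrix carries the coefficients of the normally
# ordered quadratic expression for the number operators.
G_inv = np.linalg.inv(G)
# Closed form here: G_inv = [[1, -q12], [-conj(q12), 1]] / (1 - |q12|^2)
```

For |q_12| < 1 the Gram matrix is positive definite, so the two-particle states are linearly independent and the inversion is well posed.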

    Singular Value Decomposition of Operators on Reproducing Kernel Hilbert Spaces

    Reproducing kernel Hilbert spaces (RKHSs) play an important role in many statistics and machine learning applications, ranging from support vector machines to Gaussian processes and kernel embeddings of distributions. Operators acting on such spaces are, for instance, required to embed conditional probability distributions in order to implement the kernel Bayes' rule and build sequential data models. It was recently shown that transfer operators such as the Perron-Frobenius or Koopman operator can also be approximated in a similar fashion using covariance and cross-covariance operators, and that eigenfunctions of these operators can be obtained by solving associated matrix eigenvalue problems. The goal of this paper is to provide a solid functional analytic foundation for the eigenvalue decomposition of RKHS operators and to extend the approach to the singular value decomposition. The results are illustrated with simple guiding examples.
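    The reduction from an RKHS operator to a matrix eigenvalue problem can be sketched empirically: eigenpairs of the empirical covariance operator follow from the eigendecomposition of the kernel Gram matrix. A minimal sketch; the Gaussian kernel, the bandwidth, and the synthetic data are our illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))  # synthetic sample points

def gauss_gram(X, sigma=1.0):
    """Gram matrix K[i, j] = k(x_i, x_j) for the Gaussian RBF kernel."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-sq / (2 * sigma ** 2))

K = gauss_gram(X)
# Eigenvalues of the empirical covariance operator
# C = (1/n) sum_i phi(x_i) ⊗ phi(x_i) coincide with those of K / n;
# its eigenfunctions are kernel expansions in the eigenvectors.
evals, evecs = np.linalg.eigh(K / len(X))
```

Since K is symmetric positive semidefinite, the eigenvalues are real and nonnegative, mirroring the self-adjointness of the covariance operator; the paper's point is to make this correspondence, and its SVD analogue for cross-covariance operators, rigorous.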

    MCMC Methods for Multi-Response Generalized Linear Mixed Models: The MCMCglmm R Package

    Generalized linear mixed models provide a flexible framework for modeling a range of data, although with non-Gaussian response variables the likelihood cannot be obtained in closed form. Markov chain Monte Carlo methods solve this problem by sampling from a series of simpler conditional distributions that can be evaluated. The R package MCMCglmm implements such an algorithm for a range of model-fitting problems. More than one response variable can be analyzed simultaneously, and these variables are allowed to follow Gaussian, Poisson, multi(bi)nomial, exponential, zero-inflated and censored distributions. A range of variance structures are permitted for the random effects, including interactions with categorical or continuous variables (i.e., random regression), and more complicated variance structures that arise through shared ancestry, either through a pedigree or through a phylogeny. Missing values are permitted in the response variable(s) and data can be known up to some level of measurement error, as in meta-analysis. All simulation is done in C/C++ using the CSparse library for sparse linear systems.
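    "Sampling from a series of simpler conditional distributions" is the Gibbs idea at the heart of such algorithms. A toy sketch for a Gaussian model with unknown mean and variance, alternating draws from the two full conditionals — an illustration of the principle only, not the MCMCglmm implementation, and the priors and data are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(3.0, 2.0, size=200)  # synthetic data
n, ybar = len(y), y.mean()

# Flat prior on mu, inverse-gamma(a, b) prior on sigma^2; each full
# conditional is then a standard distribution that can be sampled exactly.
a, b = 0.001, 0.001
mu, sig2 = 0.0, 1.0
draws = []
for it in range(2000):
    mu = rng.normal(ybar, np.sqrt(sig2 / n))                # mu | sigma^2, y
    ss = np.sum((y - mu) ** 2)
    sig2 = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + ss / 2))   # sigma^2 | mu, y
    if it >= 500:  # discard burn-in
        draws.append((mu, sig2))
post = np.array(draws)
```

The retained draws approximate the joint posterior; their means recover the sample mean and variance, as expected under these vague priors.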

    Penalized Likelihood Estimation and Iterative Kalman Smoothing for Non-Gaussian Dynamic Regression Models

    Dynamic regression or state space models provide a flexible framework for analyzing non-Gaussian time series and longitudinal data, covering for example models for discrete longitudinal observations. As for non-Gaussian random coefficient models, a direct Bayesian approach leads to numerical integration problems, often intractable for more complicated data sets. Recent Markov chain Monte Carlo methods avoid this by repeated sampling from approximate posterior distributions, but there are still open questions about sampling schemes and convergence. In this article we consider simpler methods of inference based on posterior modes or, equivalently, maximum penalized likelihood estimation. From the latter point of view, the approach can also be interpreted as a nonparametric method for smoothing time-varying coefficients. Efficient smoothing algorithms are obtained by iterating common linear Kalman filtering and smoothing, in the same way as estimation in generalized linear models with fixed effects can be performed by iteratively weighted least squares estimation. The algorithm can be combined with an EM-type method or cross-validation to estimate unknown hyper- or smoothing parameters. The approach is illustrated by applications to a binary time series and a multicategorical longitudinal data set.
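    The linear Kalman smoothing pass that gets iterated can be sketched for a scalar local-level model: a forward filter followed by a backward Rauch-Tung-Striebel smoother. In the article's algorithm this Gaussian pass would be repeated with working observations and weights updated from the non-Gaussian likelihood; the sketch below shows a single pass on toy data (model, variances, and data are our assumptions):

```python
import numpy as np

def kalman_smooth(y, q, r, a0=0.0, p0=1e6):
    """Local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t,
    Var(w) = q, Var(v) = r.  Returns RTS-smoothed state means."""
    n = len(y)
    af, pf = np.empty(n), np.empty(n)    # filtered means / variances
    ap, pp = np.empty(n), np.empty(n)    # one-step predictions
    a, p = a0, p0                        # diffuse initial state
    for t in range(n):
        ap[t], pp[t] = a, p + q          # predict
        k = pp[t] / (pp[t] + r)          # Kalman gain
        a = ap[t] + k * (y[t] - ap[t])   # update with observation t
        p = (1 - k) * pp[t]
        af[t], pf[t] = a, p
    s = np.empty(n)                      # backward (RTS) smoothing
    s[-1] = af[-1]
    for t in range(n - 2, -1, -1):
        g = pf[t] / pp[t + 1]
        s[t] = af[t] + g * (s[t + 1] - ap[t + 1])
    return s

y = np.array([1.1, 0.9, 1.2, 1.0, 0.8, 1.1])  # noisy observations of a level near 1
sm = kalman_smooth(y, q=0.01, r=0.1)
```

With the signal variance q small relative to the noise variance r, the smoothed path is heavily pooled toward the underlying level, which is the "smoothing time-varying coefficients" interpretation in the abstract.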