
    A Localization Approach to Improve Iterative Proportional Scaling in Gaussian Graphical Models

    We discuss an efficient implementation of the iterative proportional scaling procedure in multivariate Gaussian graphical models. We show that the computational cost can be reduced by localizing the update in each iterative step, using the structure of a decomposable model obtained by triangulating the graph associated with the model. Numerical experiments demonstrate the competitive performance of the proposed algorithm. Comment: 12 pages
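As a rough illustration of the classical (non-localized) procedure that the paper accelerates, Gaussian iterative proportional scaling adjusts the concentration matrix one clique at a time until every clique marginal of the model covariance matches the sample covariance. The sketch below is a minimal numpy version under invented inputs (the chain graph, clique list, and iteration count are illustrative, not from the paper):

```python
import numpy as np

def ips_gaussian(S, cliques, n_iter=200):
    """Classical iterative proportional scaling for a Gaussian graphical
    model: adjust the concentration matrix K so that the model covariance
    K^{-1} matches the sample covariance S on every clique marginal.
    Illustrative sketch only, not the paper's localized algorithm."""
    p = S.shape[0]
    K = np.eye(p)
    for _ in range(n_iter):
        for C in cliques:
            idx = np.ix_(C, C)
            Sigma = np.linalg.inv(K)  # current model covariance
            # force the new marginal covariance on clique C to equal S_CC
            K[idx] += np.linalg.inv(S[idx]) - np.linalg.inv(Sigma[idx])
    return K

# A 3-node chain graph 0 -- 1 -- 2 with cliques {0,1} and {1,2}
S = np.array([[1.0, 0.5, 0.25],
              [0.5, 1.0, 0.5],
              [0.25, 0.5, 1.0]])
K = ips_gaussian(S, [[0, 1], [1, 2]])
```

Each block update leaves the model marginal on that clique equal to the corresponding block of S; entries of K outside the cliques (here K[0,2]) are never touched, which is exactly the Markov structure of the model.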

    Iterative Scaling in Curved Exponential Families

    The paper describes a generalized iterative proportional fitting procedure that can be used for maximum likelihood estimation in a special class of the general log-linear model. The models in this class, called relational, apply to multivariate discrete sample spaces that do not necessarily have a Cartesian product structure and may not contain an overall effect. When applied to the cell probabilities, the models without the overall effect are curved exponential families, and the values of the sufficient statistics are reproduced by the MLE only up to a constant of proportionality. The paper shows that Iterative Proportional Fitting, Generalized Iterative Scaling, and Improved Iterative Scaling fail to work for such models. The algorithm proposed here is based on iterated Bregman projections. As a by-product, estimates of the multiplicative parameters are also obtained. An implementation of the algorithm is available as an R package.

    Copula-like inference for discrete bivariate distributions with rectangular support

    After reviewing a large body of literature on the modeling of bivariate discrete distributions with finite support, \cite{Gee20} made a compelling case for the use of the iterative proportional fitting procedure (IPFP), also known as Sinkhorn's algorithm or matrix scaling in the literature, as a sound way to attempt to decompose a bivariate probability mass function into its two univariate margins and a bivariate probability mass function with uniform margins playing the role of a discrete copula. After stating what could be regarded as a discrete analog of Sklar's theorem, we investigate, for starting bivariate p.m.f.s with rectangular support, nonparametric and parametric estimation procedures as well as goodness-of-fit tests for the underlying discrete copula. Related asymptotic results are provided and build upon a new differentiability result for the iterative proportional fitting procedure which can be of independent interest. Theoretical results are complemented by finite-sample experiments and a data example. Comment: 44 pages, 1 figure, 9 tables
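The decomposition described above can be sketched as a few lines of alternating rescaling: IPFP/Sinkhorn scaling drives both margins of a strictly positive p.m.f. to the uniform distribution. The function below is a hypothetical illustration on an invented 4 x 5 table, not the authors' implementation:

```python
import numpy as np

def ipfp_uniform_margins(P, n_iter=300):
    """IPFP / Sinkhorn scaling: alternately rescale the rows and columns
    of a strictly positive bivariate p.m.f. P until both margins are
    uniform.  Minimal sketch, not the paper's code."""
    U = P.astype(float).copy()
    m, n = U.shape
    for _ in range(n_iter):
        U *= (1.0 / m) / U.sum(axis=1, keepdims=True)  # row margins -> 1/m
        U *= (1.0 / n) / U.sum(axis=0, keepdims=True)  # col margins -> 1/n
    return U

# Invented strictly positive p.m.f. on a 4 x 5 rectangular support
rng = np.random.default_rng(0)
P = rng.random((4, 5)) + 0.1
P /= P.sum()
U = ipfp_uniform_margins(P)
```

The limit U has uniform margins and plays the role of the discrete copula of P; for strictly positive P with rectangular support the iteration converges geometrically.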

    Putting Iterative Proportional Fitting on the researcher’s desk

    ‘Iterative Proportional Fitting’ (IPF) is a mathematical procedure originally developed to combine the information from two or more datasets. IPF is a well-established technique, with the theoretical and practical considerations behind the method thoroughly explored and reported. In this paper the theory of IPF is investigated, with a mathematical definition of the procedure and a review of the relevant literature given. So that IPF can be readily accessible to researchers, the procedure has been automated in Visual Basic, and a description of the program and a ‘User Guide’ are provided. IPF is employed in various disciplines but has been particularly useful in census-related analysis to provide updated population statistics and to estimate individual-level attribute characteristics. To illustrate the practical application of IPF, various case studies are described. In the future, demand for individual-level data is thought likely to increase, and it is believed that the IPF procedure and Visual Basic program have the potential to facilitate research in geography and other disciplines.
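The core two-dimensional IPF step alternates between scaling the rows of a seed table to the target row totals and the columns to the target column totals, which is how census-style tables are updated to new margins. A minimal Python sketch under an invented seed table and targets (the paper's own implementation is in Visual Basic):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, n_iter=100):
    """Two-dimensional IPF: alternately scale rows and columns of the
    seed table until its margins match the targets (the two target
    vectors must share the same grand total).  Illustrative sketch."""
    X = seed.astype(float).copy()
    for _ in range(n_iter):
        X *= (row_targets / X.sum(axis=1))[:, None]   # match row totals
        X *= (col_targets / X.sum(axis=0))[None, :]   # match column totals
    return X

# Invented example: update a small cross-tabulation to new known margins
seed = np.array([[10.0, 20.0],
                 [30.0, 40.0]])
X = ipf(seed,
        row_targets=np.array([60.0, 40.0]),
        col_targets=np.array([30.0, 70.0]))
```

The fitted table keeps the interaction structure (odds ratios) of the seed while reproducing the supplied margins.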

    Optimal control theory for unitary transformations

    The dynamics of a quantum system driven by an external field is well described by a unitary transformation generated by a time-dependent Hamiltonian. The inverse problem of finding the field that generates a specific unitary transformation is the subject of study. The unitary transformation, which can represent an algorithm in a quantum computation, is imposed on a subset of quantum states embedded in a larger Hilbert space. Optimal control theory (OCT) is used to solve the inversion problem irrespective of the initial input state. A unified formalism, based on the Krotov method, is developed, leading to a new scheme. The schemes are compared for the inversion of a two-qubit Fourier transform using as registers the vibrational levels of the $X^1\Sigma^+_g$ electronic state of Na$_2$. Raman-like transitions through the $A^1\Sigma^+_u$ electronic state induce the transitions. Light fields are found that are able to implement the Fourier transform within a picosecond time scale. Such fields can be obtained by pulse-shaping techniques of a femtosecond pulse. Of the schemes studied, the square-modulus scheme converges fastest. A study of the implementation of the $Q$-qubit Fourier transform in the Na$_2$ molecule was carried out for up to 5 qubits. The classical computation effort required to obtain the algorithm with a given fidelity is estimated to scale exponentially with the number of levels. The observed moderate scaling of the pulse intensity with the number of qubits in the transformation is rationalized. Comment: 32 pages, 6 figures

    Fermi-LAT Observations of High- and Intermediate-Velocity Clouds: Tracing Cosmic Rays in the Halo of the Milky Way

    It is widely accepted that cosmic rays (CRs) up to at least PeV energies are Galactic in origin. Accelerated particles are injected into the interstellar medium where they propagate to the farthest reaches of the Milky Way, including a surrounding halo. The composition of CRs coming to the solar system can be measured directly and has been used to infer the details of CR propagation that are extrapolated to the whole Galaxy. In contrast, indirect methods, such as observations of gamma-ray emission from CR interactions with interstellar gas, have been employed to directly probe the CR densities in distant locations throughout the Galactic plane. In this article we use 73 months of data from the Fermi Large Area Telescope in the energy range between 300 MeV and 10 GeV to search for gamma-ray emission produced by CR interactions in several high- and intermediate-velocity clouds located up to ~7 kpc above the Galactic plane. We achieve the first detection of intermediate-velocity clouds in gamma rays and set upper limits on the emission from the remaining targets, thereby tracing the distribution of CR nuclei in the halo for the first time. We find that the gamma-ray emissivity per H atom decreases with increasing distance from the plane at 97.5% confidence level. This corroborates the notion that CRs at the relevant energies originate in the Galactic disk. The emissivity of the upper intermediate-velocity Arch hints at a 50% decline of CR densities within 2 kpc from the plane. We compare our results to predictions of CR propagation models. Comment: Accepted for publication in the Astrophysical Journal

    A Parallel Iterative Method for Computing Molecular Absorption Spectra

    We describe a fast parallel iterative method for computing molecular absorption spectra within TDDFT linear response and using the LCAO method. We use a local basis of "dominant products" to parametrize the space of orbital products that occur in the LCAO approach. In this basis, the dynamical polarizability is computed iteratively within an appropriate Krylov subspace. The iterative procedure uses a matrix-free GMRES method to determine the (interacting) density response. The resulting code is about one order of magnitude faster than our previous full-matrix method. This acceleration makes the speed of our TDDFT code comparable with codes based on Casida's equation. The implementation of our method uses hybrid MPI and OpenMP parallelization in which load balancing and memory access are optimized. To validate our approach and to establish benchmarks, we compute spectra of large molecules on various types of parallel machines. The methods developed here are fairly general and we believe they will find useful applications in molecular physics/chemistry, even for problems that are beyond TDDFT, such as organic semiconductors, particularly in photovoltaics. Comment: 20 pages, 17 figures, 3 tables
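A matrix-free GMRES solve of the kind mentioned above supplies the solver with only the action v -> Av; the matrix itself is never formed. The toy example below uses SciPy's GMRES on an invented stand-in operator (not the TDDFT density-response equations of the paper):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Invented, well-conditioned stand-in operator: diagonal part plus a
# weak off-diagonal coupling, applied without ever forming the matrix.
n = 100
diag = np.linspace(1.0, 2.0, n)

def matvec(v):
    out = diag * v            # diagonal part
    out[:-1] += 0.01 * v[1:]  # weak coupling to neighbors
    out[1:] += 0.01 * v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.ones(n)
x, info = gmres(A, b)  # info == 0 signals convergence
```

Only `matvec` needs to be fast; this is what makes Krylov methods attractive when the response matrix is too large to store but its action on a vector is cheap.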