
    Intrinsic data depth for Hermitian positive definite matrices

    Nondegenerate covariance, correlation and spectral density matrices are necessarily symmetric or Hermitian and positive definite. The main contribution of this paper is the development of statistical data depths for collections of Hermitian positive definite matrices by exploiting the geometric structure of the space as a Riemannian manifold. The depth functions allow one to naturally characterize the most central or outlying matrices, but also provide a practical framework for inference in the context of samples of positive definite matrices. First, the desired properties of an intrinsic data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally fast pointwise and integrated data depth functions that satisfy each of these requirements and investigate several robustness and efficiency aspects. As an application, we construct depth-based confidence regions for the intrinsic mean of a sample of positive definite matrices, which we apply to the exploratory analysis of a collection of covariance matrices associated with a multicenter research trial.
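
    A minimal sketch of the kind of geometry the paper builds on, assuming the affine-invariant Riemannian metric on Hermitian positive definite matrices: it computes the intrinsic distance d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F and a naive distance-based centrality score for a small sample. The function names and the score are illustrative, not the paper's depth functions.

        # Illustrative only: affine-invariant Riemannian distance on HPD matrices
        # and a naive distance-based centrality score (not the paper's depths).
        import numpy as np
        from scipy.linalg import fractional_matrix_power, logm

        def riemannian_distance(A, B):
            """d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F under the affine-invariant metric."""
            A_inv_sqrt = fractional_matrix_power(A, -0.5)
            return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), 'fro')

        def centrality_score(X, sample):
            """Toy centrality: negative mean distance to the sample (larger = more central)."""
            return -np.mean([riemannian_distance(X, S) for S in sample])

        rng = np.random.default_rng(0)
        sample = []
        for _ in range(5):
            G = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
            sample.append(G @ G.conj().T + np.eye(2))   # 2x2 Hermitian positive definite

        scores = [centrality_score(S, sample) for S in sample]
        print("most central matrix index:", int(np.argmax(scores)))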

    CosmoHammer: Cosmological parameter estimation with the MCMC Hammer

    We study the benefits and limits of parallelised Markov chain Monte Carlo (MCMC) sampling in cosmology. MCMC methods are widely used for the estimation of cosmological parameters from a given set of observations and are typically based on the Metropolis-Hastings algorithm. Some of the required calculations can, however, be computationally intensive, meaning that a single long chain can take several hours or days to calculate. In practice, this can be limiting, since the MCMC process needs to be performed many times to test the impact of possible systematics and to understand the robustness of the measurements being made. To achieve greater speed through parallelisation, MCMC algorithms need to have short auto-correlation times and minimal overheads caused by tuning and burn-in. The resulting scalability is hence influenced by two factors: the MCMC overheads and the parallelisation costs. In order to efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure, we developed a Python framework called CosmoHammer which embeds emcee, an implementation by Foreman-Mackey et al. (2012) of the affine invariant ensemble sampler by Goodman and Weare (2010). We test the performance of CosmoHammer for cosmological parameter estimation from cosmic microwave background data. While Metropolis-Hastings is dominated by overheads, CosmoHammer is able to accelerate the sampling process from a wall time of 30 hours on a dual core notebook to 16 minutes by scaling out to 2048 cores. Such short wall times for complex data sets open up possibilities for extensive model testing and control of systematics. Comment: Published version. 17 pages, 6 figures. The code is available at http://www.astro.ethz.ch/refregier/research/Software/cosmohamme
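
    A minimal sketch of the underlying sampler in isolation: emcee's affine-invariant ensemble sampler run on a toy two-dimensional Gaussian posterior, with a multiprocessing pool standing in for the large-scale parallelisation CosmoHammer performs. This is not CosmoHammer's API, and no cosmological likelihood is involved.

        # Toy use of the affine-invariant ensemble sampler (emcee) on a 2-D Gaussian;
        # not CosmoHammer itself and not a CMB likelihood.
        import numpy as np
        import emcee
        from multiprocessing import Pool

        def log_prob(theta):
            """Toy log-posterior: standard 2-D Gaussian."""
            return -0.5 * np.sum(theta ** 2)

        ndim, nwalkers, nsteps = 2, 32, 2000
        p0 = np.random.randn(nwalkers, ndim)          # initial walker positions

        if __name__ == "__main__":
            with Pool() as pool:                       # parallelise the likelihood calls
                sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
                sampler.run_mcmc(p0, nsteps)
            # discard burn-in, thin roughly by the autocorrelation time, inspect samples
            samples = sampler.get_chain(discard=200, thin=10, flat=True)
            print(samples.mean(axis=0), samples.std(axis=0))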

    Multi-Architecture Monte-Carlo (MC) Simulation of Soft Coarse-Grained Polymeric Materials: SOft coarse grained Monte-carlo Acceleration (SOMA)

    Multi-component polymer systems are important for the development of new materials because of their ability to phase-separate or self-assemble into nano-structures. The Single-Chain-in-Mean-Field (SCMF) algorithm in conjunction with a soft, coarse-grained polymer model is an established technique to investigate these soft-matter systems. Here we present an implementation of this method: SOft coarse grained Monte-carlo Acceleration (SOMA). It is suitable for simulating large systems with up to billions of particles, yet versatile enough to study the properties of different kinds of molecular architectures and interactions. We achieve efficient simulations by employing accelerators such as GPUs on workstations as well as supercomputers. The implementation remains flexible and maintainable because it is written in a scientific programming language enhanced by OpenACC pragmas for the accelerators. We present implementation details and features of the program package, investigate the scalability of our implementation SOMA, and discuss two applications, which cover system sizes that are difficult to reach with other, common particle-based simulation methods.
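
    As a rough, hedged illustration of the particle-based Monte Carlo flavour of such simulations (not the SCMF algorithm, and far simpler than SOMA, which targets accelerators via OpenACC), the toy sketch below performs Metropolis moves on a single coarse-grained bead-spring chain with harmonic bonds only and no mean-field interactions.

        # Toy Metropolis sweep over one coarse-grained bead-spring chain with
        # harmonic bonds only.  This is *not* the SCMF algorithm or SOMA's code.
        import numpy as np

        rng = np.random.default_rng(1)
        N, k_bond, beta, step = 32, 3.0, 1.0, 0.2
        chain = np.cumsum(rng.standard_normal((N, 3)), axis=0)   # random-walk start

        def bond_energy(pos):
            bonds = np.diff(pos, axis=0)
            return 0.5 * k_bond * np.sum(bonds ** 2)

        def mc_sweep(pos):
            """One Metropolis sweep: trial displacement of each bead in turn."""
            for i in range(len(pos)):
                old = pos[i].copy()
                e_old = bond_energy(pos)
                pos[i] += step * rng.uniform(-1, 1, 3)
                if rng.random() >= np.exp(-beta * (bond_energy(pos) - e_old)):
                    pos[i] = old                      # reject: restore old position
            return pos

        for sweep in range(100):
            chain = mc_sweep(chain)
        print("mean squared end-to-end distance:", np.sum((chain[-1] - chain[0]) ** 2))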

    Alternative fidelity measure for quantum states

    We propose an alternative fidelity measure (namely, a measure of the degree of similarity) between quantum states and benchmark it against a number of properties of the standard Uhlmann-Jozsa fidelity. This measure is a simple function of the linear entropy and the Hilbert-Schmidt inner product between the given states and is thus, in comparison, not as computationally demanding. It also features several remarkable properties such as being jointly concave and satisfying all of "Jozsa's axioms". The trade-off, however, is that it is supermultiplicative and does not behave monotonically under quantum operations. In addition, new metrics for the space of density matrices are identified and the joint concavity of the Uhlmann-Jozsa fidelity for qubit states is established. Comment: 12 pages, 3 figures. v2 includes minor changes, new references and new numerical results (Sec. IV).
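
    The abstract only says the measure is a simple function of the linear entropies and the Hilbert-Schmidt inner product of the two states. The sketch below computes one candidate of exactly that form, Tr(rho sigma) + sqrt((1 - Tr rho^2)(1 - Tr sigma^2)), alongside the standard Uhlmann-Jozsa fidelity for comparison; whether this particular combination is the paper's measure is an assumption here.

        # Hedged sketch: a fidelity-like quantity built from the Hilbert-Schmidt inner
        # product and the linear entropies, compared with the Uhlmann-Jozsa fidelity.
        # Whether this exact combination matches the paper's measure is an assumption.
        import numpy as np
        from scipy.linalg import sqrtm

        def candidate_fidelity(rho, sigma):
            hs = np.trace(rho @ sigma).real               # Hilbert-Schmidt inner product
            lin_r = 1.0 - np.trace(rho @ rho).real        # linear entropy of rho
            lin_s = 1.0 - np.trace(sigma @ sigma).real    # linear entropy of sigma
            return hs + np.sqrt(max(lin_r, 0.0) * max(lin_s, 0.0))

        def uhlmann_jozsa_fidelity(rho, sigma):
            s = sqrtm(rho)
            return np.trace(sqrtm(s @ sigma @ s)).real ** 2

        # Example: two single-qubit mixed states (Hermitian, unit trace, positive)
        rho   = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
        sigma = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)
        print(candidate_fidelity(rho, sigma), uhlmann_jozsa_fidelity(rho, sigma))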

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made the process of crawling a very challenging one. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face. Different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem. Additionally, capturing the model of a modern web application and extracting data from it automatically is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present. We introduce criteria to evaluate the relative performance of web crawlers. Based on these criteria we plot the evolution of web crawlers and compare their performance.
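
    A minimal illustration of the basic crawl loop described above (maintain a frontier of URLs, fetch a page, extract its links, enqueue unseen ones), using only the Python standard library. Real crawlers of course add robots.txt handling, politeness delays, deduplication, and scheduling policies; the seed URL here is just a placeholder.

        # Minimal breadth-first crawl loop: frontier queue -> fetch -> extract links.
        # Illustrative only; no robots.txt handling, politeness delays, or retries.
        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urlparse
        from urllib.request import urlopen

        class LinkExtractor(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(seed, max_pages=10):
            frontier, seen = deque([seed]), {seed}
            while frontier and len(seen) <= max_pages:
                url = frontier.popleft()
                try:
                    html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
                except Exception:
                    continue                          # skip unreachable pages
                parser = LinkExtractor()
                parser.feed(html)
                for href in parser.links:
                    link = urljoin(url, href)
                    if urlparse(link).scheme in ("http", "https") and link not in seen:
                        seen.add(link)
                        frontier.append(link)
            return seen

        print(crawl("https://example.com"))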

    An Optimized and Scalable Eigensolver for Sequences of Eigenvalue Problems

    In many scientific applications the solution of non-linear differential equations is obtained through the set-up and solution of a number of successive eigenproblems. These eigenproblems can be regarded as a sequence whenever the solution of one problem fosters the initialization of the next. In addition, in some eigenproblem sequences there is a connection between the solutions of adjacent eigenproblems. Whenever it is possible to unravel the existence of such a connection, the eigenproblem sequence is said to be correlated. When faced with a sequence of correlated eigenproblems, the current strategy amounts to solving each eigenproblem in isolation. We propose an alternative approach which exploits such correlation through the use of an eigensolver based on subspace iteration and accelerated with Chebyshev polynomials (ChFSI). The resulting eigensolver is optimized by minimizing the number of matrix-vector multiplications and parallelized using the Elemental library framework. Numerical results show that ChFSI achieves excellent scalability and is competitive with current dense linear algebra parallel eigensolvers. Comment: 23 pages, 6 figures. First revision of an invited submission to a special issue of Concurrency and Computation: Practice and Experience.
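
    A toy sketch of the idea behind Chebyshev-filtered subspace iteration, as named in the abstract: a Chebyshev polynomial filter damps the unwanted part of the spectrum of a symmetric matrix, then a Rayleigh-Ritz step extracts approximate eigenpairs. The dense NumPy code below only illustrates the principle; the paper's optimizations, the handling of correlated sequences, and the Elemental-based parallelisation are not reproduced.

        # Toy Chebyshev-filtered subspace iteration for the lowest eigenpairs of a
        # symmetric matrix; illustrative only, not the ChFSI implementation.
        import numpy as np

        def cheb_filter(A, V, degree, lo, hi):
            """Apply a degree-m Chebyshev filter that damps eigenvalues in [lo, hi]."""
            c, e = 0.5 * (hi + lo), 0.5 * (hi - lo)      # center and half-width
            Y = (A @ V - c * V) / e                      # first-order Chebyshev term
            for _ in range(2, degree + 1):
                Y_new = 2.0 * (A @ Y - c * Y) / e - V    # three-term recurrence
                V, Y = Y, Y_new
            return Y

        rng = np.random.default_rng(0)
        n, k = 200, 5
        A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)   # symmetric test matrix

        V = np.linalg.qr(rng.standard_normal((n, k)))[0]        # starting subspace
        evals_all = np.linalg.eigvalsh(A)
        lo, hi = evals_all[k], evals_all[-1]                    # unwanted interval (known here only to keep the demo short)

        for it in range(20):
            V = cheb_filter(A, V, degree=8, lo=lo, hi=hi)
            V, _ = np.linalg.qr(V)                              # re-orthonormalize the basis
            H = V.T @ A @ V                                     # Rayleigh-Ritz projection
            w, Q = np.linalg.eigh(H)
            V = V @ Q

        print("approx lowest eigenvalues:", w)
        print("exact  lowest eigenvalues:", evals_all[:k])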