    A new efficient and accurate spline algorithm for the matrix exponential computation

    [EN] In this work, an accurate and efficient method based on matrix splines for computing the matrix exponential is given. An algorithm and a MATLAB implementation have been developed and compared with state-of-the-art algorithms for computing the matrix exponential. We also developed a parallel implementation for large-scale problems, which gave much better performance on problems of this kind. This work has been supported by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF) grant TIN2014-59294-P. Defez Candel, E.; Ibáñez González, JJ.; Sastre, J.; Peinado Pinilla, J.; Alonso-Jordá, P. (2018). A new efficient and accurate spline algorithm for the matrix exponential computation. Journal of Computational and Applied Mathematics 337(1):354-365. https://doi.org/10.1016/j.cam.2017.11.029
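    The spline algorithm itself is not reproduced in the abstract, but the standard baseline it is compared against can be sketched. Below is a minimal scaling-and-squaring matrix exponential with a truncated Taylor series; the function name, truncation order, and scaling threshold are illustrative choices, not the paper's method.

```python
import numpy as np

def expm_taylor_ss(A, order=12, theta=0.5):
    """Matrix exponential via scaling-and-squaring with a truncated
    Taylor series. A simple baseline for comparison only; this is NOT
    the spline algorithm of the paper, and `order`/`theta` are
    illustrative, not tuned values."""
    A = np.asarray(A, dtype=float)
    nrm = np.linalg.norm(A, 1)
    # Scale A down until its 1-norm is below theta, so the series converges fast.
    s = max(0, int(np.ceil(np.log2(nrm / theta)))) if nrm > theta else 0
    B = A / 2.0**s
    # Truncated Taylor series: I + B + B^2/2! + ... + B^order/order!
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ B / k
        E = E + term
    # Undo the scaling by repeated squaring: exp(A) = (exp(A / 2^s))^(2^s).
    for _ in range(s):
        E = E @ E
    return E
```

For a nilpotent matrix such as [[0, 1], [0, 0]] the series is exact after two terms, which makes this easy to sanity-check against the known closed form [[1, 1], [0, 1]].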

    Nonparametric likelihood based estimation of linear filters for point processes

    We consider models for multivariate point processes where the intensity is given nonparametrically in terms of functions in a reproducing kernel Hilbert space. The likelihood function involves a time integral and is consequently not given in terms of a finite number of kernel evaluations. The main result is a representation of the gradient of the log-likelihood, which we use to derive computable approximations of the log-likelihood and the gradient by time discretization. These approximations are then used to minimize the approximate penalized log-likelihood. For time and memory efficiency, the implementation relies crucially on the use of sparse matrices. As an illustration we consider neuron network modeling, and we use this example to investigate how the computational costs of the approximations depend on the resolution of the time discretization. The implementation is available in the R package ppstat. Comment: 10 pages, 3 figures
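    The time-discretization idea is generic enough to sketch: the point-process log-likelihood is the sum of log-intensities at the events minus the integral of the intensity over the observation window, and the integral is replaced by a Riemann sum on a regular grid. This sketch assumes a known intensity function and omits the paper's RKHS parametrization and sparse basis-function matrices; the function name and grid size are illustrative.

```python
import numpy as np

def approx_loglik(event_times, intensity, T, n_grid=1000):
    """Midpoint-rule approximation of the point-process log-likelihood
        sum_i log(lambda(t_i)) - integral_0^T lambda(t) dt
    on [0, T]. A generic sketch: `intensity` is a vectorized function
    of time; the RKHS/sparse-matrix machinery of the paper is omitted."""
    # Midpoints of n_grid equal cells covering [0, T].
    grid = np.linspace(0.0, T, n_grid, endpoint=False) + T / (2 * n_grid)
    # Riemann-sum approximation of the compensator integral.
    integral = np.sum(intensity(grid)) * T / n_grid
    return np.sum(np.log(intensity(np.asarray(event_times)))) - integral
```

Halving the grid spacing roughly doubles the cost of the integral term while leaving the event-sum term untouched, which is the cost/resolution trade-off the abstract refers to.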

    Statistical Gravitational Waveform Models: What to Simulate Next?

    Models of gravitational waveforms play a critical role in detecting and characterizing the gravitational waves (GWs) from compact binary coalescences. Waveforms from numerical relativity (NR), while highly accurate, are too computationally expensive to produce to be directly used with Bayesian parameter estimation tools like Markov-chain Monte Carlo and nested sampling. We propose a Gaussian process regression (GPR) method to generate accurate reduced-order-model waveforms based only on existing accurate (e.g. NR) simulations. Using a training set of simulated waveforms, our GPR approach produces interpolated waveforms along with uncertainties across the parameter space. As a proof of concept, we use a training set of IMRPhenomD waveforms to build a GPR model in the 2-d parameter space of mass ratio $q$ and equal-and-aligned spin $\chi_1=\chi_2$. Using a regular, equally-spaced grid of 120 IMRPhenomD training waveforms in $q\in[1,3]$ and $\chi_1\in[-0.5,0.5]$, the GPR mean approximates IMRPhenomD in this space to mismatches below $4.3\times 10^{-5}$. Our approach can alternatively use training waveforms directly from numerical relativity. Beyond interpolation of waveforms, we also present a greedy algorithm that utilizes the errors provided by our GPR model to optimize the placement of future simulations. In a fiducial test case we find that using the greedy algorithm to iteratively add simulations achieves GPR errors that are $\sim 1$ order of magnitude lower than the errors from using Latin-hypercube or square training grids.
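    The two ingredients of the abstract, GPR with predictive uncertainties and greedy placement at the point of largest uncertainty, can be sketched in one dimension with an RBF kernel. Everything here is a toy stand-in: 1-D instead of the paper's 2-d (q, chi) space, a scalar target instead of a waveform, and illustrative kernel length-scale and noise values.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel on 1-D inputs (illustrative length-scale)."""
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell**2)

def gp_predict(Xtr, ytr, Xte, ell=0.5, noise=1e-10):
    """GPR predictive mean and variance at test points Xte."""
    K = rbf(Xtr, Xtr, ell) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, ell)
    mean = Ks @ np.linalg.solve(K, ytr)
    # Predictive variance: k(x,x) - k_s K^{-1} k_s^T (diagonal only).
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, np.maximum(var, 0.0)

def greedy_add(Xtr, ytr, candidates, f, n_add=3, ell=0.5):
    """Sketch of greedy placement: repeatedly 'simulate' (evaluate f at)
    the candidate point where the GPR predictive variance is largest."""
    Xtr, ytr = Xtr.copy(), ytr.copy()
    for _ in range(n_add):
        _, var = gp_predict(Xtr, ytr, candidates, ell)
        j = int(np.argmax(var))
        Xtr = np.append(Xtr, candidates[j])
        ytr = np.append(ytr, f(candidates[j]))
    return Xtr, ytr
```

At the training points the predictive variance collapses to (near) zero, so the greedy loop automatically proposes points in the gaps between existing simulations, which is the behavior the abstract exploits.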

    Tethered Monte Carlo: computing the effective potential without critical slowing down

    We present Tethered Monte Carlo, a simple, general-purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high accuracy results are presented for lattices as large as L=1024. Comment: 32 pages, 8 eps figures. Corrected Eq. (36), which is wrong in the published paper.
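    For context on the building block mentioned in the abstract, here is a plain local Metropolis sweep for the 2-D Ising model with periodic boundaries. This is the ordinary canonical update, shown only as background; the tethered ensemble itself (demons, constrained smoothed magnetization) is not implemented here, and lattice size and temperature in the usage are arbitrary.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One sweep (L*L attempted flips) of local Metropolis updates on a
    2-D Ising lattice of +/-1 spins with periodic boundaries. Plain
    canonical dynamics, NOT the tethered-ensemble update of the paper."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbors (periodic wrap-around).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Energy cost of flipping spin (i, j): dE = 2 s_ij * sum(neighbors).
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins
```

With this local update, magnetic observables near criticality normally suffer severe critical slowing down; the point of the tethered construction is that the same local dynamics, run in the tethered ensemble, shows no such slowing down within errors.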