A new efficient and accurate spline algorithm for the matrix exponential computation
In this work an accurate and efficient method based on matrix splines for computing
the matrix exponential is given. An algorithm and a MATLAB implementation have been
developed and compared with state-of-the-art algorithms for computing the matrix
exponential. We also developed a parallel implementation for large-scale problems,
which gave much better performance on problems of this kind.

This work has been supported by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF), grant TIN2014-59294-P.

Defez Candel, E.; Ibáñez González, JJ.; Sastre, J.; Peinado Pinilla, J.; Alonso-Jordá, P. (2018). A new efficient and accurate spline algorithm for the matrix exponential computation. Journal of Computational and Applied Mathematics. 337(1):354-365. https://doi.org/10.1016/j.cam.2017.11.029
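The spline-based algorithm itself is not reproduced in the abstract. As a minimal sketch of the general scaling-and-squaring approach that matrix-exponential algorithms of this kind build on, the following uses a truncated Taylor polynomial as a generic stand-in for the paper's spline approximation (the function name and the choice of Taylor order are illustrative assumptions, not the authors' method):

```python
import numpy as np

def expm_taylor(A, order=12):
    """Matrix exponential via scaling-and-squaring with a truncated
    Taylor polynomial. A generic stand-in for the spline-based
    approximation described in the abstract, not the paper's algorithm."""
    A = np.asarray(A, dtype=float)
    # Scale A down so the truncated series is accurate on A / 2**s.
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    B = A / (2 ** s)
    n = A.shape[0]
    E = np.eye(n)
    term = np.eye(n)
    for k in range(1, order + 1):
        term = term @ B / k      # B^k / k!
        E = E + term
    for _ in range(s):           # undo the scaling: exp(A) = exp(B)^(2^s)
        E = E @ E
    return E
```

For example, for the nilpotent matrix `[[0, 1], [0, 0]]` the series terminates and the result is exactly `[[1, 1], [0, 1]]`.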
Nonparametric likelihood based estimation of linear filters for point processes
We consider models for multivariate point processes where the intensity is
given nonparametrically in terms of functions in a reproducing kernel Hilbert
space. The likelihood function involves a time integral and is consequently not
given in terms of a finite number of kernel evaluations. The main result is a
representation of the gradient of the log-likelihood, which we use to derive
computable approximations of the log-likelihood and the gradient by time
discretization. These approximations are then used to minimize the approximate
penalized log-likelihood. For time and memory efficiency the implementation
relies crucially on the use of sparse matrices. As an illustration we consider
neuron network modeling, and we use this example to investigate how the
computational costs of the approximations depend on the resolution of the time
discretization. The implementation is available in the R package ppstat.

Comment: 10 pages, 3 figures
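The time-discretized likelihood approximation described above can be sketched as follows. This is an illustrative Riemann-sum approximation for a single univariate point process with intensity λ(t) = exp(β₀ + Σ_{tⱼ<t} h(t − tⱼ)); the function names and the exponential link are assumptions for the sketch and are not the ppstat API:

```python
import numpy as np

def discretized_loglik(events, beta0, h, T, dt=0.01):
    """Riemann-sum approximation of the point-process log-likelihood
        log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt,
    with lambda(t) = exp(beta0 + sum_{t_j < t} h(t - t_j)).
    `h` plays the role of the linear filter estimated in the paper."""
    events = np.asarray(events, dtype=float)
    grid = np.arange(0.0, T, dt)

    def log_intensity(t):
        past = events[events < t]          # events strictly before t
        return beta0 + h(t - past).sum()

    log_lam_at_events = np.array([log_intensity(t) for t in events])
    lam_on_grid = np.exp([log_intensity(t) for t in grid])
    # Event term minus the discretized compensator integral.
    return log_lam_at_events.sum() - lam_on_grid.sum() * dt
```

With a zero filter and β₀ = 0 the intensity is constant at 1, so two events on [0, 2] give a log-likelihood of −2, which the discretization recovers. A sparse-matrix version, as used in ppstat, would precompute the filter evaluations h(tᵢ − tⱼ) once on the grid.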
Statistical Gravitational Waveform Models: What to Simulate Next?
Models of gravitational waveforms play a critical role in detecting and
characterizing the gravitational waves (GWs) from compact binary coalescences.
Waveforms from numerical relativity (NR), while highly accurate, are too
computationally expensive to produce to be directly used with Bayesian
parameter estimation tools like Markov-chain-Monte-Carlo and nested sampling.
We propose a Gaussian process regression (GPR) method to generate accurate
reduced-order-model waveforms based only on existing accurate (e.g. NR)
simulations. Using a training set of simulated waveforms, our GPR approach
produces interpolated waveforms along with uncertainties across the parameter
space. As a proof of concept, we use a training set of IMRPhenomD waveforms to
build a GPR model in the 2-d parameter space of mass ratio and
equal-and-aligned spin. Using a regular, equally spaced grid of
120 IMRPhenomD training waveforms in this parameter space,
the GPR mean approximates IMRPhenomD with small mismatches across
the space. Our approach can alternatively use training waveforms
directly from numerical relativity. Beyond interpolation of waveforms, we also
present a greedy algorithm that utilizes the errors provided by our GPR model
to optimize the placement of future simulations. In a fiducial test case we
find that using the greedy algorithm to iteratively add simulations achieves
GPR errors that are an order of magnitude lower than the errors from
using Latin-hypercube or square training grids.
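The two ingredients above, GPR interpolation with predictive uncertainties and greedy placement of new simulations where that uncertainty is largest, can be sketched in one dimension. Here a scalar toy function stands in for a waveform summary (a real model would interpolate full waveforms, e.g. through a reduced basis), and all names are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def waveform_summary(q):
    """Toy stand-in for one scalar feature of a waveform at parameter q."""
    return np.sin(3.0 * q) + 0.5 * q

def greedy_add(train_q, candidates, n_new):
    """Greedily add the candidate parameters where the GPR predictive
    standard deviation is largest -- a sketch of using model uncertainty
    to decide which simulation to run next."""
    train_q = list(train_q)
    candidates = np.asarray(candidates, dtype=float)
    for _ in range(n_new):
        X = np.array(train_q).reshape(-1, 1)
        y = waveform_summary(np.array(train_q))
        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                       normalize_y=True).fit(X, y)
        _, std = gpr.predict(candidates.reshape(-1, 1), return_std=True)
        train_q.append(float(candidates[int(np.argmax(std))]))
    return train_q
```

Each new point shrinks the predictive uncertainty near it, so successive picks spread out over the regions the current training set covers worst, rather than filling a fixed grid.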
Tethered Monte Carlo: computing the effective potential without critical slowing down
We present Tethered Monte Carlo, a simple, general purpose method of
computing the effective potential of the order parameter (Helmholtz free
energy). This formalism is based on a new statistical ensemble, closely related
to the micromagnetic one, but with an extended configuration space (through
Creutz-like demons). Canonical averages for arbitrary values of the external
magnetic field are computed without additional simulations. The method is put
to work in the two dimensional Ising model, where the existence of exact
results enables us to perform high precision checks. A rather peculiar feature
of our implementation, which employs a local Metropolis algorithm, is the total
absence, within errors, of critical slowing down for magnetic observables.
Indeed, high accuracy results are presented for lattices as large as L=1024.

Comment: 32 pages, 8 eps figures. Corrected Eq. (36), which is wrong in the
published paper
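The local Metropolis algorithm that the implementation builds on can be sketched for the 2D Ising model as follows. This is the plain canonical update only; the tethered ensemble itself (the extended configuration space with Creutz-like demons and the constrained magnetization) is not reproduced here, and the parameter choices are illustrative:

```python
import numpy as np

def metropolis_ising(L=16, beta=0.3, sweeps=100, seed=None):
    """Plain canonical Metropolis for the 2D Ising model with periodic
    boundaries. The tethered formalism replaces the canonical weight
    with a tethered one; this sketch shows only the underlying local
    update it is built on."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb      # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins
```

In the tethered scheme, magnetic observables at arbitrary external field are then reconstructed by integrating over the tethered averages, which is what removes the critical slowing down reported in the abstract.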