A variational approach to modeling slow processes in stochastic dynamical systems
The slow processes of metastable stochastic dynamical systems are difficult
to access by direct numerical simulation due to the sampling problem. Here, we
suggest an approach for modeling the slow parts of Markov processes by
approximating the dominant eigenfunctions and eigenvalues of the propagator. To
this end, a variational principle is derived that is based on the maximization
of a Rayleigh coefficient. It is shown that this Rayleigh coefficient can be
estimated from statistical observables that can be obtained from short
distributed simulations starting from different parts of state space. The
approach forms a basis for the development of adaptive and efficient
computational algorithms for simulating and analyzing metastable Markov
processes while avoiding the sampling problem. Since any stochastic process
with finite memory can be transformed into a Markov process, the approach is
applicable to a wide range of processes relevant for modeling complex
real-world phenomena.
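The variational principle described above can be illustrated numerically. In the sketch below (a toy two-state metastable chain with hypothetical transition probabilities, not the paper's setting), maximizing the Rayleigh coefficient over linear combinations of indicator features reduces to a generalized eigenvalue problem between two time-correlation matrices estimated from many short trajectories started in different parts of state space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy metastable Markov chain: two wells with rare switching (illustrative).
T = np.array([[0.98, 0.02],
              [0.03, 0.97]])

def short_trajectory(start, n_steps):
    """Simulate one short trajectory from a given starting state."""
    states = np.empty(n_steps + 1, dtype=int)
    states[0] = start
    for t in range(n_steps):
        states[t + 1] = rng.choice(2, p=T[states[t]])
    return states

# Many short trajectories started from different parts of state space,
# as the abstract suggests -- no single long equilibrium run is needed.
trajs = [short_trajectory(start=s % 2, n_steps=50) for s in range(400)]

# Indicator features chi_i(x); correlation matrices C(0) and C(tau).
lag = 1
C0 = np.zeros((2, 2))
Ct = np.zeros((2, 2))
for traj in trajs:
    X = np.eye(2)[traj]            # one-hot feature trajectory
    C0 += X[:-lag].T @ X[:-lag]
    Ct += X[:-lag].T @ X[lag:]
Ct = 0.5 * (Ct + Ct.T)             # symmetrize (detailed balance assumed)

# Maximizing the Rayleigh coefficient over feature coefficients v is the
# generalized eigenvalue problem  C(tau) v = lambda C(0) v.
eigvals = np.linalg.eigvals(np.linalg.solve(C0, Ct))
eigvals = np.sort(eigvals.real)[::-1]
print(eigvals)
```

The leading eigenvalue is close to 1 (the stationary process), and the second approximates the slow relaxation eigenvalue of the chain (here 0.95 exactly), even though no individual trajectory was long enough to sample the equilibrium distribution.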
Evaluating Maximum Likelihood Estimation Methods to Determine the Hurst Coefficient
A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1 indicating long-range dependence, measures the persistence of a time series. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
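S-MLE itself is an S-PLUS routine and is not reproduced here. As a hedged stand-in, the sketch below uses the much simpler aggregated-variance estimator of H, which exploits the scaling Var(block mean over m samples) ~ m^(2H-2) for self-similar processes; the function name and scale choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def hurst_aggvar(x, scales):
    """Aggregated-variance estimate of H: for self-similar increments,
    Var(mean over blocks of size m) scales like m**(2H - 2)."""
    x = np.asarray(x, float)
    block_vars = []
    for m in scales:
        n = len(x) // m
        block_means = x[:n * m].reshape(n, m).mean(axis=1)
        block_vars.append(block_means.var())
    # Slope of the log-log fit gives 2H - 2.
    slope = np.polyfit(np.log(scales), np.log(block_vars), 1)[0]
    return 1.0 + slope / 2.0

# White noise is fGn with H = 0.5; check that the estimator recovers it.
x = rng.standard_normal(2**14)
H_hat = hurst_aggvar(x, scales=[4, 8, 16, 32, 64])
print(round(H_hat, 2))
```

For white noise the slope of the fit is -1, so the estimate should land near 0.5. Like S-MLE, simple estimators of this kind are biased for short series, which is exactly the regime the abstract examines.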
Nonparametric inference of doubly stochastic Poisson process data via the kernel method
Doubly stochastic Poisson processes, also known as Cox processes,
frequently occur in various scientific fields. In this article, motivated
primarily by analyzing Cox process data in biophysics, we propose a
nonparametric kernel-based inference method. We conduct a detailed study,
including an asymptotic analysis, of the proposed method, and provide
guidelines for its practical use, introducing a fast and stable regression
method for bandwidth selection. We apply our method to real photon arrival data
from recent single-molecule biophysical experiments, investigating proteins'
conformational dynamics. Our result shows that conformational fluctuation is
widely present in protein systems, and that the fluctuation covers a broad
range of time scales, highlighting the dynamic and complex nature of proteins'
structure. (Published in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics,
http://dx.doi.org/10.1214/10-AOAS352.)
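The kernel idea for Cox/Poisson intensity estimation is simple to sketch: superpose a smoothing kernel at each arrival time. The Gaussian kernel, fixed bandwidth, and thinning-based simulation below are illustrative assumptions; the paper's regression method for bandwidth selection is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel_intensity(arrival_times, grid, bandwidth):
    """Gaussian-kernel estimate of a Poisson intensity:
    lambda_hat(t) = sum_i K_h(t - t_i)."""
    d = (grid[:, None] - arrival_times[None, :]) / bandwidth
    K = np.exp(-0.5 * d**2) / (np.sqrt(2 * np.pi) * bandwidth)
    return K.sum(axis=1)

# Simulate an inhomogeneous Poisson process on [0, 1] by thinning,
# with intensity lambda(t) = 50 + 30*sin(2*pi*t) (illustrative).
lam = lambda t: 50 + 30 * np.sin(2 * np.pi * t)
lam_max = 80.0
cand = rng.uniform(0, 1, rng.poisson(lam_max))
keep = rng.uniform(0, lam_max, len(cand)) < lam(cand)
arrivals = np.sort(cand[keep])

grid = np.linspace(0, 1, 201)
lam_hat = kernel_intensity(arrivals, grid, bandwidth=0.08)
```

The integral of the estimated intensity over the observation window approximately recovers the number of observed events, up to boundary leakage of kernel mass outside [0, 1].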
Estimating the integral length scale on turbulent flows from the zero crossings of the longitudinal velocity fluctuation
The integral length scale is considered to be characteristic of the largest
motions of a turbulent flow, and as such, it is an input parameter in modern
and classical approaches to turbulence theory and numerical simulations. Its
experimental estimation, however, can be difficult in certain conditions, for
instance, when the experimental calibration required to measure it is hard to
achieve (hot-wire anemometry in large-scale wind tunnels, and field
measurements), or in 'standard' facilities using active grids, due to the
behaviour of their velocity autocorrelation function, which does not in
general cross zero. In this work, we provide two alternative methods to
estimate the integral length scale using the variance of the distance between
successive zero crossings of the streamwise velocity fluctuations, thereby
reducing the uncertainty of its estimation under similar experimental
conditions. These methods are applicable to a variety of situations such as
active-grid flows, field measurements, and large-scale wind tunnels.
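For contrast with the zero-crossing approach, the sketch below implements the classical estimate (integrating the velocity autocorrelation up to its first zero crossing) on a synthetic signal with a known exponential correlation, and extracts the zero-crossing spacings that the alternative estimators are built from. The AR(1) surrogate and all numerical choices are illustrative; the paper's actual zero-crossing formulas are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 'velocity' fluctuation with autocorrelation rho(r) = exp(-r/L),
# generated as an AR(1) process with phi = exp(-dx / L).
dx, L_true = 0.01, 0.5
phi = np.exp(-dx / L_true)
n = 100_000
u = np.empty(n)
u[0] = rng.standard_normal()
for i in range(1, n):
    u[i] = phi * u[i - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

# Classical estimate: integrate the autocorrelation up to its first zero.
max_lag = 600
acf = np.array([np.mean(u[:n - k] * u[k:]) for k in range(max_lag)]) / u.var()
first_zero = int(np.argmax(acf <= 0)) if np.any(acf <= 0) else max_lag
L_acf = acf[:first_zero].sum() * dx

# Zero crossings of the fluctuation: the spacing statistics between
# successive crossings are the raw ingredient of the alternative methods.
crossings = np.nonzero(np.diff(np.sign(u)) != 0)[0]
spacings = np.diff(crossings) * dx
print(L_acf, spacings.var())
```

Here the classical estimate should land near the true value L = 0.5; for active-grid flows whose autocorrelation never crosses zero, the truncation point is ill-defined, which is the motivation for the zero-crossing alternatives.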
Higher-Order Improvements of the Sieve Bootstrap for Fractionally Integrated Processes
This paper investigates the accuracy of bootstrap-based inference in the case
of long memory fractionally integrated processes. The re-sampling method is
based on the semi-parametric sieve approach, whereby the dynamics in the
process used to produce the bootstrap draws are captured by an autoregressive
approximation. Application of the sieve method to data pre-filtered by a
semi-parametric estimate of the long memory parameter is also explored.
Higher-order improvements yielded by both forms of re-sampling are demonstrated
using Edgeworth expansions for a broad class of statistics that includes first-
and second-order moments, the discrete Fourier transform and regression
coefficients. The methods are then applied to the problem of estimating the
sampling distributions of the sample mean and of selected sample
autocorrelation coefficients, in experimental settings. In the case of the
sample mean, the pre-filtered version of the bootstrap is shown to avoid the
distinct underestimation of the sampling variance of the mean which the raw
sieve method demonstrates in finite samples, higher order accuracy of the
latter notwithstanding. Pre-filtering also produces gains in terms of the
accuracy with which the sampling distributions of the sample autocorrelations
are reproduced, most notably in the part of the parameter space in which
asymptotic normality does not obtain. Most importantly, the sieve bootstrap is
shown to reproduce the (empirically infeasible) Edgeworth expansion of the
sampling distribution of the autocorrelation coefficients, in the part of the
parameter space in which the expansion is valid.
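The sieve mechanism itself is easy to sketch: fit a long autoregression to the data, then build bootstrap series by recursing on i.i.d. draws from the centred residuals. In the sketch below the helper names are hypothetical, an AR(1) series stands in for a genuinely long-memory process, and the pre-filtering step discussed in the abstract is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_ar(x, p):
    """Least-squares AR(p) fit: returns coefficients and centred residuals."""
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid - resid.mean()

def sieve_bootstrap(x, p, n_boot, stat):
    """Sieve bootstrap: an AR(p) approximation captures the dependence,
    and i.i.d. resampling of its residuals produces bootstrap draws."""
    coef, resid = fit_ar(x, p)
    out = []
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(x) + 100)   # 100-step burn-in
        xb = np.zeros(len(x) + 100)
        for t in range(p, len(xb)):
            xb[t] = xb[t - p:t][::-1] @ coef + e[t]
        out.append(stat(xb[100:]))
    return np.array(out)

# Persistent AR(1) stand-in for a long-memory series (illustration only).
n, phi = 500, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

boot_means = sieve_bootstrap(x, p=5, n_boot=200, stat=np.mean)
print(boot_means.std())   # bootstrap estimate of the std. error of the mean
```

Roughly, the pre-filtered variant would first remove a semi-parametric estimate of the fractional difference, apply this scheme to the filtered (short-memory) series, and re-integrate the draws.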
A recursive scheme for computing autocorrelation functions of decimated complex wavelet subbands
This paper deals with the problem of the exact computation of the autocorrelation function of a real or complex discrete wavelet subband of a signal, when the autocorrelation function (or Power Spectral Density, PSD) of the signal in the time domain (or spatial domain) is either known or estimated using a separate technique. The solution to this problem allows us to couple time domain noise estimation techniques to wavelet domain denoising algorithms, which is crucial for the development of blind wavelet-based denoising techniques. Specifically, we investigate the Dual-Tree complex wavelet transform (DT-CWT), which has good directional selectivity in 2-D and 3-D, is approximately shift-invariant, and yields better denoising results than a discrete wavelet transform (DWT). The proposed scheme gives an analytical relationship between the PSD of the input signal/image and the PSD of each individual real/complex wavelet subband, which is very useful for future developments. We also show that a more general technique, which relies on Monte-Carlo simulations, requires a large number of input samples for a reliable estimate, while the proposed technique does not suffer from this problem.
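The core relation, that a filtered sequence's PSD equals the input PSD shaped by |H(f)|^2 for the analysis filter H, can be checked in a few lines. The sketch below uses a single undecimated real Haar lowpass branch on white noise, which sidesteps the decimation and the complex DT-CWT that the paper actually handles:

```python
import numpy as np

rng = np.random.default_rng(5)

N = 4096
h = np.array([1.0, 1.0]) / np.sqrt(2)      # Haar lowpass filter

# Input: unit white noise, whose PSD is flat and equal to 1.
x = rng.standard_normal(N)
y = np.convolve(x, h, mode="full")[:N]     # filtered (undecimated) branch

# Analytical route: S_y(f) = S_x(f) * |H(f)|**2, so the output
# autocorrelation is the inverse FFT of the known input PSD times |H|^2.
H = np.fft.fft(h, N)
S_x = np.ones(N)
r_analytic = np.fft.ifft(S_x * np.abs(H)**2).real[:4]

# Monte-Carlo route: estimate the same autocorrelation from samples.
r_mc = np.array([np.mean(y[:N - k] * y[k:]) for k in range(4)])
print(r_analytic, r_mc)
```

Both routes agree (the analytical autocorrelation of Haar-filtered unit noise is 1 at lag 0 and 0.5 at lag 1), but the Monte-Carlo estimate matches only up to sampling error, which is the reliability gap the paper points out.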