Spectral rate theory for projected two-state kinetics
Classical rate theories often fail in cases where the observable(s) or order
parameter(s) used are poor reaction coordinates or the observed signal is
deteriorated by noise, such that no clear separation between reactants and
products is possible. Here, we present a general spectral two-state rate theory
for ergodic dynamical systems in thermal equilibrium that explicitly takes into
account how the system is observed. The theory allows the systematic estimation
errors made by standard rate theories to be understood and quantified. We also
elucidate the connection of spectral rate theory with the popular Markov state
modeling (MSM) approach for molecular simulation studies. An optimal rate
estimator is formulated that gives robust and unbiased results even for poor
reaction coordinates and can be applied to both computer simulations and
single-molecule experiments. No definition of a dividing surface is required.
Another result of the theory is a model-free definition of the reaction
coordinate quality (RCQ). The RCQ can be bounded from below by the directly
computable observation quality (OQ), thus providing a measure allowing the RCQ
to be optimized by tuning the experimental setup. Additionally, the respective
partial probability distributions can be obtained for the reactant and product
states along the observed order parameter, even when these strongly overlap.
The effects of both filtering (averaging) and uncorrelated noise are also
examined. The approach is demonstrated on numerical examples and experimental
single-molecule force probe data of the p5ab RNA hairpin and the apo-myoglobin
protein at low pH, here focusing on the case of two-state kinetics.
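The connection to Markov state modeling mentioned above can be illustrated with a minimal sketch: given a trajectory already discretized into two observed states, a relaxation rate follows from the second eigenvalue of the estimated transition matrix, k = -ln(lambda_2)/tau. This is the elementary MSM estimate, not the paper's optimal spectral estimator; the function name and the simple count symmetrization used to enforce detailed balance are illustrative choices.

```python
import numpy as np

def msm_relaxation_rate(dtraj, n_states, lag=1, dt=1.0):
    """Relaxation rate from a discretized trajectory via a simple MSM.

    Counts transitions at the chosen lag time, symmetrizes the counts
    (a crude way to enforce detailed balance), and returns
    k = -ln(lambda_2) / (lag * dt), where lambda_2 is the second-largest
    eigenvalue of the row-stochastic transition matrix.
    """
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1.0
    C = 0.5 * (C + C.T)                      # symmetrize the count matrix
    T = C / C.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    lam = np.sort(np.linalg.eigvals(T).real)[::-1]
    return -np.log(lam[1]) / (lag * dt)
```

Note that this naive estimate inherits exactly the bias the abstract describes when the discretization is a poor reaction coordinate, which is what the spectral theory quantifies.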
Designs for generalized linear models with random block effects via information matrix approximations
The selection of optimal designs for generalized linear mixed models is complicated by the fact that the Fisher information matrix, on which most optimality criteria depend, is computationally expensive to evaluate. Our focus is on the design of experiments for likelihood estimation of parameters in the conditional model. We provide two novel approximations that substantially reduce the computational cost of evaluating the information matrix by complete enumeration of response outcomes, or Monte Carlo approximations thereof: (i) an asymptotic approximation which is accurate when there is strong dependence between observations in the same block; (ii) an approximation via Kriging interpolators. For logistic random intercept models, we show how interpolation can be especially effective for finding pseudo-Bayesian designs that incorporate uncertainty in the values of the model parameters. The new results are used to provide the first evaluation of the efficiency, for estimating conditional models, of optimal designs from closed-form approximations to the information matrix derived from marginal models. It is found that correcting for the marginal attenuation of parameters in binary-response models yields much improved designs, typically with very high efficiencies. However, in some experiments exhibiting strong dependence, designs for marginal models may still be inefficient for conditional modelling. Our asymptotic results provide some theoretical insights into why such inefficiencies occur.
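The expensive computation the abstract refers to can be sketched for a logistic random intercept model: simulate responses from the conditional model, evaluate the marginal log-likelihood of each simulated block by Gauss-Hermite quadrature, and average outer products of numerical scores to approximate the Fisher information. All function names and the central-difference score below are illustrative assumptions; the paper's asymptotic and Kriging approximations are designed to replace exactly this kind of brute-force evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_loglik(y, X, beta, sigma, nodes, weights):
    """Marginal log-likelihood of one block, integrating out the random
    intercept u ~ N(0, sigma^2) by Gauss-Hermite quadrature."""
    u = np.sqrt(2.0) * sigma * nodes                 # change of variable
    eta = X @ beta[:, None] + u[None, :]             # (n_obs, n_nodes)
    p = 1.0 / (1.0 + np.exp(-eta))
    like = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(like @ weights / np.sqrt(np.pi))

def mc_information(X, beta, sigma, n_sim=2000, n_nodes=20, eps=1e-5):
    """Monte Carlo approximation of the per-block Fisher information
    for theta = (beta, sigma), as E[score score'] over simulated responses."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    theta = np.append(beta, sigma)
    p = len(theta)
    info = np.zeros((p, p))
    for _ in range(n_sim):
        u = rng.normal(0.0, sigma)                   # simulate a block
        probs = 1.0 / (1.0 + np.exp(-(X @ beta + u)))
        y = rng.binomial(1, probs)
        g = np.zeros(p)                              # central-difference score
        for k in range(p):
            tp, tm = theta.copy(), theta.copy()
            tp[k] += eps
            tm[k] -= eps
            g[k] = (block_loglik(y, X, tp[:-1], tp[-1], nodes, weights)
                    - block_loglik(y, X, tm[:-1], tm[-1], nodes, weights)) / (2 * eps)
        info += np.outer(g, g)
    return info / n_sim
```

Even for this tiny model, evaluating the information at every candidate design point is costly, which motivates the interpolation-based approximations.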
Is all government capital productive?
Government spending policy; Production (Economic theory)
The generalized shrinkage estimator for the analysis of functional connectivity of brain signals
We develop a new statistical method for estimating functional connectivity
between neurophysiological signals represented by a multivariate time series.
We use partial coherence as the measure of functional connectivity. Partial
coherence identifies the frequency bands that drive the direct linear
association between any pair of channels. To estimate partial coherence, one
would first need an estimate of the spectral density matrix of the multivariate
time series. Parametric estimators of the spectral density matrix provide good
frequency resolution but could be sensitive when the parametric model is
misspecified. Smoothing-based nonparametric estimators are robust to model
misspecification and are consistent but may have poor frequency resolution. In
this work, we develop the generalized shrinkage estimator, which is a weighted
average of a parametric estimator and a nonparametric estimator. The optimal
weights are frequency-specific and derived under the quadratic risk criterion
so that the estimator, either the parametric estimator or the nonparametric
estimator, that performs better at a particular frequency receives heavier
weight. We validate the proposed estimator in a simulation study and apply it
to electroencephalogram recordings from a visual-motor experiment.

Comment: Published in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics
(http://www.imstat.org/); DOI: http://dx.doi.org/10.1214/10-AOAS396
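The two ingredients of the abstract can be sketched in a few lines: partial coherence computed from the inverse of a spectral density matrix, and a frequency-specific convex combination of a parametric and a nonparametric spectral estimate. The weight array below is an input placeholder; the paper derives the optimal weights under the quadratic risk criterion.

```python
import numpy as np

def partial_coherence(S):
    """Partial coherence from a spectral density matrix estimate.

    S: (n_freq, p, p) complex array, spectral matrix at each frequency.
    Entry (i, j) uses |G_ij|^2 / (G_ii G_jj), where G is the inverse
    spectral matrix; it measures the direct linear association between
    channels i and j at that frequency, given all other channels.
    """
    G = np.linalg.inv(S)                       # inverse spectral matrix per frequency
    dd = np.real(np.einsum('fii->fi', G))      # diagonal of the inverse
    pc = np.abs(G) ** 2 / (dd[:, :, None] * dd[:, None, :])
    for i in range(S.shape[1]):
        pc[:, i, i] = 0.0                      # diagonal entries are not of interest
    return pc

def shrink(S_par, S_nonpar, W):
    """Frequency-specific convex combination of two spectral estimators.

    W[f] in [0, 1] weights the parametric estimate at frequency f;
    here W is supplied by the caller for illustration only.
    """
    W = W[:, None, None]
    return W * S_par + (1.0 - W) * S_nonpar
```

In practice one would estimate S_nonpar by a smoothed periodogram and S_par from a fitted parametric (e.g. autoregressive) model, then feed the shrunk matrix into partial_coherence.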