Generalized Wald-type Tests based on Minimum Density Power Divergence Estimators
In hypothesis testing, the robustness of the tests is an important concern.
Generally, the maximum likelihood based tests are most efficient under standard
regularity conditions, but they are highly non-robust even under small
deviations from the assumed conditions. In this paper we propose
generalized Wald-type tests based on minimum density power divergence
estimators for parametric hypotheses. This method avoids the use of
nonparametric density estimation and bandwidth selection. The trade-off
between efficiency and robustness is controlled by a tuning parameter.
The asymptotic distributions of the test statistics are chi-square with
appropriate degrees of freedom. The performance of the proposed tests is
explored through simulations and real data analysis.
Comment: 26 pages, 10 figures. arXiv admin note: substantial text overlap with arXiv:1403.033
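By way of illustration (not the authors' implementation), here is a minimal sketch of the two ingredients for a normal location model: a minimum density power divergence estimate, with the tuning parameter denoted alpha as is common in the DPD literature, and a Wald-type statistic referred to a chi-square law. The closed-form integral, the bootstrap variance approximation, and all names below are assumptions of the sketch.

```python
# Minimal sketch (not the paper's implementation): MDPDE for a normal
# location model with known scale, followed by a Wald-type test of
# H0: mu = mu0. The tuning parameter alpha trades efficiency for
# robustness; alpha -> 0 recovers the MLE-based test.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

def dpd_objective(mu, x, alpha, sigma=1.0):
    """Empirical density power divergence for N(mu, sigma^2), up to constants."""
    f = norm.pdf(x, loc=mu, scale=sigma)
    # closed form of the integral of f^(1+alpha) for a normal density
    integral = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(f ** alpha)

def mdpde(x, alpha):
    return minimize(dpd_objective, x0=np.median(x), args=(x, alpha)).x[0]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])  # 5% outliers

alpha, mu0 = 0.5, 0.0
mu_hat = mdpde(x, alpha)
# Asymptotic variance of the MDPDE approximated here by a simple bootstrap;
# the paper derives it analytically.
boot = [mdpde(rng.choice(x, x.size), alpha) for _ in range(200)]
w = (mu_hat - mu0) ** 2 / np.var(boot)          # Wald-type statistic
print(f"mu_hat={mu_hat:.3f}  W={w:.2f}  p-value={chi2.sf(w, df=1):.4f}")
```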
The Kentucky Noisy Monte Carlo Algorithm for Wilson Dynamical Fermions
We develop an implementation for a recently proposed Noisy Monte Carlo
approach to the simulation of lattice QCD with dynamical fermions by
incorporating the full fermion determinant directly. Our algorithm uses a
quenched gauge field update with a shifted gauge coupling to minimize
fluctuations in the trace log of the Wilson Dirac matrix. The details of tuning
the gauge coupling shift as well as results for the distribution of noisy
estimators in our implementation are given. We present data for some basic
observables from the noisy method, as well as acceptance rate information and
discuss potential autocorrelation and sign violation effects. Both the results
and the efficiency of the algorithm are compared against those of Hybrid Monte
Carlo.
PACS Numbers: 12.38.Gc, 11.15.Ha, 02.70.Uu
Keywords: Noisy Monte Carlo, Lattice QCD, Determinant, Finite Density, QCDSP
Comment: 30 pages, 6 figures
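The noisy ingredient here is a stochastic estimator of the trace log of the fermion matrix. A minimal sketch of a Z2-noise (Hutchinson-style) trace estimator follows, with a small random SPD matrix standing in for the Wilson Dirac operator; at lattice scale the dense matrix logarithm below would be replaced by a series or rational approximation.

```python
# Minimal sketch: Z2-noise (Hutchinson) estimation of Tr ln M, the kind of
# noisy estimator the algorithm is built around. A small SPD test matrix
# stands in for the Wilson Dirac operator; the dense logm is only feasible
# at this toy scale.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n))
M = A @ A.T / n + np.eye(n)              # SPD stand-in for the Dirac matrix
L = np.real(logm(M))                     # matrix logarithm, toy scale only
exact = np.trace(L)

samples = []
for _ in range(500):
    z = rng.choice([-1.0, 1.0], size=n)  # Z2 noise vector, E[z z^T] = I
    samples.append(z @ L @ z)            # unbiased sample of Tr ln M
print(f"exact {exact:.3f}  noisy {np.mean(samples):.3f} "
      f"+/- {np.std(samples) / np.sqrt(len(samples)):.3f}")
```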
Bibliographic Review on Distributed Kalman Filtering
In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided.
The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately, and a comparison of the different approaches is briefly carried out. The focus of contemporary research is also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
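For concreteness, here is a minimal sketch of one simple member of the DKF family such a review classifies: each node runs a local Kalman filter on its own measurement of a shared scalar state and then averages estimates with its neighbors (consensus on estimates). The model, network topology, and averaging rule are all illustrative.

```python
# Minimal sketch of a consensus-on-estimates DKF: local Kalman updates at
# each node, followed by neighborhood averaging. Scalar state, chain
# topology, and all constants are illustrative.
import numpy as np

rng = np.random.default_rng(2)
steps, nodes = 50, 4
a, q, r = 1.0, 0.01, 0.5                 # transition, process/measurement noise
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # chain network

x = 0.0                                  # true state
xh = np.zeros(nodes)                     # local state estimates
P = np.ones(nodes)                       # local error variances
for _ in range(steps):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    z = x + rng.normal(0.0, np.sqrt(r), nodes)   # one measurement per node
    xh, P = a * xh, a * a * P + q                # local predict
    K = P / (P + r)                              # local Kalman gain
    xh, P = xh + K * (z - xh), (1 - K) * P       # local update
    xh = np.array([np.mean([xh[i]] + [xh[j] for j in neighbors[i]])
                   for i in range(nodes)])       # consensus step
print("true state %.3f, node estimates:" % x, xh.round(3))
```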
Incrementally Learned Mixture Models for GNSS Localization
GNSS localization is an important part of today's autonomous systems,
but it suffers from non-Gaussian errors caused by non-line-of-sight
effects. Recent methods are able to mitigate these effects by including the
corresponding distributions in the sensor fusion algorithm. However, these
approaches require prior knowledge about the sensor's distribution, which is
often not available. We introduce a novel sensor fusion algorithm based on
variational Bayesian inference that is able to approximate the true
distribution with a Gaussian mixture model and to learn its parametrization
online. The proposed Incremental Variational Mixture algorithm automatically
adapts the number of mixture components to the complexity of the measurement's
error distribution. We compare the proposed algorithm against current
state-of-the-art approaches using a collection of open access real world
datasets and demonstrate its superior localization accuracy.
Comment: 8 pages, 5 figures, published in proceedings of IEEE Intelligent Vehicles Symposium (IV) 201
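The Incremental Variational Mixture algorithm itself is not reproduced here, but its key idea, variational inference pruning superfluous mixture components so that model complexity tracks the error distribution, can be sketched with scikit-learn's variational Bayesian GMM on a synthetic, NLOS-like error sample.

```python
# Minimal sketch of the key idea (not the paper's algorithm): a variational
# Bayesian GMM with a Dirichlet-process prior prunes unused components, so
# the number of mixture components adapts to the data. The synthetic errors
# mimic a Gaussian bulk plus a biased non-line-of-sight mode.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
errors = np.concatenate([rng.normal(0, 1, 800),      # line-of-sight bulk
                         rng.normal(15, 5, 200)]     # NLOS bias mode
                        ).reshape(-1, 1)

vbgmm = BayesianGaussianMixture(
    n_components=10,                                 # generous upper bound
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0,
).fit(errors)

active = vbgmm.weights_ > 0.01                       # surviving components
print("effective components:", int(active.sum()))
print("their means:", vbgmm.means_[active].ravel().round(2))
```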
Joint modeling of longitudinal drug using pattern and time to first relapse in cocaine dependence treatment data
An important endpoint variable in a cocaine rehabilitation study is the time
to first relapse of a patient after the treatment. We propose a joint modeling
approach based on functional data analysis to study the relationship between
the baseline longitudinal cocaine-use pattern and the interval censored time to
first relapse. For the baseline cocaine-use pattern, we consider both
self-reported cocaine-use amount trajectories and dichotomized use
trajectories. Variations within the generalized longitudinal trajectories are
modeled through a latent Gaussian process, which is characterized by a few
leading functional principal components. The association between the baseline
longitudinal trajectories and the time to first relapse is built upon the
latent principal component scores. The mean and the eigenfunctions of the
latent Gaussian process as well as the hazard function of time to first relapse
are modeled nonparametrically using penalized splines, and the parameters in
the joint model are estimated by a Monte Carlo EM algorithm based on
Metropolis-Hastings steps. An Akaike information criterion (AIC) based on
effective degrees of freedom is proposed to choose the tuning parameters, and a
modified empirical information is proposed to estimate the variance-covariance
matrix of the estimators.
Comment: Published at http://dx.doi.org/10.1214/15-AOAS852 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
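For orientation, a minimal sketch of the functional principal component step on densely observed trajectories follows; the paper's latent Gaussian process, penalized splines, and Monte Carlo EM machinery are not reproduced, and the synthetic trajectories are illustrative.

```python
# Minimal sketch of functional PCA on densely observed trajectories:
# estimate the mean function and covariance, then extract the leading
# eigenfunctions and principal component scores. The paper's model handles
# generalized, sparsely observed data; none of that is attempted here.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)                      # observation grid
n = 100
# synthetic use-amount trajectories: mean curve + two random modes + noise
X = (np.sin(2 * np.pi * t)
     + rng.normal(0, 1.0, (n, 1)) * np.cos(np.pi * t)
     + rng.normal(0, 0.5, (n, 1)) * t
     + rng.normal(0, 0.1, (n, t.size)))

mu = X.mean(axis=0)                            # estimated mean function
C = np.cov(X - mu, rowvar=False)               # estimated covariance surface
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
phi = evecs[:, order[:2]]                      # two leading eigenfunctions
scores = (X - mu) @ phi * (t[1] - t[0])        # FPC scores (Riemann sum)
print("variance explained:", (evals[order[:2]] / evals.sum()).round(3))
```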
Optimal selection of reduced rank estimators of high-dimensional matrices
We introduce a new criterion, the Rank Selection Criterion (RSC), for
selecting the optimal reduced rank estimator of the coefficient matrix in
multivariate response regression models. The corresponding RSC estimator
minimizes the Frobenius norm of the fit plus a regularization term proportional
to the number of parameters in the reduced rank model. The rank of the RSC
estimator provides a consistent estimator of the rank of the coefficient
matrix; in general, the rank of our estimator is a consistent estimate of the
effective rank, which we define to be the number of singular values of the
target matrix that are appropriately large. The consistency results are valid
not only in the classic asymptotic regime, when n, the number of responses, and p, the number of predictors, stay bounded, and m, the number of observations, grows, but also when either, or both, n and p grow, possibly much faster than m. We establish minimax optimal bounds on the mean squared
errors of our estimators. Our finite sample performance bounds for the RSC
estimator show that it achieves the optimal balance between the approximation
error and the penalty term. Furthermore, our procedure has very low
computational complexity, linear in the number of candidate models, making it
particularly appealing for large scale problems. We contrast our estimator with
the nuclear norm penalized least squares (NNP) estimator, which has an
inherently higher computational complexity than RSC, for multivariate
regression models. We show that NNP has estimation properties similar to those
of RSC, albeit under stronger conditions. However, it is not as parsimonious as
RSC. We offer a simple correction of the NNP estimator which leads to
consistent rank estimation.
Comment: Published at http://dx.doi.org/10.1214/11-AOS876 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) (some typos corrected)
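Read as a penalized least squares problem with a penalty proportional to the rank, the RSC fit reduces to hard-thresholding the singular values of the projected response PY; a minimal sketch under that reading follows, with an illustrative rather than calibrated penalty level mu.

```python
# Minimal sketch of a rank selection criterion estimator: minimizing
# ||Y - XB||_F^2 + mu * rank(B) reduces to hard-thresholding the singular
# values of P Y, where P projects onto the column space of X. The penalty
# level mu below is illustrative, not the paper's calibrated constant.
import numpy as np

rng = np.random.default_rng(5)
m, p, n, r = 200, 30, 25, 3                 # observations, predictors, responses, true rank
X = rng.normal(size=(m, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, n))   # rank-r coefficients
Y = X @ B_true + rng.normal(size=(m, n))

P = X @ np.linalg.pinv(X)                   # projection onto col(X)
U, d, Vt = np.linalg.svd(P @ Y, full_matrices=False)
mu = 2.0 * (np.sqrt(n) + np.sqrt(p)) ** 2   # illustrative penalty level
keep = d > np.sqrt(mu)                      # hard threshold on singular values
fit = (U[:, keep] * d[keep]) @ Vt[keep]     # X @ B_hat
B_hat = np.linalg.pinv(X) @ fit
print("selected rank:", int(keep.sum()), "| true rank:", r)
```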
Efficient Optimization of Loops and Limits with Randomized Telescoping Sums
We consider optimization problems in which the objective requires an inner
loop with many steps or is the limit of a sequence of increasingly costly
approximations. Meta-learning, training recurrent neural networks, and
optimization of the solutions to differential equations are all examples of
optimization problems with this character. In such problems, it can be
expensive to compute the objective function value and its gradient, but
truncating the loop or using less accurate approximations can induce biases
that damage the overall solution. We propose randomized telescope (RT) gradient
estimators, which represent the objective as the sum of a telescoping series
and sample linear combinations of terms to provide cheap unbiased gradient
estimates. We identify conditions under which RT estimators achieve
optimization convergence rates independent of the length of the loop or the
required accuracy of the approximation. We also derive a method for tuning RT
estimators online to maximize a lower bound on the expected decrease in loss
per unit of computation. We evaluate our adaptive RT estimators on a range of
applications including meta-optimization of learning rates, variational
inference of ODE parameters, and training an LSTM to model long sequences.
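A minimal sketch of the unbiased telescoping idea on a toy loop (a fixed-point iteration) follows, with a single sampled term reweighted by its sampling probability; the geometric sampling distribution and truncation level are illustrative, and the paper's variance-optimal weighting and online tuning are not reproduced.

```python
# Minimal sketch of a randomized-telescope estimator. The loop's limit is
# written as L = L_0 + sum_i (L_i - L_{i-1}); sampling one index i ~ q and
# returning L_0 + (L_i - L_{i-1}) / q(i) is unbiased for the telescoping
# sum, however short the sampled loop is on average.
import numpy as np

rng = np.random.default_rng(6)

def L_i(i, x0=1.0):
    """Objective after i steps of the fixed-point loop x <- cos(x)."""
    x = x0
    for _ in range(i):
        x = np.cos(x)
    return x

def rt_estimate(rho=0.6, i_max=60):
    """Single-term RT estimate with a truncated geometric q(i) ~ rho^i."""
    q = rho ** np.arange(1, i_max + 1)
    q /= q.sum()
    i = rng.choice(i_max, p=q) + 1
    delta = L_i(i) - L_i(i - 1)          # telescoping increment
    return L_i(0) + delta / q[i - 1]

samples = [rt_estimate() for _ in range(20000)]
# the loop's true limit is the Dottie number, approx. 0.739085
print(f"RT estimate: {np.mean(samples):.4f} (limit approx. 0.7391)")
```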