Estimation from quantized Gaussian measurements: when and how to use dither
Subtractive dither is a powerful method for removing the signal dependence of quantization noise for coarsely quantized signals. However, estimation from dithered measurements often naively applies the sample mean or midrange, even when the total noise is not well described with a Gaussian or uniform distribution. We show that the generalized Gaussian distribution approximately describes subtractively dithered, quantized samples of a Gaussian signal. Furthermore, a generalized Gaussian fit leads to simple estimators based on order statistics that match the performance of more complicated maximum likelihood estimators requiring iterative solvers. The order statistics-based estimators outperform both the sample mean and midrange for nontrivial sums of Gaussian and uniform noise. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. Specifically, we find subtractive dither to be beneficial when the ratio between the Gaussian standard deviation and quantization interval length is roughly less than one-third. When that ratio is also greater than 0.822/K^0.930 for the number of measurements K > 20, estimators we present are more efficient than the midrange.
https://arxiv.org/abs/1811.06856
Accepted manuscript
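A minimal numpy sketch of the setup described above: repeated measurements of a constant signal in Gaussian noise are quantized with subtractive dither, and two classical location estimators are compared. All parameter values and function names here are illustrative assumptions, not the paper's code; the noise-to-step ratio is chosen below one-third, the regime where the abstract reports dither to be beneficial.

```python
# Illustrative sketch (assumed parameters, not the paper's implementation):
# subtractively dithered quantization of noisy measurements of a constant,
# followed by the sample mean and midrange as simple location estimators.
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, delta, rng):
    """Quantize x with step delta using subtractive dither drawn uniformly
    from [-delta/2, delta/2); the dither is subtracted again afterwards."""
    d = rng.uniform(-delta / 2, delta / 2, size=x.shape)
    return delta * np.round((x + d) / delta) - d

theta = 0.37      # unknown constant to estimate (hypothetical value)
sigma = 0.1       # Gaussian noise std; sigma/delta < 1/3, where dither helps
delta = 1.0       # quantization step
K = 100           # number of measurements

x = theta + sigma * rng.standard_normal(K)
y = dithered_quantize(x, delta, rng)

# Two classical location estimators applied to the dithered samples.
sample_mean = y.mean()
midrange = 0.5 * (y.min() + y.max())
print(f"mean: {sample_mean:.4f}, midrange: {midrange:.4f}, truth: {theta}")
```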
Labor supply models: unobserved heterogeneity, nonparticipation and dynamics
This chapter is concerned with the identification and estimation of models of labor supply. The focus is on the key issues that arise from unobserved heterogeneity, nonparticipation and dynamics. We examine the simple "static" labor supply model with proportional taxes and highlight the problems surrounding nonparticipation and missing wages. The difference-in-differences
approach to estimation and identification is developed within the context of the labour supply model. We also consider the impact of incorporating nonlinear taxation and welfare programme participation. Family labor supply is looked at from both the unitary and collective perspectives.
Finally, we consider intertemporal models, focusing on the difficulties that arise with participation and heterogeneity.
Optimal estimates of the diffusion coefficient of a single Brownian trajectory
Modern developments in microscopy and image processing are revolutionizing
areas of physics, chemistry and biology as nanoscale objects can be tracked
with unprecedented accuracy. The goal of single particle tracking is to
determine the interaction between the particle and its environment. The price
paid for having a direct visualization of a single particle is a consequent
lack of statistics. Here we address the optimal way of extracting diffusion
constants from single trajectories for pure Brownian motion. It is shown that
the maximum likelihood estimator is much more efficient than the commonly used
least squares estimate. Furthermore we investigate the effect of disorder on
the distribution of estimated diffusion constants and show that it increases
the probability of observing estimates much smaller than the true (average)
value.
Comment: 8 pages, 5 figures
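A hedged sketch of the two estimators being contrasted, in a standard one-dimensional form (assumed here, not taken verbatim from the paper): for pure Brownian motion sampled every dt, the increments are i.i.d. N(0, 2*D*dt), so the maximum likelihood estimate of D is the mean squared increment divided by 2*dt, while a common alternative fits the time-averaged MSD by least squares.

```python
# Sketch with assumed parameters: MLE vs least-squares estimates of the
# diffusion constant from a single 1-D Brownian trajectory.
import numpy as np

rng = np.random.default_rng(1)
D_true, dt, N = 0.5, 0.01, 200
x = np.cumsum(np.sqrt(2 * D_true * dt) * rng.standard_normal(N))

def D_mle(x, dt):
    # Increments are N(0, 2*D*dt), so the MLE is the mean squared increment / (2*dt).
    dx = np.diff(x)
    return np.mean(dx**2) / (2 * dt)

def D_lsq(x, dt, max_lag=20):
    # Least-squares slope of the time-averaged MSD(n*dt) = 2*D*n*dt through the origin.
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((x[n:] - x[:-n])**2) for n in lags])
    t = lags * dt
    slope = np.sum(t * msd) / np.sum(t**2)
    return slope / 2

print(f"MLE: {D_mle(x, dt):.3f}, LSQ: {D_lsq(x, dt):.3f}, truth: {D_true}")
```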
Entropy and information in neural spike trains: Progress on the sampling problem
The major problem in information theoretic analysis of neural responses and
other biological data is the reliable estimation of entropy--like quantities
from small samples. We apply a recently introduced Bayesian entropy estimator
to synthetic data inspired by experiments, and to real experimental spike
trains. The estimator performs admirably even very deep in the undersampled
regime, where other techniques fail. This opens new possibilities for the
information theoretic analysis of experiments, and may be of general interest
as an example of learning from limited data.
Comment: 7 pages, 4 figures; referee suggested changes, accepted version
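For background, a sketch of the naive plug-in entropy estimate and its Miller-Madow bias correction, which illustrate the undersampling bias the Bayesian estimator above is designed to overcome. This is not that Bayesian estimator; it is the standard baseline it is compared against, with an assumed toy "spike word" example.

```python
# Baseline entropy estimators (not the paper's Bayesian method): plug-in MLE
# and the Miller-Madow correction, applied to undersampled synthetic counts.
import numpy as np

def plugin_entropy_bits(counts):
    """Maximum likelihood (plug-in) entropy, in bits, from a vector of counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def miller_madow_bits(counts):
    """Plug-in estimate plus the (K_observed - 1) / (2 N ln 2) bias correction."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    k_obs = np.count_nonzero(counts)
    return plugin_entropy_bits(counts) + (k_obs - 1) / (2 * N * np.log(2))

# Example: only 50 samples from a uniform distribution over 64 "spike words"
# (true entropy = 6 bits), deep in the undersampled regime.
rng = np.random.default_rng(2)
counts = np.bincount(rng.integers(0, 64, size=50), minlength=64)
print(plugin_entropy_bits(counts), miller_madow_bits(counts), "true: 6.0")
```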
Shrinkage Estimation of the Power Spectrum Covariance Matrix
We seek to improve estimates of the power spectrum covariance matrix from a
limited number of simulations by employing a novel statistical technique known
as shrinkage estimation. The shrinkage technique optimally combines an
empirical estimate of the covariance with a model (the target) to minimize the
total mean squared error compared to the true underlying covariance. We test
this technique on N-body simulations and evaluate its performance by estimating
cosmological parameters. Using a simple diagonal target, we show that the
shrinkage estimator significantly outperforms both the empirical covariance and
the target individually when using a small number of simulations. We find that
reducing noise in the covariance estimate is essential for properly estimating
the values of cosmological parameters as well as their confidence intervals. We
extend our method to the jackknife covariance estimator and again find
significant improvement, though simulations give better results. Even for
thousands of simulations we still find evidence that our method improves
estimation of the covariance matrix. Because our method is simple, requires
negligible additional numerical effort, and produces superior results, we
always advocate shrinkage estimation for the covariance of the power spectrum
and other large-scale structure measurements when purely theoretical modeling
of the covariance is insufficient.
Comment: 9 pages, 7 figures (1 new), MNRAS, accepted. Changes to match accepted version, including an additional explanatory section with 1 figure
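A minimal illustration of the shrinkage idea under assumed toy settings (not the paper's pipeline): the empirical covariance of a handful of simulated realizations is shrunk toward a simple diagonal target. The shrinkage intensity `lam` is fixed by hand here; the paper derives a data-driven value that minimizes the mean squared error against the true covariance.

```python
# Toy shrinkage estimator: C = lam * T + (1 - lam) * S, combining a noisy
# empirical covariance S with a diagonal target T.
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_sims = 30, 20                      # fewer simulations than matrix dimension
true_cov = 0.1 * np.ones((n_bins, n_bins)) + np.eye(n_bins)
sims = rng.multivariate_normal(np.zeros(n_bins), true_cov, size=n_sims)

S = np.cov(sims, rowvar=False)               # noisy (here singular) empirical covariance
T = np.diag(np.diag(S))                      # simple diagonal target
lam = 0.3                                    # shrinkage intensity (hand-picked here)
C_shrink = lam * T + (1.0 - lam) * S

print("empirical condition number:", np.linalg.cond(S))
print("shrinkage condition number:", np.linalg.cond(C_shrink))
```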
Optimal Correlation Estimators for Quantized Signals
Using a maximum-likelihood criterion, we derive optimal correlation
strategies for signals with and without digitization. We assume that the
signals are drawn from zero-mean Gaussian distributions, as is expected in
radio-astronomical applications, and we present correlation estimators both
with and without a priori knowledge of the signal variances. We demonstrate
that traditional estimators of correlation, which rely on averaging products,
exhibit large and paradoxical noise when the correlation is strong. However, we
also show that these estimators are fully optimal in the limit of vanishing
correlation. We calculate the bias and noise in each of these estimators and
discuss their suitability for implementation in modern digital correlators.
Comment: 8 Pages, 3 Figures, Submitted to Ap
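A simplified sketch of the comparison described above, restricted to the undigitized case with known unit variances (an assumption made here for brevity): the traditional product-averaging estimator of the correlation versus a maximum-likelihood estimate obtained by numerically maximizing the bivariate Gaussian likelihood in the correlation coefficient. This is not the paper's derivation, only an illustration of the two approaches.

```python
# Product-averaging vs numerical ML estimation of the correlation of two
# zero-mean, unit-variance Gaussian signals (assumed simplification).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
rho_true, N = 0.9, 1000
cov = [[1.0, rho_true], [rho_true, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=N).T

# Traditional estimator: average of products (optimal as the correlation vanishes).
r_products = np.mean(x * y)

# ML estimator: maximize the bivariate Gaussian log-likelihood over rho in (-1, 1).
def neg_loglike(rho):
    q = np.sum(x**2 + y**2) - 2.0 * rho * np.sum(x * y)
    return 0.5 * N * np.log(1.0 - rho**2) + q / (2.0 * (1.0 - rho**2))

r_ml = minimize_scalar(neg_loglike, bounds=(-0.999, 0.999), method="bounded").x
print(f"products: {r_products:.4f}, ML: {r_ml:.4f}, truth: {rho_true}")
```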
Levy-stable distributions revisited: tail index > 2 does not exclude the Levy-stable regime
Power-law tail behavior and the summation scheme of Levy-stable distributions
is the basis for their frequent use as models when fat tails above a Gaussian
distribution are observed. However, recent studies suggest that financial asset
returns exhibit tail exponents well above the Levy-stable regime (α ≤ 2). In this paper we illustrate that widely used tail index estimates (log-log linear regression and Hill) can give exponents well above the asymptotic limit for α close to 2, resulting in overestimation of the tail exponent in finite samples. The reported value of the tail exponent around 3 may very well indicate a Levy-stable distribution with α < 2.
Comment: To be published in Int. J. Modern Physics C (2001) vol. 12 no.
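The Hill estimator referred to above, in its standard textbook form (the test data and choices of k below are assumptions for illustration only): the tail exponent is estimated from the k largest order statistics, and the result depends strongly on the choice of k.

```python
# Standard Hill estimator of the tail exponent, applied to Pareto test data
# with a known tail index (illustrative choice, not from the paper).
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail exponent from the k largest values of |x|."""
    tail = np.sort(np.abs(np.asarray(x)))[-(k + 1):]   # k+1 largest order statistics
    logs = np.log(tail[1:]) - np.log(tail[0])           # log exceedances over x_(n-k)
    return 1.0 / np.mean(logs)

rng = np.random.default_rng(5)
alpha_true = 1.7
x = rng.pareto(alpha_true, size=10_000) + 1.0           # Pareto with tail index 1.7
for k in (50, 200, 1000):
    print(k, round(hill_estimator(x, k), 3))
```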
Monte Carlo-based tail exponent estimator
In this paper we propose a new approach to estimation of the tail exponent in
financial stock markets. We begin the study with the finite sample behavior of
the Hill estimator under α-stable distributions. Using large Monte Carlo
simulations, we show that the Hill estimator overestimates the true tail
exponent and can hardly be used on samples with small length. Utilizing our
results, we introduce a Monte Carlo-based method of estimation for the tail
exponent. Our proposed method is not sensitive to the choice of tail size and
works well also on small data samples. The new estimator also gives unbiased
results with symmetrical confidence intervals. Finally, we demonstrate the
power of our estimator on the international world stock market indices. On the
two separate periods of 2002-2005 and 2006-2009, we estimate the tail exponent
…
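A hedged sketch of the idea behind the Monte Carlo approach (the sample sizes, tail fraction, and α value are assumptions, and this is not the paper's calibrated procedure): simulate α-stable samples with a known exponent and apply the Hill estimator repeatedly, exposing the finite-sample overestimation that motivates a simulation-based correction.

```python
# Monte Carlo demonstration of the Hill estimator's bias under alpha-stable data.
import numpy as np
from scipy.stats import levy_stable

def hill(x, k):
    tail = np.sort(np.abs(x))[-(k + 1):]
    return 1.0 / np.mean(np.log(tail[1:] / tail[0]))

rng = np.random.default_rng(6)
alpha_true, n, k, n_mc = 1.8, 2000, 100, 20
estimates = []
for _ in range(n_mc):
    x = levy_stable.rvs(alpha_true, beta=0.0, size=n, random_state=rng)
    estimates.append(hill(x, k))

# For alpha close to 2, the Hill estimates typically come out well above the
# true value, consistent with the overestimation described in the abstract.
print(f"true alpha: {alpha_true}, mean Hill estimate: {np.mean(estimates):.2f}")
```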