Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion
The Negative Binomial distribution becomes highly skewed under extreme
dispersion. Even at moderately large sample sizes, the sample mean exhibits a
heavy right tail. The standard Normal approximation often does not provide
adequate inferences about the data's mean in this setting. In previous work, we
have examined alternative methods of generating confidence intervals for the
expected value. These methods were based upon Gamma and Chi Square
approximations or tail probability bounds such as Bernstein's Inequality. We
now propose growth estimators of the Negative Binomial mean. Under high
dispersion, zero values are likely to be overrepresented in the data. A growth
estimator constructs a Normal-style confidence interval by effectively removing
a small, pre-determined number of zeros from the data. We propose growth
estimators based upon multiplicative adjustments of the sample mean and direct
removal of zeros from the sample. These methods do not require estimating the
nuisance dispersion parameter. We demonstrate that the growth estimators'
confidence intervals provide improved coverage over a wide range of parameter
values and asymptotically converge to the sample mean. Interestingly, the
proposed methods succeed despite adding both bias and variance to the Normal
approximation.
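The "direct removal" variant described above can be sketched concretely. This is a minimal illustration, not the paper's exact estimator: the function name `growth_interval`, the default of removing `k` zeros, and the use of a plain Normal-approximation interval on the trimmed sample are all assumptions for illustration.

```python
import math

def growth_interval(sample, k=1, z=1.96):
    """Illustrative direct-removal growth interval (hypothetical sketch):
    drop up to k zeros from the sample, then form the usual
    Normal-approximation confidence interval for the mean."""
    removed = 0
    trimmed = []
    for x in sample:
        if x == 0 and removed < k:
            removed += 1          # discard this zero
            continue
        trimmed.append(x)
    n = len(trimmed)
    mean = sum(trimmed) / n
    var = sum((x - mean) ** 2 for x in trimmed) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean - half, mean + half
```

Removing zeros pulls the point estimate upward, which is why, as the abstract notes, the construction trades a small bias for better coverage under heavy right skew.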
Generating and Revealing a Quantum Superposition of Electromagnetic Field Binomial States in a Cavity
We introduce the N-photon quantum superposition of two orthogonal
generalized binomial states of the electromagnetic field. We then propose, using
resonant atom-cavity interactions, non-conditional schemes to generate and
reveal such a quantum superposition for the two-photon case in a single-mode
high-Q cavity. We finally discuss the implementation of the proposed schemes.
Comment: 4 pages, 3 figures. Title changed (published version)
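For orientation, the amplitudes of a binomial field state follow the standard (phase-free) definition |N, p⟩ = Σₙ √(C(N,n) pⁿ (1−p)^(N−n)) |n⟩; the generalized states in the paper also carry a relative phase, which this sketch omits. The function name is illustrative.

```python
import math

def binomial_state(N, p):
    """Fock-basis amplitudes of the (phase-free) binomial state |N, p>.
    Entry n is sqrt(C(N,n) * p**n * (1-p)**(N-n)); the squared
    amplitudes are Binomial(N, p) probabilities, so the state is normalized."""
    return [math.sqrt(math.comb(N, n) * p**n * (1 - p)**(N - n))
            for n in range(N + 1)]
```

For the two-photon case discussed in the abstract, N = 2 gives a three-component vector over the Fock states |0⟩, |1⟩, |2⟩.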
Zooplankton patchiness
This review considers three general aspects of research on zooplankton patchiness: the detection of patchiness, the description of patchiness and the causes of patchiness.
Effective Classification using a small Training Set based on Discretization and Statistical Analysis
This work deals with the problem of producing a fast and accurate data classification, learning it from a possibly small set of records that are already classified. The proposed approach is based on the framework of the so-called Logical Analysis of Data (LAD), but enriched with information obtained from statistical considerations on the data. A number of discrete optimization problems are solved in the different steps of the procedure, but their computational demand can be controlled. The accuracy of the proposed approach is compared to that of the standard LAD algorithm, of Support Vector Machines and of the Label Propagation algorithm on publicly available datasets of the UCI repository. Encouraging results are obtained and discussed.
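The discretization step that LAD-style methods rely on can be illustrated with the classic candidate-cutpoint construction: place a threshold between consecutive distinct feature values whose neighbours carry different class labels. This is a generic sketch of that step, not the paper's statistically enriched procedure; the function name is hypothetical.

```python
def cutpoints(values, labels):
    """Candidate cutpoints for binarizing one numeric feature:
    midpoints between consecutive distinct values whose sorted
    neighbours have different class labels (generic LAD-style step)."""
    pairs = sorted(zip(values, labels))
    cps = []
    for (v1, l1), (v2, l2) in zip(pairs, pairs[1:]):
        if v1 != v2 and l1 != l2:   # class change across distinct values
            cps.append((v1 + v2) / 2)
    return cps
```

Each cutpoint c then yields a Boolean attribute "value > c", and the downstream optimization selects a small subset of these attributes.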
XSummer - Transcendental Functions and Symbolic Summation in Form
Harmonic sums and their generalizations are extremely useful in the
evaluation of higher-order perturbative corrections in quantum field theory. Of
particular interest have been the so-called nested sums, where the harmonic sums
and their generalizations appear as building blocks, originating for example
from the expansion of generalized hypergeometric functions around integer
values of the parameters. In this Letter we discuss the implementation of
several algorithms to solve these sums by algebraic means, using the computer
algebra system Form.
Comment: 21 pages, 1 figure, LaTeX
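The building blocks mentioned above have a simple recursive definition: for positive indices, S_{k1,...,kd}(n) = Σ_{i=1}^n i^{-k1} S_{k2,...,kd}(i), with the empty sum equal to 1. A minimal exact-arithmetic sketch (XSummer itself works symbolically in Form; this just evaluates the sums at integer arguments):

```python
from fractions import Fraction

def S(n, *ks):
    """Nested harmonic sum with positive indices:
    S_{k1,k2,...}(n) = sum_{i=1}^n i**(-k1) * S_{k2,...}(i),
    with S(n) = 1 for an empty index list."""
    if not ks:
        return Fraction(1)
    k, rest = ks[0], ks[1:]
    return sum(Fraction(1, i**k) * S(i, *rest) for i in range(1, n + 1))
```

For example, S(n, 1) is the ordinary harmonic number H_n, and S(n, 1, 1) is the depth-two nested sum appearing in epsilon-expansions of hypergeometric functions.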
Jeffreys-prior penalty, finiteness and shrinkage in binomial-response generalized linear models
Penalization of the likelihood by Jeffreys' invariant prior, or by a positive
power thereof, is shown to produce finite-valued maximum penalized likelihood
estimates in a broad class of binomial generalized linear models. The class of
models includes logistic regression, where the Jeffreys-prior penalty is known
additionally to reduce the asymptotic bias of the maximum likelihood estimator;
and also models with other commonly used link functions such as probit and
log-log. Shrinkage towards equiprobability across observations, relative to the
maximum likelihood estimator, is established theoretically and is studied
through illustrative examples. Some implications of finiteness and shrinkage
for inference are discussed, particularly when inference is based on Wald-type
procedures. A widely applicable procedure is developed for computation of
maximum penalized likelihood estimates, by using repeated maximum likelihood
fits with iteratively adjusted binomial responses and totals. These theoretical
results and methods underpin the increasingly widespread use of reduced-bias
and similarly penalized binomial regression models in many applied fields.
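For the logistic-regression special case, the Jeffreys-prior penalty coincides with Firth's bias-reducing adjustment, and the penalized estimate can be found by Newton iteration on an adjusted score. The sketch below uses the well-known leverage-based adjusted score for logistic regression; it is an illustration of the idea, not the repeated-ML-fit algorithm the abstract develops.

```python
import numpy as np

def firth_logistic(X, y, n_iter=100, tol=1e-8):
    """Jeffreys-penalized (Firth-type) logistic regression:
    Newton steps on the adjusted score
        U*(b) = X' (y - mu + h * (1/2 - mu)),
    where h are the leverages of the weighted least-squares fit.
    Estimates stay finite even under complete separation."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = mu * (1.0 - mu)
        info = X.T @ (X * w[:, None])            # Fisher information
        h = w * np.einsum("ij,jk,ik->i", X, np.linalg.inv(info), X)
        score = X.T @ (y - mu + h * (0.5 - mu))  # Jeffreys-adjusted score
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

On perfectly separated data, where the ordinary maximum likelihood estimate diverges, this iteration converges to a finite slope, which is the finiteness property the abstract establishes for the broader class of binomial links.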