Shrinkage Confidence Procedures
The possibility of improving on the usual multivariate normal confidence set was
first discussed in Stein (1962). Using the ideas of shrinkage, through Bayesian
and empirical Bayesian arguments, domination results, both analytic and
numerical, have been obtained. Here we trace some of these developments in
confidence set estimation.
Comment: Published at http://dx.doi.org/10.1214/10-STS319 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
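The point estimation underlying these confidence procedures can be illustrated with a short simulation. This is a generic sketch, not code from the paper: it compares the squared-error risk of the usual estimator of a multivariate normal mean with the positive-part James-Stein shrinkage estimator, which dominates it whenever the dimension p >= 3; the dimension, true mean, and sample size below are arbitrary choices.

```python
import numpy as np

# Illustrative sketch (not from the paper): shrinking the usual estimator
# X toward the origin reduces squared-error risk when p >= 3.
rng = np.random.default_rng(0)
p, n_sims = 10, 20000
theta = np.full(p, 0.5)                       # hypothetical true mean
X = rng.normal(theta, 1.0, size=(n_sims, p))  # X ~ N_p(theta, I)

norm2 = np.sum(X**2, axis=1)
shrink = np.maximum(1.0 - (p - 2) / norm2, 0.0)  # positive-part James-Stein
js = shrink[:, None] * X

risk_mle = np.mean(np.sum((X - theta) ** 2, axis=1))   # approx. p
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))   # strictly smaller
print(risk_mle, risk_js)
```

The confidence procedures surveyed in the paper go a step further, recentering confidence sets at such shrinkage estimators to gain coverage or volume.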
Statistics for the Luria-Delbrück distribution
The Luria-Delbrück distribution is a classical model of mutations in cell
kinetics. It is obtained as a limit when the probability of mutation tends to
zero and the number of divisions to infinity. It can be interpreted as a
compound Poisson distribution (for the number of mutations) of exponential
mixtures (for the developing time of mutant clones) of geometric distributions
(for the number of cells produced by a mutant clone in a given time). The
probabilistic interpretation, and a rigorous proof of convergence in the
general case, are deduced from classical results on Bellman-Harris branching
processes. The two parameters of the Luria-Delbrück distribution are the
expected number of mutations, which is the parameter of interest, and the
relative fitness of normal cells compared to mutants, which is the heavy-tail
exponent. Both can be estimated simultaneously by the maximum likelihood
method. However, the computation becomes numerically unstable as soon as the
maximal value of the sample is large, which occurs frequently due to the
heavy-tail property. Based on the empirical generating function, robust
estimators are proposed and their asymptotic variance is given. They are
comparable in precision to maximum likelihood estimators, with a much broader
range of calculability, better numerical stability, and negligible computing time.
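The compound-Poisson structure described above makes the distribution easy to simulate. The sketch below assumes the simplest form of the model (relative fitness equal to 1), in which the final size K of a mutant clone satisfies P(K >= k) = 1/k; the parameter value m = 2 is an arbitrary illustration.

```python
import numpy as np

# Minimal simulation sketch of the Luria-Delbruck distribution, assuming
# relative fitness 1: a Poisson(m) number of mutations per culture, each
# founding a clone whose size K is heavy-tailed with P(K >= k) = 1/k,
# so K can be drawn as floor(1/U) for U uniform on (0, 1).
rng = np.random.default_rng(1)

def sample_ld(m, size):
    """Draw `size` mutant counts from the LD(m) distribution."""
    counts = np.empty(size, dtype=np.int64)
    n_mut = rng.poisson(m, size)  # number of mutations per culture
    for i, n in enumerate(n_mut):
        u = rng.uniform(size=n)
        counts[i] = np.sum(np.floor(1.0 / u)) if n else 0
    return counts

x = sample_ld(m=2.0, size=5000)
# The sample maximum is typically orders of magnitude above the median;
# this heavy tail is what destabilizes maximum-likelihood computations.
print(np.median(x), x.max())
```

The estimators proposed in the paper sidestep this instability by working with the empirical generating function of such samples rather than the likelihood.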
On estimating the reliability in a multicomponent system based on progressively-censored data from Chen distribution
This research deals with classical, Bayesian, and generalized estimation of the stress-strength reliability parameter, Rs,k = Pr(at least s of (X1, X2, ..., Xk) exceed Y) = Pr(X(k-s+1:k) > Y), of an s-out-of-k : G multicomponent system, based on progressively type-II right-censored samples with random removals, when stress and strength are two independent Chen random variables. Under squared-error and LINEX loss functions, Bayes estimates are developed using Lindley's approximation and the Markov chain Monte Carlo method. Generalized estimates are developed using the generalized variable method, while classical estimates (the maximum likelihood estimators, their asymptotic distributions, asymptotic confidence intervals, and bootstrap-based confidence intervals) are also developed. A simulation study and a real-world data analysis are provided to illustrate the proposed procedures. The size of the test, adjusted and unadjusted power of the test, coverage probability and expected lengths of the confidence intervals, and biases of the estimators are also computed, compared, and contrasted.
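The quantity Rs,k itself is easy to approximate by plain Monte Carlo, which is a useful sanity check on any of the estimators above. The sketch below is not the paper's method: it samples the Chen distribution, whose CDF is F(x) = 1 - exp{lam(1 - exp(x^beta))}, by inverting the CDF, and checks the s = k = 1 case against the closed form P(X > Y) = lam_y / (lam_x + lam_y), which holds when X and Y share the shape beta. All parameter values are made up.

```python
import numpy as np

# Hypothetical illustration: Monte Carlo evaluation of the multicomponent
# stress-strength reliability R_{s,k} = Pr(at least s of X_1..X_k exceed Y)
# for X_i ~ Chen(beta, lam_x) and Y ~ Chen(beta, lam_y), common shape beta.
rng = np.random.default_rng(2)

def rchen(lam, beta, size):
    """Inverse-CDF sampler for Chen: F(x) = 1 - exp{lam*(1 - exp(x^beta))}."""
    u = rng.uniform(size=size)
    return (np.log(1.0 - np.log1p(-u) / lam)) ** (1.0 / beta)

def r_sk_mc(s, k, lam_x, lam_y, beta, n=200_000):
    x = rchen(lam_x, beta, (n, k))
    y = rchen(lam_y, beta, (n, 1))
    return np.mean(np.sum(x > y, axis=1) >= s)

# Sanity check: for s = k = 1 and common beta, R = lam_y / (lam_x + lam_y).
est = r_sk_mc(1, 1, lam_x=1.5, lam_y=0.5, beta=0.8)
print(est, 0.5 / 2.0)  # Monte Carlo estimate vs. exact value 0.25
```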
Change-point Problem and Regression: An Annotated Bibliography
The problems of identifying changes at unknown times and of estimating the location of changes in stochastic processes are referred to as the change-point problem or, in the Eastern literature, as the disorder problem.
The change-point problem, first introduced in the quality control context, has since developed into a fundamental problem in the areas of statistical control theory, stationarity of a stochastic process, estimation of the current position of a time series, testing and estimation of change in the patterns of a regression model, and most recently in the comparison and matching of DNA sequences in microarray data analysis.
Numerous methodological approaches have been implemented in examining change-point models. Maximum-likelihood estimation, Bayesian estimation, isotonic regression, piecewise regression, quasi-likelihood and non-parametric regression are among the methods which have been applied to resolving challenges in change-point problems. Grid-searching approaches have also been used to examine the change-point problem.
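The grid-searching approach mentioned above is the most direct of these methods for two-phase regression, and can be sketched on synthetic data (all numbers below are illustrative):

```python
import numpy as np

# Toy sketch of grid search for a two-phase (broken-line) regression:
# fit separate least-squares lines on each side of every candidate
# change point and keep the split minimizing the total residual sum
# of squares.  The data are synthetic, with a true change point at 60.
rng = np.random.default_rng(3)
n, tau_true = 100, 60
x = np.arange(n, dtype=float)
y = np.where(x < tau_true, 0.5 * x, 0.5 * tau_true + 2.0 * (x - tau_true))
y += rng.normal(0.0, 1.0, n)

def rss(xs, ys):
    """Residual sum of squares of a simple linear fit."""
    coef = np.polyfit(xs, ys, 1)
    return np.sum((ys - np.polyval(coef, xs)) ** 2)

# Search over admissible change points (at least 3 points per segment).
candidates = range(3, n - 3)
tau_hat = min(candidates, key=lambda t: rss(x[:t], y[:t]) + rss(x[t:], y[t:]))
print(tau_hat)  # close to the true change point at 60
```

The likelihood-based and Bayesian methods in the bibliography refine this idea with formal tests and uncertainty statements for the estimated change point.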
Statistical analysis of change-point problems depends on the method of data collection. If the data collection is ongoing until some random time, then the appropriate statistical procedure is called sequential. If, however, a large finite set of data is collected with the purpose of determining if at least one change-point occurred, then this may be referred to as non-sequential. Not surprisingly, both the former and the latter have a rich literature with much of the earlier work focusing on sequential methods inspired by applications in quality control for industrial processes. In the regression literature, the change-point model is also referred to as two- or multiple-phase regression, switching regression, segmented regression, two-stage least squares (Shaban, 1980), or broken-line regression.
The area of the change-point problem has been the subject of intensive research in the past half-century. The subject has evolved considerably and found applications in many different areas. It seems rather impossible to summarize all of the research carried out over the past 50 years on the change-point problem. We have therefore confined ourselves to those articles on change-point problems which pertain to regression.
The important branch of sequential procedures in change-point problems has been left out entirely. We refer the reader to the seminal review papers by Lai (1995, 2001). The so-called structural change models, which occupy a considerable portion of the research on change points, particularly among econometricians, have not been fully considered. We refer the reader to Perron (2005) for an updated review of this area. Articles on change points in time series are considered only if the methodologies presented pertain to regression analysis.