The local power of the gradient test
The asymptotic expansion of the distribution of the gradient test statistic
is derived for a composite hypothesis under a sequence of Pitman alternative
hypotheses converging to the null hypothesis at rate n^{-1/2}, n being the
sample size. Comparisons of the local powers of the gradient, likelihood ratio,
Wald and score tests reveal no uniform superiority property. The power
performance of all four criteria in the one-parameter exponential family is
examined.

Comment: To appear in the Annals of the Institute of Statistical Mathematics,
http://www.ism.ac.jp/editsec/aism-e.htm
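For reference, the gradient statistic compared above has a standard form due to Terrell (2002); the sketch below assumes the usual notation, with U the score function, \hat{\theta} the unrestricted maximum likelihood estimator, and \tilde{\theta} the estimator restricted by the null hypothesis:

```latex
% Gradient test statistic (Terrell, 2002):
%   U(\theta)      -- score function, \partial \ell / \partial \theta
%   \hat{\theta}   -- unrestricted MLE
%   \tilde{\theta} -- restricted (null-hypothesis) MLE
S_T = U(\tilde{\theta})^{\top} \bigl( \hat{\theta} - \tilde{\theta} \bigr)
```

Unlike the Wald and score statistics, S_T requires neither the expected nor the observed information matrix, which is part of its practical appeal.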
Kernel density classification and boosting: an L2 analysis
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is "boosting", and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
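The basic setup the abstract refers to can be sketched as follows: estimate one density per class and assign a point to the class whose estimated density is larger. This is a minimal one-dimensional illustration, not the paper's algorithm; the bandwidths `h0`, `h1` and the synthetic data are placeholders (the paper's point is precisely that the classification-optimal bandwidths differ from the estimation-optimal ones).

```python
import numpy as np

def gaussian_kde_1d(data, x, h):
    # Evaluate a 1-D Gaussian kernel density estimate at points x
    # with bandwidth h: (1/nh) * sum_i K((x - X_i)/h).
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_classify(x, class0, class1, h0, h1):
    # Assign each point in x to the class with the larger estimated density.
    f0 = gaussian_kde_1d(class0, x, h0)
    f1 = gaussian_kde_1d(class1, x, h1)
    return (f1 > f0).astype(int)

rng = np.random.default_rng(0)
a = rng.normal(-1.0, 1.0, 200)   # synthetic class-0 sample (illustrative)
b = rng.normal(+1.0, 1.0, 200)   # synthetic class-1 sample (illustrative)
labels = kde_classify(np.array([-2.0, 2.0]), a, b, 0.4, 0.4)
print(labels)
```

A point far to the left is assigned to the left-shifted class and vice versa; the decision depends only on the sign of the estimated density difference, which is why smoothing choices tuned for that difference matter.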
Texture Regimes for Entropy-Based Multiscale Image Analysis
Abstract. We present an approach to multiscale image analysis. It hinges on an operative definition of texture that involves a "small region", where some (unknown) statistic is aggregated, and a "large region" within which it is stationary. At each point, multiple small and large regions co-exist at multiple scales, as image structures are pooled by the scaling and quantization process to form "textures", and then transitions between textures again define "structures". We present a technique to learn and agglomerate sparse bases at multiple scales. To do so efficiently, we propose an analysis of cluster statistics after a clustering step is performed, and a new clustering method with linear-time performance. In both cases, we can infer all the "small" and "large" regions at multiple scales in one shot.
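The abstract does not specify its linear-time clustering method, so the following is only a generic illustration of the flavour of such algorithms: a one-pass "leader" scheme in which each point joins the first existing cluster centre within a fixed radius, or founds a new cluster. The function name, radius parameter, and data are all hypothetical.

```python
import numpy as np

def leader_clustering(points, radius):
    # One-pass "leader" clustering: each point joins the first existing
    # cluster centre within `radius`, otherwise it starts a new cluster.
    # A single pass over the data gives linear-time behaviour when the
    # number of centres stays bounded.
    centres, labels = [], []
    for p in points:
        for i, c in enumerate(centres):
            if np.linalg.norm(p - c) <= radius:
                labels.append(i)
                break
        else:
            centres.append(p)
            labels.append(len(centres) - 1)
    return np.array(centres), np.array(labels)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
centres, labels = leader_clustering(pts, 1.0)
print(len(centres), labels)  # two clusters: [0 0 1 1]
```

Schemes of this kind trade the iterative refinement of k-means for a single streaming pass, which is the kind of efficiency the abstract's "one shot" inference alludes to.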