
    Adaptive procedures in convolution models with known or partially known noise distribution

    In a convolution model, we observe random variables whose distribution is the convolution of some unknown density f and some known or partially known noise density g. In this paper, we focus on statistical procedures that are adaptive with respect to the smoothness parameter \tau of the unknown density f and, in some cases, to an unknown parameter of the noise density g. In the first part, we assume that g is known and polynomially smooth. We provide goodness-of-fit procedures for the test H_0: f = f_0, where the alternative H_1 is expressed with respect to the L_2-norm. Our procedure, adaptive with respect to \tau, behaves differently according to whether f_0 is polynomially or exponentially smooth. A price for adaptation appears in both cases; to compute it, we provide a non-uniform Berry-Esseen-type theorem for degenerate U-statistics. In the former case, we prove that this price is optimal (hence unavoidable). In the second part, we study a wider framework: a semiparametric model in which g is exponentially smooth and stable, with unknown self-similarity index s. In order to ensure identifiability, we restrict our attention to polynomially smooth, Sobolev-type densities f. In this context, we provide a consistent estimation procedure for s. This estimator is then plugged into three different procedures: estimation of the unknown density f, estimation of the functional \int f^2, and a test of the hypothesis H_0. These procedures are adaptive with respect to both s and \tau and attain the rates known to be optimal when s and \tau are known. As a by-product, when the noise is known and exponentially smooth, our testing procedure is adaptive for testing Sobolev-type densities. Comment: 35 pages + 8-page appendix.
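
    To make the setting concrete, the following Python sketch (illustrative only, not the paper's adaptive procedure) simulates observations Y = X + eps with known Laplace noise, whose characteristic function decays polynomially, and computes a spectral cut-off deconvolution estimate of f. The noise law, the sample size and the fixed cutoff are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)

        # Convolution model: Y = X + eps.  X has the unknown density f (here N(0,1), only
        # used to generate data), eps is Laplace(0,1) noise with characteristic function
        # phi_g(t) = 1/(1 + t^2), which decays polynomially ("polynomially smooth" noise).
        n = 2000
        X = rng.normal(0.0, 1.0, n)
        eps = rng.laplace(0.0, 1.0, n)
        Y = X + eps

        def deconvolution_estimate(y, x_grid, cutoff):
            """Spectral cut-off deconvolution estimate of f.

            f_hat(x) = (1/2pi) * integral_{|t| <= cutoff} exp(-itx) phi_hat_Y(t) / phi_g(t) dt,
            where phi_hat_Y is the empirical characteristic function of the observations.
            The cutoff acts as a smoothing parameter and is fixed by hand here, whereas the
            paper's procedures choose such tuning parameters adaptively.
            """
            t = np.linspace(-cutoff, cutoff, 801)
            dt = t[1] - t[0]
            phi_Y = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical c.f. of Y
            phi_g = 1.0 / (1.0 + t**2)                          # known noise c.f. (Laplace)
            integrand = np.exp(-1j * np.outer(x_grid, t)) * (phi_Y / phi_g)
            return np.real(integrand.sum(axis=1)) * dt / (2.0 * np.pi)

        x_grid = np.linspace(-4.0, 4.0, 161)
        f_hat = deconvolution_estimate(Y, x_grid, cutoff=2.5)   # rough estimate of the N(0,1) density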

    Sparse classification boundaries

    Given a training sample of size m from a d-dimensional population, we wish to allocate a new observation Z \in R^d to this population or to the noise. We suppose that the difference between the distribution of the population and that of the noise is only in a shift, which is a sparse vector. For Gaussian noise, fixed sample size m, and dimension d tending to infinity, we obtain the sharp classification boundary and propose classifiers attaining this boundary. We also extend this result to the case where the sample size m depends on d and satisfies (\log m)/\log d \to \gamma, 0 \le \gamma < 1, and to the case of non-Gaussian noise satisfying the Cramér condition.
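
    A minimal sketch of the sparse-shift setting follows, assuming Gaussian noise and a generic plug-in thresholding classifier chosen for illustration; the sharp boundary and the calibrated classifiers of the paper are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        # Population N(theta, I_d) with a sparse shift theta, noise N(0, I_d).
        d, m, k = 10_000, 50, 20
        theta = np.zeros(d)
        theta[:k] = 1.5 * np.sqrt(2.0 * np.log(d) / m)      # illustrative signal strength

        train = rng.standard_normal((m, d)) + theta          # training sample from the population
        Z_pop = rng.standard_normal(d) + theta               # new observation from the population
        Z_noise = rng.standard_normal(d)                     # new observation from the pure noise

        def allocate_to_population(Z, train):
            """Plug-in linear rule after hard thresholding of the training mean.

            This generic sparse classifier only illustrates the problem; the classifiers of
            the paper are calibrated to attain the sharp boundary, in particular in the
            regime (log m)/log d -> gamma.
            """
            mean = train.mean(axis=0)
            thr = np.sqrt(2.0 * np.log(train.shape[1]) / train.shape[0])
            theta_hat = np.where(np.abs(mean) > thr, mean, 0.0)   # keep only large coordinates
            # Fisher-type score: positive values point to the shifted population.
            return Z @ theta_hat - 0.5 * np.sum(theta_hat**2) > 0.0

        # Expected output (with high probability): True for Z_pop, False for Z_noise.
        print(allocate_to_population(Z_pop, train), allocate_to_population(Z_noise, train))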

    Adaptivity in convolution models with partially known noise distribution

    We consider a semiparametric convolution model. We observe random variables having a distribution given by the convolution of some unknown density f and some partially known noise density g. In this work, g is assumed exponentially smooth with stable law having unknown self-similarity index s. In order to ensure identifiability of the model, we restrict our attention to polynomially smooth, Sobolev-type densities f with smoothness parameter \beta. In this context, we first provide a consistent estimation procedure for s. This estimator is then plugged into three different procedures: estimation of the unknown density f, estimation of the functional \int f^2, and a goodness-of-fit test of the hypothesis H_0: f = f_0, where the alternative H_1 is expressed with respect to the L_2-norm (i.e. has the form \psi_n^{-2} \|f - f_0\|_2^2 \ge \mathcal{C}). These procedures are adaptive with respect to both s and \beta and attain the rates known to be optimal when s and \beta are known. As a by-product, when the noise density is known and exponentially smooth, our testing procedure is optimal adaptive for testing Sobolev-type densities. The estimation procedure for s is illustrated on synthetic data.
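
    The idea of recovering the self-similarity index s from the decay of the empirical characteristic function can be illustrated as follows; the noise law (Cauchy, i.e. s = 1), the signal density and the two fixed frequencies are assumptions of this sketch, and the paper's consistent estimator is more refined.

        import numpy as np

        rng = np.random.default_rng(2)

        # Semiparametric convolution model: Y = X + eps, where eps is symmetric stable noise
        # with characteristic function exp(-|t|^s) and unknown index s.  Here s = 1 (Cauchy),
        # so the noise can be simulated exactly; X is Laplace, a polynomially smooth signal.
        n = 200_000
        s_true = 1.0
        X = rng.laplace(0.0, 1.0, n)
        eps = rng.standard_cauchy(n)
        Y = X + eps

        def estimate_index(y, t1=1.5, t2=2.5):
            """Crude estimate of s from the decay of the empirical characteristic function.

            Since |phi_Y(t)| = |phi_X(t)| * exp(-|t|^s) and |phi_X| decays only polynomially,
            -log|phi_Y(t)| grows roughly like |t|^s; comparing two moderate frequencies gives
            a slope estimate of s on the log-log scale.  The frequencies are fixed here,
            whereas the paper's estimator selects them carefully and comes with a proof of
            consistency.
            """
            abs_cf = lambda t: np.abs(np.exp(1j * t * y).mean())
            g1, g2 = -np.log(abs_cf(t1)), -np.log(abs_cf(t2))
            return (np.log(g2) - np.log(g1)) / (np.log(t2) - np.log(t1))

        print(estimate_index(Y))   # rough estimate of s_true = 1.0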

    Adaptive goodness-of-fit testing from indirect observations

    In a convolution model, we observe random variables whose distribution is the convolution of some unknown density f and some known noise density g. We assume that g is polynomially smooth. We provide goodness-of-fit testing procedures for the test H_0: f = f_0, where the alternative H_1 is expressed with respect to the L_2-norm (i.e. has the form \psi_n^{-2} \|f - f_0\|_2^2 \ge \mathcal{C}). Our procedure is adaptive with respect to the unknown smoothness parameter \tau of f. Different testing rates (\psi_n) are obtained according to whether f_0 is polynomially or exponentially smooth. A price for adaptation appears; to compute it, we provide a non-uniform Berry-Esseen-type theorem for degenerate U-statistics. In the case of polynomially smooth f_0, we prove that the price for adaptation is optimal. We emphasise that the alternative may contain functions smoother than the null density to be tested, which is new in the context of goodness-of-fit tests.
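
    The sketch below shows, under an assumed Laplace noise and a fixed truncation level, how the distance \|f - f_0\|_2^2 can be estimated in the Fourier domain from the indirect observations; it is a simplified stand-in for the paper's adaptive test and computes no critical values.

        import numpy as np

        rng = np.random.default_rng(3)

        # Convolution model Y = X + eps with known Laplace noise, phi_g(t) = 1/(1 + t^2).
        # We want to assess H_0: f = f_0 with f_0 the N(0,1) density; here the true f is a
        # shifted normal, so H_0 is false and the statistic should come out clearly positive.
        n = 5000
        X = rng.normal(0.5, 1.0, n)
        Y = X + rng.laplace(0.0, 1.0, n)

        def l2_distance_estimate(y, D=2.0, n_t=401):
            """Fourier-domain estimate of ||f - f_0||_2^2, truncated at |t| <= D.

            By Parseval, ||f - f_0||_2^2 = (1/2pi) * integral |phi_f(t) - phi_f0(t)|^2 dt.
            The term |phi_f(t)|^2 is estimated without bias by a U-statistic of the
            observations, rescaled by the known noise c.f.  The truncation level D is fixed
            here; the adaptive test of the paper scans a family of levels and calibrates
            critical values, which this sketch does not attempt.
            """
            m = y.size
            t = np.linspace(-D, D, n_t)
            dt = t[1] - t[0]
            phi_g = 1.0 / (1.0 + t**2)                      # known noise c.f.
            phi_f0 = np.exp(-t**2 / 2.0)                    # c.f. of N(0,1) under H_0
            phi_Y = np.exp(1j * np.outer(t, y)).mean(axis=1)
            # (1/(m(m-1))) * sum_{j != k} exp(it(Y_j - Y_k)) = (m|phi_hat_Y|^2 - 1)/(m - 1):
            abs2_f = (m * np.abs(phi_Y) ** 2 - 1.0) / (m - 1.0) / phi_g**2
            cross = np.real(np.conj(phi_f0) * phi_Y / phi_g)
            integrand = abs2_f - 2.0 * cross + phi_f0**2
            return integrand.sum() * dt / (2.0 * np.pi)

        # Values well above the fluctuation level under H_0 indicate a departure from f_0.
        print(l2_distance_estimate(Y))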

    Test on components of mixture densities

    This paper deals with statistical tests on the components of mixture densities. We propose to test whether the densities of two independent samples of independent random variables Y_1, \dots, Y_n and Z_1, \dots, Z_n result from the same mixture of M components or not. We provide a test procedure that is proved to be asymptotically optimal in the minimax sense. We extensively discuss the connection between the mixing weights and the performance of the testing procedure and illustrate it with numerical examples.
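
    As a generic illustration of the null hypothesis (both samples drawn from the same mixture), the sketch below runs a permutation test based on a weighted distance between empirical characteristic functions; it deliberately ignores the component structure and the mixing weights, which are central to the paper's minimax-optimal procedure.

        import numpy as np

        rng = np.random.default_rng(5)

        def sample_mixture(n, weights, rng):
            """Draw n points from a two-component Gaussian mixture N(-2,1) / N(2,1)."""
            comp = rng.random(n) < weights[0]
            return np.where(comp, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

        # Under H_0 the two samples come from the same mixture of M = 2 components; change
        # the second weight vector to break H_0.
        n = 800
        Y = sample_mixture(n, (0.5, 0.5), rng)
        Z = sample_mixture(n, (0.5, 0.5), rng)

        def cf_distance(a, b, t):
            """Weighted L_2 distance between empirical characteristic functions."""
            phi_a = np.exp(1j * np.outer(t, a)).mean(axis=1)
            phi_b = np.exp(1j * np.outer(t, b)).mean(axis=1)
            return float(np.sum(np.abs(phi_a - phi_b) ** 2 * np.exp(-t**2)))

        t = np.linspace(-3.0, 3.0, 121)
        obs = cf_distance(Y, Z, t)

        # Permutation calibration: under H_0 the pooled sample is exchangeable.
        pooled = np.concatenate([Y, Z])
        perm_stats = []
        for _ in range(200):
            perm = rng.permutation(pooled)
            perm_stats.append(cf_distance(perm[:n], perm[n:], t))
        p_value = (1 + sum(s >= obs for s in perm_stats)) / (1 + len(perm_stats))
        print(f"permutation p-value: {p_value:.3f}")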

    Minimax rates over Besov spaces in ill-conditioned mixture-models with varying mixing-weights

    We consider ill-conditioned mixture-models with varying mixing-weights. We study the classical homogeneity testing problem in the minimax setup and push the model to its limits, that is, we allow the mixture model to be severely ill-conditioned. We highlight the strong connection between the mixing-weights and the attainable testing rate. This link is characterized by the behavior of the smallest eigenvalue of a particular matrix computed from the varying mixing-weights. We provide optimal testing procedures and exhibit a wide range of rates, which are the minimax and minimax-adaptive rates over Besov balls.
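
    The role of the mixing-weights can be illustrated numerically. With observation-specific weight vectors, one natural candidate for the matrix mentioned above is the Gram matrix of the weights (an assumption of this sketch); its smallest eigenvalue measures how ill-conditioned the mixture is.

        import numpy as np

        rng = np.random.default_rng(6)

        # Observation i has density p_i = sum_m W[i, m] * f_m, with known mixing-weight
        # vectors W[i].  The Gram matrix (1/n) * W^T W is one natural "matrix computed from
        # the varying mixing-weights"; this specific choice is an assumption of the sketch.
        n, M = 1000, 3
        W = rng.dirichlet(alpha=[5.0, 1.0, 0.2], size=n)     # skewed weights: rows nearly collinear

        gram = W.T @ W / n
        lam_min = np.linalg.eigvalsh(gram).min()
        print(f"smallest eigenvalue of the weight Gram matrix: {lam_min:.5f}")
        # A tiny lam_min means the weight vectors barely span the M directions, so the
        # component densities are hard to separate and the attainable testing rates degrade.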