
    Goodness-of-fit testing and quadratic functional estimation from indirect observations

    We consider the convolution model where i.i.d. random variables $X_i$ with unknown density $f$ are observed with additive i.i.d. noise, independent of the $X_i$'s. We assume that the density $f$ belongs either to a Sobolev class or to a class of supersmooth functions. The noise distribution is known and its characteristic function decays either polynomially or exponentially asymptotically. We consider the problem of goodness-of-fit testing in the convolution model. We prove upper bounds for the risk of a test statistic derived from a kernel estimator of the quadratic functional $\int f^2$ based on indirect observations. When the unknown density is sufficiently smoother than the noise density, we prove that this estimator is $n^{-1/2}$-consistent, asymptotically normal and efficient (for the variance we compute). Otherwise, we give nonparametric upper bounds for the risk of the same estimator. We give a unified approach to the proofs of the nonparametric minimax lower bounds for both problems. We establish them for Sobolev densities and for supersmooth densities less smooth than the exponential noise. In both setups we obtain exact testing constants associated with the asymptotic minimax rates.
    Comment: Published at http://dx.doi.org/10.1214/009053607000000118 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
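    To make the construction concrete, here is a minimal numerical sketch of a frequency-truncated, U-statistic-type estimator of $\int f^2$ from noisy observations. It is not the authors' exact statistic: the function name, the bandwidth choice, the integration grid and the Laplace-noise example (characteristic function $1/(1+t^2)$) are our own illustrative assumptions.

```python
import numpy as np

def quad_functional_deconv(y, noise_cf, h, grid_size=512):
    """Estimate int f^2 from Y_i = X_i + eps_i with known noise cf.

    Plancherel gives int f^2 = (1/2pi) int |phi_f|^2, with
    phi_f(t) = phi_Y(t) / phi_eps(t); the integral is truncated to
    |t| <= 1/h, and the diagonal j = k is removed to debias
    (U-statistic form).
    """
    n = len(y)
    t = np.linspace(-1.0 / h, 1.0 / h, grid_size)
    dt = t[1] - t[0]
    E = np.exp(1j * np.outer(t, y))              # grid_size x n
    # |sum_j e^{itY_j}|^2 = sum_{j != k} e^{it(Y_j - Y_k)} + n
    off_diag = np.abs(E.sum(axis=1)) ** 2 - n
    integrand = off_diag / np.abs(noise_cf(t)) ** 2
    return integrand.sum() * dt / (2 * np.pi * n * (n - 1))

# toy check: X ~ N(0,1), so int f^2 = 1/(2 sqrt(pi)) ~ 0.2821;
# standard Laplace noise has characteristic function 1/(1+t^2)
rng = np.random.default_rng(0)
n = 2000
y = rng.normal(size=n) + rng.laplace(size=n)
print(quad_functional_deconv(y, lambda t: 1.0 / (1.0 + t ** 2), h=0.3))
```

    The polynomial blow-up of the weight $1/|\phi_\epsilon(t)|^2$ as $|t| \to 1/h$ is exactly the ill-posedness the abstract describes: the smoother the noise, the harder the frequency truncation has to be tuned.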

    Sharp minimax tests for large covariance matrices and adaptation

    We consider the problem of detecting correlations in a $p$-dimensional Gaussian vector when we observe $n$ independent, identically distributed random vectors, for $n$ and $p$ large. We assume that the covariance matrix varies in some ellipsoid with parameter $\alpha > 1/2$ and total energy bounded by $L > 0$. We propose a test procedure based on a U-statistic of order 2 which is weighted in an optimal way. The weights are the solution of an optimization problem; they are constant on each diagonal and non-null only on the first $T$ diagonals, where $T = o(p)$. We show that this test statistic is asymptotically Gaussian under the null hypothesis, and also under the alternative hypothesis for matrices close to the detection boundary. We prove upper bounds for the total error probability of our test procedure for $\alpha > 1/2$ and under the assumption $T = o(p)$, which implies that $n = o(p^{2\alpha})$. We illustrate the behavior of our test procedure in a numerical study. Moreover, we prove lower bounds for the maximal type II error and for the total error probabilities. We thus obtain the asymptotic and the sharp asymptotically minimax separation rate $\tilde{\varphi} = (C(\alpha, L)\, n^2 p)^{-\alpha/(4\alpha+1)}$, for $\alpha > 3/2$, and for $\alpha > 1$ together with the additional assumption $p = o(n^{4\alpha-1})$, respectively. We deduce asymptotic minimax rate results for testing the inverse of the covariance matrix. We construct a test procedure that is adaptive with respect to the parameter $\alpha$ and show that it attains the rate $\tilde{\psi} = (n^2 p / \ln\ln(n\sqrt{p}))^{-\alpha/(4\alpha+1)}$.
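    As a rough illustration of a weighted order-2 U-statistic with weights constant on diagonals, the sketch below unbiasedly estimates $\sum_{1 \le |a-b| \le T} w_{|a-b|}\, \sigma_{ab}^2$. The triangular weights and all names are placeholders of ours, not the solution of the paper's optimization problem.

```python
import numpy as np

def weighted_ustat(X, w):
    """Order-2 U-statistic estimating sum over 1 <= |a-b| <= T of
    w[|a-b|] * sigma_ab^2.

    X : n x p matrix of i.i.d. centred Gaussian rows; w : one weight per
    diagonal (w[0] unused), constant on each diagonal.
    For independent rows i != j, E[X_ia X_ib X_ja X_jb] = sigma_ab^2.
    """
    n, p = X.shape
    S = X.T @ X                         # S_ab = sum_i X_ia X_ib
    sq = (X ** 2).T @ (X ** 2)          # sum_i X_ia^2 X_ib^2
    pair = S ** 2 - sq                  # sum_{i != j} X_ia X_ib X_ja X_jb
    total = sum(w[d] * np.trace(pair, offset=d) for d in range(1, len(w)))
    return 2.0 * total / (n * (n - 1))  # symmetry covers diagonals -d

# under H0 (identity covariance) the statistic fluctuates around 0
rng = np.random.default_rng(1)
n, p, T = 200, 500, 20
w = 1.0 - np.arange(T + 1) / T          # illustrative, not the optimal weights
print(weighted_ustat(rng.normal(size=(n, p)), w))
```

    Restricting the weights to the first $T = o(p)$ diagonals is what keeps the variance of the statistic under control while still capturing the energy of covariance matrices in the ellipsoid.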

    Sharp detection of smooth signals in a high-dimensional sparse matrix with indirect observations

    We consider a matrix-valued Gaussian sequence model, that is, we observe a sequence of high-dimensional $M \times N$ matrices of heterogeneous Gaussian random variables $x_{ij,k}$ for $i \in \{1,\dots,M\}$, $j \in \{1,\dots,N\}$ and $k \in \mathbb{Z}$. The standard deviation of our observations is $\epsilon k^s$ for some $\epsilon > 0$ and $s \geq 0$. We give sharp rates for the detection of a sparse $m \times n$ submatrix of active components. A component $(i,j)$ is said to be active if the sequence $\{x_{ij,k}\}_k$ has mean $\{\theta_{ij,k}\}_k$ within a Sobolev ellipsoid of smoothness $\tau > 0$ and total energy $\sum_k \theta^2_{ij,k}$ larger than some $r^2_\epsilon$. Our rates involve relationships between $m$, $n$, $M$ and $N$ tending to infinity, with $m/M$, $n/N$ and $\epsilon$ tending to 0, such that a test procedure that we construct has asymptotic minimax risk tending to 0. We prove corresponding lower bounds under additional assumptions on the relative size of the submatrix within the large matrix of observations. Except for these additional conditions, our rates are asymptotically sharp. Lower bounds for hypothesis testing problems mean that no test procedure can distinguish between the null hypothesis (no signal) and the alternative, i.e. the minimax risk for testing tends to 1.
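    A hedged sketch of the kind of statistic involved: per component, a centred chi-square statistic estimates the energy $\sum_k \theta_{ij,k}^2$, and a scan over submatrices aggregates the active cells. Our proxy (keeping the $mn$ largest cells instead of maximizing over all row/column subsets, and one-sided indices $k = 1,\dots,K$ instead of $k \in \mathbb{Z}$) and all names are simplifications of ours, not the authors' procedure.

```python
import numpy as np

def submatrix_scan_stat(x, eps, s, m, n):
    """Sequence-model scan statistic for an m x n active submatrix.

    x : array of shape (M, N, K) holding coefficients x_{ij,k}, k = 1..K,
        with sd(x_{ij,k}) = eps * k^s. Per cell, a centred chi-square
        statistic estimates sum_k theta_{ij,k}^2; the m*n largest cells
        stand in for the maximum over row/column subsets.
    """
    M, N, K = x.shape
    sd = eps * np.arange(1, K + 1) ** float(s)
    t = ((x / sd) ** 2 - 1.0).sum(axis=2) / np.sqrt(2.0 * K)  # ~ N(0,1) per cell under H0
    top = np.sort(t.ravel())[::-1][: m * n]
    return top.sum() / np.sqrt(m * n)

# pure-noise example: the statistic concentrates on a positive value
# (extremes over M*N cells), so thresholds need Monte Carlo calibration under H0
rng = np.random.default_rng(2)
x0 = rng.normal(size=(50, 60, 100)) * (0.1 * np.arange(1, 101) ** 0.5)
print(submatrix_scan_stat(x0, eps=0.1, s=0.5, m=5, n=5))
```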

    Detection of a sparse submatrix of a high-dimensional noisy matrix

    We observe an $N \times M$ matrix $Y_{ij} = s_{ij} + \xi_{ij}$ with $\xi_{ij} \sim \mathcal{N}(0,1)$ i.i.d. in $i,j$, and $s_{ij} \in \mathbb{R}$. We test the null hypothesis $s_{ij} = 0$ for all $i,j$ against the alternative that there exists some submatrix of size $n \times m$ with significant elements, in the sense that $s_{ij} \geq a > 0$. We propose a test procedure and compute the asymptotic detection boundary $a$ so that the maximal testing risk tends to 0 as $M \to \infty$, $N \to \infty$, $p = n/N \to 0$, $q = m/M \to 0$. We prove that this boundary is asymptotically sharp minimax under some additional constraints. Relations with other testing problems are discussed. We propose a testing procedure which adapts to unknown $(n,m)$ within some given set and compute the adaptive sharp rates. The implementation of our test procedure on synthetic data shows excellent behavior for sparse, not necessarily square, matrices. We extend our sharp minimax results in several directions: first, to Gaussian matrices with unknown variance; next, to matrices of random variables with a distribution from a (non-Gaussian) exponential family; and, finally, to a two-sided alternative for matrices with Gaussian elements.
    Comment: Published at http://dx.doi.org/10.3150/12-BEJ470 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
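    The exact scan over all $n \times m$ submatrices is combinatorial. Below is a minimal sketch pairing the global sum statistic with a greedy row/column alternation as a proxy for the scan. The greedy scheme, the names, and the Monte Carlo calibration advice are our assumptions, not the paper's exact procedure.

```python
import numpy as np

def scan_proxy(Y, n, m, iters=10):
    """Greedy proxy for the maximum, over n-row / m-column submatrices,
    of the standardised entry sum (exact maximisation is combinatorial):
    alternately pick the m heaviest columns given rows, and vice versa."""
    rows = np.argsort(Y.sum(axis=1))[-n:]        # start from the heaviest rows
    for _ in range(iters):
        cols = np.argsort(Y[rows].sum(axis=0))[-m:]
        rows = np.argsort(Y[:, cols].sum(axis=1))[-n:]
    return Y[np.ix_(rows, cols)].sum() / np.sqrt(n * m)

def test_stats(Y, n, m):
    """Global linear statistic (exactly N(0,1) under H0) plus the greedy
    scan; thresholds should be calibrated by Monte Carlo under H0."""
    return Y.sum() / np.sqrt(Y.size), scan_proxy(Y, n, m)

# plant a 10 x 8 submatrix with entries a = 1 in a 400 x 300 noise matrix:
# the linear statistic barely moves, while the scan proxy reacts strongly
rng = np.random.default_rng(3)
Y = rng.normal(size=(400, 300))
Y[:10, :8] += 1.0
print(test_stats(Y, n=10, m=8))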

    Quadratic functional estimation in inverse problems

    We consider in this paper a Gaussian sequence model of observations $Y_i$, $i \geq 1$, having mean (or signal) $\theta_i$ and variance $\sigma_i$ growing polynomially like $i^\gamma$, $\gamma > 0$. This model describes a large panel of inverse problems. We estimate the quadratic functional of the unknown signal, $\sum_{i\geq 1}\theta_i^2$, when the signal belongs to ellipsoids of both finite smoothness (polynomial weights $i^\alpha$, $\alpha > 0$) and infinite smoothness (exponential weights $e^{\beta i^r}$, $\beta > 0$, $0 < r \leq 2$). We propose a Pinsker-type projection estimator in each case and study its quadratic risk. When the signal is sufficiently smoother than the difficulty of the inverse problem ($\alpha > \gamma + 1/4$, or in the case of exponential weights), we obtain the parametric rate and the associated efficiency constant. Moreover, we give upper bounds on the second-order term in the risk and conjecture that they are asymptotically sharp minimax. When the signal is finitely smooth with $\alpha \leq \gamma + 1/4$, we compute nonparametric upper bounds on the risk and conjecture that the constant is asymptotically sharp.
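    A plain projection estimator of the quadratic functional is easy to sketch: debias each squared coefficient and truncate at a frequency $W$. The cutoff choice, the parametrisation of the noise level, and all names below are illustrative assumptions of ours, not the Pinsker-type weights derived in the paper.

```python
import numpy as np

def quad_functional_inverse(y, gamma, W, eps=1.0):
    """Estimate sum_i theta_i^2 in Y_i = theta_i + eps * i^gamma * xi_i.

    E[Y_i^2] = theta_i^2 + eps^2 * i^(2*gamma), so each squared
    coefficient is debiased; the cutoff W trades the signal left beyond W
    (bias) against the polynomially growing noise variance that encodes
    the difficulty of the inverse problem.
    """
    i = np.arange(1, W + 1, dtype=float)
    return np.sum(y[:W] ** 2 - (eps * i ** gamma) ** 2)

# smooth signal theta_i = i^{-1} (true value pi^2/6 ~ 1.6449), gamma = 0.25
rng = np.random.default_rng(4)
i = np.arange(1, 2001, dtype=float)
y = i ** -1.0 + 0.02 * i ** 0.25 * rng.normal(size=i.size)
print(quad_functional_inverse(y, gamma=0.25, W=200, eps=0.02))
```

    The condition $\alpha > \gamma + 1/4$ in the abstract is precisely the regime where such a cutoff can be pushed far enough that the bias is negligible at the parametric rate.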