
    Concentration of Measure Inequalities for Toeplitz Matrices with Applications

    We derive Concentration of Measure (CoM) inequalities for randomized Toeplitz matrices. These inequalities show that the norm of a high-dimensional signal mapped by a Toeplitz matrix to a low-dimensional space concentrates around its mean with a tail probability bound that decays exponentially in the dimension of the range space divided by a quantity which is a function of the signal. For the class of sparse signals, the introduced quantity is bounded by the sparsity level of the signal. However, we observe that this bound is highly pessimistic for most sparse signals and we show that if a random distribution is imposed on the non-zero entries of the signal, the typical value of the quantity is bounded by a term that scales logarithmically in the ambient dimension. As an application of the CoM inequalities, we consider Compressive Binary Detection (CBD). Comment: Initial Submission to the IEEE Transactions on Signal Processing on December 1, 2011. Revised and Resubmitted on July 12, 201
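    A minimal numerical sketch of the concentration phenomenon described above, assuming (for illustration only) a k x n partial Toeplitz matrix built from a single i.i.d. Gaussian sequence and scaled by 1/sqrt(k); the paper's exact ensemble and constants may differ. Over independent draws, the squared norm ||Ax||^2 of a fixed sparse unit-norm signal concentrates around 1.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, k, trials = 1000, 100, 2000  # ambient dimension, range dimension, Monte Carlo runs

# Fixed 10-sparse test signal with unit norm (hypothetical example signal).
x = np.zeros(n)
support = rng.choice(n, size=10, replace=False)
x[support] = rng.standard_normal(10)
x /= np.linalg.norm(x)

sq_norms = np.empty(trials)
for t in range(trials):
    # One draw of a k x n Toeplitz matrix generated by a single Gaussian sequence,
    # scaled so that E||Ax||^2 = ||x||^2 = 1 (assumed normalization).
    a = rng.standard_normal(n + k - 1)
    A = toeplitz(a[n - 1:], a[n - 1::-1]) / np.sqrt(k)
    sq_norms[t] = np.linalg.norm(A @ x) ** 2

print(f"mean of ||Ax||^2 over draws: {sq_norms.mean():.3f} (target 1.0)")
print(f"empirical standard deviation: {sq_norms.std():.3f}")
```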

    Estimation of Toeplitz Covariance Matrices in Large Dimensional Regime with Application to Source Detection

    In this article, we derive concentration inequalities for the spectral norm of two classical sample estimators of large dimensional Toeplitz covariance matrices, demonstrating in particular their asymptotic almost sure consistency. The consistency is then extended to the case where the aggregated matrix of time samples is corrupted by a rank one (or more generally, low rank) matrix. As an application of the latter, the problem of source detection in the context of large dimensional sensor networks within a temporally correlated noise environment is studied. As opposed to standard procedures, this application is performed online, i.e. without the need to possess a learning set of pure noise samples. Comment: 20 pages, 3 figures, submitted to IEEE Transactions on Signal Processing
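    A small simulation sketch of this setup, assuming (as an illustration, not the paper's exact estimators) stationary Gaussian samples with an AR(1)-type Toeplitz covariance. It compares the spectral-norm error of the plain sample covariance with a Toeplitz-averaged estimator obtained by averaging the sample covariance along its diagonals.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
N, n = 200, 1000  # covariance dimension, number of time samples

# True Toeplitz covariance of a stationary process (illustrative AR(1)-like choice).
rho = 0.6
C_true = toeplitz(rho ** np.arange(N))

# n independent N-dimensional observations with covariance C_true.
Y = np.linalg.cholesky(C_true) @ rng.standard_normal((N, n))

# Plain sample covariance, and a "Toeplitzified" estimator built by averaging
# each diagonal of the sample covariance (one classical way to exploit stationarity).
C_scm = (Y @ Y.T) / n
r_hat = np.array([np.mean(np.diag(C_scm, k)) for k in range(N)])
C_toep = toeplitz(r_hat)

spec_err = lambda M: np.linalg.norm(M - C_true, 2)
print(f"spectral-norm error, sample covariance:  {spec_err(C_scm):.3f}")
print(f"spectral-norm error, Toeplitz estimator: {spec_err(C_toep):.3f}")
```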

    A general approach to small deviation via concentration of measures

    We provide a general approach to obtain upper bounds for small deviations $\mathbb{P}(\Vert y \Vert \le \epsilon)$ in different norms, namely the supremum and $\beta$-Hölder norms. The large class of processes $y$ under consideration takes the form $y_t = X_t + \int_0^t a_s\, ds$, where $X$ and $a$ are two possibly dependent stochastic processes. Our approach provides an upper bound for small deviations whenever upper bounds for the concentration of measures of $L^p$-norms of random vectors built from increments of the process $X$ and large deviation estimates for the process $a$ are available. Using our method, among others, we obtain the optimal rates of small deviations in supremum and $\beta$-Hölder norms for fractional Brownian motion with Hurst parameter $H \le \frac{1}{2}$. As an application, we discuss the usefulness of our upper bounds for small deviations in pathwise stochastic integral representation of random variables, motivated by the hedging problem in mathematical finance.
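    A Monte Carlo sketch of the small-deviation quantity that the paper bounds, assuming a fractional Brownian motion with Hurst parameter H = 0.3 simulated on a discrete grid via a Cholesky factorization of its covariance; this only illustrates the probability $\mathbb{P}(\sup_t |B^H_t| \le \epsilon)$ empirically, not the paper's analytic bounds.

```python
import numpy as np

rng = np.random.default_rng(2)
H, n_steps, trials, eps = 0.3, 200, 5000, 0.3  # Hurst index H <= 1/2, grid size, runs, radius

# Covariance of fractional Brownian motion on a uniform grid of (0, 1]:
# Cov(B_s, B_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H}).
t = np.linspace(1.0 / n_steps, 1.0, n_steps)
S, T = np.meshgrid(t, t, indexing="ij")
cov = 0.5 * (S ** (2 * H) + T ** (2 * H) - np.abs(S - T) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_steps))  # small jitter for numerical stability

# Monte Carlo estimate of the small-deviation probability in the supremum norm,
# P(sup_t |B^H_t| <= eps), approximated on the discrete grid.
paths = L @ rng.standard_normal((n_steps, trials))
inside = np.all(np.abs(paths) <= eps, axis=0)
print(f"P(sup_t |B^H_t| <= {eps}) ~ {inside.mean():.4f}")
```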