584 research outputs found

    On the Performance of Single-Threshold Detectors for Binary Communications in the Presence of Gaussian Mixture Noise

    Get PDF
    In this paper, the probability of error performance of single-threshold detectors is studied for binary communications systems in the presence of Gaussian mixture noise. First, sufficient conditions are proposed to specify when the sign detector is (not) an optimal detector among all single-threshold detectors. Then, a monotonicity property of the error probability is derived for the optimal single-threshold detector. In addition, a theoretical limit is obtained on the maximum ratio between the average probabilities of error for the sign detector and the optimal single-threshold detector. Finally, numerical examples are presented to investigate the theoretical results.
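
    As a rough illustration of the setting described above, the sketch below evaluates the average error probability of a single-threshold detector for antipodal signaling with additive Gaussian mixture noise, and compares the sign detector (threshold at zero) against the best threshold found by a grid search. The signal amplitude and the bimodal mixture parameters are illustrative assumptions, not values from the paper.

```python
# A minimal numerical sketch (not the paper's analysis): sign detector vs. the
# best single-threshold detector for s in {-A, +A} with equal priors and
# additive Gaussian mixture noise. All parameter values below are assumptions.
import numpy as np
from scipy.stats import norm

A = 1.0                                    # assumed signal amplitude
weights = np.array([0.5, 0.5])             # assumed mixture weights
means   = np.array([-1.5, 1.5])            # symmetric, bimodal noise means
sigmas  = np.array([0.2, 0.2])             # component standard deviations

def mixture_cdf(x):
    """CDF of the Gaussian mixture noise, evaluated elementwise."""
    x = np.atleast_1d(x).astype(float)[:, None]
    return norm.cdf(x, loc=means, scale=sigmas) @ weights

def prob_error(tau):
    """Average error probability of the rule: decide +A if r > tau, else -A."""
    return 0.5 * mixture_cdf(tau - A) + 0.5 * (1.0 - mixture_cdf(tau + A))

taus = np.linspace(-3 * A, 3 * A, 2001)    # grid search over thresholds
pe = prob_error(taus)
p_sign = prob_error(0.0).item()            # sign detector: tau = 0
p_opt, tau_opt = pe.min(), taus[pe.argmin()]

print(f"sign detector:  P_e = {p_sign:.4f}")
print(f"best threshold: tau = {tau_opt:+.3f}, P_e = {p_opt:.4f}")
print(f"ratio P_e(sign) / P_e(optimal) = {p_sign / p_opt:.2f}")
```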

    Robust Techniques for Signal Processing: A Survey

    Get PDF
    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. U.S. Army Research Office / DAAG29-81-K-0062; U.S. Air Force Office of Scientific Research / AFOSR 82-0022; Joint Services Electronics Program / N00014-84-C-0149; National Science Foundation / ECS-82-12080; U.S. Office of Naval Research / N00014-80-K-0945 and N00014-81-K-001

    Partial recovery bounds for clustering with the relaxed K-means

    Full text link
    We investigate the clustering performance of the relaxed K-means in the settings of the sub-Gaussian Mixture Model (sGMM) and the Stochastic Block Model (SBM). After identifying the appropriate signal-to-noise ratio (SNR), we prove that the misclassification error decays exponentially fast with respect to this SNR. These partial recovery bounds for the relaxed K-means improve upon results currently known in the sGMM setting. In the SBM setting, applying the relaxed K-means SDP makes it possible to handle general connection probabilities, whereas other SDPs investigated in the literature are restricted to the assortative case (where within-group probabilities are larger than between-group probabilities). Again, this partial recovery bound complements the state-of-the-art results. Altogether, these results put forward the versatility of the relaxed K-means.
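
    The relaxed K-means referred to above is commonly formulated as the Peng-Wei semidefinite relaxation of the K-means objective. The sketch below sets up that SDP with CVXPY on synthetic mixture data and rounds the solution with a K-means step on the rows of the optimal matrix; the data, solver, and rounding choices are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import cvxpy as cp
from sklearn.cluster import KMeans

# Synthetic sub-Gaussian (here Gaussian) mixture with K well-separated centers.
rng = np.random.default_rng(0)
K, n_per = 3, 15
centers = rng.normal(scale=4.0, size=(K, 2))
X = np.vstack([c + rng.normal(size=(n_per, 2)) for c in centers])
n = X.shape[0]

# Pairwise squared distances D[i, j] = ||x_i - x_j||^2.
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

# Peng-Wei SDP relaxation of K-means:
#   minimize <D, Z>  subject to  Z PSD, Z >= 0 entrywise, Z 1 = 1, trace(Z) = K.
Z = cp.Variable((n, n), PSD=True)
constraints = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == K]
cp.Problem(cp.Minimize(cp.sum(cp.multiply(D, Z))), constraints).solve(solver=cp.SCS)

# Simple rounding step: cluster the rows of the optimal Z.
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(Z.value)
print(labels.reshape(K, n_per))          # one row per planted cluster
```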

    Topics In Multivariate Statistics

    Get PDF
    Multivariate statistics concerns the study of dependence relations among multiple variables of interest. Distinct from widely studied regression problems, where one of the variables is singled out as a response, in multivariate analysis all variables are treated symmetrically and the dependency structures are examined, either for interest in their own right or for further analyses such as regression. This thesis comprises three independent research problems in multivariate statistics. The first part of the thesis studies additive principal components (APCs for short), a nonlinear method useful for exploring additive relationships among a set of variables. We propose a shrinkage regularization approach for estimating APC transformations by casting the problem in the framework of reproducing kernel Hilbert spaces. To formulate the kernel APC problem, we introduce the Null Comparison Principle, a principle that ties the constraint in a multivariate problem to its criterion in a way that makes the goal of the multivariate method under study transparent. In addition to providing a detailed formulation and exposition of the kernel APC problem, we study the asymptotic theory of kernel APCs. Our theory also motivates an iterative algorithm for computing kernel APCs. The second part of the thesis investigates the estimation of precision matrices in high dimensions when the data are corrupted in a cellwise manner and the uncontaminated data follow a multivariate normal distribution. It is known that, in the setting of Gaussian graphical models, the conditional independence relations among variables are captured by the precision matrix of a multivariate normal distribution, and estimating the support of the precision matrix is equivalent to graphical model selection. In this work, we analyze the theoretical properties of robust estimators for precision matrices in high dimensions. The estimators we analyze are formed by plugging appropriately chosen robust covariance matrix estimators into the graphical Lasso and CLIME, two existing methods for high-dimensional precision matrix estimation. We establish error bounds for the precision matrix estimators that reveal the interplay between the dimensionality of the problem and the degree of contamination permitted in the observed distribution, and we also analyze the breakdown point of both estimators. We also discuss the implications of our work for Gaussian graphical model estimation in the presence of cellwise contamination. The third part of the thesis studies the problem of optimal estimation of a quadratic functional under the Gaussian two-sequence model. Quadratic functional estimation has been well studied under the Gaussian sequence model, and close connections between the problem of quadratic functional estimation and that of signal detection have been noted. Focusing on the estimation problem in the Gaussian two-sequence model, in this work we propose optimal estimators of the quadratic functional for different regimes and establish the minimax rates of convergence over a family of parameter spaces. The optimal rates exhibit an interesting phase transition within this family. We also discuss the implications of our estimation results for the associated simultaneous signal detection problem.
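
    To make the plug-in idea from the second part concrete, the sketch below forms a robust covariance estimate (a rank-based Kendall's-tau correlation with MAD scales, one of several possible choices and not necessarily the estimator analyzed in the thesis) and feeds it to scikit-learn's graphical Lasso in place of the sample covariance. The penalty level and contamination mechanism are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def robust_covariance(X):
    """Rank-based covariance estimate intended to tolerate cellwise outliers."""
    n, p = X.shape
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi * tau / 2.0)   # sine transform
    # Robust scales via the median absolute deviation (MAD).
    mad = 1.4826 * np.median(np.abs(X - np.median(X, axis=0)), axis=0)
    return R * np.outer(mad, mad)

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)
X[rng.random(X.shape) < 0.02] = 50.0          # sprinkle cellwise outliers

cov_hat = robust_covariance(X)
_, precision_hat = graphical_lasso(cov_hat, alpha=0.1)   # plug-in graphical Lasso
print(np.round(precision_hat, 2))
```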

    Adaptive Statistical Inference

    Get PDF
    This workshop in mathematical statistics highlights recent advances in adaptive methods for statistical estimation, testing, and confidence sets. Related open mathematical problems are discussed, with potential impact on the development of computationally efficient algorithms for data processing under prior uncertainty. Particular emphasis is on high-dimensional models, inverse problems, and discrete structures.

    Acoustic vector-sensor array processing

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 145-148). Existing theory yields useful performance criteria and processing techniques for acoustic pressure-sensor arrays. Acoustic vector-sensor arrays, which measure particle velocity and pressure, offer significant potential but require fundamental changes to algorithms and performance assessment. This thesis develops new analysis and processing techniques for acoustic vector-sensor arrays. First, the thesis establishes performance metrics suitable for vector-sensor processing. Two novel performance bounds define optimality and explore the limits of vector-sensor capabilities. Second, the thesis designs non-adaptive array weights that perform well when interference is weak. Obtained using convex optimization, these weights substantially improve conventional processing and remain robust to modeling errors. Third, the thesis develops subspace techniques that enable near-optimal adaptive processing. Subspace processing reduces the problem dimension, improving convergence or shortening training time. By Jonathan Paul Kitchens. Ph.D.
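
    As a generic point of reference for the kind of processing discussed above (not the thesis' optimized or robust weight designs), the sketch below builds the steering vector of a uniform line array of acoustic vector sensors, each contributing a pressure channel and three particle-velocity channels, and computes MVDR weights against an assumed noise-plus-interference covariance. Geometry, wavelength, and interference parameters are illustrative assumptions.

```python
import numpy as np

M, d = 8, 0.5                 # number of vector sensors, spacing in wavelengths (assumed)
k = 2 * np.pi                 # wavenumber for unit wavelength

def steering_vector(az, el):
    """4M-element steering vector: pressure plus 3 velocity channels per sensor."""
    u = np.array([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)])                        # unit propagation direction
    phase = np.exp(1j * k * d * np.arange(M) * u[0])  # line array along the x-axis
    h = np.concatenate(([1.0], u))                    # per-sensor channel response
    return np.kron(phase, h)

v = steering_vector(az=0.3, el=0.0)                   # look direction (assumed)
a_int = steering_vector(az=1.2, el=0.1)               # one strong interferer (assumed)
R = np.eye(4 * M) + 10.0 * np.outer(a_int, a_int.conj())   # noise + interference covariance

w = np.linalg.solve(R, v)                             # MVDR weights, up to scaling
w /= v.conj() @ w                                     # distortionless response at look direction
print("white-noise gain:", float(np.real(1.0 / (w.conj() @ w))))
```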

    Inference And Learning: Computational Difficulty And Efficiency

    Get PDF
    In this thesis, we mainly investigate two collections of problems: statistical network inference and model selection in regression. The common feature shared by these two types of problems is that they typically exhibit an interesting phenomenon in terms of computational difficulty and efficiency. For statistical network inference, our goal is to infer the network structure based on a noisy observation of the network. Statistically, we model the network as generated from the structural information with the presence of noise, for example, the planted submatrix model (for bipartite weighted graphs), the stochastic block model, and the Watts-Strogatz model. As the relative amount of "signal-to-noise" varies, the problems exhibit different stages of computational difficulty. On the theoretical side, we investigate these stages by characterizing the transition thresholds on the "signal-to-noise" ratio for the aforementioned models. On the methodological side, we provide new computationally efficient procedures to reconstruct the network structure for each model. For model selection in regression, our goal is to learn a "good" model from a certain model class based on the observed data sequences (feature and response pairs), when the model can be misspecified. More concretely, we study two model selection problems: learning from general classes of functions based on i.i.d. data with minimal assumptions, and selecting from the sparse linear model class based on possibly adversarially chosen data in a sequential fashion. We develop new theoretical and algorithmic tools beyond empirical risk minimization to study these problems from a learning theory point of view.
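
    For the network-inference portion above, a standard baseline (not necessarily one of the thesis' procedures) is spectral recovery in the stochastic block model: sample a two-community SBM and read the communities off the sign pattern of the second eigenvector of the adjacency matrix, as in the sketch below with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_in, p_out = 200, 0.10, 0.02          # assumed SBM parameters
labels = np.repeat([0, 1], n // 2)        # planted communities

# Sample a symmetric adjacency matrix with no self-loops.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Spectral recovery: sign of the eigenvector for the second-largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(A)
guess = (eigvecs[:, -2] > 0).astype(int)

# Accuracy up to label switching.
accuracy = max(np.mean(guess == labels), np.mean(guess != labels))
print(f"fraction of nodes correctly classified: {accuracy:.2f}")
```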