
    High Dimensional Separable Representations for Statistical Estimation and Controlled Sensing

    This thesis makes contributions to a fundamental set of high dimensional problems in the following areas: (1) performance bounds for high dimensional estimation of structured Kronecker product covariance matrices, (2) optimal query design for a centralized collaborative controlled sensing system used for target localization, and (3) global convergence theory for decentralized controlled sensing systems. Separable approximations are effective dimensionality reduction techniques for high dimensional problems. In multiple modality and spatio-temporal signal processing, separable models for the underlying covariance are exploited for improved estimation accuracy and reduced computational complexity. In query-based controlled sensing, estimation performance improves greatly at the cost of designing the queries. Multi-agent controlled sensing systems for target localization consist of a set of agents that collaborate to estimate the location of an unknown target. In the centralized setting, for a large number of agents and/or high-dimensional targets, separable representations of the fusion center's query policies are exploited to maintain tractability. For large-scale sensor networks, decentralized estimation methods are of primary interest, under which agents obtain new noisy information as a function of their current belief and exchange local beliefs with their neighbors. Here, separable representations of the temporally evolving information state are exploited to improve robustness and scalability. The results improve upon the current state of the art.
    PhD. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/107110/1/ttsili_1.pd
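    As a concrete illustration of the separable covariance idea, the sketch below fits a Kronecker product covariance B kron A to matrix-valued samples using the standard flip-flop (alternating maximum likelihood) updates for a matrix normal model. This is a generic textbook procedure, not necessarily the estimator analyzed in the thesis; the iteration count is an arbitrary choice.

        import numpy as np

        def flip_flop(X, n_iter=10):
            """Fit cov(vec(X_i)) ~ B kron A by alternating ML updates.

            X: array of shape (n, p, q) holding n matrix-valued samples;
            A is the p x p row covariance, B the q x q column covariance
            (column-stacking vec convention)."""
            n, p, q = X.shape
            A, B = np.eye(p), np.eye(q)
            for _ in range(n_iter):
                B_inv = np.linalg.inv(B)
                A = sum(Xi @ B_inv @ Xi.T for Xi in X) / (n * q)  # update row covariance
                A_inv = np.linalg.inv(A)
                B = sum(Xi.T @ A_inv @ Xi for Xi in X) / (n * p)  # update column covariance
            return A, B

    The separable model replaces a pq x pq covariance, with O(p^2 q^2) parameters, by two small factors with O(p^2 + q^2) parameters, which is the source of both the statistical and the computational savings.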

    Community detection in sparse networks via Grothendieck's inequality

    We present a simple and flexible method to prove consistency of semidefinite optimization problems on random graphs. The method is based on Grothendieck's inequality. Unlike previous uses of this inequality, which lead to constant relative accuracy, we achieve any given relative accuracy by leveraging randomness. We illustrate the method with the problem of community detection in sparse networks, those with bounded average degrees. We demonstrate that even in this regime, various simple and natural semidefinite programs can be used to recover the community structure up to an arbitrarily small fraction of misclassified vertices. The method is general; it can be applied to a variety of stochastic models of networks and semidefinite programs.
    Comment: This is the final version, incorporating the referee's comments.
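    To make the setup concrete, here is a minimal sketch of one simple semidefinite program of this kind for two communities: maximize <A - lam*J, Z> over unit-diagonal positive semidefinite Z, then read labels off the top eigenvector of the solution. The cvxpy formulation and the choice of lam as the average edge density are illustrative assumptions, not necessarily the paper's exact program.

        import numpy as np
        import cvxpy as cp

        def sdp_two_communities(A):
            """Recover two communities from symmetric adjacency matrix A via an SDP relaxation."""
            n = A.shape[0]
            lam = A.sum() / (n * (n - 1))          # average edge density (tuning choice)
            B = A - lam * np.ones((n, n))
            Z = cp.Variable((n, n), PSD=True)      # relaxation of Z = x x^T, x in {-1,+1}^n
            prob = cp.Problem(cp.Maximize(cp.trace(B @ Z)), [cp.diag(Z) == 1])
            prob.solve()
            # round the solution: signs of the top eigenvector give the two labels
            _, vecs = np.linalg.eigh(Z.value)
            return np.sign(vecs[:, -1])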

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in overview and in full detail, together with information on the social program, the venue, special meetings, and more.

    A Computational and Statistical Study of Convex and Nonconvex Optimization with Applications to Structured Source Demixing and Matrix Factorization Problems

    University of Minnesota Ph.D. dissertation. September 2017. Major: Electrical/Computer Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); ix, 153 pages.
    Modern machine learning problems that emerge from real-world applications typically involve estimating high dimensional model parameters, whose number may be of the same order as or even significantly larger than the number of measurements. In such high dimensional settings, statistically consistent estimation of the true underlying models via classical approaches is often impossible, due to the lack of identifiability. A recent solution to this issue is to incorporate regularization functions into estimation procedures to promote the intrinsic low-complexity structure of the underlying models. Statistical studies have established successful recovery of model parameters via structure-exploiting regularized estimators, and computational efforts have examined efficient numerical procedures to accurately solve the associated optimization problems. In this dissertation, we study the statistical and computational aspects of some regularized estimators that are successful in reconstructing high dimensional models. The investigated estimation frameworks are motivated by their applications in different areas of engineering, such as structural health monitoring and recommendation systems. In particular, the group Lasso recovery guarantees provided in Chapter 2 give insight into the application of this estimator for localizing material defects in the context of a structural diagnostics problem. Chapter 3 describes the convergence study of an accelerated variant of the well-known alternating direction method of multipliers (ADMM) for minimizing strongly convex functions. The analysis is followed by experimental evidence of the algorithm's applicability to a ranking problem. Finally, Chapter 4 presents a local convergence analysis of regularized factorization-based estimators for reconstructing low-rank matrices. Interestingly, the analysis of this chapter reveals the interplay between the statistical and computational aspects of such (non-convex) estimators, and it can therefore be useful in a wide variety of problems that involve low-rank matrix estimation.
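    For readers unfamiliar with the group Lasso of Chapter 2, the operation at its core is block soft-thresholding: the proximal operator of tau * sum_g ||v_g||_2, which shrinks each group of coordinates toward zero and zeroes out entire groups at once, thereby promoting structured sparsity. The sketch below is a generic illustration; the group partition and threshold are assumptions, not values from the dissertation.

        import numpy as np

        def group_soft_threshold(v, groups, tau):
            """Prox of tau * sum_g ||v_g||_2: shrink each group toward zero,
            killing whole groups whose norm falls below tau."""
            out = np.zeros_like(v)
            for g in groups:                       # g: array of indices for one group
                norm = np.linalg.norm(v[g])
                if norm > tau:
                    out[g] = (1.0 - tau / norm) * v[g]
            return out

        # e.g. group_soft_threshold(v, [np.arange(0, 3), np.arange(3, 6)], tau=0.5)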

    On the Relationship between Conjugate Gradient and Optimal First-Order Methods for Convex Optimization

    In a line of work initiated by Nemirovsky and Yudin, and later extended by Nesterov, first-order algorithms for unconstrained minimization with an optimal theoretical complexity bound have been proposed. On the other hand, conjugate gradient algorithms, among the most widely used first-order techniques, lack a finite complexity bound; in fact, their performance can be quite poor. This dissertation is partly about tightening the gap between these two classes of algorithms, namely the traditional conjugate gradient methods and optimal first-order techniques. We derive conditions under which conjugate gradient methods attain the same complexity bound as Nemirovsky-Yudin's and Nesterov's methods. Moreover, we propose a conjugate gradient-type algorithm named CGSO, for Conjugate Gradient with Subspace Optimization, which achieves the optimal complexity bound at the price of a little extra computational cost. We extend the theory of CGSO to convex problems with linear constraints. In particular, we focus on solving the ℓ1-regularized least-squares problem, often referred to as the Basis Pursuit Denoising (BPDN) problem in the optimization community. BPDN arises in many practical fields, including sparse signal recovery, machine learning, and statistics. Solving BPDN is fairly challenging because the size of the involved signals can be quite large; therefore, first-order methods are of particular interest for these problems. We propose a quasi-Newton proximal method for solving BPDN. Our numerical results suggest that our technique is computationally effective and can compete favourably with other state-of-the-art solvers.
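    For reference, BPDN is the problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, and the simplest first-order baseline for it is proximal gradient descent (ISTA) with a soft-thresholding step. The sketch below is that generic baseline, not the CGSO or quasi-Newton proximal method proposed in the dissertation; the step size rule and iteration count are illustrative assumptions.

        import numpy as np

        def ista_bpdn(A, b, lam, n_iter=500):
            """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - b) / L      # gradient step on the least-squares term
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
            return x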