
    Nested Sampling and its Applications in Stable Compressive Covariance Estimation and Phase Retrieval with Near-Minimal Measurements

    Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography, and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value.
The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that with probability one it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
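The difference-set property at the heart of nested sampling can be illustrated numerically. The sketch below uses illustrative parameters and is not the thesis's GNS construction: a two-level nested index set of only 9 sample positions whose pairwise differences cover every lag from 0 to 24, which is what allows all 25 autocorrelation values of a WSS process to be estimated from the compressed samples.

```python
import numpy as np

# Illustrative sketch of the difference-set idea behind nested sampling
# (not the thesis's Generalized Nested Sampler): a two-level nested index
# set whose pairwise differences cover a full contiguous range of lags.
def nested_indices(n1, n2):
    inner = np.arange(1, n1 + 1)              # dense level: 1, 2, ..., n1
    outer = (n1 + 1) * np.arange(1, n2 + 1)   # sparse level: (n1+1), ..., n2*(n1+1)
    return np.concatenate([inner, outer])

idx = nested_indices(4, 5)                     # only 9 physical sample positions
diffs = sorted({abs(a - b) for a in idx for b in idx})

# All 25 lags 0..24 appear, so 25 autocorrelation lags of a WSS process
# can be estimated from just 9 samples per snapshot.
print(len(idx), diffs == list(range(25)))
```

The quadratic (covariance) measurement model is what makes the difference set, rather than the index set itself, the relevant resource.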

    Non-Convex Phase Retrieval Algorithms and Performance Analysis

    University of Minnesota Ph.D. dissertation. April 2018. Major: Electrical Engineering. Advisor: Georgios Giannakis. 1 computer file (PDF); ix, 149 pages. High-dimensional signal estimation plays a fundamental role in various science and engineering applications, including optical and medical imaging, wireless communications, and power system monitoring. The ability to devise solution procedures that maintain high computational and statistical efficiency will facilitate increasing the resolution and speed of lensless imaging, identifying artifacts in products intended for military or national security, and protecting critical infrastructure, including the smart power grid. This thesis contributes both theory and methods to the fundamental problem of phase retrieval of high-dimensional (sparse) signals from magnitude-only measurements. Our vision is to leverage exciting advances in non-convex optimization and statistical learning to devise algorithmic tools that are simple, scalable, and easy to implement, while being computationally and statistically (near-)optimal. Phase retrieval is approached from a non-convex optimization perspective. To gain statistical and computational efficiency, the magnitude data (instead of the intensities) are fitted based on the least-squares or maximum likelihood criterion, which leads to optimization models that trade off smoothness for ‘low-order’ non-convexity. To solve the resultant challenging non-convex and non-smooth optimization problems, this thesis introduces a two-stage algorithmic framework termed amplitude flow. The amplitude flows start with a careful initialization, which is subsequently refined by a sequence of regularized gradient-type iterations. Both stages are lightweight, and they scale well with problem dimensions.
Due to the highly non-convex landscape, judicious gradient regularization techniques such as trimming (i.e., truncation) and iterative reweighting are devised to boost the exact phase recovery performance. It is shown that successive iterates of the amplitude flows provably converge to the global optimum at a geometric rate, corroborating their efficiency in terms of computational, storage, and data resources. The amplitude flows are also demonstrated to be stable vis-à-vis additive noise. Sparsity plays an instrumental role in many scientific fields, which has led to the upsurge of research referred to as compressive sampling. In diverse applications, the signal is naturally sparse or admits a sparse representation after some known and deterministic linear transformation. This thesis also accounts for phase retrieval of sparse signals by putting forth sparsity-cognizant amplitude flow variants. Although the analysis, comparisons, and corroborating tests in this thesis focus on non-convex phase retrieval, a succinct overview of other areas is provided to highlight the applicability of the novel algorithmic framework to a number of intriguing future research directions.
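A minimal sketch of an amplitude-flow-style iteration is given below under generic real Gaussian measurements. The spectral initialization, step size, and oversampling level are illustrative assumptions, and the trimming/reweighting refinements described above are omitted; this is not the thesis implementation.

```python
import numpy as np

# Minimal illustrative sketch of an amplitude-flow-style method:
# spectral initialization followed by plain gradient steps on the
# amplitude loss f(z) = (1/2m) * sum_i (|a_i' z| - y_i)^2.
def amplitude_flow(A, y, iters=1000, step=0.6):
    m, n = A.shape
    # Spectral initialization: leading eigenvector of a y^2-weighted matrix.
    Y = (A.T * y**2) @ A / m
    _, V = np.linalg.eigh(Y)                  # eigenvalues in ascending order
    z = V[:, -1] * np.sqrt(np.mean(y**2))     # scale estimate of ||x||
    for _ in range(iters):
        Az = A @ z
        # Gradient of the amplitude least-squares loss (no trimming here).
        grad = A.T @ (Az - y * np.sign(Az)) / m
        z = z - step * grad
    return z

rng = np.random.default_rng(0)
n, m = 20, 400                                # generous oversampling, m = 20n
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x)                             # magnitude-only measurements
z = amplitude_flow(A, y)
# Real-valued recovery is only possible up to a global sign.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(err < 1e-3)
```

Fitting the magnitudes rather than the intensities is what keeps the loss ‘low-order’: its gradient is piecewise linear in z, so plain gradient steps contract geometrically once the iterate is in the right basin.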

    Structured Signal Recovery from Nonlinear Measurements with Applications in Phase Retrieval and Linear Classification

    Nonlinear models are widely used in signal processing, statistics, and machine learning to model real-world applications. A popular class of such models is the single-index model, where the response variable is related to a linear combination of the dependent variables through a link function. In other words, if x ∈ R^p denotes the input signal, the posterior mean of the generated output y has the form E[y|x] = ρ(x^T w), where ρ : R → R is a known function (referred to as the link function), and w ∈ R^p is the vector of unknown parameters. When ρ(·) is invertible, this class of models is called generalized linear models (GLMs). GLMs are commonly used in statistics and are often viewed as flexible generalizations of linear regression. Given n measurements (samples) from this model, D = {(x_i, y_i) | 1 ≤ i ≤ n}, the goal is to estimate the parameter vector w. While the model parameters are assumed to be unknown, in many applications they follow certain structures (sparse, low-rank, group-sparse, etc.). Knowledge of this structure can be used to form more accurate estimators. The main contribution of this thesis is to provide a precise performance analysis of convex optimization programs used for parameter estimation in two important classes of single-index models: (1) phase retrieval in signal processing, and (2) binary classification in statistical learning. The first class of models studied in this thesis is the phase retrieval problem, where the goal is to recover a discrete complex-valued signal from the amplitudes of its linear combinations. Methods based on convex optimization have recently gained significant attention in the literature. The conventional convex-optimization-based methods resort to the idea of lifting, which makes them computationally inefficient.
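As a concrete toy instance of the single-index model above (the link function ρ = tanh, the noise level, and the sizes are illustrative assumptions, not taken from the thesis): with Gaussian inputs, Stein's lemma gives E[y·x] = E[ρ'(g)]·w for g ~ N(0,1), so even a simple moment average recovers the direction of w without inverting the link.

```python
import numpy as np

# Toy instance of the single-index model E[y|x] = ρ(x^T w) described above,
# with an illustrative link ρ = tanh and Gaussian inputs. By Stein's lemma,
# averaging y_i * x_i recovers the direction of w up to scale.
rng = np.random.default_rng(2)
p, n = 10, 50_000
w = rng.standard_normal(p)
w /= np.linalg.norm(w)                 # ground-truth direction
X = rng.standard_normal((n, p))        # Gaussian inputs
y = np.tanh(X @ w) + 0.1 * rng.standard_normal(n)

w_hat = X.T @ y / n                    # moment estimate, proportional to w
w_hat /= np.linalg.norm(w_hat)
cos = abs(w_hat @ w)                   # close to 1: direction recovered
print(cos > 0.95)
```

Structured priors (sparsity, low rank) enter exactly here: a regularizer on w can sharpen this crude estimate when n is small relative to p, which is the regime the thesis analyzes precisely.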
In addition to providing an analysis of the recovery threshold for the semidefinite-programming-based methods, this thesis studies the performance of a new convex relaxation for the phase retrieval problem, known as phasemax, which is computationally more efficient as it does not lift the signal to higher dimensions. Furthermore, to address the case of structured signals, regularized phasemax is introduced along with a precise characterization of the conditions for its perfect recovery in the asymptotic regime. The next important application studied in this thesis is binary classification in statistical learning. While classification models have been studied in the literature since the 1950s, the understanding of their performance remained incomplete until very recently. Inspired by the maximum likelihood (ML) estimator in logistic models, we analyze a class of optimization programs that attempt to find the model parameters by minimizing an objective consisting of a loss function (often inspired by the ML estimator) and an additive regularization term that enforces our knowledge of the structure. There are two operating regimes for this problem, depending on the separability of the training data set D. In the asymptotic regime, where the number of samples and the number of parameters grow to infinity, a phase transition phenomenon is demonstrated that happens at a certain over-parameterization ratio. We compute this phase transition for the setting where the underlying data is drawn from a Gaussian distribution. In the case where the data is non-separable, the ML estimator is well-defined, and its attributes have been studied in classical statistics. However, these classical results fail to provide reasonable estimates in the regime where the number of data points is proportional to the number of parameters.
One contribution of this thesis is an exact analysis of the performance of regularized logistic regression when the number of training data points is proportional to the number of parameters. When the data is separable (a.k.a. the interpolating regime), there exist multiple linear classifiers that perfectly fit the training data. In this regime, we introduce and analyze the performance of "extended margin maximizers" (EMMs). Inspired by the max-margin classifier, EMM classifiers simultaneously consider maximizing the margin and the structure of the parameter. Lastly, we discuss another generalization of the max-margin classifier, referred to as the robust max-margin classifier, which takes into account perturbations by an adversary. It is shown that for a broad class of loss functions, gradient descent iterates (with proper step sizes) converge to the robust max-margin classifier.
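The interpolating-regime behavior described above can be seen in a toy experiment (synthetic two-dimensional data and an illustrative step size, not the thesis's setting): on separable data, gradient descent on the unregularized logistic loss keeps growing the weight norm while the normalized direction stabilizes toward a margin-maximizing separator.

```python
import numpy as np

# Toy illustration of the implicit bias discussed above: on separable data,
# gradient descent on the unregularized logistic loss drives ||w|| upward
# while w/||w|| settles toward a margin-maximizing direction.
rng = np.random.default_rng(1)
n = 40
y = rng.choice([-1.0, 1.0], size=n)
X = rng.standard_normal((n, 2))
X[:, 0] += 4.0 * y                     # make the data linearly separable

w = np.zeros(2)
step = 0.5
dirs = []
for t in range(1, 20001):
    margins = y * (X @ w)
    # Gradient of the mean logistic loss log(1 + exp(-margin)).
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
    w -= step * grad
    if t in (10_000, 20_000):
        dirs.append(w / np.linalg.norm(w))

print(np.all(y * (X @ w) > 0))                   # all points separated
print(np.linalg.norm(dirs[0] - dirs[1]) < 0.2)   # direction has stabilized
```

The norm of w grows without bound (roughly logarithmically in the iteration count), so only the normalized direction carries the classifier's limiting behavior.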

    Deep Learning in Neuronal and Neuromorphic Systems

    The ever-increasing compute and energy requirements in the field of deep learning have caused a rising interest in the development of novel, more energy-efficient computing paradigms to support the advancement of artificial intelligence systems. Neuromorphic architectures are promising candidates, as they aim to mimic the functional mechanisms, and thereby inherit the efficiency, of their archetype: the brain. However, even though neuromorphics and deep learning are, at their roots, inspired by the brain, they are not directly compatible with each other. In this thesis, we aim to bridge this gap by realizing error backpropagation, the central algorithm behind deep learning, on neuromorphic platforms. We start by introducing the Yin-Yang classification dataset, a tool for neuromorphic and algorithmic prototyping, as a prerequisite for the other work presented. This novel dataset is designed not to require excessive hardware or computing resources to be solved. At the same time, it is challenging enough to be useful for debugging and testing by revealing potential algorithmic or implementation flaws. We then explore two different approaches to implementing error backpropagation on neuromorphic systems. Our first solution provides an exact algorithm for error backpropagation on the first spike times of leaky integrate-and-fire neurons, one of the most common neuron models implemented in neuromorphic chips. Its neuromorphic feasibility is demonstrated by deployment on the BrainScaleS-2 chip, yielding competitive results with respect to both task performance and efficiency. The second approach is based on a biologically plausible variant of error backpropagation realized by a dendritic microcircuit model.
We assess this model with respect to its practical feasibility, extend it to improve learning performance, and address the obstacles to neuromorphic implementation: we introduce the Latent Equilibrium mechanism to solve the relaxation problem introduced by slow neuron dynamics; our Phaseless Alignment Learning method allows us to learn feedback weights in the network and thus avoid the weight transport problem; and finally, we explore two methods to port the rate-based model onto an event-based neuromorphic system. The presented work showcases two ways of uniting the powerful and flexible learning mechanisms of deep learning with energy-efficient neuromorphic systems, illustrating the potential of a convergence of artificial intelligence and neuromorphic engineering research.
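The quantity that the exact backpropagation scheme above differentiates — the first spike time of a leaky integrate-and-fire neuron — can be sketched with a forward simulation. The time constants, threshold, and Euler integration below are illustrative assumptions, not the BrainScaleS-2 dynamics, and only the forward pass is shown.

```python
# Forward sketch of the first spike time of a leaky integrate-and-fire (LIF)
# neuron driven by weighted input spikes. Constants and the Euler scheme are
# illustrative; the thesis differentiates this time exactly w.r.t. weights.
def first_spike_time(weights, in_times, tau_mem=10e-3, tau_syn=5e-3,
                     v_th=1.0, dt=1e-5, t_max=50e-3):
    v = i_syn = 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        for w, ts in zip(weights, in_times):
            if abs(ts - t) < dt / 2:
                i_syn += w                     # spike injects a weighted jump
        v += dt * (i_syn - v) / tau_mem        # leaky membrane integration
        i_syn -= dt * i_syn / tau_syn          # exponential synaptic decay
        if v >= v_th:
            return t                           # first threshold crossing
    return None                                # neuron never spiked

t_spike = first_spike_time([3.0, 2.0], [1e-3, 2e-3])
print(t_spike is not None and 1e-3 < t_spike < 2e-2)
```

Because the crossing time moves smoothly with the weights (until a spike appears or disappears), it admits exact gradients, which is what makes spike-time-based backpropagation possible.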

    Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 1

    Background material and a systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are presented. It was found that the system does not exhibit high performance, because much of the available thermal power is not used in the construction of the image and because the image that can be formed has a resolution of only ten lines. An analysis of image reconstruction is given. The system is compared with conventional aperture synthesis systems.