
    Achievable Angles Between two Compressed Sparse Vectors Under Norm/Distance Constraints Imposed by the Restricted Isometry Property: A Plane Geometry Approach

    The angle between two compressed sparse vectors subject to the norm/distance constraints imposed by the restricted isometry property (RIP) of the sensing matrix plays a crucial role in the studies of many compressive sensing (CS) problems. Assuming that (i) u and v are two sparse vectors separated by an angle theta, and (ii) the sensing matrix Phi satisfies RIP, this paper is aimed at analytically characterizing the achievable angles between Phi*u and Phi*v. Motivated by geometric interpretations of RIP and with the aid of the well-known law of cosines, we propose a plane-geometry-based formulation for the study of the considered problem. It is shown that all the RIP-induced norm/distance constraints on Phi*u and Phi*v can be jointly depicted via a simple geometric diagram in the two-dimensional plane. This allows for a joint analysis of all the considered algebraic constraints from a geometric perspective. By conducting plane geometry analyses based on the constructed diagram, closed-form formulae for the maximal and minimal achievable angles are derived. Computer simulations confirm that the proposed solution is tighter than an existing algebraic estimate derived using the polarization identity. The obtained results are used to derive a tighter restricted isometry constant of structured sensing matrices of a certain kind, to wit, those in the form of a product of an orthogonal projection matrix and a random sensing matrix. Follow-up applications to three CS problems, namely, compressed-domain interference cancellation, RIP-based analysis of the orthogonal matching pursuit algorithm, and the study of the democratic nature of random sensing matrices, are investigated.
    Comment: submitted to IEEE Trans. Information Theory
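    As a rough numerical illustration of the underlying geometry (not the paper's closed-form result), the following Python sketch grid-searches the values of ||Phi*u||, ||Phi*v||, and ||Phi*(u-v)|| permitted by a RIP constant delta and applies the law of cosines; unit-norm u and v and the finite grid resolution are simplifying assumptions:

```python
import numpy as np

def achievable_angle_bounds(theta, delta, grid=100):
    # Feasible squared norms under RIP for unit-norm u, v at angle theta.
    d2 = 2.0 - 2.0 * np.cos(theta)                              # ||u - v||^2
    a2 = np.linspace(1 - delta, 1 + delta, grid)                # ||Phi u||^2
    b2 = np.linspace(1 - delta, 1 + delta, grid)                # ||Phi v||^2
    c2 = np.linspace((1 - delta) * d2, (1 + delta) * d2, grid)  # ||Phi (u-v)||^2
    A2, B2, C2 = np.meshgrid(a2, b2, c2, indexing="ij")
    # Law of cosines for the angle between Phi u and Phi v.
    cos_t = (A2 + B2 - C2) / (2.0 * np.sqrt(A2 * B2))
    cos_t = cos_t[np.abs(cos_t) <= 1.0]   # keep geometrically valid triangles
    return np.degrees(np.arccos(cos_t.max())), np.degrees(np.arccos(cos_t.min()))

# Example: u, v at 60 degrees, RIP constant delta = 0.2.
print(achievable_angle_bounds(np.radians(60.0), 0.2))
```

    Replacing this brute-force search with exact extremal angles is precisely what the paper's plane geometry analysis accomplishes.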

    Efficient algorithms and data structures for compressive sensing

    Along with the ever increasing number of sensors, which are also generating rapidly growing amounts of data, the traditional paradigm of sampling adhering to the Nyquist criterion is facing an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by generalizing the sampling and reconstruction schemes such that the acquired samples can contain more complex information about the signal than Nyquist samples. The proposed measurement process is more complex and the reconstruction algorithms necessarily need to be nonlinear. Additionally, the hardware design process needs to be revisited in order to account for this new acquisition scheme. Hence, one can identify a trade-off between the information contained in individual samples of a signal and the effort during development and operation of the sensing system. This thesis addresses the necessary steps to shift this trade-off more in favor of CS. We do so by providing new results that make CS easier to deploy in practice while maintaining the performance indicated by theoretical results.
    The sparsity order of a signal plays a central role in any CS system. Hence, we present a method to estimate this crucial quantity from a single snapshot, prior to recovery. As we show, the proposed Sparsity Order Estimation method reduces the reconstruction error compared to an unguided reconstruction. During the development of the theory we noticed that a matrix-free view on the involved linear mappings renders the reconstruction and modeling stages much more efficient. Hence, we present an open-source software architecture to construct these matrix-free representations and showcase its ease of use and performance when used for sparse recovery, both to detect defects from ultrasound data and to estimate scatterers in a radio channel using ultra-wideband impulse responses. For the former application, we present a complete reconstruction pipeline for ultrasound data that has been compressed by sub-sampling in the frequency domain. We present the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that our proposed system allows significant compression levels without substantially deteriorating the imaging quality.
    For the second application, we develop a sampling scheme to acquire the channel Impulse Response (IR) based on a Random Demodulator, which captures enough information in the recorded samples to reliably estimate the IR when exploiting sparsity. This improves the robustness to the effects of time-variant radar channels while also outperforming state-of-the-art methods based on Nyquist sampling in terms of reconstruction error. To circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the Atomic Norm Minimization framework and show how it can be used to estimate the signal covariance with R-dimensional parameters from multiple compressive snapshots. To this end, we derive a variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use it for direction finding with realistic antenna geometries. In this context we also present a method based on a stochastic gradient descent iteration scheme to find compression schemes that are well suited for parameter estimation, since the resulting sub-sampling has a uniform effect on the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
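    To make the matrix-free representation concrete, here is a minimal Python sketch (illustrative only, not the thesis' actual software architecture): a linear operator is stored purely as a pair of callables for its action and its adjoint's action, shown here with a hypothetical frequency-domain sub-sampling operator of the kind used in the ultrasound pipeline; the index set `rows` is an assumption.

```python
import numpy as np

class MatrixFreeOp:
    """A linear operator given only by its action and its adjoint's action,
    so the matrix is never stored densely."""
    def __init__(self, shape, matvec, rmatvec):
        self.shape = shape      # (rows, cols)
        self.matvec = matvec    # x -> A @ x
        self.rmatvec = rmatvec  # y -> A^H @ y

# Example: frequency-domain sub-sampling via a partial unitary FFT.
n = 1024
rows = np.arange(0, n, 4)       # hypothetical set of retained frequency bins

def fwd(x):
    return np.fft.fft(x, norm="ortho")[rows]

def adj(y):
    z = np.zeros(n, dtype=complex)
    z[rows] = y                          # zero-fill the discarded bins
    return np.fft.ifft(z, norm="ortho")  # adjoint of a unitary FFT is its inverse

A = MatrixFreeOp((len(rows), n), fwd, adj)
y = A.matvec(np.random.randn(n))         # m = n/4 compressed samples
```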

    Semi-device-dependent blind quantum tomography

    Extracting tomographic information about quantum states is a crucial task in the quest towards devising high-precision quantum devices. Current schemes typically require measurement devices for tomography that are a priori calibrated to high precision. Ironically, the accuracy of the measurement calibration is fundamentally limited by the accuracy of state preparation, establishing a vicious cycle. Here, we prove that this cycle can be broken and the fundamental dependence on the measurement devices significantly relaxed. We show that exploiting the natural low-rank structure of quantum states of interest suffices to arrive at a highly scalable blind tomography scheme with a classically efficient post-processing algorithm. We further improve the efficiency of our scheme by making use of the sparse structure of the calibrations. This is achieved by relaxing the blind quantum tomography problem to the task of de-mixing a sparse sum of low-rank quantum states. Building on techniques from model-based compressed sensing, we prove that the proposed algorithm recovers a low-rank quantum state and the calibration provided that the measurement model exhibits a restricted isometry property. For generic measurements, we show that our algorithm requires a close-to-optimal number of measurement settings for solving the blind tomography task. Complementing these conceptual and mathematical insights, we numerically demonstrate that blind quantum tomography is possible by exploiting low-rank assumptions in a practical setting inspired by a trapped-ion implementation, using constrained alternating optimization.
    Comment: 22 pages, 8 figures
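    A minimal sketch of two ingredients such a constrained alternating optimization can alternate between, assuming a target rank r for the state and a sparsity level s for the calibration coefficients (illustrative only, not the paper's full algorithm):

```python
import numpy as np

def project_psd_rank(X, r):
    # Nearest (in Frobenius norm) positive semidefinite matrix of rank at
    # most r: eigendecompose, clip negative eigenvalues, keep the r largest.
    w, V = np.linalg.eigh((X + X.conj().T) / 2)   # symmetrize for safety
    w = np.clip(w, 0.0, None)                     # enforce the PSD constraint
    idx = np.argsort(w)[::-1][:r]                 # keep r largest eigenvalues
    return (V[:, idx] * w[idx]) @ V[:, idx].conj().T

def hard_threshold(c, s):
    # Keep only the s largest-magnitude calibration coefficients.
    out = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[::-1][:s]
    out[idx] = c[idx]
    return out
```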

    Estimating Sparse Representations from Dictionaries With Uncertainty

    In the last two decades, sparse representations have gained increasing attention in a variety of engineering applications. A sparse representation of a signal requires a dictionary of basic elements that describe salient and discriminant features of that signal. When the dictionary is created from a mathematical model, its expressiveness depends on the quality of this model. In this dissertation, the problem of estimating sparse representations in the presence of errors and uncertainty in the dictionary is addressed. In the first part, a statistical framework for sparse regularization is introduced. The second part is concerned with the development of methodologies for estimating sparse representations from highly redundant dictionaries along with unknown dictionary parameters. The presented methods are illustrated using applications in direction finding and fiber-optic sensing. They serve as illustrative examples for investigating the abstract problems in the theory of sparse representations. Estimating a sparse representation often involves the solution of a regularized optimization problem. The presented regularization framework offers a systematic procedure for the determination of a regularization parameter that accounts for the joint effects of model errors and measurement noise. It is determined as an upper bound of the mean-squared error between the corrupted data and the ideal model. Despite proper regularization, the quality and accuracy of the obtained sparse representation remain affected by model errors and are indeed sensitive to changes in the regularization parameter. To alleviate this problem, dictionary calibration is performed. The framework is applied to the problem of direction finding. Redundancy enables the dictionary to describe a broader class of observations but also increases the similarity between different entries, which leads to ambiguous representations. To address the problem of redundancy and additional uncertainty in the dictionary parameters, two strategies are pursued. Firstly, an alternating estimation method for iteratively determining the underlying sparse representation and the dictionary parameters is presented, and theoretical bounds for the estimation errors are derived. Secondly, a Bayesian framework for estimating sparse representations and dictionary learning is developed. A hierarchical structure is considered to account for uncertainty in prior assumptions. The considered model for the coefficients of the sparse representation is particularly designed to handle high redundancy in the dictionary. Approximate inference is accomplished using a hybrid Markov Chain Monte Carlo algorithm. The performance and practical applicability of both methodologies are evaluated for a problem in fiber-optic sensing, where a mathematical model for the sensor signal is developed. This model is used to generate a suitable parametric dictionary.
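    As an illustrative sketch of the alternating estimation strategy (the ULA steering model, step sizes, and thresholds below are assumptions, not the dissertation's exact formulation), one can interleave an ISTA step on the sparse coefficients with a finite-difference gradient refinement of the dictionary parameters of the active atoms:

```python
import numpy as np

def soft(x, t):
    # Complex soft-thresholding, the proximal operator of the l1 norm.
    return np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - t, 0.0)

def alternating_estimate(y, grid, n_sensors, lam=0.1, mu=1e-4, iters=200):
    grid = np.asarray(grid, dtype=float).copy()
    k = np.arange(n_sensors)

    def D(p):
        # Hypothetical ULA steering dictionary with unit-norm columns.
        return np.exp(1j * np.pi * np.outer(k, np.sin(p))) / np.sqrt(n_sensors)

    x = np.zeros(len(grid), dtype=complex)
    for _ in range(iters):
        A = D(grid)
        tau = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step size
        x = soft(x + tau * A.conj().T @ (y - A @ x), tau * lam)
        # Refine parameters of active atoms on the squared residual.
        res = np.linalg.norm(y - D(grid) @ x) ** 2
        for i in np.flatnonzero(np.abs(x) > 1e-8):
            p = grid.copy()
            p[i] += 1e-6
            g = (np.linalg.norm(y - D(p) @ x) ** 2 - res) / 1e-6
            grid[i] -= mu * g
    return x, grid
```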

    Sparse Recovery with Fusion Frames

    Sparse signal structures have become increasingly important in signal processing applications as the technology progresses and a plethora of data needs to be handled. It has been shown in practice that various signals have fewer degrees of freedom compared to their actual sizes, i.e., they can be expressed in terms of a small number of elements from some dictionary. Alongside an increase in applications, a theory of sparse and compressible signal recovery has recently been developed under the name of Compressed Sensing (CS). This approach states that a sparse signal can be efficiently recovered from a small number of random linear measurements. Another powerful tool in signal processing is frames, which provide redundant representations for signals. Such redundancy is desirable in many applications where resilience to errors and losses in data is important. The increase in data has also significantly increased the demand to model applications requiring distributed processing, which goes beyond classical frames. The recent theory of Fusion Frames, which can be regarded as a generalization of classical frames, satisfies those needs by analyzing signals via projections onto multidimensional subspaces. Fusion frames provide a suitable mathematical framework to model large data systems such as sensor networks, transmission of data over communication networks, etc. In this thesis, we combine these two recent theories and consider the recovery of signals that have a sparse representation in a fusion frame. As in classical CS, a sparse signal from a fusion frame can be sampled using very few random projections and efficiently recovered using a convex optimization that minimizes the mixed l1/l2-norm. This problem has close connections with other similar recovery problems studied in the CS literature, such as the block sparsity and joint sparsity problems. A key contribution in this thesis is to exploit the incoherence of the fusion frame subspaces in order to enhance the existing recovery results by incorporating this structure. In particular, we derive upper and lower bounds for the number of measurements required for sparse recovery and for the recovery error of the convex optimization. Aside from our results in the fusion frame setup, we also present results in classical CS, where we focus on improving the constants appearing in the required number of measurements and prove optimal constants in the nonuniform setting with rather concise and simple proofs.
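    The key computational ingredient of the mixed l1/l2-norm minimization is its proximal operator, a blockwise soft-thresholding over the subspace coefficient blocks. A minimal Python sketch (illustrative, not the thesis' solver):

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    # Prox of t * sum_g ||x_g||_2: shrink each subspace block toward zero,
    # zeroing out blocks whose l2 norm falls below the threshold t.
    out = np.zeros_like(x)
    for g in groups:            # g: index array for one fusion-frame subspace
        ng = np.linalg.norm(x[g])
        if ng > t:
            out[g] = (1.0 - t / ng) * x[g]
    return out

# Example: 12 coefficients partitioned into 4 blocks of 3.
x = np.random.randn(12)
blocks = [np.arange(i, i + 3) for i in range(0, 12, 3)]
print(group_soft_threshold(x, blocks, 0.5))
```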

    Regime Change: Sampling Rate vs. Bit-Depth in Compressive Sensing

    The compressive sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by exploiting inherent structure in natural and man-made signals. It has been demonstrated that structured signals can be acquired with just a small number of linear measurements, on the order of the signal complexity. In practice, this enables lower sampling rates that can be more easily achieved by current hardware designs. The primary bottleneck that limits ADC sampling rates is quantization, i.e., higher bit-depths impose lower sampling rates. Thus, the decreased sampling rates of CS ADCs accommodate the otherwise limiting quantizer of conventional ADCs. In this thesis, we consider a different approach to CS ADC by shifting towards lower quantizer bit-depths rather than lower sampling rates. We explore the extreme case where each measurement is quantized to just one bit, representing its sign. We develop a new theoretical framework to analyze this extreme case and develop new algorithms for signal reconstruction from such coarsely quantized measurements. The 1-bit CS framework leads us to scenarios where it may be more appropriate to reduce bit-depth instead of sampling rate. We find that there exist two distinct regimes of operation that correspond to high/low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement (as in conventional CS); in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement (as in this thesis). A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement. The above philosophy extends further to practical CS ADC system designs. We propose two new CS architectures, one of which takes advantage of the fact that the sampling and quantization operations are performed by two different hardware components: sampling can be performed at high rates with minimal cost, while quantization cannot. Thus, we develop a system that discretizes in time, applies CS preconditioning techniques, and then quantizes at a low rate.
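    One representative reconstruction algorithm from the 1-bit CS literature is binary iterative hard thresholding (BIHT): a gradient step on the sign-consistency loss followed by keeping the k largest entries. A minimal sketch with an assumed Gaussian sensing matrix; since signs discard amplitude, only the direction of x is recoverable, so each iterate is renormalized:

```python
import numpy as np

def biht(y_sign, Phi, k, iters=100):
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient step on the consistency loss between y and sign(Phi x).
        a = x + (1.0 / m) * Phi.T @ (y_sign - np.sign(Phi @ x))
        idx = np.argsort(np.abs(a))[::-1][:k]   # keep the k largest entries
        x = np.zeros(n)
        x[idx] = a[idx]
        x /= max(np.linalg.norm(x), 1e-12)      # project onto the unit sphere
    return x

# Example: recover a k-sparse unit-norm signal from 1-bit measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 512, 8
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x0 /= np.linalg.norm(x0)
Phi = rng.standard_normal((m, n))
x_hat = biht(np.sign(Phi @ x0), Phi, k)
```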