Structured Compressed Sensing: From Theory to Applications
Compressed sensing (CS) is an emerging field that has attracted considerable
research interest over the past few years. Previous review articles in CS limit
their scope to standard discrete-to-discrete measurement architectures using
matrices of randomized nature and signal models based on standard sparsity. In
recent years, CS has worked its way into several new application areas. This,
in turn, necessitates a fresh look at many of the basics of CS. The random
matrix measurement operator must be replaced by more structured sensing
architectures that correspond to the characteristics of feasible acquisition
hardware. The standard sparsity prior has to be extended to include a much
richer class of signals and to encode broader data models, including
continuous-time signals. In our overview, the theme is exploiting signal and
measurement structure in compressive sensing. The prime focus is bridging
theory and practice; that is, to pinpoint the potential of structured CS
strategies to emerge from the math to the hardware. Our summary highlights new
directions as well as relations to more traditional CS, with the hope of
serving both as a review for practitioners wanting to join this emerging field
and as a reference for researchers, attempting to put some of the existing
ideas in the perspective of practical applications.
Comment: To appear as an overview paper in IEEE Transactions on Signal Processing.
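As a concrete point of reference for the "traditional CS" that the overview generalizes, the following sketch recovers a sparse vector from unstructured random Gaussian measurements via orthogonal matching pursuit (the dimensions and the choice of OMP are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3               # ambient dimension, measurements, sparsity

x = np.zeros(n)                    # a k-sparse signal
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # unstructured random measurement matrix
y = A @ x                          # m << n compressive measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily grow the support, refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x))   # essentially zero: exact recovery w.h.p.
```

Structured CS, as surveyed in the paper, replaces the dense Gaussian matrix A with operators that model feasible acquisition hardware (e.g. subsampled Fourier or banded matrices), while the recovery principle stays the same.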
Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms
New advances in the field of image sensors (especially in CMOS technology) tend to call into question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially in relieving the analog-to-digital converters (ADCs), which generally represent the bottleneck in this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is therefore twofold: CS both substantially reduces the amount of data to be converted and removes the digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration as well as the intended applications aim at reducing the power consumption related to these components (i.e. ADC & DSP represent 99% of the total power consumption). More broadly, the paradigm of CS makes it possible to question, or at least to extend, the Nyquist-Shannon sampling theory. This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed and non-uniform sampling. In particular, three research axes have been pursued, aiming to design proper architectures and acquisition processes, with their associated reconstruction techniques, taking advantage of image sparse representations. How can the on-chip implementation of compressive sensing relax sensor constraints, improving the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS improve physical limitations (i.e. spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved? A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as High Dynamic Range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisition in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of liquid crystal devices to allow hyperspectral imaging combined with spatial super-resolution. The conclusion of this study can be summarized as follows: CS must now be considered as a toolbox for more easily defining compromises between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution and digital processing resources. However, if CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, requiring massive computational resources compared to so-called conventional techniques. The application field is wide, implying that, for a targeted application, an accurate characterization of the constraints on both the sensor (encoder) and the decoder needs to be defined.
Reassessing the Paradigms of Statistical Model-Building
Statistical model-building is the science of constructing models from data and from information about the data-generation process, with the aim of analysing those data and drawing inference from that analysis. Many statistical tasks are undertaken during this analysis; they include classification, forecasting, prediction and testing. Model-building has assumed substantial importance, as new technologies enable data on highly complex phenomena to be gathered in very large quantities. This creates a demand for more complex models, and requires the model-building process itself to be adaptive. The word “paradigm” refers to philosophies, frameworks and methodologies for developing and interpreting statistical models, in the context of data, and applying them for inference. In order to solve contemporary statistical problems it is often necessary to combine techniques from previously separate paradigms. The workshop addressed model-building paradigms that are at the frontiers of modern statistical research. It tried to create synergies, by delineating the connections and collisions among different paradigms. It also endeavoured to shape the future evolution of paradigms
Steady state modelling of non-linear power plant components
This thesis studies the problem of periodic waveform distortion in electric power systems. A
general framework is formulated in the Hilbert domain to account for any given orthogonal
basis such as complex Fourier, real Fourier, Hartley and Walsh. Particular applications of
this generalised framework result in unified frames of reference. These domains are unified
frameworks in the sense that they accommodate all the nodes. phases and the full spectrum
of coefficients of the orthogonal basis. Linear and linearised non-linear elements can be
combined in the same frame of reference for a unified solution.
In rigorous waveform distortion analysis, accurate representation of non-linear characteristics
for all power plant components is essential. In this thesis several analytical forms
are studied which provide accurate representations of non-linearities and which are suitable
for efficient, repetitive waveform distortion studies.
Several harmonic domain approaches are also presented. To date most frequency domain
techniques in power systems have used the Complex Fourier expansion but more efficient
solutions can be obtained when using formulations which do not require complex algebra.
With this in mind, two real harmonic domain frames of reference are presented: the real
Fourier harmonic domain and the Hartley domain. The solutions exhibit quadratic rate of
convergence. Also, discrete convolutions are proposed as a means for alias-free harmonic
domain evaluations, which aids convergence greatly.
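The role claimed for discrete convolutions can be checked in a few lines: multiplying two periodic waveforms in time corresponds to a linear convolution of their Fourier coefficient vectors, and keeping the full convolution output retains every product harmonic, so nothing aliases back into the retained band (a minimal numerical sketch with arbitrarily chosen waveforms):

```python
import numpy as np

N = 8                                    # harmonics retained per waveform
t = np.linspace(0, 2*np.pi, 1024, endpoint=False)
a = 1.0*np.cos(3*t) + 0.5*np.cos(5*t)    # two band-limited periodic waveforms
b = 0.8*np.cos(4*t)

def harmonics(x, N):
    """Complex Fourier coefficients c_k, k = -N..N, of one sampled period."""
    c = np.fft.fft(x) / x.size
    return np.concatenate([c[-N:], c[:N+1]])

ca, cb = harmonics(a, N), harmonics(b, N)
cab = np.convolve(ca, cb)                # coefficients of a*b, k = -2N..2N: no product harmonic is lost
cab_ref = harmonics(a*b, 2*N)            # reference, computed from the time-domain product
print(np.allclose(cab, cab_ref, atol=1e-10))
```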
Two new models in the harmonic domain are presented: the Three Phase Thyristor
Controlled Reactor model and the Multi-limb Three Phase Transformer model. The former
uses switching functions and discrete convolutions. It yields efficient solutions with strong
characteristics of convergence. The latter is based on the principle of duality and takes
account of the non-linear electromagnetic effects involving iron core, transformer tank and
return air paths. The algorithm exhibits quadratic convergence. Real data is used to
validate both models.
Harmonic distortion can be evaluated by using true Newton-Raphson techniques which
exhibit quadratic convergence. However, these methods can be made to produce faster solutions
by using relaxation techniques. Several alternative relaxation techniques are presented.
An algorithm which uses diagonal relaxation has shown good characteristics of convergence
plus the possibility of parallelisation.
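The contrast between true Newton-Raphson and relaxation can be seen on a scalar stand-in for a harmonic mismatch equation (the saturating characteristic and the relaxation factor below are invented for illustration):

```python
# Solve i(psi) = a*psi + b*psi**5 = I0 for psi (a saturating reactor branch).
a_, b_, I0 = 1.0, 0.2, 2.0
f  = lambda p: a_*p + b_*p**5 - I0      # mismatch function
df = lambda p: a_ + 5*b_*p**4           # its Jacobian (here a scalar)

# true Newton-Raphson: quadratic convergence
p, newton_err = 1.0, []
for _ in range(6):
    p -= f(p)/df(p)
    newton_err.append(abs(f(p)))

# simple under-relaxed fixed-point iteration: only linear convergence
q, relax_err = 1.0, []
for _ in range(6):
    q -= 0.4*f(q)
    relax_err.append(abs(f(q)))

print(newton_err[-1], relax_err[-1])  # Newton's residual is many orders of magnitude smaller
```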
The Walsh series are a set of orthogonal functions with rectangular waveforms. They
are used in this thesis to study switching circuits, which are quite common in modern power
systems and whose switching functions closely resemble Walsh functions.
Accordingly, switching functions may be represented exactly by a finite number of Walsh
functions, whilst a large number of Fourier coefficients may be required to achieve the same result. Evaluation of waveform distortion of power networks is a non-linear problem
which is solved by linearisation about an operation point. In this thesis the Walsh domain
is used to study this phenomenon. It has deep theoretical strengths which help greatly in
understanding waveform distortion and which allow its qualitative assessment.
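This claim is easy to verify numerically: a switching function aligned with the dyadic structure of the Walsh basis has a single nonzero Walsh coefficient, while its Fourier expansion contains many harmonics (a sketch using the naturally ordered Sylvester-Hadamard matrix as the Walsh-type basis):

```python
import numpy as np

def hadamard(m):
    """Naturally ordered Sylvester-Hadamard matrix of size 2**m; its rows are Walsh-type functions."""
    H = np.array([[1.0]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

n = 64
H = hadamard(6)
s = np.repeat([1.0, -1.0], n // 2)      # an ideal switching function: one square wave

walsh = H @ s / n                       # Walsh spectrum (H @ H.T == n * I)
fourier = np.fft.fft(s) / n             # Fourier spectrum

nz_walsh = np.count_nonzero(np.abs(walsh) > 1e-9)
nz_fourier = np.count_nonzero(np.abs(fourier) > 1e-9)
print(nz_walsh, nz_fourier)             # one Walsh coefficient vs. many Fourier harmonics
```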
Traditionally, the problem of finding waveform distortion levels in power networks has
been solved by the use of repetitive linearisation of the problem about an operation point.
In this thesis a step towards a true non-linear solution is made. A new approach, which uses
bi-linearisations as opposed to linearisations, is presented. Bi-linear systems are a class of
simple, non-linear systems which are amenable to analytical solutions. Also, a new method,
based on Taylor series expansions, is used to approximate generic, non-linear systems using
a bi-linear system. It is shown that when using repetitive bi-linearisations, as opposed to
linearisations, solutions show super-quadratic rate of convergence.
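A loose scalar analogy (the thesis bi-linearises systems in the harmonic domain, not scalar equations) illustrates why keeping a second-order term pays off: retaining the quadratic Taylor term in each root-finding update yields Chebyshev's method, whose convergence is cubic rather than quadratic.

```python
# Compare plain linearisation (Newton) with a second-order local model
# on the invented scalar equation f(x) = x + 0.2*x**5 - 2 = 0.
f   = lambda x: x + 0.2*x**5 - 2.0
df  = lambda x: 1 + x**4
d2f = lambda x: 4*x**3

def newton_step(x):                      # plain linearisation
    return x - f(x)/df(x)

def second_order_step(x):                # linearisation plus quadratic correction (Chebyshev)
    u = f(x)/df(x)
    return x - u - 0.5*u*u*d2f(x)/df(x)

x1 = x2 = 1.0
for _ in range(3):
    x1, x2 = newton_step(x1), second_order_step(x2)

print(abs(f(x1)), abs(f(x2)))            # after the same number of steps, the second-order iterate is far more accurate
```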
Finally, several power system applications using the Walsh approach are presented: a
model of a single-phase TCR, a model of a three-phase bank of transformers and a model of
frequency-dependent transmission lines.
A Cognitive Radio Compressive Sensing Framework
With the proliferation of wireless devices and services, allied with further significant predicted growth, there is an ever-increasing demand for higher transmission rates. This is especially challenging given the limited availability of radio spectrum, and is further exacerbated by a rigid licensing regulatory regime. Spectrum, however, is largely underutilized, and this has prompted regulators to promote the concept of opportunistic spectrum access. This allows unlicensed secondary users to use bands which are licensed to primary users but are currently unoccupied, leading to more efficient spectrum utilization.
A potentially attractive solution to this spectrum underutilisation problem is cognitive radio (CR) technology, which enables the identification and usage of vacant bands by continuously sensing the radio environment, though CR enforces stringent timing requirements and high sampling rates. Compressive sensing (CS) has emerged as a novel sampling paradigm, which provides the theoretical basis to resolve some of these issues, especially for signals exhibiting sparsity in some domain. For CR-related signals however, existing CS architectures such as the random demodulator and compressive multiplexer have limitations in regard to the signal types used, spectrum estimation methods applied, spectral band classification and a dependence on Fourier domain based sparsity.
This thesis presents a new generic CS framework which addresses these issues by specifically embracing three original scientific contributions: i) seamless embedding of the concept of precolouring into existing CS architectures to enhance signal sparsity for CR-related digital modulation schemes; ii) integration of the multitaper spectral estimator to improve sparsity in CR narrowband modulation schemes; and iii) exploiting sparsity in an alternative, non-Fourier (Walsh-Hadamard) domain to expand the applicable CR-related modulation schemes.
Critical analysis reveals that the new CS framework provides a consistently superior and robust solution for the recovery of an extensive set of currently employed CR-type signals encountered in wireless communication standards. Significantly, the generic and portable nature of the framework affords the opportunity for further extensions into other CS architectures and sparsity domains.
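The advantage of a non-Fourier sparsity domain for digitally modulated signals can be sketched as follows: a piecewise-constant, BPSK-like waveform whose symbol boundaries align with the dyadic Walsh structure has at most n/L nonzero Walsh-Hadamard coefficients, while its Fourier spectrum is dense (an illustration with arbitrary block length and symbols, not the thesis' precolouring or multitaper machinery):

```python
import numpy as np

def hadamard(m):
    """Sylvester-Hadamard matrix of size 2**m (rows are Walsh-type functions)."""
    H = np.array([[1.0]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
n, L = 64, 8
symbols = rng.choice([-1.0, 1.0], n // L)      # random binary symbols
x = np.kron(symbols, np.ones(L))               # piecewise-constant, BPSK-like baseband waveform

walsh = hadamard(6) @ x / n
fourier = np.fft.fft(x) / n

nz_walsh = np.count_nonzero(np.abs(walsh) > 1e-9)
nz_fourier = np.count_nonzero(np.abs(fourier) > 1e-9)
print(nz_walsh, nz_fourier)                    # at most n//L Walsh coefficients; Fourier is dense
```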
Compressive Acquisition and Processing of Sparse Analog Signals
Since the advent of the first digital processing units, the importance of digital signal processing has been steadily rising. Today, most signal processing happens in the digital domain, requiring that analog signals be first sampled and digitized before any relevant data can be extracted from them. The recent explosion of the demands for data acquisition, storage and processing, however, has pushed the capabilities of conventional acquisition systems to their limits in many application areas. By offering an alternative view on the signal acquisition process, ideas from sparse signal processing and one of its main beneficiaries, compressed sensing (CS), aim at alleviating some of these problems. In this thesis, we look into the ways the application of a compressive measurement kernel impacts the signal recovery performance and investigate methods to infer the current signal complexity from the compressive observations. We then study a particular application, namely that of sub-Nyquist sampling and processing of sparse analog multiband signals in spectral, angular and spatial domains.
Since the advent of the first digital processing units, the importance of digital signal
processing has been steadily rising. Today, most signal processing happens in the digital
domain, requiring that analog signals be first sampled and digitized before any relevant data
can be extracted from them. For decades, conventional uniform sampling that is governed by
the Nyquist sampling theorem has provided an almost universal means to this end. The recent explosion of the demands for data acquisition, storage and processing, however, has pushed the capabilities of conventional acquisition systems to their limits in many application areas. By offering an alternative view on the signal acquisition process, ideas from sparse signal processing and one of its main beneficiaries, compressed sensing (CS), have the potential to help alleviate some of these problems. Building on the premise that the signal information
rate is often much lower than what is dictated by its native representation, CS provides an
alternative acquisition and processing framework that attempts to reduce the sampling rate
while preserving the information content of the signal. In this thesis, we explore some of the basic foundations of the finite-dimensional CS framework and its connection to sub-Nyquist sampling and processing of sparse continuous analog signals, with application to multiband sensing. Despite being a focus of active research for over a decade, there still remain significant gaps in understanding the implications that compressive approaches have on the signal recovery and processing performance, especially in noisy settings and in relation to practical sampling problems. This dissertation aims at filling some of these gaps. More specifically, we look into the ways the application of a compressive measurement kernel impacts signal and noise characteristics and the relation this has to the signal recovery performance. We also investigate methods to infer the current complexity of the signal scene from the reduced-rate compressive observations without resorting to Nyquist-rate processing, and show the advantage this knowledge offers to the recovery process. Having considered some of the universal aspects of compressive systems, we then move to studying a particular application, namely the sub-Nyquist sampling and processing of sparse analog multiband signals. Within the sub-Nyquist sampling framework, we examine three different multiband scenarios that involve multiband sensing in the spectral, angular and spatial domains. For each of them, we provide a sub-Nyquist receiver architecture, develop recovery methods and numerically evaluate their performance.
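The idea of inferring signal complexity directly from reduced-rate observations can be illustrated in a simplified noise-free form (our illustration of the principle, not the estimator developed in the thesis): when several compressive snapshots share a common sparse support, the numerical rank of the compressed data matrix already reveals the sparsity order.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T, r = 128, 32, 50, 4        # ambient dim, compressed dim, snapshots, active components

support = rng.choice(n, r, replace=False)
X = np.zeros((n, T))               # T snapshots sharing r active rows ("bands")
X[support] = rng.standard_normal((r, T))
A = rng.standard_normal((m, n)) / np.sqrt(m)
Y = A @ X                          # only the reduced-rate observations are kept

s = np.linalg.svd(Y, compute_uv=False)
order = int(np.sum(s > 1e-8 * s[0]))   # numerical rank of the compressed data
print(order)                           # equals r: the sparsity order is visible at sub-Nyquist rate
```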
Efficient algorithms and data structures for compressive sensing
Along with the ever increasing number of sensors, which are also generating rapidly growing amounts of data, the traditional paradigm of sampling adhering to the Nyquist criterion is facing an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by proposing to generalize the sampling and reconstruction schemes such that the acquired samples can contain more complex information about the signal than Nyquist samples. The proposed measurement process is more complex and the reconstruction algorithms necessarily need to be nonlinear. Additionally, the hardware design process needs to be revisited as well in order to account for this new acquisition scheme. Hence, one can identify a trade-off between the information that is contained in individual samples of a signal and the effort during development and operation of the sensing system. This thesis addresses the necessary steps to shift the mentioned trade-off in favor of CS. We do so by providing new results that make CS easier to deploy in practice while also maintaining the performance indicated by theoretical results. The sparsity order of a signal plays a central role in any CS system. Hence, we present a method to estimate this crucial quantity prior to recovery from a single snapshot.
As we show, the proposed sparsity order estimation method reduces the reconstruction error compared to an unguided reconstruction. During the development of the theory we notice that a matrix-free view on the involved linear mappings offers many possibilities to render the reconstruction and modeling stages much more efficient. Hence, we present an open-source software architecture to construct these matrix-free representations and showcase its ease of use and performance when used for sparse recovery, both to detect defects from ultrasound data and to estimate scatterers in a radio channel from ultra-wideband impulse responses. For the former of these two applications, we present a complete reconstruction pipeline where the ultrasound data is compressed by means of sub-sampling in the frequency domain. Here, we present the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that our proposed system allows significant compression levels without substantially deteriorating the imaging quality. For the second application, we develop a sampling scheme to acquire the channel impulse response (IR) based on a random demodulator, which captures enough information in the recorded samples to reliably estimate the IR when exploiting sparsity. Compared to the state of the art, this improves robustness to the effects of time-variant radar channels, while also outperforming state-of-the-art methods based on Nyquist sampling in terms of reconstruction error. In order to circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the atomic norm minimization framework and show how it can be used for the estimation of the signal covariance with R-dimensional parameters from multiple compressive snapshots. To this end, we derive a variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use this for direction finding with realistic antenna geometries. In this context we also present a method, based on a stochastic gradient descent iteration scheme, to find compression schemes that are well suited for parameter estimation, since the resulting sub-sampling has a uniform effect on the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
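The matrix-free idea can be sketched with a tiny operator class (an illustrative example of the concept, not the interface of the software library described above): forward and adjoint actions are implemented as FFT-based functions, and their consistency is verified with the standard dot-product test.

```python
import numpy as np

class SubsampledDFT:
    """Matrix-free linear operator keeping a subset of unitary DFT coefficients.

    forward: y = F_Omega x   (FFT, then row selection)
    adjoint: x = F_Omega^H y (zero-fill, then inverse FFT)
    The m-by-n matrix is never formed explicitly.
    """
    def __init__(self, n, rows):
        self.n = n
        self.rows = np.asarray(rows)

    def forward(self, x):
        return np.fft.fft(x, norm="ortho")[self.rows]

    def adjoint(self, y):
        z = np.zeros(self.n, dtype=complex)
        z[self.rows] = y
        return np.fft.ifft(z, norm="ortho")   # unitary FFT: inverse equals conjugate transpose

rng = np.random.default_rng(0)
op = SubsampledDFT(64, rng.choice(64, 16, replace=False))
x = rng.standard_normal(64)
y = rng.standard_normal(16)

# dot-product (adjoint) test: <forward(x), y> must equal <x, adjoint(y)>
lhs = np.vdot(op.forward(x), y)
rhs = np.vdot(x, op.adjoint(y))
print(np.allclose(lhs, rhs))
```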
Estimating Sparse Representations from Dictionaries With Uncertainty
In the last two decades, sparse representations have gained increasing attention in a variety of engineering applications. A sparse representation of a signal requires a dictionary of basic elements that describe salient and discriminant features of that signal. When the dictionary is created from a mathematical model, its expressiveness depends on the quality of this model.
In this dissertation, the problem of estimating sparse representations in the presence of errors and uncertainty in the dictionary is addressed. In the first part, a statistical framework for sparse regularization is introduced. The second part is concerned with the development of methodologies for estimating sparse representations from highly redundant dictionaries along with unknown dictionary parameters. The presented methods are demonstrated on applications in direction finding and fiber-optic sensing, which serve as illustrative examples for investigating the abstract problems in the theory of sparse representations.
Estimating a sparse representation often involves the solution of a regularized optimization problem. The presented regularization framework offers a systematic procedure for the determination of a regularization parameter that accounts for the joint effects of model errors and measurement noise. It is determined as an upper bound of the mean-squared error between the corrupted data and the ideal model. Despite proper regularization, the quality and accuracy of the obtained sparse representation remains affected by model errors and is indeed sensitive to changes in the regularization parameter. To alleviate this problem, dictionary calibration is performed. The framework is applied to the problem of direction finding.
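As generic background for such regularized problems (the λ below is a standard noise-driven textbook choice, not the upper-bound rule derived in the dissertation), a sparse representation can be estimated with iterative soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 60, 100, 3
A = rng.standard_normal((n, p)) / np.sqrt(n)       # overcomplete dictionary
x0 = np.zeros(p)
x0[rng.choice(p, k, replace=False)] = [2.0, -1.5, 1.0]
sigma = 0.02
y = A @ x0 + sigma * rng.standard_normal(n)        # data corrupted by measurement noise

lam = 2 * sigma * np.sqrt(2 * np.log(p))           # noise-driven choice of the parameter

def ista(A, y, lam, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax-y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y, lam)
print(np.linalg.norm(x_hat - x0))                  # small estimation error
```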
Redundancy enables the dictionary to describe a broader class of observations but also increases the similarity between different entries, which leads to ambiguous representations. To address the problem of redundancy and additional uncertainty in the dictionary parameters, two strategies are pursued. Firstly, an alternating estimation method for iteratively determining the underlying sparse representation and the dictionary parameters is presented, and theoretical bounds for the estimation errors are derived. Secondly, a Bayesian framework for estimating sparse representations and dictionary learning is developed. A hierarchical structure is considered to account for uncertainty in prior assumptions. The considered model for the coefficients of the sparse representation is particularly designed to handle high redundancy in the dictionary. Approximate inference is accomplished using a hybrid Markov Chain Monte Carlo algorithm. The performance and practical applicability of both methodologies are evaluated for a problem in fiber-optic sensing, for which a mathematical model of the sensor signal is derived. This model is used to generate a suitable parametric dictionary.
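The first strategy can be caricatured in a few lines (a toy model with a single scalar dictionary parameter and grid refinement; the dissertation's method and bounds are more general): alternate between sparse coding for a fixed dictionary and refitting the dictionary parameter for a fixed sparse representation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 30, 2
t = np.arange(n) / n
freqs = np.arange(1, p + 1)

def dictionary(theta):
    """Parametric dictionary: sampled cosines with an unknown common frequency scaling theta."""
    return np.cos(2*np.pi*np.outer(t, theta*freqs))

theta_true = 1.02
x_true = np.zeros(p)
x_true[[4, 11]] = [1.0, -0.7]
y = dictionary(theta_true) @ x_true            # observation generated by the miscalibrated dictionary

def sparse_code(D, y, k):
    """Plain OMP for a fixed dictionary."""
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(D.T @ r))))
        c, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        r = y - D[:, S] @ c
    x = np.zeros(D.shape[1])
    x[S] = c
    return x

theta = 1.0                                    # start from the nominal (wrong) parameter
for _ in range(5):
    x = sparse_code(dictionary(theta), y, k)   # step 1: sparse representation
    grid = theta + np.linspace(-0.05, 0.05, 201)
    errs = [np.linalg.norm(y - dictionary(g) @ x) for g in grid]
    theta = grid[int(np.argmin(errs))]         # step 2: refine the dictionary parameter

print(round(theta, 3))                         # converges close to theta_true
```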