
    Compressive sensing-based data uploading in time-driven public sensing applications

    Over the last few years, mobile phone technology has advanced considerably. People gather and upload ever more information through their mobile phones with little effort. Accordingly, a new sensing paradigm has emerged, referred to as public sensing (PS). The core idea behind PS is to exploit the abundance of smart mobile devices to opportunistically provide real-time sensor data across spatial and environmental dimensions. Recently, PS has been applied in many different application scenarios, such as environmental monitoring, traffic analysis, and indoor mapping. However, PS applications face several challenges. One of the most prominent is convincing users to participate, and several incentive mechanisms have been developed to this end. Beyond incentives, the two main requirements that any PS application should meet are preserving users' privacy and limiting the energy cost of running the application. In fact, there are several energy consumers in PS applications. For example, many PS applications require the mobile devices to obtain a position fix and frequently send this position data to the PS server. Similarly, the mobile devices waste energy when they receive sensing queries outside the sensing areas. The most energy-expensive task, however, is to frequently acquire and send data to the PS server. In this thesis, we tackle the problem of energy consumption in a special category of PS applications in which the participating mobile devices are periodically queried for sensor data, such as acceleration and images. To reduce the energy overhead of uploading large amounts of data, we exploit the fact that processing approximately one thousand instructions consumes roughly as much energy as transmitting a single bit. Accordingly, we use data compression to reduce the number of bits transmitted from the participating mobile devices to the PS server. Although the technical literature offers many compression methods, such as derivative-based prediction, the cosine transform, and the wavelet transform, we designed a framework based on compressive sensing (CS) theory. Over the last decade, CS has proven to be a promising candidate for compressing N-dimensional data, and it shows satisfactory results when used to infer missing data. Accordingly, we exploit CS to compress 1D data (e.g., acceleration, gravity) and 2D data (e.g., images). To efficiently use CS on resource-constrained devices such as smart mobile devices, we start by identifying the most lightweight measurement matrices to implement on the mobile devices. We examine several matrices, such as the random measurement matrix, the random Gaussian matrix, and the Toeplitz matrix. Our analysis is mainly based on the recovery accuracy and the energy drawn from the mobile device's battery. Additionally, we perform a comparative study with other compressors, including the cosine transform and the lossless ZIP compressor. To further confirm that CS achieves high recovery accuracy, we implemented an activity recognition algorithm on the server side. To this end, we use the dynamic time warping (DTW) algorithm as a pattern matching tool between a set of stored patterns and the recovered data. Several experiments show the high accuracy of both CS and DTW in recognizing activities such as walking, running, and jogging.
In terms of energy, CS significantly reduces battery consumption relative to the baseline compressors. Finally, we demonstrate that the CS-based compression method can handle 2D data, i.e., images, as well as 1D data. The main challenge is to perform image encoding on the mobile devices despite the complex matrix operations between the image pixels and the sensing matrices. To overcome this problem, we divide the image into a number of cells and then perform the encoding process on each cell individually, so the compression is carried out iteratively. The evaluation shows promising results for CS-based 2D compression in terms of both energy savings and recovery accuracy.
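
    As an illustration of the encoding side, the following is a minimal Python sketch of blockwise CS measurement with a lightweight +/-1 Bernoulli matrix; it is not the thesis code, and the cell size, compression ratio, and function names are assumptions. The PS server would regenerate the same matrices from a shared seed and run a standard sparse solver per cell.

    import numpy as np

    def bernoulli_matrix(m, n, rng):
        # Lightweight +/-1 measurement matrix; entries can be generated on the
        # fly on the phone, so no dense matrix has to be stored or downloaded.
        return rng.choice((-1.0, 1.0), size=(m, n))

    def encode_1d(x, m, rng):
        # Compress an N-sample window (e.g. acceleration) to m measurements.
        phi = bernoulli_matrix(m, x.size, rng)
        return phi @ x

    def encode_2d(image, cell=16, ratio=0.25, seed=0):
        # Split the image into cells and encode each cell individually, keeping
        # the per-iteration matrix small enough for a mobile CPU
        # (edge remainders are ignored for brevity).
        rng = np.random.default_rng(seed)
        h, w = image.shape
        m = int(ratio * cell * cell)
        measurements = []
        for i in range(0, h - cell + 1, cell):
            for j in range(0, w - cell + 1, cell):
                block = image[i:i + cell, j:j + cell].ravel()
                measurements.append(encode_1d(block, m, rng))
        return np.stack(measurements)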

    Sampling of graph signals via randomized local aggregations

    Sampling of signals defined over the nodes of a graph is one of the crucial problems in graph signal processing. While in classical signal processing sampling is a well-defined operation, many new challenges arise when we consider a graph signal, and defining an efficient sampling strategy is not straightforward. Recently, several works have addressed this problem. The most common techniques select a subset of nodes from which to reconstruct the entire signal. However, such methods often require knowledge of the signal support and computation of the sparsity basis before sampling. Instead, in this paper we propose a new approach to this issue. We introduce a novel technique that combines localized sampling with compressed sensing. We first choose a subset of nodes and then, for each node of the subset, we compute random linear combinations of signal coefficients localized at the node itself and its neighborhood. The proposed method provides theoretical guarantees in terms of reconstruction and stability to noise for any graph and any orthonormal basis, even when the support is not known. Comment: IEEE Transactions on Signal and Information Processing over Networks, 201
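
    A minimal sketch of the sampling step as described (all names and sizes are assumptions, and reconstruction is omitted): choose a subset of nodes, then for each chosen node take random linear combinations of the signal restricted to the node and its one-hop neighborhood.

    import numpy as np

    def local_aggregation_samples(adj, signal, nodes, per_node, rng):
        # For each selected node, random (Gaussian) linear combinations of the
        # signal values at the node itself and its neighbors.
        samples = []
        for v in nodes:
            support = np.flatnonzero(adj[v])
            support = np.append(support, v)          # include the node itself
            weights = rng.standard_normal((per_node, support.size))
            samples.append(weights @ signal[support])
        return np.concatenate(samples)

    rng = np.random.default_rng(1)
    n = 50
    adj = (rng.random((n, n)) < 0.1).astype(int)
    adj = np.triu(adj, 1); adj = adj + adj.T         # symmetric, no self-loops
    x = rng.standard_normal(n)                       # graph signal
    picked = rng.choice(n, size=8, replace=False)    # sampled node subset
    y = local_aggregation_samples(adj, x, picked, per_node=3, rng=rng)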

    Compressed Sensing and Parallel Acquisition

    Parallel acquisition systems arise in various applications in order to moderate problems caused by insufficient measurements in single-sensor systems. These systems allow simultaneous data acquisition in multiple sensors, thus alleviating such problems by providing more overall measurements. In this work we consider the combination of compressed sensing with parallel acquisition. We establish the theoretical improvements of such systems by providing recovery guarantees for which, subject to appropriate conditions, the number of measurements required per sensor decreases linearly with the total number of sensors. Throughout, we consider two different sampling scenarios -- distinct (corresponding to independent sampling in each sensor) and identical (corresponding to dependent sampling between sensors) -- and a general mathematical framework that allows for a wide range of sensing matrices (e.g., subgaussian random matrices, subsampled isometries, random convolutions and random Toeplitz matrices). We also consider not just the standard sparse signal model, but also the so-called sparse in levels signal model, which includes sparse and distributed signals as well as clustered sparse signals. As our results show, optimal recovery guarantees for both distinct and identical sampling are possible under much broader conditions on the so-called sensor profile matrices (which characterize environmental conditions between a source and the sensors) for the sparse in levels model than for the sparse model. To verify our recovery guarantees we provide numerical results showing phase transitions for a number of different multi-sensor environments. Comment: 43 pages, 4 figures
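
    To make the measurement model concrete, here is a minimal sketch of the distinct-sampling scenario (the shapes, the diagonal sensor profiles, and all names are assumptions, not the paper's code): each of C sensors applies its own subgaussian matrix to the signal as seen through its sensor profile matrix, and the measurements are stacked.

    import numpy as np

    rng = np.random.default_rng(0)
    n, C, m = 256, 4, 32              # signal length, sensors, measurements/sensor

    # k-sparse ground-truth signal.
    x = np.zeros(n)
    x[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)

    ys, As = [], []
    for c in range(C):
        H = np.diag(rng.uniform(0.5, 1.5, size=n))    # sensor profile matrix
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # distinct subgaussian matrix
        As.append(A @ H)
        ys.append(As[-1] @ x)

    # Stacked system: recovery sees m*C measurements in total, which is why the
    # per-sensor count m can shrink roughly linearly as C grows.
    A_full = np.vstack(As)
    y_full = np.concatenate(ys)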

    Compressed Fingerprint Matching and Camera Identification via Random Projections

    Sensor imperfections in the form of photo-response nonuniformity (PRNU) patterns are a well-established fingerprinting technique for linking pictures to the camera sensors that acquired them. The noise-like characteristics of the PRNU pattern make it a difficult object to compress, thus hindering many interesting applications that would require storing a large number of fingerprints or transmitting them over a bandlimited channel for real-time camera matching. In this paper, we propose to use real-valued or binary random projections to effectively compress the fingerprints at a small cost in terms of matching accuracy. The performance of randomly projected fingerprints is analyzed from a theoretical standpoint and experimentally verified on databases of real photographs. Practical issues concerning the complexity of implementing random projections are also addressed by using circulant matrices.
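
    A minimal sketch of the compressed matching idea (names and sizes are assumptions; FFT-based circulant multiplication is one way to realize the low-complexity projections the abstract mentions): project each fingerprint with a random circulant matrix, optionally binarize, and match by normalized correlation.

    import numpy as np

    def circulant_project(x, seed, m):
        # Multiply x by a random +/-1 circulant matrix via the FFT and keep the
        # first m outputs: one O(n log n) random projection.
        rng = np.random.default_rng(seed)
        c = rng.choice((-1.0, 1.0), size=x.size)     # first column of the matrix
        y = np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(x), n=x.size)
        return y[:m]

    def match(fp_a, fp_b, seed=0, m=4096, binary=True):
        # Compare two PRNU fingerprints in the compressed domain.
        a, b = (circulant_project(f, seed, m) for f in (fp_a, fp_b))
        if binary:
            a, b = np.sign(a), np.sign(b)            # 1-bit fingerprints
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    rng = np.random.default_rng(3)
    fp1 = rng.standard_normal(65536)                 # flattened PRNU estimate
    fp2 = fp1 + 0.5 * rng.standard_normal(65536)     # same sensor, noisier estimate
    print(match(fp1, fp2), match(fp1, rng.standard_normal(65536)))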

    Efficient algorithms and data structures for compressive sensing

    Along with the ever-increasing number of sensors, which also generate rapidly growing amounts of data, the traditional paradigm of sampling according to the Nyquist criterion faces an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by generalizing the sampling and reconstruction schemes, such that the acquired samples can contain more complex information about the signal than Nyquist samples. The measurement process becomes more complex, the reconstruction algorithms necessarily become nonlinear, and the hardware design process needs to be revisited to account for the new acquisition scheme. Hence, one can identify a trade-off between the information contained in individual samples of a signal and the effort spent during development and operation of the sensing system. This thesis takes the necessary steps to shift this trade-off further in favor of CS. We do so by providing new results that make CS easier to deploy in practice while maintaining the performance indicated by theoretical results. The sparsity order of a signal plays a central role in any CS system, so we present a method to estimate this crucial quantity prior to recovery from a single snapshot. As we show, the proposed sparsity order estimation method reduces the reconstruction error compared to an unguided reconstruction. During the development of the theory, we observe that a matrix-free view on the involved linear mappings renders the modeling and reconstruction stages much more efficient. Hence, we present an open-source software architecture for constructing such matrix-free representations and showcase its ease of use and performance when applied to sparse recovery, both for detecting defects from ultrasound data and for estimating scatterers in a radio channel from ultra-wideband impulse responses. For the former application, we present a complete reconstruction pipeline for ultrasound data that has been compressed by sub-sampling in the frequency domain. We present the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that the proposed system allows significant compression levels without substantially deteriorating imaging quality.
For the second application, we develop a sampling scheme based on a Random Demodulator to acquire the channel impulse response (IR); it captures enough information in the recorded samples to reliably estimate the IR by exploiting sparsity. Compared to the state of the art, this improves robustness to the effects of time-variant radio channels while also outperforming Nyquist-based methods in terms of reconstruction error. To circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the Atomic Norm Minimization framework and show how it can be used to estimate the signal covariance with R-dimensional parameters from multiple compressive snapshots. To this end, we derive a variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use it for direction finding with realistic antenna geometries. In this context, we also present a method based on stochastic gradient descent to find compression schemes that are well suited for parameter estimation, since the resulting sub-sampling has a uniform effect on the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
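
    To give a flavor of the matrix-free idea, here is a generic sketch (not the thesis library, whose name the abstract does not give) of a sub-sampled Fourier measurement expressed through its action on vectors via scipy.sparse.linalg.LinearOperator; the sizes and the choice of operator are assumptions for illustration.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator

    def subsampled_fft_operator(n, rows):
        # Matrix-free A = S F: F is the unitary DFT, S keeps the rows in `rows`.
        # Only the action on vectors is defined; the n x n matrix never exists.
        rows = np.asarray(rows)

        def matvec(x):
            return np.fft.fft(x, norm="ortho")[rows]

        def rmatvec(y):
            z = np.zeros(n, dtype=complex)
            z[rows] = y
            return np.fft.ifft(z, norm="ortho")   # adjoint: zero-fill, inverse DFT

        return LinearOperator((rows.size, n), matvec=matvec,
                              rmatvec=rmatvec, dtype=complex)

    n = 1024
    rows = np.random.default_rng(0).choice(n, size=128, replace=False)
    A = subsampled_fft_operator(n, rows)
    y = A.matvec(np.random.default_rng(1).standard_normal(n))  # compressed samples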

    A Unified Multi-Functional Dynamic Spectrum Access Framework: Tutorial, Theory and Multi-GHz Wideband Testbed

    Dynamic spectrum access is a must-have ingredient for future sensors that are ideally cognitive. The goal of this paper is a tutorial treatment of wideband cognitive radio and radar—a convergence of (1) an algorithms survey, (2) a hardware platforms survey, (3) challenges for a multi-function (radar/communications) multi-GHz front end, (4) compressed sensing for multi-GHz waveforms—revolutionary A/D, (5) machine learning for cognitive radio/radar, (6) quickest detection, and (7) overlay/underlay cognitive radio waveforms. One focus of this paper is the multi-GHz front end, which is the challenge for next-generation cognitive sensors. The unifying theme is to spell out the convergence of cognitive radio, radar, and anti-jamming. Moore's law drives the system functions into the digital parts. From a system viewpoint, this paper gives the first comprehensive treatment of the functions and challenges of this multi-function (wideband) system, bringing together the required inter-disciplinary knowledge.
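
    Of the listed ingredients, quickest detection is the easiest to make concrete. Below is a minimal CUSUM sketch (a standard textbook procedure, not code from the paper) for flagging the appearance of a signal in a monitored band; the Gaussian model, thresholds, and names are all assumptions.

    import numpy as np

    def cusum_detect(samples, mu0, mu1, sigma, threshold):
        # One-sided CUSUM: detect a shift in mean from mu0 to mu1
        # (e.g. an energy rise when a band becomes occupied).
        # Log-likelihood ratio increments for Gaussian observations:
        llr = (mu1 - mu0) / sigma**2 * (samples - (mu0 + mu1) / 2)
        s = 0.0
        for k, inc in enumerate(llr):
            s = max(0.0, s + inc)        # reset at zero, accumulate evidence
            if s > threshold:
                return k                 # earliest alarm time
        return None

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0.0, 1.0, 500),   # band idle
                        rng.normal(0.8, 1.0, 500)])  # signal appears at n=500
    print(cusum_detect(x, mu0=0.0, mu1=0.8, sigma=1.0, threshold=20.0))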

    FaultFace: Deep Convolutional Generative Adversarial Network (DCGAN) based Ball-Bearing Failure Detection Method

    Failure detection is employed in industry to improve system performance and reduce costs due to unexpected malfunction events. A good dataset of the system is therefore desirable for designing an automated failure detection system. However, industrial process datasets are unbalanced and contain little information about failure behavior, owing to the uniqueness of these events and the high cost of running the system just to gather information about undesired behaviors. For this reason, correctly training and validating automated failure detection methods is challenging. This paper proposes a methodology called FaultFace for failure detection on ball-bearing joints of rotational shafts, using deep learning techniques to create balanced datasets. The FaultFace methodology uses 2D representations of vibration signals, called faceportraits, obtained by time-frequency transformation techniques. From the obtained faceportraits, a Deep Convolutional Generative Adversarial Network is employed to produce new faceportraits of the nominal and failure behaviors in order to obtain a balanced dataset. A Convolutional Neural Network is then trained for fault detection on the balanced dataset. The FaultFace methodology is compared with other deep learning techniques to evaluate its fault detection performance on unbalanced datasets. The results show that the FaultFace methodology performs well for failure detection on unbalanced datasets.
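
    A minimal sketch of the faceportrait step as described (the spectrogram parameters, output size, and names are assumptions): turn a vibration segment into a 2D time-frequency image that a DCGAN or CNN can consume.

    import numpy as np
    from scipy.signal import spectrogram

    def faceportrait(vibration, fs, size=64):
        # Time-frequency image of a vibration segment, normalized to [0, 1].
        f, t, S = spectrogram(vibration, fs=fs, nperseg=128, noverlap=64)
        img = np.log1p(S)                            # compress dynamic range
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)
        # Crop/resample to a square network input (crude nearest-neighbor).
        ri = np.linspace(0, img.shape[0] - 1, size).astype(int)
        ci = np.linspace(0, img.shape[1] - 1, size).astype(int)
        return img[np.ix_(ri, ci)]

    fs = 12_000                                      # e.g. a bearing test-rig rate
    x = np.random.default_rng(0).standard_normal(fs) # 1 s placeholder signal
    portrait = faceportrait(x, fs)                   # shape (64, 64)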

    Structured Compressed Sensing: From Theory to Applications

    Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers, putting some of the existing ideas into the perspective of practical applications. Comment: To appear as an overview paper in IEEE Transactions on Signal Processing
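
    As one concrete instance of a structured architecture within the paper's scope, here is a minimal sketch (sizes and names are assumptions) of a random Toeplitz measurement: a single random sequence defines every row of the sensing matrix, modeling a convolution-style (filter plus low-rate sampler) acquisition front end rather than a fully random matrix.

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(0)
    n, m = 512, 96

    # One random +/-1 sequence generates the whole Toeplitz sensing matrix.
    seq = rng.choice((-1.0, 1.0), size=2 * n - 1)
    T = toeplitz(seq[n - 1:], seq[n - 1::-1])        # n x n random Toeplitz
    A = T[rng.choice(n, size=m, replace=False)]      # keep m random rows

    # Sparse test signal and its structured CS measurements.
    x = np.zeros(n)
    x[rng.choice(n, size=10, replace=False)] = rng.standard_normal(10)
    y = A @ x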