
    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent developments in basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review progress in several application areas of GSP, including the processing and analysis of sensor network data and biological data, and applications to image processing and machine learning. We finish with a brief historical perspective highlighting how concepts recently developed in GSP build on prior research in other areas.
    Comment: To appear, Proceedings of the IEEE
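
    The connection to conventional DSP can be made concrete through the graph Fourier transform: eigenvalues of the graph Laplacian play the role of frequencies, and filtering becomes pointwise scaling in that eigenbasis. Below is a minimal illustrative sketch, assuming only numpy; the toy graph, signal, and cutoff are ours, not from the paper.

        import numpy as np

        # Toy undirected graph: a 4-node path, given by its adjacency matrix.
        A = np.array([[0., 1., 0., 0.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [0., 0., 1., 0.]])
        L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

        # Eigendecomposition of L yields the graph Fourier basis:
        # eigenvalues act as graph frequencies, eigenvectors as harmonics.
        freqs, U = np.linalg.eigh(L)

        x = np.array([1.0, 0.2, 0.9, 0.1])    # a signal living on the 4 nodes
        x_hat = U.T @ x                        # graph Fourier transform (GFT)

        # Ideal low-pass graph filter: keep only the two lowest frequencies.
        h = (freqs <= freqs[1]).astype(float)
        x_smooth = U @ (h * x_hat)             # filter, then inverse GFT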

    Robust Background Subtraction for Moving Cameras and Their Applications in Ego-Vision Systems

    Background subtraction is the algorithmic process that segments out the region of interest, often known as the foreground, from the background. Extensive literature and numerous algorithms exist in this domain, but most research has focused on videos captured by static cameras. The proliferation of portable camera-equipped platforms has resulted in a large amount of video data generated from moving cameras, which motivates the need for foundational foreground/background segmentation algorithms for such videos. In this dissertation, I propose three new types of background subtraction algorithms for moving cameras, based on appearance, motion, and a combination of the two. Comprehensive evaluation of the proposed approaches on publicly available test sequences shows the superiority of our system over state-of-the-art algorithms. The first method is an appearance-based global modeling of foreground and background. Features are extracted by sliding a fixed-size window over the entire image without any spatial constraint, to accommodate arbitrary camera movements. A supervised learning method is then used to build the foreground and background models. This method is suitable for limited-scene scenarios such as Pan-Tilt-Zoom surveillance cameras. The second method relies on motion. It comprises an innovative background motion approximation mechanism followed by spatial regulation through a mega-pixel denoising process. It does not need to maintain any costly appearance models and is therefore appropriate for resource-constrained ego-vision systems. The proposed segmentation, combined with skin cues, is validated by a novel application: authenticating hand-gestured signatures captured by wearable cameras. The third method combines both motion and appearance. Foreground probabilities are jointly estimated from motion and appearance. After the mega-pixel denoising process, the probability estimates and the gradient image are combined by Graph-Cut to produce the segmentation mask. This method is universal in that it can handle all types of moving cameras.
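
    The third method's fusion step can be illustrated with a small sketch. This is our own simplification, assuming numpy: the two probability maps are blended with a fixed weight and thresholded, whereas the dissertation estimates the joint probability and refines the mask with mega-pixel denoising and Graph-Cut.

        import numpy as np

        def fuse_foreground(p_motion, p_appearance, w=0.5, tau=0.5):
            """Blend per-pixel foreground probabilities from a motion cue and
            an appearance cue, then threshold into a binary mask.
            `w` and `tau` are illustrative parameters, not thesis values."""
            p_fg = w * p_motion + (1.0 - w) * p_appearance
            # The full method refines p_fg with mega-pixel denoising and a
            # Graph-Cut over the gradient image; a hard threshold stands in.
            return (p_fg > tau).astype(np.uint8)

        mask = fuse_foreground(np.random.rand(240, 320), np.random.rand(240, 320))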

    Distributed, Low-Cost, Non-Expert Fine Dust Sensing with Smartphones

    This dissertation addresses the question of how particulate matter (fine dust) can be measured at high temporal and spatial resolution with low-cost sensors. To this end, a new sensor system based on low-cost off-the-shelf sensors and smartphones is presented, robust signal-processing algorithms for it are developed, and findings on interaction design for measurements performed by laypersons are reported. On a global scale, atmospheric aerosol particles pose a serious problem for human health, manifesting in respiratory and cardiovascular diseases and shortening life expectancy. To date, air quality has been assessed exclusively from the data of relatively few fixed measuring stations, extrapolated to high spatial resolution by models, so that its representativeness for the population-wide exposure remains unclear. Such spatial mappings are impossible to determine with today's static measurement networks. In the health-related assessment of pollutants, the trend is therefore strongly towards spatially differentiated measurements. A promising approach for achieving high spatial and temporal coverage is participatory sensing: distributed measurement by end users with the help of their personal devices. Air-quality measurement in particular raises a series of challenges, from new sensor hardware that is low-cost and portable, through robust algorithms for signal evaluation and calibration, to applications that support laypersons in carrying out measurements correctly and that protect their privacy. This work concentrates on the application scenario of participatory environmental measurement, in which smartphone-based sensors are used to measure the environment and laypersons typically carry out the measurements in a relatively uncontrolled manner. The main contributions are:
    1. Systems for sensing fine dust with smartphones (low-cost sensors and new hardware): Building on earlier research into fine-dust measurement with low-cost off-the-shelf sensors, a sensor concept was developed in which particulate matter is measured by means of a passive attachment on a smartphone camera. To assess sensor performance, laboratory measurements with artificially generated dust were conducted, as well as field evaluations co-located with official state measuring stations.
    2. Algorithms for signal processing and evaluation: For the new sensor designs, combinations of well-known OpenCV image-processing algorithms (background subtraction, contour detection, etc.) are used for image analysis, as sketched below. In contrast to evaluating an aggregate light-scattering signal, the resulting algorithm allows particles to be counted directly from their individual light traces. A second, novel algorithm exploits the fact that such processes exhibit signal-dependent noise whose ratio to the signal mean is known. This makes it possible to analyse signals affected by unknown systematic errors on the basis of their noise and to reconstruct the "true" signal.
    3. Algorithms for distributed, privacy-preserving calibration: One challenge of participatory environmental measurement is the recurring need for sensor calibration. This stems on the one hand from the instability of low-cost air-quality sensors in particular, and on the other from the fact that end users typically lack the means to calibrate. Existing approaches to so-called cross-calibration of sensors co-located with a reference station or with other sensors were applied to data from low-cost fine-dust sensors and extended with mechanisms that allow sensors to calibrate against one another without disclosing private information (identity, location).
    4. Human-computer interaction design guidelines for participatory sensing: Based on several small exploratory user studies, a taxonomy of the errors laypersons make when measuring environmental information with smartphones was derived empirically. Building on this, possible countermeasures were collected and classified. In a large summative study with a high number of participants, the effect of several of these measures was evaluated by comparing four variants of an app for participatory measurement of ambient noise. The findings form the basis for guidelines on designing efficient user interfaces for participatory sensing on mobile devices.
    5. Design patterns for participatory sensing games on mobile devices (gamification): A further approach investigated here is the gamification of the measurement process, minimising user errors through suitable game mechanics. The measurement process is embedded, for example, in a smartphone minigame that performs the measurement in the background when the context is suitable. To develop this concept, dubbed "Sensified Gaming", core tasks in participatory sensing were identified and matched against game design patterns collected from the literature.
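
    The particle-counting idea in contribution 2 can be sketched with standard OpenCV primitives. This is an illustrative reconstruction from the abstract, not the thesis code; the blur kernel and minimum contour area are assumptions.

        import cv2

        def count_particle_traces(frames):
            """Count bright particle light traces in a sequence of grayscale
            frames: background subtraction isolates moving bright spots,
            contour detection then counts individual traces."""
            subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
            total = 0
            for frame in frames:
                mask = subtractor.apply(frame)
                mask = cv2.medianBlur(mask, 5)          # suppress sensor noise
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                # Each sufficiently large blob is taken as one light trace.
                total += sum(1 for c in contours if cv2.contourArea(c) > 4)
            return total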

    Compressed Sensing for Open-ended Waveguide Non-Destructive Testing and Evaluation

    Ph.D. thesis. Non-destructive testing and evaluation (NDT&E) systems using open-ended waveguides (OEW) face critical challenges. In the sensing stage, data acquisition by raster scan is time-consuming, which makes on-line detection difficult. The sensing stage also disregards the demands of the subsequent feature-extraction process, leading to an excessive amount of data and processing overhead for feature extraction. In the feature-extraction stage, efficient and robust segmentation of defect regions in the obtained image is challenging when the image background is complex. Compressed sensing (CS) has demonstrated impressive data-compression ability in various applications using sparse models. Developing CS models for OEW NDT&E that jointly consider sensing and processing, so as to provide fast data acquisition, data compression, and efficient, robust feature extraction, remains a challenge. This thesis develops integrated sensing-processing CS models to address the drawbacks of OEW NDT systems and carries out case studies in low-energy impact-damage detection for carbon-fibre-reinforced plastic (CFRP) materials. The major contributions are: (1) For the challenge of fast data acquisition, an online CS model is developed that accelerates acquisition and reduces the data amount without any hardware modification (see the sketch following this abstract). The images obtained with OEW are usually smooth and can therefore be sparsely represented in a discrete cosine transform (DCT) basis. Based on this property, a customised 0/1 Bernoulli measurement matrix is designed for downsampling, and the full data are reconstructed from the downsampled data with the orthogonal matching pursuit algorithm using the DCT basis and the customised 0/1 Bernoulli matrix. It is hard to determine the number of sampled pixels needed for sparse reconstruction when training data are lacking; to address this issue, an accumulated sampling and recovery process is built into the CS model. After each recovery, the defect region can be extracted with the proposed histogram threshold edge detection (HTED) algorithm, which forms an online process. A case study in impact-damage detection on CFRP materials is carried out for validation. The results show that the data acquisition time is reduced by one order of magnitude while maintaining image quality and defect regions equivalent to those of a raster scan. (2) For the challenge of data compression that takes the later feature extraction into account, a feature-supervised CS data acquisition method is proposed and evaluated. It preserves the features of interest while reducing the data amount. Since the frequencies that reveal the features occupy only a small part of the frequency band, the method first identifies this sparse frequency range to supervise the subsequent sampling. Then, based on the joint sparsity of neighbouring frames and the extracted frequency band, an aligned spatial-spectrum sampling scheme is proposed. The scheme samples only the frequency range of interest for the required features, using a customised 0/1 Bernoulli measurement matrix. The spectral-spatial data of interest are reconstructed jointly, which is much faster than frame-by-frame methods. The proposed feature-supervised CS data acquisition is implemented and compared with raster scanning and traditional CS reconstruction in impact-damage detection on CFRP materials. The results show that the data amount is greatly reduced without compromising feature quality, and that the gain in reconstruction speed grows linearly with the number of measurements.
    (3) Building on the above CS-based data acquisition methods, CS models are developed that detect defects directly from the CS data rather than from reconstructed full spatial data. This approach is robust to textured backgrounds and more time-efficient than the HTED algorithm. First, based on the fact that the histogram is invariant under down-sampling with the customised 0/1 Bernoulli measurement matrix, a qualitative method is developed that gives a binary judgement of whether a defect is present; it achieves a high probability of detection and high accuracy compared to other methods. Second, a new greedy algorithm, sparse orthogonal matching pursuit (spOMP), is developed for defect-region segmentation to extract the defect region quantitatively, because conventional sparse reconstruction algorithms cannot properly exploit the sparsity of the correlation between the measurement matrix and the CS data. The proposed algorithms are faster and more robust to interference than other algorithms.
    China Scholarship Council
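
    The acquisition-and-recovery loop of contribution (1), combining 0/1 Bernoulli downsampling, a DCT sparsity basis, and orthogonal matching pursuit, can be sketched in a few lines. The sketch below is a generic 1-D illustration under these assumptions, using scipy and scikit-learn for the DCT and OMP; it is not the thesis implementation.

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, m, k = 128, 48, 5                   # length, measurements, sparsity

        # Smooth test signal: k-sparse in the DCT domain (standing in for an
        # OEW image line, which the thesis notes is usually smooth).
        s = np.zeros(n)
        s[rng.choice(16, size=k, replace=False)] = rng.normal(size=k)
        Psi = idct(np.eye(n), axis=0, norm='ortho')   # DCT synthesis basis
        x = Psi @ s

        # 0/1 Bernoulli-style downsampling: keep m randomly chosen samples.
        rows = rng.choice(n, size=m, replace=False)
        Phi = np.zeros((m, n))
        Phi[np.arange(m), rows] = 1.0
        y = Phi @ x                            # compressed measurements

        # Recover the sparse DCT coefficients with OMP, then resynthesise.
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(Phi @ Psi, y)
        x_rec = Psi @ omp.coef_
        print(np.linalg.norm(x - x_rec))       # small reconstruction error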

    On Some Common Compressive Sensing Recovery Algorithms and Applications

    Compressive Sensing, an emerging technique in signal processing, is reviewed in this paper together with its common applications. As an alternative to traditional signal sampling, Compressive Sensing allows a new acquisition strategy with a significantly reduced number of samples needed for accurate signal reconstruction. The basic ideas and motivation behind this approach are provided in the theoretical part of the paper, and the commonly used algorithms for missing-data reconstruction are presented. Compressive Sensing applications have gained significant attention, leading to an intensive growth of signal processing possibilities. Hence, some of the existing practical applications, assuming different types of signals in real-world scenarios, are described and analyzed as well.
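
    For reference, the standard formulation underlying the recovery algorithms reviewed here (notation ours): a signal x in R^n that is k-sparse in a basis \Psi is acquired through m << n linear measurements and recovered by convex relaxation,

        y = \Phi x = \Phi \Psi s,
        \hat{s} = \arg\min_{s} \|s\|_1 \quad \text{subject to} \quad y = \Phi \Psi s,
        \hat{x} = \Psi \hat{s},

    where greedy alternatives such as orthogonal matching pursuit solve the same recovery problem approximately.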

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies that target both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    The ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the issues inherent in very heterogeneous data transmissions over the same shared network. This diverse communication produces network packets of various sizes, ranging from very small sensory readings to comparatively large video frames. The sheer amount of data, as in the case of sensor networks, is continuously captured at varying rates and increases the load on the network, which can hinder transmission efficiency. At the same time, the volume of transmitted data opens up possibilities for exploiting correlations within it. Reductions based on such correlations enable networks to keep up with the new wave of big-data-driven communications by investing in select techniques that efficiently utilize the resources of the communication system. One solution for tackling erroneous data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome these issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Owing to the heterogeneity of packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions, deterministic shifting, reduces the overall number of transmitted packets. Moreover, the RaSOR scheme codes shifted packets with XOR operations, without the need for coding coefficients, thus achieving linear encoding and decoding complexities. Another facet of IoT applications is sensory data, which are known to be highly correlated and for which compressed sensing is a potential approach to reducing the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as those advocated by Industry 4.0. The design focuses on one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing and network coding.
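
    The shifted-XOR idea behind RaSOR can be illustrated with a small sketch. This is our own simplification in Python: packets of unequal length are XOR-combined at known byte offsets instead of being zero-padded to a common length up front; the actual RaSOR framing and decoding are more involved.

        def xor_shifted(packets, shifts):
            """XOR-combine unequal-length packets, each shifted by a known
            byte offset (illustrative of shifted-XOR coding, not the real
            RaSOR packet format)."""
            length = max(len(p) + s for p, s in zip(packets, shifts))
            coded = bytearray(length)
            for pkt, s in zip(packets, shifts):
                for i, b in enumerate(pkt):
                    coded[s + i] ^= b
            return bytes(coded)

        # Two small sensory readings and one larger frame in one coded packet.
        coded = xor_shifted([b'\x01\x02', b'\x10\x20\x30\x40', b'\xaa'],
                            shifts=[0, 2, 6])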