801 research outputs found

    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Growing progress in sensor technology has steadily expanded the number and range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSN) involving hundreds or thousands of devices on limited budgets often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. It is therefore challenging to achieve good data quality and maintain error-free measurements over the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential, yet challenging, for several reasons, such as the presence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to take place in an uncontrolled environment. This paper presents current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
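One of the simplest self-calibration ideas in this problem space is blind offset calibration among co-located sensors: with no ground truth available, per-sensor additive biases are identifiable only up to a common constant, so each sensor is calibrated against the network consensus. A minimal sketch in Python (all numbers are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three co-located sensors observe the same signal
# but carry unknown additive offsets (a typical low-cost-sensor error).
true_signal = rng.normal(20.0, 2.0, size=500)        # e.g. temperature
offsets = np.array([0.0, 1.5, -0.8])                 # unknown per-sensor biases
readings = true_signal + offsets[:, None] + rng.normal(0.0, 0.1, (3, 500))

# Blind calibration: without ground truth, offsets are identifiable only
# up to a common constant, so estimate each sensor's deviation from the
# network consensus (the per-sample mean across sensors).
consensus = readings.mean(axis=0)
relative_offset = (readings - consensus).mean(axis=1)

# Report offsets relative to sensor 0; these match the true biases.
print(np.round(relative_offset - relative_offset[0], 2))
```

Subtracting the estimated offsets from the raw readings then brings all sensors onto a common scale, anchored to sensor 0.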

    Intelligent Sensor Networks

    In the last decade, wireless and wired sensor networks have attracted much attention. However, most designs target general sensor network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.

    Compressed Sensing for Open-ended Waveguide Non-Destructive Testing and Evaluation

    Ph.D. thesis. Non-destructive testing and evaluation (NDT&E) systems using open-ended waveguide (OEW) face critical challenges. In the sensing stage, data acquisition by raster scan is time-consuming, which makes on-line detection difficult. The sensing stage also disregards the demands of the subsequent feature-extraction process, leading to an excessive amount of data and processing overhead for feature extraction. In the feature-extraction stage, efficient and robust defect-region segmentation in the obtained image is challenging when the image background is complex. Compressed sensing (CS) has demonstrated impressive data-compression ability in various applications using sparse models. Developing CS models for OEW NDT&E that jointly consider sensing and processing, enabling fast data acquisition, data compression, and efficient, robust feature extraction, remains an open challenge. This thesis develops integrated sensing-processing CS models to address the drawbacks of OEW NDT systems and carries out case studies in low-energy impact damage detection for carbon fibre reinforced plastic (CFRP) materials. The major contributions are: (1) For the challenge of fast data acquisition, an online CS model is developed that offers faster data acquisition and reduces the data amount without any hardware modification. The images obtained with OEW are usually smooth and can be sparsely represented in a discrete cosine transform (DCT) basis. Based on this property, a customised 0/1 Bernoulli matrix is designed for CS measurement by downsampling. The full data are reconstructed with the orthogonal matching pursuit algorithm using the downsampled data, the DCT basis, and the customised 0/1 Bernoulli matrix. It is hard to determine the number of sampled pixels needed for sparse reconstruction when training data are lacking; to address this issue, an accumulated sampling and recovery process is developed within this CS model.
The defect region can be extracted with the proposed histogram threshold edge detection (HTED) algorithm after each recovery, which forms an online process. A case study of impact damage detection on CFRP materials is carried out for validation. The results show that the data acquisition time is reduced by one order of magnitude while maintaining image quality and defect regions equivalent to those of a raster scan. (2) For the challenge of efficient data compression that accounts for later feature extraction, a feature-supervised CS data acquisition method is proposed and evaluated. It preserves the features of interest while reducing the data amount. The frequencies that reveal these features occupy only a small part of the frequency band, so the method first locates this sparse frequency range to supervise the subsequent sampling process. Then, based on the joint sparsity of neighbouring frames and the extracted frequency band, an aligned spatial-spectrum sampling scheme is proposed. The scheme samples only the frequency range of interest for the required features, using a customised 0/1 Bernoulli measurement matrix. The spectral-spatial data of interest are reconstructed jointly, which is much faster than frame-by-frame methods. The proposed feature-supervised CS data acquisition is implemented and compared with raster scanning and traditional CS reconstruction in impact damage detection on CFRP materials. The results show that the data amount is greatly reduced without compromising feature quality, and the reconstruction speed-up grows linearly with the number of measurements. (3) Building on the above CS-based data acquisition methods, CS models are developed to detect defects directly from CS data rather than from the reconstructed full spatial data. This approach is robust to textured backgrounds and more time-efficient than the HTED algorithm.
First, exploiting the fact that the histogram is invariant to down-sampling with the customised 0/1 Bernoulli measurement matrix, a qualitative method that gives a binary judgement of defect presence is developed; it achieves a high probability of detection and accuracy compared with other methods. Second, a new greedy sparse orthogonal matching pursuit (spOMP) algorithm for defect-region segmentation is developed to extract the defect region quantitatively, because conventional sparse reconstruction algorithms cannot properly exploit the sparse correlation between the measurement matrix and the CS data. The proposed algorithms are faster and more robust to interference than other algorithms. Funding: China Scholarship Council.
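The first contribution's pipeline (DCT-sparse scan lines, a 0/1 Bernoulli sampling matrix, and orthogonal matching pursuit recovery) can be illustrated with a toy 1D example. This is an independent sketch, not the thesis code: the signal length, sparsity, and measurement count are arbitrary, and the OMP routine is a generic textbook implementation:

```python
import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis; columns are the basis vectors.
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[:, 0] /= np.sqrt(2.0)
    return C

def omp(A, y, max_atoms):
    # Generic orthogonal matching pursuit: greedily pick the atom most
    # correlated with the residual, then re-fit by least squares.
    norms = np.linalg.norm(A, axis=0)
    residual, chosen = y.copy(), []
    coef = np.zeros(0)
    for _ in range(max_atoms):
        if np.linalg.norm(residual) < 1e-10:
            break
        chosen.append(int(np.argmax(np.abs(A.T @ residual) / norms)))
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x = np.zeros(A.shape[1])
    x[chosen] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5                    # scan-line length, measurements, sparsity
Psi = dct_basis(n)                      # smooth signals are sparse in this basis
support = rng.choice(12, size=k, replace=False)   # low-frequency support
coeffs = np.zeros(n)
coeffs[support] = rng.normal(0.0, 3.0, k)
scan_line = Psi @ coeffs                # smooth "image row"

Phi = rng.integers(0, 2, (m, n)).astype(float)    # 0/1 Bernoulli sampling
y = Phi @ scan_line                     # compressed measurements
recovered = Psi @ omp(Phi @ Psi, y, max_atoms=15)

print("reconstruction error:", np.linalg.norm(recovered - scan_line))
```

With half the samples of the full scan line, the noiseless sparse signal is recovered essentially exactly; the accumulated sampling idea in the thesis corresponds to growing `m` until the recovery stabilizes.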

    High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework

    High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification. We present a progressive multiple-target detection approach that estimates the number of targets sequentially and implement it using a mobile-agent framework. To further improve performance, we present a cluster-based distributed approach in which the estimated results from different clusters are fused. Experimental results show that the distributed scheme with the Bayesian fusion method has the best performance, in the sense that it has the highest detection probability and the most stable results. In addition, progressive intra-cluster estimation can reduce data transmission by 83.22% and conserve energy by 81.64% compared to the centralized scheme. For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement of classification accuracy across the levels of the hierarchy. Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information and of modifying the knowledge base dynamically. We present a collaborative method to solve this problem among multiple sensors.
When applied to the military-vehicle data set collected in a field demo, about 80% of unknown target samples can be recognized correctly, while known-target classification accuracy stays above 95%.
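The Bayesian fusion step can be illustrated with a generic odds-product rule: if clusters report conditionally independent likelihood ratios for "target present", the fused posterior follows from multiplying the prior odds by each ratio. A sketch with made-up numbers (the dissertation's actual fusion rule may differ):

```python
def bayes_fuse(prior, likelihood_ratios):
    # Posterior odds = prior odds * product of per-cluster likelihood ratios
    # P(report | target) / P(report | no target), assuming conditionally
    # independent cluster reports.
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Made-up reports: two clusters favour "target present", one slightly against.
posterior = bayes_fuse(0.5, [3.0, 2.0, 0.8])
print(round(posterior, 3))  # 4.8/5.8 ≈ 0.828
```

A single weakly contradicting cluster (ratio 0.8) only dampens, rather than overturns, the evidence from the two supporting clusters, which is what makes this kind of fusion robust to individual faulty reports.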

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Novel deep cross-domain framework for fault diagnosis of rotary machinery in prognostics and health management

    Improving the reliability of engineered systems is a crucial problem in many applications in various engineering fields, such as the aerospace, nuclear energy, and water desalination industries. This requires efficient and effective system health monitoring methods, including processing and analyzing massive machinery data to detect anomalies and perform diagnosis and prognosis. In recent years, deep learning has been a fast-growing field and has shown promising results for Prognostics and Health Management (PHM) in interpreting condition monitoring signals such as vibration, acoustic emission, and pressure, due to its capacity to mine complex representations from raw data. This doctoral research provides a systematic review of state-of-the-art deep learning-based PHM frameworks, an empirical analysis of bearing fault diagnosis benchmarks, and a novel multi-source domain adaptation framework. It emphasizes the most recent trends within the field and presents the benefits and potential of state-of-the-art deep neural networks for system health management. The limitations and challenges of existing technologies are also discussed, pointing to opportunities for future research. The empirical study reports the evaluation results of existing models on bearing fault diagnosis benchmark datasets in terms of performance metrics such as accuracy and training time; these results provide a baseline for comparing and testing new models. A novel multi-source domain adaptation framework for fault diagnosis of rotary machinery is also proposed, which aligns the domains at both the feature level and the task level. The proposed framework transfers knowledge from multiple labeled source domains into a single unlabeled target domain by reducing the feature distribution discrepancy between the target domain and each source domain. The model can also be easily reduced to a single-source domain adaptation problem,
and it can be readily applied to unsupervised domain adaptation problems in other fields such as image classification and image segmentation. Further, the proposed model is modified with a novel conditional weighting mechanism that aligns the class-conditional probability of the domains and reduces the effect of irrelevant source domains, a critical issue in multi-source domain adaptation algorithms. Experimental results show the superiority of the proposed framework over state-of-the-art multi-source domain adaptation models.
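A common way to measure the feature-distribution discrepancy that such frameworks minimize is the (biased) squared maximum mean discrepancy with an RBF kernel; the abstract does not state which discrepancy this framework uses, so treat this as a generic sketch with synthetic features:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.5):
    # Biased estimate of squared maximum mean discrepancy with an RBF
    # kernel -- one common feature-distribution discrepancy term in
    # domain-adaptation losses (always >= 0 for this estimator).
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (200, 2))       # features from a labeled source
target_far = rng.normal(2.0, 1.0, (200, 2))   # poorly aligned target features
target_near = rng.normal(0.1, 1.0, (200, 2))  # nearly aligned target features

d_far = mmd_rbf(source, target_far)
d_near = mmd_rbf(source, target_near)
print(d_far, d_near)
```

Adding such a term per source domain to the training loss penalizes source-target misalignment; the conditional weighting mechanism described above would then down-weight the terms from irrelevant sources.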

    Compressed sensing and finite rate of innovation for efficient data acquisition of quantitative acoustic microscopy images

    Quantitative acoustic microscopy (QAM) is a well-established imaging modality that produces 2D parameter maps representative of the mechanical properties of soft tissues at microscopic scales. In leading-edge QAM studies, the sample is raster-scanned (spatial step size of 2 µm) using a 250 MHz transducer, resulting in a 3D RF data cube; the RF signal at each spatial location is processed to estimate acoustic parameters, e.g., speed of sound or acoustic impedance. The scanning time is directly proportional to the sample size and can range from a few minutes to tens of minutes. Because the thin sectioned samples are sensitive to experimental conditions, reducing the scanning time is an important practical challenge in QAM. To address it, this thesis proposes novel approaches inspired by compressed sensing (CS) and finite rate of innovation (FRI). The success of CS relies on the sparsity of the data under consideration, incoherent measurements, and numerical optimization techniques. The idea behind FRI, on the other hand, rests on a signal model fully characterized by a limited number of parameters. Taking into account the physics of QAM data acquisition, QAM data are well suited to these state-of-the-art techniques. However, the mechanical structure of the QAM system does not support canonical CS measurement schemes, and the composition of the RF signal model does not fit existing FRI schemes, so neither framework can be applied directly.
In this thesis, to overcome these limitations, a novel CS sensing framework is presented in the spatial domain: a recently proposed approximate message passing (AMP) algorithm is adapted to account for the underlying data statistics of samples sparsely collected by the proposed scanning patterns. In the time domain, to achieve accurate recovery from a small set of samples of QAM RF signals, a sum-of-sincs (SoS) sampling kernel and an autoregressive (AR) model estimator are employed. Spiral scanning, introduced as a sensing technique applicable to the QAM system, significantly reduced the number of spatial samples needed to reconstruct speed-of-sound images of a human lymph node; the scanning time was also greatly reduced thanks to the mechanical movement of the proposed sensing pattern. Together with this achievement in the spatial domain, the SoS kernel and AR estimator, responsible for innovation-rate sampling and parameter estimation respectively, dramatically reduced the required number of samples per RF signal compared with a conventional approach. Finally, we show that the CS- and FRI-based data acquisition frameworks can be combined into a single spatio-temporal solution that maximizes the benefits stated above.
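The geometric intuition behind spiral scanning, namely that the sample count grows with the trajectory's arc length rather than with the full raster grid area, can be sketched as follows (turn count, sampling density, and grid size are arbitrary choices, not the thesis's settings):

```python
import numpy as np

def spiral_points(n_turns, samples_per_turn, radius):
    # Archimedean spiral from the centre outwards; the probe moves
    # continuously instead of stepping over every raster grid point.
    t = np.linspace(0.0, 2.0 * np.pi * n_turns, n_turns * samples_per_turn)
    r = radius * t / t[-1]
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

points = spiral_points(n_turns=20, samples_per_turn=64, radius=1.0)
raster_samples = 128 * 128     # raster grid covering the same field of view
print(len(points), raster_samples, len(points) / raster_samples)
```

Here the spiral visits well under a tenth of the raster's sample count; the CS reconstruction step is what makes a full parametric image recoverable from those sparse positions.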

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected in nasalance scores (in %). Errorless learners practiced producing hypernasal speech against a threshold nasalance score of 10% at the beginning, which gradually increased to 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that errorless learners displayed fewer errors (17.7% vs. 50.7% for errorful learners) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
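The two practice schedules can be sketched numerically: the same nasalance thresholds are presented in ascending order to errorless learners and in descending order to errorful learners, and an "error" is a trial whose nasalance falls below the current threshold. The learner model below is a made-up illustration, not the study's data:

```python
import numpy as np

blocks = 5
thresholds_errorless = np.linspace(10.0, 50.0, blocks)   # ascending targets (%)
thresholds_errorful = thresholds_errorless[::-1]         # same targets, reversed

# Made-up learner: achievable nasalance grows steadily with practice.
ability = np.linspace(15.0, 55.0, blocks)

error_rate_errorless = np.mean(ability < thresholds_errorless)
error_rate_errorful = np.mean(ability < thresholds_errorful)
print(error_rate_errorless, error_rate_errorful)  # 0.0 0.4
```

Because the ascending schedule keeps each target just within reach, this toy learner never errs under the errorless schedule but fails the early, hardest targets under the reversed one, mirroring the direction of the reported effect.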

    IoT Applications Computing

    The evolution of emerging and innovative technologies based on Industry 4.0 concepts is transforming society and industry into a fully digitized and networked globe. Sensing, communications, and computing embedded with ambient intelligence are at the heart of the Internet of Things (IoT), the Industrial Internet of Things (IIoT), and Industry 4.0 technologies, with expanding applications in manufacturing, transportation, health, building automation, agriculture, and the environment. It is expected that the emerging technology clusters of ambient-intelligence computing will not only transform modern industry but also advance societal health and wellness and make the environment more sustainable. This book uses an interdisciplinary approach to explain the complex issue of scientific and technological innovations largely based on intelligent computing.

    Air Force Institute of Technology Research Report 2013

    This report summarizes the research activities of the Air Force Institute of Technology's Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses and dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems Engineering and Management, Operational Sciences, Mathematics, Statistics, and Engineering Physics.