120 research outputs found

    Transportation mode recognition fusing wearable motion, sound and vision sensors

    Get PDF
    We present the first work that investigates the potential of improving the performance of transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the actual performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
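The four fixed fusion rules named in the abstract (Sum, Product, Majority Voting, Borda Count) can be sketched as follows. The class list matches the SHL activities; the function name and the probability vectors are illustrative, not taken from the paper.

```python
# Fixed-rule fusion of per-class probability vectors from the three
# mono-modal classifiers (motion, sound, vision).
CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]

def fuse_fixed_rules(prob_vectors):
    """prob_vectors: one probability vector per modality, length len(CLASSES)."""
    n = len(CLASSES)

    def pick(scores):
        return CLASSES[max(range(n), key=scores.__getitem__)]

    # Sum and Product rules: combine the class scores element-wise.
    sum_score = [sum(p[c] for p in prob_vectors) for c in range(n)]
    prod_score = [1.0] * n
    for p in prob_vectors:
        prod_score = [s * p[c] for c, s in enumerate(prod_score)]

    # Majority Voting: each modality casts one vote for its top class.
    votes = [0] * n
    for p in prob_vectors:
        votes[max(range(n), key=p.__getitem__)] += 1

    # Borda Count: each modality ranks the classes; rank 0 = least probable.
    borda = [0] * n
    for p in prob_vectors:
        for rank, c in enumerate(sorted(range(n), key=p.__getitem__)):
            borda[c] += rank

    return {"sum": pick(sum_score), "product": pick(prod_score),
            "vote": pick(votes), "borda": pick(borda)}
```

Note that the rules can disagree: a modality that is very confident in one class can dominate the Sum rule while losing the vote.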

    A Decentralized Architecture for Active Sensor Networks

    Get PDF
    This thesis is concerned with the Distributed Information Gathering (DIG) problem, in which a Sensor Network is tasked with building a common representation of the environment. The problem is motivated by the advantages offered by distributed autonomous sensing systems and the challenges they present. The focus of this study is on Macro Sensor Networks, characterized by platform mobility, heterogeneous teams, and long mission duration. The system under consideration may consist of an arbitrary number of mobile autonomous robots, stationary sensor platforms, and human operators, all linked in a network. This work describes a comprehensive framework called Active Sensor Network (ASN) which addresses the tasks of information fusion, decision making, system configuration, and user interaction. The main design objectives are scalability with the number of robotic platforms, maximum flexibility in implementation and deployment, and robustness to component and communication failure. The framework is described from three complementary points of view: architecture, algorithms, and implementation. The main contribution of this thesis is the development of the ASN architecture. Its design follows three guiding principles: decentralization, modularity, and locality of interactions. These principles are applied to all aspects of the architecture and the framework in general. To achieve flexibility, the design approach emphasizes interactions between components rather than the definition of the components themselves. The architecture specifies a small set of interfaces sufficient to implement a wide range of information gathering systems. In the area of algorithms, this thesis builds on the earlier work on Decentralized Data Fusion (DDF) and its extension to information-theoretic decision making. It presents the Bayesian Decentralized Data Fusion (BDDF) algorithm formulated for environment features represented by a general probability density function. 
Several specific representations are also considered: Gaussian, discrete, and the Certainty Grid map. Well-known algorithms for these representations are shown to implement various aspects of the Bayesian framework. As part of the ASN implementation, a practical indoor sensor network has been developed and tested. Two series of experiments were conducted, utilizing two types of environment representation: 1) point features with Gaussian position uncertainty and 2) Certainty Grid maps. The network was operational for several days at a time, with individual platforms coming on- and off-line. On several occasions, the network consisted of 39 software components. The lessons learned during the system's development may be applicable to other heterogeneous distributed systems with data-intensive algorithms.
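For the Certainty Grid representation, decentralized Bayesian fusion of a single cell can be sketched in log-odds form, where each platform's evidence adds independently of arrival order. The function and parameter names below are illustrative, not taken from the thesis.

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def fuse_cell(local_p, neighbor_ps, prior=0.5):
    """Fuse occupancy estimates for one Certainty Grid cell.

    Each neighboring node contributes only the evidence it gathered beyond
    the shared prior, so the result is independent of the order in which
    messages arrive -- a property decentralized fusion relies on.
    """
    l = logit(local_p)
    for p in neighbor_ps:
        l += logit(p) - logit(prior)  # add only the neighbor's new evidence
    return 1.0 / (1.0 + math.exp(-l))  # back to probability
```

A neighbor that merely reports the prior (0.5) adds nothing, while two independent 0.8 estimates reinforce each other to about 0.94.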

    Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media

    Full text link
    Imaging through scattering is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input-output "transmission matrix" for a fixed medium. However, this "one-to-one" mapping is highly susceptible to speckle decorrelations: small perturbations to the scattering medium lead to model errors and severe degradation of the imaging performance. Our goal here is to develop a new framework that is highly scalable to both medium perturbations and measurement requirements. To do so, we propose a statistical "one-to-all" deep learning (DL) technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show for the first time, to the best of our knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media. National Science Foundation (NSF) (1711156); Directorate for Engineering (ENG).

    Fusing diverse monitoring algorithms for robust change detection

    Full text link

    Improving acoustic vehicle classification by information fusion

    No full text
    We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles. Classification relying on a single feature set may lose some useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion scheme, in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production, and a number of harmonic components are extracted to characterize the factors related to the vehicle's resonance. The second set of features is extracted based on a computationally effective discriminatory analysis, and a group of key frequency components are selected by mutual information, accounting for the sound production from the vehicle's exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out based on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach can effectively increase classification accuracy compared to that achieved using each individual feature set alone. The Bayesian-based decision-level fusion is found to perform better than a feature-level fusion approach.
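A generic decision-level Bayesian fusion step can be sketched as a weighted product of the two classifiers' posteriors. The reliability weights and function name here are illustrative; the paper's modified algorithm, which matches each feature set with its favored classifier, may differ in detail.

```python
def bayes_fuse(post_a, post_b, w_a=1.0, w_b=1.0):
    """Weighted product-rule fusion of two posterior class distributions.

    post_a, post_b: per-class posteriors from the two classifiers
    w_a, w_b: reliability weights (1.0 = trust fully, 0.0 = ignore)
    Returns the normalized fused posterior.
    """
    fused = [(pa ** w_a) * (pb ** w_b) for pa, pb in zip(post_a, post_b)]
    z = sum(fused)
    return [f / z for f in fused]
```

With equal weights this reduces to the standard product rule, which assumes the two feature sets are conditionally independent given the class.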

    Bibliographic Review on Distributed Kalman Filtering

    Get PDF
    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods applied to DKF. The applications of DKF are also discussed and explained separately. A comparison of different approaches is briefly carried out. Contemporary research focuses are also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.

    Best Linear Unbiased Estimation Fusion with Constraints

    Get PDF
    Estimation fusion, or data fusion for estimation, is the problem of how to best utilize useful information contained in multiple data sets for the purpose of estimating an unknown quantity: a parameter or a process. Estimation fusion with constraints gives rise to challenging theoretical problems given the observations from multiple geometrically dispersed sensors: Under dimensionality constraints, how to preprocess data at each local sensor to achieve the best estimation accuracy at the fusion center? Under communication bandwidth constraints, how to quantize local sensor data to minimize the estimation error at the fusion center? Under constraints on storage, how to optimally update state estimates at the fusion center with out-of-sequence measurements (OOSM)? Under constraints on storage, how to apply the OOSM update algorithm to multi-sensor multi-target tracking in clutter? The present work is devoted to the above topics by applying best linear unbiased estimation (BLUE) fusion. We propose optimal data compression by reducing sensor data from a higher dimension to a lower dimension with minimal or no performance loss at the fusion center. For single-sensor and some particular multiple-sensor systems, we obtain the explicit optimal compression rule. For a multi-sensor system with a general dimensionality requirement, we propose a Gauss-Seidel iterative algorithm to search for the optimal compression rule. Another way to accomplish sensor data compression is to find an optimal sensor quantizer. Using BLUE fusion rules, we develop optimal sensor data quantization schemes according to the bit rate constraints in communication between each sensor and the fusion center. For a dynamic system, a method to perform the state estimation and sensor quantization update simultaneously is also established, along with a closed-form recursion for a linear system with additive white Gaussian noise. 
A globally optimal OOSM update algorithm and a constrained optimal update algorithm are derived to solve one-lag as well as multi-lag OOSM update problems. In order to extend the OOSM update algorithms to multi-sensor multi-target tracking in clutter, we also study the performance of the OOSM update associated with the Probabilistic Data Association (PDA) algorithm.
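In the simplest setting of independent unbiased scalar estimates, BLUE fusion reduces to inverse-variance weighting, sketched below. This illustrates only the optimality criterion; it omits the cross-correlations, dimensionality and quantization constraints treated in the dissertation.

```python
def blue_fuse(estimates, variances):
    """BLUE fusion of independent unbiased scalar estimates.

    Inverse-variance weighting is the best linear unbiased combination:
    it minimizes the variance of the fused estimate.
    Returns (fused_estimate, fused_variance).
    """
    info = [1.0 / v for v in variances]          # information = 1 / variance
    total = sum(info)
    fused = sum(x * w for x, w in zip(estimates, info)) / total
    return fused, 1.0 / total
```

Two equally accurate sensors are simply averaged; a noisier sensor is down-weighted in proportion to its variance, and the fused variance is never worse than the best sensor's.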

    Data Mining and Machine Learning Techniques for Cyber Security Intrusion Detection

    Get PDF
    An intrusion detection system is software that monitors a single computer or a network of computers for malicious activities that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today's intrusion detection systems are not able to deal with the dynamic and complex nature of cyber attacks on computer systems, even though effective adaptive methods, such as various machine learning techniques, can yield higher detection rates, lower false alarm rates, and reasonable computation and communication costs. The use of data mining can support frequent pattern mining, classification, clustering, and the analysis of data streams. This survey paper describes a focused literature review of machine learning and data mining methods for cyber analytics in support of intrusion detection. Based on the number of citations or the relevance of an emerging method, papers representing each method were identified, read, and summarized. Because data are so important in machine learning and data mining approaches, some well-known cyber security datasets used in machine learning and data mining are described, and some recommendations on when to use a given method are given.

    A Physical Model of Human Skin and Its Application for Search and Rescue

    Get PDF
    For this research we created a human skin reflectance model in the VIS and NIR. We then modeled sensor output for an RGB sensor based on output from the skin reflectance model. The model was also used to create a skin detection algorithm and a skin pigmentation level (skin reflectance at 685 nm) estimation algorithm. The average root mean square error across the VIS and NIR between the skin reflectance model and measured data was 2%. The skin reflectance model then allowed us to generate qualitatively accurate responses for an RGB sensor for different biological and lighting conditions. To test the accuracy of the skin detection and skin color estimation algorithms, hyperspectral images of a suburban test scene containing people with various skin colors were collected. The skin detection algorithm had a probability of detection as high as 95% with a probability of false alarm of 0.6%. The skin pigmentation level estimation algorithm had a mean absolute error when compared with data measured by a reflectometer of 2.6%, where the reflectance of the individuals at 685 nm ranged from 14% to 64%.

    Robust deep learning for computational imaging through random optics

    Full text link
    Light scattering is a pervasive phenomenon that poses outstanding challenges in both coherent and incoherent imaging systems. The output of coherent light scattered from a complex medium exhibits a seemingly random speckle pattern that scrambles the useful information of the object. To date, there is no simple solution for inverting such complex scattering. Advancing the solution of inverse scattering problems could provide important insights into applications across many areas, such as deep tissue imaging, non-line-of-sight imaging, and imaging in degraded environments. On the other hand, in incoherent systems, the randomness of the scattering medium could be exploited to build lightweight, compact, and low-cost lensless imaging systems that are applicable in miniaturized biomedical and scientific imaging. The imaging capabilities of such computational imaging systems, however, are largely limited by the ill-posed or ill-conditioned inverse problems, which typically cause imaging artifacts and degradation of the image resolution. Therefore, mitigating this issue by developing modern algorithms is essential for pushing the limits of such lensless computational imaging systems. In this thesis, I focus on the problem of imaging through random optics and present two novel deep-learning (DL) based methodologies to overcome the challenges in coherent and incoherent systems: 1) the lack of a simple solution for the inverse scattering problem and of robustness to scattering variations; and 2) the ill-posed problem for diffuser-based lensless imaging. In the first part, I demonstrate the novel use of a deep neural network (DNN) to solve the inverse scattering problem in a coherent imaging system. I propose a "one-to-all" deep learning technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. 
I show for the first time, to the best of my knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same macroscopic parameter. I then push the limit of robustness against a broader class of perturbations including scatterer change, displacements, and system defocus up to 10X depth of field. In the second part, I consider the utility of random light scattering to build a diffuser-based computational lensless imaging system and present a generally applicable novel DL framework to achieve fast and noise-robust color image reconstruction. I developed a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Experimentally, I demonstrated fundus image reconstruction over a large field of view (FOV) and robustness to refractive error using a constant point-spread function. Next, I present a physics simulator-trained, adaptive DL framework to achieve fast and noise-robust color imaging. The physics simulator incorporates optical system modeling, the simulation of mixed Poisson-Gaussian noise, and color filter array induced artifacts in color sensors. The learning framework includes an adaptive multi-channel L2-regularized inversion module and a channel-attention enhancement network module. Both simulation and experiments show consistently better reconstruction accuracy and robustness to various noise levels under different light conditions compared with traditional L2-regularized reconstructions. Overall, this thesis investigated two major classes of problems in imaging through random optics. In the first part of the thesis, my work explored a novel DL-based approach for solving the inverse scattering problem and paves the way to a scalable and robust deep learning approach to imaging through scattering media. 
In the second part of the thesis, my work developed a broadly applicable adaptive learning-based framework for ill-conditioned image reconstruction and a physics-based simulation model for computational color imaging.
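The L2-regularized inversion module can be illustrated in its simplest form: for a forward model that is diagonal per element (as convolution becomes in the Fourier domain), the Tikhonov solution has a closed form. The function below is a hypothetical scalar sketch, not the thesis's multi-channel implementation.

```python
def tikhonov_inverse(a, y, lam):
    """Closed-form L2-regularized (Tikhonov) inversion, element-wise.

    For a diagonal forward model y_i = a_i * x_i + noise, minimizing
    |a_i * x_i - y_i|^2 + lam * |x_i|^2 independently per element gives
    x_i = a_i * y_i / (a_i^2 + lam). Larger lam suppresses noise
    amplification where a_i is small, at the cost of resolution.
    """
    return [ai * yi / (ai * ai + lam) for ai, yi in zip(a, y)]
```

With lam = 0 this reduces to naive division y_i / a_i, which blows up noise wherever the forward model response a_i is weak; the regularizer trades that instability for a controlled bias.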