
    Guided-wave-based damage detection of timber poles using a hierarchical data fusion algorithm

    This paper presents a hierarchical data fusion algorithm based on the combination of the wavelet transform (WT), back-propagation neural networks (BPNN) and Dempster-Shafer (D-S) evidence theory for multi-sensor guided-wave-based (GW-based) damage detection of in-situ timber utility poles. In the data-level fusion, noise elimination is performed on the original wave data to obtain single-mode signals using the WT. Statistical information is extracted from the single-mode signals as the major characteristic parameters. In the feature-level fusion, for each sensor in the testing system, two sub-networks corresponding to different types of GW signals are constructed based on BPNN, and the characteristic parameters are fed into these networks for initial state recognition. In the decision-level fusion, the D-S evidence theory is adopted to combine the initial results from different sensors for final decision making. The overall algorithm employs a hierarchical configuration in which the output of each level serves as input to the next. To validate the proposed method, it was tested on GW signals from in-situ timber poles. The damage detection results demonstrate the effectiveness and accuracy of the proposed algorithm.
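    The decision-level step above relies on Dempster's rule of combination. Below is a minimal sketch of that rule for two sensors over a two-hypothesis frame of discernment; the function name and the example mass values are illustrative, not taken from the paper.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts mapping frozenset hypotheses
        to belief mass) using Dempster's rule of combination."""
        fused, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
        # Redistribute the non-conflicting mass
        return {h: m / (1.0 - conflict) for h, m in fused.items()}

    # Hypothetical per-sensor beliefs over {damaged, intact}
    D, I = frozenset({"damaged"}), frozenset({"intact"})
    U = D | I  # full frame: undecided mass
    sensor1 = {D: 0.6, I: 0.1, U: 0.3}
    sensor2 = {D: 0.7, I: 0.2, U: 0.1}
    print(dempster_combine(sensor1, sensor2))
    ```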

    Learned Semantic Multi-Sensor Depth Map Fusion

    Volumetric depth map fusion based on truncated signed distance functions has become a standard method and is used in many 3D reconstruction pipelines. In this paper, we generalize this classic method in multiple ways: 1) Semantics: Semantic information enriches the scene representation and is incorporated into the fusion process. 2) Multi-Sensor: Depth information can originate from different sensors or algorithms with very different noise and outlier statistics, which are considered during data fusion. 3) Scene denoising and completion: Sensors can fail to recover depth for certain materials and light conditions, or data can be missing due to occlusions. Our method denoises the geometry, closes holes and computes a watertight surface for every semantic class. 4) Learning: We propose a neural network reconstruction method that unifies all these properties within a single powerful framework. Our method learns sensor or algorithm properties jointly with semantic depth fusion and scene completion, and can also be used as an expert system, e.g. to unify the strengths of various photometric stereo algorithms. Our approach is the first to unify all these properties. Experimental evaluations on both synthetic and real data sets demonstrate clear improvements. Comment: 11 pages, 7 figures, 2 tables; accepted for the 2nd Workshop on 3D Reconstruction in the Wild (3DRW2019) in conjunction with ICCV 2019.
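    For context, the classic volumetric method this paper generalizes maintains, per voxel, a running weighted average of truncated signed distances (Curless-Levoy style fusion). The sketch below shows that baseline update for a single voxel under assumed truncation and weighting constants; the paper replaces the fixed per-observation weight with learned, per-sensor behavior.

    ```python
    def tsdf_update(tsdf, weight, depth_obs, voxel_depth, trunc=0.05, w_obs=1.0):
        """Classic per-voxel TSDF fusion: running weighted average of
        truncated signed distances. All constants here are assumptions."""
        sdf = depth_obs - voxel_depth      # signed distance along the camera ray
        if sdf < -trunc:
            return tsdf, weight            # voxel far behind the surface: skip
        d = min(1.0, sdf / trunc)          # truncate and normalize to [-1, 1]
        new_weight = weight + w_obs
        new_tsdf = (tsdf * weight + d * w_obs) / new_weight
        return new_tsdf, new_weight

    # Integrate one hypothetical observation into an empty voxel
    v, w = tsdf_update(tsdf=0.0, weight=0.0, depth_obs=1.02, voxel_depth=1.00)
    ```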

    Bearing Fault Diagnosis using Multi-sensor Fusion based on weighted D-S Evidence Theory

    This paper presents a novel method for bearing fault diagnosis using a multi-sensor fusion approach based on an improved weighted Dempster-Shafer (D-S) evidence theory combined with a genetic algorithm (GA). Vibration measurements are collected from an industrial multi-stage centrifugal air compressor using three wireless acceleration sensors. Fine-to-Coarse Multiscale Permutation Entropy (F2CMPE) is applied to extract the complexity changes of the vibration data sets. The feature vectors produced by F2CMPE across multiple scales are then fed into a back-propagation neural network (BPNN) for fault classification, and the normalized probability outputs of the BPNN serve as inputs to the proposed weighted D-S evidence theory for multi-sensor information fusion. Measurements collected from real industrial equipment are analyzed using the proposed diagnosis method, and the experimental validation demonstrates its effectiveness in identifying rolling bearing conditions, with higher accuracy than analysis of individual sensor signals.
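    Permutation entropy is the building block of the F2CMPE feature used above: it summarizes signal complexity via the distribution of ordinal patterns. A minimal single-scale sketch follows; the paper's fine-to-coarse multiscale refinement and the GA-derived evidence weights are omitted.

    ```python
    import numpy as np
    from collections import Counter
    from math import factorial, log

    def permutation_entropy(x, order=3, delay=1):
        """Normalized permutation entropy of a 1-D signal: Shannon entropy
        of the distribution of ordinal patterns of length `order`."""
        n = len(x) - (order - 1) * delay
        patterns = Counter(
            tuple(np.argsort(x[i : i + order * delay : delay])) for i in range(n)
        )
        probs = np.array(list(patterns.values()), dtype=float) / n
        return -np.sum(probs * np.log(probs)) / log(factorial(order))

    # Noisier signals yield entropy closer to 1, regular ones closer to 0
    rng = np.random.default_rng(0)
    print(permutation_entropy(np.sin(0.1 * np.arange(1000))))   # low
    print(permutation_entropy(rng.standard_normal(1000)))       # high
    ```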

    Sensor Fusion for Object Detection and Tracking in Autonomous Vehicles

    Autonomous driving vehicles depend on their perception system to understand the environment and identify all static and dynamic obstacles surrounding the vehicle. The perception system in an autonomous vehicle uses the sensory data obtained from different sensor modalities to understand the environment and perform a variety of tasks, such as object detection and object tracking. Combining the outputs of different sensors to obtain a more reliable and robust outcome is called sensor fusion. This dissertation studies the problem of sensor fusion for object detection and object tracking in autonomous driving vehicles and explores different approaches for utilizing deep neural networks to accurately and efficiently fuse sensory data from different sensing modalities. In particular, it focuses on fusing radar and camera data for 2D and 3D object detection and object tracking tasks. First, the effectiveness of radar and camera fusion for 2D object detection is investigated by introducing a radar region proposal algorithm for generating object proposals in a two-stage object detection network. The evaluation results show significant improvements in speed and accuracy compared to a vision-based proposal generation method. Next, radar and camera fusion is used for the task of joint object detection and depth estimation, where the radar data is used in conjunction with image features not only to generate object proposals but also to provide accurate depth estimates for the detected objects in the scene. A fusion algorithm is also proposed for 3D object detection, where the depth and velocity data obtained from the radar are fused with the camera images to detect objects in 3D and accurately estimate their velocities without requiring any temporal information. Finally, radar and camera sensor fusion is used for 3D multi-object tracking by introducing an end-to-end trainable, online network capable of tracking objects in real time.
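    As an illustration of the radar region proposal idea described above, the sketch below projects 3D radar detections into the image with a pinhole camera model and emits one distance-scaled anchor box per detection. The intrinsic matrix K, the base anchor size and the scaling rule are assumptions for illustration, not the dissertation's exact design.

    ```python
    import numpy as np

    def radar_proposals(radar_pts, K, base_size=200.0):
        """Project 3-D radar detections (camera coordinates, Nx3) into the
        image with intrinsics K and emit one distance-scaled box per point."""
        boxes = []
        for x, y, z in radar_pts:
            if z <= 0:                        # behind the camera plane
                continue
            u = K[0, 0] * x / z + K[0, 2]     # pinhole projection
            v = K[1, 1] * y / z + K[1, 2]
            s = base_size / z                 # nearer objects get larger anchors
            boxes.append((u - s / 2, v - s / 2, u + s / 2, v + s / 2))
        return np.array(boxes)                # (x1, y1, x2, y2) proposals

    K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
    print(radar_proposals(np.array([[2.0, 0.5, 20.0], [-1.0, 0.2, 8.0]]), K))
    ```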

    Time-Domain Data Fusion Using Weighted Evidence and Dempster–Shafer Combination Rule: Application in Object Classification

    To apply data fusion in the time domain based on the Dempster–Shafer (DS) combination rule, an 8-step algorithm with a novel entropy function is proposed. The 8-step algorithm achieves the sequential combination of time-domain data. Simulation results show that this method successfully captures the changes (dynamic behavior) in time-domain object classification, and it also shows better anti-disturbance ability and transition properties compared to other methods available in the literature. As an example, a convolutional neural network (CNN) is trained to classify three different types of weeds. Precision and recall from the confusion matrix of the CNN are used to update the basic probability assignment (BPA), which captures the classification uncertainty. Real data of classified weeds from a single sensor are used to test the time-domain data fusion. The proposed method successfully filters noise (reducing sudden changes and yielding smoother curves) and fuses conflicting information from the video feed. The performance of the algorithm can be tuned between robustness and fast response using a single parameter: the number of time steps (ts).
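    The sequential combination described above amounts to folding Dempster's rule over a window of per-frame basic probability assignments, with the window length ts trading robustness against response speed. A minimal sketch under that reading follows; the paper's entropy-based weighting of evidence is omitted, and all names and mass values are illustrative.

    ```python
    from functools import reduce
    from itertools import product

    def combine(m1, m2):
        """Dempster's rule for two mass functions over frozenset hypotheses."""
        fused, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            if a & b:
                fused[a & b] = fused.get(a & b, 0.0) + ma * mb
            else:
                conflict += ma * mb
        return {h: m / (1.0 - conflict) for h, m in fused.items()}

    def fuse_window(bpa_stream, ts=5):
        """Fold the last `ts` per-frame BPAs into one belief. Larger ts is
        more robust to noise; smaller ts reacts faster to real transitions."""
        return reduce(combine, bpa_stream[-ts:])

    # Hypothetical CNN-derived BPAs over three weed classes at three frames
    A, B, C = (frozenset({c}) for c in "abc")
    frames = [{A: 0.5, B: 0.3, C: 0.2},
              {A: 0.6, B: 0.2, C: 0.2},
              {A: 0.2, B: 0.5, C: 0.3}]
    print(fuse_window(frames, ts=3))
    ```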