402 research outputs found

    Temporal data fusion in multisensor systems using dynamic time warping

    Data acquired from multiple sensors can be fused at a variety of levels: the raw data level, the feature level, or the decision level. An additional dimension to the fusion process is temporal fusion: the fusion of data or information acquired from multiple sensors of different types over a period of time. We propose a technique that can perform such temporal fusion. The core of the system is the fusion processor, which uses Dynamic Time Warping (DTW) to perform temporal fusion. We evaluate the performance of the fusion system on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures and 2) NOKIA's benchmark dataset for context recognition. The results of the first experiment show that the system can perform temporal fusion on both raw data and features derived from the raw data. The system can also recognize the same class of multisensor temporal sequences even when they have different lengths, e.g. the same human gesture performed at different speeds. In addition, the fusion processor can infer decisions from the temporal sequences quickly and accurately. The results of the second experiment show that the system can perform fusion on temporal sequences that have large dimensions and mix discrete and continuous variables. The proposed fusion system achieved good classification rates efficiently in both experiments.
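    DTW's tolerance to sequences of different lengths comes from its dynamic-programming alignment, which stretches or compresses the time axis. As a rough, hedged illustration of that idea (a textbook DTW distance, not the paper's fusion processor):

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic DTW distance between two multivariate sequences
        of shape (n, d) and (m, d)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                # Best of: insertion, deletion, or match step
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # The same "gesture" at two speeds vs. a different signal
    fast = np.sin(np.linspace(0, 2 * np.pi, 20))[:, None]
    slow = np.sin(np.linspace(0, 2 * np.pi, 40))[:, None]
    other = np.cos(np.linspace(0, 2 * np.pi, 40))[:, None]
    ```

    Because the warping path absorbs speed differences, `fast` and `slow` stay close under DTW even though their lengths differ by a factor of two.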

    Online context recognition in multisensor systems using Dynamic Time Warping

    In this paper, we present our system for online context recognition of multimodal sequences acquired from multiple sensors. The system uses Dynamic Time Warping (DTW) to recognize multimodal sequences of different lengths, embedded in continuous data streams. We evaluate the performance of our system on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures and 2) NOKIA's benchmark dataset for context recognition. The results from both datasets demonstrate that the system can perform online context recognition efficiently and achieve high recognition accuracy.
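    Recognizing sequences "embedded in continuous data streams" is typically done with a subsequence (open-begin, open-end) variant of DTW, in which a template may start and end anywhere in the stream. A minimal sketch of that variant, under the assumption of scalar samples (the abstract does not give the system's actual formulation):

    ```python
    import numpy as np

    def subsequence_dtw(template, stream):
        """Subsequence DTW: locate the best-matching occurrence of
        `template` anywhere inside `stream` (both 1-D arrays)."""
        n, m = len(template), len(stream)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, :] = 0.0  # open begin: a match may start at any stream index
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(template[i - 1] - stream[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        end = int(np.argmin(D[n, 1:])) + 1  # open end: best ending position
        return D[n, end], end

    # A template embedded in an otherwise flat stream
    template = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
    stream = np.concatenate([np.zeros(10), template, np.zeros(10)])
    cost, end = subsequence_dtw(template, stream)
    ```

    The relaxed first row and the argmin over the last row are what let the matcher ignore where in the stream the pattern occurs, which is the key difference from whole-sequence DTW.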

    An incremental learning framework to enhance teaching by demonstration based on multimodal sensor fusion

    Though a robot can reproduce a demonstration trajectory from a human demonstrator by teleoperation, there is a certain error between the reproduced trajectory and the desired trajectory. To minimize this error, we propose a multimodal incremental learning framework based on a teleoperation strategy that enables the robot to reproduce the demonstrated task accurately. The multimodal demonstration data are collected from two different kinds of sensors in the demonstration phase. The Kalman filter (KF) and dynamic time warping (DTW) algorithms are then used to preprocess the multiple sensor signals: the KF algorithm fuses sensor data of different modalities, and the DTW algorithm aligns the data on the same timeline. The preprocessed demonstration data are then learned by the incremental learning network and sent to a Baxter robot to reproduce the task demonstrated by the human. Comparative experiments have been performed to verify the effectiveness of the proposed framework.
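    The KF stage of this preprocessing pipeline combines redundant readings of the same quantity into one estimate. A toy, hedged sketch, assuming a scalar random-walk state and two sensors with known noise variances (the paper's actual state model and tuning are not specified in the abstract):

    ```python
    import numpy as np

    def kalman_fuse(z1, z2, r1, r2, q=1e-3):
        """Fuse two noisy measurement streams of the same scalar signal
        with a 1-D Kalman filter (random-walk process model, variance q).
        r1, r2 are the measurement noise variances of the two sensors."""
        x, p = z1[0], 1.0
        fused = []
        for a, b in zip(z1, z2):
            p += q  # predict: state drifts as a random walk
            for z, r in ((a, r1), (b, r2)):  # sequential update, one sensor at a time
                k = p / (p + r)          # Kalman gain
                x += k * (z - x)         # correct the estimate
                p *= (1.0 - k)           # shrink the error variance
            fused.append(x)
        return np.array(fused)

    # Two sensors observing the same constant signal with independent noise
    rng = np.random.default_rng(0)
    truth = np.ones(200)
    z1 = truth + 0.1 * rng.standard_normal(200)
    z2 = truth + 0.1 * rng.standard_normal(200)
    fused = kalman_fuse(z1, z2, r1=0.01, r2=0.01)
    ```

    Updating sequentially per sensor is equivalent to a joint update with a diagonal measurement covariance, which is why this simple loop suffices for fusing modalities with independent noise.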

    Multimodal human hand motion sensing and analysis - a review


    GPU Accelerated Color Correction and Frame Warping for Real-time Video Stitching

    Traditional image stitching focuses on a single panorama frame without considering spatial-temporal consistency across video frames. Applying a straightforward image stitching approach to the video stitching task causes temporal flickering and color inconsistency, and inaccurate camera parameters cause artifacts in the image warping. In this paper, we propose a real-time system that stitches multiple video sequences into a panoramic video, based on GPU-accelerated color correction and frame warping without accurate camera parameters. We extend the traditional 2D-Matrix (2D-M) color correction approach and present a spatio-temporal 3D-Matrix (3D-M) color correction method for the overlapping local regions, with online color balancing using a piecewise function on global frames. Furthermore, we use pairwise homography matrices given by coarse camera calibration for global warping, followed by accurate local warping based on optical flow. Experimental results show that our system can generate high-quality panorama videos in real time.
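    The color-correction idea the abstract builds on can be illustrated, in a much simplified form, by a per-channel gain that matches the statistics of the shared overlap strip between two frames. This helper is hypothetical and far simpler than the paper's 3D-M method; it only shows the underlying principle:

    ```python
    import numpy as np

    def overlap_gain_correction(left, right, overlap_w):
        """Toy global color correction: scale `right` (H, W, 3, values in
        [0, 1]) so its mean color in the shared overlap strip matches
        `left`'s. The strip is the last/first `overlap_w` columns."""
        a = left[:, -overlap_w:, :].reshape(-1, 3).mean(axis=0)
        b = right[:, :overlap_w, :].reshape(-1, 3).mean(axis=0)
        gain = a / np.maximum(b, 1e-6)  # per-channel gain, guarded against /0
        return np.clip(right * gain, 0.0, 1.0)

    # A darker right frame is pulled toward the left frame's exposure
    left = np.full((4, 6, 3), 0.5)
    right = np.full((4, 6, 3), 0.4)
    corrected = overlap_gain_correction(left, right, overlap_w=2)
    ```

    A spatio-temporal method like the paper's 3D-M would additionally smooth such gains across frames to avoid the temporal flickering the abstract mentions; a per-frame gain alone can still flicker.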