44,713 research outputs found

    Compressive sensing based velocity estimation in video data

    Full text link
    This paper considers the use of compressive sensing based algorithms for velocity estimation of moving vehicles. The procedure is based on sparse reconstruction algorithms combined with time-frequency analysis applied to video data. The algorithm provides an accurate estimate of an object's velocity even when only a very small number of video frames is available. The influence of the crucial parameters is analysed for different types of moving vehicles. Comment: 4 pages, 5 figures
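    As a rough illustration of the sparse-reconstruction step described above, the sketch below recovers the dominant frequency of a synthetic pixel-intensity trace from a reduced set of randomly selected frames using orthogonal matching pursuit. The signal model, frame counts, and the frequency-to-velocity mapping are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: sparse recovery of a pixel trace's dominant frequency
# from few video frames via orthogonal matching pursuit (OMP).
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: find a k-sparse x with A @ x ~= y."""
    residual, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

N, M = 256, 40                        # full frame count, observed frames
rng = np.random.default_rng(0)
f_true = 12                           # oscillation bin of the pixel trace
t = np.arange(N)
trace = np.cos(2 * np.pi * f_true * t / N)        # synthetic intensity trace
keep = np.sort(rng.choice(N, M, replace=False))   # only M frames available
F = np.exp(2j * np.pi * np.outer(t, t) / N) / np.sqrt(N)  # DFT synthesis basis
x_hat = omp(F[keep], trace[keep], 4)  # real cosine -> two conjugate bins
f_hat = np.argsort(np.abs(x_hat))[-2:].min()
print("recovered frequency bin:", f_hat)  # 12; maps to velocity via frame rate
```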

    Object Tracking from Audio and Video data using Linear Prediction method

    Get PDF
    Microphone arrays and video surveillance cameras are widely used for detecting and tracking a moving speaker. In this project, object tracking was performed using multimodal fusion, i.e., audio-visual perception. Source localisation can be done with GCC-PHAT or GCC-ML for time delay estimation. These methods are based on the spectral content of the speech signals, which can be affected by noise and reverberation. Video tracking can be done using a Kalman filter or a particle filter. A linear prediction method is therefore used here for both audio and video tracking. Linear prediction in source localisation uses features related to the excitation source information of speech, which are less affected by noise. Using this excitation source information, time delays are estimated and the results are compared with the GCC-PHAT method. The dataset obtained from [20], a single moving object captured by a stationary camera, is used for video tracking. For object detection, projection histograms are computed, followed by linear prediction for tracking; the corresponding results are compared with the Kalman filter method.
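    For reference, the GCC-PHAT baseline mentioned above admits a compact implementation. The sketch below is a generic version with an illustrative sampling rate and synthetic delay; it is not the project's code.

```python
# Minimal GCC-PHAT sketch: estimate the time delay between two microphone
# signals by whitening the cross-spectrum before the inverse FFT.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    n = sig.size + ref.size                        # zero-pad to avoid wrap-around
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)  # PHAT weighting
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs  # delay in seconds

fs = 16000
clean = np.random.default_rng(1).standard_normal(fs)
delayed = np.roll(clean, 40)                       # simulate a 40-sample delay
print(gcc_phat(delayed, clean, fs))                # ~ 40 / 16000 = 2.5 ms
```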

    Seeing Tree Structure from Vibration

    Full text link
    Humans recognize object structure from both their appearance and motion; often, motion helps to resolve ambiguities in object structure that arise when we observe object appearance only. There are particular scenarios, however, where neither appearance nor spatial-temporal motion signals are informative: occluding twigs may look connected and have almost identical movements, though they belong to different, possibly disconnected branches. We propose to tackle this problem through spectrum analysis of motion signals, because vibrations of disconnected branches, though visually similar, often have distinctive natural frequencies. We propose a novel formulation of tree structure based on a physics-based link model, and validate its effectiveness by theoretical analysis, numerical simulation, and empirical experiments. With this formulation, we use nonparametric Bayesian inference to reconstruct tree structure from both spectral vibration signals and appearance cues. Our model performs well in recognizing hierarchical tree structure from real-world videos of trees and vessels. Comment: ECCV 2018. The first two authors contributed equally to this work. Project page: http://tree.csail.mit.edu
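    The spectral cue can be pictured as follows: track the displacement of points over time, take each track's magnitude spectrum, and group tracks whose dominant vibration frequency matches. The sketch below is an illustration of this idea only; the frequencies and grouping threshold are assumptions, not the authors' model.

```python
# Minimal sketch: separate tracked points into branches by their dominant
# vibration frequency, estimated from the FFT of each displacement track.
import numpy as np

fps, T = 30.0, 300                       # frame rate and number of frames
t = np.arange(T) / fps
rng = np.random.default_rng(2)
branch_a, branch_b = 1.8, 2.6            # assumed natural frequencies (Hz)
tracks = np.stack([np.sin(2 * np.pi * f * t + rng.uniform(0, np.pi))
                   + 0.1 * rng.standard_normal(T)
                   for f in (branch_a, branch_a, branch_b)])

freqs = np.fft.rfftfreq(T, 1 / fps)
spec = np.abs(np.fft.rfft(tracks, axis=1))
peaks = freqs[np.argmax(spec[:, 1:], axis=1) + 1]  # skip the DC bin
same_branch = np.abs(peaks[:, None] - peaks[None, :]) < 0.2  # Hz tolerance
print(peaks)         # ~[1.8, 1.8, 2.6]: the first two tracks share a branch
print(same_branch)
```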

    DyMo: Dynamic Monitoring of Large Scale LTE-Multicast Systems

    Full text link
    LTE evolved Multimedia Broadcast/Multicast Service (eMBMS) is an attractive solution for video delivery to very large groups in crowded venues. However, deployment and management of eMBMS systems is challenging due to the lack of real-time feedback from the User Equipment (UEs). We therefore present the Dynamic Monitoring (DyMo) system for low-overhead feedback collection. DyMo leverages eMBMS to broadcast Stochastic Group Instructions to all UEs. These instructions specify reporting rates as a function of the observed Quality of Service (QoS). This simple feedback mechanism collects a very limited number of QoS reports from the UEs. The reports are used for network optimization, thereby ensuring high QoS to the UEs. We present the design aspects of DyMo and evaluate its performance analytically and via extensive simulations. Specifically, we show that DyMo infers the optimal eMBMS settings with extremely low overhead while meeting strict QoS requirements under different UE mobility patterns and in the presence of network component failures. For instance, DyMo can detect the eMBMS Signal-to-Noise Ratio (SNR) experienced by the bottom 0.1% of UEs with a Root Mean Square Error (RMSE) of 0.05%, using only 5 to 10 reports per second regardless of the number of UEs.
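    The UE-side reporting rule implied by the abstract can be sketched as follows. This is a guess at the mechanism for illustration; the QoS buckets, thresholds, and probabilities are invented, and this is not the DyMo implementation.

```python
# Minimal sketch: a broadcast instruction maps observed QoS to a reporting
# probability, so the expected report count stays bounded regardless of
# the UE population size.
import random

def should_report(observed_snr_db, instruction):
    """instruction: list of (snr_threshold_db, report_probability),
    sorted ascending; the UE uses the first bucket its SNR falls under."""
    for threshold, prob in instruction:
        if observed_snr_db <= threshold:
            return random.random() < prob
    return False                        # good QoS: stay silent

# Hypothetical broadcast: poor-SNR UEs report often, mid-SNR UEs rarely.
instruction = [(3.0, 0.5), (10.0, 0.01)]
reports = sum(should_report(snr, instruction)
              for snr in [1.2, 8.0, 15.0] * 1000)  # 3000 simulated UEs
print(reports)   # ~ 1000*0.5 + 1000*0.01, i.e. about 510 expected reports
```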

    DistancePPG: Robust non-contact vital signs monitoring using a camera

    Full text link
    Vital signs such as pulse rate and breathing rate are currently measured using contact probes. However, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, it remains challenging for people with darker skin tones, under low lighting conditions, and/or when the individual moves in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG combines skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in each region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of the camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising synchronized video recordings of the face and pulse-oximeter ground truth recordings from the earlobe, for people with different skin tones, under different lighting conditions, and for various motion scenarios. Comment: 24 pages, 11 figures
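    The weighted-average idea can be illustrated as below: combine per-region skin-color traces so that regions with stronger pulsatile content contribute more to the final PPG estimate. The SNR proxy, signal model, and weighting here are illustrative assumptions, not the distancePPG estimator.

```python
# Minimal sketch: SNR-weighted combination of per-region color traces
# into a single camera-based PPG signal, then pulse rate via the FFT peak.
import numpy as np

fps, T, n_regions = 30.0, 600, 5
t = np.arange(T) / fps
rng = np.random.default_rng(3)
pulse = np.sin(2 * np.pi * 1.2 * t)              # 72 bpm ground-truth pulse
strength = rng.uniform(0.05, 0.5, n_regions)     # perfusion x lighting, assumed
regions = strength[:, None] * pulse + rng.standard_normal((n_regions, T))

freqs = np.fft.rfftfreq(T, 1 / fps)
band = (freqs > 0.8) & (freqs < 4.0)             # plausible pulse band
spec = np.abs(np.fft.rfft(regions, axis=1)) ** 2
snr = spec[:, band].max(axis=1) / spec[:, band].mean(axis=1)  # crude SNR proxy
weights = snr / snr.sum()
ppg = weights @ regions                          # SNR-weighted camera PPG
peak = np.argmax(np.abs(np.fft.rfft(ppg))[band])
print(round(60 * freqs[band][peak], 1))          # ~72.0 bpm
```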