
    Automatic RoI detection for camera-based pulse-rate measurement

    Remote photoplethysmography (rPPG) enables contactless measurement of pulse-rate by detecting pulse-induced colour changes on human skin using a regular camera. Most existing rPPG methods use automatic face detection to exploit the subject's face as the Region of Interest (RoI) for pulse-rate measurement. However, face detection is a suboptimal solution because (1) not all subregions of a face contain skin pixels from which the pulse signal can be extracted, and (2) it fails to locate the RoI when the frontal face is not visible (e.g., side-view faces). In this paper, we present a novel automatic RoI detection method for camera-based pulse-rate measurement, which consists of three main steps: subregion tracking, feature extraction, and clustering of skin regions. To evaluate the robustness of the proposed method, 36 video recordings were made of 6 subjects with different skin types performing 6 types of head motion. Experimental results show that for video sequences containing subjects with brighter skin types and modest body motions, the accuracy of the pulse-rates measured by our method (94%) is comparable to that obtained by a face detector (92%), while the average SNR is significantly improved from 5.8 dB to 8.6 dB.
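
    As an illustration of the clustering step described above, the following is a minimal sketch (not the paper's exact pipeline) that clusters candidate subregions by simple spectral features of their temporal traces and keeps the cluster with the stronger pulse component. The synthetic green-channel signals, the feature choices, and the use of K-means are assumptions made for illustration only.

```python
# Minimal sketch: cluster tracked subregions into "skin" vs "non-skin" using
# spectral features of their temporal traces, then estimate pulse-rate from
# the skin cluster. Synthetic data; feature choices are illustrative.
import numpy as np
from sklearn.cluster import KMeans

fs = 20.0                      # frame rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)   # 30 s of video
rng = np.random.default_rng(0)

# Simulate green-channel traces for 40 tracked subregions:
# half contain a 1.2 Hz (72 bpm) pulse component, half are noise only.
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)
traces = [pulse + 0.01 * rng.standard_normal(t.size) for _ in range(20)]
traces += [0.01 * rng.standard_normal(t.size) for _ in range(20)]
traces = np.array(traces)

def spectral_features(x, fs):
    """Dominant frequency and a crude spectral SNR within the pulse band."""
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = (freqs > 0.7) & (freqs < 3.0)          # ~42-180 bpm
    peak = freqs[band][np.argmax(spec[band])]
    snr = spec[band].max() / np.median(spec[band])
    return peak, snr

feats = np.array([spectral_features(x, fs) for x in traces])

# Cluster subregions into two groups and keep the one with the higher mean SNR.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
skin_label = max((0, 1), key=lambda k: feats[labels == k, 1].mean())
roi_signal = traces[labels == skin_label].mean(axis=0)

peak_hz, _ = spectral_features(roi_signal, fs)
print(f"estimated pulse-rate: {60 * peak_hz:.1f} bpm")
```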

    Estimating Carotid Pulse and Breathing Rate from Near-infrared Video of the Neck

    Objective: Non-contact physiological measurement is a growing research area that allows vital signs such as heart rate (HR) and breathing rate (BR) to be captured comfortably and unobtrusively with remote devices. However, most approaches work only in bright environments, in which subtle photoplethysmographic and ballistocardiographic signals can be easily analyzed, and/or require expensive custom hardware to perform the measurements. Approach: This work introduces a low-cost method to measure subtle motions associated with the carotid pulse and breathing movement from the neck using near-infrared (NIR) video imaging. A skin reflection model of the neck was established to provide a theoretical foundation for the method. In particular, the method relies on template matching for neck detection, Principal Component Analysis for feature extraction, and Hidden Markov Models for data smoothing. Main Results: We compared the estimated HR and BR measures with those provided by an FDA-cleared device in a 12-participant laboratory study: the estimates achieved a mean absolute error of 0.36 beats per minute and 0.24 breaths per minute under both bright and dark lighting. Significance: This work advances the possibilities of non-contact physiological measurement in real-life conditions in which environmental illumination is limited and in which the face of the person is not readily available or needs to be protected. Due to the increasing availability of NIR imaging devices, the described methods are readily scalable.
    Comment: 21 pages, 15 figures
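
    As a rough illustration of the feature-extraction step, the sketch below applies PCA to per-column intensity traces from a synthetic NIR neck region and reads the heart and breathing rates off the strongest in-band spectral peaks of the leading components. The synthetic data, band limits, and use of scikit-learn's PCA are assumptions; the template-matching neck detection and HMM smoothing stages are omitted.

```python
# Minimal sketch: PCA over spatial intensity traces to isolate pulse- and
# breathing-like components, then estimate HR/BR from their spectra.
# Synthetic NIR data only; not the paper's full pipeline.
import numpy as np
from sklearn.decomposition import PCA

fs = 30.0                          # camera frame rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

# Simulate mean-intensity traces for 16 columns of a neck region: a shared
# 1.1 Hz carotid pulse plus a 0.25 Hz breathing drift and per-column noise.
pulse = 0.5 * np.sin(2 * np.pi * 1.1 * t)
breath = 1.0 * np.sin(2 * np.pi * 0.25 * t)
X = np.stack([pulse * rng.uniform(0.5, 1.5) + breath
              + 0.3 * rng.standard_normal(t.size) for _ in range(16)], axis=1)

# PCA on the (time x columns) matrix; leading components capture the
# dominant coherent motions (breathing and pulse).
comps = PCA(n_components=3).fit_transform(X - X.mean(axis=0))

def band_peak(x, fs, lo, hi):
    """Return (peak frequency, peak power) within the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    i = np.argmax(spec[band])
    return freqs[band][i], spec[band][i]

# For each physiological band, pick the component with the strongest peak.
hr_hz = max((band_peak(comps[:, k], fs, 0.7, 3.0) for k in range(3)),
            key=lambda p: p[1])[0]
br_hz = max((band_peak(comps[:, k], fs, 0.1, 0.5) for k in range(3)),
            key=lambda p: p[1])[0]
print(f"HR ~ {60 * hr_hz:.0f} bpm, BR ~ {60 * br_hz:.0f} breaths/min")
```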

    Automated drowsiness detection for improved driving safety

    Several approaches have been proposed for the detection and prediction of drowsiness. They can be categorized as estimating fitness for duty, modeling sleep-wake rhythms, measuring vehicle-based performance, and online operator monitoring. The computer-vision-based online operator monitoring approach has become prominent because of its ability to detect drowsiness predictively. Previous studies with this approach detect driver drowsiness primarily by making prior assumptions about the relevant behavior, focusing on blink rate, eye closure, and yawning. Here we employ machine learning to data-mine actual human behavior during drowsiness episodes. Automatic classifiers for 30 facial actions from the Facial Action Coding System were developed using machine learning on a separate database of spontaneous expressions. These facial actions include blinking and yawn motions, as well as a number of other facial movements. In addition, head motion was collected through automatic eye tracking and an accelerometer. These measures were passed to learning-based classifiers such as AdaBoost and multinomial ridge regression. The system was able to predict sleep and crash episodes during a driving computer game with 96% accuracy within subjects and above 90% accuracy across subjects. This is the highest prediction rate reported to date for detecting real drowsiness. Moreover, the analysis revealed new information about human behavior during drowsy driving.
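
    For illustration only, here is a minimal sketch of the classification stage: windowed facial-action intensities are fed to an AdaBoost classifier to label drowsy versus alert episodes. The synthetic features, labels, and scikit-learn classifier are assumptions; the actual facial-action detectors and driving-game dataset are not reproduced.

```python
# Minimal sketch: classify drowsy vs. alert episodes from facial-action
# features with AdaBoost. The data are synthetic stand-ins for the outputs
# of 30 facial-action detectors; this is not the authors' dataset.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n_windows, n_actions = 600, 30

# Baseline facial-action intensities per time window.
X = rng.normal(0.0, 1.0, size=(n_windows, n_actions))
y = rng.integers(0, 2, size=n_windows)          # 1 = drowsy episode

# Make a few actions (e.g., blink- and yawn-related) more active when drowsy.
X[y == 1, :4] += rng.normal(1.0, 0.5, size=(int((y == 1).sum()), 4))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```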

    H2B: Heartbeat-based Secret Key Generation Using Piezo Vibration Sensors

    We present Heartbeats-2-Bits (H2B), a system for securely pairing wearable devices by generating a shared secret key from the skin vibrations caused by heartbeats. This work is motivated by the potential power savings arising from the fact that heartbeat intervals can be detected energy-efficiently using inexpensive, power-efficient piezo sensors, which obviates the need to employ complex heartbeat monitors such as an electrocardiogram or photoplethysmogram. Indeed, our experiments show that piezo sensors can measure heartbeat intervals at many different body locations, including the chest, wrist, waist, neck, and ankle. Unfortunately, we also find that the heartbeat-interval signal captured by piezo vibration sensors has a low Signal-to-Noise Ratio (SNR) because these sensors are not designed as precision heartbeat monitors, and this becomes the key challenge for H2B. To overcome this problem, we first apply a quantile-function-based quantization method to fully extract the useful entropy from the noisy piezo measurements. We then propose a novel Compressive Sensing-based reconciliation method to correct the high bit mismatch rates between the two independently generated keys caused by the low SNR. We prototype H2B using off-the-shelf piezo sensors and evaluate its performance on a dataset collected from different body positions of 23 participants. Our results show that H2B achieves a pairing success rate of 95.6%. We also analyze and demonstrate H2B's robustness against three types of attacks. Finally, our power measurements show that H2B is very power-efficient.
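
    To make the quantization idea concrete, below is a minimal sketch of mapping inter-beat intervals to bits via their empirical quantiles and comparing the bit strings two devices would derive from the same heartbeat. The synthetic intervals, the four-bin Gray coding, and the noise model are illustrative assumptions; the compressive-sensing reconciliation step is not shown.

```python
# Minimal sketch: quantile-based quantization of inter-beat intervals (IBIs)
# into key bits on two devices observing the same heartbeat through
# independent sensor noise. Synthetic data; reconciliation is omitted.
import numpy as np

rng = np.random.default_rng(3)
n_beats = 128

# True inter-beat intervals (seconds) with natural heart-rate variability,
# observed by two piezo sensors with independent timing noise.
ibi_true = 0.8 + 0.05 * rng.standard_normal(n_beats)
ibi_a = ibi_true + 0.005 * rng.standard_normal(n_beats)   # device A
ibi_b = ibi_true + 0.005 * rng.standard_normal(n_beats)   # device B

GRAY_2BIT = ["00", "01", "11", "10"]   # adjacent bins differ by one bit

def quantize(ibis, n_bins=4):
    """Map each IBI to a Gray-coded bin defined by its own empirical quantiles."""
    edges = np.quantile(ibis, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(ibis, edges)            # bin index 0 .. n_bins-1
    return "".join(GRAY_2BIT[b] for b in bins)

key_a = quantize(ibi_a)
key_b = quantize(ibi_b)
mismatch = sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
print(f"key length: {len(key_a)} bits, bit mismatch rate: {mismatch:.1%}")
```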

    Local Visual Microphones: Improved Sound Extraction from Silent Video

    Sound waves cause small vibrations in nearby objects. A few techniques exist in the literature that can extract sound from video. In this paper we study local vibration patterns at different image locations. We show that different locations in the image vibrate differently. We carefully aggregate local vibrations and produce a sound quality that improves on the state of the art. We show that local vibrations can exhibit a time delay because sound waves take time to travel through the air, and we use this phenomenon to estimate sound direction. We also present a novel algorithm that speeds up sound extraction by two to three orders of magnitude and reaches real-time performance on 20 kHz video.
    Comment: Accepted to BMVC 201
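
    As a hedged illustration of the time-delay idea, the sketch below estimates the lag between vibration signals recovered at two image locations via cross-correlation and converts it to a direction-of-arrival angle. The synthetic signals, patch spacing, and far-field assumption are all illustrative, not the paper's method.

```python
# Minimal sketch: estimate the arrival-time delay between vibration signals
# recovered at two image locations via cross-correlation, then infer a
# sound-direction angle under a far-field assumption. Synthetic signals only.
import numpy as np

fs = 20_000.0                 # effective sampling rate of the video (Hz)
c = 343.0                     # speed of sound in air (m/s)
d = 0.5                       # distance between the two vibrating patches (m)
rng = np.random.default_rng(4)

t = np.arange(0, 0.2, 1 / fs)
source = np.sin(2 * np.pi * 440 * t) * np.exp(-5 * t)   # decaying 440 Hz tone

true_angle = np.deg2rad(30)                  # direction of arrival
delay_s = d * np.sin(true_angle) / c         # extra travel time to patch 2
delay_n = int(round(delay_s * fs))

sig1 = source + 0.05 * rng.standard_normal(t.size)
sig2 = np.roll(source, delay_n) + 0.05 * rng.standard_normal(t.size)

# Cross-correlate and find the lag with maximum correlation.
corr = np.correlate(sig2 - sig2.mean(), sig1 - sig1.mean(), mode="full")
lag = np.argmax(corr) - (t.size - 1)

est_angle = np.arcsin(np.clip(lag / fs * c / d, -1, 1))
print(f"true angle: {np.rad2deg(true_angle):.1f} deg, "
      f"estimated: {np.rad2deg(est_angle):.1f} deg")
```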