257 research outputs found

    Cardiovascular assessment by imaging photoplethysmography – a review

    Get PDF
    Over the last few years, the contactless acquisition of cardiovascular parameters using cameras has gained immense attention. The technique provides an optical means to acquire cardiovascular information in a very convenient way. This review provides an overview of the technique's background and current realizations. Besides giving detailed information on the most widespread application of the technique, namely the contactless acquisition of heart rate, we outline further concepts and critically discuss the current state.
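
    The core signal behind camera-based heart rate acquisition is the subtle, periodic colour change of facial skin driven by the blood volume pulse. As a minimal sketch of that idea (not a method proposed in the review; function and parameter names are illustrative), the green channel of a face region can be averaged per frame, band-pass filtered to a plausible pulse range, and read out via its dominant spectral peak:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(frames, fps):
    """Estimate heart rate (bpm) from a stack of face-ROI frames.

    frames: array of shape (T, H, W, 3), RGB, already cropped to facial skin.
    fps:    camera frame rate in Hz.
    """
    # 1. Spatially average the green channel, which carries the strongest pulsatile signal.
    signal = frames[:, :, :, 1].reshape(frames.shape[0], -1).mean(axis=1)

    # 2. Remove slow trends and keep a plausible pulse band (0.7-4 Hz, i.e. 42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal - signal.mean())

    # 3. Take the dominant spectral peak as the pulse frequency.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0  # beats per minute
```

    Practical systems covered by such reviews add face detection and tracking, skin segmentation, and more robust signal decomposition on top of this basic pipeline.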

    Remote heart rate monitoring - Assessment of the Facereader rPPg by Noldus

    Get PDF
    Remote photoplethysmography (rPPG) allows contactless monitoring of human cardiac activity through a video camera. In this study, we assessed the accuracy and precision for heart rate measurements of the only consumer product available on the market, namely the FaceReader™ rPPG by Noldus, with respect to a gold standard electrocardiograph. Twenty-four healthy participants were asked to sit in front of a computer screen and alternate two periods of rest with two stress tests (i.e. Go/No-Go task), while their heart rate was simultaneously acquired for 20 minutes using the ECG criterion measure and the FaceReader™ rPPG. Results show that the FaceReader™ rPPG tends to overestimate lower heart rates and underestimate higher heart rates compared to the ECG. The FaceReader™ rPPG revealed a mean bias of 9.8 bpm, and the 95% limits of agreement (LoA) ranged from almost -30 up to +50 bpm. These results suggest that whilst the FaceReader™ rPPG technology has potential for contactless heart rate monitoring, its predictions are inaccurate for higher heart rates, with unacceptable precision across the entire range, rendering its estimates unreliable for monitoring individuals.
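
    The mean bias and 95% limits of agreement quoted above follow the standard Bland-Altman procedure on paired heart rate estimates. A minimal reproduction of that calculation (variable names are illustrative, not taken from the study):

```python
import numpy as np

def bland_altman(hr_rppg, hr_ecg):
    """Mean bias and 95% limits of agreement between two paired HR series (bpm)."""
    diff = np.asarray(hr_rppg, float) - np.asarray(hr_ecg, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```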

    Video pulse rate variability analysis in stationary and motion conditions

    Get PDF
    Background: In the last few years, some studies have measured heart rate (HR) or heart rate variability (HRV) parameters using a video camera. This technique focuses on the measurement of the small changes in skin colour caused by blood perfusion. To date, most of these works have obtained HRV parameters in stationary conditions, and there are practically no studies that obtain these parameters in motion scenarios or that conduct an in-depth statistical analysis. Methods: In this study, a video pulse rate variability (PRV) analysis is conducted by measuring the pulse-to-pulse (PP) intervals in stationary and motion conditions. Firstly, given the importance of the sampling rate in a PRV analysis and the low frame rate of commercial cameras, we carried out an analysis of two camera models to evaluate their performance in the measurements. We propose a selective tracking method using the Viola–Jones and KLT algorithms, with the aim of carrying out a robust video PRV analysis in stationary and motion conditions. Data and results of the proposed method are contrasted with those reported in the state of the art. Results: The webcam achieved better results in the performance analysis of video cameras. In stationary conditions, high correlation values were obtained in PRV parameters, with results above 0.9. The PP time series achieved an RMSE (mean ± standard deviation) of 19.45 ± 5.52 ms (1.70 ± 0.75 bpm). In the motion analysis, most of the PRV parameters also achieved good correlation results, but with lower values than in stationary conditions. The PP time series presented an RMSE of 21.56 ± 6.41 ms (1.79 ± 0.63 bpm). Conclusions: The statistical analysis showed good agreement between the reference system and the proposed method. In stationary conditions, the results of PRV parameters were improved by our method in comparison with data reported in related works. An overall comparative analysis of PRV parameters in motion conditions was more limited due to the lack of studies or studies containing insufficient data analysis. Based on the results, the proposed method could provide a low-cost, contactless and reliable alternative for measuring HR or PRV parameters in non-clinical environments.
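
    Once the PP interval series has been extracted from video, standard time-domain PRV parameters follow directly from the intervals. A minimal sketch of that computation (an illustration of the conventional definitions, not the authors' implementation):

```python
import numpy as np

def prv_time_domain(pp_intervals_ms):
    """Basic time-domain PRV metrics from a series of PP intervals in milliseconds."""
    pp = np.asarray(pp_intervals_ms, float)
    diffs = np.diff(pp)
    return {
        "mean_pp_ms": pp.mean(),
        "mean_hr_bpm": 60000.0 / pp.mean(),
        "sdnn_ms": pp.std(ddof=1),                  # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),   # short-term variability
        "pnn50_pct": 100.0 * np.mean(np.abs(diffs) > 50.0),
    }
```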

    Automatic Infant Respiration Estimation from Video: A Deep Flow-based Algorithm and a Novel Public Benchmark

    Full text link
    Respiration is a critical vital sign for infants, and continuous respiratory monitoring is particularly important for newborns. However, neonates are sensitive, and contact-based sensors present challenges in comfort, hygiene, and skin health, especially for preterm babies. As a step toward fully automatic, continuous, and contactless respiratory monitoring, we develop a deep-learning method for estimating respiratory rate and waveform from plain video footage in natural settings. Our automated infant respiration flow-based network (AIRFlowNet) combines video-extracted optical flow input and spatiotemporal convolutional processing tuned to the infant domain. We support our model with the first public annotated infant respiration dataset, comprising 125 videos (AIR-125) drawn from eight infant subjects with varied pose, lighting, and camera conditions. We include manual respiration annotations and optimize AIRFlowNet training on them using a novel spectral bandpass loss function. When trained and tested on the AIR-125 infant data, our method significantly outperforms other state-of-the-art methods in respiratory rate estimation, achieving a mean absolute error of ~2.9 breaths per minute, compared to ~4.7–6.2 for other public models designed for adult subjects and more uniform environments.
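
    The spectral bandpass loss is described as steering training toward physiologically plausible respiration frequencies. One plausible formulation, penalising predicted-waveform spectral power outside an assumed infant breathing band (a hedged sketch, not the published AIRFlowNet code; the band limits are assumptions):

```python
import torch

def spectral_bandpass_loss(pred_waveform, fps, low_hz=0.4, high_hz=1.5):
    """Fraction of spectral power outside an assumed infant breathing band.

    pred_waveform: tensor of shape (batch, T), the predicted respiration signal.
    fps:           video frame rate in Hz.
    """
    # Power spectrum of the zero-mean prediction.
    centered = pred_waveform - pred_waveform.mean(dim=-1, keepdim=True)
    power = torch.fft.rfft(centered, dim=-1).abs() ** 2
    freqs = torch.fft.rfftfreq(pred_waveform.shape[-1], d=1.0 / fps,
                               device=pred_waveform.device)

    # Penalise energy falling outside the assumed respiration band.
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    out_of_band = power[:, ~in_band].sum(dim=-1)
    total = power.sum(dim=-1) + 1e-8
    return (out_of_band / total).mean()
```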

    Recognising Complex Mental States from Naturalistic Human-Computer Interactions

    Get PDF
    New advances in computer vision techniques will revolutionize the way we interact with computers, as they, together with other improvements, will help us build machines that understand us better. The face is the main non-verbal channel for human-human communication and contains valuable information about emotion, mood, and mental state. Affective computing researchers have widely investigated how facial expressions can be used for automatically recognizing affect and mental states. Nowadays, physiological signals can be measured by video-based techniques, which can also be utilised for emotion detection. Physiological signals are an important indicator of internal feelings and are more robust against social masking. This thesis focuses on computer vision techniques to detect facial expressions and physiological changes for recognizing non-basic and natural emotions during human-computer interaction. It covers all stages of the research process, from data acquisition to integration and application. Most previous studies focused on acquiring data from prototypic basic emotions acted out under laboratory conditions. To evaluate the proposed method under more practical conditions, two different scenarios were used for data collection. In the first scenario, a set of controlled stimuli was used to trigger the user’s emotion. The second scenario aimed at capturing more naturalistic emotions that might occur during a writing activity; here, the engagement level of the participants, alongside other affective states, was the target of the system. For the first time, this thesis explores how video-based physiological measures can be used in affect detection. Video-based measurement of physiological signals is a new technique that needs further improvement before it can be used in practical applications. A machine learning approach is proposed and evaluated to improve the accuracy of heart rate (HR) measurement using an ordinary camera during naturalistic interaction with a computer.

    A Deep Learning Classifier for Detecting Atrial Fibrillation in Hospital Settings Applicable to Various Sensing Modalities

    Get PDF
    Cardiac signals provide a variety of information related to a patient's health. One of the most important uses is for medical experts to diagnose the functionality of a patient's heart. This information helps medical experts monitor heart diseases such as atrial fibrillation and heart failure. Atrial fibrillation (AF) is one of the major diseases threatening patients' health. Medical experts measure cardiac signals using the electrocardiogram (ECG or EKG), the photoplethysmogram (PPG), and more recently the videoplethysmogram (VPG). They can then use these measurements to analyze heart functionality and detect heart diseases. In this study, these three major cardiac signals were used with different classification methodologies, such as Basic Thresholding Classifiers (BTC), machine learning (SVM) classifiers, and deep learning classifiers based on Convolutional Neural Networks (CNN), to detect AF. To support the work, cardiac signals were acquired from forty-six AF subjects scheduled for cardioversion who were enrolled in a clinical study approved by the Internal Review Committees to protect human subjects at the University of Rochester Medical Center (URMC, Rochester, NY) and the Rochester Institute of Technology (RIT, Rochester, NY). The study included synchronized measurements of 5 minutes and 30 seconds of ECG, PPG, VPG at 180 Hz (high-quality camera), and VPG at 30 Hz (low-quality webcam), taken before and after cardioversion of AF subjects receiving treatment at the AF Clinic of URMC. These data are subjected to BTC, SVM, and CNN classifiers to detect AF, and the results are compared for each classifier and signal type. We propose a deep learning approach that is applicable to different kinds of cardiac signals to detect AF in a similar manner. By building this technique for different sensors, we aim to provide a framework that can be used for most devices, such as phones, tablets, PCs, ECG devices, and wearable PPG sensors. This convergence across sensing platforms provides a single AF detection classifier that can support a complete monitoring cycle, screening the patient whether at a hospital or at home. In this way, the risk of heart attack, stroke, or other cardiac complications can be reduced, since closer monitoring of AF patients helps detect the disease at an early stage and track its progress. We show that the proposed approach provides around 99% accuracy for each type of classifier on the test dataset, thereby helping to generalize AF detection by simplifying implementation using a sensor-agnostic deep learning model.
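
    As a sensor-agnostic illustration of the CNN route described above, a small 1D convolutional classifier operating on fixed-length signal windows might look like the following (architecture, window length, and sampling rate are assumptions for illustration, not the study's model):

```python
import torch
import torch.nn as nn

class AFClassifier1D(nn.Module):
    """Toy 1D CNN for binary AF vs. non-AF classification on fixed-length windows."""
    def __init__(self, window_samples=1800):  # e.g. 60 s at an assumed 30 Hz
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 2)  # AF / non-AF logits

    def forward(self, x):  # x: (batch, 1, window_samples)
        return self.classifier(self.features(x).squeeze(-1))

# Example: classify a batch of eight 60-second windows (placeholder random data).
model = AFClassifier1D()
logits = model(torch.randn(8, 1, 1800))
```

    The same window-based classifier can in principle be fed ECG, PPG, or VPG-derived waveforms after resampling to a common rate, which is the sense in which a single model can serve multiple sensing platforms.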
