
    Cardiovascular assessment by imaging photoplethysmography – a review

    Over the last few years, the contactless acquisition of cardiovascular parameters using cameras has gained immense attention. The technique provides an optical means to acquire cardiovascular information in a very convenient way. This review provides an overview of the technique's background and current realizations. Besides giving detailed information on its most widespread application, the contactless acquisition of heart rate, we outline further concepts and critically discuss the current state.
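
    The most widespread application named above, contactless heart rate acquisition, reduces to a short generic pipeline: spatially average a skin region's green channel over time, band-pass the trace to the cardiac band, and read the rate off the dominant spectral peak. A minimal sketch of that generic pipeline, not any specific method from the review; the frame rate and band limits are typical choices, not prescribed values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr(frames, fps=30.0):
    """Estimate heart rate (bpm) from a stack of RGB face frames (T, H, W, 3)."""
    # Spatially average the green channel, where blood-volume changes
    # modulate skin reflectance most strongly.
    trace = frames[:, :, :, 1].reshape(len(frames), -1).mean(axis=1)
    # Band-pass to a plausible cardiac band (0.7-4 Hz, i.e. 42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    pulse = filtfilt(b, a, trace - trace.mean())
    # The dominant spectral peak gives the heart rate.
    spectrum = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0
```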

    Video pulse rate variability analysis in stationary and motion conditions

    Background: In the last few years, some studies have measured heart rate (HR) or heart rate variability (HRV) parameters using a video camera. This technique focuses on measuring the small changes in skin colour caused by blood perfusion. To date, most of these works have obtained HRV parameters in stationary conditions, and practically no studies obtain these parameters in motion scenarios or conduct an in-depth statistical analysis. Methods: In this study, a video pulse rate variability (PRV) analysis is conducted by measuring the pulse-to-pulse (PP) intervals in stationary and motion conditions. First, given the importance of the sampling rate in PRV analysis and the low frame rates of commercial cameras, we analyzed two camera models to evaluate their measurement performance. We propose a selective tracking method using the Viola–Jones and KLT algorithms, with the aim of carrying out a robust video PRV analysis in stationary and motion conditions. Data and results of the proposed method are contrasted with those reported in the state of the art. Results: Of the two cameras, the webcam performed better. In stationary conditions, high correlation values were obtained in PRV parameters, with results above 0.9. The PP time series achieved an RMSE (mean ± standard deviation) of 19.45 ± 5.52 ms (1.70 ± 0.75 bpm). In the motion analysis, most of the PRV parameters also achieved good correlation results, though with lower values than in stationary conditions. The PP time series presented an RMSE of 21.56 ± 6.41 ms (1.79 ± 0.63 bpm). Conclusions: The statistical analysis showed good agreement between the reference system and the proposed method. In stationary conditions, our method improved the PRV parameter results in comparison with data reported in related works. An overall comparative analysis of PRV parameters in motion conditions was more limited, owing to the scarcity of studies and the insufficient data analysis in those available. Based on the results, the proposed method could provide a low-cost, contactless and reliable alternative for measuring HR or PRV parameters in non-clinical environments.
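
    For illustration, a rough sketch of a tracking front end in the spirit of the Viola–Jones plus KLT combination described above, using OpenCV. The input file name, cascade parameters, and the assumption that a face is detected in the first frame are illustrative, not the authors' exact configuration:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("subject.mp4")          # hypothetical input video
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Viola-Jones face detection, run once on the first frame.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
x, y, w, h = cascade.detectMultiScale(prev, 1.3, 5)[0]  # assumes a face is found

# Select trackable features inside the face box for KLT.
mask = np.zeros_like(prev)
mask[y:y + h, x:x + w] = 255
pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                              minDistance=7, mask=mask)

means = []                                     # raw pulse signal (green mean)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    pts = pts[status.flatten() == 1]           # keep successfully tracked points
    prev = gray
    # Re-center the ROI on the tracked points and average its green channel.
    cx, cy = pts.reshape(-1, 2).mean(axis=0).astype(int)
    roi = frame[max(cy - h // 2, 0):cy + h // 2, max(cx - w // 2, 0):cx + w // 2]
    means.append(roi[:, :, 1].mean())
```

    Pulse-to-pulse intervals would then follow from peak detection on the band-passed `means` series.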

    Multi-hierarchical Convolutional Network for Efficient Remote Photoplethysmograph Signal and Heart Rate Estimation from Face Video Clips

    Heartbeat rhythm and heart rate (HR) are important physiological parameters of the human body. This study presents an efficient multi-hierarchical spatio-temporal convolutional network that can quickly estimate the remote photoplethysmography (rPPG) signal and HR from face video clips. First, facial color distribution characteristics are extracted using a low-level face feature generation (LFFG) module. Then, the three-dimensional (3D) spatio-temporal stack convolution module (STSC) and the multi-hierarchical feature fusion module (MHFF) strengthen the spatio-temporal correlation of multi-channel features. In the MHFF, sparse optical flow captures the tiny motion of faces between frames and generates a self-adaptive region-of-interest (ROI) skin mask. Finally, the signal prediction module (SP) extracts the estimated rPPG signal. Experimental results on three datasets show that the proposed network outperforms state-of-the-art methods.
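
    As a hedged illustration of the spatio-temporal stack convolution idea, a toy 3D-convolutional module in PyTorch that maps a face clip to a one-dimensional signal; this is not the paper's architecture, and all layer sizes are invented:

```python
import torch
import torch.nn as nn

class TinySTNet(nn.Module):
    """Toy spatio-temporal network: face clip in, rPPG waveform out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Conv3d kernels span (time, height, width) jointly, so each
            # filter sees both skin color and its variation across frames.
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            # Collapse space, keep the temporal axis for the signal.
            nn.AdaptiveAvgPool3d((None, 1, 1)),
        )
        self.head = nn.Conv1d(32, 1, kernel_size=1)

    def forward(self, clip):            # clip: (B, 3, T, H, W)
        f = self.features(clip)         # (B, 32, T, 1, 1)
        f = f.squeeze(-1).squeeze(-1)   # (B, 32, T)
        return self.head(f).squeeze(1)  # (B, T) estimated rPPG signal

signal = TinySTNet()(torch.randn(2, 3, 60, 64, 64))  # two 60-frame clips
```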

    Heart rate estimation using rPPG methods in challenging imaging conditions

    The cardiovascular system plays a crucial role in maintaining the body's equilibrium by regulating blood flow and oxygen supply to organs and tissues. While contact-based techniques such as electrocardiography and photoplethysmography are common in healthcare and clinical monitoring, their skin-contact requirement makes them impractical for everyday use, so non-contact alternatives such as remote photoplethysmography (rPPG) have gained significant attention in recent years. However, extracting accurate heart rate information from rPPG signals under challenging imaging conditions, such as image degradation and occlusion, remains difficult. This thesis therefore investigates the effectiveness of rPPG methods under these conditions. It evaluates both traditional rPPG approaches and pre-trained deep learning models in the presence of real-world image transformations, such as occlusion of the face by sunglasses or face masks, as well as image degradation caused by noise artifacts and motion blur. The study also explores image restoration techniques to enhance the performance of the selected rPPG methods and experiments with fine-tuning of the best-performing pre-trained model. The research was conducted on three databases, namely UBFC-rPPG, UCLA-rPPG, and UBFC-Phys, and includes comprehensive experiments. The results offer valuable insight into the efficacy of rPPG in practical scenarios and its potential as a non-contact alternative to traditional cardiovascular monitoring techniques.
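
    The degradation conditions studied here are straightforward to reproduce for one's own robustness checks. A small sketch with OpenCV and NumPy; the kernel size and noise level are arbitrary placeholders:

```python
import cv2
import numpy as np

def degrade(frame, blur_ksize=9, noise_sigma=10.0):
    """Apply motion blur and Gaussian noise to one uint8 video frame."""
    # Horizontal motion-blur kernel: averaging along one row.
    kernel = np.zeros((blur_ksize, blur_ksize), dtype=np.float32)
    kernel[blur_ksize // 2, :] = 1.0 / blur_ksize
    blurred = cv2.filter2D(frame, -1, kernel)
    # Additive Gaussian noise, clipped back to the valid pixel range.
    noise = np.random.normal(0.0, noise_sigma, frame.shape)
    return np.clip(blurred.astype(np.float64) + noise, 0, 255).astype(np.uint8)
```

    Occlusion can be simulated in the same spirit by blanking the rectangle covering the eyes or the lower face before passing frames to an rPPG method.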

    A wavelet-based decomposition method for a robust extraction of pulse rate from video recordings

    Background: Remote photoplethysmography (rPPG) is a promising optical method for non-contact assessment of pulse rate (PR) from video recordings. To implement the method in real-time applications, rPPG algorithms must be capable of eliminating as many distortions from the pulse signal as possible. Methods: To increase the degrees of freedom of the distortion elimination, the dimensionality of the RGB video signals is increased by wavelet decomposition using the generalized Morse wavelet. The proposed Continuous-Wavelet-Transform-based Sub-Band rPPG method (SB-CWT) is evaluated on 101 publicly available RGB facial video recordings and corresponding reference blood volume pulse (BVP) signals from the MMSE-HR database. Its performance is compared with that of the state-of-the-art Sub-band rPPG method (SB). Results: The median signal-to-noise ratio (SNR) ranges from 6.63 to 10.39 dB for the proposed SB-CWT and from 4.23 to 6.24 dB for SB. The agreement between PRs estimated from the rPPG pulse signals and the reference signals, in terms of coefficients of determination, ranges from 0.81 to 0.91 for SB-CWT and from 0.41 to 0.47 for SB. All correlation coefficients are statistically significant (p < 0.001). The Bland–Altman plots show mean differences ranging from 5.37 to 1.82 BPM for SB-CWT and from 22.18 to 18.80 BPM for SB. Discussion: The results show that the proposed SB-CWT outperforms SB in terms of both SNR and the agreement between PRs estimated from the RGB video signals and PRs from the reference BVP signals.
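
    To make the sub-band idea concrete: decompose the pulse trace with a continuous wavelet transform and pick the sub-band carrying the most energy. PyWavelets ships a complex Morlet rather than the generalized Morse wavelet used in the paper, so the sketch below is a stand-in, not the SB-CWT method itself:

```python
import numpy as np
import pywt

def subband_pulse_rate(pulse, fps=25.0):
    """Pulse rate (bpm) from the strongest wavelet sub-band of a 1-D rPPG trace."""
    # Target frequencies covering the cardiac band (0.7-4 Hz).
    freqs = np.linspace(0.7, 4.0, 64)
    wavelet = "cmor1.5-1.0"          # complex Morlet as a Morse stand-in
    # Convert target frequencies to CWT scales for this wavelet.
    scales = pywt.central_frequency(wavelet) * fps / freqs
    coeffs, _ = pywt.cwt(pulse, scales, wavelet, sampling_period=1.0 / fps)
    # The sub-band with the most time-averaged energy marks the pulse rate.
    energy = np.abs(coeffs).mean(axis=1)
    return freqs[np.argmax(energy)] * 60.0
```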

    Facial Video-based Remote Physiological Measurement via Self-supervised Learning

    Facial video-based remote physiological measurement aims to estimate remote photoplethysmography (rPPG) signals from human face videos and then measure multiple vital signs (e.g. heart rate, respiration frequency) from the rPPG signals. Recent approaches achieve this by training deep neural networks, which normally require abundant facial videos and synchronously recorded photoplethysmography (PPG) signals for supervision. However, collecting such annotated corpora is not easy in practice. In this paper, we introduce a novel frequency-inspired self-supervised framework that learns to estimate rPPG signals from facial videos without the need for ground-truth PPG signals. Given a video sample, we first augment it into multiple positive/negative samples that contain signal frequencies similar/dissimilar to the original. Specifically, positive samples are generated using spatial augmentation. Negative samples are generated via a learnable frequency augmentation module, which performs a non-linear signal frequency transformation on the input without excessively changing its visual appearance. Next, we introduce a local rPPG expert aggregation module to estimate rPPG signals from the augmented samples. It encodes complementary pulsation information from different face regions and aggregates them into one rPPG prediction. Finally, we propose a series of frequency-inspired losses, i.e. a frequency contrastive loss, a frequency ratio consistency loss, and a cross-video frequency agreement loss, to optimize the rPPG signals estimated from multiple augmented video samples and across temporally neighboring video samples. We conduct rPPG-based heart rate, heart rate variability and respiration frequency estimation on four standard benchmarks. The experimental results demonstrate that our method improves the state of the art by a large margin.
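
    The frequency-contrastive ingredient can be sketched compactly: compare power spectra of rPPG estimates and make the positive pair more similar than the negative one. This toy InfoNCE-style loss in PyTorch is a hedged approximation, not the paper's three losses:

```python
import torch
import torch.nn.functional as F

def power_spectrum(sig):
    """Normalized power spectrum of a batch of rPPG signals (B, T)."""
    psd = torch.fft.rfft(sig, dim=-1).abs() ** 2
    return psd / psd.sum(dim=-1, keepdim=True)

def frequency_contrastive_loss(anchor, positive, negative, tau=0.1):
    """Pull the positive pair's spectra together, push the negative's apart."""
    pa, pp, pn = map(power_spectrum, (anchor, positive, negative))
    sim_pos = F.cosine_similarity(pa, pp, dim=-1) / tau
    sim_neg = F.cosine_similarity(pa, pn, dim=-1) / tau
    logits = torch.stack([sim_pos, sim_neg], dim=-1)
    # The positive pair (class index 0) should win the softmax.
    return F.cross_entropy(logits, torch.zeros(len(anchor), dtype=torch.long))
```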

    rPPG-Toolbox: Deep Remote PPG Toolbox

    Camera-based physiological measurement is a fast-growing field of computer vision. Remote photoplethysmography (rPPG) utilizes imaging devices (e.g., cameras) to measure the peripheral blood volume pulse (BVP) via photoplethysmography, enabling cardiac measurement via webcams and smartphones. However, the task is non-trivial, with important pre-processing, modeling, and post-processing steps required to obtain state-of-the-art results. Replication of results and benchmarking of new models are critical for scientific progress; however, as with many other applications of deep learning, reliable codebases are not easy to find or use. We present a comprehensive toolbox, rPPG-Toolbox, that contains unsupervised and supervised rPPG models with support for public benchmark datasets, data augmentation, and systematic evaluation: https://github.com/ubicomplab/rPPG-Toolbox
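
    Unsupervised methods of the kind such toolboxes bundle are short enough to write out. Below is a minimal sketch of the well-known plane-orthogonal-to-skin (POS) algorithm (Wang et al., 2017) on spatially averaged RGB traces; it follows the published algorithm, not the toolbox's own implementation:

```python
import numpy as np

def pos_pulse(rgb, fps=30.0):
    """POS pulse extraction from mean RGB traces of shape (T, 3)."""
    T = len(rgb)
    win = int(1.6 * fps)                  # ~1.6 s sliding window
    proj = np.array([[0.0, 1.0, -1.0],    # plane orthogonal to the skin tone
                     [-2.0, 1.0, 1.0]])
    pulse = np.zeros(T)
    for t in range(T - win + 1):
        block = rgb[t:t + win]
        cn = block / block.mean(axis=0) - 1.0   # temporal normalization
        s = cn @ proj.T                         # two projected signals
        # Alpha-tune the combination so distortions cancel.
        h = s[:, 0] + (s[:, 0].std() / s[:, 1].std()) * s[:, 1]
        pulse[t:t + win] += h - h.mean()        # overlap-add
    return pulse
```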

    rPPG-MAE: Self-supervised Pre-training with Masked Autoencoders for Remote Physiological Measurement

    Remote photoplethysmography (rPPG) is an important technique for perceiving human vital signs and has received extensive attention. For a long time, researchers focused on supervised methods that rely on large amounts of labeled data; these methods are limited by the data requirement and the difficulty of acquiring ground-truth physiological signals. To address these issues, several self-supervised methods based on contrastive learning have been proposed. However, they focus on contrastive learning between samples, which neglects the inherent self-similar prior in physiological signals and copes poorly with noise. In this paper, a linear self-supervised reconstruction task is designed to extract the inherent self-similar prior in physiological signals. In addition, a noise-insensitive strategy is explored to reduce the interference of motion and illumination. The proposed framework, named rPPG-MAE, demonstrates excellent performance even on the challenging VIPL-HR dataset. We also evaluate the method on two public datasets, PURE and UBFC-rPPG. The results show that our method not only outperforms existing self-supervised methods but also exceeds state-of-the-art (SOTA) supervised methods. One important observation is that dataset quality seems more important than size in self-supervised pre-training of rPPG. The source code is released at https://github.com/linuxsino/rPPG-MAE
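
    The masked-autoencoding idea, reduced to its core: hide random patches of the signal representation and train a network to reconstruct only the hidden parts. A hedged toy version on 1-D traces; the patch size, masking ratio, and tiny model are arbitrary, and the paper operates on richer spatio-temporal representations:

```python
import torch
import torch.nn as nn

def masked_reconstruction_loss(model, signal, patch=16, mask_ratio=0.75):
    """MSE on masked patches only, as in masked-autoencoder pre-training.

    signal: (B, T) physiological traces with T divisible by `patch`.
    model: any module mapping the masked signal (B, T) back to (B, T).
    """
    B, T = signal.shape
    n = T // patch
    # Randomly choose which patches to hide for each sample.
    hide = torch.rand(B, n) < mask_ratio          # (B, n) boolean patch mask
    mask = hide.repeat_interleave(patch, dim=1)   # expand to (B, T)
    masked = signal.masked_fill(mask, 0.0)
    recon = model(masked)
    # The loss is computed on the hidden positions only.
    return ((recon - signal)[mask] ** 2).mean()

model = nn.Sequential(nn.Linear(160, 256), nn.ReLU(), nn.Linear(256, 160))
loss = masked_reconstruction_loss(model, torch.randn(4, 160), patch=16)
```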