
    Video-based patient monitoring system: application of the system in intensive care unit

    The paper presents a video-based monitoring system for assessing physiological parameters and patient state in the intensive care unit. It measures thoracic and abdominal breathing movements, remote plethysmography signals, tissue perfusion, patient activity, and changes in psycho-emotional state, thereby providing a comprehensive contactless assessment of the patient's condition. The system works under the usual illumination conditions of an intensive care unit and consists of a personal computer with specialized software and two low-cost Logitech C920 webcams with RGB sensors (8 bits per channel), a 30 Hz sampling frequency, and 640×480 pixel resolution. The webcams were placed 80 cm above the patient's body. The software automatically assesses psychophysiological parameters and determines the following: heart rate, heart rate variability, asystole and arrhythmias, breathing rate, spontaneous breathing recovery, breathing muscle tone and recovery of consciousness, motor activity, and control of ventilation parameters. The proposed system can be used as an additional non-invasive patient-monitoring tool alongside anesthesia equipment in the intensive care unit. The work was partially supported by Act 211 of the Government of the Russian Federation, contract 02.A03.21.0006.
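    The abstract above describes contactless extraction of pulse and breathing information from low-cost RGB webcams. The following is a minimal sketch of the generic approach, assuming a pre-recorded frame stack and a manually chosen region of interest; the function names, filter order, and frequency band are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of contactless pulse extraction from webcam frames.
# Assumptions (not from the paper): `frames` is a (T, H, W, 3) uint8 RGB array
# sampled at 30 Hz, and `roi` is a (row_slice, col_slice) covering skin or thorax.
import numpy as np
from scipy.signal import butter, filtfilt

def extract_waveform(frames, roi, fs=30.0, band=(0.7, 3.0)):
    """Spatially average the green channel inside the ROI per frame,
    then band-pass the resulting trace (0.7-3 Hz covers 42-180 bpm)."""
    rows, cols = roi
    trace = frames[:, rows, cols, 1].reshape(frames.shape[0], -1).mean(axis=1)
    trace = trace - trace.mean()                      # remove the DC component
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)                      # zero-phase pulse waveform

def heart_rate_bpm(waveform, fs=30.0):
    """Estimate heart rate as the dominant spectral peak of the waveform."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spectrum)]
```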

    Heart rate from face videos under realistic conditions for advanced driver monitoring

    Physiological signals play an important role in driver monitoring systems, since they convey information about the human state. This work addresses the recursive probabilistic inference problem in time-varying linear dynamic systems in order to incorporate invariance into the task of heart rate estimation from face videos under realistic conditions. The invariance encapsulates motion as well as varying illumination conditions, so that vital parameters can be estimated accurately from human faces using conventional camera technology. The solution is based on the canonical state-space representation of an Itô process and a Wiener velocity model. Empirical results show excellent real-time and estimation performance for heart rates in the presence of disturbing factors such as rigid head motion, talking, facial expressions, and natural illumination conditions, making human state estimation from face videos applicable in a much broader sense and pushing the technology towards advanced driver monitoring systems.
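    As an illustration of the state-space idea (a Wiener velocity model tracked with recursive Bayesian filtering), here is a minimal Kalman-filter sketch that smooths a sequence of noisy per-window heart-rate measurements; the scalar measurement model and the noise parameters are assumptions, not the paper's formulation.

```python
# Minimal sketch: track heart rate with a Kalman filter under a Wiener
# (constant-velocity) motion model. Noise levels q and r are illustrative.
import numpy as np

def kalman_hr_track(measurements, dt=1.0, q=0.5, r=4.0):
    """measurements: per-window HR estimates (bpm), possibly noisy."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # Wiener velocity transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],           # process noise (white accel.)
                      [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])                          # only the HR itself is observed
    R = np.array([[r]])                                  # measurement noise variance
    x = np.array([measurements[0], 0.0])                 # initial state [hr, hr_rate]
    P = np.eye(2) * 10.0
    smoothed = []
    for z in measurements:
        x = F @ x                                         # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                                     # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(x[0])
    return np.array(smoothed)
```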

    Relation between pulse pressure and the pulsation strength in camera-based photoplethysmograms

    Camera-based photoplethysmography (cbPPG) is an innovative measuring technique that enables the remote extraction of vital signs using video cameras. Most studies in the field focus on heart rate detection, while other physiological quantities are often ignored. In this work, we analyzed the relation between pulse pressure and the pulsation strength of cbPPG signals for 70 patients after surgery. Our results show a high correlation between the two measures (r = 0.54). Furthermore, the influence of technical and medical factors was tested; controlling for these factors was shown to increase the correlation by 9 to 27%.
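    A minimal sketch of the kind of analysis described above, i.e. quantifying the pulsation strength of a cbPPG signal per patient and correlating it with pulse pressure; the particular strength definition (pulsatile amplitude relative to the baseline level) and the variable names are illustrative assumptions, not the authors' definitions.

```python
# Sketch: per-patient pulsation strength of a raw (not detrended) cbPPG signal,
# correlated with pulse pressure from the reference monitor.
import numpy as np
from scipy.stats import pearsonr

def pulsation_strength(cbppg_signal):
    """Ratio of the pulsatile (AC) amplitude to the baseline (DC) level."""
    ac = np.percentile(cbppg_signal, 95) - np.percentile(cbppg_signal, 5)
    dc = np.abs(np.mean(cbppg_signal))
    return ac / dc if dc > 0 else np.nan

# pulse_pressure: systolic minus diastolic pressure per patient (mmHg)
# strengths:      pulsation_strength() of each patient's cbPPG signal
# r, p = pearsonr(pulse_pressure, strengths)
```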

    Multi-hierarchical Convolutional Network for Efficient Remote Photoplethysmograph Signal and Heart Rate Estimation from Face Video Clips

    Heart beat rhythm and heart rate (HR) are important physiological parameters of the human body. This study presents an efficient multi-hierarchical spatio-temporal convolutional network that can quickly estimate the remote photoplethysmography (rPPG) signal and HR from face video clips. First, facial color distribution characteristics are extracted using a low-level face feature generation (LFFG) module. Then, a three-dimensional (3D) spatio-temporal stack convolution module (STSC) and a multi-hierarchical feature fusion module (MHFF) are used to strengthen the spatio-temporal correlation of multi-channel features. In the MHFF, sparse optical flow is used to capture the tiny inter-frame motion of faces and to generate a self-adaptive region-of-interest (ROI) skin mask. Finally, a signal prediction module (SP) extracts the estimated rPPG signal. Experimental results on three datasets show that the proposed network outperforms state-of-the-art methods.
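    To make the spatio-temporal convolution idea concrete, the following is a generic sketch of a stacked 3D convolutional network that maps a face clip to a per-frame rPPG value; the layer sizes, pooling scheme, and output head are assumptions and do not reproduce the paper's LFFG/STSC/MHFF/SP modules.

```python
# Generic 3D spatio-temporal convolutional stack for rPPG estimation (a sketch,
# not the authors' architecture).
import torch
import torch.nn as nn

class SpatioTemporalRPPG(nn.Module):
    def __init__(self, channels=(3, 16, 32, 64)):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                       nn.BatchNorm3d(c_out),
                       nn.ReLU(inplace=True),
                       nn.AvgPool3d(kernel_size=(1, 2, 2))]   # pool space, keep time
        self.features = nn.Sequential(*layers)
        self.head = nn.Conv3d(channels[-1], 1, kernel_size=1)  # per-frame signal value

    def forward(self, clip):                       # clip: (B, 3, T, H, W)
        feat = self.features(clip)
        feat = feat.mean(dim=(3, 4), keepdim=True)  # global spatial average
        return self.head(feat).squeeze()            # (B, T) estimated rPPG signal

# Example: SpatioTemporalRPPG()(torch.randn(2, 3, 64, 64, 64)) -> shape (2, 64)
```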

    A Comparative Evaluation of Heart Rate Estimation Methods using Face Videos

    This paper presents a comparative evaluation of methods for remote heart rate estimation from face videos: given a video sequence of the face as input, the methods process it to obtain a robust estimate of the subject's heart rate at each moment. Four alternatives from the literature are tested, three based on hand-crafted approaches and one based on deep learning. The methods are compared using RGB videos from the COHFACE database. Experiments show that the learning-based method achieves much better accuracy than the hand-crafted ones. The low error rate achieved by the learning-based model makes its application possible in real scenarios, e.g. in medical or sports environments. (Accepted at the IEEE International Workshop on Medical Computing (MediComp) 2020.)
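    The comparison above relies on standard error metrics between estimated and reference heart rates; a small sketch of such metrics is given below, with COHFACE ground-truth loading omitted and variable names chosen purely for illustration.

```python
# Sketch of typical HR-estimation error metrics: MAE, RMSE and Pearson correlation
# between per-window estimated and reference heart rates (both in bpm, time-aligned).
import numpy as np
from scipy.stats import pearsonr

def hr_metrics(hr_estimated, hr_reference):
    est = np.asarray(hr_estimated, dtype=float)
    ref = np.asarray(hr_reference, dtype=float)
    err = est - ref
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "Pearson r": pearsonr(est, ref)[0],
    }
```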

    A Reproducible Study on Remote Heart Rate Measurement

    This paper studies the problem of reproducible research in remote photoplethysmography (rPPG). Most of the work published in this domain is assessed on privately owned databases, making it difficult to evaluate proposed algorithms in a standard and principled manner. We therefore present a new, publicly available database containing a relatively large number of subjects recorded under two different lighting conditions. In addition, three state-of-the-art rPPG algorithms from the literature were selected, implemented, and released as free, open-source software. After a thorough, unbiased experimental evaluation in various settings, it is shown that none of the selected algorithms is precise enough to be used in a real-world scenario.
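    For context, a minimal sketch of one widely used hand-crafted rPPG baseline, the chrominance (CHROM) method of de Haan and Jeanne, is shown below; whether it is among the three released algorithms is not asserted here, and ROI tracking and database handling are omitted.

```python
# CHROM-style pulse extraction from a per-frame mean skin colour trace (a sketch).
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_pulse(rgb_trace, fs=30.0, band=(0.7, 3.0)):
    """rgb_trace: (T, 3) mean skin RGB values per frame."""
    norm = rgb_trace / rgb_trace.mean(axis=0)                # normalise each channel
    x = 3.0 * norm[:, 0] - 2.0 * norm[:, 1]                  # chrominance signal X
    y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]     # chrominance signal Y
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    alpha = np.std(xf) / np.std(yf)                          # balance motion components
    return xf - alpha * yf                                   # estimated pulse signal
```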