Multispectral Video Fusion for Non-contact Monitoring of Respiratory Rate and Apnea
Continuous monitoring of respiratory activity is desirable in many clinical
applications to detect respiratory events. Non-contact monitoring of
respiration can be achieved with near- and far-infrared spectrum cameras.
However, current technologies are not sufficiently robust to be used in
clinical applications. For example, they fail to estimate an accurate
respiratory rate (RR) during apnea. We present a novel algorithm based on
multispectral data fusion that aims at estimating RR also during apnea. The
algorithm independently addresses the RR estimation and apnea detection tasks.
Respiratory information is extracted from multiple sources and fed into an RR
estimator and an apnea detector whose results are fused into a final
respiratory activity estimation. We evaluated the system retrospectively using
data from 30 healthy adults who performed diverse controlled breathing tasks
while lying supine in a dark room and reproduced central and obstructive apneic
events. Combining multiple respiratory signals from multispectral cameras
improved the root-mean-square error (RMSE) of the RR estimation from up to
4.64 breaths/min (monospectral data) down to 1.60 breaths/min. The median F1
scores for classifying obstructive (0.75 to 0.86) and central (0.75 to 0.93) apnea also
improved. Furthermore, the independent consideration of apnea detection led to
a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may
represent a step towards the use of cameras for vital sign monitoring in
medical applications.
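The fusion of per-channel estimates described above can be sketched as a quality-weighted combination of respiratory rates from the individual spectral sources. The weighting rule, function name, and numbers below are illustrative assumptions for the general idea, not the paper's actual algorithm.

```python
import numpy as np

def fuse_rr_estimates(estimates, qualities):
    """Fuse per-channel respiratory-rate estimates (breaths/min) into one
    value using quality weights (hypothetical rule: weighted mean)."""
    est = np.asarray(estimates, dtype=float)
    w = np.asarray(qualities, dtype=float)
    return float(np.sum(w * est) / np.sum(w))

# Example: three spectral channels disagree; the noisy channel gets a low
# quality weight and contributes little to the fused estimate.
rr = fuse_rr_estimates([15.2, 14.8, 22.0], [0.9, 0.8, 0.1])
```

In practice the quality weights could come from per-channel signal-to-noise estimates, so that a channel corrupted by motion or poor illumination is automatically down-weighted.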
Mirror mirror on the wall... an unobtrusive intelligent multisensory mirror for well-being status self-assessment and visualization
A person’s well-being status is reflected by their face through a combination of facial expressions and physical signs. The SEMEOTICONS project translates the semeiotic code of the human face into measurements and computational descriptors that are automatically extracted from images, videos and 3D scans of the face. SEMEOTICONS developed a multisensory platform in the form of a smart mirror to identify signs related to cardio-metabolic risk. The aim was to enable users to self-monitor their well-being status over time and to guide them in improving their lifestyle. Significant scientific and technological challenges were addressed in building the multisensory mirror, from touchless data acquisition to real-time processing and integration of multimodal data.
Quantitative Multidimensional Stress Assessment from Facial Videos
Stress has a significant impact on an individual's physical and mental health and is a growing concern for society, especially during the COVID-19 pandemic. Facial video-based stress evaluation with non-invasive cameras has proven significantly more efficient than approaches that use questionnaires or wearable sensors. Many classification models have been built for stress detection, but most do not consider individual differences, and their results are limited by a uni-dimensional definition of stress levels that lacks a comprehensive quantitative definition of stress. The dissertation builds a framework for stress assessment that combines multilevel video-frame representations from deep learning with remote photoplethysmography signals extracted from the facial videos. The fusion model takes as input a baseline video and a target video of the subject. Physiological features such as heart rate and heart rate variability are combined with the initial stress scores generated by deep learning to predict stress scores in cognitive anxiety, somatic anxiety, and self-confidence. To generate stress scores with better accuracy, the signal extraction method is improved by introducing the CWT-SNR method, which uses the signal-to-noise ratio to assist adaptive bandpass filtering in the post-processing of the signals. A study of phase space reconstruction features shows potential for further improving the accuracy of heart rate variability detection. Multiple deep learning architectures are evaluated to select the best one, and Support Vector Regression is used to generate the output stress scores. Tested on data from the UBFC-Phys dataset, the fusion model shows a strong correlation between ground truth and predicted results.
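The core rPPG step underlying this framework, recovering a cardiac rate from a band-limited facial intensity trace, can be illustrated with a minimal sketch: locate the dominant spectral peak inside a plausible cardiac band. The band limits, frame rate, and synthetic trace below are assumptions for illustration, not the dissertation's CWT-SNR pipeline.

```python
import numpy as np

def estimate_hr_fft(signal, fs, lo=0.7, hi=3.0):
    """Estimate heart rate (bpm) from an rPPG trace by finding the
    dominant FFT peak within the cardiac band lo..hi Hz (assumed limits,
    roughly 42-180 bpm)."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                       # remove DC component
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= lo) & (freqs <= hi)         # restrict to cardiac band
    peak = freqs[band][np.argmax(power[band])]
    return 60.0 * peak

# Synthetic 20 s trace: a 1.2 Hz (72 bpm) pulse plus slow baseline drift,
# sampled at a typical camera frame rate of 30 fps.
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
hr = estimate_hr_fft(trace, fs)
```

Restricting the search to the cardiac band is what rejects the 0.1 Hz drift here; real pipelines replace the plain FFT peak with adaptive filtering and SNR-guided band selection, as the abstract describes.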
Cardiovascular assessment by imaging photoplethysmography – a review
Over the last few years, the contactless acquisition of cardiovascular parameters using cameras has gained immense attention. The technique provides an optical means to acquire cardiovascular information in a very convenient way. This review provides an overview of the technique's background and current realizations. Besides giving detailed information on the most widespread application of the technique, namely the contactless acquisition of heart rate, we outline further concepts and critically discuss the current state.