
    Multispectral Video Fusion for Non-contact Monitoring of Respiratory Rate and Apnea

    Continuous monitoring of respiratory activity is desirable in many clinical applications to detect respiratory events. Non-contact monitoring of respiration can be achieved with near- and far-infrared spectrum cameras. However, current technologies are not sufficiently robust for clinical use; for example, they fail to estimate an accurate respiratory rate (RR) during apnea. We present a novel algorithm based on multispectral data fusion that aims to estimate RR also during apnea. The algorithm addresses the RR estimation and apnea detection tasks independently. Respiratory information is extracted from multiple sources and fed into an RR estimator and an apnea detector, whose results are fused into a final estimate of respiratory activity. We evaluated the system retrospectively using data from 30 healthy adults who performed diverse controlled breathing tasks while lying supine in a dark room and who reproduced central and obstructive apneic events. Combining respiratory information from multiple spectral cameras improved the root mean square error (RMSE) of the RR estimate from up to 4.64 breaths/min (monospectral data) down to 1.60 breaths/min. The median F1 scores for classifying obstructive apnea (0.75 to 0.86) and central apnea (0.75 to 0.93) also improved. Furthermore, treating apnea detection independently led to a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may represent a step towards the use of cameras for vital-sign monitoring in medical applications.
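The abstract describes fusing per-source RR estimates with an independent apnea detector. A minimal sketch of one way such a fusion could work; the function name, quality weighting, and majority-vote rule are illustrative assumptions, not the paper's actual method:

```python
def fuse_respiratory_estimates(rr_estimates, qualities, apnea_flags):
    """Fuse per-channel respiratory-rate estimates (breaths/min).

    rr_estimates: RR estimate from each spectral source.
    qualities:    signal-quality weight in [0, 1] for each source.
    apnea_flags:  per-source boolean apnea decisions.
    Returns (fused_rr, apnea_detected).
    """
    # Majority vote across the independent apnea detectors.
    apnea = sum(apnea_flags) > len(apnea_flags) / 2
    if apnea:
        # During apnea an RR estimate is not meaningful; report 0.
        return 0.0, True
    # Quality-weighted mean of the per-source RR estimates.
    total_q = sum(qualities)
    fused = sum(r * q for r, q in zip(rr_estimates, qualities)) / total_q
    return fused, False
```

Keeping the apnea decision separate from the RR estimator, as the paper does, means a spurious RR reading during an apneic episode cannot leak into the fused output.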

    Object Detection and Recognition Using a Probabilistic Approach in Video Data Processing

    Purpose: The presented results aim to advance the theoretical foundations of computer vision and artificial intelligence for dynamical systems. The proposed approach to object detection and recognition is based on probabilistic fundamentals and is designed to ensure the required level of correct object recognition. Methods: The approach is grounded in probabilistic methods, statistical methods of probability density estimation, and computer-based simulation at the verification stage of development. Results: The proposed approach to object detection and recognition in video-stream processing showed several advantages over existing methods owing to its simple implementation and short processing time. The presented results of experimental verification look promising for object detection and recognition in video streams. Discussion: The approach can be implemented in dynamical systems operating in changing environments, such as remotely piloted aircraft systems, and can form part of the artificial intelligence in navigation and control systems.
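The abstract combines probability density estimation with a required level of correct recognition. A minimal sketch of that idea using a 1-D Gaussian kernel density estimate per class and a maximum-a-posteriori decision; the feature, class names, and bandwidth are hypothetical, since the paper does not specify them:

```python
import math

def gaussian_kde(samples, bandwidth=1.0):
    """Return a 1-D Gaussian kernel density estimator built from samples."""
    n = len(samples)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / (n * bandwidth * math.sqrt(2 * math.pi))
    return density

def classify(feature, class_samples, priors):
    """Pick the class maximizing prior * estimated likelihood (MAP rule)."""
    best, best_score = None, -1.0
    for label, samples in class_samples.items():
        score = priors[label] * gaussian_kde(samples)(feature)
        if score > best_score:
            best, best_score = label, score
    return best
```

Because both the density estimate and the decision rule are closed-form sums, per-frame classification stays cheap, consistent with the short processing time the abstract claims.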

    PhysBench: A Benchmark Framework for rPPG with a New Dataset and Baseline

    In recent years, owing to the widespread use of internet videos, remote physiological sensing has attracted increasing attention in the fields of affective computing and telemedicine. Recovering physiological signals from facial videos is a challenging task that involves a series of preprocessing steps, image algorithms, and post-processing to finally restore the waveforms. We propose a complete and efficient end-to-end training and testing framework that provides fair comparisons between different algorithms through unified preprocessing and post-processing. In addition, we introduce a highly synchronized, lossless-format dataset along with a lightweight algorithm. The dataset contains over 32 hours (3.53M frames) of video from 58 subjects; by training on our collected dataset, both our proposed algorithm and existing ones achieve improvements.

    Camera-Based Heart Rate Extraction in Noisy Environments

    Remote photoplethysmography (rPPG) is a non-invasive technique that uses video to measure vital signs such as the heart rate (HR). In rPPG estimation, noise can introduce artifacts that distort the rPPG signal and jeopardize accurate HR measurement. Since most rPPG studies have been conducted in lab-controlled environments, the issue of noise under realistic conditions remains open. This thesis examines the challenges of noise in rPPG estimation in realistic scenarios, specifically investigating the effect of noise arising from illumination variation and motion artifacts on the predicted rPPG HR. To mitigate the impact of noise, a modular rPPG measurement framework is developed, comprising data preprocessing, region-of-interest (RoI) selection, signal extraction, preparation, processing, and HR extraction. The proposed pipeline is tested on the public LGI-PPGI-Face-Video-Database dataset, which covers four candidates and real-life scenarios. In the RoI module, raw rPPG signals were extracted from the dataset using three machine-learning-based face detectors in parallel: Haarcascade, Dlib, and MediaPipe. Subsequently, the collected signals underwent preprocessing, independent component analysis, denoising, and frequency-domain conversion for peak detection. Overall, the Dlib face detector yields the most successful HR estimates for the majority of scenarios: in 50% of all scenarios and candidates, the average predicted HR for Dlib is in line with or very close to the average reference HR. The HRs extracted with the Haarcascade and MediaPipe architectures account for 31.25% and 18.75% of plausible results, respectively. The analysis highlights the importance of fixated facial landmarks for collecting quality raw data and reducing noise.
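The final stage of the pipeline above, frequency-domain conversion followed by peak detection, can be sketched as an FFT peak search restricted to the physiological heart-rate band. This is a generic illustration, not the thesis's exact implementation; the function name and band limits are assumptions:

```python
import numpy as np

def estimate_hr(rppg, fs, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) from an rPPG trace via an FFT peak search.

    rppg: 1-D signal (e.g. mean green-channel intensity of a face RoI).
    fs:   sampling rate in Hz (the video frame rate).
    The peak is searched only inside the plausible HR band [lo, hi] Hz,
    which corresponds to roughly 42-240 beats per minute.
    """
    x = np.asarray(rppg, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)   # restrict to the HR band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                     # Hz -> beats per minute
```

Restricting the search band is one simple defense against the illumination and motion noise the thesis studies, since much of that noise falls outside the plausible HR frequencies.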

    Frame registration for motion compensation in imaging photoplethysmography

    © 2018 by the authors. Licensee MDPI, Basel, Switzerland. Imaging photoplethysmography (iPPG) is an emerging technology used to assess microcirculation and cardiovascular signs by collecting backscattered light from illuminated tissue with optical imaging sensors. An engineering approach is used to evaluate whether a silicone cast of a human palm can effectively predict the results of image registration schemes for motion compensation prior to their application on live human tissue. This allows us to establish a performance baseline for each algorithm and to isolate performance and noise fluctuations due to the induced motion from the temporally changing physiological signs. A multi-stage evaluation model is developed to qualitatively assess the influence of the region of interest (ROI), system resolution and distance, reference-frame selection, and signal normalization on iPPG waveforms extracted from live tissue. We conclude that image registration can deliver up to a 75% signal-to-noise ratio (SNR) improvement (4.75 to 8.34) over an uncompensated iPPG signal by employing an intensity-based algorithm with a moving reference frame.
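One common intensity-based registration scheme of the kind evaluated above is phase correlation, which recovers the translation between a frame and a reference from the cross-power spectrum. A minimal integer-shift sketch (the abstract does not name its specific algorithm, so this stands in as an assumed example):

```python
import numpy as np

def register_translation(reference, frame):
    """Estimate the integer (row, col) shift of `frame` relative to
    `reference` by phase correlation, an intensity-based scheme."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(frame)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12         # keep only the phase term
    corr = np.fft.ifft2(cross).real        # delta peak at the shift
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = list(idx)
    # Map peak indices in the upper half of each axis to negative shifts.
    for ax, size in enumerate(corr.shape):
        if shifts[ax] > size // 2:
            shifts[ax] -= size
    return tuple(int(s) for s in shifts)
```

In a moving-reference-frame setup like the one the paper favors, each new frame would be registered against the previous compensated frame rather than a single fixed reference.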