
    Remote Assessment of the Cardiovascular Function Using Camera-Based Photoplethysmography

    Camera-based photoplethysmography (cbPPG) is a novel measurement technique that allows continuous monitoring of vital signs using common video cameras. In the last decade, the technology has attracted considerable attention because it is easy to set up, operates remotely, and offers new diagnostic opportunities. Despite the growing interest, cbPPG is not yet fully established and remains primarily an object of research. There are several reasons for this: reliable and autonomous hardware setups are missing, robust processing algorithms are needed, application fields are still limited, and it is not completely understood which physiological factors shape the captured signal. This thesis addresses these issues. A new measuring system for cbPPG was developed, and in the course of three large studies conducted in clinical and non-clinical environments, the system's flexibility, autonomy, user-friendliness, and integrability were successfully demonstrated. Furthermore, the value that optical polarization filtering adds to cbPPG was investigated. The results show that a perpendicular filter setting can significantly enhance signal quality. The analyses were also used to draw conclusions about the origin of cbPPG signals: blood volume changes are most likely the defining element of the signal's modulation. Besides these hardware-related topics, the software side was addressed. A new method for the selection of regions of interest (ROIs) in cbPPG videos was developed. Choosing valid ROIs is one of the most important steps in the cbPPG processing chain. The new method has the advantage of being fully automated, more independent, and universally applicable. Moreover, it suppresses ballistocardiographic artifacts by utilizing a level-set-based approach. The suitability of the ROI selection method was demonstrated on a large and challenging data set.
In the last part of the work, a potentially new application field for cbPPG was explored: how cbPPG can be used to assess autonomic reactions of the nervous system at the cutaneous vasculature. The results show that changes in vasomotor tone, i.e. vasodilation and vasoconstriction, are reflected in the pulsation strength of cbPPG signals. These characteristics also shed more light on the origin problem: like the polarization analyses, they support the classic blood volume theory. In conclusion, this thesis tackles relevant issues regarding the application of cbPPG, and the proposed solutions pave the way for cbPPG to become an established and widely accepted technology.
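The core of a cbPPG processing chain described above (spatial averaging over a valid ROI, then spectral analysis of the resulting trace) can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation; the function name, the cardiac band limits, and the use of a plain FFT peak are assumptions.

```python
import numpy as np

def pulse_rate_from_roi(frames, fps, lo=0.7, hi=3.0):
    """Estimate pulse rate (bpm) from a stack of ROI crops.

    frames: array of shape (n_frames, h, w) holding the green-channel
    ROI pixels of each video frame (green carries the strongest cbPPG
    modulation). Hypothetical minimal pipeline: spatial averaging ->
    DC removal -> spectral peak in a plausible cardiac band.
    """
    raw = frames.reshape(len(frames), -1).mean(axis=1)  # spatial mean per frame
    sig = raw - raw.mean()                              # remove DC component
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)                # cardiac band only
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic check: a 1.2 Hz (72 bpm) intensity oscillation over an 8x8 ROI
fps, n = 30.0, 300
t = np.arange(n) / fps
frames = 100 + 2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((n, 8, 8))
print(round(pulse_rate_from_roi(frames, fps)))  # -> 72
```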

    Opto-physiological modeling applied to photoplethysmographic cardiovascular assessment

    This paper presents opto-physiological (OP) modeling and its application in cardiovascular assessment techniques based on photoplethysmography (PPG). Existing contact point-measurement techniques, i.e., pulse oximetry probes, are compared with the next generation of non-contact and imaging implementations, i.e., non-contact reflection and camera-based PPG. The further development of effective physiological monitoring techniques relies on novel approaches to OP modeling that can better inform the design and development of sensing hardware and applicable signal processing procedures. With the help of finite-element optical simulation, fundamental research into OP modeling of photoplethysmography is being exploited towards the development of engineering solutions for practical biomedical systems. This paper reviews a body of research comprising two OP models that have led to significant progress in the design of transmission-mode pulse oximetry probes, and approaches to 3D blood perfusion mapping for the interpretation of cardiovascular performance.
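As a reference point for such OP models: the classical Beer-Lambert law relates transmitted light intensity to the absorber along the optical path, and the pulsatile (AC) component of PPG arises from the cardiac-cycle variation of the path through arterial blood. A sketch in LaTeX notation (symbols are the textbook ones, not necessarily those of the reviewed models):

```latex
% Classical Beer-Lambert law: I_0 incident intensity,
% \epsilon(\lambda) molar absorptivity, c concentration, d path length
I(\lambda) = I_0(\lambda)\, e^{-\epsilon(\lambda)\, c\, d}

% PPG's pulsatile component: a small path-length change \Delta d over
% the cardiac cycle modulates the detected intensity
\frac{I_{\min}(\lambda)}{I_{\max}(\lambda)} = e^{-\epsilon(\lambda)\, c\, \Delta d}
```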

    Imaging photoplethysmography: towards effective physiological measurements

    Since its conception decades ago, photoplethysmography (PPG), the non-invasive opto-electronic technique that measures arterial pulsations in vivo, has proven its worth by achieving and maintaining its rank as a compulsory standard of patient monitoring. However successful, the conventional contact monitoring mode is not suitable in certain clinical and biomedical situations, e.g., in the case of skin damage, or when unconstrained movement is required. With the advance of computer and photonics technologies, there has been a resurgence of interest in PPG, and one potential route to overcome the abovementioned issues has been increasingly explored: imaging photoplethysmography (iPPG). The emerging field of iPPG offers nascent opportunities for effective and comprehensive interpretation of physiological phenomena, indicating a promising alternative to conventional PPG. Heart rate, respiration rate, perfusion mapping, and pulse rate variability have all been assessed using iPPG. To effectively and remotely access physiological information through this emerging technique, a number of key issues are still to be addressed. The engineering issues of iPPG, particularly the influence of motion artefacts on signal quality, are addressed in this thesis, where an engineering model based on the revised Beer-Lambert law was developed and used to describe opto-physiological phenomena relevant to iPPG. An iPPG setup consisting of both hardware and software elements was developed to investigate its reliability and reproducibility in the context of effective remote physiological assessment. Specifically, a first study was conducted to acquire vital physiological signs under various exercise conditions, i.e. resting, light and heavy cardiovascular exercise, in ten healthy subjects.
The physiological parameters derived from the images captured by the iPPG system exhibited functional characteristics comparable to conventional contact PPG: the maximum heart rate difference was <3 bpm, and a significant (p < 0.05) correlation between both measurements was also revealed. Using a method for attenuation of motion artefacts, heart rate and respiration rate information was successfully assessed from different anatomical locations even in high-intensity physical exercise. This study thereby opens a new avenue for non-contact sensing of vital signs and remote physiological assessment, with clear and promising applications in clinical triage and sports training. A second study was conducted to remotely assess pulse rate variability (PRV), which has been considered a valuable indicator of autonomic nervous system (ANS) status. The PRV information was obtained using the iPPG setup to appraise the ANS in ten normal subjects. The performance of the iPPG system in accessing PRV was evaluated via comparison with the readings from a contact PPG sensor. Strong correlation and good agreement between the two techniques verify the effectiveness of iPPG in the remote monitoring of PRV, thereby promoting iPPG as a potential alternative for the interpretation of physiological dynamics related to the ANS. The outcomes of this thesis point toward a robust non-contact technique for cardiovascular monitoring and evaluation.
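The PRV assessment described above rests on detecting beat-to-beat (peak-to-peak) intervals in the pulse trace. A minimal sketch, assuming a simple local-maximum peak detector and SDNN as the time-domain variability measure; the study's actual detector and metrics are not specified here.

```python
import numpy as np

def pulse_rate_variability(ppg, fps):
    """Compute a basic PRV statistic (SDNN, in ms) from a PPG trace.

    Hypothetical sketch: detect systolic peaks as local maxima above
    the mean, convert peak-to-peak intervals to milliseconds, and take
    their standard deviation (SDNN), a common time-domain PRV measure.
    """
    above = ppg > ppg.mean()
    peaks = [i for i in range(1, len(ppg) - 1)
             if above[i] and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]]
    intervals = np.diff(peaks) / fps * 1000.0  # peak-to-peak intervals in ms
    return float(np.std(intervals))

# Synthetic trace sampled at 100 Hz: beats alternating 0.9 s / 1.1 s apart
fps = 100.0
ppg = np.zeros(500)
ppg[[10, 100, 210, 300, 410]] = 1.0  # systolic peaks at known times
print(pulse_rate_variability(ppg, fps))  # -> 100.0 (ms)
```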

    Remote Photoplethysmography in Infrared - Towards Contactless Sleep Monitoring


    Face liveness detection by rPPG features and contextual patch-based CNN

    Face anti-spoofing plays a vital role in security systems, including face payment and face recognition systems. Previous studies showed that live faces and presentation attacks differ significantly in both remote photoplethysmography (rPPG) and texture information. We propose a generalized method exploiting both rPPG and texture features for the face anti-spoofing task. First, we design multi-scale long-term statistical spectral (MS-LTSS) features with variant granularities for the representation of rPPG information. Second, a contextual patch-based convolutional neural network (CP-CNN) is used to extract global-local and multi-level deep texture features simultaneously. Finally, a weighted-summation strategy is employed for decision-level fusion of the two types of features, which allows the proposed system to generalize to detecting not only print and replay attacks but also mask attacks. Comprehensive experiments were conducted on five databases, namely 3DMAD, HKBU-Mars V1, MSU-MFSD, CASIA-FASD, and OULU-NPU, showing the superior results of the proposed method compared with state-of-the-art methods.
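The decision-level fusion step can be illustrated with a weighted score sum. The weights, threshold, and function name below are illustrative placeholders, not the published values:

```python
def fuse_scores(rppg_score, texture_score, w_rppg=0.4, w_texture=0.6,
                threshold=0.5):
    """Weighted-sum decision-level fusion of two liveness scores.

    Hedged sketch of the fusion idea: each input is a spoof-vs-live
    score in [0, 1] (higher = more likely live); the fused score is
    compared against a threshold to decide live vs. spoof. The weights
    and threshold are assumptions for illustration only.
    """
    fused = w_rppg * rppg_score + w_texture * texture_score
    return fused, ("live" if fused >= threshold else "spoof")

print(fuse_scores(0.9, 0.8))  # strong pulse + natural texture -> live
print(fuse_scores(0.2, 0.3))  # weak pulse + print-like texture -> spoof
```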

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products, and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly widely available, typically in the form of smart watches and other connected devices. Consequently, devices to assist in measurements such as electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more available and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, and diagnostic and therapeutic techniques. However, a persistent issue is that recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movement and physical condition. To obtain accurate and meaningful indicators, the signal has to be processed and conditioned so that the measurements are free from noise and disturbances. In this context, many researchers have utilized recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field includes research on filtration, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine learning-based methods.
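A typical first conditioning step for such wearable traces is band-pass filtering to remove baseline wander and high-frequency noise. A minimal, numpy-only sketch using zero-phase FFT masking; real designs would more likely use an IIR/FIR filter from a DSP library, and the band limits here are illustrative:

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Zero-phase band-pass filter via FFT masking.

    Suppresses baseline wander (< lo Hz) and high-frequency noise
    (> hi Hz) in a wearable-sensor trace such as PPG or ECG by zeroing
    the out-of-band bins of the real FFT and transforming back.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0  # keep only the passband
    return np.fft.irfft(X, n=len(x))

# A 1 Hz "pulse" corrupted by 0.1 Hz drift and 20 Hz noise
fs = 100.0
t = np.arange(1000) / fs
x = (np.sin(2 * np.pi * 1.0 * t)
     + 3 * np.sin(2 * np.pi * 0.1 * t)    # baseline wander
     + 0.5 * np.sin(2 * np.pi * 20.0 * t))  # high-frequency noise
clean = fft_bandpass(x, fs, lo=0.5, hi=5.0)  # recovers the 1 Hz pulse
```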

    PipeNet: Selective Modal Pipeline of Fusion Network for Multi-Modal Face Anti-Spoofing

    Face anti-spoofing has become an increasingly important and critical security feature for authentication systems, due to rampant and easily launchable presentation attacks. Addressing the shortage of multi-modal face datasets, CASIA recently released the largest up-to-date CASIA-SURF Cross-ethnicity Face Anti-spoofing (CeFA) dataset, covering 3 ethnicities, 3 modalities, 1607 subjects, and 2D plus 3D attack types in four protocols, and focusing on the challenge of improving the generalization capability of face anti-spoofing across ethnicities and multi-modal continuous data. In this paper, we propose a novel pipeline-based multi-stream CNN architecture called PipeNet for multi-modal face anti-spoofing. Unlike previous works, a Selective Modal Pipeline (SMP) is designed to enable a customized pipeline for each data modality to take full advantage of multi-modal data, and a Limited Frame Vote (LFV) is designed to ensure stable and accurate prediction for video classification. The proposed method won third place in the final ranking of the Chalearn Multi-modal Cross-ethnicity Face Anti-spoofing Recognition Challenge at CVPR 2020. Our final submission achieves an Average Classification Error Rate (ACER) of 2.21 with a standard deviation of 1.26 on the test set. Comment: Accepted to appear in CVPR2020 WM
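The Limited Frame Vote is described above only at a high level. As a hypothetical sketch of the general idea, a video-level decision can be stabilized by letting a limited number of per-frame scores vote by majority; the function name, the frame limit `k`, and the threshold are all assumptions, not the paper's published rule:

```python
def limited_frame_vote(frame_scores, k=9, threshold=0.5):
    """Hypothetical sketch of a limited-frame voting rule.

    Each element of frame_scores is a per-frame spoof score in [0, 1]
    (higher = more likely an attack). Only the first k frames vote;
    the video-level label follows the majority of per-frame decisions.
    """
    votes = [s >= threshold for s in frame_scores[:k]]  # per-frame decisions
    return "attack" if sum(votes) > len(votes) / 2 else "live"

print(limited_frame_vote([0.9, 0.8, 0.2, 0.95, 0.7]))  # -> attack
print(limited_frame_vote([0.1, 0.2, 0.9, 0.1, 0.3]))   # -> live
```

Voting over a bounded number of frames keeps the latency of the video-level decision fixed while smoothing out single-frame misclassifications.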