757 research outputs found

    Estimating pulse wave velocity using mobile phone sensors

    Get PDF
    Pulse wave velocity is recognised as an important physiological phenomenon in the human body, and its measurement can aid in the diagnosis and treatment of chronic diseases. It is the gold standard for arterial stiffness measurement, and it also shares a positive relationship with blood pressure and heart rate. Several methods and devices exist for measuring it; however, commercially available devices are geared towards health professionals and hospital settings, requiring a significant monetary investment and specialised training to operate correctly. Furthermore, most of these devices are not portable and are therefore generally not feasible for private home use. Given its usefulness as an indicator of physiological function, a more portable, affordable, and simple-to-use solution would benefit both end users and healthcare professionals. This study developed a working model for a new approach to pulse wave velocity measurement, based on existing methods but making use of novel equipment. The proposed approach used a mobile phone video camera and audio input in conjunction with a Doppler ultrasound probe. The underlying principle is that of a two-point measurement system utilising photoplethysmography and electrocardiogram signals, an existing method commonly found in the literature. Data were collected using the mobile phone sensors and then processed and analysed on a computer. A custom MATLAB program computed pulse wave velocity from the audio and video signals and a measurement of the distance between the two data acquisition sites. Results were compared to the findings of previous studies in the field and showed similar trends. As the power of smartphones grows, the work and methods presented here could be developed into a standalone mobile application, bringing real benefits of portability and cost-effectiveness to the prospective user base.
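
    As a minimal sketch of the core computation described here (pulse wave velocity as the distance between the two acquisition sites divided by the pulse transit time), one possible implementation is shown below. This is an illustration under stated assumptions, not the authors' MATLAB program: the peak-detection settings and the use of the PPG systolic peak as the pulse-arrival marker are simplifications.

```python
# Hedged sketch: two-point PWV estimation from an ECG-like proximal signal and a
# distal PPG signal. Peak-detection parameters are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def estimate_pwv(ecg, ppg, fs, path_length_m):
    """Estimate pulse wave velocity as path length / median pulse transit time."""
    # Proximal timing reference: R-peaks of the ECG-like signal.
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    # Distal timing reference: systolic peaks of the PPG waveform.
    ppg_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=np.std(ppg))

    transit_times = []
    for r in r_peaks:
        # The first PPG peak after each R-peak marks pulse arrival downstream.
        later = ppg_peaks[ppg_peaks > r]
        if later.size:
            transit_times.append((later[0] - r) / fs)
    if not transit_times:
        raise ValueError("no matching beats found")

    ptt = np.median(transit_times)   # pulse transit time in seconds
    return path_length_m / ptt       # pulse wave velocity in metres per second
```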

    Widefield Computational Biophotonic Imaging for Spatiotemporal Cardiovascular Hemodynamic Monitoring

    Get PDF
    Cardiovascular disease is the leading cause of mortality, resulting in 17.3 million deaths per year globally. Although cardiovascular disease accounts for approximately 30% of deaths in the United States, many deleterious events can be mitigated or prevented if detected and treated early. Indeed, early intervention and the adoption of healthier behaviours can reduce the relative risk of a first heart attack by up to 80% compared to those who do not adopt such behaviours. Cardiovascular monitoring is therefore a vital component of disease detection, mitigation, and treatment. The cardiovascular system is incredibly dynamic, constantly adapting to internal and external stimuli, and monitoring its function and response is essential. Biophotonic technologies provide unique solutions for cardiovascular assessment and monitoring in naturalistic and clinical settings. These technologies leverage the properties of light as it enters and interacts with tissue, providing safe and rapid sensing that can be performed in many different environments. Light entering human tissue undergoes a complex series of absorption and scattering events according to both the illumination and tissue properties. The field of quantitative biomedical optics seeks to quantify physiological processes by analysing the remitted light characteristics relative to the controlled illumination source. Drawing inspiration from contact-based biophotonic sensing technologies such as pulse oximetry and near infrared spectroscopy, we explored the feasibility of widefield hemodynamic assessment using computational biophotonic imaging. Specifically, we investigated the hypothesis that computational biophotonic imaging can assess spatial and temporal properties of pulsatile blood flow across large tissue regions. This thesis presents the design, development, and evaluation of a novel photoplethysmographic imaging system for assessing spatial and temporal hemodynamics in major pulsatile vasculature through the sensing and processing of subtle light intensity fluctuations arising from local changes in blood volume. The system co-integrates methods from biomedical optics, electronic control, and biomedical image and signal processing to enable non-contact widefield hemodynamic assessment over large tissue regions. A biophotonic optical model was developed to quantitatively assess transient blood volume changes without requiring a priori information about the tissue's absorption and scattering characteristics. A novel automatic blood pulse waveform extraction method was developed to enable passive monitoring: this spectral-spatial pixel fusion method uses physiological hemodynamic priors to guide a probabilistic framework for learning pixel weights across the scene, and pixels are combined according to their signal weight to yield a single waveform. Widefield hemodynamic imaging was assessed in three biomedical applications using the developed system. First, spatial vascular distribution was investigated across a sample with highly varying demographics to assess common pulsatile vascular pathways. Second, non-contact biophotonic assessment of the jugular venous pulse waveform was demonstrated, providing clinically important information about cardiac contractility that is currently obtained through invasive catheterization.
Lastly, non-contact biophotonic assessment of cardiac arrhythmia was demonstrated, leveraging the system's ability to extract strong hemodynamic signals for assessing subtle fluctuations in the waveform. This research shows that this novel approach to computational biophotonic hemodynamic imaging offers new cardiovascular monitoring and assessment techniques, which can enable new scientific discoveries and clinical detection related to cardiovascular function.
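
    The pixel-fusion step, in which pixels are weighted by how strongly they express a cardiac-band pulsatile signal and then averaged into a single waveform, can be sketched as below. This is a deliberately simplified stand-in for the thesis's spectral-spatial probabilistic framework; the frequency band and the power-ratio weighting are assumptions made for illustration.

```python
# Hedged sketch of weight-based pixel fusion: pixels whose temporal signal carries
# more power in a plausible cardiac band contribute more to the fused waveform.
import numpy as np

def fuse_pixels(video, fs, hr_band=(0.7, 4.0)):
    """video: (T, H, W) array of intensity frames; returns a 1-D fused waveform."""
    T, H, W = video.shape
    pixels = video.reshape(T, H * W).astype(float)
    pixels -= pixels.mean(axis=0)                        # remove per-pixel DC level

    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(pixels, axis=0)) ** 2
    in_band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])

    # Physiological prior: fraction of each pixel's power inside the cardiac band.
    weights = spectra[in_band].sum(axis=0) / (spectra.sum(axis=0) + 1e-12)
    weights /= weights.sum()

    return pixels @ weights                              # weighted-average waveform
```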

    State of the art of audio- and video based solutions for AAL

    Get PDF
    Working Group 3: Audio- and Video-based AAL Applications. Europe is facing increasingly crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall condition of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, owing to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is reviewed.

    Human-Centric Detection and Mitigation Approach for Various Levels of Cell Phone-Based Driver Distractions

    Get PDF
    Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver's attention away from the primary task of driving, which can endanger the safety of the driver, passengers, and pedestrians. According to several traffic safety administration organizations, distracted and inattentive driving are the primary causes of vehicle crashes or near crashes. In this research, a novel approach to detect and mitigate various levels of driving distraction is proposed. The approach consists of two main phases: (i) a system to detect various levels of driver distraction (low, medium, and high) using machine learning techniques, and (ii) mitigation of the effects of driver distraction through the integration of the distraction detection algorithm with existing vehicle safety systems. In phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head pose analysis package in MATLAB, and the model was trained and validated to detect different human operator distraction levels. In phase 2, the detected level of distraction, time to collision (TTC), lane position (LP), and steering entropy (SE) were used as inputs to the vehicle safety controller, which provides an appropriate action to maintain and/or mitigate vehicle safety status. The integrated detection algorithm and vehicle safety controller were then prototyped in MATLAB/Simulink for validation. A complete vehicle powertrain model including the driver's interaction was replicated, and the output of the detection algorithm was fed into the vehicle safety controller. The results show that the vehicle safety controller reacted and mitigated the vehicle safety status in a closed-loop, real-time fashion. The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes arising from the driver as well as the vehicle system. This approach was applied to mitigate the impact of visual and cognitive distractions on driver performance.
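
    The mitigation phase, in which the detected distraction level is combined with time to collision, lane position, and steering entropy to select a safety action, might be organized as in the sketch below. The thresholds and action names are illustrative assumptions, not values or interventions taken from the dissertation.

```python
# Hedged sketch of a rule-based safety controller driven by the detected
# distraction level and vehicle-state inputs. All thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class VehicleState:
    distraction_level: int   # 0 = none, 1 = low, 2 = medium, 3 = high
    ttc_s: float             # time to collision (TTC), seconds
    lane_offset_m: float     # signed lateral deviation from the lane centre (LP)
    steering_entropy: float  # steering entropy (SE), unitless

def mitigation_action(state: VehicleState) -> str:
    """Return an escalating intervention based on distraction and safety margins."""
    if state.distraction_level >= 3 and state.ttc_s < 2.0:
        return "autonomous_braking"
    if state.distraction_level >= 2 and (abs(state.lane_offset_m) > 0.5
                                         or state.steering_entropy > 0.6):
        return "lane_keeping_assist_plus_alert"
    if state.distraction_level >= 1:
        return "visual_and_audible_alert"
    return "no_action"
```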

    A unified methodology for heartbeats detection in seismocardiogram and ballistocardiogram signals

    Get PDF
    This work presents a methodology to analyze and segment both seismocardiogram (SCG) and ballistocardiogram (BCG) signals in a unified fashion. An unsupervised approach is followed to extract a template of SCG/BCG heartbeats, which is then used to fine-tune the temporal waveform annotation. Rigorous performance assessment is conducted in terms of sensitivity, precision, Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of annotation. The methodology is tested on four independent datasets covering different measurement setups and time resolutions. A wide application range is therefore explored, which characterizes the robustness and generality of the method better than a single dataset would. Overall, sensitivity and precision scores are uniform across all datasets (p > 0.05 from the Kruskal–Wallis test): the average sensitivity among datasets is 98.7%, with 98.2% precision. On the other hand, a slight yet significant difference in RMSE and MAE scores was found (p < 0.01) in favor of datasets with higher sampling frequency. The best RMSE scores for SCG and BCG are 4.5 and 4.8 ms, respectively; similarly, the best MAE scores are 3.3 and 3.6 ms. The results were compared to relevant recent literature and were found to improve both detection performance and temporal annotation error.
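
    The template-then-refine idea can be sketched as follows: coarse beat candidates yield an ensemble-average template, which is then cross-correlated with the signal to fine-tune each annotation. The window length, envelope-based candidate search, and peak-detection settings are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of unsupervised template extraction and template-based refinement
# of SCG/BCG beat annotations.
import numpy as np
from scipy.signal import find_peaks, correlate

def annotate_beats(signal, fs, win_s=0.6):
    half = int(win_s * fs / 2)
    # Coarse, unsupervised beat candidates from the signal envelope.
    coarse, _ = find_peaks(np.abs(signal), distance=int(0.4 * fs),
                           prominence=np.std(signal))
    coarse = coarse[(coarse > half) & (coarse < len(signal) - half)]

    # Ensemble-average template of the candidate heartbeats.
    template = np.mean([signal[c - half:c + half] for c in coarse], axis=0)

    refined = []
    for c in coarse:
        segment = signal[c - half:c + half]
        xc = correlate(segment, template, mode="same")
        refined.append(c - half + int(np.argmax(xc)))    # template-aligned position
    return np.array(refined)
```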

    rPPG-Toolbox: Deep Remote PPG Toolbox

    Full text link
    Camera-based physiological measurement is a fast-growing field of computer vision. Remote photoplethysmography (rPPG) utilizes imaging devices (e.g., cameras) to measure the peripheral blood volume pulse (BVP) via photoplethysmography, and enables cardiac measurement via webcams and smartphones. However, the task is non-trivial, with important pre-processing, modeling, and post-processing steps required to obtain state-of-the-art results. Replication of results and benchmarking of new models is critical for scientific progress; however, as with many other applications of deep learning, reliable codebases are not easy to find or use. We present a comprehensive toolbox, rPPG-Toolbox, that contains unsupervised and supervised rPPG models with support for public benchmark datasets, data augmentation, and systematic evaluation: https://github.com/ubicomplab/rPPG-Toolbox.
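
    As a hedged illustration of the underlying measurement principle rather than the toolbox's own API, a basic unsupervised rPPG estimate can be obtained by spatially averaging the green channel of a facial region over time and band-pass filtering the result to the cardiac frequency range:

```python
# Minimal sketch (not rPPG-Toolbox code): green-channel spatial averaging followed
# by band-pass filtering to recover a blood volume pulse estimate from video.
import numpy as np
from scipy.signal import butter, filtfilt

def bvp_from_frames(frames, fs, band=(0.7, 4.0)):
    """frames: (T, H, W, 3) RGB face crops; returns a band-passed BVP estimate."""
    green = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
    green = (green - green.mean()) / (green.std() + 1e-12)   # normalise
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, green)
```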

    Experimental investigations of two-phase flow measurement using ultrasonic sensors

    Get PDF
    This thesis presents investigations into the use of ultrasonic technology to measure two-phase flow in both horizontal and vertical pipes, which is important for the petroleum industry; however, key challenges remain in measuring multiphase flow parameters accurately. Four ultrasonic methods were explored. The Hilbert-Huang transform (HHT) was first applied to ultrasound signals of air-water flow in a horizontal pipe to measure the parameters of two-phase slug flow. The HHT technique is sensitive enough to detect the hydrodynamics of slug flow, and the experimental results are in good agreement with correlations in the literature. Next, experimental data of air-water two-phase flow under slug, elongated bubble, stratified-wavy and stratified flow regimes were used to develop an objective flow regime classification using an ultrasonic Doppler sensor and an artificial neural network (ANN). The classifications using power spectral density (PSD) and discrete wavelet transform (DWT) features have accuracies of 87% and 95.6%, respectively. This is considerably more promising as it uses non-invasive and non-radioactive sensors. Moreover, ultrasonic pulse wave transducers with centre frequencies of 1 MHz and 7.5 MHz were used to measure two-phase flow in both horizontal and vertical pipes. The liquid level measurement was compared with the conductivity probe technique and agreed qualitatively; however, in vertical flow with a gas volume fraction (GVF) higher than 20%, the ultrasound signals were attenuated. Furthermore, gas-liquid and oil-water two-phase flow rates in vertical upward flow were measured using a combination of an ultrasound Doppler sensor and a gamma densitometer. The results showed that the measured gas and liquid flow rates are within ±10% for low void fraction tests, water-cut measurements are within ±10%, densities within ±5%, and void fractions within ±10%. These are good results for a relatively fast-flowing multiphase flow.
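
    The flow-regime classification step, PSD features from the Doppler signal feeding an artificial neural network, might look as sketched below. The band count, Welch parameters, and network size are illustrative assumptions; the DWT-feature variant reported in the thesis is not shown.

```python
# Hedged sketch: normalised band-power features from an ultrasonic Doppler signal
# used to train a small neural-network flow-regime classifier.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def psd_features(doppler_signal, fs, n_bands=16):
    freqs, psd = welch(doppler_signal, fs=fs, nperseg=1024)
    bands = np.array_split(psd, n_bands)                  # coarse spectral bins
    feats = np.array([b.mean() for b in bands])
    return feats / (feats.sum() + 1e-12)                  # normalised band powers

# Training on labelled segments (slug, elongated bubble, stratified-wavy, stratified):
#   X = np.vstack([psd_features(seg, fs) for seg in segments]); y = labels
#   clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
#   regime = clf.predict([psd_features(new_segment, fs)])
```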

    Robust Algorithms for Unattended Monitoring of Cardiovascular Health

    Get PDF
    Cardiovascular disease is the leading cause of death in the United States. Tracking daily changes in one's cardiovascular health can be critical in diagnosing and managing cardiovascular disease, such as heart failure and hypertension. A toilet seat is an ideal device for monitoring parameters related to a subject's cardiac health in the home, because it is used consistently and requires no change in daily habit. The present work demonstrates the ability to accurately capture clinically relevant ECG metrics, pulse transit time-based blood pressure, and other parameters across subjects and physiological states using a toilet seat-based cardiovascular monitoring system, enabled by advanced signal processing algorithms and techniques. The algorithms described herein have been designed for use with noisy physiologic signals measured at non-standard locations. A key component of these algorithms is the classification of signal quality, which allows automatic rejection of noisy segments before feature delineation and interval extraction. The delineation algorithms have been designed to work on poor-quality signals while maintaining the highest possible temporal resolution. When validated on standard databases, the custom QRS delineation algorithm has best-in-class sensitivity and precision, while the photoplethysmogram delineation algorithm has best-in-class temporal resolution. Human subject testing on normative and heart failure subjects is used to evaluate the efficacy of the proposed monitoring system and algorithms. Results show that the measured heart rate and blood pressure are well within the accuracy limits of the AAMI standards. For the first time, a single device is capable of monitoring long-term trends in these parameters while facilitating daily measurements taken at rest, prior to the consumption of food and stimulants, and at consistent times each day. This system has the potential to revolutionize in-home cardiovascular monitoring.
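
    The pulse transit time-to-blood pressure step can be illustrated with a generic per-subject calibration model. The inverse-PTT model form and the example calibration readings below are assumptions for illustration, not the dissertation's fitted model.

```python
# Hedged sketch: calibrate SBP = a / PTT + b from a few reference measurements,
# then estimate systolic blood pressure from new pulse transit times.
import numpy as np

def fit_ptt_bp_model(ptt_s, sbp_mmhg):
    """Least-squares fit of SBP = a / PTT + b from calibration pairs."""
    A = np.column_stack([1.0 / np.asarray(ptt_s), np.ones(len(ptt_s))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(sbp_mmhg), rcond=None)
    return a, b

def estimate_sbp(ptt_s, a, b):
    return a / ptt_s + b

# Example with hypothetical calibration readings:
#   a, b = fit_ptt_bp_model([0.20, 0.22, 0.25], [135, 128, 118])
#   estimate_sbp(0.21, a, b)
```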

    The 2023 wearable photoplethysmography roadmap

    Get PDF
    Photoplethysmography is a key sensing technology used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities like sleep and exercise. Yet wearable photoplethysmography has the potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.