1,968 research outputs found

    Wearable in-ear pulse oximetry: theory and applications

    Wearable health technology, most commonly in the form of the smart watch, is employed by millions of users worldwide. These devices generally exploit photoplethysmography (PPG), the non-invasive use of light to measure blood volume, in order to track physiological metrics such as pulse and respiration. Moreover, PPG is commonly used in hospitals in the form of pulse oximetry, which measures light absorbance by the blood at different wavelengths to estimate blood oxygen saturation (SpO2). This thesis aims to demonstrate that, despite its widespread usage over many decades, this sensor still possesses a wealth of untapped value. Through a combination of advanced signal processing and harnessing the ear as a location for wearable sensing, this thesis introduces several novel, high-impact applications of in-ear pulse oximetry and photoplethysmography. The aims of this thesis are accomplished through a three-pronged approach: rapid detection of hypoxia, tracking of cognitive workload and fatigue, and detection of respiratory disease. By means of simultaneous recording of in-ear and finger pulse oximetry at rest and during breath-hold tests, it was found that in-ear SpO2 responds on average 12.4 seconds faster than finger SpO2. This is likely due in part to the ear's close proximity to the brain, which makes it a priority for oxygenation and thus makes wearable in-ear SpO2 a good proxy for core blood oxygen. Next, the low latency of in-ear SpO2 was further exploited in the novel application of classifying cognitive workload. It was found that in-ear pulse oximetry was able to robustly detect small decreases in blood oxygen during increased cognitive workload, likely caused by increased brain metabolism. This thesis demonstrates that in-ear SpO2 can be used to accurately distinguish between different levels of an N-back memory task, representing different levels of mental effort.
This concept was further validated through its application to gaming, and then extended to the detection of driver-related fatigue. It was found that features derived from SpO2 and PPG were predictive of absolute steering wheel angle, which acts as a proxy for fatigue. The strength of in-ear PPG for the monitoring of respiration was investigated with respect to the finger, with the conclusion that in-ear PPG exhibits far stronger respiration-induced intensity variations and pulse amplitude variations than the finger. All three respiratory modes were harnessed through multivariate empirical mode decomposition (MEMD) to produce spirometry-like respiratory waveforms from PPG. It was discovered that these PPG-derived respiratory waveforms can be used to detect obstruction to breathing, both through a novel apparatus for the simulation of breathing disorders and through the classification of chronic obstructive pulmonary disease (COPD) in the real world. This thesis establishes in-ear pulse oximetry as a wearable technology with the potential for immense societal impact, with applications ranging from the classification of cognitive workload and the prediction of driver fatigue through to the detection of COPD. The experiments and analysis in this thesis conclusively demonstrate that widely used pulse oximetry and photoplethysmography possess a wealth of untapped value, in essence teaching the old PPG sensor new tricks.
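As background to the SpO2 estimates discussed in this abstract, pulse oximetry conventionally uses the "ratio of ratios" of the pulsatile (AC) and baseline (DC) components of the red and infrared PPG signals. The sketch below illustrates that calculation; the linear calibration (110 - 25R) is a common textbook approximation, not the thesis's device-specific calibration, and the function name is illustrative.

```python
import numpy as np

def estimate_spo2(red, ir):
    """Estimate SpO2 (%) from raw red and infrared PPG waveforms
    via the ratio-of-ratios method.

    The AC component is taken as the standard deviation of the
    detrended signal and the DC component as its mean; the linear
    mapping 110 - 25*R is a textbook approximation that a real
    device replaces with an empirically calibrated curve.
    """
    def ac_dc(x):
        x = np.asarray(x, dtype=float)
        dc = x.mean()
        ac = np.std(x - dc)  # pulsatile component
        return ac, dc

    ac_r, dc_r = ac_dc(red)
    ac_i, dc_i = ac_dc(ir)
    R = (ac_r / dc_r) / (ac_i / dc_i)
    return 110.0 - 25.0 * R
```

A higher R (relatively stronger red pulsation) corresponds to lower oxygen saturation, because deoxygenated hemoglobin absorbs more red light than oxygenated hemoglobin.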

    Physiological-based Driver Monitoring Systems: A Scoping Review

    A physiological-based driver monitoring system (DMS) has attracted research interest and has great potential for providing more accurate and reliable monitoring of the driver’s state during a driving experience. Many driver monitoring systems are driver behavior-based or vehicle-based. When these non-physiological-based DMS are coupled with physiological-based data analysis from electroencephalography (EEG), electrooculography (EOG), electrocardiography (ECG), and electromyography (EMG), the physical and emotional state of the driver may also be assessed. Drivers’ wellness can also be monitored, and hence, traffic collisions can be avoided. This paper highlights work that has been published in the past five years related to physiological-based DMS. Specifically, we focused on the physiological indicators applied in DMS design and development. Work utilizing key physiological indicators related to driver identification, driver alertness, driver drowsiness, driver fatigue, and drunk driving is identified and described based on the PRISMA Extension for Scoping Reviews (PRISMA-ScR) framework. The relationship between selected papers is visualized using keyword co-occurrence. Findings were presented using a narrative review approach based on classifications of DMS. Finally, the challenges of physiological-based DMS are highlighted in the conclusion. Doi: 10.28991/CEJ-2022-08-12-020
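The keyword co-occurrence analysis mentioned in this review can be reduced to counting how often pairs of author keywords appear together across the selected papers. A minimal sketch, with illustrative function and variable names (the review itself does not specify its tooling):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often each pair of keywords appears together.

    `keyword_lists` holds one list of author keywords per paper.
    The resulting pair counts are the edge weights from which a
    co-occurrence network would typically be drawn.
    """
    counts = Counter()
    for kws in keyword_lists:
        # Deduplicate within a paper, sort so each pair has one key.
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts
```

Pairs that co-occur in many papers form the densely connected clusters that such a visualization highlights.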

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used for increasing safety in the automotive domain, yet current ADASs notably operate without taking into account drivers’ states, e.g., whether she/he is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of Driver Complex State (DCS). DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) for uncovering the driver state and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.

    Electroencephalography (EEG), electromyography (EMG) and eye-tracking for astronaut training and space exploration

    The ongoing push to send humans back to the Moon and to Mars is giving rise to a wide range of novel technical solutions in support of prospective astronaut expeditions. Against this backdrop, the European Space Agency (ESA) has recently launched an investigation into unobtrusive interface technologies as a potential answer to such challenges. Three particular technologies have shown promise in this regard: EEG-based brain-computer interfaces (BCIs) provide a non-invasive method of utilizing recorded electrical activity of a user's brain; electromyography (EMG) enables monitoring of electrical signals generated by the user's muscle contractions; and, finally, eye tracking enables, for instance, the tracking of the user's gaze direction via camera recordings to convey commands. Beyond simply improving the usability of prospective technical solutions, our findings indicate that EMG, EEG, and eye tracking could also serve to monitor and assess a variety of cognitive states, including attention, cognitive load, and mental fatigue of the user, while EMG could furthermore be utilized to monitor the physical state of the astronaut. In this paper, we elaborate on the key strengths and challenges of these three enabling technologies, and in light of ESA's latest findings, we reflect on their applicability in the context of human space flight. Furthermore, a timeline of technological readiness is provided. In so doing, this paper feeds into the growing discourse on emerging technology and its role in paving the way for a human return to the Moon and expeditions beyond the Earth's orbit.

    Defining, measuring, and modeling passenger's in-vehicle experience and acceptance of automated vehicles

    Automated vehicle acceptance (AVA) has been measured mostly subjectively by questionnaires and interviews, with a main focus on drivers inside automated vehicles (AVs). To ensure that AVs are widely accepted by the public, ensuring acceptance by both drivers and passengers is key. The in-vehicle experience of passengers will determine the extent to which AVs will be accepted by passengers. A comprehensive understanding of potential assessment methods to measure the passenger experience in AVs is needed to improve the in-vehicle experience of passengers and thereby the acceptance. The present work provides an overview of assessment methods that were used to measure a driver's behavior, and cognitive and emotional states during (automated) driving. The results of the review have shown that these assessment methods can be classified by type of data-collection method (e.g., questionnaires, interviews, direct input devices, sensors), object of their measurement (i.e., perception, behavior, state), time of measurement, and degree of objectivity of the data collected. A conceptual model synthesizes the results of the literature review, formulating relationships between the factors constituting the in-vehicle experience and AVA. It is theorized that the in-vehicle experience influences the intention to use, with intention to use serving as a predictor of actual use. The model also formulates relationships between actual use and well-being. A combined approach of using both subjective and objective assessment methods is needed to provide more accurate estimates for AVA, and advance the uptake and use of AVs.

    Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video

    The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with this. In this paper, we present a system that helps blind people to maintain physical distance to other persons using a combination of RGB and depth cameras. We use a real-time semantic segmentation algorithm on the RGB camera to detect where persons are and use the depth camera to assess the distance to them; then, we provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby but does not react to non-person objects such as walls, trees or doors; thus, it is not intrusive, and it is possible to use it in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.
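The per-frame decision described in this abstract (segment persons in RGB, read the aligned depth map, warn under 1.5 m) can be sketched as follows. Function and parameter names are illustrative; the 1.5 m default follows the paper's threshold, and the segmentation model itself is assumed to be supplied elsewhere.

```python
import numpy as np

def person_too_close(person_mask, depth_m, threshold_m=1.5):
    """Return True if any pixel labelled 'person' is nearer than the
    social-distancing threshold.

    `person_mask` is a boolean H x W array from the semantic
    segmentation of the RGB frame; `depth_m` is the registered depth
    map in metres, with 0 marking invalid depth readings.
    """
    valid = person_mask & (depth_m > 0)  # ignore invalid depth pixels
    if not valid.any():
        return False  # no person visible in this frame
    return bool(depth_m[valid].min() < threshold_m)
```

In the full system, a True result would trigger the audio feedback over the bone-conducting headphones, leaving non-person obstacles to other assistive devices.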