
    Leveraging driver vehicle and environment interaction: Machine learning using driver monitoring cameras to detect drunk driving

    Excessive alcohol consumption causes disability and death. Digital interventions are promising means to promote behavioral change and thus prevent alcohol-related harm, especially in critical moments such as driving. This requires real-time information on a person's blood alcohol concentration (BAC). Here, we develop an in-vehicle machine learning system to predict critical BAC levels. Our system leverages driver monitoring cameras mandated in numerous countries worldwide. We evaluate our system with n=30 participants in an interventional simulator study. Our system reliably detects driving under any alcohol influence (area under the receiver operating characteristic curve [AUROC] 0.88) and driving above the WHO-recommended limit of 0.05 g/dL BAC (AUROC 0.79). Model inspection reveals reliance on pathophysiological effects associated with alcohol consumption. To our knowledge, we are the first to rigorously evaluate the use of driver monitoring cameras for detecting drunk driving. Our results highlight the potential of driver monitoring cameras and enable next-generation drunk driving interaction that prevents alcohol-related harm.
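
    The abstract above reports detection quality as AUROC for two binary tasks (any alcohol influence, and driving above 0.05 g/dL BAC). As a minimal sketch of how such an evaluation could look, the snippet below trains a classifier on invented driver-monitoring features and computes a cross-validated AUROC; the feature names, simulated data, and model choice are assumptions for illustration, not the authors' pipeline.

```python
# Hypothetical illustration: evaluating a BAC-threshold classifier with AUROC.
# Feature names and data are invented; this is not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_drives = 120
# Assumed driver-monitoring features, e.g. blink rate, gaze dispersion, head-pose variance
X = rng.normal(size=(n_drives, 3))
# Label: 1 if the drive was performed above 0.05 g/dL BAC, else 0 (simulated here)
y = (X[:, 0] + 0.5 * rng.normal(size=n_drives) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0)
# Cross-validated probability estimates keep training and test drives separate
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUROC: {roc_auc_score(y, proba):.2f}")
```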

    Effectiveness and User Perception of an In-Vehicle Voice Warning for Hypoglycemia: Development and Feasibility Trial

    Background: Hypoglycemia is a frequent and acute complication in type 1 diabetes mellitus (T1DM) and is associated with a higher risk of car mishaps. Currently, hypoglycemia can be detected and signaled through flash glucose monitoring or continuous glucose monitoring devices, which require manual and visual interaction, thereby removing the focus of attention from the driving task. Hypoglycemia causes a decrease in attention, thereby challenging the safety of using such devices behind the wheel. Here, we present an investigation of a hands-free technology: a voice warning that can potentially be delivered via an in-vehicle voice assistant. Objective: This study aims to investigate the feasibility of an in-vehicle voice warning for hypoglycemia, evaluating both its effectiveness and user perception. Methods: We designed a voice warning and evaluated it in 3 studies. In all studies, participants received a voice warning while driving. Study 0 (n=10) assessed the feasibility of using a voice warning with healthy participants driving in a simulator. Study 1 (n=18) assessed the voice warning in participants with T1DM. Study 2 (n=20) assessed the voice warning in participants with T1DM undergoing hypoglycemia while driving in a real car. We measured participants' self-reported perception of the voice warning (with a user experience scale in study 0 and with acceptance, alliance, and trust scales in studies 1 and 2) and compliance behavior (whether they stopped the car and reaction time). In addition, we assessed technology affinity and collected the participants' verbal feedback. Results: Technology affinity was similar across studies and approximately 70% of the maximal value. Perception measures of the voice warning were approximately 62% to 78% in the simulated driving and 34% to 56% in real-world driving. Perception correlated with technology affinity on specific constructs (eg, Affinity for Technology Interaction score and intention to use, optimism and performance expectancy, behavioral intention, Session Alliance Inventory score, innovativeness and hedonic motivation, and negative correlations between discomfort and behavioral intention and discomfort and competence trust; all P<.05). Compliance was 100% in all studies, whereas reaction time was higher in study 1 (mean 23, SD 5.2 seconds) than in study 0 (mean 12.6, SD 5.7 seconds) and study 2 (mean 14.6, SD 4.3 seconds). Finally, verbal feedback showed that the participants preferred the voice warning to be less verbose and interactive. Conclusions: This is the first study to investigate the feasibility of an in-vehicle voice warning for hypoglycemia. Drivers find such an implementation useful and effective in a simulated environment, but improvements are needed in the real-world driving context. This study is a kickoff for the use of in-vehicle voice assistants for digital health interventions.
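
    The construct-level results above report correlations between technology affinity and perception measures at P<.05. The sketch below, with entirely invented scores, shows how one such construct pair (e.g., Affinity for Technology Interaction vs. intention to use) could be tested with a Pearson correlation; it is an illustration only, not the study's analysis code.

```python
# Hypothetical illustration: correlating a technology-affinity score with a
# perception construct (e.g. intention to use); the values below are invented.
from scipy.stats import pearsonr

ati_score        = [3.2, 4.1, 2.8, 4.6, 3.9, 3.5, 4.4, 2.9, 3.7, 4.0]  # Affinity for Technology Interaction
intention_to_use = [3.0, 4.5, 2.5, 4.8, 4.1, 3.2, 4.6, 3.1, 3.6, 4.2]  # self-reported, same participants

r, p = pearsonr(ati_score, intention_to_use)
print(f"r = {r:.2f}, p = {p:.3f}")  # a construct pair is reported if p < .05
```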

    Machine learning for non‐invasive sensing of hypoglycaemia while driving in people with diabetes

    Aim: To develop and evaluate the concept of a non-invasive machine learning (ML) approach for detecting hypoglycaemia based exclusively on combined driving (CAN) and eye tracking (ET) data. Materials and Methods: We first developed and tested our ML approach in pronounced hypoglycaemia, and then we applied it to mild hypoglycaemia to evaluate its early warning potential. For this, we conducted two consecutive, interventional studies in individuals with type 1 diabetes. In study 1 (n = 18), we collected CAN and ET data in a driving simulator during euglycaemia and pronounced hypoglycaemia (blood glucose [BG] 2.0-2.5 mmol L⁻¹). In study 2 (n = 9), we collected CAN and ET data in the same simulator but in euglycaemia and mild hypoglycaemia (BG 3.0-3.5 mmol L⁻¹). Results: Here, we show that our ML approach detects pronounced and mild hypoglycaemia with high accuracy (area under the receiver operating characteristic curve 0.88 ± 0.10 and 0.83 ± 0.11, respectively). Conclusions: Our findings suggest that an ML approach based exclusively on CAN and ET data enables detection of hypoglycaemia while driving. This provides a promising concept for alternative and non-invasive detection of hypoglycaemia.
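
    The reported 0.88 ± 0.10 and 0.83 ± 0.11 AUROC values suggest an evaluation averaged over folds or participants. Below is a hedged sketch of a subject-grouped, cross-validated AUROC evaluation on combined CAN and ET features; the feature names, labels, and logistic-regression model are assumptions for illustration, not the study's actual method.

```python
# Hypothetical illustration: detecting hypoglycaemia from combined CAN (driving)
# and ET (eye tracking) features; feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_windows, n_subjects = 360, 18
# Assumed per-window features: e.g. steering reversal rate, lane deviation (CAN),
# fixation duration, gaze entropy (ET)
X = rng.normal(size=(n_windows, 4))
y = (X[:, 2] + 0.7 * rng.normal(size=n_windows) > 0).astype(int)  # 1 = hypoglycaemia
groups = rng.integers(0, n_subjects, size=n_windows)  # keep each subject in one fold

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, groups=groups,
                         cv=GroupKFold(n_splits=5), scoring="roc_auc")
print(f"AUROC: {scores.mean():.2f} +/- {scores.std():.2f}")
```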

    Driver state prediction from vehicle signals: An evaluation of segmentation approaches

    Modern vehicles are typically equipped with assistance systems that support drivers in staying vigilant. To assess the driver state, such systems usually split characteristic vehicle signals into smaller segments, which are subsequently fed into algorithms to identify irregularities in driver behavior. In this paper, we compare four different approaches for vehicle signal segmentation to predict driver impairment on a dataset from a drunk driving study (n=31). First, we evaluate two static approaches that segment vehicle signals based on fixed time and distance lengths. Such approaches are straightforward to implement and provide segments at a specific frequency. Next, we analyze two dynamic approaches that segment vehicle signals based on pre-defined thresholds and well-defined maneuvers. Although these are more sophisticated to define, their more specific characterization of driving situations can potentially improve a driver state prediction model. Finally, we train machine learning models for drunk driving detection on vehicle signals segmented by these four approaches. The maneuver-based approach detects impaired driving with a balanced accuracy of 68.73%, thereby outperforming the time-based (67.20%), distance-based (65.66%), and threshold-based (61.53%) approaches in comparable settings. Our findings therefore indicate that incorporating the driving context benefits the prediction of driver states.
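
    As an illustration of the two static approaches described above, the sketch below segments a simulated vehicle-signal stream into fixed-time and fixed-distance windows; the column names, sampling rate, and window sizes are assumptions rather than the study's actual configuration.

```python
# Hypothetical illustration of the two static segmentation approaches:
# fixed-time windows and fixed-distance windows over a vehicle-signal stream.
# Column names (timestamps in seconds, odometer in metres) are assumptions.
import numpy as np
import pandas as pd

def segment_by_time(df, window_s=30.0):
    """Split signals into consecutive segments of fixed duration."""
    bins = ((df["t"] - df["t"].iloc[0]) // window_s).astype(int)
    return [g for _, g in df.groupby(bins)]

def segment_by_distance(df, window_m=500.0):
    """Split signals into consecutive segments of fixed travelled distance."""
    bins = ((df["odometer"] - df["odometer"].iloc[0]) // window_m).astype(int)
    return [g for _, g in df.groupby(bins)]

# Simulated CAN trace: 10 Hz samples of speed and steering angle
t = np.arange(0, 600, 0.1)
speed = 14 + 2 * np.sin(t / 30)                     # m/s
df = pd.DataFrame({"t": t,
                   "speed": speed,
                   "steering": np.sin(t / 5),
                   "odometer": np.cumsum(speed) * 0.1})

print(len(segment_by_time(df)), "time-based segments")
print(len(segment_by_distance(df)), "distance-based segments")
```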

    Improving heart rate variability measurements from consumer smartwatches with machine learning

    The reactions of the human body to physical exercise, psychophysiological stress, and heart disease are reflected in heart rate variability (HRV). Thus, continuous monitoring of HRV can contribute to determining and predicting issues in well-being and mental health. HRV can be measured in everyday life by consumer wearable devices such as smartwatches, which are easily accessible and affordable. However, their measurements are of arguable accuracy owing to limited sensor stability. We hypothesize a systematic error related to the wearer's movement. Our evidence builds upon explanatory and predictive modeling: we find a statistically significant correlation between the error in HRV measurements and the wearer's movement. We show that this error can be minimized by bringing additional available sensor information, such as accelerometer data, into context. This work demonstrates our research in progress on how neural learning can minimize the error of such smartwatch HRV measurements.
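
    A minimal sketch of the correction idea described above: model the systematic HRV error as a function of movement intensity taken from the accelerometer, then subtract the predicted error from the smartwatch reading. The linear model and simulated data below are assumptions for illustration; the work itself refers to neural learning against a reference measurement.

```python
# Hypothetical illustration: modelling the smartwatch HRV error as a function of
# wearer movement (accelerometer magnitude) and subtracting the predicted error.
# All data are simulated; this is not the authors' model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
movement = rng.uniform(0, 2, size=n)            # mean accelerometer magnitude per window (g)
rmssd_reference = rng.normal(45, 8, size=n)     # ms, e.g. from a chest-strap ECG reference
# Simulated systematic error that grows with movement, plus noise
rmssd_watch = rmssd_reference + 6 * movement + rng.normal(0, 2, size=n)

error_model = LinearRegression().fit(movement.reshape(-1, 1),
                                     rmssd_watch - rmssd_reference)
rmssd_corrected = rmssd_watch - error_model.predict(movement.reshape(-1, 1))

print(f"mean abs error before: {np.abs(rmssd_watch - rmssd_reference).mean():.2f} ms")
print(f"mean abs error after:  {np.abs(rmssd_corrected - rmssd_reference).mean():.2f} ms")
```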

    FLIRT: A Feature Generation Toolkit for Wearable Data

    Background and Objective: Researchers use wearable sensing data and machine learning (ML) models to predict various health and behavioral outcomes. However, sensor data from commercial wearables are prone to noise, missing values, or artifacts. Even with the recent interest in deploying commercial wearables for long-term studies, there is no standardized way to process the raw sensor data, and researchers often use highly specific functions to preprocess, clean, normalize, and compute features. This leads to a lack of uniformity and reproducibility across studies, making it difficult to compare results. To overcome these issues, we present FLIRT: A Feature Generation Toolkit for Wearable Data, an open-source Python package that focuses on processing physiological data from commercial wearables with all its challenges, from data cleaning to feature extraction. Methods: FLIRT leverages a variety of state-of-the-art algorithms (e.g., particle filters, ML-based artifact detection) to ensure robust preprocessing of physiological data from wearables. In a subsequent step, FLIRT utilizes a sliding-window approach and calculates a feature vector of more than 100 dimensions, a basis for a wide variety of ML algorithms. Results: We evaluated FLIRT on the publicly available WESAD dataset, which focuses on stress detection with an Empatica E4 wearable. Preprocessing the data with FLIRT ensures that unintended noise and artifacts are appropriately filtered. In the classification task, FLIRT outperforms the preprocessing baseline of the original WESAD paper. Conclusion: FLIRT provides functionalities beyond existing packages that address unmet needs in physiological data processing and feature generation: (a) integrated handling of common wearable file formats (e.g., Empatica E4 archives), (b) robust preprocessing, and (c) standardized feature generation that ensures reproducibility of results. While FLIRT comes with a default configuration that accommodates most situations, it offers a highly configurable interface for all of its implemented algorithms to account for specific needs.
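
    To illustrate the sliding-window feature generation that FLIRT describes (without relying on FLIRT's actual API, which is not shown here), the following sketch computes simple statistical features over overlapping windows of a simulated wearable signal; the function and column names are assumptions.

```python
# Conceptual sketch of sliding-window feature generation for wearable data.
# This does NOT use FLIRT's API; it only illustrates the idea on a simulated signal.
import numpy as np
import pandas as pd

def sliding_window_features(signal: pd.Series, fs: int, window_s: int, step_s: int) -> pd.DataFrame:
    """Compute simple statistical features over overlapping windows."""
    win, step = window_s * fs, step_s * fs
    rows = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal.iloc[start:start + win]
        rows.append({"t_start": signal.index[start],
                     "mean": w.mean(), "std": w.std(),
                     "min": w.min(), "max": w.max(),
                     "iqr": w.quantile(0.75) - w.quantile(0.25)})
    return pd.DataFrame(rows)

# Simulated 4 Hz electrodermal activity trace (the rate an Empatica E4 records EDA at)
fs = 4
eda = pd.Series(np.random.default_rng(3).gamma(2.0, 0.5, size=fs * 600))
features = sliding_window_features(eda, fs=fs, window_s=60, step_s=10)
print(features.head())
```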
