
    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to detection accuracy, has received little attention. This paper introduces a new multimodal dataset of distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer achieves superior distraction detection results compared to the baseline SVM and neural network models.
    Comment: INTERSPEECH 2018
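    The abstract does not reproduce the fusion layer's exact formulation, so the following is only a minimal sketch of one common degree-2 polynomial fusion: the unimodal embeddings are concatenated with their pairwise element-wise products before a linear classifier. The module name, the embedding dimension, and the choice of degree 2 are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    """Hypothetical degree-2 polynomial fusion of three modality embeddings.

    Assumption: each modality (face, speech, car signals) has already been
    encoded into a fixed-size vector of dimension `d`. The fused feature is
    the concatenation of the unimodal vectors and their pairwise
    element-wise products, i.e. all polynomial terms up to degree 2.
    """

    def __init__(self, d: int, num_classes: int = 2):
        super().__init__()
        # 3 unimodal vectors + 3 pairwise product vectors -> 6 * d features
        self.classifier = nn.Linear(6 * d, num_classes)

    def forward(self, face: torch.Tensor, speech: torch.Tensor,
                car: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [face, speech, car,   # degree-1 terms
             face * speech,       # degree-2 interaction terms
             face * car,
             speech * car],
            dim=-1,
        )
        return self.classifier(fused)

# Usage with random stand-in embeddings (batch of 4, d = 32)
model = PolynomialFusion(d=32)
logits = model(torch.randn(4, 32), torch.randn(4, 32), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])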

    IntoxiGait Deep Learning

    Alcohol abuse has been a pervasive problem worldwide, causing 88,000 deaths annually. Recently, several projects have attempted to estimate a user's level of intoxication by measuring gait with mobile sensors. The goal of this project was to compare a deep learning approach against previous methods for predicting a user's blood alcohol concentration (BAC) by training a convolutional neural network and building a mobile app that could accurately determine intoxication level. We gathered data from 38 participants over the course of 12 weeks, collecting accelerometer and gyroscope data simultaneously from both a smartwatch and a smartphone. Classifying inputs containing two seconds of data into 5 BAC ranges, our neural network's accuracy is roughly 64% on the test set and 69% on the training set.
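    The abstract fixes only the task shape (two-second windows of watch and phone motion data, 5 BAC classes), not the network itself. A minimal sketch of a 1-D CNN for that kind of input might look like the following; the 50 Hz sampling rate, the 12-channel layout (two devices x two sensors x three axes), and all layer sizes are assumptions for illustration.

import torch
import torch.nn as nn

# Assumed input layout: 2 devices (watch, phone) x 2 sensors (accel, gyro)
# x 3 axes = 12 channels; 2 s at an assumed 50 Hz = 100 samples per window.
NUM_CHANNELS, WINDOW_LEN, NUM_CLASSES = 12, 100, 5

class BACConvNet(nn.Module):
    """Illustrative 1-D CNN classifying a motion window into 5 BAC ranges."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),          # 100 time steps -> 50
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling over time
        )
        self.classifier = nn.Linear(64, NUM_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Usage with a random stand-in batch of 8 windows
model = BACConvNet()
logits = model(torch.randn(8, NUM_CHANNELS, WINDOW_LEN))
print(logits.shape)  # torch.Size([8, 5])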