8,145 research outputs found

    Towards a Practical Pedestrian Distraction Detection Framework using Wearables

    Full text link
    Pedestrian safety continues to be a significant concern in urban communities, and pedestrian distraction is emerging as one of the main causes of grave and fatal accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate and efficient recognition of pedestrian distractions in real time, given the memory, computation and communication limitations of these devices, however, remains the key technical challenge in the design of such systems. Earlier research efforts in pedestrian distraction detection using data available from mobile and wearable devices have primarily focused only on achieving high detection accuracy, resulting in designs that are either resource-intensive and unsuitable for implementation on mainstream mobile devices, computationally slow and not useful for real-time pedestrian safety applications, or reliant on specialized hardware and thus less likely to be adopted by most users. In the quest for a pedestrian safety system that achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption, this paper makes the following main contributions: (i) design of a novel complex activity recognition framework which employs motion data available from users' mobile and wearable devices and a lightweight frequency-matching approach to accurately and efficiently recognize complex distraction-related activities, and (ii) a comprehensive comparative evaluation of the proposed framework against well-known complex activity recognition techniques in the literature, using data collected from human subject pedestrians and prototype implementations on commercially available mobile and wearable devices.
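
    The frequency-matching idea lends itself to a compact implementation. Below is a minimal sketch, assuming windowed tri-axial accelerometer samples and a set of pre-computed per-activity frequency templates; the sampling rate, template values, and function names are illustrative and not taken from the paper.

```python
# Illustrative sketch of frequency-matching activity recognition (not the
# paper's actual implementation): classify a window of tri-axial accelerometer
# data by comparing its dominant-frequency signature to stored templates.
import numpy as np

FS = 50  # assumed sampling rate in Hz

def frequency_signature(window, n_peaks=3):
    """Return the n_peaks dominant frequencies of the acceleration magnitude."""
    magnitude = np.linalg.norm(window, axis=1)           # combine x, y, z axes
    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
    freqs = np.fft.rfftfreq(len(magnitude), d=1.0 / FS)
    top = np.argsort(spectrum)[-n_peaks:]                # strongest frequency bins
    return np.sort(freqs[top])

def classify(window, templates):
    """Match the window's signature to the closest labeled template."""
    sig = frequency_signature(window)
    return min(templates, key=lambda label: np.linalg.norm(sig - templates[label]))

# Hypothetical templates that would be learned offline from labeled data.
templates = {
    "walking": np.array([0.0, 1.8, 2.0]),
    "walking_while_texting": np.array([0.0, 1.2, 1.5]),
}
window = np.random.randn(2 * FS, 3)  # 2 s of synthetic tri-axial samples
print(classify(window, templates))
```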

    Detecting Distracted Driving with Deep Learning

    Get PDF
    Driver distraction is the leading factor in most car crashes and near-crashes. This paper discusses the types, causes and impacts of distracted driving. A deep learning approach is then presented for the detection of such driving behaviors using images of the driver, where an enhancement has been made to a standard convolutional neural network (CNN). Experimental results on the Kaggle challenge dataset have confirmed the capability of a CNN in this complicated computer vision task and illustrated the contribution of the CNN enhancement to improved pattern recognition accuracy.
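
    The paper's specific CNN enhancement is not detailed in the abstract. As a rough point of reference, a minimal baseline CNN for a 10-class driver-image task (the class count is an assumption based on the Kaggle challenge) could look like the PyTorch sketch below; it is an illustration, not the authors' architecture.

```python
# Minimal baseline CNN for driver-image classification (illustrative only;
# the paper's enhanced architecture is not reproduced here).
import torch
import torch.nn as nn

class DriverCNN(nn.Module):
    def __init__(self, num_classes=10):  # 10 postures assumed from the Kaggle challenge
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = DriverCNN()(torch.randn(4, 3, 224, 224))  # batch of 4 RGB driver images
print(logits.shape)  # torch.Size([4, 10])
```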

    Driver Distraction Identification with an Ensemble of Convolutional Neural Networks

    Full text link
    The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves a 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localization, and skin segmentation. Finally, we present a thinned version of our ensemble that could achieve 84.64% classification accuracy and operate in a real-time environment. Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
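
    The genetically weighted ensemble can be pictured as a search over per-model weights applied to class-probability outputs. The toy sketch below evolves such weights against validation accuracy; the population size, mutation scheme, and fitness function are assumptions, not the paper's settings.

```python
# Toy genetic search for ensemble weights over per-model class probabilities
# (an illustration of the general idea, not the paper's exact algorithm).
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, probs, labels):
    """Validation accuracy of the weighted ensemble."""
    combined = np.tensordot(weights, probs, axes=1)      # (samples, classes)
    return np.mean(combined.argmax(axis=1) == labels)

def evolve_weights(probs, labels, pop=30, gens=50, mut=0.1):
    n_models = probs.shape[0]
    population = rng.random((pop, n_models))
    for _ in range(gens):
        scores = np.array([fitness(w, probs, labels) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]    # keep the fittest half
        children = parents + rng.normal(0, mut, parents.shape)  # mutate offspring
        population = np.clip(np.vstack([parents, children]), 0, None)
    best = population[np.argmax([fitness(w, probs, labels) for w in population])]
    return best / best.sum()

# Synthetic validation predictions from 3 models on 200 samples, 10 classes.
probs = rng.random((3, 200, 10))
labels = rng.integers(0, 10, 200)
print(evolve_weights(probs, labels))
```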

    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Full text link
    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to improved accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to the baseline SVM and neural network models. Comment: INTERSPEECH 201
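
    A polynomial fusion layer can be thought of as combining linear per-modality terms with their pairwise products before classification. The sketch below is one plausible reading of that idea, assuming illustrative feature dimensions for the facial, speech, and car-signal modalities; it is not the paper's exact layer.

```python
# Rough sketch of a polynomial fusion layer: project each modality to a shared
# dimension, then combine linear terms with pairwise products (second-order
# polynomial terms). Dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    def __init__(self, dims, hidden=64, num_classes=2):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        # linear terms + one product term per modality pair
        n_terms = len(dims) + len(dims) * (len(dims) - 1) // 2
        self.classifier = nn.Linear(n_terms * hidden, num_classes)

    def forward(self, inputs):
        h = [proj(x) for proj, x in zip(self.proj, inputs)]
        terms = list(h)
        for i in range(len(h)):
            for j in range(i + 1, len(h)):
                terms.append(h[i] * h[j])                # second-order interaction
        return self.classifier(torch.cat(terms, dim=-1))

face, speech, car = torch.randn(8, 128), torch.randn(8, 40), torch.randn(8, 16)
model = PolynomialFusion(dims=[128, 40, 16])
print(model([face, speech, car]).shape)  # torch.Size([8, 2])
```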

    SaferCross: Enhancing Pedestrian Safety Using Embedded Sensors of Smartphone

    Get PDF
    The number of pedestrian accidents continues to climb, and smartphone distraction is one of the biggest causes of pedestrian fatalities. In this paper, we develop SaferCross, a mobile system based on the embedded sensors of a smartphone to improve pedestrian safety by mitigating smartphone distraction. SaferCross adopts a holistic approach by identifying and developing essential system components that are missing in existing systems and integrating them into a "fully-functioning" mobile system for pedestrian safety. Specifically, we create algorithms for improving the accuracy and energy efficiency of pedestrian positioning, the effectiveness of phone activity detection, and real-time risk assessment. We demonstrate that SaferCross, through systematic integration of the developed algorithms, performs situation awareness effectively and provides a timely warning to the pedestrian based on the information obtained from smartphone sensors and Wi-Fi Direct-based peer-to-peer communication with approaching cars. Extensive experiments are conducted in a department parking lot for both component-level and integrated testing. The results demonstrate that the energy efficiency and positioning accuracy of SaferCross are improved by 52% and 72% on average compared with existing solutions that lack such support, and the phone-viewing event detection accuracy is over 90%. The integrated test results show that SaferCross alerts the pedestrian in a timely manner, with an average error of 1.6 sec relative to the ground truth, which can easily be compensated for by configuring the system to fire an alert message a couple of seconds early. Comment: Published in IEEE Access, 202
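
    The real-time risk assessment can be illustrated with a simplified time-to-collision check, assuming the phone knows its own position and receives an approaching car's position and speed over the peer-to-peer link. The thresholds and lead time below are illustrative, not SaferCross's actual parameters.

```python
# Simplified risk-assessment sketch (illustrative values, not SaferCross's
# actual parameters): estimate time-to-collision from the pedestrian's and an
# approaching car's positions and speed, and warn early enough to react.
import math

ALERT_LEAD_TIME = 2.0  # extra seconds of warning fired ahead of the danger point

def time_to_collision(ped_pos, car_pos, car_speed):
    """Seconds until the car reaches the pedestrian, assuming constant speed."""
    distance = math.dist(ped_pos, car_pos)  # positions in meters, speed in m/s
    return float("inf") if car_speed <= 0 else distance / car_speed

def should_alert(ped_pos, car_pos, car_speed, phone_in_use, reaction_time=1.5):
    """Alert a distracted pedestrian while there is still time to react."""
    ttc = time_to_collision(ped_pos, car_pos, car_speed)
    return phone_in_use and ttc <= reaction_time + ALERT_LEAD_TIME

# Car 30 m away approaching at 10 m/s while the pedestrian is viewing the phone.
print(should_alert((0.0, 0.0), (30.0, 0.0), 10.0, phone_in_use=True))  # True
```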

    Characterizing driving behavior using automatic visual analysis

    Full text link
    In this work, we present the problem of rash driving detection using a single wide-angle camera sensor, particularly useful in the Indian context. To our knowledge, this rash driving problem has not been addressed using image processing techniques (existing works use other sensors, such as accelerometers). The car image processing literature, though rich and mature, does not address the rash driving problem. In this work-in-progress paper, we present the need to address this problem, our approach and our future plans to build a rash driving detector. Comment: 4 pages, 7 figures, IBM-ICARE201
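
    Since this is a work-in-progress paper, the detection method itself is not spelled out. Purely as an illustration of a camera-only approach (not the authors' algorithm), the sketch below flags abrupt lateral ego-motion estimated from dense optical flow between consecutive frames.

```python
# Heavily simplified illustration (not the paper's method): estimate apparent
# lateral ego-motion from dense optical flow between consecutive dash-cam
# frames and flag abrupt swerves when the frame-to-frame change is large.
import cv2
import numpy as np

SWERVE_THRESHOLD = 2.0  # illustrative jump in mean horizontal flow (pixels/frame)

def lateral_motion(prev_gray, gray):
    """Mean horizontal component of dense optical flow between two frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(flow[..., 0].mean())

def detect_swerves(frames):
    """Return indices where the lateral motion changes abruptly."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    lateral = [lateral_motion(a, b) for a, b in zip(grays, grays[1:])]
    return [i for i in range(1, len(lateral))
            if abs(lateral[i] - lateral[i - 1]) > SWERVE_THRESHOLD]

# Demo on synthetic frames; real use would read a dash-cam video stream.
frames = [np.random.randint(0, 255, (120, 160, 3), np.uint8) for _ in range(10)]
print(detect_swerves(frames))
```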