Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
The World Health Organization (WHO) reported 1.25 million deaths yearly due
to road traffic accidents worldwide and the number has been continuously
increasing over the last few years. Nearly a fifth of these accidents are
caused by distracted drivers. Existing work on distracted driver detection has
focused on a small set of distractions (mostly cell phone usage), often using
unreliable ad-hoc methods. In this paper, we present the first
publicly available dataset for driver distraction identification with more
distraction postures than existing alternatives. In addition, we propose a
reliable deep learning-based solution that achieves 90% accuracy. The system
consists of a genetically weighted ensemble of convolutional neural networks;
we show that weighting an ensemble of classifiers with a genetic algorithm
yields better classification confidence. We also study the effect of
different visual elements in distraction detection by means of face and hand
localizations, and skin segmentation. Finally, we present a thinned version of
our ensemble that could achieve 84.64% classification accuracy and operate in a
real-time environment.
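The genetically weighted ensemble described in this abstract can be sketched in a few lines: per-model class probabilities are fused under a weight vector, and a toy genetic algorithm (elitist selection plus Gaussian mutation; all hyperparameters here are illustrative, not the paper's) searches for weights that maximize validation accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_accuracy(weights, probs, labels):
    """Accuracy of the weighted-average ensemble.
    probs has shape (n_models, n_samples, n_classes)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize weights
    fused = np.tensordot(w, probs, axes=1)   # (n_samples, n_classes)
    return float((fused.argmax(axis=1) == labels).mean())

def evolve_weights(probs, labels, pop_size=30, generations=40, mut_scale=0.1):
    """Toy genetic algorithm: keep the fittest half, refill by mutation."""
    n_models = probs.shape[0]
    pop = rng.random((pop_size, n_models)) + 1e-6
    pop[0] = np.ones(n_models)               # seed with the uniform ensemble
    for _ in range(generations):
        fitness = np.array([ensemble_accuracy(w, probs, labels) for w in pop])
        elite = pop[np.argsort(fitness)[::-1][: pop_size // 2]]
        children = np.abs(elite + rng.normal(0.0, mut_scale, elite.shape)) + 1e-6
        pop = np.vstack([elite, children])
    fitness = np.array([ensemble_accuracy(w, probs, labels) for w in pop])
    best = pop[fitness.argmax()]
    return best / best.sum()
```

Because the best individual always survives selection, the evolved weights can do no worse on the validation set than the uniform average they are seeded with.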
Multimodal Polynomial Fusion for Detecting Driver Distraction
Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone.
Although there has been a considerable amount of research on modeling the
distracted behavior of drivers under various conditions, accurate automatic
detection using multiple modalities and especially the contribution of using
the speech modality to improve accuracy has received little attention. This
paper introduces a new multimodal dataset for distracted driving behavior and
discusses automatic distraction detection using features from three modalities:
facial expression, speech and car signals. Detailed multimodal feature analysis
shows that adding more modalities monotonically increases the predictive
accuracy of the model. Finally, a simple and effective multimodal fusion
technique using a polynomial fusion layer shows superior distraction detection
results compared to the baseline SVM and neural network models. Comment: INTERSPEECH 201
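The polynomial fusion idea can be illustrated with a minimal sketch: besides concatenating the unimodal feature vectors, the fused representation carries element-wise product interaction terms across modalities. This is a simplification assuming equal feature dimensions per modality, not the paper's exact learned layer.

```python
import numpy as np

def polynomial_fusion(face, speech, car):
    """Fuse three equal-length modality feature vectors with their
    second- and third-order element-wise interaction terms (sketch)."""
    interactions = [face * speech, face * car, speech * car,
                    face * speech * car]
    return np.concatenate([face, speech, car, *interactions])
```

With d-dimensional inputs the fused vector has 7d entries; a downstream classifier would then operate on this representation.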
Video surveillance for monitoring driver's fatigue and distraction
Fatigue and distraction in drivers pose a great risk to road safety. For both types of driver behavior problems, image analysis of eyes, mouth and head movements gives valuable information. We present in this paper a system for monitoring fatigue and distraction in drivers by evaluating their performance using image processing. We extract visual features related to nod, yawn, eye closure and opening, and mouth movements to detect fatigue as well as to identify diversion of attention from the road. We achieve an average sensitivity and specificity of 98.3% and 98.8% for detecting driver fatigue, and 97.3% and 99.2% for detecting driver distraction, when evaluating four video sequences with different drivers.
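The abstract does not give its decision rule; a common proxy for the eye-closure feature it mentions is the fraction of recent frames in which the eyes are closed (PERCLOS-style). A minimal sliding-window sketch, with an assumed window length and threshold:

```python
from collections import deque

class FatigueMonitor:
    """Flag fatigue when the fraction of closed-eye frames in a sliding
    window exceeds a threshold (window and threshold are illustrative)."""

    def __init__(self, window: int = 90, threshold: float = 0.3):
        self.frames = deque(maxlen=window)
        self.threshold = threshold

    def update(self, eyes_closed: bool) -> bool:
        """Record one frame's eye state; return True if fatigue is flagged."""
        self.frames.append(bool(eyes_closed))
        closure = sum(self.frames) / len(self.frames)
        return closure > self.threshold
```

Yawn and nod features could be accumulated the same way, with the per-frame states supplied by the image-processing front end.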
Detecting Distracted Driving with Deep Learning
© Springer International Publishing AG 2017. Driver distraction is the leading factor in most car crashes and near-crashes. This paper discusses the types, causes and impacts of distracted driving. A deep learning approach is then presented for the detection of such driving behaviors using images of the driver, where an enhancement has been made to a standard convolutional neural network (CNN). Experimental results on the Kaggle challenge dataset have confirmed the capability of a CNN in this complicated computer vision task and illustrated the contribution of the CNN enhancement to better pattern recognition accuracy. Peer reviewed.
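The abstract does not describe its CNN enhancement, but as a generic illustration of the building block involved, a single valid-mode 2-D convolution, the core operation a CNN stacks and learns, can be written as:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation over a single-channel image,
    the core operation of a convolutional layer (naive loop version)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A real network applies many learned kernels per layer, interleaved with nonlinearities and pooling; frameworks implement the same operation far more efficiently.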
Low-cost vehicle driver assistance system for fatigue and distraction detection
In recent years, the automotive industry has been equipping vehicles with sophisticated, and often expensive, systems for driving assistance. However, this vehicular technology is focused more on facilitating driving than on monitoring the driver. This paper presents a low-cost vehicle driver assistance system that monitors the driver's activity in order to prevent an accident. The system consists of 4 sensors that monitor physical parameters and driver position. From these values, the system generates a series of acoustic signals to alert the vehicle driver and avoid an accident. Finally, the system is tested to verify its proper operation. This work has been partially supported by the "Programa para la Formación de Personal Investigador (FPI-2015-S2-884)" of the Universitat Politècnica de València. Sendra, S.; García-García, L.; Jimenez, JM.; Lloret, J. (2017). Low-cost vehicle driver assistance system for fatigue and distraction detection. In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Springer Verlag. 69-78. doi:10.1007/978-3-319-51207-5_7
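The paper's sensor channels and thresholds are not given here, but the alerting logic of such a 4-sensor system can be sketched as simple range checks (all channel names and safe ranges below are invented for illustration):

```python
def check_driver(readings, safe_ranges):
    """Return the names of sensor channels whose readings fall outside
    their assumed safe range; a non-empty result would trigger an
    acoustic alert to the driver."""
    return [name for name, value in readings.items()
            if not (safe_ranges[name][0] <= value <= safe_ranges[name][1])]

# Hypothetical channels: grip force on the wheel, seat pressure,
# head tilt (degrees) and eyelid aperture (fraction open).
SAFE = {"grip": (5, 100), "seat": (20, 120),
        "head_tilt": (-20, 20), "eyelid": (0.4, 1.0)}
```

In a deployment, each out-of-range channel would map to a distinct acoustic signal so the driver can tell the warnings apart.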
Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification
Accurate, robust, inexpensive gaze tracking in the car can help keep a driver
safe by facilitating the more effective study of how to improve (1) vehicle
interfaces and (2) the design of future Advanced Driver Assistance Systems. In
this paper, we estimate head pose and eye pose from monocular video using
methods developed extensively in prior work and ask two new interesting
questions. First, how much better can we classify driver gaze using head and
eye pose versus just using head pose? Second, are there individual-specific
gaze strategies that strongly correlate with how much gaze classification
improves with the addition of eye pose information? We answer these questions
by evaluating data drawn from an on-road study of 40 drivers. The main insight
of the paper is conveyed through the analogy of an "owl" and "lizard" which
describes the degree to which the eyes and the head move when shifting gaze.
When the head moves a lot ("owl"), not much classification improvement is
attained by estimating eye pose on top of head pose. On the other hand, when
the head stays still and only the eyes move ("lizard"), classification accuracy
increases significantly from adding in eye pose. We characterize how that
accuracy varies between people, gaze strategies, and gaze regions. Comment: Accepted for publication in IET Computer Vision.
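The owl/lizard effect can be illustrated with a toy nearest-centroid gaze classifier (a stand-in for the study's actual method; the pose values below are invented): for a "lizard" driver whose head barely moves, head-pose features alone cannot separate gaze regions, while adding eye pose can.

```python
import numpy as np

def nearest_region(features, centroids):
    """Assign a pose feature vector to the gaze region whose centroid
    is closest in Euclidean distance."""
    names = list(centroids)
    dists = [np.linalg.norm(features - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

# "Lizard" driver: head-pose centroids (first two dims) are identical for
# both gaze regions; only the eye-pose dims (last two) differ.
head_eye = {"road":   np.array([0.0, 0.0, 10.0, 0.0]),
            "mirror": np.array([0.0, 0.0, -10.0, 0.0])}
head_only = {region: c[:2] for region, c in head_eye.items()}
sample = np.array([0.0, 0.0, -9.0, 1.0])   # a glance toward the mirror
```

With the full feature vector the glance is assigned to the mirror region; truncated to head pose alone, the two centroids coincide and the classifier can only guess, which is exactly the regime where the paper reports the largest gains from eye pose.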