491 research outputs found

    Enhancing protection of vehicle drivers and road safety by deploying ADAS and Facial Features Pattern Analysis (FFPA) technologies

    The latest technologies associated with Intelligent Transportation Systems (ITS) are designed to minimize the number of personal injuries in road accidents and to improve overall road safety. Driver behaviour is a major factor in many accidents on Hong Kong urban road links. In particular, driver states such as fatigue, drowsiness and loss of concentration are leading causes of road accidents, as they impair the driver's ability to control the vehicle and make sound decisions. Most research sources indicate that this kind of driver impairment becomes particularly pronounced after two to three hours of continuous driving. Traffic data from the Transport Department of the HKSAR attribute around 82% of personal-injury road accidents to driver fault. This paper applies the latest technology to a group of transport vehicles, namely taxis. The objective is to monitor, record and analyse driver fatigue and drowsiness by means of an advanced AI system, a facial recognition detection system (the sensors) and early warning devices (LDWS) via ADAS technology. The results are used to provide real-time early warnings and subsequent analysis to transport operators and researchers for better and safer management of their fleets. The system aims to protect all road users, including drivers, passengers and pedestrians, and in turn to save community resources such as the medical and social services consumed in treating injured persons.
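    The abstract describes the monitoring loop only at a high level. As a purely illustrative sketch (not the paper's implementation), a minimal real-time drowsiness warning loop built from OpenCV's bundled Haar cascades might look as follows; the camera index, cascade choice and frame threshold are all assumptions:

```python
# Minimal sketch of a per-frame drowsiness warning loop using OpenCV's
# bundled Haar cascades. Thresholds are illustrative assumptions, not
# values from the study.
import time
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSED_FRAMES_FOR_ALERT = 15  # assumed: roughly 0.5 s at 30 fps

def run(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    closed_streak = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes_visible = False
        for (x, y, w, h) in faces:
            roi = gray[y:y + h // 2, x:x + w]  # search the upper half of the face
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
                eyes_visible = True
        closed_streak = 0 if eyes_visible else closed_streak + 1
        if closed_streak >= CLOSED_FRAMES_FOR_ALERT:
            print(time.strftime("%H:%M:%S"), "WARNING: possible drowsiness")
    cap.release()

if __name__ == "__main__":
    run()
```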

    Yawn analysis with mouth occlusion detection

    One of the most common signs of tiredness or fatigue is yawning. Naturally, identification of fatigued individuals would be helped if yawning is detected. Existing techniques for yawn detection are centred on measuring the mouth opening. This approach, however, may fail if the mouth is occluded by the hand, as is frequently the case. The work presented in this paper focuses on a technique to detect yawning whilst also allowing for cases of occlusion. For measuring the mouth opening, a new technique which applies an adaptive colour region is introduced. For detecting yawning whilst the mouth is occluded, local binary pattern (LBP) features are used to also identify facial distortions during yawning. In this research, the Strathclyde Facial Fatigue (SFF) database, which contains genuine video footage of fatigued individuals, is used for training, testing and evaluation of the system.
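    For readers unfamiliar with the LBP cue mentioned above, a minimal sketch of extracting an LBP histogram from a lower-face region is shown below (using scikit-image; the region coordinates, radius and input file are illustrative assumptions, not the paper's settings):

```python
# Illustrative sketch only: LBP histogram of a mouth/lower-face region,
# in the spirit of the LBP-based distortion cue described above.
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

P, R = 8, 1  # 8 neighbours at radius 1 (a common choice, assumed here)

def lbp_histogram(gray_region: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(gray_region, P, R, method="uniform")
    n_bins = P + 2  # the "uniform" mapping yields P + 2 distinct labels
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

if __name__ == "__main__":
    frame = color.rgb2gray(io.imread("driver_frame.png"))  # hypothetical input image
    h, w = frame.shape
    mouth_region = frame[int(0.6 * h):, int(0.25 * w):int(0.75 * w)]
    print(lbp_histogram(mouth_region))
```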

    A Comparative Emotions-detection Review for Non-intrusive Vision-Based Facial Expression Recognition

    Affective computing advocates for the development of systems and devices that can recognize, interpret, process, and simulate human emotion. In computing, the field seeks to enhance the user experience by finding less intrusive automated solutions. However, initiatives in this area focus on solitary emotions, which limits the scalability of the approaches. Previous reviews in this area have also focused on solitary emotions, presenting challenges to future researchers when adopting their recommendations. This review aims to highlight gaps in the application areas of Facial Expression Recognition techniques by conducting a comparative analysis of the emotion-detection datasets, algorithms, and results reported in existing studies. The systematic review adopted the PRISMA model and analyzed eighty-three publications. Findings from the review show that different emotions call for different Facial Expression Recognition techniques, which should be analyzed when conducting Facial Expression Recognition. Keywords: Facial Expression Recognition, Emotion Detection, Image Processing, Computer Vision

    Embedded Camera-Based Driver Assistance System

    Driving safety and comfort are important aspects that the automotive industry must address. A system able to give the driver an early warning helps prevent accidents; a driver assistance system is developed to provide this function. Camera-based driver assistance systems are developing rapidly, in line with advances in digital image processing techniques and computer systems. This research aims to develop a camera-based driver assistance system capable of detecting driver fatigue and gaze/concentration, traffic signs, lane markings, and objects or vehicles ahead. In the first year of the research, a driver fatigue detection system was developed using an embedded camera installed in the vehicle cabin. An embedded computer system is used as the main processor for the camera-based detection. Using this embedded system, the system can be implemented in a vehicle easily and at low cost.
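    As an illustration of the kind of fatigue logic such an embedded system might run, the following sketch aggregates per-frame eye-state decisions into a PERCLOS-style score over a sliding window; the window length and alert threshold are assumed values, not those of the project:

```python
# Minimal sketch (an assumption, not the project's code): aggregating
# per-frame eye-state decisions into a PERCLOS-style fatigue score.
from collections import deque

class PerclosMonitor:
    """Fraction of recent frames with closed eyes over a sliding window."""

    def __init__(self, window_frames: int = 900, alert_threshold: float = 0.3):
        # 900 frames is about 30 s at 30 fps; the threshold is illustrative.
        self.window = deque(maxlen=window_frames)
        self.alert_threshold = alert_threshold

    def update(self, eyes_closed: bool) -> bool:
        self.window.append(1 if eyes_closed else 0)
        perclos = sum(self.window) / len(self.window)
        return perclos >= self.alert_threshold

if __name__ == "__main__":
    monitor = PerclosMonitor()
    # Feed it the per-frame output of any eye-state classifier:
    for closed in [False] * 100 + [True] * 400:
        if monitor.update(closed):
            print("Fatigue warning")
            break
```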

    Driver Distraction Identification with an Ensemble of Convolutional Neural Networks

    The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been increasing continuously over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep-learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and can operate in a real-time environment. (arXiv admin note: substantial text overlap with arXiv:1706.0949)
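    To make the ensemble idea concrete, here is a simplified sketch of genetically weighting model outputs on held-out predictions; the GA operators, population size and toy data are illustrative assumptions, not the authors' configuration:

```python
# Sketch of a genetically weighted ensemble: per-model class probabilities
# are fused with weights tuned by a small genetic algorithm on held-out data.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_accuracy(weights, probs, labels):
    # probs: (n_models, n_samples, n_classes); weights: (n_models,)
    w = np.clip(weights, 0, None)
    w = w / (w.sum() + 1e-12)
    fused = np.tensordot(w, probs, axes=1)  # -> (n_samples, n_classes)
    return float((fused.argmax(axis=1) == labels).mean())

def evolve_weights(probs, labels, pop=30, gens=50, sigma=0.1):
    n_models = probs.shape[0]
    population = rng.random((pop, n_models))
    for _ in range(gens):
        fitness = np.array([ensemble_accuracy(ind, probs, labels) for ind in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]     # keep the fittest half
        children = parents + rng.normal(0, sigma, parents.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    best = max(population, key=lambda ind: ensemble_accuracy(ind, probs, labels))
    best = np.clip(best, 0, None)
    return best / (best.sum() + 1e-12)

if __name__ == "__main__":
    # Toy stand-in for validation predictions of 3 CNNs on 200 samples, 10 classes.
    probs = rng.dirichlet(np.ones(10), size=(3, 200))
    labels = rng.integers(0, 10, size=200)
    print("learned weights:", np.round(evolve_weights(probs, labels), 3))
```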

    A Method for Recognizing Fatigue Driving Based on Dempster-Shafer Theory and Fuzzy Neural Network

    This study proposes a method based on Dempster-Shafer theory (DST) and a fuzzy neural network (FNN) to improve the reliability of recognizing fatigue driving. The method assesses driving states using multi-feature fusion. First, the FNN is introduced to obtain the basic probability assignment (BPA) of each piece of evidence, given the lack of a general solution for defining the BPA function. Second, a modified algorithm that revises conflicting evidence is proposed to reduce unreasonable fusion results when unreliable information is present. Finally, the recognition result is given by combining the revised evidence with Dempster's rule. Experimental results demonstrate that the proposed method obtains reasonable results by combining information from multiple features and can effectively and accurately describe driving states.
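    For reference, Dempster's rule of combination, which the method applies to the revised evidence, can be sketched compactly as below; the frame of discernment and the example mass values are illustrative, and the paper's FNN-derived BPAs and conflict-revision step are omitted:

```python
# Compact sketch of Dempster's rule for two bodies of evidence over the
# frame {fatigue, normal}, with frozensets as focal elements. Illustrative
# only; the paper also revises conflicting evidence before combining.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

if __name__ == "__main__":
    F, N = frozenset({"fatigue"}), frozenset({"normal"})
    theta = F | N                          # full frame (ignorance)
    m_eyes = {F: 0.6, N: 0.1, theta: 0.3}  # e.g. eyelid-closure evidence (assumed masses)
    m_yawn = {F: 0.5, N: 0.2, theta: 0.3}  # e.g. yawning evidence (assumed masses)
    print(dempster_combine(m_eyes, m_yawn))
```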

    Face tracking with active models for a driver monitoring application

    Driver inattention is one of the main causes of traffic accidents. Monitoring the driver to detect inattention is a complex problem that involves physiological and behavioural elements. A computer vision system for inattention detection comprises several processing stages, and this thesis focuses on tracking the driver's face. The doctoral thesis proposes a new set of driver videos, recorded in a real vehicle and in two realistic simulators, containing most of the behaviours present in driving, including gestures, head turns, interaction with the sound system and other distractions, and drowsiness. This database, RS-DMV, is used to evaluate the performance of the methods proposed in the thesis and of others from the state of the art. The thesis analyses the performance of Active Shape Models (ASM) and Constrained Local Models (CLM), considered a priori to be of interest. Specifically, the Stacked Trimmed ASM (STASM) method, which integrates a series of improvements over the original ASM, was evaluated: it shows high accuracy in all tests when the face is frontal to the camera, but it fails when the face is turned and its execution speed is very low. CLM runs faster but has much lower accuracy in all cases. The third method evaluated is Simultaneous Modelling and Tracking (SMAT), which characterizes shape and texture incrementally from previously found samples. The texture around each point of the shape that defines the face is modelled by a set of clusters of past samples. The thesis proposes three alternative clustering methods for the texture and a shape model trained off-line with a robust fitting function. The proposed alternatives yield a substantial improvement in both tracking accuracy and robustness to head turns, occlusions, gestures and illumination changes. The proposed methods also have a low computational load and can run at around 100 frames per second on a desktop computer.
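    A rough sketch of the incremental texture-cluster idea behind SMAT-style tracking is given below; the class names, patch sizes and thresholds are hypothetical, and the thesis's actual clustering variants differ:

```python
# Rough sketch (not the thesis code): keep a small set of past texture
# patches per landmark and match the current frame against the closest
# stored exemplar using normalized correlation.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-6)
    b = (b - b.mean()) / (b.std() + 1e-6)
    return float((a * b).mean())

class PatchCluster:
    def __init__(self, max_patches: int = 10, accept: float = 0.6):
        self.patches: list[np.ndarray] = []
        self.max_patches = max_patches
        self.accept = accept  # minimum similarity before updating the model

    def match(self, patch: np.ndarray) -> float:
        if not self.patches:
            return -1.0
        return max(ncc(patch, p) for p in self.patches)

    def update(self, patch: np.ndarray) -> None:
        score = self.match(patch)
        if score < 0 or score >= self.accept:
            self.patches.append(patch)
            if len(self.patches) > self.max_patches:
                self.patches.pop(0)  # drop the oldest exemplar

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cluster = PatchCluster()
    cluster.update(rng.random((15, 15)))
    print(cluster.match(rng.random((15, 15))))
```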

    An Embedded In-Cabin Lightweight High-Performance 3D Gaze Estimation System

    Gaze estimation has gained interest in recent years as an important cue for inferring the internal cognitive state of humans. Whether estimating the 3D gaze vector or the point of gaze (PoG), gaze estimation has been applied in various fields, such as human-robot interaction, augmented reality, medicine, aviation and automotive. In the latter field, as part of Advanced Driver-Assistance Systems (ADAS), it allows the development of cutting-edge systems capable of mitigating road accidents by monitoring driver distraction. Gaze estimation can also be used to enhance the driving experience, for instance in autonomous driving, and to improve comfort with augmented-reality components that can be commanded by the driver's eyes. Although several high-performance real-time inference works already exist, only a few can work with just an RGB camera on computationally constrained devices such as a microcontroller. This work aims to develop a low-cost, efficient and high-performance embedded system capable of estimating the driver's gaze using deep learning and an RGB camera. The proposed system achieves near-SOTA performance with about 90% less memory footprint. Its ability to generalize to unseen environments was evaluated through a live demonstration, where high performance and near real-time inference were obtained using a webcam and a Raspberry Pi 4.
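    The abstract does not detail the architecture or toolchain. As a hedged sketch of the general workflow implied (a small CNN gaze regressor shrunk for a Raspberry Pi-class device), one might use TensorFlow Lite post-training quantization as below; the layer sizes, output convention and file names are assumptions:

```python
# Hypothetical sketch: a tiny Keras gaze regressor (yaw/pitch from a face
# crop) shrunk with TensorFlow Lite quantization for an embedded target.
# The paper's actual model and toolchain may differ.
import tensorflow as tf

def build_tiny_gaze_net(input_shape=(64, 64, 3)) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),  # gaze yaw and pitch (assumed output convention)
    ])

if __name__ == "__main__":
    model = build_tiny_gaze_net()
    # ... model.fit(...) on a gaze dataset would go here ...
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    open("gaze_net.tflite", "wb").write(converter.convert())
```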

    Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena

    Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected with a small, wearable form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and we discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas: (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.