
    A study on tiredness assessment by using eye blink detection

    In this paper, the loss of attention of automotive drivers is studied by using eye blink detection. Facial landmark detection is used to locate the eyes, and eye blinks are then detected using the Eye Aspect Ratio (EAR). The driver's tiredness is decided by comparing the duration of eye closure against a fixed period, and drowsiness is also detected by counting the total number of eye blinks per minute and comparing it with a known standard value. If either condition is met, the system decides the driver is unconscious. A total of 120 samples were taken with the light source placed at the front, back, and side, 40 samples per position. The maximum error rate, 15%, occurred with the light source placed at the back; the best case, a 7.5% error rate, occurred with the light source at the front. Overall, blink detection gave an average error of 11.67% across the light-source positions. Another 120 samples were taken at different times of day to measure total eye blinks per minute. The blink rate was highest in the morning, averaging 5.78 blinks per minute, and lowest at midnight, at 3.33 blinks per minute. The system performed satisfactorily and captured the eye blink pattern with 92.7% accuracy.
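The EAR-based pipeline described above can be sketched as follows. The landmark ordering follows the common dlib six-point eye convention, and the 0.2 EAR threshold and 15-frame closure window are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye Aspect Ratio from six 2-D eye landmarks, ordered p1..p6 with
    p1/p4 the horizontal corners (dlib convention, assumed here)."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def drowsy(ear_series, ear_threshold=0.2, min_closed_frames=15):
    """Flag tiredness when the EAR stays below the threshold for a run of
    consecutive frames, i.e. the eyes remain closed too long."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < ear_threshold else 0
        if run >= min_closed_frames:
            return True
    return False
```

In practice the per-frame landmarks would come from a facial landmark detector; the blink-per-minute check from the abstract would run alongside this closure-duration check.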

    Detection of Driver Drowsiness and Distraction Using Computer Vision and Machine Learning Approaches

    Drowsiness and distracted driving are leading factors in most car crashes and near-crashes. This research study explores the application of both conventional computer vision and deep learning approaches to the detection of drowsiness and distraction in drivers. In the first part of this MPhil study, conventional computer vision approaches were used to develop a robust drowsiness and distraction detection system based on yawning detection, head pose detection and eye blink detection. These algorithms were implemented using existing hand-crafted features. Detection and classification experiments were performed on small image datasets to measure the performance of the system. It was observed that hand-crafted features combined with a robust classifier such as an SVM give better performance than previous approaches. Although the results were satisfactory, conventional computer vision approaches have drawbacks and challenges: the definition and extraction of hand-crafted features makes such algorithms subjective in nature and less adaptive in practice. In contrast, deep learning approaches automate feature selection and can be trained to learn the most discriminative features without human input. The second half of this study therefore investigated deep learning approaches for the detection of distracted driving. One advantage of the applied methodology is that the CNN enhancement contributes to better pattern-recognition accuracy and can learn features from various regions of the human body simultaneously.
The performance of four convolutional deep net architectures (AlexNet, ResNet, MobileNet and NASNet) was compared, triplet training was investigated, and the impact of combining a support vector classifier (SVC) with a trained deep net was explored. The images used in the deep-net experiments are from the State Farm Distracted Driver Detection dataset hosted on Kaggle, each of which captures the entire body of a driver. The best results were obtained with NASNet trained using triplet loss and combined with an SVC. One advantage of deep learning approaches is their ability to learn discriminative features from various regions of the human body simultaneously; this ability has enabled them to reach human-level accuracy.
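The triplet training mentioned above optimizes embeddings so that same-class pairs sit closer than different-class pairs by a margin; an SVC is then fitted on those embeddings. A minimal sketch of the loss itself (the 0.2 margin is an illustrative default, not the study's reported setting):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors: pull the anchor toward
    the positive (same class) and push it away from the negative; the loss
    is zero once the negative is at least `margin` farther than the positive
    in squared Euclidean distance."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

During training this loss would be averaged over mined triplets of deep-net embeddings; at inference time the frozen embeddings feed the SVC.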

    The effect of electronic word of mouth communication on purchase intention moderate by trust: a case online consumer of Bahawalpur Pakistan

    This study aims to improve on previous research findings by filling gaps in the literature: it examines the effect of e-WOM on purchase intention, with brand trust as a moderator of that relationship, among online users in Bahawalpur city, Pakistan. Data were collected from online users in Bahawalpur using convenience sampling, with instruments adopted from previous studies. A quantitative survey methodology was used: 300 questionnaires were distributed in Bahawalpur city, chosen for ease, reliability, and simplicity, and an effective recovery rate of 67% yielded 202 valid responses for the analysis of the effect of e-WOM on purchase intention and the moderation analysis. The hypotheses were tested using Structural Equation Modeling (SEM) based on Partial Least Squares (PLS). The results show that e-WOM has a significantly positive effect on purchase intention and that trust significantly moderates the relationship between e-WOM and purchase intention. Adding brand trust to the model increased its explanatory power; few studies have examined brand trust as a moderator, and this study contributes to that literature. Unlike past studies focused on Western contexts, this study extends the regional literature on e-WOM and purchase intention to the Bahawalpur, Pakistan context. Future studies are recommended to examine the effect of trust in other countries to allow comparison of the findings.

    Efficient and Robust Driver Fatigue Detection Framework Based on the Visual Analysis of Eye States

    Fatigue detection based on vision is widely employed in vehicles due to its real-time and reliable detection results. With the coronavirus disease (COVID-19) outbreak, many proposed detection systems based on facial characteristics became unreliable because masks cover the face. In this paper, we propose a robust visual fatigue detection system for monitoring drivers that is robust to mask coverings, changing illumination and driver head movement. Our system has three main modules: face key point alignment, fatigue feature extraction and fatigue measurement based on fused features. The core techniques are: (1) a robust key point alignment algorithm that fuses global face information with regional eye information, (2) dynamic threshold methods to extract fatigue characteristics and (3) a stable fatigue measurement that fuses the percentage of eyelid closure (PERCLOS) with the proportion of long closure duration blinks (PLCDB). The performance of the proposed algorithm and methods is verified in experiments. The experimental results show that our key point alignment algorithm is robust across different scenes, and that our proposed fatigue measurement is more reliable due to the fusion of PERCLOS and PLCDB.
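The two fused indicators can be sketched from a per-frame eye-state sequence. The paper does not publish its exact thresholds or fusion rule, so the values and the simple OR-fusion below are stated assumptions for illustration only:

```python
def perclos(closed_flags):
    """PERCLOS: fraction of frames in the window with the eyelids closed."""
    return sum(closed_flags) / len(closed_flags)

def plcdb(closed_flags, long_blink_frames=10):
    """PLCDB: proportion of blinks whose closure run is 'long'."""
    runs, run = [], 0
    for closed in closed_flags:
        if closed:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    if not runs:
        return 0.0
    return sum(r >= long_blink_frames for r in runs) / len(runs)

def fatigued(closed_flags, perclos_thr=0.15, plcdb_thr=0.5):
    """Assumed fusion rule: either indicator over its threshold flags fatigue."""
    return perclos(closed_flags) > perclos_thr or plcdb(closed_flags) > plcdb_thr
```

A real system would compute both measures over a sliding window of eye states produced by the key point alignment module.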

    Iris Region and Bayes Classifier for Robust Open or Closed Eye Detection

    Abstract: This paper presents a robust method to detect sequences of open or closed eye states in low-resolution images, which can lead to efficient eye blink detection for practical use. Eye states and eye blink detection play an important role in human-computer interaction (HCI) systems. Eye blinks can be used as a communication method for people with severe disabilities, providing an alternative input modality to control a computer, or as a detection method for driver drowsiness. The proposed approach is based on an analysis of eye and skin in the eye region image. Evidently, the iris and sclera regions grow as a person opens an eye and shrink as the eye closes. In particular, the distributions of these eye components during each eye state form a bell-like shape. Using color tone differences, the iris and sclera regions can be separated from the skin. A naive Bayes classifier then effectively classifies the eye states. A further study also shows that the iris region as a feature gives a better detection rate than the sclera region. The approach works online with low-resolution images and in typical lighting conditions. It was successfully tested in  image sequences (  frames) and achieved high accuracy of over  for open eyes and over  for closed eyes compared to the ground truth. In particular, it improves open-eye state detection by almost  compared to a recent commonly used approach, the template matching algorithm.
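Since the abstract describes the per-state feature distributions as bell-shaped, a one-feature Gaussian naive Bayes classifier fits naturally. A minimal sketch, assuming the feature is the extracted iris-region pixel count and equal class priors (both assumptions; the paper's exact feature scaling is not given here):

```python
import math

class GaussianNB1D:
    """Minimal one-feature Gaussian naive Bayes: fit a normal distribution
    to the iris-region size for each eye state, classify by maximum
    log-likelihood (equal priors assumed)."""

    def fit(self, samples):
        # samples: {state label: list of feature values}
        self.params = {}
        for label, xs in samples.items():
            mu = sum(xs) / len(xs)
            var = sum((x - mu) ** 2 for x in xs) / len(xs)
            self.params[label] = (mu, max(var, 1e-9))  # guard zero variance
        return self

    def predict(self, x):
        def log_pdf(v, mu, var):
            return -0.5 * (math.log(2 * math.pi * var) + (v - mu) ** 2 / var)
        return max(self.params, key=lambda lb: log_pdf(x, *self.params[lb]))
```

Training data would come from labeled frames; at run time, each frame's iris-region measurement is classified, and transitions between the two states mark blinks.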

    A framework for context-aware driver status assessment systems

    The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety, performance and the environment. The Green ITS project is among the efforts in that regard. Safety is a major customer and manufacturer concern, and much effort has therefore been directed to developing cutting-edge technologies able to assess driver status in terms of alertness and suitability. To that end, this thesis aims to create a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we perform multi-modal analysis and data fusion to infer as much knowledge as possible about the driver. Lastly, the project is to be continued by other students, so the system should be modular and well documented. With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation on driver status assessment, and a prototype of real-time driver status assessment software is integrated into the platform. To make the system context-aware, we designed a driver identification module based on audio-visual data fusion: at the beginning of each driving session, the user is identified and background knowledge about them is loaded to better understand and analyze their behavior. A driver status assessment system was then constructed from two modules. The first detects driver fatigue using an infrared camera; fatigue is inferred via the percentage of eye closure, the best indicator of fatigue for vision systems. The second recognizes driver distraction using a Kinect sensor.
Using body, head, and facial expressions, a fusion strategy is employed to deduce the type of distraction the driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects have been studied here primarily because of their dramatic impact on traffic safety. Experimental results show that our system is efficient for driver identification and driver inattention detection tasks. It is also very modular and could be further complemented by additional driver status analyses, context, or sensor acquisition.
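The audio-visual identification module fuses evidence from two sensors. One common strategy for this, sketched here under stated assumptions (the thesis does not specify its exact rule, and the weights and driver names below are hypothetical), is score-level fusion of per-driver match scores:

```python
def fuse_scores(audio_scores, video_scores, w_audio=0.4, w_video=0.6):
    """Late (score-level) audio-visual fusion: weight each modality's
    per-driver match score and return the best-matching enrolled driver.
    Weights are illustrative; both dicts must share the same driver keys."""
    fused = {driver: w_audio * audio_scores[driver] + w_video * video_scores[driver]
             for driver in audio_scores}
    return max(fused, key=fused.get)
```

Score-level fusion is attractive here because it lets a noisy modality (e.g. audio in a loud cabin) be down-weighted without retraining either recognizer.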

    An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles

    Nowadays, automobile manufacturers are making efforts to develop ways to make cars fully safe. Monitoring the driver's actions with computer vision techniques to detect driving mistakes in real time, and then planning autonomous driving maneuvers to avoid collisions, is one of the most important issues investigated in machine vision and Intelligent Transportation Systems (ITS). The main goal of this study is to prevent accidents caused by fatigue, drowsiness, and driver distraction. To avoid such incidents, this paper proposes an integrated safety system that continuously monitors the driver's attention and the vehicle's surroundings, and finally decides whether the actual steering control status is safe. For this purpose, we equipped an ordinary car called FARAZ with a vision system consisting of four mounted cameras, along with a universal car tool for communicating with the surrounding factory-installed sensors and other car systems and for sending commands to actuators. The proposed system leverages a scene understanding pipeline using deep convolutional encoder-decoder networks and a driver state detection pipeline. We have been identifying and assessing domestic capabilities for developing these technologies, specifically for ordinary vehicles, in order to manufacture smart cars and also to provide an intelligent system that increases safety and assists the driver in various conditions and situations.
    Comment: 15 pages and 5 figures. Submitted to the international conference on Contemporary issues in Data Science (CiDaS 2019). Learn more about this project at https://iasbs.ac.ir/~ansari/fara

    A CNN-LSTM-based Deep Learning Approach for Driver Drowsiness Prediction

    Abstract: The development of neural networks and machine learning techniques has recently become the cornerstone of many artificial intelligence applications, which are now found in practically every aspect of daily life. Predicting drowsiness is one of the most valuable applications of artificial intelligence for reducing the rate of traffic accidents. According to earlier studies, drowsy driving is responsible for 25 to 50% of all traffic accidents, which account for 1,200 deaths and 76,000 injuries annually. The goal of this research is to reduce car accidents caused by drowsy drivers. This research tests a number of popular deep learning-based models and presents a novel deep learning-based model for predicting driver drowsiness using a combination of convolutional neural networks (CNN) and Long Short-Term Memory (LSTM), achieving results superior to those of state-of-the-art methods. With its convolutional layers, the CNN has excellent feature extraction abilities, whereas the LSTM can learn sequential dependencies. The National Tsing Hua University (NTHU) driver drowsiness dataset is used to test the model and compare it against several current and state-of-the-art models. The proposed model outperformed the state of the art, reaching up to 98.30% training accuracy and 97.31% validation accuracy.
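In a CNN-LSTM pipeline like the one described, the CNN produces a feature vector per video frame and the LSTM consumes that sequence to model temporal dependencies. A minimal NumPy sketch of the recurrent half only (gate layout and dimensions are illustrative, not the paper's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step over a per-frame feature vector x (e.g. CNN
    features for one video frame). W (4h x d), U (4h x h) and b (4h,)
    stack the input, forget, cell and output gate parameters."""
    z = W @ x + U @ h + b
    hs = h.shape[0]
    i, f, g, o = z[:hs], z[hs:2 * hs], z[2 * hs:3 * hs], z[3 * hs:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated cell update
    h_new = sigmoid(o) * np.tanh(c_new)               # gated hidden output
    return h_new, c_new
```

Iterating this step over a clip's frame features yields a final hidden state that a classifier head would map to a drowsy/alert prediction; in the actual paper the whole stack is trained end-to-end.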