67 research outputs found

    Drowziness Detection System using Image Processing

    Get PDF
    There have been many approaches to drowsiness detection. Parameters taken into consideration include the eye-opening window, the number of blinks over a given time period, and the number of yawns; about 12 facial features can be determined by a camera mounted on the circuit board. The parameter considered here is only the eye-opening window. In addition to the 12 facial features, head motion was also taken into consideration, which in turn improved the accuracy of the measurement. Driver drowsiness is one of the major causes of accidents worldwide. In this project I aim to develop a prototype drowsiness detection system. The system works by monitoring the driver's eyes and sounding an alarm when he/she is drowsy. The system so designed is a non-intrusive real-time monitoring system; the priority is on improving the driver's safety without being obtrusive. In this project the driver's eye blink is detected: if the driver's eyes remain closed for longer than a certain duration, the driver is deemed drowsy and an alarm is sounded. The detection is implemented in MATLAB using the Image Acquisition Toolbox.
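    The closed-eyes timer described above is simple enough to sketch. The loop below is a minimal illustration, not the author's MATLAB code: it uses OpenCV's bundled Haar cascades instead, treats "no eye detected inside a detected face" as closed eyes (a crude stand-in for the eye-opening window), and the 2-second threshold is an assumed value.

```python
# Hedged sketch of an eye-closure drowsiness alarm (assumptions noted above).
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSED_SECONDS = 2.0  # assumed alarm threshold, not the paper's value
closed_since = None

cap = cv2.VideoCapture(0)  # first attached camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_open = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        upper_face = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
        if len(eye_cascade.detectMultiScale(upper_face)) > 0:
            eyes_open = True
    if eyes_open:
        closed_since = None            # reset the timer whenever an eye is seen
    elif closed_since is None:
        closed_since = time.time()     # eyes just closed; start timing
    elif time.time() - closed_since > CLOSED_SECONDS:
        print("\aDROWSINESS ALARM")    # terminal bell as a stand-in alarm
    cv2.imshow("driver", frame)
    if cv2.waitKey(1) & 0xFF == 27:    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```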

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    Full text link
    In this paper, a review is presented of the research on eye gaze estimation techniques and applications that has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive, and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome of this review is the realization of a need to develop standardized methodologies for the performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed.
    Comment: 25 pages, 13 figures, accepted for publication in IEEE Access in July 201

    Intelligent in-vehicle interaction technologies

    Get PDF
    With rapid advances in the field of autonomous vehicles (AVs), the ways in which human–vehicle interaction (HVI) will take place inside the vehicle have attracted major interest and, as a result, intelligent interiors are being explored to improve the user experience, acceptance, and trust. This is also fueled by parallel research in areas such as perception and control of robots, safe human–robot interaction, wearable systems, and the underpinning flexible/printed electronics technologies, some of which is being routed to AVs. A growing number of networked sensors are being integrated into vehicles for multimodal interaction, to draw correct inferences about the user's communicative cues and to vary the interaction dynamics depending on the user's cognitive state and the contextual driving scenario. In response to this growing trend, this timely article presents a comprehensive review of the technologies that are being used or developed to perceive the user's intentions for natural and intuitive in-vehicle interaction. The challenges that need to be overcome to attain truly interactive AVs, and their potential solutions, are discussed, along with various new avenues for future research.

    Simultaneous analysis of driver behaviour and road condition for driver distraction detection

    Get PDF
    The design of intelligent driver assistance systems is of increasing importance for the vehicle-producing industry and road-safety solutions. This article starts with a review of road-situation monitoring and driver behaviour analysis. It also discusses lane tracking using vision (or other) sensors, and the strengths and weaknesses of different methods of driver behaviour analysis (e.g. iris or pupil status monitoring, and EEG spectrum analysis). The article then focuses on image analysis techniques and develops a multi-faceted approach to analyse the driver's face and eye status by implementing a real-time AdaBoost cascade classifier with Haar-like features. The proposed method is tested in a research vehicle for driver distraction detection using a binocular camera. The developed algorithm is robust in detecting different types of driver distraction such as drowsiness, fatigue, drunk driving, or the performance of secondary tasks.
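    The classifier family named above, AdaBoost over Haar-like features (the Viola-Jones recipe), can be sketched with scikit-image and scikit-learn. The snippet below trains on random stand-in patches with invented open/closed labels; it shows the mechanics only, not the article's trained detector, and the patch size and feature type are illustrative assumptions.

```python
# Hedged sketch: AdaBoost over Haar-like features on synthetic patches.
import numpy as np
from skimage.feature import haar_like_feature
from skimage.transform import integral_image
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
PATCH = 10  # patch side length in pixels (illustrative)

def haar_features(patch):
    # Each Haar-like feature is a difference of rectangle sums, computed
    # in constant time from the integral image.
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, PATCH, PATCH, feature_type="type-2-x")

# Stand-in data: random grayscale patches with random "open"/"closed" labels.
X = np.array([haar_features(rng.random((PATCH, PATCH))) for _ in range(200)])
y = rng.integers(0, 2, size=200)  # 1 = "eye open", 0 = "eye closed" (hypothetical)

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```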

    Automotive user interfaces for the support of non-driving-related activities

    Get PDF
    Driving a car has changed a lot since the first car was invented. Today, drivers do not only maneuver the car to their destination but also perform a multitude of additional activities in the car. This includes, for instance, activities related to assistive functions that are meant to increase driving safety and reduce the driver's workload. However, since drivers spend a considerable amount of time in the car, they often want to perform non-driving-related activities as well. In particular, these activities are related to entertainment, communication, and productivity. The driver's need for such activities has vastly increased, particularly due to the success of smartphones and other mobile devices. As long as the driver is in charge of performing the actual driving task, such activities can distract the driver and may result in severe accidents. Due to these special requirements of the driving environment, the driver ideally performs such activities by using appropriately designed in-vehicle systems. The challenge for such systems is to enable flexible and easily usable non-driving-related activities while maintaining and increasing driving safety at the same time. The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform non-driving-related activities besides the regular driving tasks. Using empirical methods that are commonly used in human-computer interaction, we investigate various aspects of automotive user interfaces with the goal of supporting the design and development of future interfaces that facilitate non-driving-related activities. The first aspect is related to using physiological data in order to infer information about the driver's workload. As a second aspect, we propose a multimodal interaction style to facilitate the interaction with multiple activities in the car. In addition, we introduce two concepts for the support of commonly used and demanded non-driving-related activities: for communication with the outside world, we investigate the driver's needs with regard to sharing ride details with remote persons in order to increase driving safety. Finally, we present a concept of time-adjusted activities (e.g., entertainment and productivity) which enables the driver to make use of periods when only little attention is required. Starting with manual, non-automated driving, we also consider the rise of automated driving modes.
    When cars were invented, they allowed the driver and potential passengers to get to a distant location. The only activities the driver was able and supposed to perform were related to maneuvering the vehicle, i.e., accelerating, decelerating, and steering the car. Today, drivers perform many activities that go beyond these driving tasks. This includes, for example, activities related to driving assistance, location-based information and navigation, entertainment, communication, and productivity. To perform these activities, drivers use functions that are provided by in-vehicle information systems in the car. Many of these functions are meant to increase driving safety or to make the ride more enjoyable. The latter is important since people spend a considerable amount of time in their cars and want to perform activities similar to those they are accustomed to from using mobile devices. However, as long as the driver is responsible for driving, these activities can be distracting and put the driver, the passengers, and the environment at risk. One goal for the development of automotive user interfaces is therefore to enable an easy and appropriate operation of in-vehicle systems such that driving tasks and non-driving-related activities can be performed easily and safely. The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform non-driving-related activities while driving. Using empirical methods that are commonly used in human-computer interaction, we approach various aspects of automotive user interfaces in order to support the design and development of future interfaces that also enable non-driving-related activities. Starting with manual, non-automated driving, we also consider the transition towards automated driving modes.
    As a first part, we look at the prerequisites that enable non-driving-related activities in the car. We propose guidelines for the design and development of automotive user interfaces that also support non-driving-related activities. This includes, for instance, rules on how to adapt or interrupt activities when the level of automation changes. To enable activities in the car, we propose a novel interaction concept that facilitates multimodal interaction in the car by combining speech interaction and touch gestures. Moreover, we reveal aspects of how to infer information about the driver's state (especially mental workload) by using physiological data. We conducted a real-world driving study to extract a data set with physiological and context data. This can help to better understand the driver state, to adapt interfaces to the driver and driving situations, and to adapt the route selection process.
    Second, we propose two concepts for supporting non-driving-related activities that are frequently used and demanded in the car. For telecommunication, we propose a concept to increase driving safety when communicating with the outside world. This concept enables the driver to share different types of information with remote parties; the driver can choose between different levels of detail, ranging from abstract information such as "Alice is driving right now" up to sharing a video of the driving scene. We investigated drivers' needs on the go and derived guidelines for the design of communication-related functions in the car through an online survey and in-depth interviews. As a second aspect, we present an approach to offer time-adjusted entertainment and productivity tasks to the driver. The idea is to allow time-adjusted tasks during periods where the demand for the driver's attention is low, for instance at traffic lights or during a highly automated ride. Findings from a web survey and a case study demonstrate the feasibility of this approach. With the findings of this thesis, we aim to provide a basis for future research and development in the domain of automotive user interfaces and non-driving-related activities in the transition from manual driving to highly and fully automated driving.
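    One concrete ingredient mentioned above, inferring driver workload from physiological data, can be illustrated with a standard heart-rate-variability feature. The sketch below is a hypothetical aside, not the thesis's pipeline: it computes RMSSD from made-up RR-interval series; lower RMSSD is commonly associated with higher mental workload.

```python
# Hedged sketch: RMSSD, a common HRV feature, on invented RR intervals.
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

relaxed_rr = [812, 845, 830, 870, 855, 890, 860]  # made-up RR series (ms)
loaded_rr = [702, 710, 705, 708, 703, 707, 704]   # narrower spread under load

print(f"RMSSD relaxed: {rmssd(relaxed_rr):.1f} ms")
print(f"RMSSD loaded:  {rmssd(loaded_rr):.1f} ms")
```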

    Data-Driven Evaluation of In-Vehicle Information Systems

    Get PDF
    Today's In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods are not scalable to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens.
    In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: first, results from qualitative or small-scale empirical studies are often not valued in the decision-making process; second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools that help them visualize and analyze customer usage data, and computational methods to automatically evaluate IVIS designs.
    In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data, collected over-the-air from customer vehicles, and visualizes them on different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs with respect to performance- and safety-related metrics.
    In Part III, we investigate drivers' multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers' tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers' interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and increase the time spent looking at the center stack touchscreen. These results emphasize the importance of context-dependent driver distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers' glance behavior.
    Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
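    To make the prediction step concrete, here is a minimal, invented stand-in for the kind of model described above: a regressor that predicts a visual-demand score from interaction and context features, explained globally via permutation importance. The feature names, data, and coefficients are synthetic assumptions; the thesis's actual model and its local/global explanation method are not reproduced here.

```python
# Hedged sketch: predicting visual demand from synthetic context features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 500
taps = rng.integers(1, 10, n)        # UI events per sequence (invented)
speed = rng.uniform(0.0, 130.0, n)   # vehicle speed in km/h (invented)
automated = rng.integers(0, 2, n)    # 1 = automated driving active (invented)
X = np.column_stack([taps, speed, automated])

# Toy target: glance time grows with taps, speed, and under automation,
# loosely echoing the trends reported above (numbers are made up).
y = 0.4 * taps + 0.01 * speed + 0.8 * automated + rng.normal(0, 0.2, n)

model = GradientBoostingRegressor().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["taps", "speed", "automated"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```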

    Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems

    Get PDF
    The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) had the aim of designing and developing a platform tool to cope with the continuously increasing complexity of future embedded Advanced Driver Assistance Systems (ADAS) and the simultaneous need to reduce their cost. For this purpose, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications, which combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI) in challenging ways. This book presents the results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of the different approaches. The reader is invited to substantiate the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include:
    - Modern ADAS development platforms
    - Design space exploration
    - Driving modelling
    - Video-based and Radar-based ADAS functions
    - HMI for ADAS
    - Vehicle-hardware-in-the-loop validation systems

    Extracting Physiological Measurements from Thermal Images

    Full text link
    Multiple techniques are used to extract physiological signals from the human body. These signals provide a reliable method to identify the physical and mental state of a person at any given point in time. However, these techniques require contact and cooperation of the individual, as well as human effort for connecting the devices and collecting the needed measurements. Moreover, these methods can be invasive, time-consuming, and infeasible in many cases. Recent efforts have been made to find alternatives for extracting these measurements using non-contact and efficient techniques. One of these alternatives is the use of thermal cameras for health monitoring. Our work explores reliable methods for extracting respiration rate, skin temperature, and heart rate from thermal video. These methods leverage a combination of image processing and signal processing techniques in order to extract and filter physiological signals from the thermal domain. Finally, we review the use of thermal imaging in several applications, such as deception detection, stress detection, and emotion recognition.
    Master of Science thesis, Computer and Information Science, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167385/1/Christian Hessler Final Thesis.pdf
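    As an illustration of the signal-processing side described above, the sketch below recovers a respiration rate from a synthetic nostril-region temperature trace by band-pass filtering in a typical breathing band and locating the spectral peak. The frame rate, band edges, and signal are all assumptions for the example, not values from the thesis.

```python
# Hedged sketch: respiration rate from a synthetic thermal intensity trace.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                      # assumed thermal camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)   # one minute of video
rate_hz = 0.25                 # synthetic ground truth: 15 breaths per minute
rng = np.random.default_rng(1)
trace = 0.2 * np.sin(2 * np.pi * rate_hz * t) + 0.05 * rng.standard_normal(t.size)

# Band-pass 0.1-0.5 Hz (6-30 breaths/min), a typical adult breathing band.
b, a = butter(3, [0.1, 0.5], btype="band", fs=fs)
filtered = filtfilt(b, a, trace)

# The dominant spectral peak of the filtered trace is the breathing rate.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, d=1 / fs)
estimate = freqs[np.argmax(spectrum)]
print(f"estimated respiration rate: {estimate * 60:.1f} breaths/min")
```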

    Formation control of autonomous vehicles with emotion assessment

    Get PDF
    Autonomous driving is a major state-of-the-art step that has the potential to fundamentally transform the mobility of individuals and goods. Most autonomous ground vehicles (AGVs) developed so far aim to sense the surroundings and control the vehicle autonomously with limited or no driver intervention. However, humans are a vital part of such vehicle operations. Therefore, an approach to understanding human emotions and creating trust between humans and machines is necessary. This thesis proposes a novel approach for multiple AGVs, consisting of a formation controller and human emotion assessment for autonomous driving and collaboration. As the interaction between multiple AGVs is essential, the performance of two multi-robot control algorithms is analysed and a platoon formation controller is proposed. On the other hand, as the interaction between AGVs and humans is equally essential for creating trust between humans and AGVs, a human emotion assessment method is proposed and used as feedback to make autonomous decisions for AGVs. To realise this concept, a novel simulation platform is developed for navigating multiple AGVs and testing controllers. Further to this simulation tool, a method is proposed to assess human emotion using the affective dimension model and physiological signals such as the electrocardiogram (ECG) and photoplethysmography (PPG). Experiments are carried out to verify that humans' felt arousal and valence levels can be measured and translated to different emotions for autonomous driving operations. The per-subject classification accuracy is statistically significant and validates the proposed emotion assessment method. Also, a simulation is conducted to verify the effect of different emotions on AGVs' velocity control during driving tasks.
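    For a flavor of what a platoon formation controller does, the following is a minimal constant-time-gap follower sketch, not the controller proposed in the thesis: each follower accelerates in proportion to its spacing error and its speed difference to the predecessor. All gains and parameters are illustrative assumptions.

```python
# Hedged sketch: constant-time-gap platoon following (assumed gains).
import numpy as np

DT = 0.05          # integration step (s)
STEPS = 1200       # 60 s of simulation
H, D0 = 1.2, 5.0   # time gap (s) and standstill distance (m), assumed
KP, KV = 0.5, 1.0  # spacing-error and relative-speed gains, assumed

pos = np.array([100.0, 70.0, 35.0])  # leader first, followers behind (m)
vel = np.array([20.0, 16.0, 24.0])   # initial speeds (m/s)

for _ in range(STEPS):
    acc = np.zeros_like(vel)          # leader holds its speed
    for i in range(1, len(pos)):
        desired_gap = D0 + H * vel[i]                 # constant-time-gap policy
        gap_error = (pos[i - 1] - pos[i]) - desired_gap
        acc[i] = KP * gap_error + KV * (vel[i - 1] - vel[i])
    vel += acc * DT
    pos += vel * DT

print("final speeds (m/s):", np.round(vel, 2))
print("final gaps (m):   ", np.round(pos[:-1] - pos[1:], 2))
```

    With these gains the followers settle at the leader's speed and at gaps of roughly D0 + H times the cruise speed, which is the defining property of the constant-time-gap spacing policy.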