Infrared-based facial points tracking and action units detection in context of car driving simulator
Smart HMI for an autonomous vehicle
This work presents the framework that composes the HMI (Human Machine Interface)
built into an autonomous vehicle developed at the University of Alcalá. The system uses the ROS (Robot Operating System) framework for communication between the
different sub-modules developed on the vehicle.
In addition, a camera-based system for capturing drivers' gaze focalization data is presented, built on OpenFace, an open-source tool for face analysis. Two different
methods are proposed: a linear one and one based on the NARMAX algorithm. Several tests
were run to assess their accuracy, and both methods were evaluated on the
challenging DADA2000 dataset, which is composed of traffic accidents.
Máster Universitario en Ingeniería Industrial (M141
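The abstract does not give implementation details for the linear gaze method; as an illustrative sketch only (the calibration procedure, function names, and the affine form are assumptions), one could fit a least-squares mapping from OpenFace's gaze angles to screen coordinates:

```python
import numpy as np

def fit_linear_gaze_map(angles, targets):
    """Fit screen = [gx, gy, 1] @ A by least squares.

    angles:  (N, 2) gaze angles from a face tracker, in radians
    targets: (N, 2) known screen points shown during calibration
    Returns the (3, 2) coefficient matrix A.
    """
    X = np.hstack([angles, np.ones((len(angles), 1))])  # append bias term
    A, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return A

def predict_gaze(A, gx, gy):
    """Map one gaze-angle pair to a screen point."""
    return np.array([gx, gy, 1.0]) @ A

# Synthetic calibration: screen point is an exact affine function of angles,
# so the fit recovers the mapping perfectly.
rng = np.random.default_rng(0)
angles = rng.uniform(-0.5, 0.5, size=(20, 2))
true_A = np.array([[800.0, 0.0], [0.0, 600.0], [960.0, 540.0]])
targets = np.hstack([angles, np.ones((20, 1))]) @ true_A

A = fit_linear_gaze_map(angles, targets)
pred = predict_gaze(A, 0.1, -0.2)  # -> approximately (1040.0, 420.0)
```

A NARMAX model would replace the affine map with a nonlinear autoregressive one that also uses past gaze samples; that part is not sketched here.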
A framework for context-aware driver status assessment systems
The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety issues, performance and environment. The Green ITS project is among the efforts in that regard.
Safety is a major customer and manufacturer concern. Therefore, much effort has been directed to developing cutting-edge technologies able to assess driver status in terms of alertness and suitability to drive. In that regard, this thesis aims to create a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we need to perform multi-modal analysis and data fusion in order to infer as much knowledge as possible about the driver. Lastly, the project is to be continued by other students, so the system should be modular and well-documented.
With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation related to driver status assessment, and a prototype of real-time driver status assessment software is integrated into the platform.
To make the system context-aware, we designed a driver identification module based on audio-visual data fusion. Thus, at the beginning of driving sessions, the users are identified and background knowledge about them is loaded to better understand and analyze their behavior.
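The thesis does not specify its fusion scheme in the abstract; as a hedged sketch of one common option (score-level fusion, with made-up weights and identifiers), the audio-visual identification step could look like:

```python
def fuse_scores(face_scores, voice_scores, w_face=0.6, w_voice=0.4):
    """Late (score-level) fusion of two identification modalities.

    Each input maps a candidate driver id -> similarity score in [0, 1].
    Returns the candidate with the highest weighted-sum score.
    The weights here are illustrative, not taken from the thesis.
    """
    ids = set(face_scores) & set(voice_scores)
    if not ids:
        raise ValueError("no candidate scored by both modalities")
    fused = {i: w_face * face_scores[i] + w_voice * voice_scores[i] for i in ids}
    return max(fused, key=fused.get)

# Hypothetical scores: the face module prefers one driver, the voice
# module another; the weighted sum decides.
face = {"driver_a": 0.9, "driver_b": 0.4}
voice = {"driver_a": 0.3, "driver_b": 0.8}
best = fuse_scores(face, voice)  # -> "driver_a" (0.66 vs 0.56)
```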
A driver status assessment system was then constructed from two modules. The first is for driver fatigue detection, based on an infrared camera. Fatigue is inferred via the percentage of eye closure (PERCLOS), which is the best indicator of fatigue for vision systems. The second is a driver distraction recognition system, based on a Kinect sensor. Using body, head, and facial expressions, a fusion strategy is employed to deduce the type of distraction a driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects have been studied here primarily because of their dramatic impact on traffic safety.
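The eye-closure measure above has a simple definition: the fraction of frames in a time window during which the eyes are (nearly) closed. A minimal sketch, assuming per-frame eye-openness values already extracted by the vision pipeline (the 0.2 threshold reflects the common "eyes at least 80% closed" criterion, not a value stated in the thesis):

```python
def perclos(eye_openness, closed_threshold=0.2):
    """PERCLOS: fraction of frames in which the eyes count as closed.

    eye_openness: per-frame openness values in [0, 1] (1 = fully open).
    closed_threshold: openness below which a frame counts as eyes-closed.
    """
    if not eye_openness:
        raise ValueError("empty window")
    closed = sum(1 for v in eye_openness if v < closed_threshold)
    return closed / len(eye_openness)

# 10-frame window with 3 nearly-closed frames -> PERCLOS = 0.3
window = [0.9, 0.8, 0.1, 0.05, 0.7, 0.9, 0.15, 0.8, 0.9, 0.85]
score = perclos(window)  # -> 0.3
```

In practice the window would slide over time, and a sustained score above some alert threshold would trigger a fatigue warning.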
Through experimental results, we show that our system is efficient for driver identification and driver inattention detection tasks. It is also very modular and could be further complemented with additional driver status analyses, context sources, or sensor acquisition.
Computationally efficient deformable 3D object tracking with a monocular RGB camera
Monocular RGB cameras are present in most scopes and devices, including embedded environments like robots, cars and home automation. Most of these environments have in common a significant presence of human operators with whom the system has to interact. This context provides the motivation to use the captured monocular images to improve the understanding of the operator and the surrounding scene for more accurate results and applications.
However, monocular images do not have depth information, which is a crucial element in understanding the 3D scene correctly. Estimating the three-dimensional information of an object in the scene using a single two-dimensional image is already a challenge. The challenge grows if the object is deformable (e.g., a human body or a human face) and there is a need to track its movements and interactions in the scene.
Several methods attempt to solve this task, including modern regression methods based on Deep Neural Networks. However, despite the great results, most are computationally demanding and therefore unsuitable for several environments. Computational efficiency is a critical feature for computationally constrained setups like embedded or onboard systems present in robotics and automotive applications, among others.
This study proposes computationally efficient methodologies to reconstruct and track three-dimensional deformable objects, such as human faces and human bodies, using a single monocular RGB camera. To model the deformability of faces and bodies, it considers two types of deformations: non-rigid deformations for face tracking, and rigid multi-body deformations for body pose tracking. Furthermore, it studies their performance on computationally restricted devices like smartphones and onboard systems used in the automotive industry.
The information extracted from such devices gives valuable insight into human behaviour, a crucial element in improving human-machine interaction. We tested the proposed approaches in different challenging application fields like onboard driver monitoring systems, human behaviour analysis from monocular videos, and human face tracking on embedded devices.
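The non-rigid face deformations mentioned above are commonly expressed as a linear combination of learned shape components added to a mean shape; the tracker then only has to estimate a small coefficient vector per frame. A minimal sketch of that idea (the toy model and values are invented for illustration, not taken from the thesis):

```python
import numpy as np

def deform(mean_shape, basis, coeffs):
    """Linear deformable shape model: shape = mean + sum_k coeffs[k] * basis[k].

    mean_shape: (V, 3) mean vertex positions
    basis:      (K, V, 3) learned deformation components (e.g. blendshapes)
    coeffs:     (K,) per-frame weights estimated by the tracker
    """
    # tensordot contracts the K axis of coeffs against the K axis of basis
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

# Toy model: 2 vertices, one component that pulls the second vertex down
# along y (a crude 'jaw open' direction).
mean = np.array([[0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
basis = np.array([[[0.0,  0.0, 0.0],
                   [0.0, -0.5, 0.0]]])        # shape (1, 2, 3)
shape = deform(mean, basis, np.array([0.8]))  # vertex 1 moves to y = 0.6
```

Keeping K small is what makes this formulation cheap enough for the embedded and onboard targets the thesis aims at: per frame, the update is a single small matrix-vector combination.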
Investigation of low-cost infrared sensing for intelligent deployment of occupant restraints
In automotive transport, airbags and seatbelts are effective at restraining the
driver and passenger in the event of a crash, with statistics showing a
dramatic reduction in the number of casualties from road crashes.
However, statistics also show that a small number of these people have been
injured or even killed from striking the airbag, and that the elderly and small
children are especially at risk of airbag-related injury. This is because
in-car restraint systems were designed for the average male travelling at an
average speed of 50 km/h, and people outside these norms are at risk.
Therefore, one of the future safety goals of car manufacturers is to deploy
sensors that gather more information about the driver or passenger of
their cars in order to tailor the safety systems specifically to that person,
and this is the goal of this project.
This thesis describes a novel approach to occupant detection, position
measurement and monitoring using a low-cost thermal-imaging-based
system, a departure from traditional video-camera-based systems, at an
affordable price. Experiments were carried out using a specially
designed test rig and a car driving simulator with members of the public.
Results have shown that the thermal imager can detect a human in a car
cabin mock-up and provide crucial real-time position data, which could be
used to support intelligent restraint deployment. Other valuable information
has been detected, such as whether the driver is smoking, drinking a hot or
cold drink, or using a mobile phone, which can help to infer the level of
driver attentiveness or engagement.
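The thesis does not publish its detection algorithm in the abstract; as a rough sketch of the basic principle (all thresholds and the function name are assumptions), occupant presence and position can be estimated by isolating pixels in the human skin-temperature range and taking their centroid:

```python
import numpy as np

def detect_occupant(thermal, skin_min=28.0, skin_max=40.0, min_pixels=50):
    """Detect a warm, human-temperature region in one thermal frame.

    thermal: 2-D array of per-pixel temperatures in degrees Celsius.
    Returns (present, centroid) where centroid is the mean (row, col)
    of skin-temperature pixels, or None if too few such pixels exist.
    """
    mask = (thermal >= skin_min) & (thermal <= skin_max)
    if mask.sum() < min_pixels:
        return False, None
    rows, cols = np.nonzero(mask)
    return True, (rows.mean(), cols.mean())

# Synthetic 100x100 frame: 22 C cabin with a 20x20 'occupant' at 34 C
frame = np.full((100, 100), 22.0)
frame[40:60, 30:50] = 34.0
present, centroid = detect_occupant(frame)  # -> True, (49.5, 39.5)
```

Tracking the centroid over frames would give the real-time position data mentioned above, e.g. how close the occupant sits to the airbag module.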