
    Comparative analysis of Kinect-based and Oculus-based gaze region estimation methods in a driving simulator

    Driver gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers' gaze patterns can be related to their characteristics and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator: one uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler ones based on points that try to capture this relation, and two based on classifiers such as MLP and SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display, first with a big screen and later with the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation; the Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal analysis of driving performance possible, in addition to the immersion and realism of the virtual reality experience provided by the Oculus. (Dirección General de Tráfico y Ministerio del Interior, Proyecto SPIP2015-01801)
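    A minimal sketch of the classifier-based approach described above, assuming per-frame head-pose angles as input features and seven gaze-region labels; the data are synthetic and scikit-learn's SVC merely stands in for the paper's SVM:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_frames = 500
        # Per-frame head pose (yaw, pitch, roll in degrees) -- hypothetical features.
        X = rng.normal(size=(n_frames, 3)) * np.array([30.0, 10.0, 5.0])
        # Target: which of the 7 scene regions the driver is gazing at.
        y = rng.integers(0, 7, size=n_frames)

        # Scale features, then fit an RBF-kernel SVM, in the spirit of the
        # classifier-based methods compared in the paper.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())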

    Model-based estimation of the state of vehicle automation as derived from the driver’s spontaneous visual strategies

    When manually steering a car, the driver's visual perception of the driving scene and his or her motor actions to control the vehicle are closely linked. Since motor behaviour is no longer required in an automated vehicle, the sampling of the visual scene is affected. Compared to manual driving, automated driving typically results in less gaze being directed towards the road centre and a broader exploration of the driving scene. To examine the corollary of this situation, this study estimated the state of automation (manual or automated) on the basis of gaze behaviour. To do so, models based on partial least squares regression were computed by considering gaze behaviour in multiple ways, using static indicators (percentage of time spent gazing at 13 areas of interest), dynamic indicators (transition matrices between areas), or both together. Analysis of the quality of predictions for the different models showed that the best result was obtained by considering both static and dynamic indicators. However, gaze dynamics played the most important role in distinguishing between manual and automated driving. This study may be relevant to the issue of driver monitoring in autonomous vehicles.
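    A minimal sketch of the model described above, assuming static indicators (dwell percentages over 13 areas of interest) and dynamic indicators (a flattened 13x13 transition matrix) concatenated as predictors; the data are synthetic and scikit-learn's PLSRegression stands in for the authors' partial least squares models:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n_trials, n_aoi = 60, 13
        # Static indicators: share of time on each of the 13 AOIs (sums to 1).
        static = rng.dirichlet(np.ones(n_aoi), size=n_trials)
        # Dynamic indicators: flattened 13x13 AOI transition matrix per trial.
        dynamic = rng.random((n_trials, n_aoi * n_aoi))
        X = np.hstack([static, dynamic])
        y = rng.integers(0, 2, size=n_trials).astype(float)  # 0 = manual, 1 = automated

        pls = PLSRegression(n_components=5)
        pls.fit(X, y)
        pred = (pls.predict(X).ravel() > 0.5).astype(float)  # threshold the regression output
        print("training accuracy:", (pred == y).mean())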

    Mobility and Aging: Older Drivers’ Visual Searching, Lane Keeping and Coordination

    This thesis examined older drivers' mobility and behaviour through comprehensive measurements of driver-vehicle-environment interaction and investigated the associations between driving behaviour and cognitive functions. Data were collected and analysed for 50 older drivers using eye tracking, GNSS tracking, and GIS. Results showed that poor selective attention, spatial ability, and executive function in older drivers adversely affect lane keeping, visual search, and coordination. The visual-motor coordination measure proved sensitive and effective for driving assessment in older drivers.

    Regarding Pilot Usage of Display Technologies for Improving Awareness of Aircraft System States

    Flight deck systems and the procedures for operating them are increasing in complexity. This trend places a larger burden on pilots to manage increasing amounts of information and to understand system interactions, raising the likelihood of loss of airplane state awareness (ASA). One way to gain more insight into this issue is through experimentation using objective measures of visual behavior. This study summarizes an analysis of oculometer data obtained during a high-fidelity flight simulation study that included a variety of complex pilot-system interactions that occur in current flight decks, as well as several planned for the next-generation air transportation system. The study comprised various scenarios designed to induce low- and high-energy aircraft states coupled with other emulated causal factors in recent accidents. Three different display technologies were evaluated in this recent pilot-in-the-loop study conducted at NASA Langley Research Center: a stall recovery guidance algorithm and display concept, an enhanced airspeed control indication of when the automation is no longer actively controlling airspeed, and enhanced synoptic diagrams with corresponding simplified electronic interactive checklists. Multiple data analyses were performed to understand how the 26 participating airline pilots observed ASA-related information during different stages of flight and during specific events within these stages.
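    A minimal sketch of one basic oculometer analysis of the kind implied above, computing dwell-time percentages per area of interest (AOI); the AOI labels and one-label-per-sample format are assumptions, not the study's actual pipeline:

        from collections import Counter

        def dwell_percentages(aoi_samples):
            """Share of gaze samples per AOI, assuming a fixed sampling rate."""
            counts = Counter(aoi_samples)
            total = sum(counts.values())
            return {aoi: 100.0 * n / total for aoi, n in counts.items()}

        # One hypothetical AOI label per oculometer sample.
        samples = ["PFD", "PFD", "ND", "synoptic", "PFD", "checklist", "ND"]
        print(dwell_percentages(samples))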

    Modeling eye movements in driving

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 87-88). By Leandro A. Veltri.

    On the Influence of Social Robots in Cognitive Multitasking and Its Application

    [Objective] I clarify the impact of social robots on cognitive tasks, such as driving a car or piloting an airplane, and show the possibility of industrial applications based on the principles of social robotics.
    [Approach] I adopted the MATB, a generalized version of automobile and airplane operation tasks, as the cognitive task, evaluating participants' performance on widely applicable reaction-speed, tracking, and short-term memory tasks rather than tasks specific to a particular situation. As the source of social stimuli, I used the iCub robot, which has been widely used in social communication research. Beyond task performance, I analyzed participants' mental workload using skin conductance and their arousal-valence emotions using facial expression analysis. In the first experiment, I compared a social robot that uses social signals with a nonsocial robot that does not, and evaluated whether social robots affect cognitive task performance. In the second experiment, I focused on vitality forms and compared a calm social robot with an assertive social robot. For analysis, I adopted the Mann-Whitney U test for one-pair comparisons and ART-ANOVA for analysis of variance in repeated task comparisons. Based on the results, I aimed to express vitality forms in a robot head, which is smaller and more flexible in placement than a full-body humanoid robot, considering the limited space of car and airplane cockpits. To that end, I developed a novel eyebrow using a wire-driven technique, which is widely used in surgical robots to control soft materials.
    [Main results] For cognitive tasks such as those of car drivers and airplane pilots, I clarified the effects of social robots' social behaviors on task performance, mental workload, and emotions, and showed that the presence of social robots can be effective in cognitive tasks. Furthermore, focusing on vitality forms, one of the parameters of social behavior, I clarified the effects of different vitality forms of social robots' behavior on cognitive tasks, and found that social robots with calm behaviors positively affected participants' facial expressions and improved their performance in a short-term memory task. Based on these results, I adopted a robot-head configuration, eliminating the torso of the social humanoid robot iCub, considering placement in limited spaces such as car or airplane cockpits. In designing the robot head, I developed a novel soft-material eyebrow that can be mounted on the iCub robot head to achieve the continuous position and velocity changes that are important for expressing vitality forms. The novel eyebrows can express different vitality forms by changing their shape and velocity, which was conventionally represented by the iCub's torso and arms.
    [Significance] These results open up the possibility of applying social robots to non-robotic industries such as automotive and aircraft. In addition, the precise shape and velocity changes of the newly developed soft-material eyebrows open up new research possibilities in social robotics and social communication research themselves, enabling experiments with complex facial expressions beyond Ekman's simple definition of facial expression changes, such as joy, anger, sadness, and pleasure. Thus, the results of this research are an important step in both scientific and industrial applications.
    [Key-words] social robot, cognitive task, vitality form, robot head, facial expression, eyebrow
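    A minimal sketch of the one-pair comparison named above, using SciPy's Mann-Whitney U test on short-term memory scores for the calm and assertive robot conditions; the scores are invented for illustration:

        from scipy.stats import mannwhitneyu

        # Illustrative short-term memory scores for two independent groups.
        calm = [0.82, 0.91, 0.77, 0.88, 0.95, 0.84]
        assertive = [0.71, 0.79, 0.80, 0.68, 0.75, 0.73]

        stat, p = mannwhitneyu(calm, assertive, alternative="two-sided")
        print(f"U = {stat:.1f}, p = {p:.4f}")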

    Artificial Intelligence for Suicide Assessment using Audiovisual Cues: A Review

    Death by suicide is the seventh leading cause of death worldwide. Recent advances in Artificial Intelligence (AI), specifically AI applications in image and voice processing, have created a promising opportunity to revolutionize suicide risk assessment. Subsequently, we have witnessed a fast-growing body of research that applies AI to extract audiovisual non-verbal cues for mental illness assessment. However, the majority of recent works focus on depression, despite the evident differences between depression and suicidal behavior in both symptoms and non-verbal cues. This paper reviews recent works that study suicide ideation and suicide behavior detection through audiovisual feature analysis, mainly suicidal voice/speech acoustic features and suicidal visual cues. Automatic suicide assessment is a promising research direction that is still in its early stages; accordingly, there is a lack of large datasets that can be used to train the machine learning and deep learning models that have proven effective in other, similar tasks. Comment: Manuscript submitted to Artificial Intelligence Reviews (2022).
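    A minimal sketch of the kind of acoustic feature extraction surveyed here, computing MFCCs from a speech recording with librosa; the file path is a placeholder and any downstream model is out of scope:

        import librosa

        # Placeholder path; any mono speech recording would do.
        y, sr = librosa.load("speech_sample.wav", sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
        print(mfcc.shape)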

    The driver response process in assisted and automated driving

    Background: Safe assisted and automated driving can be achieved through a detailed understanding of the driver response process (the timing and quality of driver actions and visual behavior) triggered by an event such as a take-over request or a safety-relevant event. Importantly, most current evidence on the driver response process in vehicle automation, and on automation effects (an unsafe response process), is based on driving-simulator studies, whose results may not generalize to the real world. Objectives: To improve our understanding of the driver response process 1) in automated driving, which takes full responsibility for the driving task but assumes the driver is available to resume manual control upon request, and 2) in assisted driving, which supports the driver with longitudinal and lateral control but assumes the driver is responsible for safe driving at all times. Method: Data were collected in four experiments on a test track and public roads using the Wizard-of-Oz approach to simulate vehicle automation (assisted or automated). Results: The safety of the driver responses was found to depend on the type of vehicle automation. While a notable number of drivers crashed into a conflict object after experiencing highly reliable assisted driving, an automated driving function that issued a take-over request prior to the same event reduced the crash rate to zero. All participants who experienced automated driving were able to respond to the take-over requests and to potential safety-relevant events that occurred after automation deactivation. The responses to the take-over requests consisted of actions such as looking toward the instrument cluster, placing the hands on the steering wheel, deactivating automation, and moving the feet to the pedals; the order and timing of these actions varied among participants. Importantly, the driver response process after a take-over request included several off-path glances; in fact, drivers showed reduced visual attention to the forward road (compared to manual driving) for up to 15 s after the take-over request. Discussion: Overall, the work in this thesis could not confirm the severe automation effects, in terms of delayed responses and degraded intervention performance in safety-relevant events, previously observed in driving simulators after automated driving. These differing findings likely stem from a combination of differences in the test environments and in the assumptions about the capabilities of the automated driving system. Conclusions: Assisted driving and automated driving should be designed separately: what is unsafe for assisted driving is not necessarily unsafe for automated driving, and vice versa. While supervising drivers may crash in safety-relevant events without prior notification during highly reliable assisted driving, a clear and timely take-over request in automated driving ensures that drivers understand their responsibility to act in events once back in manual driving. In addition, when take-over requests are issued prior to the event onset, drivers generally show manual driving and intervention performance similar to baseline. However, both before and just after the take-over requests, several drivers directed their gaze mainly off-road. It is therefore essential to consider the effect of take-over request designs not only on the time needed to deactivate automation, but also on drivers' visual behavior.
    Overall, by reporting the results of tests of a future automated driving system (in line with future vehicle regulations and insurance company definitions) in realistic environments, this thesis provides novel findings that refine a picture of automation effects that, before this thesis, seemed more severe than it is.
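    A minimal sketch of the visual-behavior measure discussed above, computing the share of off-road gaze in a fixed window after a take-over request (TOR); the timestamped on-road/off-road sample format is an assumption:

        def off_road_share(samples, tor_time, window_s=15.0):
            """samples: (timestamp_s, on_road) gaze samples; returns the
            percentage of off-road gaze in the window after the TOR."""
            in_window = [on_road for t, on_road in samples
                         if tor_time <= t < tor_time + window_s]
            if not in_window:
                return 0.0
            return 100.0 * sum(1 for on in in_window if not on) / len(in_window)

        gaze = [(0.0, True), (5.0, False), (8.0, False), (12.0, True), (20.0, True)]
        print(f"off-road share: {off_road_share(gaze, tor_time=4.0):.1f}%")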