8,670 research outputs found

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there is a growing need for humans and systems to communicate through multiple modalities such as auditory, vocal (speech), gestural, or visual channels; it is therefore important to evaluate multimodal human-machine interaction under multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experiments with human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand underlying human mental processes so that safety can be improved and mental overload avoided. In this dissertation research, I combined computational cognitive modeling and experimental methods to study mental processes and to identify differences in human performance and workload across conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multitask behaviors and multimodal interactions in human-machine systems. Three experiments investigated human behavior in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use finger movements to input information on touchscreen devices (touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (audio/speech interaction), and (3) how humans control vehicles (driving). Future applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation contribute to a better understanding of the nature of touchscreen gestures, audio/speech interaction, and driving control in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal, concurrent task environments. Moreover, in contrast to previous multitasking models that focus mainly on visual processes, this work develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming steps such as developing prototypes and running human subjects. Furthermore, this research may help identify which elements of multimodal, multitasking scenarios increase workload and completion time, knowledge that can be used to reduce the number of accidents and injuries caused by distraction.
    PhD, Industrial & Operations Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
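    The QN-MHP architecture mentioned above treats human information processing as a queueing network in which task demands pass through perceptual, cognitive, and motor servers, so concurrent tasks contend for shared processing resources. The toy discrete-event sketch below illustrates only that core idea; the three-stage structure, the service times, and the exponential service-time distribution are illustrative assumptions, not QN-MHP's calibrated parameters.

```python
import random

# Toy queueing-network simulation in the spirit of QN-MHP: each task passes
# through perceptual -> cognitive -> motor servers, queuing when a server is
# busy with another task. Service times (ms) are illustrative assumptions.
SERVICE_MS = {"perceptual": 100, "cognitive": 70, "motor": 140}
STAGES = ("perceptual", "cognitive", "motor")

def simulate(arrivals):
    """arrivals: list of (time_ms, task_id); returns task -> response time."""
    free_at = {s: 0.0 for s in STAGES}   # when each server next becomes idle
    response = {}
    for t, task in sorted(arrivals):
        clock = float(t)
        for stage in STAGES:             # pass through the serial servers
            start = max(clock, free_at[stage])           # queue if server busy
            clock = start + random.expovariate(1.0 / SERVICE_MS[stage])
            free_at[stage] = clock
        response[task] = clock - t       # arrival-to-completion time
    return response

random.seed(0)
print(simulate([(0, "lane_keeping")]))                   # single task
print(simulate([(0, "lane_keeping"), (50, "speech")]))   # concurrent tasks
```

    Running it shows the concurrent speech task completing later than it would alone, because it queues behind the driving task at the shared servers, which is the basic mechanism by which such models predict multitasking costs.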

    Computational driver behavior models for vehicle safety applications

    The aim of this thesis is to investigate how human driving behaviors can be formally described in mathematical models intended for online personalization of advanced driver assistance systems (ADAS) or offline virtual safety evaluations. Both longitudinal (braking) and lateral (steering) behaviors in routine driving and emergencies are addressed. Special attention is paid to driver glance behavior in critical situations and the role of peripheral vision.

    First, a hybrid framework based on autoregressive models with exogenous input (ARX models) is employed to predict and classify driver control in real time. Two models are suggested, one targeting steering behavior and the other longitudinal control behavior. Although their predictive performance is unsatisfactory, both models can distinguish between different driving styles.

    Moreover, a basic model of drivers' brake initiation and modulation in critical longitudinal situations (specifically rear-end conflicts) is constructed. The model is based on a conceptual framework of noisy evidence accumulation and predictive processing. Several model extensions related to gaze behavior are also proposed and successfully fitted to real-world crashes and near-crashes. The influence of gaze direction is further explored in a driving simulator study, showing glance response times to be independent of the glance's visual eccentricity, while brake response times increase for larger gaze angles, as does the rate of missed target detections.

    Finally, the potential of a set of metrics quantifying subjectively perceived risk in lane departure situations to explain drivers' recovery steering maneuvers was investigated. The most influential factors were the relative yaw angle and the splay angle error at steering initiation. Surprisingly, drivers were often observed to initiate the recovery steering maneuver while looking off-road.

    To sum up, the proposed models facilitate the development of personalized ADASs and contribute to trustworthy virtual evaluations of current, future, and conceptual safety systems. The insights and ideas contribute to an enhanced, human-centric process for system development, verification, and validation. In the long term, this will likely lead to improved vehicle safety and fewer severe injuries and fatalities in traffic.
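    As a concrete illustration of the noisy-evidence-accumulation idea behind the brake-initiation model, here is a generic leaky accumulator driven by the prediction error between observed and expected visual looming; braking is initiated when the accumulated activation crosses a threshold. The looming profile, gain, leak, noise level, and threshold are invented for illustration and are not the thesis's fitted model.

```python
import numpy as np

# Toy noisy-evidence-accumulation model of brake initiation: surprising
# looming (observed minus expected) is accumulated with leak and noise,
# and braking starts when activation crosses a threshold. All parameter
# values are illustrative assumptions.
def brake_onset(looming, expected, gain=1.0, noise_sd=0.05,
                threshold=1.0, leak=0.05, dt=0.01, rng=None):
    """Return the time (s) at which accumulated evidence crosses threshold."""
    rng = rng or np.random.default_rng()
    activation = 0.0
    for i, (obs, exp) in enumerate(zip(looming, expected)):
        surprise = max(obs - exp, 0.0)          # only unexpected looming counts
        activation += dt * (gain * surprise - leak * activation)
        activation += noise_sd * np.sqrt(dt) * rng.standard_normal()
        if activation >= threshold:
            return round(i * dt, 2)             # brake initiation time
    return None                                 # no response within the window

t = np.arange(0.0, 5.0, 0.01)
looming = np.clip(0.8 * (t - 1.0), 0.0, None)   # lead car starts braking at t = 1 s
expected = np.zeros_like(t)                     # driver expects steady following
rng = np.random.default_rng(7)
print("simulated brake response times (s):",
      [brake_onset(looming, expected, rng=rng) for _ in range(5)])
```

    Repeated runs give a distribution of response times, which is how accumulator models of this family are typically compared against observed crash and near-crash timing data.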

    Cognitive Driver Distraction Improves Straight Lane Keeping: A Cybernetic Control Theoretic Explanation

    Experimental data revealed that drivers performing a visual secondary task exhibited deteriorated lane keeping performance, but that the same drivers performing a cognitive secondary task exhibited improved lane keeping compared to baseline driving. In this paper we present a computational cybernetic driver model that characterizes how differences in eye fixation durations between on-road and off-road glances across the three task conditions affect straight lane keeping performance. The model uses perceptual cues as control input, maintains internal representations of these cues across fixations through Bayesian updating, and applies a change in control each time a change in cue magnitude is perceived, via mechanisms akin to signal detection theory. The model captures the experimental results encouragingly well. It also sheds light on the relative magnitude of the lane keeping degradation caused by glancing away from the road, and on the degradation of internal representations each time a saccade takes place. The adopted approach to modeling driver perception during and across fixations is expected to yield new insights into the effects that various in-vehicle activities have on driving performance and risk.
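    The following minimal sketch renders the model's core loop under assumed parameters: a Kalman-style belief about a lane-position cue is updated during on-road fixations, its uncertainty grows while the eyes are off the road and jumps at each saccade, and a discrete steering correction is issued only when the perceived error exceeds a detection criterion. The glance schedule and every numeric value are illustrative, not the paper's fitted quantities.

```python
import numpy as np

rng = np.random.default_rng(3)
true_err, est, var = 0.0, 0.0, 0.01         # lane error; belief mean/variance
OBS_VAR, DRIFT_VAR, SACCADE_VAR = 0.02, 0.004, 0.02
CRITERION = 0.25                            # perceived-change detection threshold
corrections = []                            # times of discrete steering inputs

for step in range(600):                     # 10 ms time steps
    on_road = (step % 150) < 100            # alternating on/off-road glances
    true_err += rng.normal(0.002, np.sqrt(DRIFT_VAR))  # vehicle drifts
    var += DRIFT_VAR                        # uncertainty grows without updates
    if on_road:
        if step % 150 == 0:
            var += SACCADE_VAR              # saccade degrades the representation
        obs = true_err + rng.normal(0.0, np.sqrt(OBS_VAR))
        gain = var / (var + OBS_VAR)        # Kalman update from this fixation
        est += gain * (obs - est)
        var *= 1.0 - gain
    if abs(est) > CRITERION:                # change detected -> control change
        true_err -= est                     # steering correction applied
        est = 0.0
        corrections.append(round(step * 0.01, 2))

print(len(corrections), "corrections, at t =", corrections[:5], "... (s)")
```

    Because no observations arrive during off-road glances, the belief goes stale and corrections cluster after gaze returns to the road, which is the qualitative mechanism the paper uses to explain glance-related lane keeping differences.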

    Gaze Strategies in Driving: An Ecological Approach

    Human performance in natural environments is deeply impressive, and still much beyond current AI. Experimental techniques, such as eye tracking, may be useful for understanding the cognitive basis of this performance, and “the human advantage.” Driving is a domain where these techniques may be deployed, in tasks ranging from rigorously controlled laboratory settings through high-fidelity simulations to naturalistic experiments in the wild. This research has revealed robust patterns that can be reliably identified and replicated in the field and reproduced in the lab. The purpose of this review is to cover the basics of what is known about these gaze behaviors, and some of their implications for understanding visually guided steering. The phenomena reviewed will be of interest to those working on any domain where visual guidance and control with similar task demands is involved (e.g., many sports). The paper is intended to be accessible to the non-specialist, without oversimplifying the complexity of real-world visual behavior. The literature reviewed provides an information base useful both for researchers working on oculomotor behaviors and physiology in the lab who wish to extend their research into more naturalistic locomotor tasks, and for researchers in more applied fields (sports, transportation) who wish to bring aspects of the real-world ecology under experimental scrutiny. As part of a Research Topic on gaze strategies in closed, self-paced tasks, this aspect of the driving task is discussed. In particular, it is emphasized why it is important to carefully separate the visual strategies of driving (quite closed and self-paced) from visual behaviors relevant to other forms of driver behavior (an open-ended menagerie of behaviors). There is always a balance to strike between ecological complexity and experimental control. One way to reconcile these demands is to look for natural, real-world tasks and behaviors that are rich enough to be interesting yet sufficiently constrained and well understood to be replicated in simulators and the lab. This ecological approach to driving as a model behavior, and the way the connection between “lab” and “real world” can be spanned in this research, will interest anyone keen to develop more ecologically representative designs for studying human gaze behavior.
    Peer reviewed

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking drivers’ states into account, e.g., whether the driver is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver state, and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.
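    To make the DCS concept more concrete, here is a deliberately simplified, hypothetical fusion step in which normalized indicator features from several non-obtrusive sensor channels are combined by a logistic model into a single driver-state probability. The feature names, weights, and model form are assumptions for illustration; the paper does not specify NextPerception's actual algorithms.

```python
import math

# Hypothetical DCS-style fusion: per-sensor indicator features are combined
# into one probability that the driver is not apt to drive. Names and
# weights are invented for illustration, not taken from the paper.
def driver_state_score(features, weights, bias=-1.0):
    """Logistic fusion of per-sensor features into one risk probability."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

features = {"heart_rate_variability": 0.4,  # wearable or seat-embedded sensor
            "gaze_dispersion": 0.7,         # driver-facing camera
            "steering_entropy": 0.6}        # vehicle CAN signals
weights = {"heart_rate_variability": -1.2,  # higher HRV suggests a calmer driver
           "gaze_dispersion": 1.5,
           "steering_entropy": 2.0}

print(f"P(driver not apt to drive) = {driver_state_score(features, weights):.2f}")
```

    In a real DMS the weights would be learned from labeled data and the output would feed HMI decisions such as warnings or handover requests; the point here is only the shape of the multi-sensor fusion.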

    Computational interaction models for automated vehicles and cyclists

    Cyclists’ safety is crucial for a sustainable transport system. Cyclists are considered vulnerable road users because they are not protected by a physical compartment around them. In recent years, passenger car occupants’ share of fatalities has been decreasing, but that of cyclists has actually increased. Most of the conflicts between cyclists and motorized vehicles occur at crossings where they cross each other’s path. Automated vehicles (AVs) are being developed to increase traffic safety and reduce human errors in driving tasks, including when they encounter cyclists at intersections. AVs use behavioral models to predict other road users’ behaviors and then plan their path accordingly. Thus, there is a need to investigate how cyclists interact and communicate with motorized vehicles in conflict scenarios such as unsignalized intersections. This understanding will be used to develop accurate computational models of cyclists’ behavior when they interact with motorized vehicles in conflict scenarios.

    The overall goal of this thesis is to investigate how cyclists communicate and interact with motorized vehicles in the specific conflict scenario of an unsignalized intersection. In the first of two studies, naturalistic data was used to model the cyclists’ decision whether to yield to a passenger car at an unsignalized intersection. Interaction events were extracted from the trajectory dataset, and cyclists’ behavioral cues were added from the sensory data. Both cyclists’ kinematics and visual cues were found to be significant in predicting who crossed the intersection first. The second study used a cycling simulator to acquire in-depth knowledge about cyclists’ behavioral patterns as they interacted with an approaching vehicle at the unsignalized intersection. Two independent variables were manipulated across the trials: difference in time to arrival at the intersection (DTA) and visibility condition (field-of-view distance). Results from the mixed-effects logistic model showed that only DTA affected the cyclist’s decision to cross before the vehicle. However, increasing the visibility at the intersection reduced the severity of the cyclists’ braking profiles. Both studies contributed to the development of computational models of cyclist behavior that may be used to support safe automated driving.

    Future work aims to find differences in cyclists’ interactions with different vehicle types, such as passenger cars, taxis, and trucks. In addition, the interaction process may also be evaluated from the driver’s perspective by using a driving simulator instead of a riding simulator. This setup would allow us to investigate how drivers respond to cyclists at the same intersection. The resulting data will contribute to the development of accurate predictive models for AVs.
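    As a simplified stand-in for the second study's analysis, the sketch below fits an ordinary logistic regression of the cyclist's pass-first decision on DTA and visibility, using synthetic data constructed so that only DTA matters, mirroring the reported finding. The actual study used a mixed-effects logistic model with per-participant random effects, which is omitted here for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
dta = rng.uniform(-2.0, 2.0, n)            # s; positive = cyclist arrives first
visibility = rng.choice([0.0, 1.0], n)     # 0 = restricted view, 1 = open view
# Synthetic ground truth in which DTA drives the decision and visibility
# barely does; both effect sizes here are invented for the demo.
p = 1.0 / (1.0 + np.exp(-(0.3 + 2.2 * dta + 0.05 * visibility)))
crossed_first = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([dta, visibility]))
fit = sm.Logit(crossed_first, X).fit(disp=False)
print(fit.summary(xname=["const", "DTA", "visibility"]))
```

    On such data the DTA coefficient comes out large and significant while the visibility coefficient does not, which is the pattern of inference the study drew from its mixed-effects version of this model.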