
    Exploration of the SHRP 2 Naturalistic Driving Study: Development of a Distracted Driving Prediction Model

    The SHRP 2 NDS project was the largest naturalistic driving study ever conducted. The data obtained from the study were released to the researching public in 2014 through the project’s InSight webpage. The objectives of this research were first to explore the massive dataset and then to determine whether prediction models based on several performance measures could be developed to study driver distraction. Time-series data on GPS speed, lateral and longitudinal acceleration, throttle position, and yaw rate were identified as five appropriate performance measures available from the NDS for this purpose. Using these data, the objective was to predict whether a driver was engaged in any of three specific groups of distracting tasks or in no secondary task at all. The specific distracting tasks examined were: talking or listening on a hand-held phone, texting or dialing on a hand-held phone, and interacting with an adjacent passenger. Multiple logistic regression was the statistical method used to determine the odds of a driver being engaged in one of the secondary tasks given the corresponding driving performance data. The results indicated differences in the driving performance measures when the driver was engaged in a secondary task. The intent of this research was to determine whether those differences could be used to develop models that adequately predict when a driver was engaged in the three secondary tasks of interest. The results of the MLR tests indicate this data could not be used to develop prediction models with statistically significant predictive power. Future work should focus on comparing these results to prediction models developed using alternatives to multiple logistic regression.
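The modeling setup described in this abstract can be sketched in miniature: a binary logistic regression relating a driving-performance predictor to the odds of secondary-task engagement. The code below is a pure-Python illustration with synthetic data and an invented stand-in feature, not SHRP 2 data; recall the study found such models lacked significant predictive power, so this shows only the method's shape.

```python
import math
import random

# Pure-Python sketch of binary logistic regression. The single feature is an
# invented stand-in for one NDS performance measure (e.g. yaw-rate level);
# the data is synthetic, not SHRP 2 data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit (bias, weights) by stochastic gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
            err = p - yi                      # gradient of log-loss wrt the logit
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return b, w

def odds_ratio(weight):
    """exp(beta): multiplicative change in the odds per unit of the predictor."""
    return math.exp(weight)

random.seed(0)
labels = [0, 1] * 50                          # 0 = no secondary task, 1 = distracted
X = [[random.gauss(1.0 if lab else 0.0, 0.5)] for lab in labels]
bias, weights = fit_logistic(X, labels)
```

An odds ratio above 1 for a predictor means higher values of that measure raise the estimated odds of secondary-task engagement, which is how the abstract's "odds of a driver being engaged" framing is read.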

    Driver Engagement In Secondary Tasks: Behavioral Analysis and Crash Risk Assessment

    Distracted driving has long been acknowledged as one of the leading causes of death and injury in roadway crashes. Past research has focused mainly on the change in driving performance due to distracted driving; only a few studies have attempted to predict the type of distraction based on driving performance measures. In addition, past studies have shown that driving performance is influenced by drivers’ socioeconomic characteristics, yet few have attempted to quantify that influence. This study utilizes the rich SHRP 2 Naturalistic Driving Study (NDS) database to (a) develop a model for detecting the likelihood of a driver’s involvement in secondary tasks from distinctive attributes of driving performance, and (b) develop a grading system to quantify the crash risk associated with socioeconomic characteristics and distracted driving. The results show that the developed neural network models were able to detect drivers’ involvement in calling, texting, and passenger interaction with an accuracy of 99.6%, 99.1%, and 100%, respectively, indicating that the selected driving performance attributes were effective in detecting the associated secondary tasks. The grading system was built on three main parameters: the crash risk coefficient, the significance level coefficient, and the category contribution coefficient. Ultimately, each driver’s crash risk index could be calculated based on his or her socioeconomic characteristics. The developed detection models and the systematic grading process could help insurance companies identify a driver’s probability of engaging in distracted driving and assist states’ Departments of Transportation in developing cellphone-ban regulations.
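The abstract names the three grading coefficients but not how they combine. As a purely hypothetical illustration, the sketch below assumes the crash risk index is a sum, over socioeconomic categories, of the product of the three coefficients; the combination rule and every number are invented.

```python
# Hypothetical grading sketch. The combination rule (sum of per-category
# products) and all values are assumptions for illustration; the abstract
# names the coefficients but does not give the formula.

def crash_risk_index(categories):
    """Sum, over socioeconomic categories, of
    crash-risk x significance-level x category-contribution coefficients."""
    return sum(c["crash_risk"] * c["significance"] * c["contribution"]
               for c in categories)

driver_profile = [
    {"crash_risk": 1.4, "significance": 0.95, "contribution": 0.30},  # e.g. age band
    {"crash_risk": 1.1, "significance": 0.80, "contribution": 0.20},  # e.g. annual mileage
]
index = crash_risk_index(driver_profile)
```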

    Validation of the Static Load Test for Event Detection During Hands-Free Conversation

    Objective. To determine whether visual event reaction times (RTs) during hands-free conversation conditions in the Enhanced Static Load Test (ESLT) can predict RTs under similar conditions in on-road driving. Methods. Brake reaction times to random center and side light events were measured while watching a driving video, attempting to keep a marker in the center of the lane with a steering wheel, answering the phone by pressing a button, and carrying on neutral or angry conversations in covert (silent) or overt mode on a hands-free phone device. Open-road tests were conducted in traffic for subjects with similar side and front light events, with foot reaction times measured while engaged in the same secondary tasks and conditions. Results. Mean RTs for the task segments in the lab were predictive of the mean RTs for the corresponding task segments in the on-road test (r = 0.90, df = 16, p < 0.000001). Conclusion. This study validates the Enhanced Static Load Test as predictive of visual event RTs during open-road driving for the range of experimental conditions and tasks considered.
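The validation statistic here is a Pearson correlation between lab and on-road mean RTs (r = 0.90, df = 16, i.e., 18 segment pairs). A minimal pure-Python version of that computation, with invented RT values standing in for the study's segment means:

```python
import math

# Pearson correlation between lab and on-road mean reaction times.
# The RT values below are illustrative, not the study's data.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lab_rt  = [0.45, 0.52, 0.61, 0.70, 0.80]   # mean RT per task segment, lab (s)
road_rt = [0.50, 0.58, 0.66, 0.78, 0.85]   # mean RT per task segment, on-road (s)
r = pearson_r(lab_rt, road_rt)
```

With df = n − 2 = 16, an r of 0.90 is far beyond conventional significance thresholds, which is the basis for the reported p < 0.000001.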

    Effects of Reading Text While Driving: A Driving Simulator Study

    Although 47 US states make the use of a mobile phone while driving illegal, many people use their phone for texting and other tasks while driving. This research project summarized the large literature on distracted driving and compared major outcomes with those of our study. We focused on distraction due to reading text because this activity is the most common. For this project, we collected simulator observations of 203 professional taxi drivers (175 male, 28 female) working at the same Honolulu taxi company, using the mid-range driving simulator VS500M by Virage. After a familiarization period, drivers were asked to read realistic text content relating to passenger pick-up displayed on a 7-inch tablet affixed to the dashboard. The experimental scenario was simulated on a two-lane rural highway with a speed limit of 60 mph and medium traffic. Drivers needed to follow the lead vehicle under regular and text-reading conditions. The large sample size of this study provided a strong statistical base for investigating driving distraction on a driving simulator. The comparison between regular and text-reading conditions revealed that drivers significantly increased their headway (20.7%), lane deviations (354%), total time of driving blind (352%), maximum duration of driving blind (87.6% per glance), driving blind incidents (170%), and driving blind distance (337%), and significantly decreased lane change frequency (35.1%). There was no significant effect on braking aggressiveness while reading text. The outcomes indicate that driving performance degrades significantly when reading text while driving. Additional analysis revealed that important predictors of changes in maximum driving blind time are sociodemographic characteristics, such as age and race, and past behavior attributes.
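All of the reported effects are percent changes from the regular-driving baseline to the text-reading condition. The helper below reproduces that arithmetic with hypothetical headway values; only the 20.7% figure comes from the study, since the abstract does not give raw measurements.

```python
# Signed percent change from a baseline condition to a distracted condition.
# Example values are hypothetical; only the 20.7% figure is from the study.

def percent_change(baseline, distracted):
    """Percent change of the distracted measurement relative to baseline."""
    return (distracted - baseline) / baseline * 100.0

# A 20.7% headway increase, e.g. from a hypothetical 2.0 s to 2.414 s:
headway_change = percent_change(2.0, 2.414)
```

A negative result corresponds to a decrease, as with the 35.1% drop in lane change frequency.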

    Miles away. Determining the extent of secondary task interference on simulated driving

    There is a seemingly perennial debate in the literature about the relative merits of using a secondary task as a measure of spare attentional capacity. One of the main drawbacks is that it could adversely affect the primary task, or other measures of mental workload. The present experiment therefore addressed an important methodological issue for the dual-task experimental approach – that of secondary task interference. The current experiment recorded data in both single- and dual-task scenarios to ascertain the level of secondary task interference in the Southampton Driving Simulator. The results indicated that a spatial secondary task did not have a detrimental effect on driving performance, although it consistently inflated subjective mental workload ratings. However, the latter effect was so consistent across all conditions that it was not considered to pose a problem. General issues of experimental design, as well as wider implications of the findings for multiple resources theory, are discussed.

    Factors that influence visual attention and their effects on safety in driving: an eye movement tracking approach

    Statistics show that a high percentage of road-related accidents are due to factors that cause impaired driving. Since information extraction in driving is predominantly a visual task, visual distraction and its implications are important safety issues. The main objective of this research is to study some of the implications of demands on human attention and perception and how they affect performance of tasks such as driving. Specifically, the study aims to determine the changes that occur in the visual behavior of drivers with different levels of driving experience by tracking the movement of the eye; examine the effects of different levels of task complexity on visual fixation strategies and visual stimulus recognition; investigate the effects of a secondary task on attentional and visual focus and its impact on driving performance; and evaluate the road-safety implications of using an information technology device (cellular phone) while driving. Thirty-eight students participated in the study, which consisted of two experiments. In the first experiment, the participants performed two driving sessions while wearing a head-mounted eye tracking device. The second experiment involved driving while engaging in a cellular phone conversation. Fixation location, frequency, duration, and saccadic path were used to analyze eye movements. The study shows that differences in visual behavior of drivers exist: drivers with infrequent driving per week fixated more on the dashboard area than on the front view (F(3,26) = 3.53, p < 0.05), in contrast to drivers with more frequent vehicle use per week, for whom higher fixations were recorded in the front/center view (F(3,26) = 4.26). The degree of visual distraction contributes to the deterioration of driving, resulting in 55% more driving errors committed. More time with no detected fixation was observed when driving with distraction (detected-fixation time falling from 96% to 91% for drivers with less frequent vehicle use and from 55% to 44% for drivers with more frequent use). The number of pre-identified errors committed increased from 64 to 81, due to the effect of visual tunneling. This research presents objective data that strengthens the argument on the detrimental effects of distraction in driving.
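The group comparisons above are reported as one-way ANOVA F-tests (e.g. F(3,26) = 3.53). A compact pure-Python F computation, shown on made-up groups rather than the study's fixation data:

```python
# One-way ANOVA F statistic in plain Python. Groups are lists of observations;
# the data used in testing is synthetic, not the study's fixation counts.

def anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

An F(3,26) statistic implies four groups (df_between = 3) and 30 total observations (df_within = 26), matching four fixation-region groups across the participants analyzed.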

    Are Distracted Drivers Aware That They Are Distracted? Exploring Awareness, Self-Regulation, and Performance in Drivers Performing Secondary Tasks

    Research suggests that driving while talking on a mobile telephone causes drivers to fail to respond to important events but has a smaller effect on their lane-keeping ability. This pattern is similar to research on night driving and suggests that problems associated with distraction may parallel those of night driving. Here, participants evaluated their driving performance before and after driving a simulated curvy road under different distraction conditions. In Experiment 1, drivers failed to appreciate their distraction-induced performance decrements and did not recognize the dissociation between lane-keeping and identification. In Experiment 2, drivers did not adjust their speed to offset being distracted. Continuous feedback that steering skills are robust to distraction may prevent drivers from being aware that they are distracted.

    Shared Input Multimodal Mobile Interfaces: Interaction Modality Effects on Menu Selection in Single-task and Dual-task Environments

    Audio and visual modalities are two common output channels in the user interfaces embedded in today's mobile devices. However, these user interfaces typically center on the visual modality as the primary output channel, with audio output serving a secondary role. This paper argues for an increased need for shared input multimodal user interfaces for mobile devices. A shared input multimodal interface can be operated independently using a specific output modality, leaving users to choose the preferred method of interaction in different scenarios. We evaluate the value of a shared input multimodal menu system in both a single-task desktop setting and in a dynamic dual-task setting, in which the user was required to interact with the shared input multimodal menu system while driving a simulated vehicle. Results indicate that users were faster at locating a target item in the menu when visual feedback was provided in the single-task desktop setting, but in the dual-task driving setting, visual output presented a significant source of visual distraction that interfered with driving performance. In contrast, auditory output mitigated some of the risk associated with menu selection while driving. A shared input multimodal interface allows users to take appropriate advantage of multiple feedback modalities, providing a better overall experience.
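The core design idea, one input method whose feedback channel depends on context, can be sketched as a toy menu. The class and all names below are illustrative, not the paper's implementation.

```python
# Toy sketch of a shared-input menu: the same navigation input produces visual
# feedback at the desktop and auditory feedback while driving. Illustrative
# names only; not the paper's actual system.

class SharedInputMenu:
    def __init__(self, items):
        self.items = items
        self.cursor = 0

    def move(self, step, context):
        """Advance the cursor and return the context-appropriate feedback."""
        self.cursor = (self.cursor + step) % len(self.items)
        return self.feedback(context)

    def feedback(self, context):
        item = self.items[self.cursor]
        if context == "driving":                       # eyes on the road
            return ("audio", f"say '{item}'")
        return ("visual", f"highlight '{item}'")       # eyes on the screen

menu = SharedInputMenu(["call", "text", "music"])
```

Because the input protocol (here, `move`) is identical in both contexts, a user can transfer menu knowledge between the desktop and driving settings, which is the "shared input" property the paper emphasizes.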

    The Virtual Driver: Integrating Physical and Cognitive Human Models to Simulate Driving with a Secondary In-Vehicle Task.

    Models of human behavior provide insight into people’s choices and actions and form the basis of engineering tools for predicting performance and improving interface design. Most human models are either cognitive, focusing on the information processing underlying the decisions made when performing a task, or physical, representing postures and motions used to perform the task. In general, cognitive models contain a highly simplified representation of the physical aspects of a task and are best suited for analysis of tasks with only minor motor components. Physical models require a person experienced with the task and the software to enter detailed information about how and when movements should be made, a process that can be costly, time-consuming, and inaccurate. Many tasks have both cognitive and physical components, which may interact in ways that could not be predicted using a cognitive or physical model alone. This research proposes a solution by combining a cognitive model, the Queuing Network – Model Human Processor (QN-MHP), and a physical model, the Human Motion Simulation (HUMOSIM) Framework, to produce an integrated cognitive-physical human model that makes it possible to study complex human-machine interactions. The physical task environment is defined using the HUMOSIM Framework, which communicates relevant information such as movement times and difficulty to the QN-MHP. Action choice and movement sequencing are performed in the QN-MHP. The integrated model’s more natural movements, generated by motor commands from the QN-MHP, and more realistic cognitive decisions, made using physical information from the Framework, make it useful for evaluating different designs for tasks, spaces, systems, and jobs. The Virtual Driver is the application of the integrated model to driving with an in-vehicle task. A driving simulator experiment was used to tune and evaluate the integrated model. Increasing the visual and physical difficulty of the in-vehicle task affected the resource-sharing strategies drivers used and resulted in deterioration in driving and in-vehicle task performance, especially for shorter drivers. The Virtual Driver replicates basic driving, in-vehicle task, and resource-sharing behaviors and provides a new way to study driver distraction. The model has applicability to interface design and predictions about staffing requirements and performance.
    Ph.D. Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75847/1/hjaf_1.pd
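A heavily simplified sketch of the cognitive/physical split described above: a physical component estimates movement time (here via Fitts's law, a common basis for such estimates, though not necessarily what the HUMOSIM Framework uses), and a cognitive scheduler, standing in for the QN-MHP's sequencing role, picks actions against a time budget. All names, constants, and the budget rule are assumptions for illustration.

```python
import math

# Simplified cognitive/physical integration sketch. The Fitts's-law constants
# and the glance-budget scheduling rule are illustrative assumptions, not the
# QN-MHP or HUMOSIM Framework implementations.

def movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's-law estimate (s) of reach time to a target of a given width."""
    return a + b * math.log2(2.0 * distance / width)

class CognitiveScheduler:
    """Picks the quickest in-vehicle action that fits the available time
    budget -- a stand-in for the cognitive model's action-sequencing role."""
    def next_action(self, actions, budget):
        feasible = [act for act in actions if act["time"] <= budget]
        return min(feasible, key=lambda act: act["time"]) if feasible else None

# The physical estimate feeds the cognitive choice:
reach_time = movement_time(distance=0.4, width=0.02)   # reach to a dashboard control
```

The one-way flow shown here (physical estimate in, action choice out) mirrors the paper's description of the Framework communicating movement times and difficulty to the QN-MHP.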