
    Auditory Distractors in the Visual Modality: No Evidence for Perceptual Load Hypothesis or Auditory Dominance

    Attention is a valuable resource with limited capacity, so knowing what will distract us during important tasks can be crucial in life. There is considerable support for the Perceptual Load Hypothesis (PLH) when examining visual distractibility; however, less research has examined whether PLH can predict auditory distractibility. Participants in the current study completed three experiments using visual selective attention tasks while being presented with auditory and visual distractions under low/high perceptual loads. In Experiment 1, I took the visual selective attention task from Robinson et al. (2018), shortened the stimulus presentation, and added a no-distractor baseline condition. In Experiment 2, I increased auditory distractor effects by requiring participants to periodically respond to the auditory information. In Experiment 3, I added a working memory task to increase cognitive load. Results showed no support for PLH with auditory distractors in Experiments 1 or 2, and instead showed the opposite pattern, with auditory distractors having a larger effect under high perceptual load (Experiment 2). Results from Experiment 3 show that increasing cognitive load had no effect on distractibility, which suggests the results from Experiment 2 were caused by periodically responding to the auditory stimuli. These findings have important implications for our understanding of selective attention and shed light on tasks that require the processing of multisensory information.
    No embargo. Academic Major: Psycholog

    EYE MOVEMENT BEHAVIORS IN A DRIVING SIMULATOR DURING SIMPLE AND COMPLEX DISTRACTIONS

    Road accidents frequently occur worldwide due to driving distractions. To address this problem, a driving simulator was created to explore the cognitive effects of distractions while driving. The purpose of this study is to discover which elements cause distraction and how they affect driving performance. The simulator offers a secure and regulated setting for carrying out tests under different visual distractions, such as solving mathematical equations and memorizing numbers. Several trials were conducted under varied circumstances, such as different driving scenery and different displayed distractions. The cognitive load of distractions was assessed using a Tobii Pro Fusion eye tracker, which records participants' eye movements and pupil dilation to detect distraction events. To ascertain how distractions affect driving behavior, the simulator also gathered data on driving performance, such as steering wheel movements, and recorded users' responses to the distractions to gauge how much attention they received. The preliminary findings of this study will shed light on the cognitive effects of driving distractions as well as the causes of driver distraction. With this information, initiatives and interventions can be created to lower the prevalence of distracted driving and increase road safety. The results of this pilot study may also aid in the creation of safer standards for using electronic devices while driving and better driver training programs.
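As a rough illustration of the kind of pupil-based detection the study describes, the sketch below flags a sample as a distraction event when pupil diameter jumps well above a rolling baseline. The window size, z-score threshold, and sample values are assumptions for illustration, not the study's actual method or data.

```python
from statistics import mean, stdev

def detect_distraction_events(pupil_mm, window=5, z_thresh=2.0):
    """Return indices where pupil diameter deviates strongly upward
    from the mean of the preceding `window` samples."""
    events = []
    for i in range(window, len(pupil_mm)):
        base = pupil_mm[i - window:i]
        mu, sd = mean(base), stdev(base)
        if sd > 0 and (pupil_mm[i] - mu) / sd > z_thresh:
            events.append(i)
    return events

# Steady baseline around 3.0 mm, then a sharp dilation at index 6.
samples = [3.0, 3.01, 2.99, 3.02, 3.0, 3.01, 3.6, 3.0]
print(detect_distraction_events(samples))  # → [6]
```

In practice a detector like this would run over a denoised, blink-interpolated signal; the rolling-baseline comparison is only the simplest possible starting point.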

    Bright-light distractions and visual performance

    Visual distractions pose a significant risk to transportation safety, with laser attacks against aircraft pilots being a common example. This study used a research-grade High Dynamic Range (HDR) display to produce bright-light distractions for 12 volunteer participants performing a combined visual task across central and peripheral visual fields. The visual scene had an average luminance of 10 cd·m⁻² with targets of approximately 0.5° angular size, while the distractions had a maximum luminance of 9,000 cd·m⁻² and were 3.6° in size. The dependent variables were the mean fixation duration during task execution (representative of information processing time) and the critical stimulus duration required to support a target level of performance (representative of task efficiency). The experiment found a statistically significant increase in mean fixation duration, rising from 192 ms without distractions to 205 ms with bright-light distractions (p = 0.023). This indicates a decrease in the visibility of the low-contrast targets, or an increase in cognitive workload, that required greater processing time for each fixation in the presence of the bright-light distractions. Mean critical stimulus duration was not significantly affected by the distraction conditions used in this study. Future experiments are suggested to replicate driving and/or piloting tasks and employ bright-light distractions based on real-world data, and we advocate the use of eye-tracking metrics as sensitive measures of changes in performance.
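The reported comparison (192 ms vs. 205 ms across 12 participants, p = 0.023) is consistent with a paired-samples test. As a hedged sketch, the snippet below computes a paired t statistic on made-up per-participant fixation durations; both the data and the choice of test are assumptions, since the abstract publishes neither.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(after, before):
    """Paired-samples t statistic for two equal-length condition lists."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Invented per-participant mean fixation durations (ms), built to mirror
# the reported ~13 ms increase under bright-light distraction.
baseline   = [190, 192, 194, 191, 189, 196]
distracted = [203, 204, 208, 204, 201, 210]
t = paired_t(distracted, baseline)
print(round(t, 1))  # → 35.6 (large positive t: distraction lengthens fixations)
```

The t value here is far larger than the study's would be, because the invented differences are nearly uniform; real fixation data vary much more between participants.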

    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to improved accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to the baseline SVM and neural network models.
    Comment: INTERSPEECH 201
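A minimal sketch of what a second-order polynomial fusion of modality features can look like: first-order (concatenated) terms plus all pairwise cross-modal products. The paper's layer is learned end-to-end inside a network; this non-learned version only shows the feature expansion, and the feature values are invented.

```python
from itertools import combinations

def polynomial_fusion(modalities):
    """Concatenate unimodal features with all pairwise cross-modal products."""
    fused = [x for m in modalities for x in m]        # first-order terms
    for a, b in combinations(modalities, 2):          # second-order terms
        fused.extend(x * y for x in a for y in b)     # flattened outer product
    return fused

# Invented toy feature vectors for the three modalities in the paper.
face, speech, car = [0.2, 0.5], [1.0, -1.0], [0.3]
fused = polynomial_fusion([face, speech, car])
print(len(fused))  # → 13  (5 first-order + 4 + 2 + 2 cross-modal terms)
```

The cross-modal product terms are what let a downstream linear layer model interactions (e.g., a facial cue mattering only when speech is present) that plain concatenation cannot express.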

    A Fuzzy-Logic Approach to Dynamic Bayesian Severity Level Classification of Driver Distraction Using Image Recognition

    Detecting and classifying driver distractions is crucial in the prevention of road accidents. These distractions impact both driver behavior and vehicle dynamics. Knowing the degree of driver distraction can aid in accident prevention techniques, including transitioning control to a level 4 semi-autonomous vehicle when a high distraction severity level is reached. Thus, enhancement of Advanced Driving Assistance Systems (ADAS) is a critical component in the safety of vehicle drivers and other road users. In this paper, a new methodology is introduced, using an expert knowledge rule system to predict the severity of distraction in a contiguous set of video frames using the Naturalistic Driving American University of Cairo (AUC) Distraction Dataset. From a multi-class distraction system comprising face orientation, driver activity, hands, and previous driver distraction, a severity classification model is developed as a discrete dynamic Bayesian (DDB) network. Furthermore, a Mamdani-based fuzzy system was implemented to map the detected distraction classes to a severity level of safe, careless, or dangerous driving; if a high severity level is reached, the semi-autonomous vehicle takes control. The results further show that, in a multi-class distraction context, some instances of driver distraction may quickly transition from careless to dangerous driving.
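As a toy illustration of the fuzzy severity step (not the paper's actual rule base), the sketch below maps a continuous distraction score in [0, 1] to the paper's three severity labels via triangular memberships and an argmax readout. A full Mamdani system would aggregate many rules and defuzzify a combined output surface; all membership breakpoints here are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_severity(score):
    """Pick the severity label with the highest membership for `score`."""
    memberships = {
        "safe":      tri(score, -0.5, 0.0, 0.5),
        "careless":  tri(score,  0.2, 0.5, 0.8),
        "dangerous": tri(score,  0.5, 1.0, 1.5),
    }
    return max(memberships, key=memberships.get)

print(classify_severity(0.1))   # → safe
print(classify_severity(0.55))  # → careless
print(classify_severity(0.9))   # → dangerous
```

Overlapping memberships are the point of the fuzzy approach: a score near 0.5 has partial membership in both "careless" and "dangerous", which is how gradual transitions between severity levels are represented.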

    Human-Centric Detection and Mitigation Approach for Various Levels of Cell Phone-Based Driver Distractions

    Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver's attention away from the primary task of driving, which can endanger the safety of the driver and passenger(s), as well as pedestrians. According to several traffic safety administration organizations, distracted and inattentive driving are the primary causes of vehicle crashes or near crashes. In this research, a novel approach to detect and mitigate various levels of driving distractions is proposed. This approach consists of two main phases: (i) a system to detect various levels of driver distraction (low, medium, and high) using machine learning techniques; (ii) mitigation of the effects of driver distraction through integration of the distraction detection algorithm with existing vehicle safety systems. In phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head-pose analysis package in MATLAB, and the model was trained and validated to detect different operator distraction levels. In phase 2, the detected level of distraction, time to collision (TTC), lane position (LP), and steering entropy (SE) were fed to a vehicle safety controller that provides an appropriate action to maintain and/or mitigate vehicle safety status. The integrated detection algorithm and vehicle safety controller were then prototyped using MATLAB/SIMULINK for validation. A complete vehicle powertrain model including the driver's interaction was replicated, and the outcome of the detection algorithm was fed into the vehicle safety controller. The results show that the vehicle safety controller reacted and mitigated the vehicle safety status in a closed-loop, real-time fashion.
The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes arising from both the driver and the vehicle system. The approach was applied to mitigate the impact of visual and cognitive distractions on driver performance.
Dissertation/Thesis. Doctoral Dissertation, Applied Psychology 201
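A hypothetical sketch of the phase-2 controller logic: combining the detected distraction level with TTC, lane position, and steering entropy to choose a mitigation action. All thresholds and action names are illustrative assumptions, not the dissertation's actual MATLAB/SIMULINK controller.

```python
def mitigation_action(level, ttc_s, lane_offset_m, steering_entropy):
    """Map distraction level and vehicle state to a safety response.

    level            -- detected distraction level: "low", "medium", or "high"
    ttc_s            -- time to collision, seconds
    lane_offset_m    -- lateral offset from lane center, meters
    steering_entropy -- irregularity of steering input, 0..1
    All thresholds below are invented for illustration.
    """
    if level == "high" and (ttc_s < 2.0 or abs(lane_offset_m) > 0.9):
        return "automatic braking"
    if level in ("medium", "high") and steering_entropy > 0.6:
        return "lane-keep assist + alert"
    if level != "low":
        return "audible warning"
    return "monitor"

print(mitigation_action("high", 1.5, 0.2, 0.3))    # → automatic braking
print(mitigation_action("medium", 4.0, 0.1, 0.7))  # → lane-keep assist + alert
print(mitigation_action("low", 5.0, 0.0, 0.1))     # → monitor
```

Ordering the rules from most to least severe makes the controller fail safe: when several conditions hold at once, the strongest intervention wins.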

    Applying psychological science to the CCTV review process: a review of cognitive and ergonomic literature

    As CCTV cameras are used more and more often to increase security in communities, police are spending a larger proportion of their resources, including time, on processing CCTV images when investigating crimes that have occurred (Levesley & Martin, 2005; Nichols, 2001). As with all tasks, there are ways of approaching this task that will facilitate performance and others that will degrade it, either by increasing errors or by unnecessarily prolonging the process. A clearer understanding of the psychological factors influencing the effectiveness of footage review will facilitate future training in best practice for the review of CCTV footage. The goal of this report is to provide such understanding by reviewing research on footage review, research on related tasks that require similar skills, and experimental laboratory research on the cognitive skills underpinning the task. The report is organised to address five challenges to the effectiveness of CCTV review: the effects of the degraded nature of CCTV footage, distractions and interruptions, the length of the task, inappropriate mindset, and variability in people's abilities and experience. Recommendations for optimising CCTV footage review include (1) conducting a cognitive task analysis to increase understanding of the ways in which performance might be limited, (2) exploiting technology advances to maximise the perceptual quality of the footage, (3) training people to improve the flexibility of their mindset as they perceive and interpret the images seen, (4) monitoring performance either on an ongoing basis, using psychophysiological measures of alertness, or periodically, by testing screeners' ability to find evidence in footage developed for such testing, and (5) evaluating the relevance of possible selection tests to distinguish effective from ineffective screeners.

    Preference in the harried eye of the beholder: the effect of time pressure and task motivation.

    We report a study in which eye-tracking data were gathered to examine the impact of time pressure and task motivation on the flow of visual attention during choice processing from a naturalistic stimulus-based product display. We find patterns of adaptation of visual attention to time pressure in terms of acceleration, filtration, and strategy shift that have not been reported previously. In addition, we find, regardless of condition, strong correlations between visual attention to the brands in the choice set and preference for the brands. Results are discussed in terms of strategic and non-strategic information acquisition during stimulus-based choice, and implications for attention theory are offered.
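The reported attention–preference relationship can be illustrated with a simple Pearson correlation between per-brand fixation counts and preference ratings; the numbers below are invented for illustration, not the study's data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

fixations  = [12, 30, 18, 25, 8]        # invented fixation counts per brand
preference = [2.1, 4.5, 3.0, 4.0, 1.5]  # invented preference ratings
r = pearson(fixations, preference)
print(r > 0.9)  # → True: strongly positive, mirroring the reported pattern
```

A correlation like this cannot distinguish whether attention drives preference or preference drives attention, which is exactly the strategic vs. non-strategic acquisition question the abstract raises.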