
    Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue

    The control of technological systems by human operators has been the object of study for many decades. The increasing complexity of such systems in the digital age has made optimizing the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, the processes of attention play a crucial role; in the contributions to this thematic issue, these processes are captured with eye-tracking devices.

    Eye tracking during car driving has been investigated extensively for many decades (e.g. Lappi & Lehtonen, 2013; Grüner & Ansorge, 2017). In the present special issue, Cvahte Ojsteršek & Topolšek (2019) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and greater interdisciplinarity among researchers. Whereas most studies investigate bottom-up processes of covert attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (2019) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. Their results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task.

    An expanding area of technological development is autonomous driving, where the actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (2021) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. Combining advanced gaze-tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits that is applicable to realistic driving scenarios.

    Eye-Computer Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (2020) conducted two experiments to explore the optimum target size and gaze-triggering dwell time in ECI (a schematic sketch of dwell-time selection follows the reference list below). Their results have exemplary application value for future interface design.

    Aircraft training and pilot selection are commonly performed on simulators, which makes it possible to study human capabilities and their limitations in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlačić, Knežević, Mandal, Rođenkov & Vitsas (2020) propose a network approach with three target measures describing the individual saccade strategies of the participants in their study. In their analysis of the cognitive load of pilots, Babu, JeevithaShree, Prabhakar, Saluja, Pashilkar & Biswas (2019) investigated the ocular parameters of 14 pilots in a simulator and during test flights in an aircraft during air-to-ground attack training. Their results showed that ocular parameters differ significantly between flying conditions and correlate significantly with altitude gradients during air-to-ground dive training tasks.

    In maritime training, the use of simulators is mandatory under international regulations. Mao, Li, Hildre & Zhang (2019) performed a study of crane lifting and compared novice and expert operators; similarities and dissimilarities in the eye behavior of novices and experts are outlined and discussed. The study by Atik & Arslan (2019) captures and analyzes eye-movement data of ship officers with sea experience during simulation exercises for assessing competency. Significant differences were found between the electronic navigation competencies of expert and novice ship officers, and the authors demonstrate that eye-tracking technology is a valuable tool for the assessment of electronic navigation competency. The focus of the study by Atik (2020) is the assessment and training of the situational awareness of ship officers in naval Bridge Resource Management. The study shows that eye tracking provides the assessor with important novel data in simulator-based maritime training, such as the focus of attention, which is a decisive factor for the effectiveness of Bridge Resource Management training.

    The research presented in the articles of this special thematic issue covers many different areas of application and involves specialists from different fields, but it converges on repeated demonstrations of the usefulness of measuring attentional processes by eye movements and of using gaze parameters to control complex technological devices. Together, the articles share the common goal of improving the potential and safety of technology in the digital age by fitting it to human capabilities and limitations.

    References

    Atik, O. (2020). Eye tracking for assessment of situational awareness in bridge resource management training. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.7
    Atik, O., & Arslan, O. (2019). Use of eye tracking for assessment of electronic navigation competency in maritime training. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.2
    Babu, M. D., JeevithaShree, D. V., Prabhakar, G., Saluja, K. P. S., Pashilkar, A., & Biswas, P. (2019). Estimating pilots' cognitive load from ocular parameters through simulation and in-flight studies. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.3
    Cvahte Ojsteršek, T., & Topolšek, D. (2019). Eye tracking use in researching driver distraction: A scientometric and qualitative literature review approach. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.5
    Grüner, M., & Ansorge, U. (2017). Mobile eye tracking during real-world night driving: A selective review of findings and recommendations for future research. Journal of Eye Movement Research, 10(2). https://doi.org/10.16910/jemr.10.2.1
    Lappi, O., & Lehtonen, E. (2013). Eye-movements in real curve driving: pursuit-like optokinesis in vehicle frame of reference, stability in an allocentric reference coordinate system. Journal of Eye Movement Research, 6(1). https://doi.org/10.16910/jemr.6.1.4
    Mao, R., Li, G., Hildre, H. P., & Zhang, H. (2019). Analysis and evaluation of eye behavior for marine operation training - A pilot study. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.6
    Niu, Y.-F., Gao, Y., Xue, C.-Q., Zhang, Y.-T., & Yang, L.-X. (2020). Improving eye–computer interaction interface design: Ergonomic investigations of the optimum target size and gaze-triggering dwell time. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.8
    Schnebelen, D., Charron, C., & Mars, F. (2021). Model-based estimation of the state of vehicle automation as derived from the driver's spontaneous visual strategies. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.10
    Tuhkanen, S., Pekkanen, J., Lehtonen, E., & Lappi, O. (2019). Effects of an active visuomotor steering task on covert attention. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.1
    Vlačić, S. I., Knežević, A. Z., Mandal, S., Rođenkov, S., & Vitsas, P. (2020). Improving the pilot selection process by using eye-tracking tools. Journal of Eye Movement Research, 12(3). https://doi.org/10.16910/jemr.12.3.
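    As a concrete illustration of the dwell-time mechanism that Niu et al. (2020) investigated, the sketch below shows one common way a gaze-triggered selection loop can be organized. It is a minimal sketch, not code from the paper: the Target class, the circular hit test, and all sizes and thresholds are assumptions made here for illustration.

```python
# Minimal sketch of gaze-triggered ("dwell time") target selection.
# All names, sizes, and thresholds are illustrative assumptions,
# not values from Niu et al. (2020).
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float       # centre x, degrees of visual angle
    y: float       # centre y, degrees of visual angle
    radius: float  # half the target size, degrees

def hits(target: Target, gx: float, gy: float) -> bool:
    """True if the gaze sample (gx, gy) falls inside the target."""
    return (gx - target.x) ** 2 + (gy - target.y) ** 2 <= target.radius ** 2

def dwell_select(samples, targets, dwell_ms: float, sample_ms: float):
    """Return the first target fixated continuously for >= dwell_ms.

    samples   -- iterable of (gx, gy) gaze positions
    dwell_ms  -- dwell-time threshold that triggers selection
    sample_ms -- tracker sampling interval (e.g. 4 ms at 250 Hz)
    """
    current, elapsed = None, 0.0
    for gx, gy in samples:
        looked_at = next((t for t in targets if hits(t, gx, gy)), None)
        if looked_at is not None and looked_at is current:
            elapsed += sample_ms
            if elapsed >= dwell_ms:
                return current                 # threshold reached: select
        else:
            current, elapsed = looked_at, 0.0  # gaze moved: restart timer
    return None

# Hypothetical usage: 150 samples at 250 Hz (600 ms) on the "ok" button.
buttons = [Target("ok", 0.0, 0.0, 1.0), Target("cancel", 4.0, 0.0, 1.0)]
gaze = [(0.1, 0.0)] * 150
print(dwell_select(gaze, buttons, dwell_ms=500, sample_ms=4))
```

    The trade-off probed by the paper's two experiments is visible in the two parameters: a larger radius (target size) makes the hit test more forgiving, while a longer dwell_ms reduces accidental "Midas touch" selections at the cost of slower interaction.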

    Convergent? Minds? Some questions about mental evolution

    In investigating convergent minds, we need to be sure that the things we are looking at are both minds and convergent. In determining whether a shared character state represents a convergence between two organisms, we must know the wider distribution and primitive state of that character so that we can map that character and its state transitions onto a phylogenetic tree. When we do this, some apparently primitive shared traits may prove to represent convergent losses of cognitive capacities. To avoid having to talk about the minds of plants and paramecia, we need to go beyond assessments of behaviourally defined cognition to ask questions about mind in the primary sense of the word, defined by the presence of mental events and consciousness. These phenomena depend upon the possession of brains of adequate size and centralized ontogeny and organization, and they are probably limited to vertebrates. Recent discoveries suggest that consciousness is adaptively valuable as a late error-detection mechanism in the initiation of action, and point to experimental techniques for assessing its presence or absence in non-human mammals.
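    The tree-mapping step described above is commonly operationalized with parsimony-based ancestral state reconstruction. Below is a minimal sketch of Fitch parsimony on a toy tree; the species, topology, and character states are hypothetical and serve only to show how a shared trait can be diagnosed as convergent (requiring separate changes on the tree) rather than inherited from a common ancestor.

```python
# Minimal sketch of Fitch parsimony: count the minimum number of
# state changes needed to explain one character on a binary tree.
# The tree and states below are invented for illustration.

def fitch(tree, states):
    """tree: nested tuples, e.g. (("A", "B"), "C"); leaves are names.
    states: dict mapping leaf name -> observed character state."""
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):        # leaf: its observed state set
            return {states[node]}
        left, right = node
        s1, s2 = post_order(left), post_order(right)
        if s1 & s2:                      # children overlap: no change here
            return s1 & s2
        changes += 1                     # disjoint sets cost one change
        return s1 | s2

    post_order(tree)
    return changes

# Toy example: a cognitive trait present (1) in two distant clades.
tree = ((("chimp", "human"), "rat"), (("crow", "pigeon"), "lizard"))
states = {"chimp": 1, "human": 1, "rat": 0,
          "crow": 1, "pigeon": 1, "lizard": 0}
print(fitch(tree, states))  # 2: the shared trait needs two separate changes
```

    A minimum of two or more changes for a trait shared by distant leaves is the formal footprint of convergence (or, depending on the reconstruction, of convergent loss) that the abstract argues must be established before calling minds convergent.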

    Levels of control during a collaborative carrying task

    Three experiments investigated the effect of implementing the low-level aspects of motor control for a collaborative carrying task within a virtual environment (VE) interface, leaving participants free to devote their cognitive resources to the higher-level components of the task. In the task, participants collaborated with an autonomous virtual human in an immersive VE to carry an object along a predefined path. In experiment 1, participants took up to three times longer to perform the task with a conventional VE interface, in which they had to explicitly coordinate their hand and body movements, than with an interface that took over the low-level tasks of grasping and holding onto the virtual object. Experiments 2 and 3 extended the study to a path that contained obstacles to movement. By allowing participants' virtual arms to stretch slightly, the interface software was able to take over some aspects of obstacle avoidance (another low-level task), which led to further significant reductions in the time participants took to perform the carrying task. Performance also improved when participants used a tethered viewpoint to control their movements, because they could see their immediate surroundings in the VE. This latter finding demonstrates the superiority of a tethered view perspective over a conventional, human's-eye perspective for this type of task.
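    One way the "slight arm stretch" assist described above could be realized is sketched below: the rendered hand is allowed to deviate a little from the tracked hand so the avatar keeps its grasp on the carried object while the body detours around an obstacle. This is an illustrative assumption, not the study's implementation; the 10% stretch limit and the geometry are invented here.

```python
# Sketch of a "slight arm stretch" assist for a collaborative carrying
# task in a VE. The stretch factor and positions are illustrative only.
import numpy as np

MAX_STRETCH = 1.10  # rendered arm may be up to 10% longer than the real arm

def rendered_hand(shoulder, tracked_hand, grasp_point):
    """Snap the virtual hand to the object's grasp point when it lies
    within the slightly stretched reach; otherwise follow the tracker."""
    reach = np.linalg.norm(tracked_hand - shoulder) * MAX_STRETCH
    if np.linalg.norm(grasp_point - shoulder) <= reach:
        return grasp_point        # stretch slightly to keep hold
    return tracked_hand           # too far: fall back to the tracked pose

# Hypothetical frame: the handle sits just beyond the tracked reach.
shoulder = np.array([0.0, 1.4, 0.0])
hand = np.array([0.0, 1.1, 0.55])
grasp = np.array([0.0, 1.1, 0.62])
print(rendered_hand(shoulder, hand, grasp))  # snaps to the grasp point
```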

    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They are programming robotic behavior based on how we humans "program" behavior in, or train, each other.

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and should only be routinely used if a clear benefit can be shown. Previous methods of assessing the effect of gaze cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces, measuring the efficiency of the cues by analyzing participant responses either by gaze or by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of the touch inputs confirmed the results of the gaze experiment: the fully animated agent again yielded the fastest responses, although the differences between conditions were somewhat smaller. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
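    For readers checking the arithmetic behind figures such as "42% faster", the snippet below shows the usual percent-speed-up calculation from mean response times. The millisecond values are invented placeholders, not data from the study.

```python
def percent_faster(rt_baseline_ms: float, rt_test_ms: float) -> float:
    """Reduction in response time relative to the baseline, in percent."""
    return 100.0 * (rt_baseline_ms - rt_test_ms) / rt_baseline_ms

# Hypothetical means: static 1-image cue vs. fully animated agent cue.
print(round(percent_faster(rt_baseline_ms=1200, rt_test_ms=700)))  # 42
```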