
    Smooth Pursuit Target Speeds and Trajectories

    In this paper we present an investigation of how the speed and trajectory of smooth pursuit targets impact detection rates in gaze interfaces. Previous work optimized these values for the specific application for which smooth pursuit eye movements were employed. However, this may not always be possible. For example, UI designers may want to minimize distraction caused by the stimulus, integrate it with a certain UI element (e.g., a button), or limit it to a certain area of the screen. In these cases, an in-depth understanding of the interplay between speed, trajectory, and accuracy is required. To achieve this, we conducted a user study with 15 participants who had to follow targets moving at different speeds and on different trajectories using their gaze. We evaluated the data with respect to detectability. As a result, we obtained reasonable ranges for target speeds and demonstrated the effects of trajectory shapes. We show that slow-moving targets are hard to detect by correlation and that introducing a delay improves the detection rate for fast-moving targets. Our research is complemented by design rules which enable designers to implement better pursuit detectors and pursuit-based user interfaces.
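    The correlation-based detection the abstract refers to is typically a Pearson correlation between the gaze trace and the target trajectory, with the traces shifted against each other to absorb pursuit latency. A minimal sketch (the threshold, delay, and noise values below are illustrative, not taken from the paper):

```python
import numpy as np

def pursuit_correlation(gaze, target, delay=0):
    """Pearson correlation between a gaze trace and a target trajectory.
    gaze, target: (N, 2) arrays of x/y samples at the same rate.
    delay: drop the first `delay` gaze samples so that pursuit latency
    does not penalize fast targets (value is illustrative)."""
    if delay > 0:
        gaze, target = gaze[delay:], target[:-delay]
    rx = np.corrcoef(gaze[:, 0], target[:, 0])[0, 1]
    ry = np.corrcoef(gaze[:, 1], target[:, 1])[0, 1]
    return min(rx, ry)  # conservative: both axes must correlate

# Toy check: noisy pursuit of a circular target.
t = np.linspace(0, 2 * np.pi, 200)
target = np.column_stack([np.cos(t), np.sin(t)])
rng = np.random.default_rng(0)
gaze = target + rng.normal(0.0, 0.1, target.shape)
matched = pursuit_correlation(gaze, target) > 0.8
```

    Taking the minimum over both axes is one common conservative choice; a distractor whose trajectory matches the gaze on only one axis then fails the threshold.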

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
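    The attractor/repeller interaction on heading can be illustrated with a simple dynamical sketch in the spirit of behavioural steering-dynamics models, not the paper's neural circuit; all gains and the exponential falloff below are hypothetical:

```python
import numpy as np

def heading_rate(phi, goal_dir, obstacle_dirs, k_goal=2.0, k_obs=1.5, c=4.0):
    """Illustrative attractor/repeller dynamics on heading phi (radians):
    the goal direction attracts heading, each obstacle direction repels it,
    and obstacle influence decays as heading turns away from the obstacle.
    The gains k_goal, k_obs and decay constant c are hypothetical."""
    dphi = -k_goal * np.sin(phi - goal_dir)  # goal attracts heading
    for obs in obstacle_dirs:
        # obstacle repels, with exponentially decaying influence
        dphi += k_obs * np.sin(phi - obs) * np.exp(-c * abs(phi - obs))
    return dphi

# Euler integration: heading settles near the goal direction, deflected
# slightly away from an obstacle lying just to one side of it.
phi = 0.5
for _ in range(500):
    phi += 0.01 * heading_rate(phi, goal_dir=0.0, obstacle_dirs=[-0.2])
```

    The equilibrium heading sits between the goal and the direction away from the obstacle, which is how such dynamics produce smooth detours around clutter.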

    Active inference and oculomotor pursuit: the dynamic causal modelling of eye movements.

    This paper introduces a new paradigm that allows one to quantify the Bayesian beliefs evidenced by subjects during oculomotor pursuit. Subjects' eye tracking responses to a partially occluded sinusoidal target were recorded non-invasively and averaged. These response averages were then analysed using dynamic causal modelling (DCM). In DCM, observed responses are modelled using biologically plausible generative or forward models, usually biophysical models of neuronal activity.
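    The core idea of fitting a generative forward model to averaged responses can be sketched with a deliberately crude stand-in: a first-order lag tracking a sinusoid, with one parameter recovered by least-squares grid search rather than the Bayesian inversion DCM actually performs. All parameter values here are hypothetical:

```python
import numpy as np

def pursuit_response(gain, tau, t, target):
    """Toy forward model: first-order lag tracking a target,
    eye' = (gain * target - eye) / tau. These are illustrative
    parameters, not the biophysical states used in the paper's DCM."""
    eye = np.zeros_like(t)
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        eye[i] = eye[i - 1] + dt * (gain * target[i - 1] - eye[i - 1]) / tau
    return eye

# "Observed" average response generated with a known gain plus noise,
# then recovered by grid search (a crude stand-in for model inversion).
t = np.linspace(0, 4, 400)
target = np.sin(2 * np.pi * 0.5 * t)
rng = np.random.default_rng(1)
observed = pursuit_response(0.9, 0.15, t, target) + rng.normal(0, 0.02, t.shape)
gains = np.linspace(0.5, 1.2, 36)
best = min(gains, key=lambda g: np.sum(
    (pursuit_response(g, 0.15, t, target) - observed) ** 2))
```

    DCM replaces the grid search with variational Bayesian inversion, which also returns posterior uncertainty over the parameters rather than a point estimate.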

    A Distal Model of Congenital Nystagmus as Nonlinear Adaptive Oscillations

    Congenital nystagmus (CN) is an incurable pathological spontaneous oscillation of the eyes with an onset in the first few months of life. The pathophysiology of CN is mysterious. There is no consistent neurological abnormality, but the majority of patients have a wide range of unrelated congenital visual abnormalities affecting the cornea, lens, retina, or optic nerve. In this theoretical study, we show that these eye oscillations could develop as an adaptive response to maximize visual contrast with poor foveal function in the infant visuomotor system, at a time of peak neural plasticity. We argue that in a visual system with abnormally poor high-spatial-frequency sensitivity, image contrast is maintained not only by keeping the image on the fovea (or its remnant) but also by some degree of image motion. Using the calculus of variations, we show that the optimal trade-off between these conflicting goals is to generate oscillatory eye movements with increasing-velocity waveforms, as seen in real CN. When we include a stochastic component at the start of each epoch (quick-phase inaccuracy), various observed waveforms (including pseudo-cycloid) emerge as optimal strategies. Using the delay embedding technique, we find a low fractional dimension, as reported in real data. We further show that, if a velocity-command-based pre-motor circuitry (neural integrator) is harnessed to generate these waveforms, the emergence of a null region is inevitable. We conclude that CN could emerge paradoxically as an ‘optimal’ adaptive response in the infant visual system during an early critical period. This can explain why CN does not emerge later in life and why CN is so refractory to treatment. It also implies that any therapeutic intervention would need to be very early in life.
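    An increasing-velocity slow phase followed by a quick-phase reset is the defining shape of the jerk waveform discussed above. A minimal sketch of such a trace, with purely illustrative constants (none are fitted values from the paper):

```python
import numpy as np

def cn_waveform(n_cycles=5, dt=0.001, period=0.3, tau=0.1, amp=1.0,
                jitter=0.0, seed=0):
    """Sketch of a jerk-nystagmus trace: each slow phase drifts off target
    with exponentially increasing velocity, x(t) = amp * (1 - exp(t / tau)),
    and a quick phase then resets the eye. `jitter` models the stochastic
    quick-phase landing error mentioned in the abstract. All constants are
    illustrative."""
    rng = np.random.default_rng(seed)
    phases = []
    for _ in range(n_cycles):
        start = rng.normal(0.0, jitter)   # quick-phase inaccuracy
        t = np.arange(0.0, period, dt)
        phases.append(start + amp * (1.0 - np.exp(t / tau)))
    return np.concatenate(phases)

x = cn_waveform()
v = np.diff(x)  # speed grows within each slow phase, then resets
```

    Setting `jitter` above zero scatters the starting position of each epoch, which is the stochastic ingredient the abstract credits with producing the variety of observed waveforms.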

    Circular formation control of fixed-wing UAVs with constant speeds

    In this paper we propose an algorithm for stabilizing circular formations of fixed-wing UAVs flying at constant speeds. The algorithm is based on the idea of tracking circles with different radii in order to control the inter-vehicle phases with respect to a target circumference. We prove that the desired equilibrium is exponentially stable and, thanks to the guidance vector field that guides the vehicles, the algorithm can be extended to other closed trajectories. One of the main advantages of this approach is that the algorithm guarantees the confinement of the team to a specific area, even when communications or sensing among vehicles are lost. We show the effectiveness of the algorithm with an actual formation flight of three aircraft. The algorithm is ready for the general public to use in the open-source Paparazzi autopilot. Comment: 6 pages, submitted to IROS 201
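    A circular guidance vector field of the kind mentioned above combines a tangent term that circulates the vehicle with an error term that pushes it toward the desired radius. The sketch below shows the idea for a single constant-speed vehicle; the gain and field shape are illustrative, not the paper's exact construction:

```python
import numpy as np

def gvf_heading(p, radius, k=1.0):
    """Unit desired-velocity direction from a circular guidance vector
    field: a tangent term circulates the vehicle around the circle while
    an error term pushes it toward the desired radius. The gain k and
    field shape are illustrative."""
    e = p[0] ** 2 + p[1] ** 2 - radius ** 2   # level-set (radius) error
    tangent = np.array([-p[1], p[0]])         # counter-clockwise circulation
    correction = -k * e * p                   # push toward the circle
    v = tangent + correction
    return v / np.linalg.norm(v)              # constant-speed vehicle

# A fixed-wing UAV cannot stop, so the position integrates a unit-speed
# heading; the vehicle converges onto the target circle.
p = np.array([3.0, 0.0])
dt = 0.01
for _ in range(5000):
    p = p + dt * gvf_heading(p, radius=1.0)
```

    Commanding different radii to different vehicles changes their angular rates around the circle, which is the mechanism the paper uses to control inter-vehicle phases.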

    Flight of the dragonflies and damselflies

    This work is a synthesis of our current understanding of the mechanics, aerodynamics and visually mediated control of dragonfly and damselfly flight, with the addition of new experimental and computational data in several key areas. These are: the diversity of dragonfly wing morphologies, the aerodynamics of gliding flight, force generation in flapping flight, aerodynamic efficiency, comparative flight performance and pursuit strategies during predatory and territorial flights. New data are set in context by brief reviews covering anatomy at several scales, insect aerodynamics, neuromechanics and behaviour. We achieve a new perspective by means of a diverse range of techniques, including laser-line mapping of wing topographies, computational fluid dynamics simulations of finely detailed wing geometries, quantitative imaging using particle image velocimetry of on-wing and wake flow patterns, classical aerodynamic theory, photography in the field, infrared motion capture and multi-camera optical tracking of free flight trajectories in laboratory environments. Our comprehensive approach enables a novel synthesis of datasets and subfields that integrates many aspects of flight from the neurobiology of the compound eye, through the aeromechanical interface with the surrounding fluid, to flight performance under cruising and higher-energy behavioural modes.

    Calibration-free gaze interfaces based on linear smooth pursuit

    Since smooth pursuit eye movements can be used without calibration in spontaneous gaze interaction, the intuitiveness of gaze interface design has been a topic of great interest in the human-computer interaction field. However, since most related research focuses on curved smooth-pursuit trajectories, the design issues of linear trajectories are poorly understood. Hence, this study evaluated the user performance of gaze interfaces based on linear smooth pursuit eye movements. We conducted an experiment to investigate how the number of objects (6, 8, 10, 12, or 15) and object moving speed (7.73 °/s vs. 12.89 °/s) affect user performance in a gaze-based interface. Results show that the number and speed of the displayed objects influence users’ performance with the interface. The number of objects significantly affected the correct and false detection rates when selecting objects in the display. Participants’ performance was highest on interfaces containing 6 and 8 objects and decreased for interfaces with 10, 12, and 15 objects. Detection rates and orientation error were significantly influenced by the moving speed of the displayed objects. The faster moving speed (12.89 °/s) resulted in higher detection rates and smaller orientation error than the slower speed (7.73 °/s). Our findings can help to enable calibration-free accessible interaction with gaze interfaces. DFG, 414044773, Open Access Publizieren 2019 - 2020 / Technische Universität Berlin
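    Selecting which of several linearly moving objects the user follows is usually done by comparing the gaze trace against each candidate trajectory. A hedged sketch of one way to do this (the scoring scheme, threshold, and simulation values are illustrative, not the paper's detector):

```python
import numpy as np

def select_target(gaze, trajectories, threshold=0.7):
    """Pick which of several linearly moving objects the gaze follows.
    Each trajectory is correlated with the gaze trace along its own motion
    direction, weighted by how well the net gaze displacement aligns with
    that direction (this separates look-alike projections of neighbouring
    linear paths). The threshold value is illustrative."""
    best_idx, best_score = None, threshold
    for i, traj in enumerate(trajectories):
        d = traj[-1] - traj[0]
        d = d / np.linalg.norm(d)                  # motion direction
        r = np.corrcoef(gaze @ d, traj @ d)[0, 1]  # along-track correlation
        g = gaze[-1] - gaze[0]                     # net gaze displacement
        align = max(float(g @ d) / np.linalg.norm(g), 0.0)
        score = r * align
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Toy check: 6 objects moving outward along evenly spaced directions;
# the simulated gaze follows object 3 with additive noise.
n, steps = 6, 120
t = np.linspace(0.0, 1.0, steps)[:, None]
angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
trajs = [5.0 * t * np.array([np.cos(a), np.sin(a)]) for a in angles]
rng = np.random.default_rng(2)
gaze = trajs[3] + rng.normal(0.0, 0.2, trajs[3].shape)
```

    The study's finding that performance drops beyond 8 objects is consistent with this picture: more objects means more closely spaced motion directions, so neighbouring scores become harder to separate.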