834 research outputs found

    Ocular biomechanics modelling for visual fatigue assessment in virtual environments

    The study objectively quantifies visual fatigue caused by immersion in virtual reality. Visual fatigue is assessed through ocular biomechanics modelling and eye tracking, which analyse eye movements and muscle forces and combine them into a visual fatigue index.
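    A minimal sketch of the kind of computation the abstract describes: combining eye-movement and muscle-force measurements into a single fatigue score. The feature names, reference maxima, and weights below are illustrative assumptions, not the authors' actual index.

```python
import numpy as np

def visual_fatigue_index(saccade_rate, fixation_duration, muscle_force,
                         weights=(0.4, 0.2, 0.4)):
    """Toy visual fatigue index: a weighted sum of normalised features.

    The features (saccade rate in Hz, mean fixation duration in s, mean
    extraocular muscle force in arbitrary units) and the weights are
    illustrative assumptions only.
    """
    features = np.array([saccade_rate, fixation_duration, muscle_force], dtype=float)
    reference_max = np.array([5.0, 2.0, 1.0])          # hypothetical per-feature maxima
    normalised = np.clip(features / reference_max, 0.0, 1.0)
    return float(np.dot(weights, normalised))

# Example: a higher saccade rate and muscle force raise the index.
print(visual_fatigue_index(saccade_rate=3.2, fixation_duration=0.6, muscle_force=0.8))
```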

    Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster

    Flies rely heavily on visual feedback for several aspects of flight control. As a fly approaches an object, the image projected across its retina expands, providing the fly with visual feedback that can be used either to trigger a collision-avoidance maneuver or a landing response. To determine how a fly decides to land on or avoid a looming object, we measured the behaviors generated in response to an expanding image during tethered flight in a visual closed-loop flight arena. During these experiments, each fly varied its wing-stroke kinematics to actively control the azimuth position of a 15°×15° square within its visual field. Periodically, the square expanded symmetrically in both the horizontal and vertical directions. We measured changes in the fly's wing-stroke amplitude and frequency in response to the expanding square while optically tracking the position of its legs to monitor stereotyped landing responses. Although this stimulus could elicit both landing responses and collision-avoidance reactions, separate pathways appear to mediate the two behaviors. For example, if the square is in the lateral portion of the fly's field of view at the onset of expansion, the fly increases stroke amplitude in one wing while decreasing amplitude in the other, indicative of a collision-avoidance maneuver. In contrast, frontal expansion elicits an increase in wing-beat frequency and leg extension, indicative of a landing response. To further characterize the sensitivity of these responses to expansion rate, we tested a range of expansion velocities from 100 to 10000° s^(-1). Differences in the latency of both the collision-avoidance reactions and the landing responses with expansion rate supported the hypothesis that the two behaviors are mediated by separate pathways. To examine the effects of visual feedback on the magnitude and time course of the two behaviors, we presented the stimulus under open-loop conditions, such that the fly's response did not alter the position of the expanding square. From our results we suggest a model that takes into account the spatial sensitivities and temporal latencies of the collision-avoidance and landing responses, and is sufficient to schematically represent how the fly integrates motion information in deciding whether to turn or land when confronted with an expanding object.
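    A schematic, hypothetical reading of the proposed model: the response depends on where the expanding image lies in the visual field (frontal expansion favours landing, lateral expansion favours a collision-avoidance turn), and the expansion rate sets the latency. The 45° boundary and the latency expression below are invented for illustration and are not parameters from the study.

```python
def fly_response(azimuth_deg, expansion_rate_deg_s):
    """Toy decision rule: frontal expansion -> landing, lateral -> collision avoidance.

    azimuth_deg: position of the expanding square relative to the fly's midline.
    expansion_rate_deg_s: expansion velocity (the study tested 100-10000 deg/s).
    The 45-degree boundary and the latency formula are illustrative assumptions.
    """
    behavior = "landing" if abs(azimuth_deg) < 45 else "collision avoidance"
    latency_ms = 20.0 + 50000.0 / expansion_rate_deg_s   # faster expansion -> shorter latency
    return behavior, latency_ms

for azimuth, rate in [(0, 100), (0, 10000), (90, 100), (90, 10000)]:
    print(azimuth, rate, fly_response(azimuth, rate))
```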

    Neural Dynamics of Saccadic and Smooth Pursuit Eye Movement Coordination during Visual Tracking of Unpredictably Moving Targets

    How does the brain use eye movements to track objects that move in unpredictable directions and at unpredictable speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model's ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking. Funded by the National Science Foundation (SBE-0354378) and the Office of Naval Research (N00014-01-1-0624).
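    The division of labour described, pursuit cancelling gaze velocity error while saccades cancel gaze position error, can be illustrated with a toy tracking loop. The gains, threshold, and time step below are arbitrary assumptions, not the model's fitted parameters.

```python
import numpy as np

def track(target_pos, dt=0.01, pursuit_gain=0.9, saccade_threshold=2.0):
    """Toy gaze controller: smooth pursuit continuously reduces velocity error,
    and a saccade is triggered whenever position error exceeds a threshold.
    All parameter values are illustrative assumptions."""
    gaze, gaze_vel = 0.0, 0.0
    prev_target = target_pos[0]
    trace = []
    for t_pos in target_pos:
        target_vel = (t_pos - prev_target) / dt
        prev_target = t_pos
        gaze_vel += pursuit_gain * (target_vel - gaze_vel)   # pursuit: cancel velocity error
        gaze += gaze_vel * dt
        if abs(t_pos - gaze) > saccade_threshold:            # saccade: cancel position error
            gaze = t_pos
        trace.append(gaze)
    return np.array(trace)

# Target moving with a piecewise-constant, unpredictable velocity.
t = np.arange(0.0, 2.0, 0.01)
target = np.where(t < 1.0, 10.0 * t, 10.0 - 5.0 * (t - 1.0))
print(np.round(track(target)[-5:], 2))
```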

    Intrinsic activity in the fly brain gates visual information during behavioral choices

    The small insect brain is often described as an input/output system that executes reflex-like behaviors. It can also initiate neural activity and behaviors intrinsically, seen as spontaneous behaviors, different arousal states and sleep. However, less is known about how intrinsic activity in neural circuits affects sensory information processing in the insect brain and variability in behavior. Here, by simultaneously monitoring Drosophila's behavioral choices and brain activity in a flight simulator system, we identify intrinsic activity that is associated with the act of selecting between visual stimuli. We recorded neural output (multiunit action potentials and local field potentials) in the left and right optic lobes of a tethered flying Drosophila, while its attempts to follow visual motion (yaw torque) were measured by a torque meter. We show that when facing competing motion stimuli on the left and right, Drosophila typically generate large torque responses that flip from side to side. The delayed onset (0.1-1 s) and spontaneous switch-like dynamics of these responses, and the fact that the flies sometimes oppose the stimuli by flying straight, make this behavior different from the classic steering reflexes. Drosophila thus seem to choose one stimulus at a time and attempt to rotate toward its direction. With this behavior, the neural output of the optic lobes alternates, being augmented on the side chosen for body rotation and suppressed on the opposite side, even though the visual input to the fly eyes stays the same. Thus, the flow of information from the fly eyes is gated intrinsically. Such modulation can be noise-induced or intentional; one possibility is that the fly brain highlights chosen information while ignoring the irrelevant, similar to what is known to occur in higher animals.
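    The gating described can be caricatured as a multiplicative modulation of optic-lobe output by the internally chosen side, with identical visual input on both sides. The gain values below are illustrative assumptions, not measured effect sizes.

```python
import numpy as np

def gated_output(left_input, right_input, chosen_side, gain_up=1.5, gain_down=0.5):
    """Toy intrinsic gating: visual input is identical on both sides, but neural
    output is boosted on the chosen side and suppressed on the other.
    Gain values are illustrative assumptions."""
    if chosen_side == "left":
        return gain_up * left_input, gain_down * right_input
    return gain_down * left_input, gain_up * right_input

# Identical motion stimulus on both sides; the output flips with the fly's choice.
stimulus = np.array([0.2, 0.8, 0.5])
print(gated_output(stimulus, stimulus, "left"))
print(gated_output(stimulus, stimulus, "right"))
```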

    Population-scale organization of cerebellar granule neuron signaling during a visuomotor behavior.

    Granule cells at the input layer of the cerebellum comprise over half the neurons in the human brain and are thought to be critical for learning. However, little is known about granule neuron signaling at the population scale during behavior. We used calcium imaging in awake zebrafish during optokinetic behavior to record transgenically identified granule neurons throughout a cerebellar population. A significant fraction of the population was responsive at any given time. In contrast to core precerebellar populations, granule neuron responses were relatively heterogeneous, with variation in the degree of rectification and the balance of positive versus negative changes in activity. Functional correlations were strongest for nearby cells, with weak spatial gradients in the degree of rectification and the average sign of response. These data open a new window onto cerebellar function and suggest that granule layer signals represent elementary building blocks under-represented in core sensorimotor pathways, thereby enabling the construction of novel patterns of activity for learning.
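    One way to look at the reported spatial structure, that functional correlations are strongest for nearby cells, is to bin pairwise correlations of the calcium traces by anatomical distance. The sketch below runs that analysis on synthetic data; the data generation and bin edges are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

def correlation_vs_distance(traces, positions, bin_edges):
    """Mean pairwise Pearson correlation of calcium traces, binned by cell distance.

    traces: (n_cells, n_timepoints) fluorescence time series.
    positions: (n_cells, 3) cell coordinates.
    """
    corr = np.corrcoef(traces)
    n = traces.shape[0]
    dists, corrs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(positions[i] - positions[j]))
            corrs.append(corr[i, j])
    dists, corrs = np.array(dists), np.array(corrs)
    bins = np.digitize(dists, bin_edges)
    return [float(corrs[bins == b].mean()) if np.any(bins == b) else np.nan
            for b in range(1, len(bin_edges))]

# Synthetic example: each cell mixes a common signal (weight depending on its
# x coordinate) with private noise, so the analysis has something to measure.
rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(20, 3))
shared = rng.standard_normal(500)
traces = np.array([shared * np.exp(-positions[i, 0] / 50.0) + rng.standard_normal(500)
                   for i in range(20)])
print(correlation_vs_distance(traces, positions, bin_edges=[0, 25, 50, 75, 100]))
```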

    Isoperimetric Partitioning: A New Algorithm for Graph Partitioning


    Aerospace medicine and biology. A continuing bibliography (supplement 231)

    This bibliography lists 284 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1982.

    Adaptive Neural Models of Queuing and Timing in Fluent Action

    Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a fine scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for the precise acts. Each context of action may require a different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and the highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing. Funded by the National Institute of Mental Health (R01 DC02852).
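    The "cyclic competitive process" acting on a "parallel analog representation" of priorities is the competitive-queuing idea: all planned items are active simultaneously with graded activations, the strongest item wins, is executed, and is then deleted, and the cycle repeats. A minimal sketch (the priority values are invented for illustration):

```python
import numpy as np

def competitive_queuing(priorities):
    """Toy competitive queuing: the item with the highest analog priority wins the
    competition, is executed, and is then deleted from the plan; repeat until empty."""
    plan = np.array(priorities, dtype=float)
    order = []
    while np.any(plan > 0):
        winner = int(np.argmax(plan))   # competition: the most active item wins
        order.append(winner)
        plan[winner] = 0.0              # deletion (self-inhibition) after execution
    return order

# A parallel plan whose graded activations implicitly encode the serial order.
print(competitive_queuing([0.9, 0.5, 0.7, 0.2]))   # -> [0, 2, 1, 3]
```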