
    Visual correlates of functional difficulties in Parkinson's disease and Alzheimer's disease

    Thesis (Ph.D.) -- Boston University
    Although motor dysfunction in Parkinson's disease (PD) and memory deficits in Alzheimer's disease (AD) are the respective hallmark symptoms, both neurodegenerative disorders are also associated with significant disruptions in visual functioning. In PD, visuospatial function is impaired, particularly in patients with left-side onset of motor symptoms (LPD), reflecting pathology in right hemisphere brain regions, including the parietal lobe. LPD visuospatial performance is characterized by perceptual distortions, suggesting that lower-level visual processing may contribute to abnormal performance. In AD and PD, reduced contrast sensitivity and other visual difficulties have the potential to impact everyday functioning. The relation of PD visuospatial problems, and of AD and PD contrast sensitivity deficits, to higher-order impairments is understudied. The present experiments examined visual and visuospatial difficulties in these groups and evaluated an intervention to improve everyday visual function. Experiment I assessed performance on a line bisection task in PD. Participants included non-demented patients (10 LPD, 10 with right-side motor onset [RPD]) and 11 normal control adults (NC). Performance was related to data from measures of retinal structure (Optical Coherence Tomography) and function (Frequency Doubling Technology; FDT) across the eye. Correlations of structure and function were found for all groups. LPD showed the predicted downward bisection bias in some sections of the left visual field. The expected rightward bisection bias in LPD was not consistently seen using this presentation method. For RPD, in some sectors, worse FDT sensitivity correlated with upward line bisection bias, as predicted. Experiment II investigated whether performance of a complex, familiar visual search task (bingo) could be enhanced in AD and PD by manipulating the contrast, size, and visual complexity of the task stimuli.
    Participants were 19 younger adults, 14 AD, 17 PD, and 33 NC. Increased stimulus size and decreased complexity improved performance for all groups. Increasing contrast also benefited the AD patients, presumably by compensating for their contrast sensitivity deficit, which was more severe than in the PD and NC groups. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance.

    Comparing Segmentation by Time and by Motion in Visual Search: An fMRI Investigation

    Brain activity was recorded while participants engaged in a difficult visual search task for a target defined by the spatial configuration of its component elements. The search displays were segmented by time (a preview then a search display), by motion, or were unsegmented. A preparatory network showed activity in response to the preview display in the time but not the motion segmentation condition. A region of the precuneus showed (i) higher activation when displays were segmented by time or by motion, and (ii) activity that correlated with larger behavioral segmentation benefits, regardless of the cue. Additionally, the results revealed that success in temporal segmentation was correlated with reduced activation in early visual areas, including V1. The results depict partially overlapping brain networks for segmentation in search by time and motion, with both cue-independent and cue-specific mechanisms.

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to guide search more efficiently in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.
    The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)

    Nonhuman primates as models of hemispheric specialization

    The present chapter concerns the issue of hemispheric specialization for perceptual and cognitive processes. In spite of a long-standing view that only humans are lateralized (e.g., Warren, 1980), there is now strong documentation for anatomical lateralizations, functional lateralizations, or both in several animal taxa, including birds, rodents, and nonhuman primates (see Bradshaw & Rogers, 1993; Hellige, 1993). We selectively report demonstrations from studies of nonhuman primates. After a short review of the evidence for structural (anatomical) lateralization, we describe...

    Neuronal processing of translational optic flow in the visual system of the shore crab Carcinus maenas

    This paper describes a search for neurones sensitive to optic flow in the visual system of the shore crab Carcinus maenas using a procedure developed from that of Krapp and Hengstenberg. This involved determining local motion sensitivity and its directional selectivity at many points within the neurone's receptive field and plotting the results on a map. Our results showed that local preferred directions of motion are independent of velocity, stimulus shape and type of motion (circular or linear). Global response maps thus clearly represent real properties of the neurones' receptive fields. Using this method, we have discovered two families of interneurones sensitive to translational optic flow. The first family has its terminal arborisations in the lobula of the optic lobe, the second family in the medulla. The response maps of the lobula neurones (which appear to be monostratified lobular giant neurones) show a clear focus of expansion centred on or just above the horizon, but at significantly different azimuth angles. Response maps such as these, consisting of patterns of movement vectors radiating from a pole, would be expected of neurones responding to self-motion in a particular direction. They would be stimulated when the crab moves towards the pole of the neurone's receptive field. The response maps of the medulla neurones show a focus of contraction, approximately centred on the horizon, but at significantly different azimuth angles. Such neurones would be stimulated when the crab walked away from the pole of the neurone's receptive field. We hypothesise that both the lobula and the medulla interneurones are representatives of arrays of cells, each of which would be optimally activated by self-motion in a different direction. The lobula neurones would be stimulated by the approaching scene and the medulla neurones by the receding scene. 
    Neurones tuned to translational optic flow provide information on the three-dimensional layout of the environment and are thought to play a role in the judgment of heading.
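    The response maps described above — motion vectors radiating from a pole at the heading direction — can be illustrated with the standard geometry of an ideal translational flow field. The sketch below is a simplified model, not the authors' analysis code: for pure translation T and unit scene distance, the image motion at viewing direction d is the component of -T perpendicular to d, which vanishes at the focus of expansion d = T/|T|.

```python
import numpy as np

def translational_flow(heading, directions):
    """Ideal optic-flow vectors for pure self-translation along `heading`.

    `directions` is an (N, 3) array of unit viewing directions. Assuming
    unit distance to the scene, the flow at direction d is the component
    of -heading perpendicular to d. The flow is zero at d = heading/|heading|,
    the focus of expansion seen in the lobula neurones' response maps.
    """
    heading = heading / np.linalg.norm(heading)
    # project heading onto each viewing direction, then remove that radial part
    radial = (directions @ heading)[:, None] * directions
    return -(heading - radial)

# Sample three viewing directions; flow vanishes at the heading itself.
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
flow = translational_flow(np.array([1.0, 0.0, 0.0]), dirs)
print(np.linalg.norm(flow[0]))  # ~0 at the focus of expansion
```

    A focus of contraction, as in the medulla neurones, is the same field with the sign reversed (self-motion away from the pole).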

    CONFIGR: A Vision-Based Model for Long-Range Figure Completion

    CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. 
    Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
    Air Force Office of Scientific Research (F49620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-01-1-0216); National Science Foundation (SBE-0354378); Office of Naval Research (N000014-01-1-0624)
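    The self-scaling completion idea — bridging gaps of any length between locally identified figure pixels — can be conveyed with a deliberately crude 1-D toy. This is not the CONFIGR algorithm (which works in 2-D and balances figure completion against complementary ground completion); it only illustrates the "fill in across gaps, whatever their length" behavior for a dashed line.

```python
def complete_dashed_row(row):
    """Toy 1-D analogue of long-range figure completion.

    `row` is a list of 0 (ground) and 1 (figure) pixels. Every gap lying
    between two figure pixels is filled, regardless of its length — a
    stand-in for self-scaled completion distances. With fewer than two
    figure pixels there is nothing to connect, so the row is unchanged.
    """
    ones = [i for i, v in enumerate(row) if v == 1]
    if len(ones) < 2:
        return row[:]
    out = row[:]
    for i in range(ones[0], ones[-1] + 1):
        out[i] = 1  # bridge the gap between the outermost figure pixels
    return out

# A dashed line: gaps of different lengths are all bridged.
print(complete_dashed_row([0, 1, 0, 0, 1, 0, 1, 0]))  # [0, 1, 1, 1, 1, 1, 1, 0]
```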

    fMRI Investigation of Cortical and Subcortical Networks in the Learning of Abstract and Effector-Specific Representations of Motor Sequences

    A visuomotor sequence can be learned as a series of visuo-spatial cues or as a sequence of effector movements. Earlier imaging studies have revealed that a network of brain areas is activated in the course of motor sequence learning. However, these studies do not address the question of the type of representation being established at various stages of visuomotor sequence learning. In an earlier behavioral study, we demonstrated that acquisition of a visuo-spatial sequence representation enables rapid learning in the early stage and that progressive establishment of a somato-motor representation enables faster execution in the late stage. We conducted functional magnetic resonance imaging (fMRI) experiments wherein subjects learned and practiced the same sequence alternately in normal and rotated settings. In one rotated setting (visual), subjects learned a new motor sequence in response to the same sequence of visual cues as in the normal setting. In the other rotated setting (motor), the display sequence was altered relative to normal, but the same sequence of effector movements was used to perform the sequence. Comparison of the rotated settings revealed analogous transitions in both cortical and subcortical sites during visuomotor sequence learning: a transition of activity from parietal to parietal-premotor and then to premotor cortex, with a concomitant shift from anterior putamen to combined activity in both anterior and posterior putamen and finally to posterior putamen. These results suggest a putative role for the engagement of different cortical and subcortical networks at various stages of learning in supporting distinct sequence representations.

    Radio Properties of Low Redshift Broad Line Active Galactic Nuclei Including Extended Radio Sources

    We present a study of the extended radio emission in a sample of 8434 low redshift (z < 0.35) broad line active galactic nuclei (AGN) from the Sloan Digital Sky Survey (SDSS). To calculate the jet and lobe contributions to the total radio luminosity, we have taken the 846 radio core sources detected in our previous study of this sample and performed a systematic search in the Faint Images of the Radio Sky at Twenty-centimeters (FIRST) database for extended radio emission that is likely associated with the optical counterparts. We found that 51 of the 846 radio core sources have extended emission (> 4" from the optical AGN) that is positively associated with the AGN, and we have identified an additional 12 AGN with extended radio emission but no detectable radio core emission. Among these 63 AGN, we found 6 giant radio galaxies (GRGs), with projected emission exceeding 750 kpc in length, and several other AGN with unusual radio morphologies also seen in higher redshift surveys. The optical spectra of many of the extended sources are similar to typical broad line radio galaxy spectra, having broad Hα emission lines with boxy profiles and large M_BH. With extended emission taken into account, we find strong evidence for a bimodal distribution in the radio-loudness parameter R, where the lower radio luminosity core-only sources appear as a population separate from the extended sources, with a dividing line at log(R) ≈ 1.75. This dividing line ensures that these are indeed the most radio-loud AGN, which may have different or extreme physical conditions in their central engines when compared to the more numerous radio quiet AGN.
    Comment: 25 pages, 6 figures, accepted to A
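    The bimodal classification described above reduces to a simple threshold on the radio-loudness parameter. The sketch below assumes the common convention that R is a ratio of radio to optical flux density (the abstract does not specify the bands), and uses the log(R) ≈ 1.75 dividing line it reports rather than the conventional log(R) = 1 cut.

```python
import math

def is_radio_loud(f_radio, f_optical, threshold=1.75):
    """Classify an AGN by radio-loudness R = f_radio / f_optical.

    Returns (log10(R), radio_loud). The definition of R (which radio and
    optical bands enter the ratio) varies by survey; the default
    threshold of log(R) = 1.75 is the dividing line reported for this
    sample once extended emission is included.
    """
    log_r = math.log10(f_radio / f_optical)
    return log_r, log_r > threshold

# A source 1000x brighter in the radio than the optical: log R = 3.
log_r, loud = is_radio_loud(f_radio=1.0e3, f_optical=1.0)
print(log_r, loud)  # 3.0 True
```

    Note that a source at the conventional log(R) = 1 boundary would still fall on the radio-quiet side of this sample's higher dividing line.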

    A helmet mounted display to adapt the telerobotic environment to human vision

    A Helmet Mounted Display system has been developed. It provides the capability to display stereo images with the viewpoint tied to the subject's head orientation. This type of display might be useful in a telerobotic environment provided the correct operating parameters are known. The effects of update frequency were tested using a 3D tracking task. The effects of blur were tested using both tracking and pick-and-place tasks. In both cases, researchers found that operator performance can be degraded if the correct parameters are not used. Researchers are also using the display to explore the use of head movements as part of gaze as subjects search their visual field for target objects.