
    Information processing in visual systems

    One of the goals of neuroscience is to understand how animals perceive sensory information. This thesis focuses on visual systems, to unravel how neuronal structures process aspects of the visual environment. To characterise the receptive field of a neuron, we developed spike-triggered independent component analysis. Alongside characterising the receptive field of a neuron, this method provides insight into its underlying network structure. When applied to recordings from the H1 neuron of blowflies, it accurately recovered the sub-structure of the neuron. This sub-structure was studied further by recording H1's response to plaid stimuli. Based on the response, H1 can be classified as a component cell. We then fitted an anatomically inspired model to the response, and found the critical component needed to explain H1's response to be a sigmoid non-linearity at the output of the elementary movement detectors. The simpler blowfly visual system can help us understand elementary sensory information processing mechanisms. How does the more complex mammalian cortex implement these principles in its network? To study this, we used multi-electrode arrays to characterise the receptive field properties of neurons in the visual cortex of anaesthetised mice. Based on these recordings, we estimated the cortical limits on the performance of a visual task; the behavioural performance observed by Prusky and Douglas (2004) falls within these limits. Our recordings were carried out in anaesthetised animals. During anaesthesia, cortical UP states are considered "fragments of wakefulness", and from simultaneous whole-cell and extracellular recordings we found that these states are revealed in the phase of local field potentials. This finding was used to develop a method for detecting cortical state from extracellular recordings, which allows us to explore information processing during different cortical states. Throughout this thesis, we have developed, tested and applied methods that improve our understanding of information processing in visual systems.
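
    The spike-triggered independent component analysis is not spelled out in the abstract; purely to illustrate the general idea, a minimal sketch might pair a spike-triggered ensemble with off-the-shelf ICA. All names, dimensions and the FastICA choice below are assumptions of this sketch, not the thesis's implementation:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for real recordings: a white-noise stimulus
    # and a binary spike train (all sizes are illustrative).
    n_samples, n_pixels, window = 20000, 16, 10
    stimulus = rng.standard_normal((n_samples, n_pixels))
    spikes = rng.random(n_samples) < 0.05

    # Spike-triggered ensemble: the stimulus window preceding each
    # spike, flattened into one vector per spike.
    spike_times = np.flatnonzero(spikes)
    spike_times = spike_times[spike_times >= window]
    ensemble = np.stack([stimulus[t - window:t].ravel() for t in spike_times])

    # The spike-triggered average is the classical receptive-field estimate.
    sta = ensemble.mean(axis=0)

    # ICA on the same ensemble recovers several filters (putative
    # sub-units) instead of a single averaged receptive field.
    ica = FastICA(n_components=4, max_iter=500, random_state=0)
    ica.fit(ensemble)
    subunit_filters = ica.components_.reshape(4, window, n_pixels)
    ```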

    Neural ensemble decoding reveals a correlate of viewer- to object-centered spatial transformation in monkey parietal cortex

    The parietal cortex contains representations of space in multiple coordinate systems, including retina-, head-, body-, and world-based systems. Previously, we found that when monkeys are required to perform spatial computations on objects, many neurons in parietal area 7a represent position in an object-centered coordinate system as well. Because visual information enters the brain in a retina-centered reference frame, generating an object-centered reference requires the brain to perform a computation on the visual input. We provide evidence that area 7a contains a correlate of that computation. Specifically, area 7a contains neurons that code information in retina- and object-centered coordinate systems. The information in retina-centered coordinates emerges first, followed by the information in object-centered coordinates. We found that the strength and accuracy of these representations are correlated across trials. Finally, we found that retina-centered information could be used to predict subsequent object-centered signals, but not vice versa. These results are consistent with the hypothesis that either area 7a, or an area that precedes area 7a in the visual processing hierarchy, performs the retina- to object-centered transformation.
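
    The temporal-order result implies a time-resolved decoding analysis of ensemble activity. The paper's actual decoder is not given here; the following is a hypothetical sliding-window sketch on synthetic firing rates, with all sizes and the logistic-regression choice being illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Synthetic stand-in for recorded data: trials x neurons x time
    # bins, with labels for stimulus position in one reference frame.
    n_trials, n_neurons, n_bins = 200, 50, 30
    rates = rng.standard_normal((n_trials, n_neurons, n_bins))
    position = rng.integers(0, 4, n_trials)       # four positions

    # Inject a weak position signal from bin 10 onward so the toy
    # decoder has something to find.
    rates[:, :10, 10:] += position[:, None, None] * 0.3

    # Decode position separately in each time bin; the bin at which
    # accuracy rises above chance estimates when information emerges.
    accuracy = np.array([
        cross_val_score(LogisticRegression(max_iter=1000),
                        rates[:, :, t], position, cv=5).mean()
        for t in range(n_bins)
    ])
    latency = int(np.argmax(accuracy > 0.35))     # chance is 0.25 here
    print(f"information emerges around bin {latency}")
    ```

    Repeating this per reference frame and comparing latencies (and, across trials, using one frame's decodability to predict the other's) is the general shape of such an analysis.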

    The CLAIRE visual analytics system for analysing IR evaluation data

    In this paper, we describe the Combinatorial visuaL Analytics system for Information Retrieval Evaluation (CLAIRE), a Visual Analytics (VA) system for exploring and making sense of the performance of a large number of Information Retrieval (IR) systems, in order to quickly and intuitively grasp which system configurations are preferred, what the contributions of the different components are, and how these components interact.
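
    CLAIRE itself is interactive, but the underlying question (which components of a configuration drive performance) can be approximated offline by aggregating an effectiveness measure over a factorial grid of configurations. A sketch with made-up component names and synthetic MAP scores, not CLAIRE's actual data model:

    ```python
    import itertools
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)

    # Hypothetical grid of IR configurations: each run combines a
    # stoplist, stemmer and ranking model, scored with synthetic MAP
    # values standing in for real evaluation data.
    grid = list(itertools.product(
        ["nostop", "indri"],                      # stoplist
        ["none", "porter", "krovetz"],            # stemmer
        ["bm25", "lm_dirichlet", "tfidf"],        # ranking model
    ))
    runs = pd.DataFrame(grid, columns=["stoplist", "stemmer", "model"])
    runs["map"] = rng.uniform(0.15, 0.35, len(runs))

    # Marginal contribution of each component: mean MAP per choice.
    for component in ["stoplist", "stemmer", "model"]:
        print(runs.groupby(component)["map"].mean().sort_values())
    ```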

    Visual-Inertial Mapping with Non-Linear Factor Recovery

    Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. Estimating motion and geometry from a set of images requires large baselines, so most systems operate on keyframes separated by large time intervals. Inertial data, on the other hand, quickly degrades as these intervals lengthen; after several seconds of integration it typically contains little useful information. In this paper, we propose to extract the information relevant for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that optimally approximate the information on the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable, and improve the robustness and accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
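
    The paper's non-linear factor recovery is considerably more involved; at its core, though, sits the marginalization of non-keyframe variables from the (linearized) VIO information matrix, which reduces to a Schur complement. A toy sketch of just that step, with illustrative sizes and no claim to match the paper's pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy information (Hessian) matrix over a state vector whose first
    # k entries are keyframe variables to keep; the rest are variables
    # to marginalize out.
    n, k = 10, 4
    A = rng.standard_normal((n, n))
    H = A @ A.T + n * np.eye(n)      # symmetric positive definite

    H_kk = H[:k, :k]                 # keep-keep block
    H_km = H[:k, k:]                 # keep-marginalize block
    H_mm = H[k:, k:]                 # marginalize-marginalize block

    # Schur complement: the information on the kept variables after
    # the others are marginalized out. Factors recovered from this
    # matrix summarize what VIO learned about the keyframe trajectory.
    H_marg = H_kk - H_km @ np.linalg.solve(H_mm, H_km.T)
    print(np.linalg.eigvalsh(H_marg))   # stays positive definite
    ```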

    Visual Information Display Systems. A Survey

    Visual information display systems that are computer connected or updated with computer-generated information.

    VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback

    Modern recommender systems model people and items by discovering or 'teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered from user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However, one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model that incorporates visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold-start issues and to qualitatively analyze the visual dimensions that influence people's opinions.
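
    The VBPR predictor combines latent factors with a learned embedding of pre-extracted visual features; the global offset and user bias cancel in BPR's pairwise differences, so a sketch can omit them. Below is a minimal numpy sketch of the scoring function and a BPR-style stochastic update, with synthetic features standing in for CNN outputs and toy sampling in place of real implicit feedback:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Illustrative sizes: users, items, latent factors, visual factors,
    # and the length of the pre-extracted deep feature per item.
    n_users, n_items, K, D, F = 100, 500, 10, 10, 64
    cnn_feat = rng.standard_normal((n_items, F))   # stand-in features

    # Parameters of the predictor:
    # x_ui = beta_i + gamma_u . gamma_i + theta_u . (E f_i)
    beta = np.zeros(n_items)
    gamma_u = 0.1 * rng.standard_normal((n_users, K))
    gamma_i = 0.1 * rng.standard_normal((n_items, K))
    theta_u = 0.1 * rng.standard_normal((n_users, D))
    E = 0.1 * rng.standard_normal((D, F))          # visual embedding layer

    def score(u, i):
        return beta[i] + gamma_u[u] @ gamma_i[i] + theta_u[u] @ (E @ cnn_feat[i])

    lr, reg = 0.05, 0.01
    for _ in range(10000):
        # Toy sampling: a real run draws i from u's observed positives
        # and j from unobserved items.
        u = rng.integers(n_users)
        i, j = rng.integers(n_items, size=2)
        x_uij = score(u, i) - score(u, j)
        g = 1.0 / (1.0 + np.exp(x_uij))            # d/dx ln sigmoid(x_uij)

        # Gradient ascent on ln sigmoid(x_uij) with L2 regularization,
        # using pre-update copies so each gradient is consistent.
        gu, gi, gj = gamma_u[u].copy(), gamma_i[i].copy(), gamma_i[j].copy()
        tu = theta_u[u].copy()
        vi, vj = E @ cnn_feat[i], E @ cnn_feat[j]
        gamma_u[u] += lr * (g * (gi - gj) - reg * gu)
        gamma_i[i] += lr * (g * gu - reg * gi)
        gamma_i[j] += lr * (-g * gu - reg * gj)
        theta_u[u] += lr * (g * (vi - vj) - reg * tu)
        E += lr * (g * np.outer(tu, cnn_feat[i] - cnn_feat[j]) - reg * E)
        beta[i] += lr * (g - reg * beta[i])
        beta[j] += lr * (-g - reg * beta[j])
    ```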

    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, their vision may be optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the informational content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen in many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are treated as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit from processing information from their two eyes independently.
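
    View-matching strategies of this kind typically rest on a rotational image difference function (rIDF): rotate a view against a stored snapshot and take the best-matching heading. A minimal sketch with a synthetic low-resolution panorama (the 4 x 90 pixel size is an illustrative assumption):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def ridf(current, stored):
        """Rotational image difference function: RMS pixel difference
        between the current view and the stored panoramic snapshot
        rotated through every horizontal offset."""
        width = stored.shape[1]
        return np.array([
            np.sqrt(np.mean((np.roll(stored, shift, axis=1) - current) ** 2))
            for shift in range(width)
        ])

    # Toy panorama: low resolution and 360 degree field of view, as the
    # simulations favour.
    stored = rng.random((4, 90))
    current = np.roll(stored, 25, axis=1) + 0.05 * rng.standard_normal((4, 90))

    diffs = ridf(current, stored)
    best = int(np.argmin(diffs))     # heading estimate, in pixel columns
    print(f"recovered rotation: {best} columns (true offset 25)")
    ```

    Treating slices of the panorama as independent sensors, as the abstract suggests, amounts to running this matching separately on sub-ranges of columns and combining the directional estimates.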

    Fluctuations in instantaneous frequency predict alpha amplitude during visual perception.

    Rhythmic neural activity in the alpha band (8-13 Hz) is thought to play an important role in the selective processing of visual information. Typically, modulations in alpha amplitude and instantaneous frequency are thought to reflect independent mechanisms impacting dissociable aspects of visual information processing. However, in complex systems with interacting oscillators such as the brain, amplitude and frequency are mathematically dependent. Here, we record electroencephalography in human subjects and show that both alpha amplitude and instantaneous frequency predict behavioral performance in the same visual discrimination task. Consistent with a model of coupled oscillators, we show that fluctuations in instantaneous frequency predict alpha amplitude on a single-trial basis, empirically demonstrating that these metrics are not independent. This interdependence suggests that changes in amplitude and instantaneous frequency reflect a common change in the excitatory and inhibitory neural activity that regulates alpha oscillations and visual information processing.
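
    Alpha amplitude and instantaneous frequency are standard quantities of the analytic signal; purely to illustrate the definitions, the sketch below extracts both from a synthetic EEG trace with scipy. The filter settings and the correlation check are assumptions of this sketch, not the paper's pipeline:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0                                     # sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(6)

    # Synthetic EEG: a 10 Hz alpha rhythm with a drifting amplitude
    # envelope plus broadband noise.
    eeg = (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 10 * t)
    eeg += 0.3 * rng.standard_normal(t.size)

    # Band-pass to the alpha band (8-13 Hz), then take the analytic signal.
    b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, eeg))

    amplitude = np.abs(analytic)                   # alpha amplitude envelope
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

    # Toy analogue of the reported interdependence: correlate frequency
    # fluctuations with amplitude, trimming filter edge effects.
    r = np.corrcoef(inst_freq[500:-500], amplitude[500:-501])[0, 1]
    print(f"amplitude-frequency correlation: {r:.2f}")
    ```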