
    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100–150 ms. This near-instantaneous recognition occurs despite the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually important information recovered. Towards this goal, we apply the ‘perceptual information preservation algorithm’ proposed by Petrovic and Xydeas, after slight modification. We observe a low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding, and in doing so propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe’s retinal model.
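    As a rough illustration of the encode/decode pipeline described above (a minimal sketch, not the authors' implementation: the filter shapes, scales, rank-weighting function, and patch size are all assumptions), the following Python snippet rank-order encodes a small image patch with a difference-of-Gaussians filter bank and compares a plain filter-sum reconstruction with decoding through the pseudo-inverse of the filter-bank matrix:
```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s, cx, cy):
    """Difference-of-Gaussians (centre-surround) kernel centred at (cx, cy), flattened."""
    ax = np.arange(size)
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx - cx) ** 2 + (yy - cy) ** 2
    centre = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return (centre - surround).ravel()

size = 8                                      # 8x8 patch -> 64 pixels
rng = np.random.default_rng(0)
patch = rng.random(size * size)               # stand-in image patch

# Filter bank: one DoG "ganglion cell" per pixel at two assumed scales (128 x 64 matrix).
scales = [(0.8, 1.6), (1.6, 3.2)]
F = np.stack([dog_kernel(size, sc, ss, cx, cy)
              for sc, ss in scales
              for cy in range(size) for cx in range(size)])

# Encode: only the ORDER in which cells would fire is kept (largest response first).
responses = F @ patch
order = np.argsort(-np.abs(responses))

# Decode: the receiver knows only the rank, so each cell gets a fixed, decaying
# surrogate amplitude. (Response signs are assumed to travel on separate ON/OFF
# channels; they are reused directly here for brevity.)
r_hat = np.zeros_like(responses)
r_hat[order] = np.sign(responses[order]) * 0.9 ** np.arange(len(order))

naive = F.T @ r_hat                           # decode as a sum of rank-weighted filters
pinv = np.linalg.pinv(F) @ r_hat              # least-squares decode via the pseudo-inverse

for name, rec in [("filter-sum decode", naive), ("pseudo-inverse decode", pinv)]:
    print(f"{name}: correlation with original = {np.corrcoef(patch, rec)[0, 1]:.3f}")
```
    The pseudo-inverse step corrects for the overlap between non-orthogonal filters, which is the source of the decoding losses the abstract refers to; the remaining error comes from discarding the response amplitudes during encoding.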

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel, cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading-estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
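    The neural circuit itself is beyond a short snippet, but the geometric idea the model exploits can be illustrated: for pure observer translation, heading corresponds to the focus of expansion of the optic-flow field. The sketch below is a plain least-squares baseline, not the paper's MT+/MSTd model; the function name and the synthetic flow field are assumptions made for illustration.
```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion (FoE) from image points and their flow vectors.

    For translational motion, each flow vector (u, v) at (x, y) is parallel to
    (x - x0, y - y0), giving the linear constraint v*x0 - u*y0 = v*x - u*y.
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic expanding flow field whose true FoE (the heading point) is at (10, -5) px.
rng = np.random.default_rng(1)
true_foe = np.array([10.0, -5.0])
pts = rng.uniform(-100.0, 100.0, size=(500, 2))
flow = 0.02 * (pts - true_foe) + rng.normal(scale=0.05, size=(500, 2))  # noisy expansion

print(estimate_foe(pts, flow))   # should land near [10, -5]
```
    Observer rotation adds a flow component that shifts the apparent focus of expansion, which is why rotation rates matter in the model's (and humans') performance.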

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Retinal drug delivery: rethinking outcomes for the efficient replication of retinal behavior

    The retina is a highly organized structure that is considered to be "an approachable part of the brain." It is attracting the interest of development scientists, as it provides a model neurovascular system. Over the last few years, we have witnessed significant growth in knowledge of the mechanisms that shape the retinal vascular system, as well as of the disease processes that lead to retinal degeneration. Knowledge and understanding of how our vision works are crucial to creating a hardware-adaptive computational model that can replicate retinal behavior. The neuronal system is nonlinear and very intricate. It is therefore essential to have a clear view of the neurophysiological and neuroanatomic processes, and to take into account the underlying principles that govern the process of hardware transformation, in order to produce an appropriate model that can be mapped to a physical device. Mechanistic and integrated computational models have enormous potential to help us understand disease mechanisms and to explain the associations identified in large model-free data sets. The approach used is modulated and based on different models of drug administration, including the geometry of the eye. This work aimed to review recently used mathematical models for mapping a directed retinal network. The authors acknowledge the financial support received from the Portuguese Science and Technology Foundation (FCT/MCT) and the European Funds (PRODER/COMPETE) for the project UIDB/04469/2020 (strategic fund), co-financed by FEDER, under the Partnership Agreement PT2020. The authors also acknowledge FAPESP – São Paulo Research Foundation for its financial support for the publication of the article.