271 research outputs found

    The Neurosciences at the Max Planck Institute for Biophysical Chemistry in Göttingen

    Immunocytochemical and electrophysiological characterization of GABA receptors in the frog and turtle retina

    The expression of GABA receptors (GABARs) was studied in frog and turtle retinae. Using immunocytochemical methods, GABAARs and GABACRs were preferentially localized to the inner plexiform layer (IPL). Label in the IPL was punctate, indicating synaptic clustering of GABARs. Distinct, but weaker, label was also present in the outer plexiform layer. GABAAR- and GABACR-mediated effects were studied by recording electroretinograms (ERGs) and applying specific antagonists. Bicuculline, the GABAAR antagonist, produced a significant increase of the ERG. Picrotoxin, when co-applied with saturating doses of bicuculline, caused a further increase of the ERG due to the blocking of GABACRs. The putative GABACR antagonist imidazole-4-acetic acid (I4AA) failed to antagonize GABACR-mediated inhibition and instead appeared to act as an agonist of GABARs.

    Foveated image processing for faster object detection and recognition in embedded systems using deep convolutional neural networks

    Object detection and recognition algorithms using deep convolutional neural networks (CNNs) tend to be computationally intensive to implement. This presents a particular challenge for embedded systems, such as mobile robots, where computational resources are far scarcer than on workstations. As an alternative to standard, uniformly sampled images, we propose the use of foveated image sampling to reduce the size of images, which are then faster to process in a CNN due to the reduced number of convolution operations. We evaluate object detection and recognition on the Microsoft COCO database, using foveated image sampling at different image sizes, ranging from 416×416 to 96×96 pixels, on an embedded GPU, an NVIDIA Jetson TX2 with 256 CUDA cores. The results show that it is possible to achieve a 4× speed-up in frame rate, from 3.59 FPS to 15.24 FPS, using 416×416 and 128×128 pixel images respectively. For foveated sampling, this image size reduction led to only a small decrease in recall performance in the foveal region, to 92.0% of the baseline performance with full-sized images, compared to a significant decrease to 50.1% of baseline recall with uniformly sampled images, demonstrating the advantage of foveated sampling.
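
    The paper's exact sampling scheme is not reproduced here, but the general idea can be sketched in a few lines: keep a central "foveal" region at native resolution and represent the surround only coarsely, so the CNN input is much smaller than the captured frame. In the sketch below, the function name foveated_sample and the parameters out_size and fovea_frac are illustrative assumptions, not the authors' code.

        import numpy as np

        def foveated_sample(image, out_size=128, fovea_frac=0.5):
            """Illustrative foveated sampling: coarse periphery, full-resolution centre.

            Assumes the input frame is larger than out_size in both dimensions.
            """
            h, w = image.shape[:2]
            fovea = int(out_size * fovea_frac)              # side of the foveal patch (pixels)
            # Coarse periphery: nearest-neighbour (strided) downsample of the whole frame
            step_y, step_x = max(h // out_size, 1), max(w // out_size, 1)
            out = image[::step_y, ::step_x][:out_size, :out_size].copy()
            # Foveal patch: native-resolution crop around the image centre
            cy, cx, half = h // 2, w // 2, fovea // 2
            patch = image[cy - half:cy + half, cx - half:cx + half]
            oy = ox = (out_size - patch.shape[0]) // 2
            out[oy:oy + patch.shape[0], ox:ox + patch.shape[1]] = patch
            return out

        # e.g. small = foveated_sample(frame, out_size=128)   # 'frame' is a hypothetical HxWx3 capture

    The downstream detector then runs on the smaller foveated image (e.g. 128×128 instead of 416×416), which is where the reported 4× frame-rate gain comes from.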

    Extended morphometric analysis of neuronal cells with Minkowski valuations

    Minkowski valuations provide a systematic framework for quantifying different aspects of morphology. In this paper we apply vector- and tensor-valued Minkowski valuations to neuronal cells from the cat's retina in order to describe their morphological structure in a comprehensive way. We introduce the framework of Minkowski valuations, discuss their implementation for neuronal cells, and show how they can discriminate between cells of different types.
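
    For readers less familiar with the terminology, the quantities involved can be stated compactly. The definitions below are the standard ones from the Minkowski-valuation literature (normalisation conventions vary) and are not copied from the paper. For a two-dimensional body K with boundary curvature kappa and outward unit normal n, the scalar Minkowski functionals are

        W_0(K) = \int_K d^2r                                  (area)
        W_1(K) = \tfrac{1}{2} \oint_{\partial K} d\ell          (proportional to the perimeter)
        W_2(K) = \tfrac{1}{2} \oint_{\partial K} \kappa \, d\ell (proportional to the Euler characteristic)

    and vector- and tensor-valued valuations are obtained by inserting tensor products of the position vector r and the normal n into the same integrals, e.g.

        \mathbf{W}_0(K) = \int_K \mathbf{r} \, d^2r, \qquad
        W_1^{0,2}(K) = \tfrac{1}{2} \oint_{\partial K} \mathbf{n} \otimes \mathbf{n} \, d\ell,

    which capture the position, orientation, and anisotropy of a cell's dendritic field in addition to its size and connectivity.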

    Analysis of spatial relationships in three dimensions: tools for the study of nerve cell patterning

    Background: Multiple technologies have been brought to bear on understanding the three-dimensional morphology of individual neurons and glia within the brain, but little progress has been made on understanding the rules controlling cellular patterning. We describe new MATLAB-based software tools, now available to the scientific community, permitting the calculation of spatial statistics associated with 3D point patterns. The analyses are largely derived from the Delaunay tessellation of the field, including the nearest neighbor and Voronoi domain analyses, and from the spatial autocorrelogram. Results: Our tools enable the analysis of the spatial relationships between neurons within the central nervous system in 3D, and permit the modeling of these fields based on lattice-like simulations and on simulations of minimal-distance spacing rules. Here we demonstrate the utility of our analysis methods to discriminate between two different simulated neuronal populations. Conclusion: Together, these tools can be used to reveal the presence of nerve cell patterning and to model its foundation, in turn informing on the potential developmental mechanisms that govern its establishment. Furthermore, in conjunction with analyses of dendritic morphology, they can be used to determine the degree of dendritic coverage within a volume of tissue exhibited by mature nerve cells.
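
    The tools described are MATLAB-based and are not reproduced here; purely as an illustration of the kind of statistics involved, the sketch below computes nearest-neighbour distances and a Delaunay/Voronoi tessellation for a simulated 3D point pattern using scipy.spatial. The point count and field size are arbitrary assumptions.

        import numpy as np
        from scipy.spatial import cKDTree, Delaunay, Voronoi

        rng = np.random.default_rng(0)
        points = rng.uniform(0, 100, size=(500, 3))     # simulated 3D somata positions (arbitrary units)

        # Nearest-neighbour distance for every cell (k=2: the closest hit is the point itself)
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=2)
        nn = dists[:, 1]
        print("mean nearest-neighbour distance:", nn.mean())

        # Delaunay tessellation and Voronoi diagram of the same field
        tri = Delaunay(points)
        vor = Voronoi(points)
        print("Delaunay simplices:", len(tri.simplices))
        print("Voronoi regions:", len(vor.regions))

    Comparing such statistics between real fields and simulated fields (lattice-like or minimal-distance models) is the discrimination task the abstract refers to.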

    Expression of SPIG1 Reveals Development of a Retinal Ganglion Cell Subtype Projecting to the Medial Terminal Nucleus in the Mouse

    Visual information is transmitted to the brain by roughly a dozen distinct types of retinal ganglion cells (RGCs), each defined by a characteristic morphology, physiology, and central projection. However, our understanding of how these parallel pathways develop is still in its infancy, because few molecular markers corresponding to individual RGC types are available. Previously, we reported a secretory protein, SPIG1 (clone name: D/Bsp120I #1), preferentially expressed in the dorsal region of the developing chick retina. Here, we generated knock-in mice to visualize SPIG1-expressing cells with green fluorescent protein. We found that the mouse retina is subdivided into two distinct domains for SPIG1 expression and that SPIG1 effectively marks a unique subtype of retinal ganglion cell during the neonatal period. SPIG1-positive RGCs in the dorsotemporal domain project to the dorsal lateral geniculate nucleus (dLGN), superior colliculus, and accessory optic system (AOS). In contrast, in the remaining region, here named the pan-ventronasal domain, SPIG1-positive cells form a regular mosaic and project exclusively to the medial terminal nucleus (MTN) of the AOS, which mediates the optokinetic nystagmus, as early as P1. Their dendrites costratify with ON cholinergic amacrine strata in the inner plexiform layer as early as P3. These findings suggest that these SPIG1-positive cells are ON direction selective ganglion cells (DSGCs). Moreover, the MTN-projecting cells in the pan-ventronasal domain are apparently composed of two distinct but interdependent regular mosaics depending on the presence or absence of SPIG1, indicating that they comprise two functionally distinct subtypes of ON DSGCs. The formation of the regular mosaic appears to commence at the end of the prenatal stage and to be completed through the peak period of cell death at P6. SPIG1 will thus serve as a useful molecular marker for future studies on the development and function of ON DSGCs.

    Standard Anatomical and Visual Space for the Mouse Retina: Computational Reconstruction and Transformation of Flattened Retinae with the Retistruct Package

    The concept of topographic mapping is central to the understanding of the visual system at many levels, from the developmental to the computational. It is important to be able to relate different coordinate systems, e.g. maps of the visual field and maps of the retina. Retinal maps are frequently based on flat-mount preparations, which use dissection and relaxing cuts to render the quasi-spherical retina into a 2D preparation. The variable nature of relaxing cuts and associated tears limits quantitative cross-animal comparisons. We present an algorithm, "Retistruct," that reconstructs retinal flat-mounts by mapping them into a standard, spherical retinal space. This is achieved by: stitching the marked-up cuts of the flat-mount outline; dividing the stitched outline into a mesh whose vertices are then mapped onto a curtailed sphere; and finally moving the vertices so as to minimise a physically-inspired deformation energy function. Our validation studies indicate that the algorithm can estimate the position of a point on the intact adult retina to within 8° of arc (3.6% of the nasotemporal axis). The coordinates in reconstructed retinae can be transformed to visuotopic coordinates. Retistruct is used to investigate the organisation of the adult mouse visual system. We orient the retina relative to the nictitating membrane and compare this to eye muscle insertions. To align the retinotopic and visuotopic coordinate systems in the mouse, we utilised the geometry of binocular vision. In standard retinal space, the composite decussation line for the uncrossed retinal projection is located 64° away from the retinal pole. Projecting anatomically defined uncrossed retinal projections into visual space gives binocular congruence if the optical axis of the mouse eye is oriented at 64° azimuth and 22° elevation, in concordance with previous results. Moreover, using these coordinates, the dorsoventral boundary for S-opsin expressing cones closely matches the horizontal meridian.
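
    The Retistruct package and its published deformation energy are not reproduced here; the toy sketch below only illustrates the final step described above, i.e. moving vertices constrained to a sphere so that great-circle edge lengths approach the edge lengths measured on the flat-mount. The spring-like energy, edge list, and rest lengths are illustrative assumptions, not the published function.

        import numpy as np

        def sphere_energy(phi_theta, edges, rest_len, R=1.0):
            """Sum of squared differences between great-circle edge lengths on a
            sphere of radius R and their rest lengths measured on the flat-mount."""
            phi, theta = phi_theta.reshape(2, -1)                 # latitude, longitude per vertex
            xyz = R * np.stack([np.cos(phi) * np.cos(theta),
                                np.cos(phi) * np.sin(theta),
                                np.sin(phi)], axis=1)
            i, j = edges[:, 0], edges[:, 1]
            chord = np.linalg.norm(xyz[i] - xyz[j], axis=1)
            arc = 2 * R * np.arcsin(np.clip(chord / (2 * R), -1, 1))   # great-circle edge length
            return np.sum((arc - rest_len) ** 2)

        # Tiny demo: two vertices joined by one edge whose flat-mount length was 1.0
        x0 = np.array([0.1, -0.1, 0.0, 0.5])          # [phi_1, phi_2, theta_1, theta_2]
        edges = np.array([[0, 1]])
        print(sphere_energy(x0, edges, np.array([1.0])))
        # In practice such an energy would be minimised over all vertex positions,
        # e.g. with scipy.optimize.minimize(sphere_energy, x0, args=(edges, rest_len)).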

    Ambient light modulation of exogenous attention to threat

    Planet Earth's motion yields a yearly balance of 50% day and 50% night at every latitude and longitude, so survival must be guaranteed under very different light conditions in many species, including humans. Cone- and rod-dominant vision, respectively specialized for light and darkness, present several processing differences, which are, at least partially, reflected in event-related potentials (ERPs). The present experiment aimed at characterizing exogenous attention to threatening (spiders) and neutral (wheels) distractors in two environmental light conditions, low mesopic (L, 0.03 lx) and high mesopic (H, 6.5 lx), yielding a differential photoreceptor activity balance: rod > cone and rod < cone, respectively. These distractors were presented in the lower visual hemifield while the 40 participants were engaged in a digit categorization task. Stimuli, both targets (digits) and distractors, were exactly the same in L and H. Both ERPs and behavioral performance in the task were recorded. Enhanced attentional capture by salient distractors was observed regardless of ambient light level. However, ERPs showed a differential pattern as a function of ambient light: significantly enhanced amplitude to salient distractors was observed in posterior P1 and early anterior P2 (P2a) only during the H context, in late P2a during the L context, and in occipital P3 during both H and L contexts. In other words, while exogenous attention to threat was equally efficient in light and darkness, cone-dominant exogenous attention was faster than rod-dominant attention, in line with previous data indicating slower processing times for rod- than for cone-dominant vision. This research was supported by Grants PSI2014-54853-P and PSI2012-37090 from the Ministerio de Economía y Competitividad of Spain (MINECO).

    A First- and Second-Order Motion Energy Analysis of Peripheral Motion Illusions Leads to Further Evidence of “Feature Blur” in Peripheral Vision

    Anatomical and physiological differences between the central and peripheral visual systems are well documented. Recent findings have suggested that vision in the periphery is not just a scaled version of foveal vision, but rather is relatively poor at representing spatial and temporal phase and other visual features. Shapiro, Lu, Huang, Knight, and Ennis (2010) recently examined a motion stimulus (the "curveball illusion") in which the shift from foveal to peripheral viewing results in a dramatic spatial/temporal discontinuity. Here, we apply a similar analysis to a range of other spatial/temporal configurations that create perceptual conflict between foveal and peripheral vision. To elucidate how the differences between foveal and peripheral vision affect super-threshold vision, we created a series of complex visual displays that contain opposing sources of motion information. The displays (referred to as the peripheral escalator illusion, peripheral acceleration and deceleration illusions, rotating reversals illusion, and disappearing squares illusion) create dramatically different perceptions when viewed foveally versus peripherally. We compute the first-order and second-order directional motion energy available in the displays using a three-dimensional Fourier analysis in (x, y, t) space. The peripheral escalator, acceleration and deceleration illusions and the rotating reversals illusion all show a similar trend: in the fovea, the first-order and second-order motion energy can be perceptually separated from each other; in the periphery, the perception seems to correspond to a combination of the multiple sources of motion information. The disappearing squares illusion shows that the ability to assemble the features of Kanizsa squares becomes slower in the periphery. The results lead us to hypothesize "feature blur" in the periphery (i.e., the peripheral visual system combines features that the foveal visual system can separate). Feature blur is of general importance because humans frequently bring information from the periphery to the fovea and vice versa.
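
    The authors' exact implementation is not given in the abstract; as a rough illustration of what a directional motion-energy measurement from a 3D Fourier transform can look like, the sketch below sums spectral power in the quadrants where horizontal spatial frequency and temporal frequency have opposite signs (rightward motion) versus the same sign (leftward motion), and treats squaring the movie as a crude stand-in for the rectification used to expose second-order motion. Function and variable names are assumptions.

        import numpy as np

        def directional_energy(movie):
            """movie: array of shape (T, Y, X). Returns (rightward, leftward) spectral energy."""
            power = np.abs(np.fft.fftshift(np.fft.fftn(movie))) ** 2
            T, Y, X = movie.shape
            ft = np.fft.fftshift(np.fft.fftfreq(T))[:, None, None]   # temporal frequency
            fx = np.fft.fftshift(np.fft.fftfreq(X))[None, None, :]   # horizontal spatial frequency
            rightward = (power * ((ft * fx) < 0)).sum()               # opposite signs: rightward drift
            leftward = (power * ((ft * fx) > 0)).sum()                # same signs: leftward drift
            return rightward, leftward

        # Demo: a grating drifting rightward; rightward energy should dominate
        T, Y, X = 32, 8, 64
        t = np.arange(T)[:, None, None]
        x = np.arange(X)[None, None, :]
        stim = np.cos(2 * np.pi * (0.1 * x - 0.05 * t)) * np.ones((1, Y, 1))
        print(directional_energy(stim))           # first-order motion energy
        print(directional_energy(stim ** 2))      # crude second-order (rectified) pass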