
    From receptive profiles to a metric model of V1

    In this work we show how to construct connectivity kernels induced by the receptive profiles of simple cells of the primary visual cortex (V1). These kernels are directly defined by the shape of such profiles: this provides a metric model for the functional architecture of V1, whose global geometry is determined by the reciprocal interactions between local elements. Our construction adapts to any bank of filters chosen to represent a set of receptive profiles, since it does not require any structure on the parameterization of the family. The connectivity kernel that we define carries a geometrical structure consistent with the well-known properties of long-range horizontal connections in V1, and it is compatible with the perceptual rules synthesized by the concept of association field. These characteristics are still present when the kernel is constructed from a bank of filters arising from an unsupervised learning algorithm.
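    As a rough illustration, the sketch below builds a toy bank of Gabor profiles and takes the connectivity kernel to be the normalized inner product between profiles; the filter parameters and the normalization are illustrative assumptions, not the paper's exact construction.

        import numpy as np

        def gabor(size, theta, freq=0.2, sigma=3.0, x0=0, y0=0):
            """Even-symmetric Gabor receptive profile centred at (x0, y0)."""
            ax = np.arange(size) - size // 2
            X, Y = np.meshgrid(ax, ax)
            Xr = (X - x0) * np.cos(theta) + (Y - y0) * np.sin(theta)
            envelope = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
            return envelope * np.cos(2 * np.pi * freq * Xr)

        # Toy bank of profiles indexed by (position, orientation); any filter bank works,
        # since the kernel is defined from the profiles alone.
        params = [(x0, th) for x0 in (-6, 0, 6) for th in np.linspace(0, np.pi, 8, endpoint=False)]
        bank = np.stack([gabor(33, th, x0=x0).ravel() for x0, th in params])

        # Connectivity kernel: normalized inner products between receptive profiles.
        norms = np.linalg.norm(bank, axis=1, keepdims=True)
        K = (bank @ bank.T) / (norms * norms.T)
        print(K.shape)  # (24, 24); largest entries link co-oriented, nearby profiles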

    How spiking neurons give rise to a temporal-feature map

    A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule combined with a presynaptically unspecific contribution: if a synapse changes, then all other synapses connected to the same axon change by a small fraction as well. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells’ synaptic structures as a function of the strength of presynaptic unspecific learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
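    A toy sketch of such a rule, assuming rate-coded Poisson inputs, a linear postsynaptic drive, a single unspecific fraction eps that spreads each change across the synapses of the same axon, and a crude weight normalization for stability; none of these details are taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_axons, n_cells = 20, 10
        W = rng.uniform(0.0, 0.1, size=(n_cells, n_axons))    # synaptic weights
        eta, eps = 1e-3, 0.05                                  # learning rate, unspecific fraction

        def hebbian_step(W, rates_pre, dt=1.0):
            """One update: specific Hebbian change plus a presynaptically unspecific spread."""
            spikes_pre = rng.poisson(rates_pre * dt)           # Poisson presynaptic spike counts
            post = W @ spikes_pre                              # linear postsynaptic drive
            dW = eta * np.outer(post, spikes_pre)              # specific Hebbian term
            # If a synapse changes, every synapse on the same axon also changes by a
            # small fraction (eps) of the mean change on that axon.
            dW += eps * dW.mean(axis=0, keepdims=True)
            W = W + dW
            return W / W.sum(axis=1, keepdims=True)            # crude normalization for stability

        rates = rng.uniform(5.0, 20.0, size=n_axons)           # presynaptic firing rates
        for _ in range(1000):
            W = hebbian_step(W, rates)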

    CMB Anisotropy in Compact Hyperbolic Universes II: COBE Maps and Limits

    We calculate the CMB anisotropy in compact hyperbolic universe models using the regularized method of images described in Paper I, including the line-of-sight 'integrated Sachs-Wolfe' effect as well as the last-scattering surface terms. We calculate the Bayesian probabilities for a selection of models by confronting our theoretical pixel-pixel temperature correlation functions with the COBE-DMR data. Our results demonstrate that strong constraints on compactness arise: if the universe is small compared to the 'horizon' size, correlations appear in the maps that are irreconcilable with the observations. This conclusion is qualitatively insensitive to the matter content of the universe, in particular the presence of a cosmological constant. If the universe is of comparable size to the 'horizon', the likelihood function depends strongly on the orientation of the manifold with respect to the sky. While most orientations may be strongly ruled out, it sometimes happens that for a specific orientation the predicted correlation patterns are preferred over those for the conventional infinite models. The full Bayesian analysis we use is the most complete statistical test that can be done on the COBE maps, taking into account all possible signals and their variances in the theoretical skies, in particular the high degree of anisotropic correlation that can exist. We show that standard visual measures for comparing theoretical predictions with the data, such as the isotropized power spectrum C_\ell, are not so useful in small compact spaces because of the enhanced cosmic variance associated with the breakdown of statistical isotropy.
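    Schematically, the pixel-space Bayesian comparison amounts to a Gaussian likelihood built from a theoretical pixel-pixel correlation matrix plus a noise covariance; the sketch below uses random stand-ins for both matrices and is not the paper's pipeline.

        import numpy as np

        def log_likelihood(data, C_theory, C_noise):
            """Gaussian log-likelihood of a map given theory + noise pixel-pixel covariances."""
            C = C_theory + C_noise
            _, logdet = np.linalg.slogdet(C)
            chi2 = data @ np.linalg.solve(C, data)
            return -0.5 * (chi2 + logdet + len(data) * np.log(2.0 * np.pi))

        # Toy example: 100 "pixels", a random anisotropic theory covariance, white noise.
        rng = np.random.default_rng(1)
        npix = 100
        A = rng.normal(size=(npix, npix))
        C_theory = A @ A.T / npix            # stand-in for a compact-space correlation matrix
        C_noise = 0.1 * np.eye(npix)
        data = rng.multivariate_normal(np.zeros(npix), C_theory + C_noise)
        print(log_likelihood(data, C_theory, C_noise))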

    Emergent Orientation Selectivity from Random Networks in Mouse Visual Cortex

    The connectivity principles underlying the emergence of orientation selectivity in primary visual cortex (V1) of mammals lacking an orientation map (such as rodents and lagomorphs) are poorly understood. We present a computational model in which random connectivity gives rise to orientation selectivity that matches experimental observations. The model predicts that mouse V1 neurons should exhibit intricate receptive fields in the two-dimensional frequency domain, causing a shift in orientation preferences with spatial frequency. We find evidence for these features in mouse V1 using calcium imaging and intracellular whole-cell recordings. Pattadkal et al. show that orientation selectivity can emerge from random connectivity, and offer a distinct perspective on how computations occur in the neocortex. They propose that a random convergence of inputs can provide signals for orientation preference, in contrast with the dominant model that requires a precise arrangement.
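    A quick, hypothetical simulation of the random-convergence idea: summing a few randomly placed, untuned inputs already yields an elongated receptive field and hence measurable orientation selectivity. All parameters below are illustrative, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(2)
        size = 64
        ax = np.arange(size) - size // 2
        X, Y = np.meshgrid(ax, ax)

        # Receptive field = random convergence of a few untuned (circular Gaussian) inputs.
        centers = rng.normal(scale=6.0, size=(8, 2))
        rf = sum(np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * 3.0 ** 2)) for cx, cy in centers)
        rf -= rf.mean()

        def grating(theta, freq=0.08, phase=0.0):
            return np.cos(2 * np.pi * freq * (X * np.cos(theta) + Y * np.sin(theta)) + phase)

        thetas = np.linspace(0, np.pi, 16, endpoint=False)
        resp = np.array([max(abs((rf * grating(t, phase=p)).sum())
                             for p in np.linspace(0, np.pi, 8))
                         for t in thetas])

        # Orientation selectivity index from the circular mean of the tuning curve.
        osi = abs((resp * np.exp(2j * thetas)).sum()) / resp.sum()
        print(f"OSI from purely random convergence: {osi:.2f}")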

    A robust framework for medical image segmentation through adaptable class-specific representation

    Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer-based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has been growing over the last decades, and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo section data. These developments have given rise to an increasing need for better automatic and semi-automatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework known as ACSR (Adaptable Class-Specific Representation) is developed in the first case for 2D colour cryo section segmentation. This is achieved through the development of a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo section data and subsequently segmentation of single and multi-channel greyscale MRI data. For the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
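    For orientation, a bare-bones LVQ1 update of the kind the framework builds on is sketched below; the feature vectors, class labels and schedule are placeholders rather than the ACSR/PGA specifics.

        import numpy as np

        def lvq1_train(X, y, n_protos=3, lr=0.05, epochs=20, seed=0):
            """Basic LVQ1: pull the nearest prototype toward same-class samples, push otherwise."""
            rng = np.random.default_rng(seed)
            P, L = [], []
            for c in np.unique(y):
                idx = rng.choice(np.flatnonzero(y == c), n_protos, replace=False)
                P.append(X[idx].astype(float))
                L.append(np.full(n_protos, c))
            P, L = np.vstack(P), np.concatenate(L)
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    k = np.linalg.norm(P - X[i], axis=1).argmin()
                    step = lr * (X[i] - P[k])
                    P[k] += step if L[k] == y[i] else -step
            return P, L

        # Toy usage on two-class neighbourhood features (stand-ins for PGA samples).
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
        y = np.repeat([0, 1], 100)
        prototypes, labels = lvq1_train(X, y)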

    Image Local Features Description through Polynomial Approximation

    This work introduces a novel local patch descriptor that remains invariant under varying conditions of orientation, viewpoint, scale, and illumination. The proposed descriptor incorporates polynomials of various degrees to approximate the local patch within the image. Before feature detection and approximation, the image micro-texture is eliminated through a guided image filter that preserves the edges of objects. Rotation invariance is achieved by aligning the local patch around the Harris corner through the dominant orientation shift algorithm. Weighted threshold histogram equalization (WTHE) is employed to make the descriptor insensitive to illumination changes. The correlation coefficient is used instead of Euclidean distance to improve matching accuracy. The proposed descriptor has been extensively evaluated on the Oxford affine covariant regions dataset and the absolute and transition tilt dataset. The experimental results show that the proposed descriptor characterizes features with greater distinctiveness than state-of-the-art descriptors.
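    A sketch of the core descriptor step under simple assumptions: fit a low-degree 2D polynomial to the patch by least squares, use the coefficients as the descriptor, and match by correlation coefficient. The guided filtering, WTHE and orientation-alignment steps are omitted, and the degree and patch size are illustrative.

        import numpy as np

        def poly_descriptor(patch, degree=3):
            """Least-squares fit of a 2D polynomial to the patch; coefficients form the descriptor."""
            h, w = patch.shape
            yy, xx = np.mgrid[0:h, 0:w]
            xx = xx.ravel() / (w - 1) - 0.5
            yy = yy.ravel() / (h - 1) - 0.5
            terms = [xx ** i * yy ** j for i in range(degree + 1) for j in range(degree + 1 - i)]
            A = np.stack(terms, axis=1)
            coeffs, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
            return coeffs

        def match_score(d1, d2):
            """Correlation coefficient between descriptors (used instead of Euclidean distance)."""
            return np.corrcoef(d1, d2)[0, 1]

        rng = np.random.default_rng(0)
        p1 = rng.random((21, 21))
        p2 = p1 + 0.05 * rng.normal(size=p1.shape)   # slightly perturbed view of the same patch
        print(match_score(poly_descriptor(p1), poly_descriptor(p2)))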

    Pattern integration in the normal and abnormal human visual system

    The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
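    As a worked illustration of the fourth-root summation rule, detection threshold falls in proportion to the number of pooled stimulus cycles raised to the power -1/4; the numbers below are illustrative, not data from the thesis.

        import numpy as np

        # Fourth-root ("noisy energy") summation: pooling n equally sensitive regions with a
        # Minkowski exponent of 4 lowers the detection threshold in proportion to n ** (-1/4).
        def threshold_vs_area(n_cycles, exponent=4.0, single_cycle_threshold=1.0):
            return single_cycle_threshold * n_cycles ** (-1.0 / exponent)

        for n in (1, 4, 16, 33):
            t = threshold_vs_area(n)
            print(f"{n:3d} carrier cycles: relative threshold {t:.2f} "
                  f"({20 * np.log10(1 / t):.1f} dB of summation)")
        # Doubling the area lowers threshold by about 1.5 dB, i.e. a slope of -1/4 in log-log axes.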

    Screened Poisson hyperfields for shape coding

    We present a novel perspective on shape characterization using the screened Poisson equation. We discuss how the screening parameter changes the measure of the underlying metric space; equivalently, screening can be interpreted as a conditioned random walker biased by the choice of measure. A continuum of shape fields is created by varying the screening parameter or, equivalently, the bias of the random walker. In addition to creating a regional encoding of the diffusion with a different bias, we further break down the influence of boundary interactions by considering a number of independent random walks, each emanating from a certain boundary point, whose superposition yields the screened Poisson field. Probing the screened Poisson equation from these two complementary perspectives leads to a high-dimensional hyperfield: a rich characterization of the shape that encodes global, local, interior, and boundary interactions. To extract particular shape information from the hyperfield in a compact way, we apply various decompositions, either to unveil parts of a shape or parts of a boundary, or to create consistent mappings. The latter technique involves lower-dimensional embeddings, which we call screened Poisson encoding maps (SPEM). The expressive power of the SPEM is demonstrated via illustrative experiments as well as a quantitative shape retrieval experiment over a public benchmark database, on which the SPEM method ranks highly among existing state-of-the-art shape retrieval methods.
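    A small sketch of a screened Poisson field on a binary shape, assuming a finite-difference discretization with the field clamped to zero outside the shape and a simple Jacobi solver; the screening parameter values and the unit source term are illustrative, not the paper's exact formulation.

        import numpy as np

        def screened_poisson_field(mask, rho=0.1, n_iter=2000):
            """Solve (Laplacian - rho) u = -1 inside the shape, u = 0 outside (Jacobi iteration)."""
            u = np.zeros(mask.shape, dtype=float)
            inside = mask.astype(bool)
            for _ in range(n_iter):
                nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(inside, (nb + 1.0) / (4.0 + rho), 0.0)
            return u

        # Toy shape: a filled disk. Varying rho changes the measure / random-walker bias,
        # and stacking the fields over several rho values gives a simple "hyperfield".
        yy, xx = np.mgrid[0:64, 0:64]
        disk = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
        hyperfield = np.stack([screened_poisson_field(disk, rho) for rho in (0.01, 0.1, 1.0)])
        print(hyperfield.shape)  # (3, 64, 64)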

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is evaluated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning the cortical surface is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
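    A stripped-down EM for Gaussian intensity classes, of the kind the neonatal pipeline extends; the explicit partial-volume correction, spatial priors and bias-field handling are omitted, and the class count and toy data are assumptions.

        import numpy as np

        def em_segment(image, n_classes=3, n_iter=50):
            """Plain EM for a 1D Gaussian mixture over voxel intensities (no PV correction)."""
            x = image.ravel().astype(float)
            mu = np.quantile(x, np.linspace(0.2, 0.8, n_classes))   # crude initialization
            var = np.full(n_classes, x.var() / n_classes)
            pi = np.full(n_classes, 1.0 / n_classes)
            for _ in range(n_iter):
                # E-step: posterior responsibility of each class for each voxel.
                lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
                resp = pi * lik
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: update the mixture parameters.
                nk = resp.sum(axis=0)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
                pi = nk / len(x)
            return resp.argmax(axis=1).reshape(image.shape), mu

        # Toy "scan": three tissue classes; in neonates the grey/white contrast is reversed.
        rng = np.random.default_rng(1)
        img = np.concatenate([rng.normal(m, 5.0, 2000) for m in (30, 80, 120)]).reshape(60, 100)
        labels, class_means = em_segment(img)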