
    Interactive object contour extraction for shape modeling

    In this paper we present a semi-automatic segmentation approach suitable for extracting object contours as a precursor to 2D shape modeling. The approach is a modified and extended version of an existing state-of-the-art approach based on the concept of a Binary Partition Tree (BPT) [1]. The resulting segmentation tool facilitates quick and easy extraction of an object’s contour via a small amount of user interaction that is easy to perform, even in complicated scenes. Illustrative segmentation results are presented, and the usefulness of the approach in generating object shape models is discussed.
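    The core BPT idea can be illustrated with a minimal sketch (not the authors' implementation): adjacent regions are merged greedily by similarity, and each merge becomes an internal node of a binary tree whose subtrees a user can later pick to extract a contour. The 1-D signal and the mean-difference similarity below are illustrative assumptions.

    ```python
    # Minimal sketch of Binary Partition Tree construction by greedy region
    # merging. A 1-D signal stands in for an image; the similarity measure
    # (difference of region means) is an illustrative assumption.

    def build_bpt(values):
        """Repeatedly merge the most similar adjacent regions; every merge
        becomes an internal node, so the result is a binary tree over pixels."""
        regions = [(float(v), 1, i) for i, v in enumerate(values)]  # (mean, size, node)
        while len(regions) > 1:
            # pick the adjacent pair whose region means are closest
            k = min(range(len(regions) - 1),
                    key=lambda i: abs(regions[i][0] - regions[i + 1][0]))
            (m1, s1, n1), (m2, s2, n2) = regions[k], regions[k + 1]
            merged = ((m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2, (n1, n2))
            regions[k:k + 2] = [merged]
        return regions[0]

    mean, size, tree = build_bpt([10, 11, 50, 52, 51])
    # the two dark pixels and the three bright ones end up in separate subtrees
    ```

    In the real tool, user clicks select subtrees of such a hierarchy, so a whole object can be extracted with very few interactions.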

    The asymmetries of colour constancy as determined through illumination discrimination using tuneable LED light sources

    PhD Thesis. The light reflected from object surfaces changes with the spectral content of the illumination. Despite these changes, the human visual system tends to keep the colours of surfaces constant, a phenomenon known as colour constancy. Colour constancy is known to be imperfect under many conditions; however, it is unknown whether the underlying mechanisms present in the retina and the cortex are optimised for the illuminations under which they have evolved, namely natural daylights, or for particular objects. A novel method of measuring colour constancy, by illumination discrimination, is presented and explored. This method, unlike previous methods of measuring colour constancy, allows the testing of multiple, real illuminations with arbitrary spectral content, through the application of tuneable, multi-channel LED light sources. Data from both real scenes, under real illuminations, and computer simulations are presented which support the hypothesis that the visual system maintains higher levels of colour constancy for daylight illumination changes, and in particular in the “bluer” direction, which are also the changes most frequent in nature. The low-level cone inputs for various experimental scenes are examined; these challenge all traditional theories of colour constancy and support the conclusion that higher-level mechanisms of colour constancy are biased for particular illuminations. Furthermore, real and simulated neutral (grey) surfaces are shown to affect levels of colour constancy. Moreover, the conceptual framework for discussing colour constancy with respect to emergent LED light sources is discussed. (EPSRC)
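    One way such a tuneable multi-channel source can approximate an arbitrary target spectrum is by solving for per-channel drive weights in the least-squares sense. This is a hedged sketch, not the thesis's calibration procedure; the five-sample channel spectra and the flat target are made-up values.

    ```python
    # Hedged sketch: choose drive weights for a multi-channel LED source so
    # that the combined output approximates a target illumination spectrum.
    import numpy as np

    channels = np.array([      # rows: wavelength samples, cols: LED channels
        [1.0, 0.0],
        [0.8, 0.2],
        [0.4, 0.6],
        [0.1, 0.9],
        [0.0, 1.0],
    ])
    target = np.array([0.5, 0.5, 0.5, 0.5, 0.5])   # flat "white" target

    # least-squares mixing weights (a real driver would also clip to >= 0
    # and respect per-channel maximum radiance)
    weights, *_ = np.linalg.lstsq(channels, target, rcond=None)
    mix = channels @ weights
    ```

    With real calibrated channel spectra, the same solve lets the experimenter realise illuminations of arbitrary chromaticity, which is what makes illumination-discrimination measurements possible.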

    Facial Expression Recognition


    Temporal structure in spiking patterns of ganglion cells defines perceptual thresholds in rodents with subretinal prosthesis.

    Subretinal prostheses are designed to restore sight in patients blinded by retinal degeneration using electrical stimulation of the inner retinal neurons. To relate retinal output to perception, we studied behavioral thresholds in blind rats with photovoltaic subretinal prostheses stimulated by full-field pulsed illumination at 20 Hz, and measured retinal ganglion cell (RGC) responses to similar stimuli ex vivo. Behaviorally, rats exhibited a startle response to changes in brightness, with an average contrast threshold of 12%, which could not be explained by changes in the average RGC spiking rate. However, RGCs exhibited millisecond-scale variations in spike timing, even when the average rate did not change significantly. At 12% temporal contrast, changes in the firing patterns of the prosthetic response were as significant as those elicited by 2.3% contrast steps in visible-light stimulation of healthy retinas. This suggests that millisecond-scale changes in spiking patterns define the perceptual thresholds of prosthetic vision. The response to the last pulse in a stimulation burst lasted longer than the steady-state response during the burst. This may be interpreted as an excitatory OFF response to prosthetic stimulation, and can explain the behavioral response to a decrease in illumination. Contrast enhancement of images prior to delivery to the subretinal prosthesis can partially compensate for the reduced contrast sensitivity of prosthetic vision.
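    For concreteness, the temporal contrast of a brightness step is commonly quantified as the relative change in intensity over baseline (Weber contrast); the abstract does not state which definition was used, so this is an assumption. A 12% threshold then corresponds to steps like the one below.

    ```python
    # Hedged sketch: Weber contrast of a brightness step, assuming the common
    # definition (change in intensity divided by baseline intensity).

    def weber_contrast(baseline, stimulus):
        """Relative intensity change: (I - I0) / I0."""
        return (stimulus - baseline) / baseline

    # a 12% increment over an arbitrary baseline of 100 units
    step = weber_contrast(100.0, 112.0)
    ```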

    Are V1 simple cells optimized for visual occlusions? A comparative study

    Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models of simple cell coding because they linked receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, the occlusion of image components, is not considered by these models. Here we ask whether occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models of simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encodings and receptive fields predicted by the models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here, with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
    Author Summary: The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized, through evolution and throughout our lifespan, for such stimuli. Yet the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on the predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and the predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large quantities of such cells have been observed since new techniques (reverse correlation) came into use. Without occlusions, they are only obtained for specific settings, and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response emerges naturally as soon as occlusions are considered. In comparison with recent in vivo experiments, we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
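    The *linear* superposition assumption that the study compares against can be sketched concretely: an image patch is modelled as a weighted sum of dictionary components, and a sparse code is found by iterative soft-thresholding (ISTA). This is a generic sparse-coding sketch with toy values, not the paper's estimation procedure (which also infers sparsity and noise).

    ```python
    # Hedged sketch of linear sparse coding: x ~ D @ a with an L1 penalty on a,
    # solved by ISTA (iterative soft-thresholding). Dictionary and patch are toy.
    import numpy as np

    def ista(D, x, lam=0.1, steps=200):
        """Return a sparse code a minimising 0.5*||x - D a||^2 + lam*||a||_1."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(steps):
            grad = D.T @ (D @ a - x)           # gradient of the quadratic term
            z = a - grad / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return a

    D = np.array([[1.0, 0.0, 0.7],
                  [0.0, 1.0, 0.7]])            # three "receptive fields" in 2-D
    x = np.array([1.0, 0.0])                   # toy image "patch"
    a = ista(D, x)                             # only the first component activates
    ```

    The occlusive model replaces the weighted *sum* `D @ a` with a non-linear combination in which nearer components mask farther ones, which is exactly the difference the abstract attributes the ‘globular’ fields to.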

    Characterisation of neural activity across the mouse visual cortex during virtual navigation

    The brain’s visual and navigational systems are thought to be involved in distinct neural processes. Yet it is known that neurons in areas involved in the formation of spatial representations, such as the hippocampus, are also influenced by visual signals. In this Thesis I ask whether a similar influence exists in the opposite direction, namely whether navigational signals influence processing in primary visual cortex (V1) and in six higher visual areas. In parallel, given that little is known about the role of higher visual areas, especially during behaviour, I will seek to characterise their functional properties and differences across conditions of increased behavioural complexity, from passive viewing of drifting gratings all the way to virtual navigation. In the first Results chapter, Chapter 3, I will demonstrate that during running through a virtual reality environment, visual responses as early as in V1 are strongly influenced by spatial position. From Chapter 4 onward, together with V1, I will also focus on six higher visual areas (LM, AL, RL, A, AM and PM). Specifically, I will attempt to probe activity in these areas across a wide spectrum of conditions: passive viewing of drifting gratings (Chapter 4), active engagement in virtual reality (Chapter 5) and passive viewing in virtual reality (Chapter 6). The results presented in Chapters 5 and 6 will suggest that spatial modulation is present across visual areas specifically during active behaviour. Finally, in Chapter 7 I will ask whether activity in V1, AL and the posterior parietal cortex (PPC) depends on yet another navigational variable, distance run, and how this dependence differs between areas. In summary, by combining ideas and approaches from research in vision and navigation, I will seek to provide new, intriguing evidence about how neurons across the visual cortex combine visual with navigation-related signals to inform behaviour.

    A Structured Model of Video Reproduces Primary Visual Cortical Organisation

    The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly, and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely paralleled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.
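    The identity/attribute split described above can be sketched generatively: a binary identity variable (complex-cell-like) gates whether a feature contributes to the image at all, while continuous attribute variables (simple-cell-like) set its moment-by-moment appearance. This toy 1-D renderer is an illustrative assumption, not the paper's model.

    ```python
    # Hedged sketch of the generative structure: identity gates appearance.
    import math

    def render(present, amplitude, phase, n=8):
        """One Gabor-like 1-D feature: the identity variable decides whether
        the feature appears; amplitude and phase are its appearance attributes."""
        if not present:                 # feature absent: contributes no pixels
            return [0.0] * n
        return [amplitude * math.sin(2 * math.pi * i / n + phase)
                for i in range(n)]

    # identity stays on across frames while the phase attribute drifts,
    # as for a feature moving through a receptive field
    frames = [render(True, 1.0, 0.3 * t) for t in range(3)]
    absent = render(False, 1.0, 0.0)
    ```

    A complex-cell-like readout of `present` is invariant to the drifting phase, whereas a simple-cell-like readout of the attributes is not.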

    A Global Human Settlement Layer from optical high resolution imagery - Concept and first results

    A general framework for processing high and very-high resolution imagery to create a Global Human Settlement Layer (GHSL) is presented, together with a discussion of the results of the first operational test of the production workflow. The test involved the mapping of 24.3 million square kilometres of the Earth's surface spread over four continents, corresponding to an estimated population of 1.3 billion people in 2010. The resolution of the input image data ranges from 0.5 to 10 metres, collected by a heterogeneous set of platforms including the satellites SPOT (2 and 5), CBERS-2B, RapidEye (2 and 4), WorldView (1 and 2), GeoEye-1, QuickBird-2 and Ikonos-2, as well as airborne sensors. Several imaging modes were tested, including panchromatic, multispectral and pan-sharpened images. A new, fully automatic image information extraction, generalization and mosaicking workflow is presented that is based on multiscale textural and morphological image feature extraction. New image feature compression and optimization methods are introduced, together with new learning and classification techniques allowing for the processing of HR/VHR image data using low-resolution thematic layers as reference. A new systematic approach for quality control and validation, allowing global spatial and thematic consistency checking, is proposed and applied. The quality of the results is discussed by sensor, band, resolution and eco-region. Critical points, lessons learned and next steps are highlighted. (JRC.G.2 - Global security and crisis management)
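    The kind of multiscale textural cue such a workflow builds on can be sketched simply: built-up areas show high local intensity variance compared with homogeneous land cover. The window size, threshold and tiny toy image below are illustrative assumptions, not the GHSL feature set.

    ```python
    # Hedged sketch of a textural feature for settlement detection: variance of
    # pixel intensities in a small sliding window, thresholded into a crude mask.
    import numpy as np

    def local_variance(img, r=1):
        """Variance of intensities in a (2r+1)x(2r+1) window around each pixel
        (windows are clipped at image borders)."""
        h, w = img.shape
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                out[y, x] = win.var()
        return out

    img = np.array([[0, 0, 0, 9, 0],       # left: homogeneous "bare land",
                    [0, 0, 0, 0, 9],       # right: high-contrast "built-up"
                    [0, 0, 0, 9, 0]], dtype=float)
    texture = local_variance(img)
    builtup = texture > 1.0                # illustrative threshold
    ```

    Real workflows combine many such features across scales (plus morphological profiles) and learn the decision boundary from low-resolution reference layers rather than fixing a threshold by hand.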