A Neural Model of Surface Perception: Lightness, Anchoring, and Filling-in
This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than the diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism, called the Blurred-Highest-Luminance-As-White (BHLAW) rule, helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene.
The model is also able to process natural images under variable lighting conditions. Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624)
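The BHLAW rule described above can be sketched in a few lines: blur the luminance map, take the maximum of the blurred image as the "white" anchor, and rescale so that anchor maps to 1.0. This is a simplified illustration, not the paper's neural implementation; the function names, the box-blur substitute for the model's blur kernel, and the kernel size `k` are all assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple box blur so the anchor reflects a spatial region rather
    # than a single pixel (a stand-in for the model's blur kernel).
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def bhlaw_anchor(luminance, k=5):
    # Anchor on the maximum of the *blurred* luminance, then rescale so
    # that this anchor maps to white (1.0). Isolated bright spots exceed
    # the anchor and are clipped, which is what makes the rule sensitive
    # to the spatial scale of objects in the scene.
    anchor = box_blur(luminance, k).max()
    return np.clip(np.asarray(luminance, dtype=float) / anchor, 0.0, 1.0)
```

Under this sketch, a single bright pixel cannot pull the whole scene's anchor up on its own: only a luminous region large relative to the blur kernel is treated as white.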
A biologically inspired spiking model of visual processing for image feature detection
To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful; hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has illustrated that biological vision systems use neuronal circuits to extract particular features such as edges or corners from visual scenes. Inspired by this biological behaviour, this paper proposes a biologically inspired spiking neural network for the purpose of image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields are used to extract progressively complex image features. The performance of the network is assessed by examining the repeatability of extracted features, with visual results presented using both synthetic and real images.
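The first stage the abstract describes, converting pixel intensities into retina-like spike trains, is commonly done with rate coding. Below is a minimal sketch under that assumption (the paper's actual encoding scheme may differ): each pixel fires independently on each time step with a probability proportional to its intensity. The parameter names and `max_rate` value are illustrative, not from the paper.

```python
import numpy as np

def poisson_spike_train(image, t_steps=50, max_rate=0.5, rng=None):
    # Rate-code pixel intensities in [0, 1] as Bernoulli/Poisson-style
    # spike trains: a pixel of intensity p fires on each time step with
    # probability p * max_rate, loosely analogous to retinal rate coding.
    rng = np.random.default_rng(rng)
    rates = np.clip(np.asarray(image, dtype=float), 0.0, 1.0) * max_rate
    # Boolean array of shape (t_steps, *image.shape): True = spike.
    return rng.random((t_steps,) + image.shape) < rates
```

Downstream layers (e.g. difference-of-Gaussians or orientation-tuned receptive fields) would then operate on these spike trains rather than on raw pixel values.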
A Neuromorphic Model for Achromatic and Chromatic Surface Representation of Natural Images
This study develops a neuromorphic model of human lightness perception that is inspired by how the mammalian visual system is designed for this function. It is known that biological visual representations can adapt to a billion-fold change in luminance. How such a system determines absolute lightness under varying illumination conditions to generate a consistent interpretation of surface lightness remains an unsolved problem. Such a process, called "anchoring" of lightness, has properties including articulation, insulation, configuration, and area effects. The model quantitatively simulates such psychophysical lightness data, as well as other data such as discounting the illuminant, the double brilliant illusion, and lightness constancy and contrast effects. The model retina embodies gain control at retinal photoreceptors, and spatial contrast adaptation at the negative feedback circuit between mechanisms that model the inner segment of photoreceptors and interacting horizontal cells. The model can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. A new anchoring mechanism, called the Blurred-Highest-Luminance-As-White (BHLAW) rule, helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural color images under variable lighting conditions, and is compared with the popular RETINEX model. Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624)
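The retinal gain control this abstract mentions, which lets a system respond over a billion-fold luminance range, is often modeled with a Naka-Rushton style saturation function whose half-saturation constant adapts to the ambient light level. The following sketch is a generic illustration of that idea, not the paper's specific circuit; the function signature and parameter values are assumptions.

```python
import numpy as np

def naka_rushton(intensity, half_sat, n=1.0):
    # Compressive gain control: maps intensities spanning many log units
    # into a bounded response in [0, 1). The half-saturation constant
    # `half_sat` is assumed to adapt toward the ambient light level, so
    # the same circuit stays sensitive in moonlight and in sunlight.
    I = np.asarray(intensity, dtype=float) ** n
    return I / (I + half_sat ** n)
```

With `half_sat` tracking the background, the response to a fixed stimulus drops as ambient light rises, which is the qualitative signature of light adaptation.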
Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding
Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original zerotree algorithm, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW for the compression of hyperspectral images. The subordinate pass is suppressed to remove the necessity of keeping the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
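The signed binary digit representations mentioned above are typically computed as the non-adjacent form (NAF), which writes an integer with digits in {-1, 0, 1} such that no two adjacent digits are nonzero, maximizing runs of zeros. How the paper integrates these digits into the zerotree passes is not detailed here; the sketch below only shows the standard NAF conversion itself.

```python
def to_naf(n):
    # Non-adjacent form: signed binary digits in {-1, 0, 1}, least
    # significant digit first, with no two adjacent nonzero digits.
    digits = []
    while n != 0:
        if n & 1:
            # n mod 4 == 1 -> emit +1; n mod 4 == 3 -> emit -1.
            d = 2 - (n & 3)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits
```

For example, 7 = 8 - 1 becomes the digits [-1, 0, 0, 1] rather than binary [1, 1, 1]; the longer zero runs are what make zerotree-style significance coding more efficient.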
Learning sparse representations of depth
This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of a stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, as typically obtained by laser range scanners or structured-light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed as the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving solutions of state-of-the-art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state-of-the-art denoising of depth maps obtained from laser range scanners and a time-of-flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm by Boykov et al. and the more recent algorithm of Woodford et al.
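The core idea of non-stationary sparse coding, weighting each measurement by its reliability instead of assuming uniform noise, can be illustrated with a weighted ISTA iteration. This is a generic sketch of the technique, not the paper's algorithm: the dictionary `D`, weights `w`, and step-size computation are illustrative, and it minimizes 0.5 * sum(w * (D @ x - y)**2) + lam * ||x||_1.

```python
import numpy as np

def weighted_ista(D, y, w, lam=0.1, n_iter=200):
    # Sparse coding with a non-stationary noise model: per-measurement
    # weights w down-weight unreliable depth samples (e.g. pixels where
    # a range scanner reported high uncertainty).
    # Lipschitz constant of the weighted quadratic term's gradient.
    L = np.linalg.norm(D.T @ (w[:, None] * D), 2)
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (w * (D @ x - y))     # weighted least-squares gradient
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x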
Angular sensitivity of blowfly photoreceptors: intracellular measurements and wave-optical predictions
The angular sensitivity of blowfly photoreceptors was measured in detail at wavelengths λ = 355, 494 and 588 nm.
The measured curves often showed numerous sidebands, indicating the importance of diffraction by the facet lens.
The shape of the angular sensitivity profile is dependent on wavelength. The main peak of the angular sensitivities at the shorter wavelengths was flattened. This phenomenon as well as the overall shape of the main peak can be quantitatively described by a wave-optical theory using realistic values for the optical parameters of the lens-photoreceptor system.
At a constant response level of 6 mV (almost dark adapted), the visual acuity of the peripheral cells R1-6 is at longer wavelengths mainly diffraction limited, while at shorter wavelengths the visual acuity is limited by the waveguide properties of the rhabdomere.
Closure of the pupil narrows the angular sensitivity profile at the shorter wavelengths. This effect can be fully described by assuming that the intracellular pupil progressively absorbs light from the higher order modes.
In light-adapted cells R1-6 the visual acuity is mainly diffraction limited at all wavelengths.
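The two limits described above, diffraction at the facet lens versus the waveguide/geometric properties of the rhabdomere, are often combined in the standard approximation that the acceptance-angle half-width is the quadrature sum of a diffraction term λ/D and a geometric term d/f. The sketch below uses that textbook approximation with illustrative blowfly-scale parameter values, not measurements from this paper.

```python
import math

def acceptance_halfwidth(wavelength_um, lens_diam_um, rhab_diam_um, focal_um):
    # Acceptance-angle half-width (radians) as the quadrature sum of:
    #   - the diffraction term lambda / D (facet lens of diameter D),
    #   - the geometric term d / f (rhabdomere diameter d, focal length f).
    diffraction = wavelength_um / lens_diam_um
    geometric = rhab_diam_um / focal_um
    return math.hypot(diffraction, geometric)
```

With lens-dominated numbers (e.g. D ≈ 25 µm, d ≈ 2 µm, f ≈ 50 µm, all assumed here), the diffraction term grows with wavelength, matching the finding that acuity at longer wavelengths is mainly diffraction limited.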
Quality criteria benchmark for hyperspectral imagery
Hyperspectral data have attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression, specifically, will have a role to play, provided its impact on remote sensing applications remains insignificant. To assess the data quality, suitable distortion measures relevant to end-user applications are required. Quality criteria are also of major interest for the conception and development of new sensors, to define their requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered. Some are traditionally used in image and video coding and are adapted here to hyperspectral images; others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and the level of the degradation affecting hyperspectral data.
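One distortion measure specific to hyperspectral data, and widely used alongside the (M)SE-style criteria adapted from image and video coding, is the Spectral Angle Mapper (SAM): the angle between the reference and degraded spectrum at each pixel. Whether SAM is among this paper's five selected criteria is not stated here; the sketch below just shows the standard measure.

```python
import numpy as np

def spectral_angle(ref, test):
    # Spectral Angle Mapper: per-pixel angle (radians) between reference
    # and degraded spectra, taken along the last (spectral) axis.
    # Invariant to per-pixel scaling, so it isolates spectral-shape
    # distortion from brightness changes.
    dot = np.sum(ref * test, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(test, axis=-1)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))
```

Because SAM ignores uniform gain changes, it is complementary to energy-based criteria such as SNR when characterizing the effect of lossy compression on classification.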