
    Chromatic Illumination Discrimination Ability Reveals that Human Colour Constancy Is Optimised for Blue Daylight Illuminations

    The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased towards the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.

    Strangeness production in a statistical effective model of hadronisation

    We suppose that overall strangeness production in both high-energy elementary and heavy-ion collisions can be described within the framework of an equilibrium statistical model in which the effective degrees of freedom are constituent quarks, as used in effective Lagrangian models. In this picture, the excess of relative strangeness production in heavy-ion collisions with respect to elementary particle collisions arises from the imbalance between initial non-strange matter and antimatter and from the exact colour and flavour quantum number conservation over different finite volumes. The comparison with the data and the possible sources of model dependence are discussed.
    Comment: 7 pages, 2 .eps figures. Talk given at QCD@work, Martina Franca (Italy), June 16-20 2001, to be published in the Proceedings

    A Unified Quantitative Model of Vision and Audition

    We have put forward a unified quantitative framework of vision and audition, based on existing data and theories. According to this model, the retina is a feedforward network that adapts to its inputs during a specific period. Once fully grown, its cells become specialized detectors based on the statistics of stimulus history. This model has provided explanations for the perception mechanisms of colour, shape, depth and motion. Moreover, on this ground we have put forward a bold conjecture: that a single ear can detect sound direction. This is complementary to existing theories and provides better explanations for sound localization.
    Comment: 7 pages, 3 figures

    Extending minkowski norm illuminant estimation

    The ability to obtain colour images invariant to changes of illumination is called colour constancy. An algorithm for colour constancy takes sensor responses - digital images - as input, estimates the ambient light and returns a corrected image in which the illuminant's influence over the colours has been removed. In this thesis we investigate the illuminant-estimation step of colour constancy and aim to extend the state of the art in this field. We first revisit the Minkowski Family Norm framework for illuminant estimation because, of all the simple statistical approaches, it is the most general formulation and, crucially, delivers the best results. This thesis makes four technical contributions. First, we reformulate the Minkowski approach to provide better estimation when a constraint on illumination is employed. Second, we show how the method can be implemented to run much faster (by orders of magnitude) than previous algorithms. Third, we show how a simple edge-based variant delivers improved estimation compared with the state of the art across many datasets. In contradistinction to the prior state of the art, our definition of edges is fixed (a simple combination of first and second derivatives), i.e. we do not tune our algorithm to particular image datasets. This performance is further improved by incorporating a gamut constraint on surface colour - our fourth contribution. The thesis finishes by considering our approach in the context of a recent OSA competition run to benchmark computational algorithms operating on physiologically relevant cone-based input data. Here we find that Constrained Minkowski Norms operating on spectrally sharpened cone sensors (linear combinations of the cones that behave more like camera sensors) support competition-leading illuminant estimation.
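    The Minkowski family norm mentioned above generalises two classic estimators: p = 1 reduces to Grey-World and p → ∞ approaches max-RGB. A minimal sketch in Python, assuming linear (H, W, 3) sensor responses; the function name and the choice p = 6 are illustrative, not the thesis's tuned values:

```python
import numpy as np

def minkowski_illuminant(image, p=6):
    """Estimate the illuminant colour of an image with a Minkowski p-norm.

    Computes, per channel c, (mean(I_c ** p)) ** (1/p) and returns the
    result as a unit-length RGB vector. p=1 is Grey-World; large p
    approaches max-RGB.
    """
    flat = image.reshape(-1, 3).astype(np.float64)
    est = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
    return est / np.linalg.norm(est)  # unit-length illuminant estimate
```

    Higher p weights bright pixels more heavily; the constrained and edge-based variants described in the thesis build on this same per-channel p-norm.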

    Cubical Gamut Mapping Colour Constancy

    A new color constancy algorithm called Cubical Gamut Mapping (CGM) is introduced. CGM is computationally very simple, yet performs better than many currently known algorithms in terms of median illumination-estimation error. Moreover, it can be tuned to minimize the maximum error. Being able to reduce the maximum error, possibly at the expense of increased median error, is an advantage over many published color constancy algorithms, which may perform quite well in terms of median illumination-estimation error, but have very poor worst-case performance. CGM is based on principles similar to existing gamut mapping algorithms; however, it represents the gamut of image chromaticities as a simple cube characterized by the image's maximum and minimum rgb chromaticities rather than their more complicated convex hull. It also uses the maximal RGBs as an additional source of information about the illuminant. The estimate of the scene illuminant is obtained by linearly mapping the chromaticity of the maximum RGB, minimum rgb and maximum rgb values. The algorithm is trained off-line on a set of synthetically generated images. Linear programming techniques are used to optimize the mapping both in terms of the sum of errors and in terms of the maximum error. CGM uses a very simple image pre-processing stage that does not require image segmentation. For each pixel in the image, the pixels in the N-by-N surrounding block are averaged. Pixels for which at least one of the neighbouring pixels in the N-by-N surrounding block differs from the average by more than a given threshold are removed. This pre-processing not only improves CGM, but also improves the performance of other published algorithms such as max RGB and Grey World.
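    The N-by-N block-averaging pre-processing described above can be sketched as follows. This is an illustrative reading of that step; the window size and threshold are placeholder values, not the published settings:

```python
import numpy as np

def cgm_preprocess(image, n=3, thresh=0.1):
    """Average each pixel's n-by-n neighbourhood and flag pixels whose
    neighbourhood contains a value deviating from that average by more
    than `thresh` (in any channel) for removal.

    Returns (means, keep): the block-averaged image and a boolean mask
    of pixels that survive the threshold test.
    """
    pad = n // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    # windows has shape (H, W, 3, n, n): one n-by-n patch per pixel/channel
    windows = np.lib.stride_tricks.sliding_window_view(
        padded, (n, n), axis=(0, 1))
    means = windows.mean(axis=(-2, -1))
    max_dev = np.abs(windows - means[..., None, None]).max(axis=(-2, -1))
    keep = (max_dev <= thresh).all(axis=-1)
    return means, keep
```

    The surviving averaged pixels would then feed the cube-based chromaticity statistics (max RGB, min rgb, max rgb) used for the linear illuminant mapping.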

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction and intensity. The database allows systematic investigation of the robustness of texture descriptors across a large range of variations of imaging conditions.
    Comment: Submitted to the Journal of the Optical Society of America
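    A common colour-normalization step of the kind compared in such evaluations is Grey-World normalization, which discounts a global illuminant cast before texture features are computed. A sketch, with the caveat that the exact normalization variant used in the paper may differ:

```python
import numpy as np

def grey_world_normalize(image, eps=1e-8):
    """Rescale each colour channel so its mean equals the overall mean,
    making the average image colour neutral grey. `image` is an
    (H, W, 3) float array."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    return image / (channel_means + eps) * channel_means.mean()
```

    After this step, texture descriptors see an image whose global colour balance no longer depends on the light colour, isolating the descriptors' intrinsic robustness to direction and intensity changes.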

    True colour retrieval from multiple illuminant scene’s image

    This paper presents an algorithm to retrieve the true colour of an image captured under multiple illuminants. The proposed method uses histogram analysis and the K-means++ clustering technique to split the input image into a number of segments. It then determines the normalised average absolute difference (NAAD) of each resulting segment's colour components. If the NAAD of a segment's component is greater than an empirically determined threshold, the method assumes that the segment does not represent a uniform colour area, and the segment's colour component is selected to be used for image colour constancy adjustment. The initial colour-balancing factor for each chosen segment's component is calculated using the Minkowski norm, based on the principle that the average values of image colour components are achromatic. Finally, colour constancy adjustment factors for each image pixel are calculated by fusing the initial colour constancy factors of the chosen segments, weighted by the normalised Euclidean distances of the pixel from the centroids of the selected segments. Experimental results using benchmark single- and multiple-illuminant image datasets show that the proposed method's images subjectively exhibit the highest colour constancy, both in the presence of multiple illuminants and when the image contains uniform colour areas.
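    Two of the steps above, the NAAD uniformity test and the distance-weighted fusion of per-segment factors, can be sketched as follows. The inverse-distance weighting and all parameter names are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def naad(channel):
    """Normalised average absolute difference of one colour channel:
    mean(|x - mean(x)|) / mean(x). Low values indicate a uniform area."""
    m = channel.mean()
    return np.abs(channel - m).mean() / (m + 1e-8)

def fuse_gains(pixel_xy, centroids, gains):
    """Blend per-segment correction gains for one pixel, weighting each
    segment by the inverse of the pixel's Euclidean distance to its
    centroid (weights normalised to sum to 1)."""
    d = np.linalg.norm(centroids - pixel_xy, axis=1)
    w = 1.0 / (d + 1e-8)
    w /= w.sum()
    return (w[:, None] * gains).sum(axis=0)
```

    A pixel sitting at a segment's centroid receives essentially that segment's gain, while pixels between segments receive a smooth blend, which avoids visible seams at segment boundaries.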