
    No difference in variability of unique hue selections and binary hue selections

    If unique hues have special status in phenomenological experience as perceptually pure, it seems reasonable to assume that they are represented more precisely by the visual system than are other colors. Following the method of Malkoc et al. (J. Opt. Soc. Am. A 22, 2154 (2005)), we gathered unique and binary hue selections from 50 subjects. For these subjects we repeated the measurements in two separate sessions, allowing us to measure test-retest reliabilities (0.52 ≤ ρ ≤ 0.78; p ≪ 0.01). We quantified the within-individual variability for selections of each hue. Adjusting for the differences in variability intrinsic to different regions of chromaticity space, we compared the within-individual variability for unique hues to that for binary hues. Surprisingly, we found that selections of unique hues did not show consistently lower variability than selections of binary hues. We repeated hue measurements in a single session for an independent sample of 58 subjects, using a different relative scaling of the cardinal axes of MacLeod-Boynton chromaticity space. Again, we found no consistent difference in adjusted within-individual variability for selections of unique and binary hues. Our finding does not depend on the particular scaling chosen for the Y axis of MacLeod-Boynton chromaticity space.
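    A minimal sketch of the test-retest reliability computation described above (Spearman's ρ between two sessions' hue settings), using hypothetical hue-angle data rather than the paper's measurements:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation between two sessions' settings (no ties assumed)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical data: hue-angle selections (degrees) for 10 subjects,
# repeated in a second session with some settling noise.
rng = np.random.default_rng(0)
session1 = rng.uniform(0, 360, 10)
session2 = session1 + rng.normal(0, 10, 10)

rho = spearman_rho(session1, session2)  # test-retest reliability
```

    In the study itself, such a correlation would be computed per hue across the 50 subjects' session pairs, yielding the reported range of reliabilities.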

    Colour layering and colour constancy

    Loosely put, colour constancy occurs when, for example, you experience a partly shadowed wall to be uniformly coloured, or experience your favourite shirt to be the same colour both with and without sunglasses on. Controversy ensues when one seeks to interpret ‘experience’ in these contexts, for evidence of a constant colour may be indicative of a constant colour in the objective world, of a judgement that a constant colour would be present were things thus and so, et cetera. My primary aim is to articulate a viable conception of Present Constancy, of what occurs when a constant colour is present in experience, despite the additional presence of some experienced colour variation (e.g., correlating to a change in illumination). My proposed conception involves experienced colour layering – experiencing one opaque colour through another transparent one – and in particular requires one of those experienced layers to remain constant while the other changes. The aim is not to propose this layering conception of colour constancy as the correct interpretation of all constancy cases, but rather to develop the conception enough to demonstrate how it could and plausibly should be applied to various cases, and the virtues it has over rivals. Its virtues include a seamless application to constancy cases involving variations in filters (e.g., sunglasses) and illuminants; its ability to accommodate experiences of partial colours and error-free interpretations of difficult cases; and its broad theoretical neutrality, allowing it to be incorporated into numerous perceptual epistemologies and ontologies. If layered constancy is prevalent, as I suspect it is, then our experiential access to colours is critically nuanced: we have been plunged into a world of colour without being told that we will rarely, if ever, look to a location and experience just one of them.

    Do-It-Yourself Single Camera 3D Pointer Input Device

    We present a new algorithm for single camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, having a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereo cameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization.
    Comment: 8 pages, 6 figures, 2018 15th Conference on Computer and Robot Vision
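    The configuration step above relies on a standard 3×4 pinhole projection matrix. A minimal sketch of what that model does (illustrative camera values, not the paper's calibration or tracking algorithm):

```python
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X (4,) to pixel coordinates via pinhole matrix P (3x4)."""
    x = P @ X
    return x[:2] / x[2]  # perspective divide

# Illustrative camera: focal length 800 px, principal point (320, 240),
# identity rotation, centered at the origin.
f = 800.0
P = np.array([[f, 0.0, 320.0, 0.0],
              [0.0, f, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

tip = np.array([0.1, -0.05, 2.0, 1.0])  # hypothetical pointer tip, 2 m ahead
uv = project(P, tip)                     # pixel location of the tip
```

    The paper's contribution is the inverse problem: recovering the pointer's 3D pose from such projections of its colored bands in a single view.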

    On the Computational Modeling of Human Vision


    Color-coordinate system from a 13th-century account of rainbows.

    We present a new analysis of Robert Grosseteste’s account of color in his treatise De iride (On the Rainbow), dating from the early 13th century. The work explores color within the 3D framework set out in Grosseteste’s De colore [see J. Opt. Soc. Am. A 29, A346 (2012)], but now links the axes of variation to observable properties of rainbows. We combine a modern understanding of the physics of rainbows and of human color perception to resolve the linguistic ambiguities of the medieval text and to interpret Grosseteste’s key terms.

    Understanding deep features with computer-generated imagery

    We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses corresponding to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three CNNs trained on large image datasets: AlexNet, Places, and Oxford VGG. We observe important differences across the networks and CNN layers for different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.
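    The principal-component visualization step can be sketched roughly as follows. Random responses stand in for actual CNN features, and `pca` is a hypothetical helper, not the paper's pipeline:

```python
import numpy as np

def pca(features, k=2):
    """Top-k principal components of a (n_images x n_features) response matrix,
    plus the projected per-image coordinates."""
    X = features - features.mean(axis=0)      # center each feature dimension
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k], X @ Vt[:k].T

# Hypothetical responses: 50 rendered views of a model x 128-dim feature layer,
# gathered while varying a single scene factor (e.g., viewpoint).
rng = np.random.default_rng(1)
responses = rng.normal(size=(50, 128))

components, coords = pca(responses, k=2)      # directions and 2D embedding
```

    Plotting `coords` colored by the controlled scene factor would reveal how strongly that factor structures the layer's responses.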