
    Colour layering and colour constancy

    Loosely put, colour constancy occurs, for example, when you experience a partly shadowed wall as uniformly coloured, or experience your favourite shirt to be the same colour both with and without sunglasses on. Controversy ensues when one seeks to interpret ‘experience’ in these contexts, for evidence of a constant colour may be indicative of a constant colour in the objective world, a judgement that a constant colour would be present were things thus and so, et cetera. My primary aim is to articulate a viable conception of Present Constancy, of what occurs when a constant colour is present in experience, despite the additional presence of some experienced colour variation (e.g., correlating to a change in illumination). My proposed conception involves experienced colour layering – experiencing one opaque colour through another transparent one – and in particular requires one of those experienced layers to remain constant while the other changes. The aim is not to propose this layering conception of colour constancy as the correct interpretation of all constancy cases, but rather to develop the conception enough to demonstrate how it could and plausibly should be applied to various cases, and the virtues it has over rivals. Its virtues include a seamless application to constancy cases involving variations in filters (e.g., sunglasses) and illuminants; its ability to accommodate experiences of partial colours and error-free interpretations of difficult cases; and its broad theoretical neutrality, allowing it to be incorporated into numerous perceptual epistemologies and ontologies. If layered constancy is prevalent, as I suspect it is, then our experiential access to colours is critically nuanced: we have been plunged into a world of colour without being told that we will rarely, if ever, look to a location and experience just one of them.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that advances the field on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
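    The core mechanism the abstract names – computing a prediction error and using it to refine an internal model – can be illustrated with a minimal sketch. The update rule, learning rate, and input stream below are illustrative assumptions for a single predictive unit, not details of Clark's or any specific predictive coding model.

    ```python
    # Minimal sketch of one predictive-coding unit: an internal estimate
    # (the "prediction") is compared against incoming observations, and
    # the resulting prediction error drives the update of the estimate.
    # Learning rate and inputs are illustrative, not from the literature.

    def update_estimate(estimate: float, observation: float, lr: float = 0.1) -> float:
        """One update step: move the estimate toward the observation
        in proportion to the prediction error."""
        prediction_error = observation - estimate
        return estimate + lr * prediction_error

    estimate = 0.0
    for obs in [1.0] * 50:  # repeated exposure to a constant input
        estimate = update_estimate(estimate, obs)

    # With repeated exposure, the prediction error shrinks and the
    # internal estimate converges toward the observed value.
    print(estimate)
    ```

    The same error-driven update, stacked across levels so that each layer predicts the activity of the one below, is the hierarchical arrangement the abstract refers to.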

    What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness

    This review provides an update on Neurorepresentationalism, a theoretical framework that defines conscious experience as a multimodal, situational survey and explains its neural basis in terms of brain systems constructing best-guess representations of sensations originating in our environment and body (Pennartz, 2015).

    Extending Machine Language Models toward Human-Level Language Understanding

    Language is central to human intelligence. We review recent breakthroughs in machine language processing and consider what remains to be achieved. Recent approaches rely on domain-general principles of learning and representation captured in artificial neural networks. Most current models, however, focus too closely on language itself. In humans, language is part of a larger system for acquiring, representing, and communicating about objects and situations in the physical and social world, and future machine language models should emulate such a system. We describe existing machine models linking language to concrete situations, and point toward extensions to address more abstract cases. Human language processing exploits complementary learning systems, including a deep neural network-like learning system that learns gradually as machine systems do, as well as a fast-learning system that supports learning new information quickly. Adding such a system to machine language models will be an important further step toward truly human-like language understanding.