Colour layering and colour constancy
Loosely put, colour constancy occurs, for example, when you experience a partly shadowed wall to be uniformly coloured, or experience your favourite shirt to be the same colour both with and without sunglasses on. Controversy ensues when one seeks to interpret ‘experience’ in these contexts, for evidence of a constant colour may be indicative of a constant colour in the objective world, a judgement that a constant colour would be present were things thus and so, et cetera. My primary aim is to articulate a viable conception of Present Constancy, of what occurs when a constant colour is present in experience, despite the additional presence of some experienced colour variation (e.g., correlating to a change in illumination). My proposed conception involves experienced colour layering – experiencing one opaque colour through another transparent one – and in particular requires one of those experienced layers to remain constant while the other changes. The aim is not to propose this layering conception of colour constancy as the correct interpretation of all constancy cases, but rather to develop the conception enough to demonstrate how it could and plausibly should be applied to various cases, and the virtues it has over rivals. Its virtues include a seamless application to constancy cases involving variations in filters (e.g., sunglasses) and illuminants; its ability to accommodate experiences of partial colours and error-free interpretations of difficult cases; and its broad theoretical neutrality, allowing it to be incorporated into numerous perceptual epistemologies and ontologies. If layered constancy is prevalent, as I suspect it is, then our experiential access to colours is critically nuanced: we have been plunged into a world of colour without being told that we will rarely, if ever, look to a location and experience just one of them.
Neural similarity between overlapping events at learning differentially affects reinstatement across the cortex
Episodic memory often involves high overlap between the actors, locations, and objects of everyday events. Under some circumstances, it may be beneficial to distinguish, or differentiate, neural representations of similar events to avoid interference at recall. Alternatively, forming overlapping representations of similar events, or integration, may aid recall by linking shared information between memories. It is currently unclear how the brain supports these seemingly conflicting functions of differentiation and integration. We used multivoxel pattern similarity analysis (MVPA) of fMRI data and neural-network analysis of visual similarity to examine how highly overlapping naturalistic events are encoded in patterns of cortical activity, and how the degree of differentiation versus integration at encoding affects later retrieval. Participants performed an episodic memory task in which they learned and recalled naturalistic video stimuli with high feature overlap. Visually similar videos were encoded in overlapping patterns of neural activity in temporal, parietal, and occipital regions, suggesting integration. We further found that encoding processes differentially predicted later reinstatement across the cortex. In visual processing regions in occipital cortex, greater differentiation at encoding predicted later reinstatement. Higher-level sensory processing regions in the temporal and parietal lobes showed the opposite pattern, whereby highly integrated stimuli showed greater reinstatement. Moreover, integration in high-level sensory processing regions during encoding predicted greater accuracy and vividness at recall. These findings provide novel evidence that encoding-related differentiation and integration processes across the cortex have divergent effects on later recall of highly similar naturalistic events.
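The core measure the abstract describes, pattern similarity between the multivoxel responses evoked by two events, reduces to a correlation between voxel vectors. A minimal illustrative sketch (the function name, voxel count, and simulated data are assumptions, not details from the paper):

```python
import numpy as np

def pattern_similarity(pattern_a, pattern_b):
    """Pearson correlation between two multivoxel activity patterns
    (1-D arrays of voxel responses). Higher values indicate more
    overlapping neural representations of the two events."""
    return np.corrcoef(pattern_a, pattern_b)[0, 1]

# Hypothetical voxel patterns for two highly similar video events:
# event_2 is event_1 plus noise, mimicking overlapping (integrated) codes.
rng = np.random.default_rng(0)
event_1 = rng.normal(size=200)                        # 200-voxel pattern
event_2 = event_1 + rng.normal(scale=0.5, size=200)   # overlapping event

r = pattern_similarity(event_1, event_2)
```

On this reading, "differentiation" at encoding corresponds to lower similarity between patterns for overlapping events, and "integration" to higher similarity.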
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
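The mechanism the abstract invokes, prediction errors driving updates to an internal model, can be illustrated with a deliberately minimal one-unit loop. This is a toy sketch of the general idea, not the hierarchical models discussed in the commentary; all names and values are assumptions:

```python
def update_estimate(estimate, observation, learning_rate=0.1):
    """One predictive-coding-style update: compute the prediction error
    (observation minus top-down prediction) and nudge the internal
    estimate to reduce that error."""
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0
for _ in range(50):
    # Constant sensory input of 1.0; the estimate converges toward it
    # as the prediction error is progressively explained away.
    estimate = update_estimate(estimate, 1.0)
```

A full hierarchical model would stack such units, with each level's estimate serving as the feedback prediction for the level below.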
What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness
This review provides an update on Neurorepresentationalism, a theoretical framework that defines conscious experience as multimodal, situational survey and explains its neural basis via brain systems constructing best-guess representations of sensations originating in our environment and body (Pennartz, 2015).
Extending Machine Language Models toward Human-Level Language Understanding
Language is central to human intelligence. We review recent breakthroughs in machine language processing and consider what remains to be achieved. Recent approaches rely on domain-general principles of learning and representation captured in artificial neural networks. Most current models, however, focus too closely on language itself. In humans, language is part of a larger system for acquiring, representing, and communicating about objects and situations in the physical and social world, and future machine language models should emulate such a system. We describe existing machine models linking language to concrete situations, and point toward extensions to address more abstract cases. Human language processing exploits complementary learning systems, including a deep neural network-like learning system that learns gradually as machine systems do, as well as a fast-learning system that supports learning new information quickly. Adding such a system to machine language models will be an important further step toward truly human-like language understanding.
Traditionalism and parallel distributed processing as qualitatively distinct models of the mind.
My main concern in this work is answering the question: does parallel distributed processing (PDP) as a model of the mind offer a genuine alternative to traditionalism? There has been vigorous debate within the last eight years on the subject of the relative merits of the one model over the other; however, a detailed examination of the nature of their respective differences has not been attempted. The mental realm is that realm in which causal interaction is governed by laws quantifying over representational states. Traditionalism is the thesis that the law-governed transitions between mental states are transitions between computational states. PDP is the thesis that the transitions between mental states are transitions between distributed representational states in a PDP-type system. The representational content of a distributed state is determined by the causal history of the system as a whole, and results from the changing of system parameters via learning so as to insert this state in the causal chain between the perception of some external state-of-affairs and behavior. Traditionalism and PDP are best considered not as providing a detailed picture of the causal processes involved in mental activity, but rather as providing a general framework that sets broad constraints on how such law-governed transitions proceed. I describe two aspects of qualitative distinctness that can be used even when comparing such non-specific models. The first involves examining the ontological commitment of each: assuming a realist interpretation, what must exist if traditionalism (or PDP) is a true model of the mind? If the two models make the same commitments, one may ask the further question: do the constraints imposed on the form that mental causal transitions take allow the possibility of an isomorphism between causal sequences permitted by the one model with those permitted by the other? 
An examination of the manner in which representational content is determined within PDP systems shows that there is no possible isomorphism. Therefore, the two models are qualitatively distinct.