10,961 research outputs found

    Conjunctive Chain Modification to the Boundary Contour System Neural Vision Model

    The Boundary Contour System neural vision model reproduces perceptual illusory boundary formation by a conjunctive boundary completion process within a large cellular receptive field. The conjunctive chain modification allows the same kind of conjunction to occur across multiple receptive fields, yielding sharper, more flexible boundary completion.
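
    Read as an algorithm, the conjunctive completion step can be caricatured in one dimension: a location becomes a boundary only when it receives boundary support on both sides within a receptive-field lobe (the conjunction), and repeating that conjunction lets completions produced by one pass support further completions in neighbouring positions (the chain). The lobe size, gap width, and iteration scheme below are illustrative assumptions rather than the published model's equations.

```python
import numpy as np


def conjunctive_completion(boundary, lobe=4):
    """Mark position i as a boundary only if existing boundary signals fall
    within `lobe` positions on BOTH sides of i (the conjunction of two lobes)."""
    out = boundary.copy()
    for i in range(len(boundary)):
        left = boundary[max(0, i - lobe):i].any()
        right = boundary[i + 1:i + 1 + lobe].any()
        if left and right:
            out[i] = 1.0
    return out


def chained_completion(boundary, lobe=4, iterations=3):
    """Repeat the conjunction so that completions produced by one pass can
    support further completions, spreading the boundary across the whole gap."""
    out = boundary.copy()
    for _ in range(iterations):
        out = conjunctive_completion(out, lobe=lobe)
    return out


if __name__ == "__main__":
    # Two boundary fragments separated by a gap of six positions.
    signal = np.zeros(14)
    signal[[2, 3, 10, 11]] = 1.0
    print(conjunctive_completion(signal, lobe=4))  # a single pass fills only the middle
    print(chained_completion(signal, lobe=4))      # chained conjunctions fill the gap
```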

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.
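
    As a rough illustration of the temporary-binding idea, the toy sketch below binds word representations to reusable "structure assemblies" and stores the sentence's relational structure in those bindings rather than in the words themselves. The assembly names, pool sizes, and query interface are hypothetical simplifications, not the neural circuits the paper describes.

```python
from dataclasses import dataclass, field


@dataclass
class Blackboard:
    # Pools of reusable structure assemblies (names and pool sizes are illustrative).
    noun_assemblies: list = field(default_factory=lambda: [f"N{i}" for i in range(1, 5)])
    verb_assemblies: list = field(default_factory=lambda: [f"V{i}" for i in range(1, 5)])
    bindings: dict = field(default_factory=dict)   # structure assembly -> bound word
    relations: list = field(default_factory=list)  # (verb assembly, role, noun assembly)

    def bind_noun(self, word):
        assembly = self.noun_assemblies.pop(0)     # claim a free noun assembly
        self.bindings[assembly] = word
        return assembly

    def bind_verb(self, word):
        assembly = self.verb_assemblies.pop(0)     # claim a free verb assembly
        self.bindings[assembly] = word
        return assembly

    def attach(self, verb_assembly, role, noun_assembly):
        # A temporary, gating-like link that records who plays which role.
        self.relations.append((verb_assembly, role, noun_assembly))

    def query(self, verb_word, role):
        # Answer "which word fills `role` of `verb_word`?" by following the bindings.
        for v, r, n in self.relations:
            if self.bindings[v] == verb_word and r == role:
                return self.bindings[n]
        return None


if __name__ == "__main__":
    bb = Blackboard()
    # Encode "the cat chases the mouse" so that structure lives in the bindings.
    cat, mouse = bb.bind_noun("cat"), bb.bind_noun("mouse")
    chases = bb.bind_verb("chases")
    bb.attach(chases, "agent", cat)
    bb.attach(chases, "theme", mouse)
    print(bb.query("chases", "agent"))  # -> cat
    print(bb.query("chases", "theme"))  # -> mouse
```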

    A Neural Theory of Attentive Visual Search: Interactions of Boundary, Surface, Spatial, and Object Representations

    Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model's visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger data base, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992). The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation. Air Force Office of Scientific Research (F49620-92-J-0499, 90-0175, F49620-92-J-0334); Advanced Research Projects Agency (AFOSR 90-0083, ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100); Northeast Consortium for Engineering Education (NCEE/A303/21-93 Task 0021); British Petroleum (89-A-1204); National Science Foundation (NSF IRI-90-00530).
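
    The search loop summarized above (form a candidate grouping, compare it with the target category, reject mismatching groupings wholesale, and regroup the survivors) can be caricatured in a few lines. The item representation, grouping rule, and match test below are stand-ins, not the model's neural dynamics or its fitted parameter set.

```python
from collections import namedtuple

# An item on the search display, described by a small feature dictionary
# (a stand-in for the model's boundary/surface representations).
Item = namedtuple("Item", ["position", "features"])


def group_by_feature(items, feature):
    """Organize items that share a surface/boundary feature into candidate groupings."""
    groups = {}
    for item in items:
        groups.setdefault(item.features[feature], []).append(item)
    return list(groups.values())


def attentive_search(items, target, features=("color", "shape")):
    """Compare each candidate grouping with the target category; mismatching
    groupings are rejected as a whole, and the surviving items are regrouped
    on the next feature until only target-matching items remain."""
    comparisons = 0
    candidates = list(items)
    for feature in features:
        surviving = []
        for group in group_by_feature(candidates, feature):
            comparisons += 1                        # one comparison per grouping
            if group[0].features[feature] == target[feature]:
                surviving.extend(group)             # grouping matches: search deeper
        candidates = surviving
    return candidates, comparisons


if __name__ == "__main__":
    # Conjunction search: find the red X among red Os and green Xs.
    colours_shapes = [("red", "O"), ("green", "X"), ("red", "X"), ("green", "O")]
    display = [Item((x, 0), {"color": c, "shape": s})
               for x, (c, s) in enumerate(colours_shapes)]
    found, n = attentive_search(display, {"color": "red", "shape": "X"})
    print(found, "after", n, "grouping comparisons")
```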

    #strokesurvivor on Instagram: Conjunctive experiences of adapting to disability

    This study investigates practices of sharing the experience of stroke on Instagram through use of the hashtag #strokesurvivor. The hashtag brings together people from different cultural backgrounds and professions and those who experience different kinds of healthcare and varying degrees of physical or cognitive impairment. Through a digital ethnography of #strokesurvivor, the conjunctive experiences and communicative practices of the community are reconstructed. Instagram enables specific forms of sociality and sharing, like long-term visual storytelling and influencer dynamics. Adapting to a transformed body and identity is perceived and practiced as a conjunctive experience and a struggle. A strong orientation towards a “normal life” is a recurring theme. Mourning and perseverance are put forward as two modes of coping with and adapting to a transforming body and self.

    Visualising Discourse Coherence in Non-Linear Documents

    To produce coherent linear documents, Natural Language Generation systems have traditionally exploited the structuring role of textual discourse markers such as relational and referential phrases. These coherence markers of the traditional notion of text, however, do not work in non-linear documents: a new set of graphical devices is needed, together with formation rules to govern their usage, supported by sound theoretical frameworks. Whereas in linear documents graphical devices such as layout and formatting complement textual devices in expressing discourse coherence, in non-linear documents they play a more important role. In this paper, we present our theoretical and empirical work in progress, which explores new possibilities for expressing coherence in the generation of hypertext documents.

    Learning viewpoint invariant perceptual representations from cluttered images

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
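
    The standard method referred to here, temporal association across transformation sequences, is commonly implemented as a trace-style learning rule. The sketch below shows only that baseline rule; the paper's proposed modification for cluttered, multi-object input is not reproduced, and the layer sizes, learning rate, and trace constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_units = 64, 8
W = rng.normal(scale=0.1, size=(n_units, n_inputs))  # feedforward weights
trace = np.zeros(n_units)                            # temporally smoothed activity

eta = 0.01   # learning rate (assumed)
lam = 0.8    # trace persistence: how much past activity carries forward


def step(x):
    """Present one frame `x` of a transformation sequence and update the weights."""
    global W, trace
    y = np.maximum(W @ x, 0.0)                  # rectified unit activations
    winner = np.argmax(y)                       # crude competitive stage
    y_comp = np.zeros(n_units)
    y_comp[winner] = y[winner]
    trace = lam * trace + (1.0 - lam) * y_comp  # temporal trace of activity
    # Hebbian update gated by the trace: temporally adjacent views strengthen
    # the same unit's weights, which is what yields viewpoint invariance.
    W += eta * np.outer(trace, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)


if __name__ == "__main__":
    # A toy "transformation sequence": the same pattern drifting across the input.
    base = rng.random(n_inputs)
    for t in range(20):
        step(np.roll(base, t))
```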

    "What was Molyneux's Question A Question About?"

    Molyneux asked whether a newly sighted person could distinguish a sphere from a cube by sight alone, given that she was antecedently able to do so by touch. This, we contend, is a question about general ideas. To answer it, we must ask (a) whether spatial locations identified by touch can be identified also by sight, and (b) whether the integration of spatial locations into an idea of shape persists through changes of modality. Posed this way, Molyneux’s Question goes substantially beyond question (a), about spatial locations, alone; for a positive answer to (a) leaves open whether a perceiver might cross-identify locations, but not be able to identify the shapes that collections of locations comprise. We further emphasize that MQ targets general ideas so as to distinguish it from corresponding questions about experiences of shape and about the property of tangible (vs. visual) shape. After proposing a generalized formulation of MQ, we extend earlier work (“Many Molyneux Questions,” Australasian Journal of Philosophy 2020) by showing that MQ does not admit a single answer across the board. Some integrative data-processes transfer across modalities; others do not. Seeing where and how such transfer succeeds and fails in individual cases has much to offer to our understanding of perception and its modalities.
