
    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of the survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).
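    The survey's motivating application is a Kanerva-style sparse distributed memory. As a rough illustration of the kind of associative memory the preprocessed images would feed (a minimal sketch, not the survey's own implementation), the code below stores and recalls binary patterns; the dimensions, number of hard locations, and access radius are illustrative assumptions.

```python
import numpy as np

class SDM:
    """Minimal Kanerva-style sparse distributed memory over binary vectors."""

    def __init__(self, n_locations=2000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random "hard" addresses and their integer counters.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, address):
        # Locations whose Hamming distance to the cue lies within the access radius.
        dist = np.count_nonzero(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        # Add the bipolar form of the data (+1 for 1, -1 for 0) to all active locations.
        self.counters[self._active(address)] += 2 * data - 1

    def read(self, address):
        # Sum the counters of active locations and threshold at zero.
        total = self.counters[self._active(address)].sum(axis=0)
        return (total > 0).astype(int)

# Usage: autoassociative storage, then recall from a noisy cue.
rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 256)
sdm = SDM()
sdm.write(pattern, pattern)
noisy = pattern.copy()
noisy[:20] ^= 1  # flip 20 bits of the cue
recalled = sdm.read(noisy)
```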

    The Structure of Sensorimotor Explanation

    The sensorimotor theory of vision and visual consciousness is often described as a radical alternative to the computational and connectionist orthodoxy in the study of visual perception. However, it is far from clear whether the theory represents a significant departure from orthodox approaches or an enrichment of them. In this study, I tackle this issue by focusing on the explanatory structure of the sensorimotor theory. I argue that the standard formulation of the theory subscribes to the same theses as the dynamical hypothesis and that it affords covering-law explanations. This, however, exposes the theory to the mere description worry and generates a puzzle about the role of representations. I then argue that the sensorimotor theory is compatible with a mechanistic framework, and I show how this framework can overcome the mere description worry and resolve the puzzle about the explanatory role of representations. In doing so, I show that the theory should be understood as an enrichment of the orthodoxy rather than an alternative.

    The Latent Relation Mapping Engine: Algorithm and Experiments

    Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance. Comment: related work available at http://purl.org/peter.turney
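    As a rough illustration of the mapping task the abstract describes (a simplified sketch, not the authors' actual algorithm), the code below scores one-to-one mappings between a source word list and a target word list by the similarity of relation vectors for corresponding word pairs. The relation_vector function is a hypothetical stand-in for LRA-style vectors derived from a large corpus.

```python
from itertools import permutations
import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def best_mapping(source, target, relation_vector):
    """Brute-force search over one-to-one mappings (feasible for short lists)."""
    best, best_score = None, float("-inf")
    for perm in permutations(target):
        score = 0.0
        for i in range(len(source)):
            for j in range(len(source)):
                if i == j:
                    continue
                # Compare the relation holding between a source pair with the
                # relation holding between the corresponding target pair.
                score += cosine(relation_vector(source[i], source[j]),
                                relation_vector(perm[i], perm[j]))
        if score > best_score:
            best, best_score = dict(zip(source, perm)), score
    return best, best_score

# Hypothetical usage: relation_vector(a, b) would return a corpus-derived
# vector characterizing how words a and b are related.
# mapping, score = best_mapping(["sun", "planet"], ["nucleus", "electron"],
#                               relation_vector)
```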