210,626 research outputs found

    Associating object names with descriptions of shape that distinguish possible from impossible objects.

    Get PDF
    Five experiments examine the proposal that object names are closely linked to representations of global, 3D shape by comparing memory for simple line drawings of structurally possible and impossible novel objects. Objects were rendered impossible through local edge violations to global coherence (cf. Schacter, Cooper, & Delaney, 1990), and supplementary observations confirmed that the sets of possible and impossible objects were matched for their distinctiveness. Employing a test of explicit recognition memory, Experiment 1 confirmed that the possible and impossible objects were equally memorable. Experiments 2–4 demonstrated that adults learn names (single-syllable non-words presented as count nouns, e.g., “This is a dax”) for possible objects more easily than for impossible objects, and an item-based analysis showed that this effect was unrelated to either the memorability or the distinctiveness of the individual objects. Experiment 3 indicated that the effects of object possibility on name learning were long term (spanning at least 2 months), implying that the cognitive processes being revealed can support the learning of object names in everyday life. Experiment 5 demonstrated that hearing someone else name an object at presentation improves recognition memory for possible objects, but not for impossible objects. Taken together, the results indicate that object names are closely linked to the descriptions of global, 3D shape that can be derived for structurally possible objects but not for structurally impossible objects. In addition, the results challenge the view that object decision and explicit recognition necessarily draw on separate memory systems, with only the former being supported by these descriptions of global object shape. It seems that recognition can also be supported by these descriptions, provided the original encoding conditions encourage their derivation. Hearing an object named at encoding appears to be just such a condition. These observations are discussed in relation to the effects of naming in other visual tasks, and to the role of visual attention in object identification.

    Learning with Latent Language

    Full text link
    The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter's loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.
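
    A runnable toy sketch may help make the search procedure concrete. In the Python sketch below, the description set, the interpreter, and the support data are all invented for illustration (the paper's interpreter is a pretrained neural network, not a lookup table); learning a new concept is literally a search over candidate language strings for the one that minimizes loss on a few examples:

        # Toy illustration of "language as parameter space" (all names and
        # data here are invented stand-ins for the paper's neural models).
        DESCRIPTIONS = ["x is positive", "x is negative", "x is even", "x is odd"]

        def interpret(x, description):
            """Toy interpretation model mapping (input, description) -> label."""
            return {
                "x is positive": x > 0,
                "x is negative": x < 0,
                "x is even": x % 2 == 0,
                "x is odd": x % 2 == 1,
            }[description]

        def learn_concept(examples):
            """Search the space of language strings for the description that
            minimizes the interpreter's loss on the support examples; the
            learned classifier is the interpreter conditioned on that string."""
            loss = lambda d: sum(interpret(x, d) != y for x, y in examples)
            best = min(DESCRIPTIONS, key=loss)
            return best, (lambda x: interpret(x, best))

        # Few-shot examples of an unnamed concept (here: evenness).
        support = [(2, True), (3, False), (10, True), (7, False)]
        description, classifier = learn_concept(support)
        print(description)     # -> "x is even"
        print(classifier(42))  # -> True

    Note that, as in the paper, no language supervision accompanies the new concept: language only structures the hypothesis space searched at learning time.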

    Creativity as Cognitive Design: The case of mesoscopic variables in Meta-Structures

    Get PDF
    Creativity is an open problem that has long been approached differently by several disciplines. In this contribution we consider as creative the constructivist design that an observer imposes on the description levels of complex phenomena, such as self-organized and emergent ones (e.g., Bénard rollers, Belousov-Zhabotinsky reactions, flocks, swarms, and more radical cognitive and social emergences). We consider this design as related to the Gestaltian creation of a language fit for representing natural processes and the observer in an integrated way. Organised systems, both artificial and most natural ones, are designed and modelled according to a logically closed model which governs all the inter-relations among their constitutive elements and which can be described by an algorithm or a single formal model. We show that logical openness and DYSAM (Dynamical Usage of Models) are the proper tools for those phenomena which cannot be described by algorithms or by a single formal model. The strong correlation between emergence and creativity suggests that an open model is the best way to provide a formal definition of creativity. A specific application concerns the possibility of shaping the emergence of Collective Behaviours. Different modelling approaches have been introduced, based on symbolic as well as sub-symbolic rules of interaction, to simulate collective phenomena by means of computational emergence. Another approach models collective phenomena as sequences of Multiple Systems established by percentages of conceptually interchangeable agents taking on the same roles at different times and different roles at the same time. In the Meta-Structures project we propose to use mesoscopic variables as creative design, invention, good continuity and imitation of the description level. We propose to define the coherence of sequences of Multiple Systems by the values taken on by the dynamic mesoscopic clusters of their constitutive elements, such as the instantaneous number of elements in a flock having the same speed, distance from their nearest neighbours, direction and altitude. In Meta-Structures the collective behaviour's coherence corresponds, for instance, to the scalar values taken over time by speed, distance, direction and altitude, analysed through statistical strategies of interpolation, quasi-periodicity, levels of ergodicity and their reciprocal relationships. In this case the constructivist role of the observer is considered creative because it involves neither non-linear replication nor the transposition of levels of description and models used for artificial systems, as in reductionism. Creativity rather lies in inventing new mesoscopic variables able to identify coherent patterns in complex systems. As is well known, mesoscopic variables represent partial macroscopic properties of a system by using some of the microscopic degrees of freedom possessed by its composing elements. Such partial usage of microscopic as well as macroscopic properties allows a kind of Gestaltian continuity and imitation between levels of description in mesoscopic modelling.
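
    To make the notion of a mesoscopic variable concrete, here is a small Python sketch of one quantity of the kind described above: the instantaneous number of flock members sharing, within a tolerance, the same speed. The agent velocities and the tolerance are invented for illustration; the project tracks several such variables (speed, nearest-neighbour distance, direction, altitude) and studies their statistics over time:

        # One mesoscopic variable, sketched: at a single instant, count the
        # agents whose speed matches (within `tol`) at least one other agent's.
        # Velocities and tolerance are invented for illustration.
        import math

        def mesoscopic_same_speed(velocities, tol=0.1):
            """Cluster-level property using only some microscopic degrees of
            freedom (speed, ignoring position, direction and altitude)."""
            speeds = [math.hypot(vx, vy) for vx, vy in velocities]
            return sum(
                any(abs(s - t) <= tol for j, t in enumerate(speeds) if j != i)
                for i, s in enumerate(speeds)
            )

        # One time step of a toy flock: four of five agents fly at ~unit speed.
        flock = [(1.0, 0.0), (0.0, 1.05), (0.7, 0.7), (-0.98, 0.0), (3.0, 0.0)]
        print(mesoscopic_same_speed(flock))  # -> 4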

    Extending the 5S Framework of Digital Libraries to support Complex Objects, Superimposed Information, and Content-Based Image Retrieval Services

    Get PDF
    Advanced services in digital libraries (DLs) have been developed and widely used to address the required capabilities of an assortment of systems as DLs expand into diverse application domains. These systems may require support for images (e.g., Content-Based Image Retrieval), Complex (information) Objects, and use of content at fine grain (e.g., Superimposed Information). Due to the lack of consensus on precise theoretical definitions for those services, implementation efforts often involve ad hoc development, leading to duplication and interoperability problems. This article presents a methodology to address those problems by extending a precisely specified minimal digital library (in the 5S framework) with formal definitions of the aforementioned services. The theoretical extensions of digital library functionality presented here are reinforced with practical case studies as well as scenarios for the individual and integrative use of services to balance theory and practice. An implication of this methodology is that further advanced services can be integrated into our extended framework as they are identified. The theoretical definitions and case studies we present may impact future development efforts and a wide range of digital library researchers, designers, and developers.
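
    As an illustration of the kind of service being formalized (and not the 5S framework's set-theoretic definition itself), the Python sketch below implements a bare-bones content-based image retrieval operation: ranking a collection by feature-vector similarity to a query image. The 3-bin colour histograms and cosine similarity are invented placeholders for real extracted features:

        # Bare-bones CBIR sketch: rank images by cosine similarity of feature
        # vectors (toy 3-bin colour histograms stand in for real features).
        import math

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        def cbir(query_features, collection):
            """Return document ids ranked by similarity to the query image."""
            ranked = sorted(collection,
                            key=lambda item: cosine(query_features, item[1]),
                            reverse=True)
            return [doc_id for doc_id, _ in ranked]

        images = [("img1", [0.8, 0.1, 0.1]),
                  ("img2", [0.1, 0.8, 0.1]),
                  ("img3", [0.7, 0.2, 0.1])]
        print(cbir([0.9, 0.05, 0.05], images))  # -> ['img1', 'img3', 'img2']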

    A Neural Model for Generating Natural Language Summaries of Program Subroutines

    Full text link
    Source code summarization -- creating natural language descriptions of source code behavior -- is a rapidly-growing research topic with applications to automatic documentation generation, program comprehension, and software maintenance. Traditional techniques relied on heuristics and templates built manually by human experts. Recently, data-driven approaches based on neural machine translation have largely overtaken template-based systems. But nearly all of these techniques rely almost entirely on programs having good internal documentation; without clear identifier names, the models fail to create good summaries. In this paper, we present a neural model that combines words from code with code structure from an AST. Unlike previous approaches, our model processes each data source as a separate input, which allows the model to learn code structure independent of the text in code. This process helps our approach provide coherent summaries in many cases even when zero internal documentation is provided. We evaluate our technique with a dataset we created from 2.1m Java methods. We find improvement over two baseline techniques from the SE literature and one from the NLP literature.
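
    The dual-input design can be sketched in a few lines of PyTorch. The skeleton below is illustrative only (layer sizes, the flattened-AST encoding, and the fusion step are simplifications, and a full model would typically add attention over both encoders): two separate encoders process words from code and AST nodes, so structural signal is learned independently of identifier text:

        # Illustrative dual-encoder skeleton (hypothetical sizes and names).
        import torch
        import torch.nn as nn

        class DualEncoderSummarizer(nn.Module):
            def __init__(self, code_vocab, ast_vocab, out_vocab, dim=128):
                super().__init__()
                self.code_emb = nn.Embedding(code_vocab, dim)
                self.ast_emb = nn.Embedding(ast_vocab, dim)
                self.code_enc = nn.GRU(dim, dim, batch_first=True)  # words from code
                self.ast_enc = nn.GRU(dim, dim, batch_first=True)   # structure from AST
                self.out = nn.Linear(2 * dim, out_vocab)

            def forward(self, code_ids, ast_ids):
                _, h_code = self.code_enc(self.code_emb(code_ids))
                _, h_ast = self.ast_enc(self.ast_emb(ast_ids))
                # Concatenating the two encodings keeps the structural (AST)
                # signal usable even when identifier names are uninformative.
                fused = torch.cat([h_code[-1], h_ast[-1]], dim=-1)
                return self.out(fused)  # logits over the next summary word

        model = DualEncoderSummarizer(code_vocab=5000, ast_vocab=100, out_vocab=8000)
        code = torch.randint(0, 5000, (2, 40))  # batch of 2 code-token sequences
        ast = torch.randint(0, 100, (2, 120))   # flattened AST node sequences
        print(model(code, ast).shape)           # -> torch.Size([2, 8000])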