50,055 research outputs found

    Towards a shared ontology: a generic classification of cognitive processes in conceptual design

    As a step towards addressing ontological issues in design cognition research, this paper presents the first generic classification of cognitive processes investigated in protocol studies on conceptual design cognition. The classification is based on a systematic review of 47 studies published over the past 30 years. Three viewpoints on the nature of design cognition are outlined (search, exploration and design activities), highlighting considerable differences in the concepts and terminology applied to describe cognition. To provide a more unified view of the cognitive processes fundamentally under study, we map specific descriptions of cognitive processes provided in protocol studies to more generic, established definitions in the cognitive psychology literature. This reveals a set of 6 categories of cognitive process that appear to be commonly studied and are therefore likely to be prevalent in conceptual design: (1) long-term memory; (2) semantic processing; (3) visual perception; (4) mental imagery processing; (5) creative output production and (6) executive functions. The categories and their constituent processes are formalised in the generic classification. The classification provides the basis for a generic, shared ontology of cognitive processes in design that is conceptually and terminologically consistent with the ontology of cognitive psychology and neuroscience. In addition, the work highlights 6 key avenues for future empirical research: (1) the role of episodic and semantic memory; (2) consistent definitions of semantic processes; (3) the role of sketching from alternative theoretical perspectives on perception and mental imagery; (4) the role of working memory; (5) the meaning and nature of synthesis and (6) unidentified cognitive processes implicated in conceptual design elsewhere in the literature.

    Ontology-Based MEDLINE Document Classification

    An increasing and overwhelming amount of biomedical information is available in the research literature, mainly in the form of free text. Biologists need tools that automate their information search and deal with the high volume and ambiguity of free text. Ontologies can help automatic information processing by providing standard concepts and information about the relationships between concepts. The Medical Subject Headings (MeSH) ontology is already available and used by MEDLINE indexers to annotate the conceptual content of biomedical articles. This paper presents a domain-independent method that uses the MeSH ontology's inter-concept relationships to extend the existing MeSH-based representation of MEDLINE documents. The extension method is evaluated within a document triage task organized by the Genomics track of the 2005 Text REtrieval Conference (TREC). Our method for extending the representation of documents leads to an improvement of 17% over a non-extended baseline in terms of normalized utility, the metric defined for the task. The SVMlight software is used to classify documents.
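
    The abstract does not spell out the expansion procedure itself, so the following minimal sketch (Python, with a toy concept hierarchy and scikit-learn's LinearSVC standing in for SVMlight) only illustrates the general idea: enrich a document's MeSH annotations with broader concepts from the ontology before training a linear classifier. The PARENTS table, the expand_with_ancestors helper and the tiny labelled set are illustrative assumptions, not the authors' data or code.

        from sklearn.feature_extraction import DictVectorizer
        from sklearn.svm import LinearSVC

        # Toy fragment of a MeSH-like hierarchy: concept -> broader (parent) concepts.
        # Real MeSH relationships would be read from the MeSH descriptor files.
        PARENTS = {
            "Apoptosis": ["Cell Death"],
            "Cell Death": ["Cell Physiological Phenomena"],
            "Mice, Knockout": ["Mice"],
        }

        def expand_with_ancestors(mesh_terms):
            """Return a document's MeSH terms plus all of their ancestor concepts."""
            expanded = set(mesh_terms)
            frontier = list(mesh_terms)
            while frontier:
                term = frontier.pop()
                for parent in PARENTS.get(term, []):
                    if parent not in expanded:
                        expanded.add(parent)
                        frontier.append(parent)
            return expanded

        def to_features(mesh_terms, extend=True):
            """Bag-of-concepts features, optionally extended with ancestor concepts."""
            terms = expand_with_ancestors(mesh_terms) if extend else set(mesh_terms)
            return {term: 1 for term in terms}

        # Tiny labelled example: 1 = selected for triage, 0 = not selected.
        docs = [["Apoptosis", "Mice, Knockout"], ["Mice, Knockout"], ["Cell Death"]]
        labels = [1, 0, 1]

        vectorizer = DictVectorizer()
        X = vectorizer.fit_transform([to_features(d) for d in docs])
        classifier = LinearSVC()  # stand-in for the paper's SVMlight classifier
        classifier.fit(X, labels)
        print(classifier.predict(vectorizer.transform([to_features(["Apoptosis"])])))

    Comparing runs with extend=True against extend=False mirrors the spirit of the paper's extended-versus-baseline comparison, although the reported 17% gain is measured with the TREC normalized utility metric rather than raw accuracy.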

    Acquiring Word-Meaning Mappings for Natural Language Interfaces

    This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The usefulness of the lexicons learned by WOLFIE is compared to that of lexicons acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning; however, unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and therefore are potentially very useful in natural language applications. However, most results to date for active learning have only considered standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicons. We show that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance.
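
    The abstract describes active learning only at a high level, so the sketch below shows a generic pool-based, uncertainty-style selection loop rather than WOLFIE's actual algorithm; the annotate, train and certainty callables and the batch/round parameters are placeholders for whatever lexicon learner and confidence measure a real system would supply.

        import random

        def active_learning_loop(pool, annotate, train, certainty, batch_size=5, rounds=10):
            """Generic pool-based active learning loop (a sketch, not WOLFIE itself).

            pool       -- unannotated sentences
            annotate   -- oracle mapping a sentence to its meaning representation
            train      -- builds a lexicon/parser from (sentence, meaning) pairs
            certainty  -- scores how confidently the current model handles a sentence
            """
            pool = list(pool)
            labelled = []

            # Seed with a small random sample so the first model has data to learn from.
            for sentence in random.sample(pool, min(batch_size, len(pool))):
                pool.remove(sentence)
                labelled.append((sentence, annotate(sentence)))
            model = train(labelled)

            for _ in range(rounds):
                if not pool:
                    break
                # Query the examples the current model is least certain about.
                pool.sort(key=lambda s: certainty(model, s))
                queries, pool = pool[:batch_size], pool[batch_size:]
                labelled.extend((s, annotate(s)) for s in queries)
                model = train(labelled)
            return model

    Querying the lowest-certainty sentences first is what lets such a loop reach a given accuracy with fewer annotated examples, which is the effect the paper reports.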

    Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning

    Distributional word representation methods exploit word co-occurrences to build compact vector encodings of words. While these representations enjoy widespread use in modern natural language processing, it is unclear whether they accurately encode all necessary facets of conceptual meaning. In this paper, we evaluate how well these representations can predict perceptual and conceptual features of concrete concepts, drawing on two semantic norm datasets sourced from human participants. We find that several standard word representations fail to encode many salient perceptual features of concepts, and show that these deficits correlate with word-word similarity prediction errors. Our analyses provide motivation for grounded and embodied language learning approaches, which may help to remedy these deficits.
    Comment: Accepted at RoboNLP 2017
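
    As a rough illustration of this kind of evaluation setup (not the paper's exact protocol), the sketch below trains a classifier to predict a single binary perceptual feature from word vectors and scores it with cross-validation; the random vectors and labels are placeholders for pretrained embeddings and a human feature-norm dataset.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Toy stand-ins: rows are word vectors; y marks whether human norms list a
        # given perceptual feature (e.g. "is_round") for each concept. A real run
        # would use pretrained embeddings and a feature-norm dataset.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 50))    # 40 concepts, 50-dimensional vectors
        y = rng.integers(0, 2, size=40)  # binary feature annotations

        classifier = LogisticRegression(max_iter=1000)
        scores = cross_val_score(classifier, X, y, cv=5, scoring="f1")
        print("mean F1 for predicting the feature from word vectors:", scores.mean())

    Repeating this per feature and aggregating the scores gives a feature-by-feature picture of which facets of meaning the vectors do and do not encode.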