
    Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods

    Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common. Comment: 23 pages
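The first-order versus second-order distinction mentioned in this abstract can be sketched in a few lines. In this toy illustration (the words, contexts, and co-occurrence counts are invented for the example, not taken from the paper), first-order similarity compares the words the contexts share directly, while second-order similarity compares the co-occurrence vectors of their words, so contexts with no words in common can still score high:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def first_order(ctx1, ctx2):
    # First-order: contexts are similar if they share words directly.
    return cosine(Counter(ctx1.split()), Counter(ctx2.split()))

def second_order(ctx1, ctx2, cooc):
    # Second-order: replace each word by its co-occurrence vector and sum;
    # contexts sharing no words can still be similar.
    def ctx_vec(ctx):
        total = Counter()
        for w in ctx.split():
            total.update(cooc.get(w, {}))
        return total
    return cosine(ctx_vec(ctx1), ctx_vec(ctx2))

# Assumed toy co-occurrence statistics, for illustration only.
cooc = {"physician": {"hospital": 3, "patient": 2},
        "doctor":    {"hospital": 2, "patient": 3}}
```

Here `first_order("physician", "doctor")` is zero (no shared words), while `second_order("physician", "doctor", cooc)` is high, because the two words occur with the same neighbours.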

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
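The term-document matrix, the first of the three VSM classes this survey organizes the literature around, can be built directly from raw counts. A minimal sketch with an assumed three-document toy corpus (documents similar in word use get similar columns):

```python
from math import sqrt

docs = ["ships sail the sea", "the sea is deep", "computers process text"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix: rows are terms, columns are documents;
# entry (i, j) counts how often term i occurs in document j.
M = [[d.split().count(t) for d in docs] for t in vocab]

def doc_sim(j, k):
    """Cosine similarity between the column vectors of documents j and k."""
    col_j = [row[j] for row in M]
    col_k = [row[k] for row in M]
    dot = sum(x * y for x, y in zip(col_j, col_k))
    norm = sqrt(sum(x * x for x in col_j)) * sqrt(sum(x * x for x in col_k))
    return dot / norm if norm else 0.0
```

The first two documents share "the" and "sea" and so have positive similarity; the third shares nothing with them and scores zero. Word-context and pair-pattern matrices follow the same pattern with different row/column definitions.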

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of the system that relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both for deriving the topics that run through the system and for clustering it. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is defined based on the flow of data between identifiers to represent a module as a dependency graph where the nodes correspond to identifiers and the edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that by introducing contexts for identifiers, the quality of the modularization of the software systems is improved. Both of the context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that topics inferred from the contextual representations are more meaningful compared to the plain representation of the documents.
The proposed approach of introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization, and topic analysis.
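The second context model described in this abstract (identifiers as nodes, data flow as edges) can be sketched as a small adjacency-set structure. The identifier names and the `context_of` helper below are invented for illustration; the paper's own extraction of data dependencies from Java source is far more involved:

```python
from collections import defaultdict

class DependencyGraph:
    """A module as a data-dependency graph over its identifiers."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_dependency(self, src, dst):
        # Record that data flows from identifier `src` to identifier `dst`.
        self.edges[src].add(dst)

    def context_of(self, identifier):
        # The (forward) context of an identifier: the identifiers it feeds.
        return sorted(self.edges[identifier])

# Toy example: rawInput flows into parsedConfig, which flows into server.
g = DependencyGraph()
g.add_dependency("rawInput", "parsedConfig")
g.add_dependency("parsedConfig", "server")
```

A clustering step would then group identifiers whose graph contexts overlap, rather than identifiers that merely share token substrings.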

    The Usage and Evaluation of Anthropomorphic Form in Robot Design

    There are numerous examples illustrating the application of human shape in everyday products. The use of anthropomorphic form has long been a basic design strategy, particularly in the design of intelligent service robots. As such, it is desirable to use anthropomorphic form not only in aesthetic design but also in interaction design. Proceeding from how anthropomorphism in various domains has affected human perception, we assumed that anthropomorphic form used in the appearance and interaction design of robots enriches the explanation of their function and creates familiarity with robots. In many cases we have found, misused anthropomorphic form leads to user disappointment or negative impressions of the robot. In order to use anthropomorphic form effectively, it is necessary to measure the similarity of an artifact to the human form (humanness), and then evaluate whether the usage of anthropomorphic form fits the artifact. The goal of this study is to propose a general evaluation framework of anthropomorphic form for robot design. We suggest three major steps for framing the evaluation: 'measuring anthropomorphic form in appearance', 'measuring anthropomorphic form in Human-Robot Interaction', and 'evaluating the accordance of the two former measurements'. This evaluation process will endow a robot with an amount of humanness in its appearance equivalent to the amount of humanness in its interaction ability, and thus ultimately facilitate user satisfaction. Keywords: Anthropomorphic Form; Anthropomorphism; Human-Robot Interaction; Humanness; Robot Design

    Knowledge-based Transfer Learning Explanation

    Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of dataset and prediction task) to enhance prediction model training in another learning domain. In this paper, we propose an ontology-based approach for human-centric explanation of transfer learning. Three kinds of knowledge-based explanatory evidence, with different granularities, including general factors, particular narrators and core contexts, are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DBpedia demonstrates the confidence and availability of this evidence in explaining the transferability of feature representation in flight departure delay forecasting. Comment: Accepted by the International Conference on Principles of Knowledge Representation and Reasoning, 201

    Components of cultural complexity relating to emotions: A conceptual framework

    Many cultural variations in emotions have been documented in previous research, but a general theoretical framework involving cultural sources of these variations is still missing. The main goal of the present study was to determine what components of cultural complexity interact with the emotional experience and behavior of individuals. The proposed framework conceptually distinguishes five main components of cultural complexity relating to emotions: 1) emotion language, 2) conceptual knowledge about emotions, 3) emotion-related values, 4) feeling rules, i.e. norms for subjective experience, and 5) display rules, i.e. norms for emotional expression.

    The emotional recall task: juxtaposing recall and recognition-based affect scales

    Existing affect scales typically involve recognition of emotions from a predetermined emotion checklist. However, a recognition-based checklist may fail to capture sufficient breadth and specificity of an individual’s recalled emotional experiences and may therefore miss emotions that frequently come to mind. More generally, how do recalled emotions differ from recognized emotions? To address these issues, we present and evaluate an affect scale based on recalled emotions. Participants are asked to produce 10 words that best describe their emotions over the past month and then to rate each emotion for how often it was experienced. We show that the average weighted valence of the words produced in this task, the Emotional Recall Task (ERT), is strongly correlated with scales related to general affect, such as the PANAS, Ryff’s Scales of Psychological Well-being, the Satisfaction with Life Scale, the Depression Anxiety and Stress Scales, and a few other related scales. We further show that the Emotional Recall Task captures a breadth and specificity of emotions not available in other scales but that are nonetheless commonly reported as experienced emotions. We test a general version of the ERT (the ERT general) that is language neutral and can be used across cultures. Finally, we show that the ERT is valid in a test-retest paradigm. In sum, the ERT measures affect based on emotion terms relevant to an individual’s idiosyncratic experience. It is consistent with recognition-based scales, but also offers a new direction towards enriching our understanding of individual differences in recalled and recognized emotions.
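The "average weighted valence" scoring this abstract describes can be sketched as a frequency-weighted mean over the produced words. The tiny valence lexicon and the word/rating values below are assumed for illustration; the actual ERT uses established valence norms rather than this hand-picked dictionary:

```python
# Assumed toy valence lexicon on a -1 (negative) to +1 (positive) scale.
valence = {"happy": 0.9, "calm": 0.6, "anxious": -0.7, "sad": -0.8}

def ert_score(responses):
    """Frequency-weighted average valence of produced emotion words.

    responses: list of (emotion_word, frequency_rating) pairs, one per
    word the participant produced. Words missing from the lexicon are
    skipped rather than guessed.
    """
    scored = [(valence[w], f) for w, f in responses if w in valence]
    total_weight = sum(f for _, f in scored)
    return sum(v * f for v, f in scored) / total_weight if total_weight else 0.0
```

So a participant who frequently recalls "happy" and rarely "sad" gets a positive score, which is the quantity the paper correlates with PANAS and the other scales.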