
    Generalized Relation Learning with Semantic Correlation Awareness for Link Prediction

    Developing link prediction models to automatically complete knowledge graphs (KGs) has recently been the focus of significant research interest. The current methods for the link prediction task have two natural problems: 1) the relation distributions in KGs are usually unbalanced, and 2) there are many unseen relations that occur in practical situations. These two problems limit the training effectiveness and practical applications of the existing link prediction models. We advocate a holistic understanding of KGs and propose in this work a unified Generalized Relation Learning framework, GRL, to address the above two problems; it can be plugged into existing link prediction models. GRL conducts generalized relation learning that is aware of semantic correlations between relations, which serve as a bridge to connect semantically similar relations. After training with GRL, the closeness of semantically similar relations in vector space and the discrimination of dissimilar relations are improved. We perform comprehensive experiments on six benchmarks to demonstrate the superior capability of GRL in the link prediction task. In particular, GRL is found to enhance the existing link prediction models, making them insensitive to unbalanced relation distributions and capable of learning unseen relations. Comment: Preprint of accepted AAAI 2021 paper
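    As a rough illustration of the semantic-correlation idea (a minimal sketch, not the authors' implementation), the code below assumes relations are already represented as embedding vectors and turns their pairwise cosine similarities into a soft target distribution, so that semantically close relations share supervision signal:

```python
# Illustrative sketch only: relation embeddings, the temperature, and the
# soft-label construction are assumptions, not the GRL code.
import numpy as np

def cosine_similarity_matrix(rel_emb: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between relation embeddings (n_rel x dim)."""
    norm = np.linalg.norm(rel_emb, axis=1, keepdims=True) + 1e-12
    unit = rel_emb / norm
    return unit @ unit.T

def soft_relation_targets(rel_emb: np.ndarray, temperature: float = 0.1) -> np.ndarray:
    """Turn similarities into a soft distribution over relations, so that
    semantically similar relations receive part of each other's supervision."""
    sim = cosine_similarity_matrix(rel_emb) / temperature
    sim -= sim.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(sim)
    return exp / exp.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(5, 16))   # 5 toy relations, 16-dim vectors
    print(soft_relation_targets(embeddings).round(3))  # each row sums to 1
```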

    Semantic correlation of behavior for the interoperability of heterogeneous simulations

    A desirable goal of military simulation training is to provide large-scale or joint exercises to train personnel at higher echelons. To help meet this goal, many of the lower echelon combatants must consist of computer-generated forces, with some of these echelons composed of units from different simulations. The object of the research described is to correlate the behaviors of entities in different simulations so that they can interoperate with one another to support simulation training. Specific source behaviors can be translated to a form expressed in terms of general behaviors, which can then be correlated to any desired specific destination simulation behavior without prior knowledge of the pairing. The correlation, however, does not result in 100% effectiveness because most simulations have different semantics and were designed for different training needs. An ontology of general behaviors and behavior parameters is used to compare a database of source behaviors, written in terms of these general behaviors, with a database of destination behaviors. This comparison is based upon the similarity of sub-behaviors and the behavior parameters. Source behaviors/parameters may be deemed similar based upon their sub-behaviors or sub-parameters and their relationship (more specific or more general) to destination behaviors/parameters. As an additional constraint for correlation, a conversion path from all required destination parameters to a source parameter must be found in order for the behavior to be correlated and thus executed. The length of this conversion path often determines the similarity for behavior parameters, both source and destination. This research has shown, through a set of experiments, that heuristic metrics, in conjunction with a corresponding behavior and parameter ontology, are sufficient for the correlation of heterogeneous simulation behavior. These metrics successfully correlated known pairings provided by experts and provided reasonable correlations for behaviors that have no corresponding destination behavior. For different simulations, these metrics serve as a foundation for more complex methods of behavior correlation.
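    The conversion-path constraint and the path-length heuristic can be illustrated with a small sketch; the conversion graph, parameter names, and scoring function below are hypothetical and are not taken from the dissertation:

```python
# Illustrative sketch: a parameter is usable only if a conversion path from the
# required destination parameter back to some source parameter exists, and the
# path length drives a simple heuristic similarity score.
from collections import deque
from typing import Optional

# Hypothetical conversion graph: an edge means "can be converted to".
CONVERSIONS = {
    "speed_knots": ["speed_mps"],
    "speed_mps": ["speed_kph"],
    "heading_rad": ["heading_deg"],
}

def conversion_path_length(src: str, dst: str) -> Optional[int]:
    """Breadth-first search for the shortest conversion path; None if absent."""
    if src == dst:
        return 0
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, dist = queue.popleft()
        for nxt in CONVERSIONS.get(node, []):
            if nxt == dst:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def parameter_similarity(src: str, dst: str) -> float:
    """Heuristic: similarity decays with the length of the conversion path."""
    length = conversion_path_length(src, dst)
    return 0.0 if length is None else 1.0 / (1.0 + length)

def behavior_correlatable(source_params, required_dest_params) -> bool:
    """A behavior can be correlated only if every required destination
    parameter can be converted to at least one source parameter."""
    return all(
        any(conversion_path_length(d, s) is not None for s in source_params)
        for d in required_dest_params
    )

if __name__ == "__main__":
    print(parameter_similarity("speed_knots", "speed_kph"))            # 0.333...
    print(behavior_correlatable(["speed_kph", "heading_deg"],
                                ["speed_knots", "heading_rad"]))        # True
```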

    Visual Re-ranking with Natural Language Understanding for Text Spotting

    Many scene text recognition approaches are based on purely visual information and ignore the semantic relation between scene and text. In this paper, we tackle this problem from a natural language processing perspective to fill the gap between language and vision. We propose a post-processing approach to improve scene text recognition accuracy by using occurrence probabilities of words (a unigram language model) and the semantic correlation between scene and text. For this, we initially rely on an off-the-shelf deep neural network, already trained with a large amount of data, which provides a series of text hypotheses per input image. These hypotheses are then re-ranked using word frequencies and semantic relatedness with objects or scenes in the image. As a result of this combination, the performance of the original network is boosted with almost no additional cost. We validate our approach on the ICDAR'17 dataset. Comment: Accepted by ACCV 2018. arXiv admin note: substantial text overlap with arXiv:1810.0977
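    A minimal sketch of this kind of re-ranking follows, assuming a toy unigram frequency table, toy word embeddings, and an illustrative linear combination of scores; none of these resources or weights are the paper's own:

```python
# Illustrative sketch: combine a unigram language-model probability with a
# semantic-relatedness score between each text hypothesis and the objects
# detected in the image, then re-rank the recognizer's hypotheses.
import numpy as np

# Hypothetical unigram counts (in practice taken from a large corpus).
UNIGRAM_COUNTS = {"coffee": 900, "covfefe": 1, "cafe": 400, "care": 800}
TOTAL = sum(UNIGRAM_COUNTS.values())

# Hypothetical word embeddings used for semantic relatedness.
EMB = {
    "coffee": np.array([0.9, 0.1, 0.0]),
    "covfefe": np.array([0.1, 0.1, 0.8]),
    "cafe": np.array([0.8, 0.2, 0.1]),
    "care": np.array([0.1, 0.9, 0.0]),
    "cup": np.array([0.85, 0.15, 0.05]),
}

def unigram_logprob(word: str) -> float:
    # Add-one smoothing so unseen words keep a nonzero probability.
    return float(np.log((UNIGRAM_COUNTS.get(word, 0) + 1) / (TOTAL + len(UNIGRAM_COUNTS))))

def relatedness(word: str, context_objects) -> float:
    # Max cosine similarity between the word and any detected object label.
    if word not in EMB:
        return 0.0
    w = EMB[word] / np.linalg.norm(EMB[word])
    sims = [float(w @ (EMB[o] / np.linalg.norm(EMB[o])))
            for o in context_objects if o in EMB]
    return max(sims, default=0.0)

def rerank(hypotheses, visual_scores, context_objects, alpha=0.5, beta=0.3):
    """Re-score each (hypothesis, visual score) pair; higher is better."""
    scored = [(w, v + alpha * unigram_logprob(w) + beta * relatedness(w, context_objects))
              for w, v in zip(hypotheses, visual_scores)]
    return sorted(scored, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    # The recognizer's top hypotheses with its own confidence scores.
    print(rerank(["covfefe", "coffee", "cafe"], [0.9, 0.85, 0.4], ["cup"]))
```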

    Visual Semantic Re-ranker for Text Spotting

    Many current state-of-the-art methods for text recognition are based on purely local information and ignore the semantic correlation between text and its surrounding visual context. In this paper, we propose a post-processing approach to improve the accuracy of text spotting by using the semantic relation between the text and the scene. We initially rely on an off-the-shelf deep neural network that provides a series of text hypotheses for each input image. These text hypotheses are then re-ranked using their semantic relatedness with the objects in the image. As a result of this combination, the performance of the original network is boosted with a very low computational cost. The proposed framework can be used as a drop-in complement for any text-spotting algorithm that outputs a ranking of word hypotheses. We validate our approach on the ICDAR'17 shared task dataset.