
    A framework for community detection in heterogeneous multi-relational networks

    There has been a surge of interest in community detection in homogeneous single-relational networks, which contain only a single type of node and edge. However, many real-world systems are naturally described as heterogeneous multi-relational networks, which contain multiple types of nodes and edges. In this paper, we propose a new method for detecting communities in such networks. Our method is based on optimizing the composite modularity, a new modularity proposed for evaluating partitions of a heterogeneous multi-relational network into communities. Our method is parameter-free, scalable, and suitable for various networks with general structure. We demonstrate that it outperforms state-of-the-art techniques in detecting pre-planted communities in synthetic networks. Applied to a real-world Digg network, it successfully detects meaningful communities.
    Comment: 27 pages, 10 figures
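    The abstract does not spell out the composite modularity formula, but it presents it as a generalization of classical modularity to networks with multiple node and edge types. As background, here is a minimal, self-contained Python sketch of the classical Newman-Girvan modularity for a single-relational network; the toy graph and partition are illustrative and not taken from the paper.

        # Minimal sketch of classical Newman-Girvan modularity for a single-relational
        # network: Q = (fraction of intra-community edges) - (expected fraction under
        # the configuration null model). Composite modularity, per the abstract,
        # generalizes this idea to multiple node and edge types.

        def modularity(edges, communities):
            """edges: list of (u, v) pairs; communities: dict node -> community id."""
            m = len(edges)
            if m == 0:
                return 0.0
            degree = {}
            for u, v in edges:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1

            # Observed fraction of edges that fall inside a community.
            intra = sum(1 for u, v in edges if communities[u] == communities[v]) / m

            # Expected intra-community fraction under the configuration (null) model.
            comm_degree = {}
            for node, k in degree.items():
                comm_degree[communities[node]] = comm_degree.get(communities[node], 0) + k
            expected = sum((d / (2 * m)) ** 2 for d in comm_degree.values())

            return intra - expected

        # Toy example: two triangles joined by a single bridge edge.
        edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
        partition = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
        print(round(modularity(edges, partition), 3))  # 0.357

    Maximizing such a quality function over candidate partitions is the usual optimization step; how the composite variant combines the different node and edge types is defined in the paper itself.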

    Towards automatic extraction of expressive elements from motion pictures: tempo

    This paper proposes a unique computational approach to the extraction of expressive elements of motion pictures for deriving high-level semantics of the stories they portray, thus enabling better video annotation and interpretation systems. The approach is motivated and directed by the existing cinematic conventions known as film grammar; as a first step towards demonstrating its effectiveness, it uses the attributes of motion and shot length to define and compute a novel measure of the tempo of a movie. Tempo flow plots are defined and derived for four full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signaled by their unique tempo. The results confirm tempo as a useful attribute in its own right and a promising component of semantic constructs such as the tone or mood of a film.
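    The abstract names the two attributes (motion and shot length) but not the exact tempo function. The sketch below is a hypothetical Python illustration of that general recipe: standardize both attributes so that short, high-motion shots score high, combine them with assumed weights, and smooth the result into a tempo flow whose sharp changes can be picked out by edge analysis. The weights, normalization, and smoothing window are assumptions, not the paper's definition.

        # Hypothetical per-shot tempo measure built from shot length and motion.
        import numpy as np

        def tempo_flow(shot_lengths, shot_motion, alpha=0.5, beta=0.5, smooth=5):
            """shot_lengths: seconds per shot; shot_motion: mean motion magnitude per shot."""
            s = np.asarray(shot_lengths, dtype=float)
            m = np.asarray(shot_motion, dtype=float)

            # Standardize both attributes; shorter shots and higher motion raise tempo.
            z_len = (np.median(s) - s) / (s.std() + 1e-9)
            z_mot = (m - m.mean()) / (m.std() + 1e-9)
            tempo = alpha * z_len + beta * z_mot

            # Smooth to obtain a tempo "flow" whose sharp edges mark dramatic changes.
            kernel = np.ones(smooth) / smooth
            return np.convolve(tempo, kernel, mode="same")

        flow = tempo_flow([2.1, 1.8, 6.0, 5.5, 1.2, 0.9], [4.0, 5.2, 1.1, 0.8, 6.3, 7.0])
        boundaries = np.flatnonzero(np.abs(np.diff(flow)) > flow.std())  # candidate event edges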

    1st INCF Workshop on Sustainability of Neuroscience Databases

    The goal of the workshop was to discuss issues related to the sustainability of neuroscience databases, identify problems and propose solutions, and formulate recommendations to the INCF. The report summarizes the discussions of invited participants from the neuroinformatics community as well as from other disciplines where sustainability issues have already been addressed. The recommendations for the INCF involve rating, ranking, and supporting database sustainability.

    Ontology-based model abstraction

    In recent years, there has been a growth in the use of reference conceptual models to capture information about complex and critical domains. However, as the complexity of these domains increases, so do the size and complexity of the models that represent them. Over the years, different techniques for complexity management in large conceptual models have been developed; in particular, several authors have proposed techniques for model abstraction. In this paper, we leverage the ontologically well-founded semantics of the modeling language OntoUML to propose a novel approach for model abstraction in conceptual models. We provide a precise definition for a set of graph-rewriting rules that can automatically produce much-reduced versions of OntoUML models, concentrating the models' information content around the ontologically essential types in the domain, i.e., the so-called Kinds. The approach has been implemented in a model-based editor and tested over a repository of OntoUML models.
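    The abstract describes the abstraction rules only at a high level. As an illustration of the general idea (fold non-Kind types into the Kind that supplies their identity, and lift relations accordingly), here is a hypothetical Python sketch; the stereotype names, data layout, and the single folding rule are assumptions for illustration, not the paper's actual graph-rewriting rules.

        # Hypothetical abstraction step: collapse subkinds, roles, and phases into
        # their identity-providing Kind and lift domain relations to the Kind level.

        def abstract_to_kinds(types, specializations, relations):
            """types: dict name -> stereotype ('kind', 'subkind', 'role', 'phase', ...);
            specializations: dict child -> parent; relations: list of (source, target, label)."""

            def kind_of(t):
                # Walk up the specialization chain until an ontological Kind is reached.
                while types.get(t) != "kind" and t in specializations:
                    t = specializations[t]
                return t

            kinds = {t for t, stereotype in types.items() if stereotype == "kind"}
            lifted = {(kind_of(a), kind_of(b), label) for a, b, label in relations}
            return kinds, sorted(lifted)

        types = {"Person": "kind", "Student": "role", "School": "kind", "Child": "phase"}
        specializations = {"Student": "Person", "Child": "Person"}
        relations = [("Student", "School", "enrolled in"), ("Child", "Person", "child of")]
        print(abstract_to_kinds(types, specializations, relations))
        # Keeps only the Kinds (Person, School) with the relations lifted onto them.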

    ViP-CNN: Visual Phrase Guided Convolutional Neural Network

    As the intermediate-level task connecting image captioning and object detection, visual relationship detection has started to catch researchers' attention because of its descriptive power and clear structure. It detects the objects and captures their pair-wise interactions with a subject-predicate-object triplet, e.g. person-ride-horse. In this paper, each visual relationship is considered as a phrase with three components. We formulate visual relationship detection as three inter-connected recognition problems and propose a Visual Phrase guided Convolutional Neural Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a Phrase-guided Message Passing Structure (PMPS) to establish connections among the relationship components and help the model consider the three problems jointly. A corresponding non-maximum suppression method and a model training strategy are also proposed. Experimental results show that ViP-CNN outperforms the state-of-the-art method in both speed and accuracy. We further pretrain ViP-CNN on our cleansed Visual Genome Relationship dataset, which is found to perform better than pretraining on ImageNet for this task.
    Comment: 10 pages, 5 figures, accepted by CVPR 2017
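    The abstract describes PMPS only at a high level. Below is a rough, hypothetical PyTorch sketch of the general idea of phrase-guided message passing: subject, predicate, and object branches refine their features by exchanging messages through the predicate ("phrase") branch. The layer sizes, the gather/broadcast scheme, and the single passing round are illustrative assumptions, not the paper's exact architecture.

        # Hypothetical gather-broadcast message passing over a subject-predicate-object triplet.
        import torch
        import torch.nn as nn

        class PhraseMessagePassing(nn.Module):
            def __init__(self, dim=512):
                super().__init__()
                # Gather: the predicate branch collects messages from subject and object features.
                self.gather = nn.Linear(3 * dim, dim)
                # Broadcast: the refined predicate feature is sent back to subject and object.
                self.to_subj = nn.Linear(2 * dim, dim)
                self.to_obj = nn.Linear(2 * dim, dim)

            def forward(self, f_subj, f_pred, f_obj):
                # One gather-broadcast round over the triplet.
                f_pred = torch.relu(self.gather(torch.cat([f_subj, f_pred, f_obj], dim=-1)))
                f_subj = torch.relu(self.to_subj(torch.cat([f_subj, f_pred], dim=-1)))
                f_obj = torch.relu(self.to_obj(torch.cat([f_obj, f_pred], dim=-1)))
                return f_subj, f_pred, f_obj

        # Toy usage: a batch of 4 candidate triplets with 512-d ROI features per branch.
        mp = PhraseMessagePassing(dim=512)
        s, p, o = (torch.randn(4, 512) for _ in range(3))
        s, p, o = mp(s, p, o)  # refined features feed the three recognition heads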