
    Towards ensuring Satisfiability of Merged Ontology

    The last decade has seen researchers develop efficient algorithms for mapping and merging ontologies to meet the demands of interoperability between heterogeneous and distributed information systems. Still, state-of-the-art ontology mapping and merging systems are semi-automatic: they reduce the burden of manually creating and maintaining mappings, but require human intervention to validate them. The contribution presented in this paper reduces human intervention one step further by automatically identifying semantic inconsistencies in the early stages of ontology merging. Our methodology detects inconsistencies based on structural mismatches that occur due to conflicts among the set of Generalized Concept Inclusions, and on Disjoint Relations that arise from differences between disjoint partitions in the local heterogeneous ontologies. We present novel methodologies to detect and repair semantic inconsistencies in the list of initial mappings. The result is a global merged ontology free from 'circulatory error in class/property hierarchy', 'common class/instance between disjoint classes', 'redundancy of subclass/subproperty relations', 'redundancy of disjoint relations', and other types of 'semantic inconsistency' errors. In this way, our methodology saves the time and cost of traversing local ontologies to validate mappings, improves performance by producing only consistent, accurate mappings, and reduces the dependence on users for ensuring the satisfiability and consistency of the merged ontology. The experiments show that the new approach with automatic inconsistency detection yields a significantly higher precision.
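    As an illustration of two of the inconsistency types named above, the sketch below (Python, with hypothetical class names; it is not the paper's methodology) detects a circularity error in a merged class hierarchy and a class subsumed by two classes declared disjoint, given the merged ontology as subclass edges and disjointness axioms.

```python
from collections import defaultdict

def build_graph(subclass_edges):
    """Map each class to its direct superclasses."""
    graph = defaultdict(set)
    for child, parent in subclass_edges:
        graph[child].add(parent)
    return dict(graph)

def superclasses(graph, cls):
    """All classes reachable from cls via subclass-of edges."""
    seen, stack = set(), [cls]
    while stack:
        for parent in graph.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def circularity_errors(graph):
    """Classes that reach themselves: a 'circulatory error in class hierarchy'."""
    return {c for c in graph if c in superclasses(graph, c)}

def disjointness_errors(graph, disjoint_pairs):
    """Classes subsumed by two classes declared disjoint:
    a 'common class between disjoint classes' error."""
    errors = set()
    for cls in graph:
        ancestors = superclasses(graph, cls) | {cls}
        for a, b in disjoint_pairs:
            if a in ancestors and b in ancestors:
                errors.add(cls)
    return errors

# A merge that introduces both kinds of inconsistency (hypothetical classes):
graph = build_graph([("Student", "Person"), ("Person", "Agent"),
                     ("Agent", "Person"), ("TA", "Student"), ("TA", "Staff")])
print(circularity_errors(graph))                           # {'Person', 'Agent'}
print(disjointness_errors(graph, [("Student", "Staff")]))  # {'TA'}
```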

    Growing a Tree in the Forest: Constructing Folksonomies by Integrating Structured Metadata

    Many social Web sites allow users to annotate content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and help them organize their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, compared to previous work, the approach produces larger, more accurate folksonomies and scales better. Comment: 10 pages; to appear in the Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2010.
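    The following toy sketch (hypothetical tag names and threshold; not the paper's relational-clustering algorithm) illustrates the general idea of aggregating shallow personal hierarchies into a deeper tree by merging those whose root tags match and whose child tags overlap.

```python
# Illustrative sketch only: merging shallow personal hierarchies ("saplings")
# whose roots agree and whose child tags overlap, to grow a common taxonomy.

def jaccard(a, b):
    """Tag-set overlap used as a crude structural similarity."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_saplings(saplings, threshold=0.2):
    """Each sapling is (root_tag, [child_tags]). Saplings with the same root
    and sufficiently similar children are merged by taking the union of their
    children; dissimilar ones are kept as separate senses of the tag."""
    merged = []  # list of (root_tag, set_of_children)
    for root, children in saplings:
        for i, (mroot, mchildren) in enumerate(merged):
            if mroot == root and jaccard(children, mchildren) >= threshold:
                merged[i] = (mroot, mchildren | set(children))
                break
        else:
            merged.append((root, set(children)))
    return merged

saplings = [
    ("animal", ["dog", "cat", "bird"]),
    ("animal", ["dog", "horse"]),      # similar -> merged with the first
    ("animal", ["muppet", "kermit"]),  # a different sense of "animal"
]
print(merge_saplings(saplings))
```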

    Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers

    Scene parsing, or semantic segmentation, consists of labeling each pixel in an image with the category of the object it belongs to. It is a challenging task that involves the simultaneous detection, segmentation, and recognition of all the objects in the image. The scene parsing method proposed here starts by computing a tree of segments from a graph of pixel dissimilarities. Simultaneously, a set of dense feature vectors is computed which encodes regions of multiple sizes centered on each pixel. The feature extractor is a multiscale convolutional network trained from raw pixels. The feature vectors associated with the segments covered by each node in the tree are aggregated and fed to a classifier which produces an estimate of the distribution of object categories contained in the segment. A subset of tree nodes that covers the image is then selected so as to maximize the average "purity" of the class distributions, hence maximizing the overall likelihood that each segment will contain a single object. The convolutional network feature extractor is trained end-to-end from raw pixels, alleviating the need for engineered features. After training, the system is parameter free. The system yields record accuracies on the Stanford Background Dataset (8 classes), the Sift Flow Dataset (33 classes), and the Barcelona Dataset (170 classes) while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than one second. Comment: 9 pages, 4 figures; published in the 29th International Conference on Machine Learning (ICML 2012), June 2012, Edinburgh, United Kingdom.
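    The sketch below illustrates, under simplifying assumptions, the idea of selecting a cover of segmentation-tree nodes that maximizes size-weighted purity. It is a toy stand-in with hypothetical class and field names, not the paper's exact optimal-cover procedure.

```python
# Toy sketch: pick segmentation-tree nodes covering the image so that
# size-weighted "purity" (the largest entry of each node's predicted class
# distribution) is maximized.

class Node:
    def __init__(self, size, class_dist, children=()):
        self.size = size               # number of pixels in the segment
        self.purity = max(class_dist)  # confidence of the dominant class
        self.children = list(children)

def optimal_cover(node):
    """Return (cover_nodes, total_weighted_purity) for the subtree at node."""
    if not node.children:
        return [node], node.size * node.purity
    child_cover, child_score = [], 0.0
    for child in node.children:
        cover, score = optimal_cover(child)
        child_cover += cover
        child_score += score
    own_score = node.size * node.purity
    if own_score >= child_score:
        return [node], own_score      # keep the larger, purer segment
    return child_cover, child_score   # splitting improves purity

# Toy tree: the root mixes two classes, its children are nearly pure.
root = Node(100, [0.5, 0.5], [Node(60, [0.9, 0.1]), Node(40, [0.2, 0.8])])
cover, score = optimal_cover(root)
print(len(cover), score)  # 2 86.0
```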

    Optimal modularity: A demonstration of the evolutionary advantage of modular architectures

    Modularity is an important concept in evolutionary theorizing, but the lack of a consistent definition makes it difficult to study. Using the generalised NK-model of fitness landscapes, we differentiate modularity from decomposability. Modular and decomposable systems are both composed of subsystems, but in the former the subsystems are connected via interface standards, while in the latter the subsystems are completely isolated. We derive the optimal level of modularity, which minimises the time required to globally optimise a system, both for two-layered systems and for the general case of multi-layered hierarchical systems containing modules within modules. This derivation supports the hypothesis that modularity is a mechanism to increase the speed of evolution. Our formal definition clarifies the concept of modularity and provides a framework and an analytical baseline for further research. Keywords: Modularity, Decomposability, Near-decomposability, Complexity, NK-model, Search, Hierarchy
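    For readers unfamiliar with the model family, the sketch below implements a standard NK landscape and a one-bit-flip hill climber (not the generalised NK-model or the optimal-modularity derivation of the paper) to illustrate how the interaction parameter K shapes the difficulty of search.

```python
# Illustrative sketch of a standard NK fitness landscape: fitness of an
# N-bit string is the mean of per-locus contributions, each depending on
# the locus and its K circular neighbours.

import random

def make_nk_landscape(n, k, seed=0):
    rng = random.Random(seed)
    table = {}
    def fitness(bits):
        total = 0.0
        for i in range(n):
            neighbourhood = tuple(bits[(i + j) % n] for j in range(k + 1))
            key = (i, neighbourhood)
            if key not in table:
                table[key] = rng.random()  # lazily drawn contribution
            total += table[key]
        return total / n
    return fitness

def hill_climb(fitness, n, steps=1000, seed=1):
    """One-bit-flip local search; with larger K the landscape becomes more
    rugged and the search tends to stall on local optima sooner."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(bits)
    for _ in range(steps):
        i = rng.randrange(n)
        bits[i] ^= 1
        candidate = fitness(bits)
        if candidate >= best:
            best = candidate
        else:
            bits[i] ^= 1  # revert the flip
    return best

for k in (0, 4, 8):
    f = make_nk_landscape(n=12, k=k)
    print(k, round(hill_climb(f, n=12), 3))
```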