
    Document similarity

    In recent years, the development of tools and methods for measuring document similarity has become a thriving field in informatics, computer science, and digital humanities. Historically, questions of document similarity have been (and still are) important or even crucial in a large variety of situations. Typically, similarity is judged by criteria which depend on context. The move from traditional to digital text technology has not only provided new possibilities for discovering and measuring document similarity, it has also posed new challenges. Some of these challenges are technical, others conceptual. This paper argues that a particular, well-established, traditional way of starting with an arbitrary document and constructing a document similar to it, namely transcription, may fruitfully be brought to bear on questions concerning similarity criteria for digital documents. Some simple similarity measures are presented and their application to marked-up documents is discussed. We conclude that when documents are encoded in the same vocabulary, n-grams constructed to include markup can be used to recognize structural similarities between documents.
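
    The closing claim can be illustrated with a toy, markup-aware n-gram comparison. The sketch below is not the paper's implementation; the tokenizer, the choice of word-level 3-grams, and the use of Jaccard similarity are simplifying assumptions.

```python
# Minimal sketch: build n-grams over a token stream that keeps markup tags as
# tokens, then compare two encoded documents with Jaccard similarity.
import re

def tokens_with_markup(xml_text):
    """Split an XML/TEI-like string into tag tokens and word tokens."""
    return re.findall(r"</?[^>]+>|\w+", xml_text)

def ngrams(tokens, n=3):
    """Set of all consecutive n-token windows."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

doc_a = "<p>The <hi>quick</hi> brown fox</p>"
doc_b = "<p>The <hi>quick</hi> brown cat</p>"

sim = jaccard(ngrams(tokens_with_markup(doc_a)), ngrams(tokens_with_markup(doc_b)))
print(f"markup-aware 3-gram similarity: {sim:.2f}")
```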

    Structure and content semantic similarity detection of eXtensible markup language documents using keys

    XML (eXtensible Mark-up Language) has become the fundamental standard for efficient data management and exchange. Due to the widespread use of XML for describing and exchanging data on the web, XML-based comparison is a central issue in database management and information retrieval. In fact, although many heterogeneous XML sources have similar content, they may be described using different tag names and structures. This work proposes a series of algorithms for detecting structural and content changes among XML data. The first is an algorithm called XDoI (XML Data Integration Based on Content and Structure Similarity Using Keys) that clusters XML documents into subtrees using leaf-node parents as clustering points. This algorithm matches subtrees using the key concept and compares unmatched subtrees for similarities in both content and structure. The experimental results show that this approach finds much more accurate matches with or without the presence of keys in the subtrees. A second algorithm proposed here is called XDI-CSSK (a system for detecting XML similarity in content and structure using a relational database); it eliminates unnecessary clustering points using instance statistics and a taxonomic analyzer. As the number of subtrees to be compared is reduced, the overall execution time is reduced dramatically. Semantic similarity plays a crucial role in precise computational similarity measures. A third algorithm, called XML-SIM (structure and content semantic similarity detection using keys), builds on the previous work to detect XML semantic similarity based on structure and content. This algorithm improves on XDI-CSSK and XDoI in that it determines content similarity based on semantic structural similarity. In an experimental evaluation, it outperformed the previous approaches in terms of both execution time and false positive rates. Information changes periodically; therefore, it is important to be able to detect changes among different versions of an XML document and use that information to identify semantic similarities. Finally, this work introduces an approach to detect XML similarity and thus to join XML document versions using a change detection mechanism. In this approach, subtree keys still play an important role in avoiding unnecessary subtree comparisons across multiple versions of the same document. Real data sets from bibliographic domains demonstrate the effectiveness of all these algorithms.
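
    As an illustration of the clustering step described for XDoI, the following sketch splits a small XML document into subtrees rooted at leaf-node parents and derives a simple key for each. It is not the thesis code; the key construction and the example data are assumptions.

```python
# Minimal sketch: cluster an XML document into subtrees rooted at leaf-node
# parents (the clustering points) and derive an illustrative key for matching.
import xml.etree.ElementTree as ET

def leaf_node_parent_subtrees(root):
    """Yield elements all of whose children are leaves (no grandchildren)."""
    for elem in root.iter():
        children = list(elem)
        if children and all(len(child) == 0 for child in children):
            yield elem

def subtree_key(elem):
    """Illustrative key: element tag plus the text of its first leaf child."""
    first = list(elem)[0]
    return (elem.tag, (first.text or "").strip())

doc = ET.fromstring(
    "<bib><book><title>XML Matching</title><year>2009</year></book>"
    "<book><title>P2P Data</title><year>2011</year></book></bib>"
)

for sub in leaf_node_parent_subtrees(doc):
    print(subtree_key(sub), [(c.tag, c.text) for c in sub])
```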

    Online Analysis of Dynamic Streaming Data

    This thesis, "Online Analysis of Dynamic Streaming Data", addresses the distance measurement of dynamic, semi-structured data in continuous data streams, so that analyses of these data structures become possible already at runtime. To this end, a formalization of distance computation for static and dynamic trees is introduced and complemented by an explicit treatment of the dynamics of attributes at individual tree nodes. The real-time analysis based on this distance measurement is complemented by density-based clustering in order to demonstrate applications in clustering, classification, and anomaly detection. The results of this work rest on a theoretical analysis of the introduced formalization of distance measures for dynamic trees. These analyses are supported by empirical measurements on monitoring data of batch jobs from the batch system of the GridKa data and computing centre. The evaluation of the proposed formalization and of the real-time analysis methods built on it shows the efficiency and scalability of the approach. It is further shown that considering attributes and attribute statistics is of particular importance for the quality of analyses of dynamic, semi-structured data. The evaluation also shows that the quality of the results can be improved further by independently combining several distances. In particular, the results of this work make it possible to analyse data that changes over time.
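
    The idea of combining several independently computed distances can be sketched roughly as follows. This is not the thesis formalization; the tree representation, the two component distances, and the weighting are illustrative assumptions.

```python
# Minimal sketch: a structural distance on node labels plus an attribute
# distance on per-node numeric statistics, combined as a weighted sum.
from collections import Counter

def labels(node):
    yield node["label"]
    for child in node.get("children", []):
        yield from labels(child)

def label_distance(tree_a, tree_b):
    """Multiset difference of node labels, normalised to [0, 1]."""
    ca, cb = Counter(labels(tree_a)), Counter(labels(tree_b))
    diff = sum((ca - cb).values()) + sum((cb - ca).values())
    return diff / (sum(ca.values()) + sum(cb.values()))

def attribute_distance(attrs_a, attrs_b):
    """Mean absolute difference of shared numeric attributes, capped at 1."""
    shared = set(attrs_a) & set(attrs_b)
    if not shared:
        return 1.0
    return min(1.0, sum(abs(attrs_a[k] - attrs_b[k]) for k in shared) / len(shared))

def combined_distance(tree_a, tree_b, attrs_a, attrs_b, w_struct=0.5):
    return (w_struct * label_distance(tree_a, tree_b)
            + (1 - w_struct) * attribute_distance(attrs_a, attrs_b))

t1 = {"label": "job", "children": [{"label": "cpu"}, {"label": "io"}]}
t2 = {"label": "job", "children": [{"label": "cpu"}]}
print(combined_distance(t1, t2, {"cpu_load": 0.8}, {"cpu_load": 0.6}))
```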

    Survey over Existing Query and Transformation Languages

    A widely acknowledged obstacle for realizing the vision of the Semantic Web is the inability of many current Semantic Web approaches to cope with data available in such diverging representation formalisms as XML, RDF, or Topic Maps. A common query language is the first step to allow transparent access to data in any of these formats. To further the understanding of the requirements and approaches proposed for query languages in the conventional as well as the Semantic Web, this report surveys a large number of query languages for accessing XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from all these areas. From the detailed survey of these query languages, a common classification scheme is derived that is useful for understanding and differentiating languages within and among all three areas

    A Two-Level Identity Model To Support Interoperability of Identity Information in Electronic Health Record Systems.

    The sharing and retrieval of health information for an electronic health record (EHR) across distributed systems involves a range of identified entities that are possible subjects of documentation (e.g., specimen, clinical analyser). Contemporary EHR specifications limit the types of entities that can be the subject of a record to health professionals and patients, thus limiting the use of two-level models in healthcare information systems that contribute information to the EHR. The literature describes several information modelling approaches for EHRs, including so-called "two-level models". These models differ in the amount of structure imposed on the information to be recorded, but they generally require the health documentation process for the EHR to focus exclusively on the patient as the subject of care, and this definition is often fixed. In this thesis, the author introduces a new identity modelling approach to create a generalised reference model for sharing archetype-constrained identity information between diverse identity domains, models, and services, while permitting reuse of published standards-based archetypes. The author evaluates its use for expressing the major types of existing demographic reference models in an extensible way, and shows its application for standards-compliant two-level modelling alongside heterogeneous demographics models. This thesis demonstrates how the two-level modelling approach used for EHRs can be adapted and reapplied to provide a highly flexible and expressive means for representing subjects of information in allied health settings that support the healthcare process, such as the laboratory domain. By relying on the two-level modelling approach for representing identity, the proposed design facilitates cross-referencing and disambiguation of certain demographics standards and information models. The work also demonstrates how the model can be used to represent additional identified clinical entities, such as specimen and order, as subjects of clinical documentation.
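
    A rough sketch of the two-level idea applied to identity information appears below. It is not the thesis reference model; the field names, the archetype format, and the conformance check are invented for illustration.

```python
# Minimal sketch: a generic identity entry (level one, reference model) is
# validated against an archetype-like constraint definition (level two).
IDENTIFIED_ENTITY = {  # level one: generic reference-model instance
    "entity_type": "SPECIMEN",
    "identifiers": [{"scheme": "lab-lis", "value": "SPC-0042"}],
    "attributes": {"collected_by": "HCP-17", "container": "EDTA tube"},
}

SPECIMEN_ARCHETYPE = {  # level two: constraints on the generic model
    "entity_type": "SPECIMEN",
    "required_attributes": {"collected_by", "container"},
    "identifier_schemes": {"lab-lis"},
}

def conforms(instance, archetype):
    """Check a reference-model instance against an archetype's constraints."""
    return (
        instance["entity_type"] == archetype["entity_type"]
        and archetype["required_attributes"] <= instance["attributes"].keys()
        and all(i["scheme"] in archetype["identifier_schemes"]
                for i in instance["identifiers"])
    )

print(conforms(IDENTIFIED_ENTITY, SPECIMEN_ARCHETYPE))  # True
```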

    Operational change management and change pattern identification for ontology evolution

    Ontologies can support a variety of purposes, ranging from capturing conceptual knowledge to the organization of digital content and information. However, information systems are always subject to change, and ontology change management poses challenges of its own. Representing applied ontology changes in terms of higher-level change operations conveys more meaningful semantics about each change. We propose a four-phase process that covers the operationalization, representation, and detection of higher-level changes in the ontology evolution life cycle. We present different levels of change operators based on the granularity and domain-specificity of changes. The first layer is based on generic atomic-level change operators, whereas the next two layers are user-defined (generic or domain-specific) change patterns. We introduce layered change logs for an explicit and complete operational representation of ontology changes. The layered change log model serves two purposes: recording ontology changes and mining implicit knowledge such as the intent of a change and recurring change patterns. We formalize the change log using a graph-based approach. We introduce a technique to identify composite changes that not only formulates ontology change log data more concisely, but also helps reveal the semantics and intent behind any applied change. Furthermore, we discover reusable ordered and unordered domain-specific change patterns. We describe the pattern mining algorithms and evaluate their performance.
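
    The notion of collapsing atomic changes into a composite change can be sketched as follows. This is not the thesis algorithm; the log entry format and the single "move class" pattern rule are assumptions made for the example.

```python
# Minimal sketch: scan a log of atomic ontology changes and collapse a matching
# delete/add pair of subclass axioms into a composite "move_class" change.
atomic_log = [
    ("del_subclass_of", "Sedan", "Vehicle"),
    ("add_subclass_of", "Sedan", "Car"),
    ("add_class", "ElectricCar"),
]

def detect_composites(log):
    composites, used = [], set()
    for i, (op1, cls1, old_parent) in enumerate(log):
        if op1 != "del_subclass_of":
            continue
        for j, (op2, cls2, new_parent) in enumerate(log):
            if j in used or op2 != "add_subclass_of" or cls2 != cls1:
                continue
            composites.append(("move_class", cls1, old_parent, new_parent))
            used.update({i, j})
            break
    remaining = [entry for k, entry in enumerate(log) if k not in used]
    return composites + remaining

print(detect_composites(atomic_log))
# [('move_class', 'Sedan', 'Vehicle', 'Car'), ('add_class', 'ElectricCar')]
```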

    Security-Policy Analysis with eXtended Unix Tools

    During our fieldwork with real-world organizations---including those in Public Key Infrastructure (PKI), network configuration management, and the electrical power grid---we repeatedly noticed that security policies and related security artifacts are hard to manage. We observed three core limitations of security policy analysis that contribute to this difficulty. First, there is a gap between policy languages and the tools available to practitioners. Traditional Unix text-processing tools are useful, but practitioners cannot use these tools to operate on the high-level languages in which security policies are expressed and implemented. Second, practitioners cannot process policy at multiple levels of abstraction, but they need this capability because many high-level languages encode hierarchical object models. Finally, practitioners need feedback to be able to measure how security policies, and the policy artifacts that implement them, change over time. We designed and built our eXtended Unix tools (XUTools) to address these limitations of security policy analysis. First, our XUTools operate upon context-free languages so that they can handle the hierarchical object models of high-level policy languages. Second, our XUTools operate on parse trees so that practitioners can process and analyze texts at multiple levels of abstraction. Finally, our XUTools enable new computational experiments on multi-versioned structured texts and allow practitioners to measure security policies and how they change over time. Just as programmers use high-level languages to program more efficiently, so can practitioners use these tools to analyze texts relative to a high-level language. Throughout the historical transmission of text, people have identified meaningful substrings of text and categorized them into groups such as sentences, pages, lines, function blocks, and books, to name a few. Our research interprets these useful structures as different context-free languages with which we can analyze text. XUTools are already in demand by practitioners in a variety of domains, and articles on our research have been featured in news outlets including ComputerWorld, CIO Magazine, Communications of the ACM, and Slashdot.
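
    The idea of processing text relative to a context-free structure rather than line by line can be sketched with a toy, structure-aware grep. This is not the XUTools implementation; the IOS-style config, the block grammar, and the reporting format are simplifying assumptions.

```python
# Minimal sketch: parse a config into named blocks (a toy hierarchical
# structure) and report matches per block instead of per line.
import re

CONFIG = """\
interface GigabitEthernet0/1
 ip address 10.0.0.1 255.255.255.0
 shutdown
interface GigabitEthernet0/2
 ip address 10.0.1.1 255.255.255.0
"""

def parse_blocks(text):
    """Group indented lines under their 'interface ...' header line."""
    blocks, current = {}, None
    for line in text.splitlines():
        if line.startswith("interface "):
            current = line.strip()
            blocks[current] = []
        elif current and line.startswith(" "):
            blocks[current].append(line.strip())
    return blocks

def structural_grep(pattern, blocks):
    """Return the names of blocks containing a line that matches the pattern."""
    return [name for name, lines in blocks.items()
            if any(re.search(pattern, line) for line in lines)]

print(structural_grep(r"^shutdown$", parse_blocks(CONFIG)))
# ['interface GigabitEthernet0/1']
```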

    Matching Metamodels with Semantic Systems - An Experience Report

    Ontology and schema matching are well-established techniques, which have been applied in various integration scenarios, e.g., web service composition and database integration. Consequently, matching tools enabling automatic matching of various kinds of schemas are available. In the field of model-driven engineering, in contrast to schema and ontology integration, the integration of modeling languages relies on manual tasks such as writing model transformation code, which is tedious and error-prone. Therefore, we propose the application of ontology and schema matching techniques for automatically exploring semantic correspondences between metamodels, which are currently the modeling language definitions of choice. The main focus of this paper is on reporting preliminary results and lessons learned from evaluating currently available ontology matching tools for their metamodel matching potential.
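
    A minimal sketch of lexical correspondence search between two metamodels is shown below. It does not reproduce any of the evaluated matching tools; the element lists, the string-similarity measure, and the threshold are assumptions for illustration.

```python
# Minimal sketch: compare metamodel element names with a normalised string
# similarity and keep candidate correspondences above a threshold.
from difflib import SequenceMatcher

uml_elements = ["Class", "Attribute", "Association"]
er_elements = ["EntityType", "Attribute", "Relationship"]

def match_elements(left, right, threshold=0.6):
    pairs = []
    for l in left:
        for r in right:
            score = SequenceMatcher(None, l.lower(), r.lower()).ratio()
            if score >= threshold:
                pairs.append((l, r, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])

print(match_elements(uml_elements, er_elements))
# [('Attribute', 'Attribute', 1.0)]
```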

    Data replication and update propagation in XML P2P data management systems

    XML P2P data management systems are P2P systems that use XML as the underlying data format shared between peers in the network. These systems aim to bring the benefits of XML and P2P systems to the distributed data management field. However, P2P systems are known for their lack of central control and high degree of autonomy. Peers may leave the network at any time at will, increasing the risk of data loss. Despite this, most research on XML P2P systems focuses on novel and efficient XML indexing and retrieval techniques. Mechanisms for ensuring data availability in XML P2P systems have received comparatively little attention. This project attempts to address this issue. We design an XML P2P data management framework to improve data availability. This framework includes mechanisms for wide-spread data replication, replica location, and update propagation. It allows XML documents to be broken down into fragments. By doing so, we aim to reduce the cost of replicating data by distributing smaller XML fragments throughout the network rather than entire documents. To tackle the data replication problem, we propose a suite of selection and placement algorithms that may be interchanged to form a particular replication strategy. To support the placement of replicas anywhere in the network, we use a Fragment Location Catalogue, a global index that maintains the locations of replicas. We also propose a lazy update propagation algorithm to propagate updates to replicas. Experiments show that the data replication algorithms improve data availability in our experimental network environment. We also find that breaking XML documents into smaller pieces and replicating those instead of whole XML documents considerably reduces the replication cost, but at the price of some loss in data availability. For the update propagation tests, we find that the probability that queries return up-to-date results increases, but improvements to the algorithm are necessary to handle environments with high update rates.
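
    The interplay of the Fragment Location Catalogue and lazy update propagation can be sketched roughly as follows. This is not the project's framework; the data structures, fragment identifiers, and versioning scheme are assumptions made for the example.

```python
# Minimal sketch: a catalogue mapping fragment ids to replica peers, plus a
# lazy propagation step that flushes queued updates to replicas on demand.
from collections import defaultdict, deque

catalogue = defaultdict(set)      # fragment id -> peers holding a replica
pending = defaultdict(deque)      # fragment id -> queued (version, payload) updates
replica_state = {}                # (peer, fragment id) -> latest applied version

def place_replica(fragment_id, peer):
    catalogue[fragment_id].add(peer)
    replica_state[(peer, fragment_id)] = 0

def update_fragment(fragment_id, version, payload):
    """Record an update locally; replicas are refreshed lazily, not immediately."""
    pending[fragment_id].append((version, payload))

def propagate(fragment_id):
    """Flush all queued updates for a fragment to every known replica."""
    while pending[fragment_id]:
        version, _payload = pending[fragment_id].popleft()
        for peer in catalogue[fragment_id]:
            replica_state[(peer, fragment_id)] = version

place_replica("bib/book[1]", "peerA")
place_replica("bib/book[1]", "peerB")
update_fragment("bib/book[1]", 1, "<book>...</book>")
propagate("bib/book[1]")
print(replica_state)  # both peers now at version 1
```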