1,869 research outputs found

    Mistakes in medical ontologies: Where do they come from and how can they be detected?

    We present the details of a methodology for quality assurance in large medical terminologies and describe three algorithms that can help terminology developers and users to identify potential mistakes. The methodology is based in part on linguistic criteria and in part on logical and ontological principles governing sound classifications. We conclude by outlining the results of applying the methodology in the form of a taxonomy of the different types of errors and potential errors detected in SNOMED-CT.
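    A minimal sketch of the kind of linguistic check the abstract describes: flag terms whose name pattern suggests a parent class that the recorded hierarchy does not contain. The term names, hierarchy, and check below are illustrative inventions, not the paper's algorithms or actual SNOMED-CT content.

    ```python
    def flag_suspect_terms(is_a, terms):
        """Flag terms of the form 'X of Y' that are not classified under X."""
        suspects = []
        for term in sorted(terms):
            if " of " in term:
                head = term.split(" of ")[0]
                if head in terms and head not in is_a.get(term, set()):
                    suspects.append(term)
        return suspects

    # Toy hierarchy: each term maps to its set of recorded parents.
    is_a = {
        "fracture of femur": {"fracture"},   # consistent with its name
        "disorder of bone": {"finding"},     # suspicious: not under "disorder"
    }
    terms = {"fracture", "disorder", "fracture of femur", "disorder of bone"}
    print(flag_suspect_terms(is_a, terms))  # → ['disorder of bone']
    ```

    Checks like this are cheap heuristics: they only surface candidates for human review, since a name pattern is evidence, not proof, of a misclassification.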

    "More Deterministic" vs. "Smaller" Buechi Automata for Efficient LTL Model Checking

    The standard technique for LTL model checking (M ⊨ φ) consists in translating the negation ¬φ of the LTL specification into a Büchi automaton A_¬φ, and then checking whether the product M × A_¬φ has an empty language. Efforts to maximize the efficiency of this process have so far concentrated on developing translation algorithms that produce Büchi automata which are "as small as possible", under the implicit conjecture that smaller automata should make the final product smaller. In this paper we build on a different conjecture and present an alternative approach in which we instead generate Büchi automata which are "as deterministic as possible", in the sense that we try to reduce, as far as we can, the number of non-deterministic decision states in A_¬φ. We motivate our choice and present some empirical tests to support this approach.
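    A small sketch (not the authors' tool) of the quantity the paper proposes to minimise: the non-deterministic decision states of an automaton, i.e. states with two or more outgoing transitions on the same letter. The example automaton, for "eventually p", is a standard textbook construction.

    ```python
    from collections import defaultdict

    def nondet_states(transitions):
        """transitions: iterable of (state, letter, successor) triples.
        Return the states with >1 successor on some letter."""
        fanout = defaultdict(set)
        for state, letter, succ in transitions:
            fanout[(state, letter)].add(succ)
        return {s for (s, _), succs in fanout.items() if len(succs) > 1}

    # Büchi automaton for F p ("eventually p"), written non-deterministically:
    # in q0 on reading p we guess whether to move to the accepting state q1.
    trans = [("q0", "p", "q0"), ("q0", "p", "q1"),
             ("q0", "!p", "q0"), ("q1", "p", "q1"), ("q1", "!p", "q1")]
    print(nondet_states(trans))  # → {'q0'}
    ```

    The paper's conjecture is that shrinking this set, even at the cost of a few extra states, tends to shrink the product M × A_¬φ that the model checker actually explores.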

    Abstract Canonical Inference

    An abstract framework of canonical inference is used to explore how different proof orderings induce different variants of saturation and completeness. Notions like completion, paramodulation, saturation, redundancy elimination, and rewrite-system reduction are connected to proof orderings. Fairness of deductive mechanisms is defined in terms of proof orderings, distinguishing between (ordinary) "fairness," which yields completeness, and "uniform fairness," which yields saturation. Comment: 28 pages, no figures, to appear in ACM Transactions on Computational Logic.

    A new methodology for automatic fault tree construction based on component and mark libraries

    During the design stage of a new system, automated fault tree construction would produce results far sooner than the manual process, making it highly beneficial for modifying the system design based on its identified weakest areas. Although much work has been done in this area, fault trees are still generally constructed manually. In this paper, a new methodology for constructing fault trees from a system description is proposed. Multi-state input/output tables are introduced, which can capture output deviations both during the normal operation of a component and under the influence of abnormality or failure. Two libraries are introduced: a component library, which stores component models, and a mark library, which stores a range of marks. The main purpose of a mark is to identify a certain feature of the system, such as a feedback loop or multiple redundancies. These two libraries are used to redraw the system in a graphical environment, where the designer can watch the system come together and input the necessary failure data for each component. An algorithm has been developed that uses the input/output tables and marks to automatically construct fault trees for failure modes of interest. To demonstrate the methodology, it is applied to an automotive emission control system, and a fault tree is generated using the algorithm developed in this work.
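    An illustrative sketch, not the paper's algorithm: a component's multi-state input/output table can be read as a map from (input state, internal failure mode) to an output state, and an output deviation is traced back to its possible causes by inverting that map. The component, states, and failure modes below are made up for illustration.

    ```python
    def causes_of(io_table, deviation):
        """Return the (input state, failure mode) pairs producing the deviation."""
        return [(inp, fm) for (inp, fm), out in io_table.items() if out == deviation]

    # Hypothetical I/O table for a valve; "none" means no internal failure.
    valve = {
        ("flow", "none"):          "flow",     # normal operation
        ("flow", "stuck_shut"):    "no_flow",  # internal failure blocks flow
        ("no_flow", "none"):       "no_flow",  # deviation propagated from input
        ("no_flow", "stuck_open"): "no_flow",
    }
    print(causes_of(valve, "no_flow"))
    ```

    Repeating this inversion recursively, component by component back through the system description, is the basic backward-tracing step from which a fault tree for a failure mode of interest can be assembled.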

    All the World's a (Hyper)Graph: A Data Drama

    We introduce Hyperbard, a dataset of diverse relational data representations derived from Shakespeare's plays. Our representations range from simple graphs capturing character co-occurrence in single scenes to hypergraphs encoding complex communication settings and character contributions as hyperedges with edge-specific node weights. By making multiple intuitive representations readily available for experimentation, we facilitate rigorous representation robustness checks in graph learning, graph mining, and network analysis, highlighting the advantages and drawbacks of specific representations. Leveraging the data released in Hyperbard, we demonstrate that many solutions to popular graph mining problems are highly dependent on the representation choice, thus calling current graph curation practices into question. As an homage to our data source, and asserting that science can also be art, we present all our points in the form of a play.
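    A small sketch of the representation choice the abstract highlights: the same scenes rendered as a pairwise co-occurrence graph and as a hypergraph whose hyperedges are whole scenes. The characters and scenes here are invented, not taken from Hyperbard.

    ```python
    from itertools import combinations

    # Two toy scenes, each listed as the set of characters on stage.
    scenes = [["Rosencrantz", "Guildenstern", "Hamlet"], ["Hamlet", "Ophelia"]]

    # Graph view: one edge per character pair that shares a scene.
    edges = {frozenset(p) for scene in scenes for p in combinations(scene, 2)}

    # Hypergraph view: one hyperedge per scene, keeping group structure intact.
    hyperedges = [frozenset(scene) for scene in scenes]

    print(len(edges), len(hyperedges))  # → 4 2
    ```

    The graph view cannot tell a three-way conversation apart from three separate pairwise meetings, which is exactly the kind of information loss that makes downstream graph mining results representation-dependent.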

    Semantic optimisation in datalog programs

    Datalog is the fusion of Prolog and database technologies, aimed at producing an efficient, logic-based, declarative language for databases. This fusion takes the best of logic programming for the syntax of Datalog, and the best of database systems for its operational part. As with all declarative languages, optimisation is necessary to improve the efficiency of programs. Semantic optimisation uses meta-knowledge describing the data in the database to optimise queries and rules, aiming to reduce the resources required to answer queries. In this thesis, I analyse prior work on semantic optimisation and then propose an optimisation system for Datalog that includes optimisation of recursive programs and a semantic knowledge management module. A language, DatalogiC, an extension of Datalog that allows semantic knowledge to be expressed, has also been devised as an implementation vehicle. Finally, empirical results concerning the benefits of semantic optimisation are reported.
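    A toy illustration (not DatalogiC and not the thesis's system) of the core idea of semantic optimisation: an integrity constraint known to hold over the data lets the optimiser tighten a query's conditions, or answer a contradictory query as empty without scanning any relation.

    ```python
    def optimise(query_bounds, constraint_bounds):
        """Intersect a query's numeric range with a known integrity
        constraint; None means the query is provably empty."""
        lo = max(query_bounds[0], constraint_bounds[0])
        hi = min(query_bounds[1], constraint_bounds[1])
        return None if lo > hi else (lo, hi)

    # Hypothetical constraint: every stored salary lies in [0, 200_000].
    print(optimise((-50, -1), (0, 200_000)))   # → None: unsatisfiable query
    print(optimise((-50, 100), (0, 200_000)))  # → (0, 100): range tightened
    ```

    The same principle extends to Datalog rules: a constraint can prune a rule whose body contradicts it, which matters most for recursive programs, where a pruned rule removes an entire branch of the fixpoint computation.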

    The Use of the Belief Revision Concept to Ontology Revision
