
    Institutionalising Ontology-Based Semantic Integration

    We address what is still a scarcity of general mathematical foundations for ontology-based semantic integration underlying current knowledge engineering methodologies in decentralised and distributed environments. After recalling the first-order ontology-based approach to semantic integration and a formalisation of ontological commitment, we propose a general theory that uses a syntax- and interpretation-independent formulation of language, ontology, and ontological commitment in terms of institutions. We claim that our formalisation generalises the intuitive notion of ontology-based semantic integration while retaining its basic insight, and we apply it for eliciting and hence comparing various increasingly complex notions of semantic integration and ontological commitment based on differing understandings of semantics.

    Reasoning About a Simulated Printer Case Investigation with Forensic Lucid

    In this work we model the ACME (a fictitious company name) "printer case incident" and make its specification in Forensic Lucid, a Lucid- and intensional-logic-based programming language for cyberforensic analysis and event reconstruction specification. The printer case involves a dispute between two parties that was previously solved using the finite-state automata (FSA) approach, and is now re-done in a more usable way in Forensic Lucid. Our simulation is based on the said case modeling by encoding concepts like evidence and the related witness accounts as an evidential statement context in a Forensic Lucid program, which is an input to the transition function that models the possible deductions in the case. We then invoke the transition function (actually its reverse) with the evidential statement context to see if the evidence we encoded agrees with one's claims, and then attempt to reconstruct the sequence of events that may explain the claim or disprove it. Comment: 18 pages, 3 figures, 7 listings, TOC, index; this article closely relates to arXiv:0906.0049 and arXiv:0904.3789 but to remain stand-alone repeats some of the background and introductory content; abstract presented at HSC'09 and the full updated paper at ICDF2C'11. This is an updated/edited version after ICDF2C proceedings with more references and corrections.
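
    The core idea of inverting a transition function to explain evidence can be illustrated with a toy sketch. This is a loose analogy in Python, not actual Forensic Lucid: the printer queue, the event names, and the evidence encoding below are all invented for illustration.

```python
from itertools import product

# Hypothetical mini-model of the printer-case idea (not actual Forensic
# Lucid): states are printer-queue contents, events modify the queue, and
# the evidential statement is the final observation we must explain.

EVENTS = ["add_job_A", "add_job_B", "take_job"]

def transition(state, event):
    """Forward transition function: apply one event to a queue state."""
    queue = list(state)
    if event == "add_job_A":
        queue.append("A")
    elif event == "add_job_B":
        queue.append("B")
    elif event == "take_job" and queue:
        queue.pop(0)
    return tuple(queue)

def reconstruct(evidence, max_len=4):
    """Inverse analysis: enumerate event sequences whose outcome matches
    the observed evidence, as a brute-force stand-in for back-tracing."""
    explanations = []
    for n in range(max_len + 1):
        for seq in product(EVENTS, repeat=n):
            state = ()
            for event in seq:
                state = transition(state, event)
            if state == evidence:
                explanations.append(seq)
    return explanations

# Which histories leave exactly job "B" in the queue?
for seq in reconstruct(("B",), max_len=3):
    print(seq)
```

    Each printed sequence is one candidate history consistent with the evidence; a real analysis would then weigh these against the witness accounts.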

    Reason Maintenance - Conceptual Framework

    This paper describes the conceptual framework for reason maintenance developed as part of WP2

    Generating Multiple Outputs from Ω

    In this paper, we describe how to generate multiple outputs (DVI, PostScript, PDF, XML, ...) from the same Ω document. The Ω engine is augmented with a library for manipulating multidimensional contexts. Each macro can be defined in multiple versions, and macros can thereby adapt to differing contexts. Macros can be specialized for several different output formats, without changing the overall structure. As a result, the same document can be used to easily produce different output formats, with appropriate specializations for each of them, without having to make any changes to the document itself.
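
    The mechanism of per-format macro specializations selected by a context can be sketched in miniature. This is a Python analogy, not Ω/TeX; the macro names and context keys are invented for illustration.

```python
# One "macro" with per-format specializations, chosen at expansion time by
# the current output context (a loose analogy to Omega's context library).

MACROS = {}

def defmacro(name, fmt):
    """Register a specialization of macro `name` for output format `fmt`."""
    def register(fn):
        MACROS[(name, fmt)] = fn
        return fn
    return register

def expand(name, context, *args):
    """Expand a macro under a context, picking the matching specialization."""
    return MACROS[(name, context["output"])](*args)

@defmacro("emph", "xml")
def emph_xml(text):
    return f"<em>{text}</em>"

@defmacro("emph", "dvi")
def emph_dvi(text):
    return f"\\it {text}"

# The same "document", rendered for two output formats without changes:
print(expand("emph", {"output": "xml"}, "context matters"))
print(expand("emph", {"output": "dvi"}, "context matters"))
```

    The document source stays fixed; only the context dimension selecting the output format changes between runs.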

    An extensible approach to high-quality multilingual typesetting

    We propose to create and study a new model for the micro-typography part of automated multilingual typesetting. This new model will support quality typesetting for a number of modern and ancient scripts. The major innovations in the proposal are: the process is refined into four phases, each dependent on a multidimensional tree-structured context summarizing the current linguistic and cultural environment. The four phases are: preparing the input stream for typesetting; segmenting the stream into clusters (words); typesetting these clusters; and then recombining the clusters into a typeset text stream. The context is pervasive throughout the process; the algorithms used in each phase are context-dependent, as are the meanings of fundamental entities such as language, script, font and character.
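
    The four-phase, context-dependent pipeline can be sketched as follows. This is a hypothetical Python sketch: the segmentation and shaping rules are trivial placeholders, not the model's actual algorithms.

```python
# Four context-dependent phases: prepare -> segment -> typeset -> recombine.
# The context summarizes the linguistic environment; here it is a flat dict
# standing in for the proposal's multidimensional tree-structured context.

context = {"language": "en", "script": "latin"}

def prepare(stream, ctx):
    """Phase 1: normalize the input stream for typesetting."""
    return stream.strip()

def segment(stream, ctx):
    """Phase 2: split the stream into clusters (words); context-dependent."""
    return stream.split() if ctx["script"] == "latin" else list(stream)

def typeset(clusters, ctx):
    """Phase 3: typeset each cluster (a stand-in shaping step)."""
    return [c.upper() for c in clusters]

def recombine(clusters, ctx):
    """Phase 4: recombine typeset clusters into an output stream."""
    return " ".join(clusters)

stream = "  quality multilingual typesetting  "
out = recombine(typeset(segment(prepare(stream, context), context), context), context)
print(out)  # QUALITY MULTILINGUAL TYPESETTING
```

    Because the context threads through every phase, switching it (say, to a script without word spaces) changes segmentation without touching the pipeline's structure.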

    Taxonomy for Humans or Computers? Cognitive Pragmatics for Big Data

    Criticism of big data has focused on showing that more is not necessarily better, in the sense that data may lose their value when taken out of context and aggregated together. The next step is to incorporate an awareness of pitfalls for aggregation into the design of data infrastructure and institutions. A common strategy minimizes aggregation errors by increasing the precision of our conventions for identifying and classifying data. As a counterpoint, we argue that there are pragmatic trade-offs between precision and ambiguity that are key to designing effective solutions for generating big data about biodiversity. We focus on the importance of theory-dependence as a source of ambiguity in taxonomic nomenclature and hence a persistent challenge for implementing a single, long-term solution to storing and accessing meaningful sets of biological specimens. We argue that ambiguity does have a positive role to play in scientific progress, as a tool for efficiently symbolizing multiple aspects of taxa and mediating between conflicting hypotheses about their nature. Pursuing a deeper understanding of the trade-offs and synthesis of precision and ambiguity as virtues of scientific language and communication systems then offers a productive next step for realizing sound, big biodiversity data services.

    Quantitative Concept Analysis

    Formal Concept Analysis (FCA) begins from a context, given as a binary relation between some objects and some attributes, and derives a lattice of concepts, where each concept is given as a set of objects and a set of attributes, such that the first set consists of all objects that satisfy all attributes in the second, and vice versa. Many applications, though, provide contexts with quantitative information, telling not just whether an object satisfies an attribute, but also quantifying this satisfaction. Contexts in this form arise as rating matrices in recommender systems, as occurrence matrices in text analysis, as pixel intensity matrices in digital image processing, etc. Such applications have attracted a lot of attention, and several numeric extensions of FCA have been proposed. We propose the framework of proximity sets (proxets), which subsume partially ordered sets (posets) as well as metric spaces. One feature of this approach is that it extracts quantified concepts from quantified contexts, and thus allows full use of the available information. Another feature is that the categorical approach allows analyzing any universal properties that the classical FCA and the new versions may have, and thus provides structural guidance for aligning and combining the approaches. Comment: 16 pages, 3 figures, ICFCA 201
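
    The classical (binary) starting point described in the first sentence can be made concrete with a small sketch: derive all formal concepts of a toy context by closing each subset of objects. The context below is invented for illustration; a real tool would use an efficient algorithm rather than this brute-force enumeration.

```python
from itertools import chain, combinations

# Toy binary context: which objects have which attributes.
objects = ["o1", "o2", "o3"]
attributes = ["a", "b", "c"]
incidence = {
    ("o1", "a"), ("o1", "b"),
    ("o2", "b"), ("o2", "c"),
    ("o3", "b"),
}

def common_attributes(objs):
    """Attributes shared by every object in objs (the derivation operator)."""
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objects(attrs):
    """Objects having every attribute in attrs (the dual derivation operator)."""
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

def concepts():
    """All formal concepts (extent, intent): closing each subset of objects
    yields every concept, since extents are exactly the closed object sets."""
    found = set()
    subsets = chain.from_iterable(
        combinations(objects, r) for r in range(len(objects) + 1))
    for objs in subsets:
        intent = common_attributes(set(objs))
        extent = common_objects(intent)
        found.add((frozenset(extent), frozenset(intent)))
    return found

for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

    A quantitative extension in the proxet spirit would replace the yes/no incidence with a degree of satisfaction, so that the derivation operators return graded rather than crisp sets.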

    Developing a group model for student software engineering teams

    Work on developing team models for use in adaptive systems generally, and intelligent tutoring systems more specifically, has largely focused on the task skills or learning efficacy of teams working on short-term projects in highly controlled virtual environments. In this work, we report on the development of a balanced team model that takes into account task skills, teamwork behaviours and team workflow, and that has been empirically evaluated via an uncontrolled real-world long-term pilot study of student software engineering teams. We also discuss the use of the J4.8 machine learning algorithm with our team model in the construction of a team performance prediction system.
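
    J4.8 (Weka's implementation of the C4.5 family) grows a decision tree by repeatedly splitting on the most informative feature. A minimal sketch of that split criterion, with invented team records and feature names purely for illustration:

```python
import math
from collections import Counter

# Hypothetical team records (all values invented): each row pairs three
# model dimensions (task_skill, teamwork, workflow) with a performance label.
DATA = [
    (("hi", "hi", "hi"), "good"),
    (("hi", "lo", "hi"), "good"),
    (("lo", "hi", "lo"), "poor"),
    (("lo", "lo", "hi"), "poor"),
    (("hi", "hi", "lo"), "good"),
    (("lo", "lo", "lo"), "poor"),
]

def entropy(labels):
    """Shannon entropy of a label multiset."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, feature):
    """Entropy reduction from splitting on one feature index, the criterion
    the C4.5 family uses (J4.8 additionally normalizes by split info)."""
    base = entropy([label for _, label in rows])
    groups = {}
    for feats, label in rows:
        groups.setdefault(feats[feature], []).append(label)
    remainder = sum(len(ls) / len(rows) * entropy(ls) for ls in groups.values())
    return base - remainder

gains = {f: information_gain(DATA, f) for f in range(3)}
best = max(gains, key=gains.get)
print("best split feature:", ["task_skill", "teamwork", "workflow"][best])
```

    In the toy data above, task skill perfectly separates the labels, so the tree would split on it first; a full J4.8 run recurses on each branch and then prunes.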

    Viewpoints on emergent semantics

    Authors include: Philippe Cudré-Mauroux and Karl Aberer (editors), Alia I. Abdelmoty, Tiziana Catarci, Ernesto Damiani, Arantxa Illaramendi, Robert Meersman, Erich J. Neuhold, Christine Parent, Kai-Uwe Sattler, Monica Scannapieco, Stefano Spaccapietra, Peter Spyns, and Guy De Tré. We introduce a novel view on how to deal with the problems of semantic interoperability in distributed systems. This view is based on the concept of emergent semantics, which sees both the representation of semantics and the discovery of the proper interpretation of symbols as the result of a self-organizing process performed by distributed agents exchanging symbols and having utilities dependent on the proper interpretation of the symbols. This is a complex systems perspective on the problem of dealing with semantics. We highlight some of the distinctive features of our vision and point out preliminary examples of its application.

    The Mysterious Appearance of Objects

    Moving from some reflections on the empirical practice of measurement and on the nature of visual perception, we present a constructivist approach to objects. At the basis of such an approach is the idea that all we may know about what is out there is always mediated by some sort of apparatus, be it a measurement instrument or our perceptual system. Given this perspective, some questions are in order: how are objects identified and re-identified through time from the outcomes of apparatuses? How can we distinguish different (kinds of) objects? Our first goal will be to make explicit the mechanism used to build objects from the apparatuses' outcomes, emphasizing the ontological and representational problems this construction faces. A second contribution will be a preliminary discussion of some possible ways to distinguish social objects, the constructed objects par excellence, from physical ones. A third contribution will be an attempt to build a bridge between two scientific communities that rarely seek contact or mutual recognition: that of formal ontologies and that of formal concept analysis.