12 research outputs found

    Unique Names Violations, a Problem for Model Integration, You Say Tomato, I Say Tomahto

    The article of record as published may be found at https://doi.org/10.1287/ijoc.3.2.107. The tomato-tomahto problem (known as the synonymy problem in the database literature) arises in the context of model management when different names are used in different models for what should be identical variables, and these different models are to be integrated or combined into a larger model. When this problem occurs, the unique names assumption is said to have been violated. We propose a method by which violations of the unique names assumption can be automatically detected. The method relies on declaring four kinds of information about modeling variables: dimensional information, laws relating dimensional expressions, information (called the quiddity) about the intended interpretation of the variables, and laws relating quiddity expressions. We present and discuss the method and the principles and theory behind it, and we describe our (prototype) implementation of the method as an additional function of an existing model management system.
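    As a rough sketch of how such declarations might be compared (the variable names, dimensions, and quiddity strings below are invented for illustration and are not the paper's notation), two models' declarations can be cross-checked for differently named variables whose dimension and quiddity coincide:

        # Hedged sketch: flag pairs of differently named variables whose declared
        # dimensional expression and quiddity (intended interpretation) coincide,
        # suggesting a possible unique-names-assumption violation. All annotations
        # below are invented for illustration, not taken from the paper.

        from itertools import product

        # Each model maps a variable name to (dimension, quiddity).
        model_a = {
            "revenue":   ("dollars/period", "income from product sales"),
            "unit_cost": ("dollars/unit",   "cost of producing one unit"),
        }
        model_b = {
            "sales_income": ("dollars/period", "income from product sales"),
            "demand":       ("units/period",   "quantity customers order"),
        }

        def candidate_violations(m1, m2):
            """Return variable pairs with different names but identical
            dimension and quiddity declarations."""
            hits = []
            for (n1, (d1, q1)), (n2, (d2, q2)) in product(m1.items(), m2.items()):
                if n1 != n2 and d1 == d2 and q1 == q2:
                    hits.append((n1, n2))
            return hits

        print(candidate_violations(model_a, model_b))
        # [('revenue', 'sales_income')]  -- the same quantity under two names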

    Mitigating response distortion in IS ethics research

    Distributed construction of conceptual models may lead to a set of problems when these models are to be compared or integrated. Different kinds of comparison conflicts are known (e.g. naming conflicts or structural conflicts), the resolution of which is the subject of different approaches. However, the ex-post resolution of naming conflicts raises subsequent problems that originate from the semantic diversity of namings, even if they are syntactically the same. Therefore, we propose an approach that allows naming conflicts in conceptual models to be avoided already during modelling. This way, the ex-post resolution of naming conflicts becomes obsolete. In order to realise this approach, we combine domain thesauri as lexical conventions for the use of terms and linguistic grammars as conventions for valid phrase structures. The approach is generic in order to make it reusable for any conceptual modelling language.
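    As a rough illustration of the thesaurus side of such an approach (the terms and mappings below are hypothetical, not taken from the paper), a domain thesaurus can map each term to an agreed preferred term so that labels are standardised as they are entered rather than reconciled afterwards:

        # Hedged sketch: normalise element labels against a domain thesaurus so that
        # synonymous terms are replaced by the agreed preferred term at modelling time.
        # The thesaurus entries are invented for illustration.

        THESAURUS = {
            "bill": "invoice",
            "invoice": "invoice",
            "client": "customer",
            "customer": "customer",
            "check": "verify",
            "verify": "verify",
        }

        def normalise_label(label: str) -> str:
            """Replace each word by its preferred term, if one is defined."""
            words = label.lower().split()
            return " ".join(THESAURUS.get(w, w) for w in words)

        print(normalise_label("check bill"))      # -> "verify invoice"
        print(normalise_label("verify invoice"))  # -> "verify invoice" (already conventional)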

    COMPOSITION RULES FOR BUILDING LINEAR PROGRAMMING MODELS FROM COMPONENT MODELS

    This paper describes some rules for combining component models into complete linear programs. The objective is to lay the foundations for systems that give users flexibility in designing new models and reusing old ones, while at the same time providing better documentation and better diagnostics than are currently available. The results presented here rely on two different sets of properties of LP models: first, the syntactic relationships among the indices that define the rows and columns of the LP, and second, the meanings attached to these indices. These two kinds of information allow us to build a complete algebraic statement of a model from a collection of components provided by the model builder. Information Systems Working Papers Series.
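    A minimal sketch, using invented data structures rather than the paper's formalism, of how component models that share index sets and declared index meanings might be merged into a single model statement:

        # Hedged sketch: merge two component LP descriptions into one model by
        # unioning their index sets, variables, and constraints, linking them only
        # where a variable carries the same index sets and the same declared meaning.
        # The representation is invented; the paper's composition rules are richer.

        def compose(*components):
            merged = {"index_sets": {}, "variables": {}, "constraints": []}
            for comp in components:
                merged["index_sets"].update(comp["index_sets"])
                for name, decl in comp["variables"].items():
                    if name in merged["variables"] and merged["variables"][name] != decl:
                        raise ValueError(f"conflicting declarations for {name}")
                    merged["variables"][name] = decl
                merged["constraints"].extend(comp["constraints"])
            return merged

        production = {
            "index_sets": {"P": ["widgets", "gadgets"]},
            "variables": {"make": (("P",), "units produced of product p")},
            "constraints": ["sum_p make[p] <= capacity"],
        }
        shipping = {
            "index_sets": {"P": ["widgets", "gadgets"], "W": ["east", "west"]},
            "variables": {"make": (("P",), "units produced of product p"),
                          "ship": (("P", "W"), "units of p sent to warehouse w")},
            "constraints": ["sum_w ship[p, w] <= make[p]  for each p"],
        }

        model = compose(production, shipping)
        print(sorted(model["variables"]))   # ['make', 'ship'] -- shared 'make' merged once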

    Unified Enterprise Knowledge Representation with Conceptual Models - Capturing Corporate Language in Naming Conventions

    Conceptual modeling is an established instrument in the knowledge engineering process. However, a precondition for the usability of conceptual models is not only their syntactic correctness but also their semantic comparability. Assuring comparability is quite challenging, especially when models are developed by different persons. Empirical studies show that such models can vary heavily, especially in model element naming, even if they are meant to express the same issue. In contrast to most ontology-driven approaches, which propose resolving these differences ex-post, we introduce an approach that avoids naming differences in conceptual models already during modeling. To this end, we formalize naming conventions by combining domain thesauri and phrase structures based on a linguistic grammar. This allows modelers to be guided automatically during the modeling process towards standardized labels for model elements, thus assuring a unified enterprise knowledge representation. Our approach is generic, making it applicable to any modeling language.
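    As a toy illustration (the convention and word lists below are invented, not the authors' grammar formalism), a naming convention can be enforced by requiring each activity label to follow a fixed phrase structure, here "<verb> <business object>":

        # Hedged sketch: validate that a process-model activity label follows a fixed
        # phrase structure, "<verb> <business object>". The word lists stand in for
        # the domain thesaurus and are invented for illustration.

        VERBS = {"create", "verify", "approve", "send"}
        BUSINESS_OBJECTS = {"invoice", "order", "customer record"}

        def follows_convention(label: str) -> bool:
            words = label.lower().split()
            if not words or words[0] not in VERBS:
                return False
            return " ".join(words[1:]) in BUSINESS_OBJECTS

        print(follows_convention("verify invoice"))    # True: verb + business object
        print(follows_convention("invoice checking"))  # False: violates the convention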

    SUPPORTING TERMINOLOGICAL STANDARDIZATION IN CONCEPTUAL MODELS - A PLUGIN FOR A META-MODELLING TOOL

    Today's enterprises are accumulating huge repositories of conceptual models, such as data models, organisational charts and, most notably, business process models. Those models often grow heterogeneously with the company and are thus often terminologically diverse and complex. This terminological diversity originates from the fact that natural language allows an issue to be described in a large variety of ways, especially when many modellers are involved. This diversity can become a pitfall when conceptual models are subject to model analysis techniques, which require terminologically comparable model elements. Therefore, it is essential to ensure model quality by enforcing naming conventions. This paper introduces a prototype that is intended to resolve the associated issues of terminological standardisation either already during the modelling phase or ex-post, based on existing models. The modeller is guided through the standardisation process by an automatically generated list of all correct phrase propositions matching the entered phrase. With this approach, naming conventions can easily be defined and enforced. This leads to terminologically unambiguous conceptual models, which are easier to understand and ready for further analysis purposes.
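    A minimal sketch, with invented terms and rules rather than the prototype's actual mechanism, of how convention-conformant label proposals might be generated from a modeller's entered phrase:

        # Hedged sketch: given a modeller's entered phrase, propose convention-conformant
        # label candidates by mapping each word to a preferred term and re-emitting the
        # phrase in "<verb> <object>" order. Terms and rules are invented for illustration.

        PREFERRED = {"bill": "invoice", "check": "verify", "checking": "verify"}
        VERBS = {"verify", "create", "approve"}

        def propose_labels(phrase: str) -> list[str]:
            words = [PREFERRED.get(w, w) for w in phrase.lower().split()]
            verbs = [w for w in words if w in VERBS]
            objects = [w for w in words if w not in VERBS]
            if not verbs or not objects:
                return []
            return [f"{v} {' '.join(objects)}" for v in verbs]

        print(propose_labels("bill checking"))  # ['verify invoice']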

    ON THE LOGIC OF GENERALIZED HYPERTEXT

    Hypertext is one of those neat ideas in computing that periodically burst upon the scene, quickly demonstrating their usefulness and gaining widespread acceptance. As interesting, useful and exciting as hypertext is, the concept has certain problems and limitations, many of which are widely recognized. In this paper we describe what we call basic hypertext and present a logic model for it. Basic hypertext should be thought of as a rigorously presented approximation of first-generation hypertext concepts. Following our discussion of basic hypertext, we present our concept of generalized hypertext, which is aimed at overcoming certain limitations of basic hypertext and which we have implemented in a DSS shell called Max. We then present a logic model for browsing in generalized hypertext. Information Systems Working Papers Series.
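    As a rough, much-simplified sketch (the nodes, anchors, and representation below are invented; the paper's logic model is considerably more general), basic hypertext can be viewed as a set of link facts, with browsing as repeated link traversal:

        # Hedged sketch: a toy logic-style model of basic hypertext, with
        # link(source, anchor) -> target facts and browsing as link traversal.
        # Node and link names are invented for illustration.

        LINKS = {
            ("intro", "definitions"): "glossary",
            ("intro", "examples"): "case_study",
            ("glossary", "more"): "bibliography",
        }

        def follow(node, anchor):
            """One browsing step: follow the anchor in the current node, if defined."""
            return LINKS.get((node, anchor))

        def reachable(start):
            """All nodes a reader can reach from `start` by following links."""
            seen, frontier = {start}, [start]
            while frontier:
                node = frontier.pop()
                for (src, _anchor), dst in LINKS.items():
                    if src == node and dst not in seen:
                        seen.add(dst)
                        frontier.append(dst)
            return seen

        print(follow("intro", "definitions"))  # 'glossary'
        print(sorted(reachable("intro")))      # ['bibliography', 'case_study', 'glossary', 'intro']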

    Powers for Dispositionalism: A Metaphysical Ground for New Actualism

    In this dissertation, I develop a metaphysics of powers to ground Dispositionalism, the theory of the source of modality according to which all alethic modal truths are grounded in dispositional properties instantiated in the actual world. I consider a number of key theses that powers metaphysics display and investigate which of them can be incorporated into the metaphysical base of Dispositionalism, and how. In the first part I examine the interaction of two core principles of powers ontologies: Directedness, the thesis that powers ‘point at’ their manifestations, and Independence, the thesis that powers can fail to manifest. These two principles are in tension: there is an argument, known as Too Much Possibility, to the effect that they are inconsistent. I examine various strategies to resist the argument. These involve Physical Intentionality, numerical identity between power and manifestation, process ontologies, and platonic universals. I conclude that they are all unsatisfactory. In the second part, I develop a ‘minimal metaphysics of powers’ that is immune from the threat of Too Much Possibility. This involves treating unmanifested manifestations as akin to (a suitably re-vamped version of) Mere Logical Existents. I argue that the best way to avoid the tension at the heart of powers ontologies is to conceive of unmanifested manifestations as non-essentially non-spatiotemporally located entities. I then consider some consequences of the minimal metaphysics: I examine which ontological category the manifestations of powers can belong to, and what the prospects are of grounding metaphysical, as opposed to natural, modality. Finally, in the third part, I investigate whether further key theses of powers ontologies can be incorporated into the minimal metaphysics. This leads to a discussion of the relationship of the minimal metaphysics to grounding and dependency relations, the metaphysics of time, the truthmaking principle, and tendential theories of powers.

    Implementing reusable solvers: an object-oriented framework for operations research algorithms

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 1998. Includes bibliographical references (p. 325-338) and indexes. By John Douglas Ruark. Ph.D.