
    GPU Concurrency: Weak Behaviours and Programming Assumptions

    Concurrency is pervasive and perplexing, particularly on graphics processing units (GPUs). Current specifications of languages and hardware are inconclusive; thus programmers often rely on folklore assumptions when writing software. To remedy this state of affairs, we conducted a large empirical study of the concurrent behaviour of deployed GPUs. Armed with litmus tests (i.e. short concurrent programs), we questioned the assumptions in programming guides and vendor documentation about the guarantees provided by hardware. We developed a tool to generate thousands of litmus tests and run them under stressful workloads. We observed a litany of previously elusive weak behaviours, and exposed folklore beliefs about GPU programming---often supported by official tutorials---as false. As a way forward, we propose a model of Nvidia GPU hardware, which correctly models every behaviour witnessed in our experiments. The model is a variant of SPARC Relaxed Memory Order (RMO), structured following the GPU concurrency hierarchy
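
    The litmus-test methodology is easy to illustrate in code. The sketch below is a minimal CPU-side analogue written with C++11 atomics, not the authors' GPU tooling: one thread writes a data value and then a flag, a second thread reads the flag and then the data, and a small harness counts how often the weak outcome (flag observed as set while the data still reads zero) shows up. The names and iteration count are illustrative.

        // Message-passing (MP) litmus test, sketched with C++11 relaxed atomics.
        // CPU-side illustration of the litmus-test idea only; the paper's
        // experiments generate and run such tests as GPU kernels under stress.
        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<int> data{0}, flag{0};

        int main() {
            const int runs = 10000;                        // repeat to tease out rare outcomes
            int weak = 0;
            for (int i = 0; i < runs; ++i) {
                data.store(0, std::memory_order_relaxed);
                flag.store(0, std::memory_order_relaxed);

                std::thread t0([] {                        // producer: data first, then flag
                    data.store(1, std::memory_order_relaxed);
                    flag.store(1, std::memory_order_relaxed);
                });
                int r0 = 0, r1 = 0;
                std::thread t1([&] {                       // consumer: flag first, then data
                    r0 = flag.load(std::memory_order_relaxed);
                    r1 = data.load(std::memory_order_relaxed);
                });
                t0.join();
                t1.join();

                if (r0 == 1 && r1 == 0) ++weak;            // outcome forbidden under sequential consistency
            }
            std::printf("weak outcomes: %d / %d\n", weak, runs);
        }

    Strengthening the flag accesses to release/acquire rules this weak outcome out for the idiom above; whether analogous guarantees hold across GPU scopes (threads, warps, blocks, devices) is exactly what the generated tests probe.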

    A compositional semantics for statecharts

    Compositional Verification of Compiler Optimisations on Relaxed Memory

    This paper is about verifying program transformations on an axiomatic relaxed memory model of the kind used in C/C++ and Java. Relaxed models present particular challenges for verifying program transformations, because they generate many additional modes of interaction between code and context. For a block of code being transformed, we define a denotation from its behaviour in a set of representative contexts. Our denotation summarises interactions of the code block with the rest of the program both through local and global variables, and through subtle synchronisation effects due to relaxed memory. We can then prove that a transformation does not introduce new program behaviours by comparing the denotations of the code block before and after. Our approach is compositional: by examining only representative contexts, transformations are verified for any context. It is also fully abstract, meaning any valid transformation can be verified. We cover several tricky aspects of C/C++-style memory models, including release-acquire operations, sequentially consistent fences, and non-atomics. We also define a variant of our denotation that is finite at the cost of losing full abstraction. Based on this variant, we have implemented a prototype verification tool and ap
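
    The kind of code/context interaction such a denotation must capture shows up already in a small release-acquire example. The sketch below, in C++11 with illustrative names (it is not the paper's formal development), contains a producer block that publishes a non-atomic write through a release store and a consumer context that synchronises with it through an acquire load.

        // Release-acquire message passing: a code block (producer) paired with
        // one representative context (consumer). Illustrative sketch only.
        #include <atomic>
        #include <cassert>
        #include <thread>

        int data = 0;                        // non-atomic payload
        std::atomic<bool> ready{false};      // release/acquire flag

        void producer() {                    // the block a compiler might transform
            data = 42;                                         // non-atomic write
            ready.store(true, std::memory_order_release);      // publishes the write above
        }

        void consumer() {                    // the surrounding context
            while (!ready.load(std::memory_order_acquire)) {}  // synchronises with the release store
            assert(data == 42);              // guaranteed once the acquire sees the flag
        }

        int main() {
            std::thread t1(producer), t2(consumer);
            t1.join();
            t2.join();
        }

    A transformation that reorders the non-atomic write past the release store would be invalid here: the consumer could then observe the flag set while data still reads 0 (and the accesses would race), a behaviour the original block never exhibits in this context. Comparing the block's denotation before and after a transformation is meant to detect exactly such context-dependent interactions without enumerating every possible context.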

    Ontology Pattern-Based Data Integration

    Data integration is concerned with providing a unified access to data residing at multiple sources. Such a unified access is realized by having a global schema and a set of mappings between the global schema and the local schemas of each data source, which specify how user queries at the global schema can be translated into queries at the local schemas. Data sources are typically developed and maintained independently, and thus, highly heterogeneous. This causes difficulties in integration because of the lack of interoperability in the aspect of architecture, data format, as well as syntax and semantics of the data. This dissertation represents a study on how small, self-contained ontologies, called ontology design patterns, can be employed to provide semantic interoperability in a cross-repository data integration system. The idea of this so-called ontology pattern- based data integration is that a collection of ontology design patterns can act as the global schema that still contains sufficient semantics, but is also flexible and simple enough to be used by linked data providers. On the one side, this differs from existing ontology-based solutions, which are based on large, monolithic ontologies that provide very rich semantics, but enforce too restrictive ontological choices, hence are shunned by many data providers. On the other side, this also differs from the purely linked data based solutions, which do offer simplicity and flexibility in data publishing, but too little in terms of semantic interoperability. We demonstrate the feasibility of this idea through the actual development of a large scale data integration project involving seven ocean science data repositories from five institutions in the U.S. In addition, we make two contributions as part of this dissertation work, which also play crucial roles in the aforementioned data integration project. First, we develop a collection of more than a dozen ontology design patterns that capture the key notions in the ocean science occurring in the participating data repositories. These patterns contain axiomatization of the key notions and were developed with an intensive involvement from the domain experts. Modeling of the patterns was done in a systematic workflow to ensure modularity, reusability, and flexibility of the whole pattern collection. Second, we propose the so-called pattern views that allow data providers to publish their data in very simple intermediate schema and show that they can greatly assist data providers to publish their data without requiring a thorough understanding of the axiomatization of the patterns

    Human-Intelligence and Machine-Intelligence Decision Governance Formal Ontology

    Since the beginning of the human race, decision making and rational thinking played a pivotal role for mankind to either exist and succeed or fail and become extinct. Self-awareness, cognitive thinking, creativity, and emotional magnitude allowed us to advance civilization and to take further steps toward achieving previously unreachable goals. From the invention of wheels to rockets and telegraph to satellite, all technological ventures went through many upgrades and updates. Recently, increasing computer CPU power and memory capacity contributed to smarter and faster computing appliances that, in turn, have accelerated the integration into and use of artificial intelligence (AI) in organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational systems including healthcare and medical diagnosis, automated stock trading, robotic production, telecommunications, space explorations, and homeland security. Self-driving cars and drones are just the latest extensions of AI. This thrust of AI into organizations and daily life rests on the AI community’s unstated assumption of its ability to completely replicate human learning and intelligence in AI. Unfortunately, even today the AI community is not close to completely coding and emulating human intelligence into machines. Despite the revolution of digital and technology in the applications level, there has been little to no research in addressing the question of decision making governance in human-intelligent and machine-intelligent (HI-MI) systems. There also exists no foundational, core reference, or domain ontologies for HI-MI decision governance systems. Further, in absence of an expert reference base or body of knowledge (BoK) integrated with an ontological framework, decision makers must rely on best practices or standards that differ from organization to organization and government to government, contributing to systems failure in complex mission critical situations. It is still debatable whether and when human or machine decision capacity should govern or when a joint human-intelligence and machine-intelligence (HI-MI) decision capacity is required in any given decision situation. To address this deficiency, this research establishes a formal, top level foundational ontology of HI-MI decision governance in parallel with a grounded theory based body of knowledge which forms the theoretical foundation of a systemic HI-MI decision governance framework

    Working Report (Arbeitsbericht) No. 2007-04, July 2007

    Ilmenauer Beiträge zur Wirtschaftsinformatik No. 2007-04 / Technische Universität Ilmenau, Faculty of Economic Sciences, Institute of Business Informatics. ISSN 1861-9223, ISBN 978-3-938940-15-