
    ARGOS policy brief on semantic interoperability

    Semantic interoperability requires the use of standards, not only for Electronic Health Record (EHR) data to be transferred and structurally mapped into a receiving repository, but also for the clinical content of the EHR to be interpreted in conformity with the original meanings intended by its authors. Accurate and complete clinical documentation, faithful to the patient's situation, and interoperability between systems require widespread and dependable access to published and maintained collections of coherent and quality-assured semantic resources, including models such as archetypes and templates that would (1) provide clinical context, (2) be mapped to interoperability standards for EHR data, (3) be linked to well-specified, multi-lingual terminology value sets, and (4) be derived from high-quality ontologies. Wide-scale global engagement with professional bodies is needed to develop these clinical information standards.
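
    As a rough illustration of points (2) and (3), the sketch below shows what binding a clinical model element to a multi-lingual terminology value set could look like. It is a minimal toy in Python; the element name, class names and codes are hypothetical stand-ins, not any real archetype formalism or terminology release.

```python
# Illustrative sketch only: a toy "archetype" element bound to a
# multi-lingual terminology value set. All names and codes here are
# hypothetical stand-ins, not a real standard's API or code system.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    code: str    # a terminology code
    labels: dict  # language tag -> display label

@dataclass
class ArchetypeElement:
    name: str
    value_set: list = field(default_factory=list)  # allowed Concepts

    def is_valid(self, code: str) -> bool:
        # An EHR entry conforms only if its code is in the bound value set.
        return any(c.code == code for c in self.value_set)

# A hypothetical "body position" element with a two-concept value set.
position = ArchetypeElement(
    name="body_position",
    value_set=[
        Concept("10904000", {"en": "Standing", "de": "Stehend"}),
        Concept("33586001", {"en": "Sitting",  "de": "Sitzend"}),
    ],
)
assert position.is_valid("33586001")   # recognised code
assert not position.is_valid("XYZ")    # rejected: not in the value set
```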

    Formal Aspects of Grid Brokering

    Coordination in distributed environments, like Grids, involves selecting the most appropriate services, resources or compositions to carry out the planned activities. Such functionalities appear at various levels of the infrastructure and in various forms, making for a blurry domain where it is hard to see how the participating components are related and what their relevant properties are. In this paper we focus on a subset of these problems: resource brokering in Grid middleware. This paper aims to establish a semantic model for brokering and related activities by defining brokering agents at three levels of the Grid middleware, for resource, host and broker selection. The main contribution of this paper is the definition and decomposition of the different brokering components in Grids, providing a formal model using Abstract State Machines.
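
    To make the flavour of such a model concrete, here is a minimal sketch of guarded update rules in the ASM spirit, with broker selection delegating to host selection. The state layout, rule names and scheduling policy are assumptions for illustration, not the paper's actual formalization.

```python
# A minimal Abstract State Machine style sketch (assumed semantics; the
# paper's model is richer): the state is a set of locations, and each
# agent is a guarded update rule fired once per machine step.
state = {
    "pending_jobs": ["job1", "job2"],
    "brokers": {"brokerA": {"host1": 4, "host2": 0}},  # host -> free CPU slots
    "assignments": {},
}

def host_selection(hosts):
    # Resource-level rule: pick any host that still has a free CPU slot.
    free = [h for h, slots in hosts.items() if slots > 0]
    return free[0] if free else None

def broker_selection(s):
    # Guard: a job is pending and some broker can offer a host for it.
    if not s["pending_jobs"]:
        return
    for broker, hosts in s["brokers"].items():
        host = host_selection(hosts)
        if host is not None:
            # Update: consume the job, record the assignment, claim a slot.
            job = s["pending_jobs"].pop(0)
            s["assignments"][job] = (broker, host)
            hosts[host] -= 1
            return

broker_selection(state)
broker_selection(state)
print(state["assignments"])  # both jobs assigned to ('brokerA', 'host1')
```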

    Exact Gap Computation for Code Coverage Metrics in ISO-C

    Test generation and test data selection are difficult tasks in model-based testing. Tests for a program can be combined into a test suite, and much research is devoted to quantifying and improving the quality of such suites. Code coverage metrics estimate the quality of a test suite; the quality is considered good if the code coverage value is high, ideally 100%. Unfortunately, 100% code coverage may be impossible to achieve, for example because of dead code. There is thus a gap between the feasible and the theoretical maximal code coverage value. Our review of the literature indicates that no current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages, and we describe an efficient approximation of the gap in all other cases. Thus, a tester can decide whether more tests could, or need to, be added to achieve better coverage.
    Comment: In Proceedings MBT 2012, arXiv:1202.582
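
    A toy example of the gap itself (not of the paper's exact computation): given hand-annotated feasibility of statements, the gap is the difference between the theoretical maximum and the feasible maximum coverage. The feasibility flags below are assumed inputs; computing them exactly is what the paper's framework does.

```python
# Toy illustration of the coverage gap: statement coverage with one
# infeasible (dead) statement. Feasibility is given here by hand; the
# paper computes it exactly for an ISO-C-like semantics.
statements = {
    "s1": True,   # reachable
    "s2": True,   # reachable
    "s3": False,  # dead code: no input can ever reach it
}

theoretical_max = 1.0  # 100%: every statement covered
feasible_max = sum(statements.values()) / len(statements)
gap = theoretical_max - feasible_max
print(f"feasible max coverage: {feasible_max:.0%}, gap: {gap:.0%}")
# -> feasible max coverage: 67%, gap: 33%
```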

    AutoAccel: Automated Accelerator Generation and Optimization with Composable, Parallel and Pipeline Architecture

    CPU-FPGA heterogeneous architectures are attracting ever-increasing attention in an attempt to advance computational capabilities and energy efficiency in today's datacenters. These architectures provide programmers with the ability to reprogram the FPGAs for flexible acceleration of many workloads. Nonetheless, this advantage is often overshadowed by the poor programmability of FPGAs, whose programming is conventionally an RTL design practice. Although recent advances in high-level synthesis (HLS) significantly improve FPGA programmability, they still leave programmers facing the challenge of identifying the optimal design configuration in a tremendous design space. This paper aims to address this challenge and pave the path from software programs towards high-quality FPGA accelerators. Specifically, we first propose the composable, parallel and pipeline (CPP) microarchitecture as a template for accelerator designs. Such a well-defined template is able to support efficient accelerator designs for a broad class of computation kernels and, more importantly, drastically reduces the design space. Also, we introduce an analytical model to capture the performance and resource trade-offs among different design configurations of the CPP microarchitecture, which lays the foundation for fast design space exploration. On top of the CPP microarchitecture and its analytical model, we develop the AutoAccel framework to make the entire accelerator generation automated. AutoAccel accepts a software program as an input and performs a series of code transformations, based on the result of the analytical-model-based design space exploration, to construct the desired CPP microarchitecture. Our experiments show that the AutoAccel-generated accelerators outperform their corresponding software implementations by an average of 72x for a broad class of computation kernels.
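
    The following sketch conveys the idea of analytical-model-based design space exploration: enumerate candidate configurations, estimate cost and resource use with a closed-form model, and keep the best feasible point. The workload size, resource budget and cost formulas are invented placeholders, not AutoAccel's actual model.

```python
# Hypothetical analytical-model sketch in the spirit of AutoAccel's DSE:
# enumerate (parallel factor, pipeline depth) configurations, estimate
# cycles and resource use with made-up formulas, keep the best feasible one.
from itertools import product

N = 1 << 20       # elements to process (assumed workload size)
DSP_BUDGET = 512  # assumed FPGA resource budget
DSP_PER_LANE = 8  # assumed cost of one parallel compute lane

def estimate(par, depth):
    cycles = N / par + depth  # throughput term + pipeline fill latency
    dsps = par * DSP_PER_LANE
    return cycles, dsps

best = None
for par, depth in product([1, 2, 4, 8, 16, 32, 64], [4, 8, 16]):
    cycles, dsps = estimate(par, depth)
    if dsps <= DSP_BUDGET and (best is None or cycles < best[0]):
        best = (cycles, par, depth, dsps)

cycles, par, depth, dsps = best
print(f"best config: {par} lanes, depth {depth} -> ~{cycles:.0f} cycles, {dsps} DSPs")
```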

    A Comparison of Big Data Frameworks on a Layered Dataflow Model

    In the world of Big Data analytics, a series of tools aims at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models, for which generally only informal (and often confusing) semantics are provided, they all share a common underlying model, namely the Dataflow model. The Dataflow model we propose shows how these various tools share the same expressiveness at different levels of abstraction. The contribution of this work is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm), thus making it easier to understand high-level data-processing applications written in such frameworks. Second, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit in each level.
    Comment: 19 pages, 6 figures, 2 tables. In Proc. of the 9th Intl Symposium on High-Level Parallel Programming and Applications (HLPP), July 4-5, 2016, Muenster, Germany
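
    A minimal sketch of the shared abstraction: operators as nodes of a dataflow graph through which tokens flow, so that a Spark-like map and filter are both instances of one node type. The representation and push-based evaluation below are assumptions for illustration, not the paper's formal semantics.

```python
# Minimal sketch of the idea that different frameworks reduce to one
# Dataflow graph (assumed representation, not the paper's formal model):
# nodes are operators, edges carry streams of tokens.

class Node:
    def __init__(self, fn):
        self.fn, self.successors = fn, []

    def then(self, node):
        self.successors.append(node)
        return node

def run(source_tokens, node):
    # Push-based evaluation: each token flows through the graph.
    out = []
    def fire(n, token):
        for t in n.fn(token):
            if n.successors:
                for s in n.successors:
                    fire(s, t)
            else:
                out.append(t)
    for tok in source_tokens:
        fire(node, tok)
    return out

# A Spark-like `map` and `filter` are both just Dataflow nodes here.
double = Node(lambda x: [x * 2])
keep_big = Node(lambda x: [x] if x > 4 else [])
double.then(keep_big)
print(run([1, 2, 3, 4], double))  # -> [6, 8]
```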

    Approaches to Semantic Web Services: An Overview and Comparison

    The next Web generation promises to deliver Semantic Web Services (SWS): services that are self-described and amenable to automated discovery, composition and invocation. A prerequisite to this, however, is the emergence and evolution of the Semantic Web, which provides the infrastructure for the semantic interoperability of Web Services. Web Services will be augmented with rich formal descriptions of their capabilities, such that they can be utilized by applications or other services without human assistance or highly constrained agreements on interfaces or protocols. Thus, Semantic Web Services have the potential to change the way knowledge and business services are consumed and provided on the Web. In this paper, we survey the state of the art of current enabling technologies for Semantic Web Services. In addition, we characterize the infrastructure of Semantic Web Services along three orthogonal dimensions: activities, architecture and service ontology. Further, we examine and contrast three current approaches to SWS according to the proposed dimensions.
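
    To give a feel for the "automated discovery" activity, below is a toy matchmaking sketch: services advertise input and output concepts, and discovery returns those whose inputs are covered by the request and whose outputs cover what is wanted. The service names and set-inclusion matching rule are hypothetical simplifications of what SWS frameworks such as OWL-S or WSMO describe far more richly.

```python
# Toy matchmaking sketch (hypothetical descriptions, not OWL-S or WSMO):
# a service advertises inputs/outputs as concepts; discovery succeeds when
# the request's inputs cover the service's and its outputs cover the request's.
services = {
    "CurrencyConverter": {"inputs": {"Amount", "CurrencyPair"},
                          "outputs": {"Amount"}},
    "FlightBooker":      {"inputs": {"Itinerary", "PaymentCard"},
                          "outputs": {"BookingConfirmation"}},
}

def discover(provided_inputs, wanted_outputs):
    return [name for name, d in services.items()
            if d["inputs"] <= provided_inputs and wanted_outputs <= d["outputs"]]

# An agent holding an itinerary and a card, wanting a confirmation:
print(discover({"Itinerary", "PaymentCard", "Amount"}, {"BookingConfirmation"}))
# -> ['FlightBooker']
```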

    Specifying Reusable Components

    Reusable software components need expressive specifications. This paper outlines a rigorous foundation for model-based contracts, a method to equip classes with strong contracts that support accurate design, implementation, and formal verification of reusable components. Model-based contracts conservatively extend the classic Design by Contract with a notion of model, which underpins the precise definitions of such concepts as abstract equivalence and specification completeness. Experiments applying model-based contracts to libraries of data structures suggest that the method enables accurate specification of practical software.
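
    The idea of a model-based contract can be sketched in a few lines: a class exposes an abstract model (here, a mathematical sequence) and states its pre- and postconditions against that model rather than against implementation fields. The Python rendering below is only an analogy to the paper's Eiffel setting, with assertions standing in for contract clauses.

```python
# Sketch of a model-based contract in Python (the paper's setting is
# Eiffel-style Design by Contract; this only mirrors the idea): the class
# carries an abstract model, a sequence, and each postcondition is
# phrased against that model rather than against the private fields.

class Stack:
    def __init__(self):
        self._items = []

    @property
    def model(self):
        # Abstract model: the sequence of stored items, oldest first.
        return tuple(self._items)

    def push(self, x):
        old = self.model
        self._items.append(x)
        # Postcondition against the model: new sequence = old + [x].
        assert self.model == old + (x,)

    def pop(self):
        assert self.model, "precondition: stack not empty"
        old = self.model
        x = self._items.pop()
        # Postcondition: result is the old last element; model shrinks by it.
        assert x == old[-1] and self.model == old[:-1]
        return x

s = Stack()
s.push(1); s.push(2)
assert s.pop() == 2 and s.model == (1,)
```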