
    Shared Genomics: Developing an accessible integrated analysis platform for Genome-Wide Association Studies

    Increasingly, genome-wide association studies are being used to identify positions within the human genome that are associated with a disease condition. The number of genomic locations studied means that computationally and bioinformatically intensive solutions are needed to analyze these data sets. In this paper we present an integrated Workbench that provides clinical researchers with user-friendly access to parallelized statistical genetics analysis codes. In addition, we biologically annotate the statistical analysis results by reusing existing bioinformatics Taverna workflows.
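    The workbench code itself is not reproduced here, but the core idea of parallelizing per-marker association tests is easy to sketch. The snippet below is a minimal illustration under assumed conventions, not the paper's implementation: genotypes are coded 0/1/2 per SNP, phenotypes are 0 (control) or 1 (case), and an allelic chi-square test is run for each SNP across worker processes. The names run_gwas and assoc_test are hypothetical.

        from multiprocessing import Pool

        import numpy as np
        from scipy.stats import chi2_contingency

        def assoc_test(args):
            """Allelic chi-square test for one SNP (hypothetical helper)."""
            snp_id, genotypes, phenotypes = args
            # Contingency table: rows = control/case, columns = reference/alternate allele.
            table = np.zeros((2, 2))
            for g, p in zip(genotypes, phenotypes):
                table[p, 0] += 2 - g   # reference alleles carried by this subject
                table[p, 1] += g       # alternate alleles
            # Monomorphic SNPs should be filtered beforehand to avoid zero expected counts.
            chi2, pval, dof, expected = chi2_contingency(table)
            return snp_id, pval

        def run_gwas(snps, phenotypes, workers=8):
            """Run the association test for every SNP in parallel."""
            jobs = [(snp_id, genos, phenotypes) for snp_id, genos in snps.items()]
            with Pool(workers) as pool:
                return dict(pool.map(assoc_test, jobs))

    In the architecture described above, the resulting per-SNP p-values would then be handed to existing Taverna workflows for biological annotation; that step is omitted here.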

    Omnispective Analysis and Reasoning: a framework for managing intellectual concerns in scientific workflows

    Scientific workflows are widely used to manage experimental and computational research by connecting data sources, components, and processes. However, current systems do not adequately support checking the appropriateness of the orchestrated processes, managing the context of workflow components and specifications, or robustly managing intellectual concerns. It is therefore desirable to shift the focus from low-level details towards the rationale and intent behind the choices made in workflow specifications, and to provide a level of abstraction at which intellectual concerns can be captured, organized, and mapped to workflow specification and execution semantics. In this paper, we present Omnispective Analysis and Reasoning (OAR), a novel framework that adds these capabilities to scientific workflow management systems and processes. OAR aims to support the effective capture and reuse of intellectual concerns in workflow management.
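    OAR's data model is not given in the abstract; the sketch below only illustrates the general idea of attaching rationale records to workflow steps so that intent survives alongside the executable specification. The Concern and WorkflowStep classes are hypothetical and are not OAR's API.

        from dataclasses import dataclass, field

        @dataclass
        class Concern:
            """A captured piece of rationale or intent (hypothetical)."""
            author: str
            question: str      # e.g. "Why normalize before clustering?"
            decision: str      # the choice that was made
            rationale: str     # why it was made

        @dataclass
        class WorkflowStep:
            """An executable step annotated with the concerns that shaped it."""
            name: str
            command: str
            concerns: list[Concern] = field(default_factory=list)

        step = WorkflowStep(
            name="normalize",
            command="normalize.py --method quantile",
            concerns=[Concern(
                author="analyst",
                question="Why quantile normalization?",
                decision="Apply quantile normalization before clustering.",
                rationale="Batches showed a strong intensity skew.",
            )],
        )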

    On the equivalence of specific control flow and data flow patterns with use cases


    Scientific workflow orchestration interoperating HTC and HPC resources

    8 pages, 7 figures. The PDF is the pre-print version of the article. In this work we describe our developments towards providing a unified access method to different types of computing infrastructure at the interoperation level. To that end, we have developed a middleware suite that bridges two non-interoperable middleware stacks used for building distributed computing infrastructures, UNICORE and gLite. Our solution allows HPC and HTC resources to be accessed and operated transparently from a single interface. Using Kepler as the workflow manager, we provide users with the integration needed to create scientific workflows that access both types of infrastructure. Peer reviewed.
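    The abstract describes a bridging layer rather than its API, so the sketch below only illustrates the general pattern: one submit() call that routes a job to either an HPC (UNICORE-style) or an HTC (gLite-style) backend. All class and method names are hypothetical; the real middleware clients are not shown.

        from abc import ABC, abstractmethod

        class Backend(ABC):
            """Common interface over non-interoperable middleware stacks (hypothetical)."""
            @abstractmethod
            def submit(self, executable: str, args: list[str]) -> str:
                """Submit a job and return a backend-specific job id."""

        class UnicoreBackend(Backend):
            def submit(self, executable, args):
                # A real implementation would build a UNICORE job description
                # and send it through the UNICORE client libraries.
                return f"unicore:{executable}"

        class GliteBackend(Backend):
            def submit(self, executable, args):
                # Likewise, this would wrap gLite job submission.
                return f"glite:{executable}"

        def submit_job(executable, args, needs_mpi=False):
            """Route tightly coupled (HPC) jobs to UNICORE and task farms (HTC) to gLite."""
            backend = UnicoreBackend() if needs_mpi else GliteBackend()
            return backend.submit(executable, args)

    A workflow engine such as Kepler would sit on top of an interface like this, so that individual workflow actors do not need to know which infrastructure executes them.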

    Transparent Access to Scientific and Commercial Clouds from the Kepler Workflow Engine

    This paper describes an architecture for transparently using several different cloud resources from within the graphical Kepler workflow environment. The architecture was validated by implementing and using it in practice within the FP7 EUFORIA project. The clouds supported are the open-source OpenNebula (ONE) environment and the commercial Amazon Elastic Compute Cloud (EC2). The two clouds are then compared in terms of cost-effectiveness, covering both a performance examination and a comparison of the commercial provider against the scientific one.
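    The abstract does not define its cost-effectiveness metric, so the snippet below is just one plausible way to frame such a comparison: run the same workload on each cloud and relate wall-clock runtime to an hourly price. The prices and runtimes are placeholder values, not results from the paper.

        def cost_effectiveness(runtime_hours: float, hourly_rate: float) -> dict:
            """Relate runtime to cost for one benchmark run (illustrative only)."""
            cost = runtime_hours * hourly_rate
            return {
                "runtime_h": runtime_hours,
                "cost": cost,
                # Higher is better: how many runs of this workload one currency unit buys.
                "runs_per_cost_unit": 1.0 / cost if cost else float("inf"),
            }

        # Placeholder figures for illustration, not measurements from the paper.
        ec2 = cost_effectiveness(runtime_hours=2.0, hourly_rate=0.40)
        one = cost_effectiveness(runtime_hours=2.4, hourly_rate=0.10)  # internal accounting rate
        print("EC2:", ec2)
        print("OpenNebula:", one)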

    mmodel: A workflow framework to accelerate the development of experimental simulations

    Simulation has become an essential component of designing and developing scientific experiments. The conventional procedural approach to coding simulations of complex experiments is often error-prone, hard to interpret, and inflexible, making it difficult to incorporate changes such as algorithm updates, experimental protocol modifications, and looping over experimental parameters. We present mmodel, a framework designed to accelerate the writing of experimental simulation packages. mmodel uses a graph-theory approach to represent the experiment steps and can rewrite its own code to implement modifications, such as adding a loop to vary simulation parameters systematically. The framework aims to avoid duplication of effort, increase code readability and testability, and decrease development time.
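    mmodel's own API is not shown in the abstract, so the sketch below only illustrates the underlying idea for a simple linear chain of steps: represent the experiment as a directed acyclic graph, execute nodes in dependency order, and add a parameter loop by re-running the graph rather than rewriting the procedure. It uses networkx and hypothetical step names; it is not mmodel code.

        import networkx as nx

        # Each node is an experiment step; edges encode "output feeds input".
        g = nx.DiGraph()
        g.add_node("prepare", func=lambda params: params["sample"])
        g.add_node("measure", func=lambda sample: len(sample) * 1.0)
        g.add_node("analyze", func=lambda signal: {"signal": signal})
        g.add_edge("prepare", "measure")
        g.add_edge("measure", "analyze")

        def run(graph, params):
            """Execute the steps in dependency order, passing each result forward."""
            value = params
            for node in nx.topological_sort(graph):
                value = graph.nodes[node]["func"](value)
            return value

        def loop_over(graph, param_name, values, base_params):
            """Re-run the whole graph for each value of one parameter."""
            return {v: run(graph, {**base_params, param_name: v}) for v in values}

        results = loop_over(g, "sample", ["aa", "abc", "abcd"], {})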