28 research outputs found

    A Web portal to simplify the scientific communities in using Grid and Cloud resources

    Modern scientific applications demand ever more computing and storage resources to collect and analyse large volumes of data, often more than a single laboratory can provide. Distributed computing models have proved to be a valid and effective solution: examples are the Grid, widely used in high energy physics experiments, and Cloud solutions, which are gaining increasing acceptance. These infrastructures require robust Authentication and Authorization mechanisms. The X.509 certificate is the standard used to authenticate Grid users, and although it is a valid security mechanism, many communities complain about the difficulty of handling digital certificates and the complexity of the Grid middleware. These are the main obstacles to the full exploitation of distributed computing and data infrastructures. To simplify the use of these resources, a Web-based portal has been developed that provides users with several important functionalities, such as job and workflow submission, interactive services, and data management for both Grid and Cloud environments. The thesis describes the portal architecture, its features, the main benefits for users, and the custom views that have been defined and tested in collaboration with several communities to address relevant use cases.
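    The portal's central idea is to hide credential handling and middleware detail behind one uniform interface. As a minimal sketch of that idea, the hypothetical Python snippet below shows a portal-side facade that keeps the X.509 proxy internal and exposes a single submit call for both Grid and Cloud backends; all class and method names are invented for illustration and are not taken from the thesis.

        # Hypothetical sketch: a portal-side facade that hides X.509 proxy
        # handling and middleware differences behind one submission call.
        # JobRequest, Backend, PortalSubmitter, etc. are illustrative names.
        from dataclasses import dataclass, field


        @dataclass
        class JobRequest:
            executable: str
            arguments: list[str] = field(default_factory=list)


        class Backend:
            """Common interface every concrete infrastructure implements."""

            def submit(self, job: JobRequest, credential: str) -> str:
                raise NotImplementedError


        class GridBackend(Backend):
            def submit(self, job: JobRequest, credential: str) -> str:
                # A real implementation would hand the X.509 proxy to the
                # Grid middleware; stubbed here to keep the sketch runnable.
                return f"grid-job-{abs(hash((job.executable, credential))) % 10000}"


        class CloudBackend(Backend):
            def submit(self, job: JobRequest, credential: str) -> str:
                # A real implementation would call a cloud provider API.
                return f"cloud-job-{abs(hash((job.executable, credential))) % 10000}"


        class PortalSubmitter:
            """What the Web portal exposes: users never touch the proxy."""

            def __init__(self, backends: dict[str, Backend], proxy_path: str):
                self._backends = backends
                self._proxy_path = proxy_path  # portal-managed X.509 proxy

            def submit(self, target: str, job: JobRequest) -> str:
                return self._backends[target].submit(job, self._proxy_path)


        if __name__ == "__main__":
            portal = PortalSubmitter(
                {"grid": GridBackend(), "cloud": CloudBackend()},
                proxy_path="/tmp/x509up_u1000",  # placeholder path
            )
            print(portal.submit("grid", JobRequest("/bin/echo", ["hello"])))

    A real portal would of course create and renew the proxy itself and talk to actual middleware; the sketch only shows the separation between user-facing calls and credential plumbing.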

    Enabling Generic Distributed Computing Infrastructure Compatibility for Workflow Management Systems

    Solving workflow management systems' Distributed Computing Infrastructure (DCI) incompatibility and workflow interoperability issues is a challenging and complex task. Workflow management systems (and therefore their workflows, their developers, and their end-users) are tightly bound to a limited number of supported DCIs, and considerable effort is required to support additional DCIs. In this paper we specify a concept for enabling generic DCI compatibility for grid workflow management systems (such as ASKALON, MOTEUR, gUSE/WS-PGRADE, etc.) at the job level and, indirectly, at the workflow level. To enable DCI compatibility among the different workflow management systems we have developed the DCI Bridge software solution. In this paper we describe its internal architecture and provide usage scenarios showing how the developed service resolves DCI interoperability issues between various middleware types. The generic DCI Bridge service enables the execution of jobs on the existing major DCI platforms, such as Service Grids (Globus Toolkit 2 and 4, gLite, ARC, UNICORE), Desktop Grids, Web services, and even cloud-based DCIs.
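    The core of the approach is that a single middleware-neutral job description is translated by per-DCI plugins, so supporting a new infrastructure means adding one plugin rather than touching every workflow system. The Python sketch below illustrates this plugin dispatch; the plugin names, the dict-based job description, and the rendered JDL/RSL snippets are simplified assumptions, not the DCI Bridge's actual interfaces.

        # Hypothetical sketch of a DCI-Bridge-style dispatcher: one neutral
        # job description, translated by per-middleware plugins.
        from typing import Callable

        JobDescription = dict  # e.g. {"executable": ..., "args": [...]}

        _PLUGINS: dict[str, Callable[[JobDescription], str]] = {}


        def dci_plugin(name: str):
            """Register a translator for one middleware type."""
            def register(fn: Callable[[JobDescription], str]):
                _PLUGINS[name] = fn
                return fn
            return register


        @dci_plugin("glite")
        def to_glite(job: JobDescription) -> str:
            # Render a minimal gLite-style JDL snippet (illustrative only).
            return (f'Executable = "{job["executable"]}";\n'
                    f'Arguments = "{" ".join(job["args"])}";')


        @dci_plugin("globus")
        def to_globus(job: JobDescription) -> str:
            # Render a minimal Globus RSL snippet (illustrative only).
            return (f'&(executable={job["executable"]})'
                    f'(arguments={" ".join(job["args"])})')


        def submit(job: JobDescription, dci: str) -> str:
            """The bridge's single entry point: same job, any registered DCI."""
            try:
                return _PLUGINS[dci](job)
            except KeyError:
                raise ValueError(f"no plugin registered for DCI '{dci}'") from None


        if __name__ == "__main__":
            job = {"executable": "/bin/hostname", "args": []}
            for dci in ("glite", "globus"):
                print(f"--- {dci} ---\n{submit(job, dci)}")

    In the actual service the neutral description is a standardized job document submitted to a Web service; the sketch only captures the dispatch idea that makes workflow systems independent of any particular middleware.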

    Interacting with scientific workflows


    The CloudSME Simulation Platform and its Applications: A Generic Multi-cloud Platform for Developing and Executing Commercial Cloud-based Simulations

    Simulation is used in industry to study a large variety of problems, ranging from increasing the productivity of a manufacturing system to optimizing the design of a wind turbine. However, some simulation models can be computationally demanding, and some simulation projects require time-consuming experimentation. High performance computing infrastructures such as clusters can be used to speed up the execution of large models or multiple experiments, but at a cost that is often prohibitive for Small and Medium-sized Enterprises (SMEs). Cloud computing presents an attractive, lower-cost alternative. However, developing a cloud-based simulation application can again be costly for an SME due to training and development needs, especially if software vendors need to use resources of different heterogeneous clouds to avoid being locked in to one particular cloud provider. In an attempt to reduce the cost of developing commercial cloud-based simulations, the CloudSME Simulation Platform (CSSP) has been developed as a generic approach that combines an AppCenter with the workflow engine of the WS-PGRADE/gUSE science gateway framework and the multi-cloud capabilities of the CloudBroker Platform. The paper presents the CSSP and two representative case studies from distinctly different areas that illustrate how commercial multi-cloud-based simulations can be created.
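    The lock-in point above is the essence of the multi-cloud layer: the simulation deployment is described once, and a broker chooses a concrete provider at run time. The toy Python sketch below shows one such selection rule (cheapest provider with enough capacity); the provider names, prices, and the rule itself are invented for illustration and are not how the CloudBroker Platform actually decides.

        # Toy sketch of multi-cloud brokering: pick a provider at run time
        # so the application is not tied to one cloud. All values invented.
        from dataclasses import dataclass


        @dataclass
        class CloudOffer:
            provider: str
            price_per_core_hour: float
            available_cores: int


        def pick_cloud(offers: list[CloudOffer], cores_needed: int) -> CloudOffer:
            """Choose the cheapest provider that can satisfy the core request."""
            feasible = [o for o in offers if o.available_cores >= cores_needed]
            if not feasible:
                raise RuntimeError("no cloud can satisfy the request")
            return min(feasible, key=lambda o: o.price_per_core_hour)


        if __name__ == "__main__":
            offers = [
                CloudOffer("cloud-a", 0.12, 64),
                CloudOffer("cloud-b", 0.09, 16),   # cheapest, but too small
                CloudOffer("cloud-c", 0.10, 128),
            ]
            chosen = pick_cloud(offers, cores_needed=32)
            print(f"run simulation on {chosen.provider} "
                  f"at {chosen.price_per_core_hour:.2f}/core-hour")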

    Computational Methods for Interactive and Explorative Study Design and Integration of High-throughput Biological Data

    The increasing use of high-throughput methods to gain insights into biological systems has come with new challenges. Genomics, transcriptomics, proteomics, and metabolomics produce massive amounts of data and metadata. While this wealth of information has resulted in many scientific discoveries, new strategies are needed to cope with the ever-growing variety and volume of metadata. Despite efforts to standardize the collection of study metadata, many experiments cannot be reproduced or replicated. One reason for this is the difficulty of providing the necessary metadata. The large sample sizes that modern omics experiments enable also make it increasingly complicated for scientists to keep track of every sample and the needed annotations. The many data transformations that are often needed to normalize and analyze omics data additionally require recording all parameters and tools involved. A second possible cause is a lack of knowledge about the statistical design of studies, with respect both to study factors and to the sample size required to make significant discoveries. In this thesis, we develop a multi-tier model for experimental design and a portlet for interactive web-based study design. Through the input of experimental factors and the number of replicates, users can easily create large factorial experimental designs. Changes or additional metadata can be quickly uploaded via user-defined spreadsheets that include sample identifiers. To comply with existing standards and provide users with a quick way to import existing studies, we provide full interoperability with the ISA-Tab format. We show that both the data model and the portlet are easily extensible to create additional tiers of samples annotated with technology-specific metadata. We tackle the problem of unwieldy experimental designs by creating an aggregation graph: based on our multi-tier experimental design model, similar samples, their sources, and analytes are summarized, creating an interactive summary graph that focuses on study factors and replicates. Thus, we give researchers a quick overview of sample sizes and the aims of different studies. This graph can be included in our portlets or used as a stand-alone application and is compatible with the ISA-Tab format. We show that this approach can be used to explore the quality of publicly available experimental designs and metadata annotation. The third part of this thesis contributes to more statistically sound experiment planning for differential gene expression experiments. We integrate two tools for the prediction of statistical power and for sample size estimation into our portal. This integration enables the use of existing data to arrive at more accurate estimates of sample variability. Additionally, the statistical power of existing experimental designs with certain sample sizes can be analyzed. All results and parameters are stored and can be used for later comparison. Even perfectly planned and annotated experiments cannot eliminate human error. Based on our model, we develop an automated workflow for microarray quality control, enabling users to inspect the quality of normalization and to cluster samples by study factor levels. We import a publicly available microarray dataset to assess our contributions to reproducibility and explore alternative analysis methods based on statistical power analysis.
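    The factorial-design portlet described above has a simple computational core: given experimental factors with their levels and a replicate count, the full design is the Cartesian product of the levels, repeated per replicate. The stdlib-only Python sketch below illustrates this; the factor names and the sample-identifier scheme are invented for the example and are not the thesis' actual data model.

        # Minimal sketch: generate a full factorial design from factors and
        # replicates, the core idea behind an interactive study-design tool.
        # Factor names and the identifier scheme are illustrative assumptions.
        from itertools import product


        def factorial_design(factors: dict[str, list[str]], replicates: int):
            """Yield (sample_id, condition) pairs for a full factorial design."""
            names = list(factors)
            for i, levels in enumerate(product(*factors.values()), start=1):
                condition = dict(zip(names, levels))
                for rep in range(1, replicates + 1):
                    yield f"S{i:03d}R{rep}", {**condition, "replicate": rep}


        if __name__ == "__main__":
            design = factorial_design(
                {"genotype": ["wild-type", "mutant"],
                 "treatment": ["control", "drug"],
                 "timepoint": ["0h", "24h"]},
                replicates=3,
            )
            for sample_id, condition in design:
                # 2 * 2 * 2 conditions x 3 replicates = 24 samples in total
                print(sample_id, condition)

    For the sample-size side, off-the-shelf power calculators (for example, TTestIndPower.solve_power from the Python statsmodels package) estimate how many replicates are needed for a given effect size, significance level, and target power; the thesis integrates two such tools, which this sketch does not attempt to reproduce.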