    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed all over the world, most of which are the results of academic research projects. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, as this functionality was not originally supported. A comparison of these systems on the basis of architecture, implementation model and several other features is included.
    Comment: 19 pages, 10 figures

    SciTokens: Capability-Based Secure Access to Remote Scientific Data

    The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
    Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22--26, 2018, Pittsburgh, PA, USA
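To make the capability-based model concrete, the following sketch builds and inspects a JWT-style token whose `scope` claim carries the authorizations the abstract describes. The claim names, URLs, and values are illustrative assumptions, not the exact claims mandated by the SciTokens profile, and the signature segment is omitted.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    """Decode a base64url segment, restoring any stripped padding."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# Hypothetical SciToken-style payload (illustrative claim values only).
header = {"alg": "RS256", "typ": "JWT"}
payload = {
    "iss": "https://tokens.example-ligo.org",  # issuing authority (assumed)
    "scope": "read:/frames write:/results",    # capabilities, not identity
    "exp": 1700000000,                         # short lifetime limits abuse
    "aud": "https://storage.example.org",      # intended resource server
}

# Unsigned serialization; a real token appends an RS256 signature segment.
token = (b64url(json.dumps(header).encode()) + "." +
         b64url(json.dumps(payload).encode()) + ".<signature>")

# A storage server authorizes a request by checking the operation against
# the token's scope claim, with no lookup of the bearer's identity.
claims = json.loads(b64url_decode(token.split(".")[1]))
allowed = "read:/frames" in claims["scope"].split()
```

The key design point the abstract makes is visible here: the resource server decides from the token's scopes alone, so a stolen token exposes only the listed capabilities until `exp`, not the user's full identity.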

    Design and Evaluation of Opal2: A Toolkit for Scientific Software as a Service

    Grid computing provides mechanisms for making large-scale computing environments available to the masses. In recent times, with the advent of Cloud computing, the concepts of Software as a Service (SaaS), where vendors provide key software products as services over the internet that can be accessed by users to perform complex tasks, and Service as Software (SaS), where customizable and repeatable services are packaged as software products that dynamically meet the demands of individual users, have become increasingly popular. Both SaaS and SaS models are highly applicable to scientific software and users alike. Opal2 is a toolkit for wrapping scientific applications as Web services on Grid and cloud computing resources. It provides a mechanism for scientific application developers to expose the functionality of their codes via simple Web service APIs, abstracting out the details of the back-end infrastructure. Services may be combined via customized workflows for specific research areas and distributed as virtual machine images. In this paper, we describe the overall philosophy and architecture of the Opal2 framework, including its new plug-in architecture and data handling capabilities. We analyze its performance in typical cluster and Grid settings, and in a cloud computing environment within virtual machines.
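The wrapping idea described above can be sketched as a small invocation contract: a command-line scientific code is turned into a callable "service" handler that accepts a request and returns a structured response. This is a minimal stand-in for what a toolkit like Opal2 generates; the function names and request shape are assumptions, and the HTTP transport layer is deliberately omitted.

```python
import shlex
import subprocess

def make_service(binary: str, fixed_args=None):
    """Wrap a command-line code as a callable service endpoint.

    The returned handler takes a request dict with an 'args' string and
    returns a dict with status and captured output, hiding how and where
    the binary actually runs from the caller.
    """
    fixed_args = list(fixed_args or [])

    def handler(request: dict) -> dict:
        cmd = [binary, *fixed_args, *shlex.split(request.get("args", ""))]
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return {
            "status": "ok" if proc.returncode == 0 else "error",
            "stdout": proc.stdout,
            "stderr": proc.stderr,
        }

    return handler

# Usage: expose `echo` as a toy service (a stand-in for a scientific code).
echo_service = make_service("echo")
resp = echo_service({"args": "hello grid"})
```

The point of the indirection is the one the abstract names: the caller sees only the service API, while the back-end (local cluster, Grid node, or a VM image in a cloud) can change without touching client code.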

    An interoperable and self-adaptive approach for SLA-based service virtualization in heterogeneous Cloud environments

    Cloud computing is a newly emerged computing infrastructure that builds on the latest achievements of diverse research areas, such as Grid computing, Service-oriented computing, business process management and virtualization. An important characteristic of Cloud-based services is the provision of non-functional guarantees in the form of Service Level Agreements (SLAs), such as guarantees on execution time or price. However, due to system malfunctions, changing workload conditions, and hardware and software failures, established SLAs can be violated. In order to avoid costly SLA violations, flexible and adaptive SLA attainment strategies are needed. In this paper we present a self-manageable architecture for SLA-based service virtualization that provides a way to ease interoperable service executions in a diverse, heterogeneous, distributed and virtualized world of services. We demonstrate in this paper that the combination of negotiation, brokering and deployment using SLA-aware extensions and autonomic computing principles is required for achieving reliable and efficient service operation in distributed environments. © 2012 Elsevier B.V. All rights reserved.
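The adaptive-attainment idea can be illustrated with a broker-side check that reacts before an agreed limit is breached. The class, action names, and the 80% early-warning threshold below are hypothetical stand-ins for the paper's SLA-aware negotiation and brokering components, not its actual API.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """Agreed non-functional guarantees (illustrative fields only)."""
    max_exec_seconds: float  # execution-time guarantee
    max_price: float         # price ceiling

def check_sla(predicted_seconds: float, accrued_price: float,
              sla: SLA, threshold: float = 0.8) -> list:
    """Return adaptive actions when predicted usage crosses a fraction
    (threshold) of an agreed limit, so the broker can redeploy or
    renegotiate *before* an SLA violation actually occurs."""
    actions = []
    if predicted_seconds > threshold * sla.max_exec_seconds:
        actions.append("redeploy-to-faster-resource")
    if accrued_price > threshold * sla.max_price:
        actions.append("renegotiate-price")
    return actions

# Usage: 90 s predicted against a 100 s guarantee trips the 80% threshold.
actions = check_sla(90.0, 2.0, SLA(max_exec_seconds=100.0, max_price=10.0))
```

Checking against a fraction of the limit rather than the limit itself is what makes the strategy proactive: by the time the hard limit is crossed, the violation penalty is already due.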

    From Grid to Cloud computing service model: new business model for web services based computing

    Grid and cloud computing are two models of distributed computing systems that are based on the same objective of sharing heterogeneous resources in terms of networks, compute, storage, software, platform and infrastructure, in a transparent way for the end users. They exhibit similarities and were born to satisfy the same needs, albeit at different times, but their adoption in for-profit and non-profit organizations followed different paths. In Europe, grid computing emerged thanks to several European funding research projects and demonstrations, which gave rise to the creation of a European grid infrastructure (EGI), even though the core software of the architecture was first developed as a US project of the Globus Alliance. Cloud computing, instead, exploded through wide commercial offerings as a dynamic and scalable computing and storage platform that more closely responds to the demands and needs of users. The model is now embraced by different organizations, since it seems much easier to adopt and use and could implement economies of scale considering investments in computing infrastructure. With the aim of demonstrating that the two models could converge rather than compete, this paper analyses the two infrastructures to evaluate their use in a typical not-for-profit environment such as educational or public research institutes, focusing on both the benefits and problems.

    A formal architecture-centric and model driven approach for the engineering of science gateways

    From n-Tier client/server applications, to more complex academic Grids, or even the most recent and promising industrial Clouds, the last decade has witnessed significant developments in distributed computing. In spite of this conceptual heterogeneity, Service-Oriented Architecture (SOA) seems to have emerged as the common and underlying abstraction paradigm, even though different standards and technologies are applied across application domains. Suitable access to data and algorithms resident in SOAs via so-called ‘Science Gateways’ has thus become a pressing need in order to realize the benefits of distributed computing infrastructures. In an attempt to inform service-oriented systems design and developments in Grid-based biomedical research infrastructures, the applicant has consolidated work from three complementary experiences in European projects, which have developed and deployed large-scale production quality infrastructures and more recently Science Gateways to support research in breast cancer, pediatric diseases and neurodegenerative pathologies respectively. In analyzing the requirements from these biomedical applications the applicant was able to elaborate on commonly faced issues in Grid development and deployment, while proposing an adapted and extensible engineering framework. Grids implement a number of protocols, applications and standards, and attempt to virtualize and harmonize accesses to them. Most Grid implementations are therefore instantiated as superposed software layers, often resulting in a low quality of services and quality of applications, thus making design and development increasingly complex, and rendering classical software engineering approaches unsuitable for Grid developments. The applicant proposes the application of a formal Model-Driven Engineering (MDE) approach to service-oriented developments, making it possible to define Grid-based architectures and Science Gateways that satisfy quality of service requirements, execution platform and distribution criteria at design time. A novel investigation is thus presented on the applicability of the resulting grid MDE (gMDE) to specific examples, and conclusions are drawn on the benefits of this approach and its possible application to other areas, in particular that of Distributed Computing Infrastructures (DCI) interoperability, Science Gateways and Cloud architecture developments.

    On Cyber-Physical Security of Smart Grid: Data Integrity Attacks and Experiment Platform

    A Smart Grid is a digitally enabled electric power grid that integrates the computation and communication technologies from the cyber world with the sensors and actuators from the physical world. Due to the system's complexity, in particular the tight coupling of the communication and power systems, the Smart Grid innovation introduces new and fundamentally different security vulnerabilities and risks. In this work, two important research aspects of the cyber-physical security of the Smart Grid are addressed: (i) the construction, impact and countermeasures of data integrity attacks; and (ii) the design and implementation of a general cyber-physical security experiment platform. For data integrity attacks: based on the system model of the state estimation process in the Smart Grid, firstly, a data integrity attack model is formulated, such that the attackers can generate financial benefits from real-time electrical market operations. Then, to reduce the required knowledge about the targeted power system when launching attacks, an online attack approach is proposed, such that the attacker is able to construct the desired attacks without the network information of the power system. Furthermore, a network information attacking strategy is proposed, in which the most vulnerable meters can be directly identified and the desired measurement perturbations can be achieved by strategically manipulating the network information. Besides the attacking strategies, corresponding countermeasures based on the sparsity of attack vectors and robust state estimators are provided respectively. For the experiment platform: ScorePlus, a software-hardware hybrid and federated experiment environment for the Smart Grid, is presented. 
ScorePlus incorporates both a software emulator and a hardware testbed, such that they all follow the same architecture, and the same Smart Grid application program can be tested on either of them without any modification. ScorePlus provides a federated environment such that multiple software emulators and hardware testbeds at different locations are able to connect and form a unified Smart Grid system. ScorePlus software is encapsulated as a resource plugin in the OpenStack cloud computing platform, such that it supports massive deployments with large-scale test cases in cloud infrastructure.
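The data integrity attacks on state estimation mentioned above follow the classic stealthy false-data-injection pattern: in a linear (DC) model z = Hx + e, any attack vector a = Hc lies in the column space of H, shifts the state estimate by c, and leaves the measurement residual unchanged, so residual-based bad-data detection cannot see it. The toy numbers and the matrix H below are made up for illustration; the sketch is of the general result, not the thesis's specific attack construction.

```python
# Minimal DC state-estimation sketch (pure Python: 2 states, 3 meters).

def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def estimate(H, z):
    """Least-squares state estimate x = (H^T H)^{-1} H^T z (2x2 inverse)."""
    Ht = transpose(H)
    G = matmul(Ht, H)  # 2x2 gain matrix
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    return matvec(Ginv, matvec(Ht, z))

def residual_norm(H, z):
    """Norm of z - H*x_hat, the statistic bad-data detectors threshold."""
    zhat = matvec(H, estimate(H, z))
    return sum((zi - zh) ** 2 for zi, zh in zip(z, zhat)) ** 0.5

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy measurement Jacobian
z = [1.0, 2.0, 3.1]                       # measurements with small noise

c = [0.5, -0.2]                           # attacker's chosen state shift
a = matvec(H, c)                          # stealthy attack vector a = Hc
z_attacked = [zi + ai for zi, ai in zip(z, a)]

# The detection statistic is identical under attack, while the state
# estimate has been silently shifted by c.
r_clean = residual_norm(H, z)
r_attacked = residual_norm(H, z_attacked)
```

This is also why the countermeasures named in the abstract target the attack's structure: sparsity constraints limit how many meters an attacker can touch, and robust estimators weaken the least-squares assumption the stealthiness proof relies on.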
