144 research outputs found

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Campus Bridging: Software & Software Service Issues Workshop Report

    This report summarizes the discussion at and findings of a workshop on the software and services aspects of cyberinfrastructure as they apply to campus bridging. The workshop took a broad view of software and services, including services in the business sense of the word, such as user support, in addition to information technology services. Specifically, the workshop addressed the following two goals:
    * Suggest common elements of software stacks widely usable across the nation/world to promote interoperability/economy of scale; and
    * Suggest policy documents that any research university should have in place.
    The preparation of this report and related documents was supported by several sources, including:
    * The National Science Foundation through Grant 0829462 (Bradley C. Wheeler, PI; Geoffrey Brown, Craig A. Stewart, Beth Plale, Dennis Gannon, Co-PIs).
    * Indiana University Pervasive Technology Institute (http://pti.iu.edu/), for funding staff providing logistical support of the task force activities, writing and editorial staff, and layout and production of the final report document.
    * RENCI (the Renaissance Computing Institute, http://www.renci.org/), which supported this workshop and report by generously providing the time and effort of John McGee.
    * Texas A&M University (http://www.tamu.edu), which supported this workshop and report by generously providing the time and effort of Guy Almes.

    Workshop Report: Campus Bridging: Reducing Obstacles on the Path to Big Answers 2015

    For the researcher whose experiments require large-scale cyberinfrastructure, there exist significant challenges to successful completion. These challenges are broad and go far beyond the simple issue that there are not enough large-scale resources available; these solvable issues range from a lack of documentation written for a non-technical audience to a need for more consistent system and software configuration and availability on the large-scale resources at national-tier supercomputing centers, along with a number of other challenges. Campus Bridging is a relatively young discipline that aims to mitigate these issues for the academic end user, for whom the entire process can feel like a path composed entirely of obstacles. The solutions to these problems must by necessity include multiple approaches, with focus not only on the end user but also on the system administrators responsible for supporting these resources, as well as on the systems themselves. These system resources include not only those at the supercomputing centers but also those that exist at the campus or departmental level, and even the personal computing devices the researcher uses to complete his or her work. This workshop report compiles the results of a half-day workshop held in conjunction with IEEE Cluster 2015 in Chicago, IL. Supported by NSF XSEDE.

    National Science Foundation Advisory Committee for Cyberinfrastructure Task Force on Campus Bridging Final Report

    The mission of the National Science Foundation (NSF) Advisory Committee for Cyberinfrastructure (ACCI) is to advise the NSF as a whole on matters related to vision and strategy regarding cyberinfrastructure (CI). In early 2009 the ACCI charged six task forces with making recommendations to the NSF in strategic areas of cyberinfrastructure: Campus Bridging; Cyberlearning and Workforce Development; Data and Visualization; Grand Challenges; High Performance Computing (HPC); and Software for Science and Engineering. Each task force was asked to offer advice on the basis of which the NSF would modify existing programs and create new programs. This document is the final, overall report of the Task Force on Campus Bridging. Supported by the National Science Foundation.

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts in next-generation computing technologies will certainly be the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues in exploiting BD and large-scale data analytics on the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which promotes the adoption of data protection practices among practitioners, addressed using access controls. The main contributions of this thesis are an Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud.
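
    To make the access-control side of such a federation broker concrete, the sketch below shows a toy authorization check in the spirit of BDFAB. It is a minimal illustration under invented assumptions: the roles, cluster names and container limits are hypothetical and do not reflect the dissertation's actual design.

# Illustrative sketch only: a toy federation access check in the spirit of
# BDFAB. Roles, cluster names and limits are invented assumptions, not the
# dissertation's actual policies or implementation.
from dataclasses import dataclass

@dataclass
class AccessPolicy:
    role: str              # hypothetical role name, e.g. "analyst" or "admin"
    allowed_clusters: set  # federated Hadoop clusters the role may use
    max_containers: int    # cap on parallel worker containers

POLICIES = {
    "analyst": AccessPolicy("analyst", {"cluster-a"}, max_containers=8),
    "admin": AccessPolicy("admin", {"cluster-a", "cluster-b"}, max_containers=64),
}

def authorize_job(role, cluster, requested_containers):
    """Return True if the role may run a job of this size on the given cluster."""
    policy = POLICIES.get(role)
    if policy is None:
        return False
    return cluster in policy.allowed_clusters and requested_containers <= policy.max_containers

if __name__ == "__main__":
    print(authorize_job("analyst", "cluster-a", 4))   # True
    print(authorize_job("analyst", "cluster-b", 4))   # False: cluster not permitted
    print(authorize_job("admin", "cluster-b", 128))   # False: exceeds container cap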

    A survey of the European Open Science Cloud services for expanding the capacity and capabilities of multidisciplinary scientific applications

    Open Science is a paradigm in which scientific data, procedures, tools and results are shared transparently and reused by society. The European Open Science Cloud (EOSC) initiative is an effort in Europe to provide an open, trusted, virtual and federated computing environment to execute scientific applications and to store, share and reuse research data across borders and scientific disciplines. Additionally, scientific services are becoming increasingly data-intensive, not only in terms of computationally intensive tasks but also in terms of storage resources. To meet those resource demands, computing paradigms such as High-Performance Computing (HPC) and Cloud Computing are applied to e-science applications. However, adapting applications and services to these paradigms is a challenging task, commonly requiring a deep knowledge of the underlying technologies, which often constitutes a barrier to their uptake by scientists. In this context, EOSC-Synergy, a collaborative project involving more than 20 institutions from eight European countries pooling their knowledge and experience to enhance EOSC’s capabilities and capacities, aims to bring EOSC closer to the scientific communities. This article provides a summary analysis of the adaptations made in the ten thematic services of EOSC-Synergy to embrace this paradigm. These services are grouped into four categories: Earth Observation, Environment, Biomedicine, and Astrophysics. The analysis leads to the identification of commonalities, best practices and common requirements, regardless of the thematic area of the service. Experience gained from the thematic services can be transferred to new services for the adoption of the EOSC ecosystem framework. The article makes several recommendations for the integration of thematic services in the EOSC ecosystem regarding Authentication and Authorization (mainly federated regional or thematic solutions based on EduGAIN), FAIR data and metadata preservation solutions (both for cataloguing and data preservation, such as EUDAT’s B2SHARE), cloud platform-agnostic resource management services (such as the Infrastructure Manager) and workload management solutions. This work was supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 857647, EOSC-Synergy, European Open Science Cloud - Expanding Capacities by building Capabilities. Moreover, this work is partially funded by grant No 2015/24461-2, São Paulo Research Foundation (FAPESP). Francisco Brasileiro is a CNPq/Brazil researcher (grant 308027/2020-5). Peer reviewed. Article signed by 20 authors: Amanda Calatrava, Hernán Asorey, Jan Astalos, Alberto Azevedo, Francesco Benincasa, Ignacio Blanquer, Martin Bobak, Francisco Brasileiro, Laia Codó, Laura del Cano, Borja Esteban, Meritxell Ferret, Josef Handl, Tobias Kerzenmacher, Valentin Kozlov, Aleš Křenek, Ricardo Martins, Manuel Pavesio, Antonio Juan Rubio-Montero, Juan Sánchez-Ferrero. Postprint (published version).
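
    To make the recommendation on cloud platform-agnostic resource management more concrete, the sketch below shows how a thematic service might submit a declarative infrastructure description to an orchestrator such as the Infrastructure Manager. The endpoint URL, authorization header and the RADL-like description are illustrative assumptions, not the actual Infrastructure Manager API; consult its documentation for the real interface.

# Hedged sketch: submitting a declarative, provider-agnostic infrastructure
# description to an orchestration service. Endpoint, header and description
# format are assumptions for illustration, not a real API.
import requests

ORCHESTRATOR_URL = "https://orchestrator.example.org/infrastructures"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <oidc-access-token>"}              # hypothetical auth scheme

# Minimal description of one small Linux VM (RADL-like, for illustration).
INFRA_DESCRIPTION = """
network net ()
system node (
    cpu.count >= 2 and
    memory.size >= 4g and
    disk.0.os.name = 'linux'
)
deploy node 1
"""

def deploy_infrastructure():
    """POST the description and return the identifier of the new deployment."""
    resp = requests.post(ORCHESTRATOR_URL, data=INFRA_DESCRIPTION,
                         headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.text.strip()  # assumed to return the infrastructure URI/ID

if __name__ == "__main__":
    print(deploy_infrastructure())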

    Cloud resource orchestration in the multi-cloud landscape: a systematic review of existing frameworks

    The number of both service providers operating in the cloud market and customers consuming cloud-based services is constantly increasing, proving that the cloud computing paradigm has successfully delivered on its potential. Nevertheless, the unceasing growth of the cloud market poses hard challenges for its participants. On the provider side, the capability of orchestrating resources in order to maximise profits without failing to meet customers’ expectations is a matter of concern. On the customer side, efficient resource selection from a plethora of similar services advertised by a multitude of providers is an open question. In such a multi-cloud landscape, several research initiatives advocate the employment of software frameworks (namely, cloud resource orchestration frameworks, or CROFs) capable of orchestrating the heterogeneous resources offered by a multitude of cloud providers in a way that best suits the customer’s needs. The objective of this paper is to provide the reader with a systematic review and comparison of the most relevant CROFs found in the literature, as well as to highlight the open issues in multi-cloud computing that the research community needs to address in the near future.
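
    As a toy illustration of the resource-selection step such frameworks perform on the customer side, the sketch below matches a request against offers from several providers. The provider names, prices and the "cheapest feasible offer" rule are invented for the example and do not come from the surveyed frameworks.

# Illustrative sketch only: a toy resource-selection step of the kind a cloud
# resource orchestration framework (CROF) performs when matching a customer
# request against offers from multiple providers. All values are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    vcpus: int
    memory_gb: int
    price_per_hour: float

@dataclass
class Request:
    vcpus: int
    memory_gb: int

def select_offer(request: Request, offers: List[Offer]) -> Optional[Offer]:
    """Return the cheapest offer that satisfies the request, or None if none does."""
    feasible = [o for o in offers
                if o.vcpus >= request.vcpus and o.memory_gb >= request.memory_gb]
    return min(feasible, key=lambda o: o.price_per_hour, default=None)

if __name__ == "__main__":
    offers = [
        Offer("provider-a", 4, 16, 0.20),
        Offer("provider-b", 8, 32, 0.35),
        Offer("provider-c", 2, 8, 0.10),
    ]
    print(select_offer(Request(vcpus=4, memory_gb=16), offers))  # cheapest feasible: provider-a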