
    The advanced cyberinfrastructure research and education facilitators virtual residency: Toward a national cyberinfrastructure workforce

    An Advanced Cyberinfrastructure Research and Education Facilitator (ACI-REF) works directly with researchers to advance the computing- and data-intensive aspects of their research, helping them make effective use of Cyberinfrastructure (CI). The University of Oklahoma (OU) is leading a national "virtual residency" program to prepare ACI-REFs to provide CI facilitation to the diverse populations of Science, Technology, Engineering and Mathematics (STEM) researchers that they serve. Until recently, CI Facilitators have had no education or training program; the Virtual Residency program addresses this national need by providing (1) training, specifically (a) summer workshops and (b) alerts to third-party training opportunities, and (2) a community of CI Facilitators, enabled by (c) a biweekly conference call and (d) a mailing list.

    Campus Bridging: Software & Software Service Issues Workshop Report

    This report summarizes the discussion at and findings of a workshop on the software and services aspects of cyberinfrastructure as they apply to campus bridging. The workshop took a broad view of software and services, including services in the business sense of the word, such as user support, in addition to information technology services. Specifically, the workshop addressed the following two goals:
    * Suggest common elements of software stacks widely usable across the nation/world to promote interoperability/economy of scale; and
    * Suggest policy documents that any research university should have in place.
    The preparation of this report and related documents was supported by several sources, including:
    * The National Science Foundation through Grant 0829462 (Bradley C. Wheeler, PI; Geoffrey Brown, Craig A. Stewart, Beth Plale, Dennis Gannon, Co-PIs).
    * Indiana University Pervasive Technology Institute (http://pti.iu.edu/), for funding staff providing logistical support of the task force activities, writing and editorial staff, and layout and production of the final report document.
    * RENCI (the Renaissance Computing Institute, http://www.renci.org/), which supported this workshop and report by generously providing the time and effort of John McGee.
    * Texas A&M University (http://www.tamu.edu), which supported this workshop and report by generously providing the time and effort of Guy Almes.

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Summary of the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1)

    Challenges related to the development, deployment, and maintenance of reusable software for science are a growing concern. Many scientists’ research increasingly depends on the quality and availability of the software upon which their work is built. To highlight some of these issues and share experiences, the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1) was held in November 2013 in conjunction with the SC13 Conference. The workshop featured keynote presentations and a large number (54) of solicited extended abstracts that were grouped into three themes and presented via panels. A set of collaborative notes of the presentations and discussion was taken during the workshop. Unique perspectives were captured about issues such as comprehensive documentation, development and deployment practices, software licenses, and career paths for developers. Attribution systems that account for evidence of software contribution and impact were also discussed. These include mechanisms such as Digital Object Identifiers, publication of “software papers”, and the use of online systems, for example source code repositories like GitHub. This paper summarizes the issues and shared experiences that were discussed, including cross-cutting issues and use cases. It joins a nascent literature seeking to understand what drives software work in science, and how it is impacted by the reward systems of science. These incentives can determine the extent to which developers are motivated to build software for the long term and for the use of others, and whether to work collaboratively or separately. It also explores community building, leadership, and dynamics in relation to successful scientific software.

    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX---a distributed function as a service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers. (Accepted to the ACM Symposium on High-Performance Parallel and Distributed Computing, HPDC 2020.)
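
    As an illustration of the FaaS model described above, here is a minimal sketch of registering and remotely invoking a function with the funcX Python SDK, following the client interface (FuncXClient, register_function, run, get_result) as documented around the HPDC 2020 release; the endpoint UUID is a placeholder for an endpoint you have deployed, and the example function is hypothetical.

        # Minimal funcX sketch (assumes the funcX SDK is installed and a funcX
        # endpoint is running; the endpoint UUID below is a placeholder).
        import time
        from funcx.sdk.client import FuncXClient

        def double(x):
            # Trivial example function; funcX serializes it and runs it on the endpoint.
            return 2 * x

        fxc = FuncXClient()                      # authenticates via Globus Auth
        func_id = fxc.register_function(double)  # register once, reuse the returned UUID
        endpoint_id = "00000000-0000-0000-0000-000000000000"  # placeholder endpoint UUID

        task_id = fxc.run(21, endpoint_id=endpoint_id, function_id=func_id)

        # Results are fetched asynchronously; poll until the task completes.
        # (A real client would distinguish "still pending" from "failed".)
        while True:
            try:
                print(fxc.get_result(task_id))   # -> 42
                break
            except Exception:
                time.sleep(2)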

    Processing and Managing the Kepler Mission's Treasure Trove of Stellar and Exoplanet Data

    The Kepler telescope launched into orbit in March 2009, initiating NASA's first mission to discover Earth-size planets orbiting Sun-like stars. Kepler simultaneously collected data for 160,000 target stars at a time over its four-year mission, identifying over 4700 planet candidates, 2300 confirmed or validated planets, and over 2100 eclipsing binaries. While Kepler was designed to discover exoplanets, the long-term, ultra-high photometric precision measurements it achieved made it a premier observational facility for stellar astrophysics, especially in the field of asteroseismology, and for variable stars, such as RR Lyraes. The Kepler Science Operations Center (SOC) was developed at NASA Ames Research Center to process the data acquired by Kepler, from pixel-level calibrations all the way to identifying transiting planet signatures and subjecting them to a suite of diagnostic tests to establish or break confidence in their planetary nature. Detecting small, rocky planets transiting Sun-like stars presents a variety of daunting challenges, from achieving an unprecedented photometric precision of 20 parts per million (ppm) on 6.5-hour timescales to supporting the science operations, management, processing, and repeated reprocessing of the accumulating data stream. This paper describes how the design of the SOC meets these varied challenges, discusses the architecture of the SOC and how the SOC pipeline is operated and run on the NAS Pleiades supercomputer, and summarizes the most important pipeline features addressing the multiple computational, image, and signal processing challenges posed by Kepler.
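
    The 6.5-hour timescale above corresponds roughly to the transit duration of an Earth analog crossing a Sun-like star, so the 20 ppm figure describes the scatter of a light curve once it is averaged over transit-length windows. The short sketch below is a back-of-the-envelope illustration of that kind of metric on a synthetic light curve; it is not the SOC's actual algorithm (the pipeline computes a Combined Differential Photometric Precision in a far more sophisticated way), and the cadence and noise values are assumptions chosen for the example.

        # Toy estimate of photometric scatter on 6.5-hour timescales, in ppm.
        # Illustrative only: synthetic white-noise light curve, not Kepler data,
        # and not the SOC's CDPP algorithm.
        import numpy as np

        CADENCE_MIN = 29.4            # approximate Kepler long-cadence sampling, minutes
        BIN_HOURS = 6.5
        per_bin = int(round(BIN_HOURS * 60 / CADENCE_MIN))   # ~13 cadences per window

        rng = np.random.default_rng(0)
        flux = 1.0 + 80e-6 * rng.standard_normal(4000)       # assumed 80 ppm per-cadence noise

        rel = flux / np.median(flux) - 1.0                    # normalized, zero-centered flux
        n = len(rel) // per_bin
        binned = rel[: n * per_bin].reshape(n, per_bin).mean(axis=1)

        # Scatter of the 6.5-hour averages, expressed in parts per million.
        print(f"{1e6 * np.std(binned):.0f} ppm scatter on {BIN_HOURS}-hour windows")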

    National Science Foundation Advisory Committee for Cyberinfrastructure Task Force on Campus Bridging Final Report

    The mission of the National Science Foundation (NSF) Advisory Committee on Cyberinfrastructure (ACCI) is to advise the NSF as a whole on matters related to vision and strategy regarding cyberinfrastructure (CI). In early 2009 the ACCI charged six task forces with making recommendations to the NSF in strategic areas of cyberinfrastructure: Campus Bridging; Cyberlearning and Workforce Development; Data and Visualization; Grand Challenges; High Performance Computing (HPC); and Software for Science and Engineering. Each task force was asked to offer advice on the basis of which the NSF would modify existing programs and create new programs. This document is the final, overall report of the Task Force on Campus Bridging. Supported by the National Science Foundation.

    Workshop Report: Campus Bridging: Reducing Obstacles on the Path to Big Answers 2015

    For the researcher whose experiments require large-scale cyberinfrastructure, significant challenges stand in the way of successful completion. These challenges are broad and go far beyond the simple fact that there are not enough large-scale resources available; these solvable issues range from a lack of documentation written for a non-technical audience to a need for greater consistency in system configuration, software configuration, and software availability on the large-scale resources at national-tier supercomputing centers, among a number of other challenges. Campus Bridging is a relatively young discipline that aims to mitigate these issues for the academic end user, for whom the entire process can feel like a path composed entirely of obstacles. The solutions to these problems must by necessity include multiple approaches, focusing not only on the end user but also on the system administrators responsible for supporting these resources, as well as on the systems themselves. These system resources include not only those at the supercomputing centers but also those that exist at the campus or departmental level, and even the personal computing devices the researcher uses to complete his or her work. This workshop report compiles the results of a half-day workshop held in conjunction with IEEE Cluster 2015 in Chicago, IL. Supported by NSF XSEDE.

    Mid-Scale Instrumentation: Regional Facilities to Address Grand Challenges in Chemistry

    A regional workshop sponsored by the National Science Foundation, Arlington, Virginia, September 29-30, 2016. To determine what needs and opportunities might exist for mid-scale instrumentation (MSI), two workshops were held in the fall of 2016 to explore opportunities within the discipline that could be provided by such investment. One workshop was convened to explore the need for co-localization of existing instrumentation at regional or cyber-enabled facilities (addressed in this report, “Mid-Scale Instrumentation: Regional Facilities to Address Grand Challenges in Chemistry”). In this report, we identify different areas where investment in such MSI facilities would be highly beneficial. These appear as six “grand challenges” that can be summarized as follows:
    1. Structure and dynamics at interfaces
    2. Highly parallel chemical synthesis and characterization
    3. Transient intermediates
    4. New science arising from the characterization of heterogeneous mixtures
    5. Multi-scale dynamics of complex systems: integrating transport with reaction
    6. Structure-function relationship in disordered and/or heterogeneous systems