
    Campus Bridging Birds-of-a-Feather Session at TeraGrid 2011 Conference

    Richard Knepper attended the TeraGrid 2011 conference in Salt Lake City, Utah, and, together with representatives from the TeraGrid Campus Champions program, conducted a Birds-of-a-Feather session on the XSEDE Campus Bridging initiative and the Campus Champions program. Attendees discussed the Campus Bridging and Campus Champions plans for XSEDE. This research was supported in part by the National Science Foundation through XSEDE resources provided by the XSEDE Campus Bridging program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. More information is available at: http://xsede.org

    Book Reviews


    Workshop Report: Campus Bridging: Reducing Obstacles on the Path to Big Answers 2015

    For the researcher whose experiments require large-scale cyberinfrastructure, significant challenges stand in the way of successful completion. These challenges are broad and go far beyond the simple fact that not enough large-scale resources are available; these solvable issues range from a lack of documentation written for a non-technical audience to a need for greater consistency in system configuration, software configuration, and software availability on the large-scale resources at national-tier supercomputing centers, among a number of other challenges. Campus Bridging is a relatively young discipline that aims to mitigate these issues for the academic end user, for whom the entire process can feel like a path composed entirely of obstacles. The solutions to these problems must by necessity include multiple approaches, with focus not only on the end user but also on the system administrators responsible for supporting these resources, as well as on the systems themselves. These system resources include not only those at the supercomputing centers but also those at the campus or departmental level, and even the personal computing devices the researcher uses to complete his or her work. This workshop report compiles the results of a half-day workshop held in conjunction with IEEE Cluster 2015 in Chicago, IL. NSF XSED

    Genome Analysis: Birds of a Feather

    This Birds-of-a-Feather session presents the national infrastructure serving genome science, including NCGAS, iPlant, XSEDE, and networks, detailing goals, resources, and projects. This research is supported by NSF Award 1062432 – ABI Development: National Center for Genome Analysis Support (NCGAS). This research was also supported by a generous grant from the Lilly Endowment, Inc. to the Indiana University Pervasive Technology Institute

    Leveraging Your Local Resources and National Cyberinfrastructure Resources without Tears

    Compute resources for conducting research span a wide range: researchers' personal computers, servers in labs, campus clusters and condos, regional resource-sharing models, and national cyberinfrastructure. Researchers agree that not enough resources are available on a broad scale, and significant barriers exist to moving analyses from smaller- to larger-scale cyberinfrastructure. The XSEDE Campus Bridging program disseminates several tools that help researchers and campus IT administrators reduce barriers to the effective use of national cyberinfrastructure for research. Tools for data management, job submission and steering, best practices for building and administering clusters, and common documentation and training activities all support a flexible environment that allows cyberinfrastructure to be as simple to use as a plug-and-play peripheral. In this paper and the accompanying poster we provide an overview of Campus Bridging, including specific challenges and solutions to the problem of making the computerized parts of research easier. We focus particularly on tools that facilitate management of campus computing clusters and integration of such clusters with the national cyberinfrastructure
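    A recurring step in moving an analysis from a laptop to a larger cluster is wrapping it in a scheduler batch script. The sketch below is purely illustrative and is not one of the actual Campus Bridging tools; it assumes a Slurm-style scheduler, and the job names and commands are hypothetical.

```python
# Illustrative sketch: build a minimal Slurm-style batch script for
# submitting a local analysis to a larger cluster. Not part of the
# actual XSEDE Campus Bridging tool suite; names are hypothetical.

def make_batch_script(job_name, command, nodes=1, walltime="01:00:00"):
    """Return a minimal Slurm batch script as a string."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --nodes={nodes}",
        f"#SBATCH --time={walltime}",
        "",
        command,  # the actual workload to run on the cluster
    ]
    return "\n".join(lines)

# Hypothetical usage: a 4-node assembly job.
print(make_batch_script("rna-seq", "srun ./assemble.sh", nodes=4))
```

    In practice such a script would be handed to the scheduler (e.g. with `sbatch`); generating it programmatically is one way tools can hide scheduler details from the end user.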

    XCBC and XNIT - tools for cluster implementation and management in research and training

    The Extreme Science and Engineering Discovery Environment (XSEDE) has created a suite of software designed to facilitate the local management of computer clusters for scientific research and the integration of such clusters with the US open research national cyberinfrastructure. This suite is distributed in two ways. One distribution, the XSEDE-compatible basic cluster (XCBC), is a Rocks Roll that performs an "all at once, from scratch" installation of core components. The other, the XSEDE National Integration Toolkit (XNIT), allows specific tools to be downloaded and installed piecemeal, as appropriate, on existing clusters. In this paper we describe the software included in XCBC and XNIT, and examine the use of XCBC installed on the LittleFe cluster design created by the Earlham College Cluster Computing Group as a teaching tool demonstrating the deployment of XCBC from Rocks. In addition, we show that the commercial Limulus HPC200 deskside cluster is a viable off-the-shelf system that can be adapted into an XSEDE-like cluster through use of the XNIT repository. We demonstrate that both approaches to cluster management – using XCBC to build clusters from scratch and using XNIT to expand the capabilities of existing clusters – aid administrators in running clusters that are valuable locally while facilitating integration and interoperability of campus clusters with national cyberinfrastructure. We also demonstrate that very economical clusters can be useful tools in education and research. This document was developed with support from National Science Foundation (NSF) grant OCI-1053575. The LittleFe project has been funded in part by a grant from Intel, Inc. to Charlie Peck as well as NSF grants 1258604 and ACI-1347089. This research has also been supported in part by the Indiana University Pervasive Technology Institute, which was established with a major grant from the Lilly Endowment, Inc
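    Since XNIT is delivered as a package repository added to an existing cluster, the administrator-facing step amounts to registering the repository and installing packages from it. The sketch below is a hypothetical illustration of that idea; the repository URL is a placeholder, not the real XNIT location.

```python
# Illustrative sketch: render a yum/dnf .repo stanza for an add-on
# package repository, the general mechanism by which a toolkit like
# XNIT is layered onto an existing cluster. The URL is a placeholder.

def make_repo_stanza(name, baseurl, enabled=True):
    """Return a .repo file stanza as a string."""
    return (
        f"[{name}]\n"
        f"name={name}\n"
        f"baseurl={baseurl}\n"
        f"enabled={1 if enabled else 0}\n"
        "gpgcheck=1\n"  # verify package signatures
    )

# Placeholder URL for illustration only.
print(make_repo_stanza("xnit", "https://example.org/xnit/el6/x86_64"))
```

    An administrator would drop such a stanza into `/etc/yum.repos.d/` and then install individual tools as ordinary packages, which is what makes the piecemeal, on-existing-cluster distribution model workable.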

    XSEDE Campus Bridging – Cluster software distribution strategy and tactics

    This document is both a public document and an internal working document intended to define XSEDE strategies related to XSEDE's cluster build software distribution project. It is part strategy document, part tactical plan. XSEDE is supported by National Science Foundation Grant 1053575 (XSEDE: eXtreme Science and Engineering Discovery Environment)

    Sustained Software for Cyberinfrastructure - Analyses of Successful Efforts with a Focus on NSF-Funded Software

    Reliable software that provides needed functionality is essential for a comprehensive, balanced, and flexible distributed cyberinfrastructure (CI) that, in turn, supports science and engineering applications. The purpose of this study was to understand what factors lead to software projects being well sustained over the long run, focusing on software created with funding from the US National Science Foundation (NSF) and/or used by NSF-funded researchers. We surveyed NSF-funded researchers and performed in-depth studies of software projects that have been sustained over many years. Successful projects generally used open-source licenses and employed good software engineering and testing practices; however, many projects that have not been well sustained over time also meet these criteria. The features that stood out in successful projects were deeply committed leadership and some sort of user forum or conference held at least annually. In some cases, software project leaders have employed multiple financial strategies over the course of a decades-old project. Such well-sustained software is used in major distributed CI projects that support thousands of users, and it is critical to the operation of major distributed CI facilities in the US. The findings of our study identify some characteristics of software that is relevant to the NSF-supported research community and that has been sustained over many years

    Rockhopper: a True HPC System with Cloud Concepts

    Presented at IEEE Cluster 2013 in Indianapolis, IN. A number of services for scientific computing based on cloud resources have recently drawn significant attention in both the research and infrastructure-provider communities. Most cloud resources currently available lack true high-performance characteristics, such as high-speed interconnects or storage. Researchers studying cloud systems have pointed out that many cloud services do not provide service-level agreements that meet the needs of the research community. Furthermore, the lack of location information provided to the user and the shared nature of the systems' use may create risk for users, in the event that their data is moved to an unknown location with an unknown level of security. Indiana University and Penguin Computing have partnered to create a system, Rockhopper, that addresses many of these issues. It is a true high-performance resource, with on-demand allocations and control and tracking of jobs, housed in Indiana University's high-security datacenter facility. Rockhopper allows researchers to conduct their work flexibly under a number of use cases while also serving as an extension of cyberinfrastructure that scales from the researcher's local environment all the way up through large national resources. We describe the architecture and ideas behind the creation of the system, present a use case for campus bridging, and provide a typical example of system usage. In a comparison of Rockhopper to a cloud-based system, we run the Trinity RNA-seq software against a number of datasets on both Rockhopper and Amazon's EC2 service
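    The comparison described above boils down to running the same workload on two systems and comparing wall-clock times. The sketch below shows that general shape with a toy stand-in workload; it is not the paper's actual Trinity benchmark, and the function names are hypothetical.

```python
# Illustrative sketch of a wall-clock comparison harness: run the same
# workload callable and measure elapsed time. The toy_assembly function
# is a stand-in; the paper's real comparison ran Trinity RNA-seq jobs
# on Rockhopper and on Amazon EC2.
import time

def time_workload(workload, *args):
    """Return (result, elapsed_seconds) for one run of the workload."""
    start = time.perf_counter()
    result = workload(*args)
    return result, time.perf_counter() - start

def toy_assembly(reads):
    # Stand-in workload: just concatenate the input "reads".
    return "".join(reads)

_, elapsed = time_workload(toy_assembly, ["ACGT"] * 10000)
print(f"toy assembly took {elapsed:.6f} s")
```

    Running the same harness on each system, over the same datasets, yields the per-system timings that a Rockhopper-versus-EC2 comparison would report.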