11 research outputs found

    Pilot Job Accounting and Auditing in Open Science Grid

    The Grid accounting and auditing mechanisms were designed under the assumption that users would submit their jobs directly to the Grid gatekeepers. However, many groups are starting to use pilot-based systems, in which users submit jobs to a centralized queue and the pilot infrastructure subsequently transfers them to the Grid resources. While this approach greatly improves the user experience, it disrupts the established accounting and auditing procedures. Open Science Grid deploys gLExec on the worker nodes to keep the pilot-related accounting and auditing information, and centralizes the accounting collection with GRATIA.
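    The pull model described above can be sketched in a few lines. This is a minimal, hypothetical illustration (an in-memory list stands in for the central queue, and the accounting record mimics what a gLExec/GRATIA-style setup would capture); it is not code from any real pilot system.

    ```python
    import time

    # Hypothetical stand-in for the centralized pilot-system queue;
    # real pilot infrastructures pull work from a remote service.
    central_queue = [
        {"user": "alice", "task": lambda: sum(range(1000))},
        {"user": "bob",   "task": lambda: max(range(500))},
    ]

    accounting_log = []  # one record per payload, GRATIA-style

    def pilot_run(queue, log):
        """A pilot lands on a worker node, then pulls and runs payloads.

        Without a gLExec-like identity switch, the site only sees the
        pilot's own identity; recording the payload owner here restores
        the per-user accounting that direct submission used to provide.
        """
        while queue:
            job = queue.pop(0)
            start = time.time()
            result = job["task"]()
            log.append({
                "payload_user": job["user"],  # the real job owner, not the pilot
                "wall_seconds": time.time() - start,
                "result": result,
            })

    pilot_run(central_queue, accounting_log)
    ```

    The key design point is that the accounting entry is written per payload, not per pilot, which is exactly the information a site loses when all work arrives under a single pilot identity.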

    The Tera-Gridiron: A Natural Turf for High-Throughput Computing

    Abstract: TeraGrid resources are often used when high-performance computing is required. We describe our experiences in using TeraGrid resources in a high-throughput manner, generating a significant number of CPU cycles over a long time span. In particular, we discuss using TeraGrid resources as part of a larger computational grid to perform computations in an ongoing attempt to solve an open problem in mathematical coding theory: the Football Pool Problem.
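    The Football Pool Problem asks for the smallest set of ternary words of length n such that every ternary word lies within Hamming distance 1 of some word in the set (a covering code). A toy sketch of the covering condition, using the small case n = 2 where the repetition code {00, 11, 22} is known to be optimal:

    ```python
    from itertools import product

    def hamming(u, v):
        """Number of positions where two words differ."""
        return sum(a != b for a, b in zip(u, v))

    def is_covering(code, n, radius=1):
        """Check that every ternary word of length n lies within the
        given Hamming radius of some codeword -- the covering condition
        behind the Football Pool Problem."""
        return all(
            any(hamming(word, c) <= radius for c in code)
            for word in product(range(3), repeat=n)
        )

    # For n = 2, three codewords suffice to cover all 9 ternary words.
    code = [(0, 0), (1, 1), (2, 2)]
    print(is_covering(code, 2))  # → True
    ```

    Exhaustive checks like this scale as 3^n per candidate code, which is why the open cases require the kind of large-scale, long-running computation the abstract describes.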

    Wrangling distributed computing for high-throughput environmental science: An introduction to HTCondor.

    Biologists and environmental scientists now routinely solve computational problems that were unimaginable a generation ago. Examples include processing geospatial data, analyzing -omics data, and running large-scale simulations. Conventional desktop computing cannot handle these tasks when they are large, and high-performance computing is neither always available nor the most appropriate solution for all computationally intense problems. High-throughput computing (HTC) is one method for handling computationally intense research. In contrast to high-performance computing, which uses a single "supercomputer," HTC can distribute tasks over many computers (e.g., idle desktop computers, dedicated servers, or cloud-based resources). HTC facilities exist at many academic and government institutes and are relatively easy to create from commodity hardware. Additionally, consortia such as Open Science Grid facilitate HTC, and commercial entities sell cloud-based solutions for researchers who lack HTC at their institution. We provide an introduction to HTC for biologists and environmental scientists. Our examples from biology and the environmental sciences use HTCondor, an open source HTC system.
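    In HTCondor, distributing many independent tasks is typically done with a single submit description file. A minimal sketch, generated here from Python for illustration: the executable, input naming, and log paths are hypothetical, while `$(Process)` and `queue` are standard HTCondor submit-file syntax for enumerating jobs.

    ```python
    # Build a submit description that queues 100 independent jobs,
    # each receiving a different input file via the $(Process) macro
    # (which expands to 0, 1, ..., 99).
    submit_description = """\
    executable   = analyze.sh
    arguments    = sample_$(Process).csv
    output       = logs/job_$(Process).out
    error        = logs/job_$(Process).err
    log          = logs/batch.log
    request_cpus = 1
    queue 100
    """

    with open("batch.sub", "w") as f:
        f.write(submit_description)
    # Then: condor_submit batch.sub
    ```

    This one-file-per-task pattern is what makes HTC a good fit for the embarrassingly parallel workloads, such as per-sample -omics analyses, that the abstract mentions.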