    Using Containers to Create More Interactive Online Training and Education Materials

    Containers are excellent hands-on learning environments for computing topics because they are customizable, portable, and reproducible. The Cornell University Center for Advanced Computing has developed the Cornell Virtual Workshop on high-performance computing topics for many years, and we have always sought to make the materials as rich and interactive as possible. Toward the goal of building a more hands-on experimental learning experience directly into web-based online training environments, we developed the Cornell Container Runner Service (CCRS), which allows online content developers to build container-based interactive edit-and-run commands directly into their web pages. Using containers along with CCRS has the potential to increase learner engagement and outcomes. (Comment: 10 pages, 3 figures, PEARC '20 conference paper.)
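
    To make the edit-and-run idea concrete, below is a minimal Python sketch that executes a learner-submitted snippet inside a disposable Docker container. It only illustrates the general pattern and is not the CCRS implementation; the image name, resource limits, and helper function are assumptions.

    # Minimal sketch: run learner-submitted code in a throwaway container.
    # Assumes a local Docker daemon and a generic "python:3.11-slim" image;
    # CCRS itself is a web service and is not reproduced here.
    import subprocess
    import tempfile
    import pathlib

    def run_in_container(code: str, timeout: int = 30) -> str:
        """Write the snippet to a temp dir, mount it read-only, and run it."""
        with tempfile.TemporaryDirectory() as workdir:
            script = pathlib.Path(workdir) / "snippet.py"
            script.write_text(code)
            result = subprocess.run(
                [
                    "docker", "run", "--rm",
                    "--network", "none",          # no network for untrusted code
                    "--memory", "256m",           # cap memory use
                    "-v", f"{workdir}:/work:ro",  # mount the snippet read-only
                    "python:3.11-slim",
                    "python", "/work/snippet.py",
                ],
                capture_output=True, text=True, timeout=timeout,
            )
        return result.stdout + result.stderr

    if __name__ == "__main__":
        print(run_in_container("print('hello from a container')"))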

    Implementing a Loosely-Coupled Integrated Assessment Model in the Pegasus Workflow Management System

    Integrated assessment models (IAMs) are commonly used to explore the interactions between different modeled components of socio-environmental systems (SES). Most IAMs are built in a tightly-coupled framework so that the complex interactions between the models can be implemented efficiently and straightforwardly within the framework. However, tightly-coupled frameworks make it more difficult to change individual models within the IAM because of the high level of integration between the models. Prioritizing flexibility over computational efficiency, the IAM presented here is built in a loosely-coupled framework and implemented in the Pegasus Workflow Management System. The modular nature of loosely-coupled systems allows each component model within the IAM to be easily exchanged for another component model from the same domain, provided it offers the same input/output interface. This flexibility allows researchers to experiment with different models for each SES component and facilitates smoother upgrades between versions of the independently developed component models.
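
    The sketch below illustrates the loose-coupling contract in plain Python: each component model is an opaque executable that communicates only through agreed input/output files, so a model can be swapped for another that honours the same files. The component and file names are placeholders, and the actual IAM is orchestrated by Pegasus rather than by a script like this.

    # Sketch of the loose-coupling idea: components exchange plain files,
    # so any model honouring the same input/output contract can be swapped in.
    # The executables and file names below are hypothetical placeholders.
    import subprocess

    STEPS = [
        # (command, inputs, outputs) -- the file names are the only coupling.
        (["./climate_model", "forcings.csv", "climate.nc"], ["forcings.csv"], ["climate.nc"]),
        (["./crop_model", "climate.nc", "yields.csv"], ["climate.nc"], ["yields.csv"]),
        (["./econ_model", "yields.csv", "prices.csv"], ["yields.csv"], ["prices.csv"]),
    ]

    def run_chain(steps):
        for cmd, inputs, outputs in steps:
            print(f"step {cmd[0]}: reads {inputs}, writes {outputs}")
            try:
                subprocess.run(cmd, check=True)   # each model is a black box
            except FileNotFoundError:
                print(f"  (placeholder executable {cmd[0]} not present; skipping)")

    if __name__ == "__main__":
        run_chain(STEPS)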

    Spannotation: Enhancing Semantic Segmentation for Autonomous Navigation with Efficient Image Annotation

    Spannotation is an open-source, user-friendly image annotation tool for semantic segmentation, developed specifically for autonomous navigation tasks. This study evaluates Spannotation, demonstrating its effectiveness in generating accurate segmentation masks for various environments such as agricultural crop rows, off-road terrain, and urban roads. Whereas other popular annotation tools require about 40 seconds to annotate an image for semantic segmentation in a typical navigation task, Spannotation achieves a similar result in about 6.03 seconds. The tool's utility was validated by using its generated masks to train a U-Net model, which achieved a validation accuracy of 98.27% and a mean Intersection over Union (mIOU) of 96.66%. Its accessibility, simple annotation process, and no-cost availability have all contributed to the adoption of Spannotation, evident from its download count of 2098 (as of February 25, 2024) since its launch. Future enhancements of Spannotation aim to broaden its application to complex navigation scenarios and incorporate additional automation functionality. Given its increasing popularity and promising potential, Spannotation stands as a valuable resource in autonomous navigation and semantic segmentation. For detailed information and access to Spannotation, readers are encouraged to visit the project's GitHub repository at https://github.com/sof-danny/spannotation. (Comment: 8 pages, 6 figures, 1 table, 1 pseudocode algorithm, 55 references.)
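
    As an illustration of the reported metric (not code from the Spannotation repository), the following NumPy sketch computes mean Intersection over Union between a predicted mask and a ground-truth mask such as one produced with Spannotation; the toy masks are synthetic.

    # Minimal sketch: mean Intersection over Union (mIOU) between a predicted
    # segmentation mask and a ground-truth mask. Not taken from Spannotation.
    import numpy as np

    def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
        ious = []
        for c in range(num_classes):
            pred_c, truth_c = pred == c, truth == c
            union = np.logical_or(pred_c, truth_c).sum()
            if union == 0:
                continue  # class absent in both masks; skip it
            inter = np.logical_and(pred_c, truth_c).sum()
            ious.append(inter / union)
        return float(np.mean(ious))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        truth = rng.integers(0, 2, size=(64, 64))   # toy binary navigable/non-navigable mask
        pred = truth.copy()
        pred[:4] = 1 - pred[:4]                     # corrupt a few rows of the prediction
        print(f"mIOU = {mean_iou(pred, truth, num_classes=2):.4f}")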

    Design considerations for workflow management systems use in production genomics research and the clinic

    The changing landscape of genomics research and clinical practice has created a need for computational pipelines capable of efficiently orchestrating complex analysis stages while handling large volumes of data across heterogeneous computational environments. Workflow Management Systems (WfMSs) are the software components employed to fill this gap. This work provides an approach to, and a systematic evaluation of, key features of popular bioinformatics WfMSs in use today: Nextflow, CWL, and WDL (and some of their executors), along with Swift/T, a workflow manager commonly used in large-scale physics applications. We employed two use cases, a variant-calling genomic pipeline and a scalability-testing framework, both run locally, on an HPC cluster, and in the cloud. This allowed us to evaluate the four WfMSs in terms of language expressiveness, modularity, scalability, robustness, reproducibility, interoperability, ease of development, and adoption and usage in research labs and healthcare settings. The question this article tries to answer is: which WfMS should be chosen for a given bioinformatics application, regardless of analysis type? The choice of a WfMS is a function of both its intrinsic language and engine features. Within bioinformatics, where analysts are a mix of dry- and wet-lab scientists, the choice is also governed by collaborations and adoption within large consortia, and by the technical support provided by the WfMS team and community. As the community and its needs continue to evolve along with computational infrastructure, WfMSs will also evolve, especially those with permissive licenses that allow commercial use. In much the same way as the dataflow paradigm and containerization are now well understood to be very useful in bioinformatics applications, we will continue to see innovation in tools and utilities for other purposes, such as big data technologies, interoperability, and provenance.

    Workflow models for heterogeneous distributed systems

    The role of data in modern scientific workflows is becoming more and more crucial. The unprecedented amount of data available in the digital era, combined with recent advances in Machine Learning and High-Performance Computing (HPC), has let computers surpass human performance in a wide range of fields, such as Computer Vision, Natural Language Processing, and Bioinformatics. However, a solid data management strategy is essential for key aspects like performance optimisation, privacy preservation, and security. Most modern programming paradigms for Big Data analysis adhere to the principle of data locality: moving computation closer to the data to remove transfer-related overheads and risks. Still, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different steps of a complex workflow. The contribution of this dissertation is twofold. First, it defines a novel methodology for distributed modular applications, allowing topology-aware scheduling and data management while separating business logic, data dependencies, parallel patterns, and execution environments. In addition, it introduces computational notebooks as a high-level and user-friendly interface to this new kind of workflow, aiming to flatten the learning curve and improve the adoption of the methodology. Each of these contributions is accompanied by a full-fledged, open-source implementation, which has been used for evaluation purposes and allows the interested reader to experience the related methodology first-hand. The validity of the proposed approaches has been demonstrated on a total of five real scientific applications in the domains of Deep Learning, Bioinformatics, and Molecular Dynamics Simulation, executing them on large-scale mixed cloud–HPC infrastructures.
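
    The following Python sketch illustrates the underlying idea of topology-aware placement: each step of a modular workflow is bound to an execution location, and data movement across locations is made explicit rather than assumed away by data locality. The step names, locations, and planner are hypothetical and do not reproduce the dissertation's open-source implementation.

    # Sketch: bind each workflow step to an execution location and list the
    # data items that must cross location boundaries. Names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str
        location: str                 # e.g. "cloud" or "hpc"
        inputs: list = field(default_factory=list)
        outputs: list = field(default_factory=list)

    WORKFLOW = [
        Step("preprocess", "cloud", inputs=["raw_data"], outputs=["features"]),
        Step("train", "hpc", inputs=["features"], outputs=["model"]),
        Step("report", "cloud", inputs=["model"], outputs=["summary"]),
    ]

    def plan_transfers(steps):
        """Print the cross-location transfers implied by the placement."""
        produced_at = {}
        for step in steps:
            for item in step.inputs:
                src = produced_at.get(item)
                if src is not None and src != step.location:
                    print(f"transfer {item}: {src} -> {step.location}")
            for item in step.outputs:
                produced_at[item] = step.location

    if __name__ == "__main__":
        plan_transfers(WORKFLOW)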

    Navigating Diverse Datasets in the Face of Uncertainty

    When exploring big volumes of data, one of the challenging aspects is their diversity of origin. Multiple files that have not yet been ingested into a database system may contain information of interest to a researcher, who must curate, understand, and sieve their content before being able to extract knowledge. Performance is one of the greatest difficulties in exploring these datasets. On the one hand, examining non-indexed, unprocessed files can be inefficient. On the other hand, any processing before the data are understood introduces latency and potentially unnecessary work if the chosen schema matches the data poorly. We have surveyed the state of the art and, fortunately, there exist multiple proposed solutions for handling data in situ performantly. Another major difficulty is matching files from multiple origins, since their schema and layout may not be compatible or properly documented. Most surveyed solutions overlook this problem, especially for numeric, uncertain data, as is typical in fields like astronomy. The main objective of our research is to assist data scientists during the exploration of unprocessed, numerical, raw data distributed across multiple files based solely on its intrinsic distribution. In this thesis, we first introduce the concept of Equally-Distributed Dependencies (EDDs), which provides the foundations to match this kind of dataset. We propose PresQ, a novel algorithm that finds quasi-cliques on hypergraphs based on their expected statistical properties. The probabilistic approach of PresQ can be successfully exploited to mine EDDs between diverse datasets when the underlying populations can be assumed to be the same. Finally, we propose a two-sample statistical test based on Self-Organizing Maps (SOMs). This method can outperform, in terms of power, other classifier-based two-sample tests, being in some cases comparable to kernel-based methods, with the advantage of being interpretable. Both PresQ and the SOM-based statistical test can provide insights that drive serendipitous discoveries.
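
    The sketch below conveys the occupancy-based flavour of such a two-sample test: partition the pooled feature space, count how each sample populates the cells, and compare the counts with a chi-squared test. For brevity it substitutes a k-means partition for the Self-Organizing Map, so it approximates the idea rather than reproducing the proposed method.

    # Sketch of an occupancy-based two-sample test. A k-means partition stands
    # in for the SOM grid used in the thesis, so this is only an approximation
    # of the proposed method.
    import numpy as np
    from scipy.cluster.vq import kmeans2
    from scipy.stats import chi2_contingency

    def occupancy_two_sample_test(a: np.ndarray, b: np.ndarray, cells: int = 16):
        pooled = np.vstack([a, b])
        _, labels = kmeans2(pooled, cells, minit="points")   # stand-in for the SOM projection
        la, lb = labels[: len(a)], labels[len(a):]
        counts = np.array([np.bincount(la, minlength=cells),
                           np.bincount(lb, minlength=cells)])
        counts = counts[:, counts.sum(axis=0) > 0]           # drop empty cells
        chi2, p_value, _, _ = chi2_contingency(counts)
        return chi2, p_value

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a = rng.normal(0.0, 1.0, size=(500, 3))
        b = rng.normal(0.2, 1.0, size=(500, 3))              # slightly shifted population
        chi2, p = occupancy_two_sample_test(a, b)
        print(f"chi2 = {chi2:.1f}, p = {p:.3g}")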

    XSEDE: The Extreme Science and Engineering Discovery Environment (OAC 15-48562) Interim Project Report 13: Report Year 5, Reporting Period 2 August 1, 2020 – October 31, 2020

    This is the Interim Project Report 13 (IPR13) for the NSF XSEDE project. It includes Key Performance Indicator data and project highlights for Reporting Year 5, Reporting Period 2 (August 1 – October 31, 2020). NSF OAC 15-48562.

    Proceedings of the European Conference on Agricultural Engineering AgEng2021

    This proceedings book results from the AgEng2021 Agricultural Engineering Conference, held under the auspices of the European Society of Agricultural Engineers in an online format hosted by the University of Évora, Portugal, from 4 to 8 July 2021. The book contains the full papers of a selection of abstracts that formed the basis for the oral presentations and posters presented at the conference. Presentations were distributed across eleven thematic areas: Artificial Intelligence, data processing and management; Automation, robotics and sensor technology; Circular Economy; Education and Rural development; Energy and bioenergy; Integrated and sustainable Farming systems; New application technologies and mechanisation; Post-harvest technologies; Smart farming / Precision agriculture; Soil, land and water engineering; Sustainable production in Farm buildings.

    Matrix-free finite-element computations at extreme scale and for challenging applications

    For numerical computations based on finite element methods (FEM), it is common practice to assemble the system matrix related to the discretized system and to pass this matrix to an iterative solver. However, the assembly step can be costly, and the matrix might become locally dense, e.g., in the context of high-order, high-dimensional, or strongly coupled multicomponent FEM, leading to high costs when applying the matrix due to limited memory bandwidth on modern CPU- and GPU-based hardware. Matrix-free algorithms are a means of accelerating FEM computations on HPC systems by applying the effect of the system matrix without assembling it. Despite convincing arguments for matrix-free computations as a means of improving performance, their usage still tends to be an exception at the time of writing this thesis, not least because they have not yet proven their applicability in all areas of computational science, e.g., solid mechanics. In this thesis, we further develop a state-of-the-art matrix-free framework for high-order FEM computations with a focus on preconditioning and adopt it in novel application fields. In the context of high-order FEM, we develop means of improving cache efficiency by interleaving cell loops with vector updates, which we use to increase the throughput of preconditioned conjugate gradient methods and of block smoothers based on additive Schwarz methods; we also propose an algorithm for the fast application of hanging-node constraints in 3D for up to 137 refinement configurations. We develop efficient geometric and polynomial multigrid solvers with optimized transfer operators, whose performance is investigated experimentally in detail in the context of locally refined meshes, indicating the superiority of global-coarsening algorithms. We apply the developed solvers in the context of novel stage-parallel implicit Runge–Kutta methods and demonstrate the benefit of stage-parallel solvers in decreasing the time to solution at the scaling limit. Novel challenging application fields of matrix-free computations include high-dimensional computational plasma physics, solid-state-sintering simulations with a high and dynamically changing number of strongly coupled components, and coupled multiphysics problems with evaluation and integration at arbitrary points. In the context of these fields, we detail computational challenges, propose modified versions of the standard matrix-free algorithms for high-performance computing, and discuss preconditioning-related topics. The efficiency of the derived algorithms on the node level and at extreme scales is demonstrated experimentally on SuperMUC-NG, one of Germany's leading supercomputers, with up to 150k processes and by solving systems with up to 5 × 10^12 unknowns. Such problem sizes would not be conceivable for equivalent matrix-based algorithms. The major achievements of this thesis allow larger simulations to run faster and more efficiently, enabling progress and opening new possibilities for a range of application fields in computational science.
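
    The toy Python sketch below conveys the matrix-free principle on a deliberately simple problem: the action of a 1D finite-difference Laplacian is supplied to a conjugate gradient solver as a function, so the matrix is never assembled. It is only an illustration of the concept and bears no relation in scale or sophistication to the high-order finite-element framework developed in the thesis.

    # Toy illustration of the matrix-free idea: apply the action of a discretized
    # operator (a 1D Laplacian stencil) without assembling its matrix, and hand
    # that action to an iterative solver.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 1_000
    h = 1.0 / (n + 1)

    def apply_laplacian(u: np.ndarray) -> np.ndarray:
        """y = A u for the stencil (-1, 2, -1)/h^2 with homogeneous Dirichlet BCs."""
        y = 2.0 * u
        y[:-1] -= u[1:]
        y[1:] -= u[:-1]
        return y / h**2

    A = LinearOperator((n, n), matvec=apply_laplacian)  # operator defined only by its action
    b = np.ones(n)                                      # right-hand side f = 1
    u, info = cg(A, b)
    # Exact solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1 - x)/2, max 0.125.
    print("CG converged" if info == 0 else f"CG info = {info}", "| max(u) =", u.max())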