8 research outputs found

    Modeling and Implementation of an Asynchronous Approach to Integrating HPC and Big Data Analysis

    With the emergence of exascale computing and big data analytics, many important scientific applications require the integration of computationally intensive modeling and simulation with data-intensive analysis to accelerate scientific discovery. In this paper, we create an analytical model to steer the optimization of the end-to-end time-to-solution for the integrated computation and data analysis. We also design and develop an intelligent data broker to efficiently intertwine the computation stage and the analysis stage to practically achieve the optimal time-to-solution predicted by the analytical model. We perform experiments on both synthetic applications and real-world computational fluid dynamics (CFD) applications. The experiments show that the analytical model exhibits an average relative error of less than 10%, and that application performance can be improved by up to 131% for the synthetic programs and by up to 78% for the real-world CFD application.
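
    The paper's actual model is not reproduced in the abstract, but the core idea — comparing a sequential coupling of computation and analysis against an asynchronous, broker-mediated pipeline — can be sketched as follows. This is a minimal illustration under assumed per-stage costs; the function names and the simple max-based pipeline formula are assumptions, not the paper's model.

```python
# Hedged sketch: per-stage costs are assumed inputs, and the max-based
# pipeline formula is an illustration, not the paper's actual model.

def sequential_tts(n_steps: int, t_sim: float, t_xfer: float, t_ana: float) -> float:
    # Each step runs simulation, data transfer, and analysis back to back.
    return n_steps * (t_sim + t_xfer + t_ana)

def pipelined_tts(n_steps: int, t_sim: float, t_xfer: float, t_ana: float) -> float:
    # With a broker overlapping the stages, the steady-state cost per step
    # is the slowest stage; the others contribute only pipeline fill/drain.
    bottleneck = max(t_sim, t_xfer, t_ana)
    fill_drain = (t_sim + t_xfer + t_ana) - bottleneck
    return fill_drain + n_steps * bottleneck

if __name__ == "__main__":
    args = dict(n_steps=100, t_sim=2.0, t_xfer=0.5, t_ana=1.5)
    print(f"sequential: {sequential_tts(**args):.1f} s")  # 400.0 s
    print(f"pipelined:  {pipelined_tts(**args):.1f} s")   # 202.0 s
```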

    Designing a Parallel Memory-Aware Lattice Boltzmann Algorithm on Manycore Systems

    The lattice Boltzmann method (LBM) is an important computational fluid dynamics (CFD) approach to solving the Navier-Stokes equations and simulating complex fluid flows. LBM is also well known as a memory-bound problem whose performance is limited by memory access time on modern computer systems. In this paper, we design and develop both sequential and parallel memory-aware algorithms to optimize the performance of LBM. The new memory-aware algorithms enhance data reuse across multiple time steps to further improve the performance of the original and fused LBM. We theoretically analyze the algorithms to provide insight into how data reuse occurs in each algorithm. Finally, we conduct experiments and detailed performance analysis on two different manycore systems. Based on the experimental results, the parallel memory-aware LBM algorithm outperforms the fused LBM by up to 292% on the Intel Haswell system when using 28 cores, and by up to 302% on the Intel Skylake system when using 48 cores.
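
    The memory-aware idea of reusing data across multiple time steps can be illustrated with a toy 1D stencil (a stand-in for LBM's sweep, which the abstract does not spell out). Merging two time steps into one sweep roughly doubles the work done per element loaded, i.e. it raises arithmetic intensity; the coefficients below belong to a simple 3-point average, not an actual LBM kernel, and NumPy's temporaries make this conceptual rather than a faithful cache model.

```python
import numpy as np

def one_step(u: np.ndarray) -> np.ndarray:
    # One sweep per time step: every step streams the whole array
    # through memory, so two steps cost two full traversals.
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def two_steps_merged(u: np.ndarray) -> np.ndarray:
    # Two time steps fused into a single sweep: the composed 5-point
    # stencil does two steps' worth of flops per element loaded,
    # which is the data-reuse effect memory-aware LBM exploits.
    return (0.0625 * np.roll(u, 2) + 0.25 * np.roll(u, 1) + 0.375 * u
            + 0.25 * np.roll(u, -1) + 0.0625 * np.roll(u, -2))

u = np.random.rand(1 << 20)
assert np.allclose(one_step(one_step(u)), two_steps_merged(u))
```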

    Accelerating complex modeling workflows in CyberWater using on-demand HPC/Cloud resources

    Workflow management systems (WMSs) are commonly used to organize and automate sequences of tasks as workflows to accelerate scientific discoveries. During complex workflow modeling, a local interactive workflow environment is desirable, as users usually rely on their rich local environments for fast prototyping and refinement before they consider using more powerful computing resources. However, existing WMSs do not simultaneously support local interactive workflow environments and HPC resources. In this paper, we present an on-demand access mechanism to remote HPC resources from desktop/laptop-based workflow management software to compose, monitor, and analyze scientific workflows in the CyberWater project. CyberWater is an open-data and open-modeling software framework for environmental and water communities. In this work, we extend the open-model, open-data design of CyberWater with on-demand HPC access capability. In particular, we design and implement the LaunchAgent library, which can be integrated into the local desktop environment to allow on-demand usage of remote resources for hydrology-related workflows. LaunchAgent manages authentication to remote resources, prepares computationally intensive or data-intensive tasks as batch jobs, submits jobs to remote resources, and monitors the quality of service for the users. LaunchAgent interacts seamlessly with other existing components in CyberWater, which can now provide both a feature-rich desktop software experience and increased computing power through on-demand HPC/Cloud usage. In our evaluations, we demonstrate how a hydrology workflow consisting of both local and remote tasks can be constructed, and show that the added on-demand HPC/Cloud usage helps speed up hydrology workflows while allowing intuitive workflow configuration and execution through a desktop graphical user interface.
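
    As a rough sketch of what such an on-demand launch agent might do — the class and method names below are hypothetical, not CyberWater's actual LaunchAgent API — one could authenticate over SSH, submit a Slurm batch script, and poll the queue:

```python
import time
import paramiko

class MiniLaunchAgent:
    """Hypothetical minimal agent; not CyberWater's LaunchAgent API."""

    def __init__(self, host: str, user: str):
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.client.connect(host, username=user)  # assumes key-based auth

    def submit(self, script_path: str) -> str:
        # Submit a Slurm batch script on the remote system; --parsable
        # makes sbatch print just the job id.
        _, out, _ = self.client.exec_command(f"sbatch --parsable {script_path}")
        return out.read().decode().strip()

    def wait(self, job_id: str, poll_seconds: float = 30.0) -> None:
        # Poll squeue until the job leaves the queue (finished or failed).
        while True:
            _, out, _ = self.client.exec_command(f"squeue -h -j {job_id}")
            if not out.read().decode().strip():
                return
            time.sleep(poll_seconds)
```

    A real agent would also stage input files, retrieve results, and check job exit status; those steps are omitted here.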

    Accelerated in-Situ Workflow of Memory-Aware Lattice Boltzmann Simulation and Analysis

    As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis are a key factor in the success of scientific campaigns. As one such campaign to study fluid behaviors, computational fluid dynamics (CFD) simulations have progressed rapidly over the past several decades and revolutionized our lives in many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at lower computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis. My research starts with how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and the corresponding data analysis. Detailed performance analysis using visualized tracing then shows that even the fastest existing workflow system still has 42% overhead. Next, I develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grain task parallelism of full asynchrony with pipelining. In addition, I design a novel concurrent data transfer optimization method, which employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system channels. It significantly reduces data transfer time by up to 32%, especially when the simulation application is stalled. An investigation of the speedup using OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of the Zipper system is verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores; Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times. After minimizing the end-to-end time of the LBM workflow, I turn to accelerating the memory-bound LBM algorithms themselves. We first design novel parallel 2D memory-aware LBM algorithms, and I then extend the design to 3D memory-aware LBM algorithms that combine single-copy distribution, single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis, and experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
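
    The dual-channel transfer idea can be approximated with a shared work queue that two channel workers drain concurrently — a simplification of true work stealing, where each worker owns its own deque and steals from others. The channel functions below are placeholders, not Zipper's implementation:

```python
import queue
import threading

def send_over_network(chunk: bytes) -> None:
    pass  # placeholder for a network transfer (e.g. sockets or RDMA)

def write_to_pfs(chunk: bytes) -> None:
    pass  # placeholder for a parallel-file-system write (e.g. Lustre staging)

def drain(chunks: queue.Queue, channel) -> None:
    # Each channel worker pulls the next available chunk; a faster or
    # idle channel naturally takes on more of the remaining work.
    while True:
        try:
            chunk = chunks.get_nowait()
        except queue.Empty:
            return
        channel(chunk)
        chunks.task_done()

chunks = queue.Queue()
for _ in range(64):
    chunks.put(bytes(1024))  # toy 1 KiB chunks of simulation output

workers = [threading.Thread(target=drain, args=(chunks, ch))
           for ch in (send_over_network, write_to_pfs)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```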

    Delayed Colorectal Cancer Care During COVID-19 Pandemic (DECOR-19): Global Perspective from an International Survey

    Background: The widespread nature of coronavirus disease 2019 (COVID-19) has been unprecedented. We sought to analyze its global impact with a survey on colorectal cancer (CRC) care during the pandemic. Methods: The impact of COVID-19 on the preoperative assessment, elective surgery, and postoperative management of CRC patients was explored by a 35-item survey, which was distributed worldwide to members of surgical societies with an interest in CRC care. Respondents were divided into two comparator groups: 1) the 'delay' group, whose CRC care was affected by the pandemic, and 2) the 'no delay' group, whose CRC practice was unaltered. Results: A total of 1,051 respondents from 84 countries completed the survey. No substantial differences in demographics were found between the 'delay' (745, 70.9%) and 'no delay' (306, 29.1%) groups. Suspension of multidisciplinary team meetings, staff members quarantined or relocated to COVID-19 units, units fully dedicated to COVID-19 care, and personal protective equipment not being readily available were factors significantly associated with delays in endoscopy, radiology, surgery, and histopathology, and with prolonged chemoradiation-therapy-to-surgery intervals. In the 'delay' group, 48.9% of respondents reported a change in the initial surgical plan and 26.3% reported a shift from elective to urgent operations. Recovery of CRC care was associated with the status of the outbreak. Practicing in COVID-free units, no change in operative slots, and staff members not being relocated to COVID-19 units were statistically associated with unaltered CRC care in the 'no delay' group, while geographical distribution was not. Conclusions: Global changes in diagnostic and therapeutic CRC practices were evident. Changes were associated with differences in health-care delivery systems, hospital preparedness, resource availability, and local COVID-19 prevalence rather than with geographical factors. Strategic planning is required to optimize CRC care.