9 research outputs found

    Doctor of Philosophy

    Dissertation: The increase in computational power of supercomputers is enabling complex scientific phenomena to be simulated at ever-increasing resolution and fidelity. With these simulations routinely producing large volumes of data, performing efficient I/O at this scale has become a very difficult task. Large-scale parallel writes are challenging due to the complex interdependencies between I/O middleware and hardware. Analytics-appropriate reads are traditionally hindered by bottlenecks in I/O access. Moreover, the two components of I/O, data generation from simulations (writes) and data exploration for analysis and visualization (reads), have substantially different data access requirements. Parallel writes, performed on supercomputers, often deploy aggregation strategies to permit large-sized contiguous access. Analysis and visualization tasks, usually performed on computationally modest resources, require fast access to localized subsets or multiresolution representations of the data. This dissertation tackles the problem of parallel I/O while bridging the gap between large-scale writes and analytics-appropriate reads. The focus of this work is to develop an end-to-end adaptive-resolution data movement framework that provides efficient I/O, while supporting the full spectrum of modern HPC hardware. This is achieved by developing technology for highly scalable and tunable parallel I/O, applicable to both traditional parallel data formats and multiresolution data formats, which are directly appropriate for analysis and visualization. To demonstrate the efficacy of the approach, a novel library (PIDX) is developed that is highly tunable and capable of adaptive-resolution parallel I/O to a multiresolution data format. Adaptive-resolution storage and I/O, which allow subsets of a simulation to be accessed at varying spatial resolutions, can yield significant improvements to both storage performance and I/O time. The library provides a set of parameters that controls the storage format and the nature of data aggregation across the network; further, a machine learning-based model is constructed that tunes these parameters for maximum throughput. This work is empirically demonstrated by showing parallel I/O scaling up to 768K cores within a framework flexible enough to handle adaptive-resolution I/O.
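
    As an illustration of the kind of locality-preserving, multiresolution ordering PIDX targets, the sketch below computes a plain Morton (Z-order) index by bit interleaving for a small 3D block. This is only a simplified stand-in for the hierarchical Z (HZ) ordering mentioned in these abstracts, not the actual PIDX algorithm; the grid size and bit depth are assumptions for illustration.

```python
# Minimal sketch: Morton (Z-order) index by bit interleaving for a 3D grid.
# A simplified illustration of locality-preserving orderings such as the
# hierarchical Z (HZ) ordering used by PIDX, not the PIDX implementation.

def part_bits(v: int, bits: int) -> int:
    """Spread the low `bits` bits of v so two zero bits separate each bit."""
    out = 0
    for i in range(bits):
        out |= ((v >> i) & 1) << (3 * i)
    return out

def morton3d(x: int, y: int, z: int, bits: int = 8) -> int:
    """Interleave coordinate bits: x, y, z occupy bits 0, 1, 2 of each triple."""
    return part_bits(x, bits) | (part_bits(y, bits) << 1) | (part_bits(z, bits) << 2)

# Example: order the points of a small 4x4x4 block by Z-order index.
points = [(x, y, z) for z in range(4) for y in range(4) for x in range(4)]
points.sort(key=lambda p: morton3d(*p, bits=2))
print(points[:8])  # the first eight indices form a compact 2x2x2 sub-block
```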

    Efficient data restructuring and aggregation for I/O acceleration in PIDX

    Pre-print: Hierarchical, multiresolution data representations enable interactive analysis and visualization of large-scale simulations. One promising application of these techniques is to store high-performance computing simulation output in a hierarchical Z (HZ) ordering that translates data from a Cartesian coordinate scheme to a one-dimensional array ordered by locality at different resolution levels. However, when the dimensions of the simulation data are not an even power of 2, parallel HZ ordering produces sparse memory and network access patterns that inhibit I/O performance. This work presents a new technique for parallel HZ ordering of simulation datasets that restructures simulation data into large (power of 2) blocks to facilitate efficient I/O aggregation. We perform both weak and strong scaling experiments using the S3D combustion application on both Cray XE6 (65,536 cores) and IBM Blue Gene/P (131,072 cores) platforms. We demonstrate that data can be written in hierarchical, multiresolution format with performance competitive with that of native data-ordering methods.
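
    A minimal sketch of the restructuring idea: a process-local box (offset and extent per dimension) is mapped onto the aligned power-of-two blocks it overlaps, so that later aggregation can operate on dense, regularly sized chunks. The block size and box geometry below are illustrative assumptions, not values from the paper.

```python
# Sketch: map a process-local box (offset + count per dimension) onto the
# aligned power-of-two blocks it overlaps, so aggregation can work on dense,
# regular chunks. Block size and box geometry are illustrative assumptions.

def next_power_of_two(n: int) -> int:
    p = 1
    while p < n:
        p *= 2
    return p

def overlapped_blocks(offset, count, block=32):
    """Yield origins of the aligned `block`-sized 3D blocks a box overlaps."""
    ranges = []
    for o, c in zip(offset, count):
        first = (o // block) * block
        last = ((o + c - 1) // block) * block
        ranges.append(range(first, last + 1, block))
    for x in ranges[0]:
        for y in ranges[1]:
            for z in ranges[2]:
                yield (x, y, z)

# Example: a 30x30x30 box starting at (10, 50, 70) inside a 100^3 domain
# (not a power of two) touches these 32^3 blocks:
print(sorted(overlapped_blocks((10, 50, 70), (30, 30, 30))))
print("padded global edge:", next_power_of_two(100))  # 128
```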

    On the energy footprint of I/O management in Exascale HPC systems

    The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, the details of how these approaches affect energy consumption have not been studied yet. Therefore, this paper aims to explore how much energy a supercomputer consumes while running scientific simulations when adopting various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To do so, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results obtained on the French Grid’5000 platform highlight the differences among these three approaches and illustrate how various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. Our proposed model gives hints to pre-select the most energy-efficient I/O approach for a particular simulation on a particular HPC system and therefore provides a step towards energy-efficient HPC simulations in Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
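
    To illustrate the general shape of such an energy estimate (not the model from the paper), the sketch below splits a run into computation and I/O phases with assumed per-node power draws, and compares a blocking I/O scheme against hypothetical dedicated I/O nodes. All numbers are placeholder assumptions.

```python
# Illustrative sketch of an energy estimate for a simulation run, split into
# computation and I/O phases. Power values and phase times are assumed
# placeholders, not the calibrated model from the paper.

def estimate_energy_joules(nodes: int,
                           t_compute_s: float,
                           t_io_s: float,
                           p_compute_w: float = 250.0,   # assumed per-node power while computing
                           p_io_w: float = 180.0) -> float:  # assumed per-node power during I/O
    """Energy ~= nodes * (P_compute * T_compute + P_io * T_io)."""
    return nodes * (p_compute_w * t_compute_s + p_io_w * t_io_s)

# 64 compute nodes run for one hour; I/O adds 10 minutes if it blocks computation.
blocking_io = estimate_energy_joules(nodes=64, t_compute_s=3600, t_io_s=600)
# With 4 extra dedicated I/O nodes, compute nodes never stall, but the extra
# nodes draw power for the whole run.
dedicated_io = estimate_energy_joules(nodes=64, t_compute_s=3600, t_io_s=0) \
             + estimate_energy_joules(nodes=4, t_compute_s=0, t_io_s=3600)
print(f"blocking I/O: {blocking_io/1e6:.1f} MJ, dedicated I/O nodes: {dedicated_io/1e6:.1f} MJ")
```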

    Characterization and modeling of PIDX parallel I/O for performance optimization

    Pre-print: Parallel I/O library performance can vary greatly in response to user-tunable parameter values such as aggregator count, file count, and aggregation strategy. Unfortunately, manual selection of these values is time-consuming and dependent on characteristics of the target machine, the underlying file system, and the dataset itself. Some characteristics, such as the amount of memory per core, can also impose hard constraints on the range of viable parameter values. In this work we address these problems by using machine learning techniques to model the performance of the PIDX parallel I/O library and select appropriate tunable parameter values. We characterize both the network and I/O phases of PIDX on a Cray XE6 as well as an IBM Blue Gene/P system. We use the results of this study to develop a machine learning model for parameter space exploration and performance prediction.
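
    As a hedged sketch of the general approach (a learned performance model used to rank tunable parameter settings), the code below fits a small random-forest regressor on fabricated (aggregator count, file count) measurements and picks the setting with the highest predicted throughput. The features, training data, and model choice are assumptions for illustration, not the characterization from the paper.

```python
# Sketch: fit a regression model on previously measured I/O runs and use it to
# rank untried (aggregator_count, file_count) settings by predicted throughput.
# The training data below is fabricated for illustration only.
from sklearn.ensemble import RandomForestRegressor

# Each row: (aggregator_count, file_count); target: measured throughput in GiB/s.
X = [[16, 1], [32, 1], [64, 1], [16, 8], [32, 8], [64, 8], [128, 8], [128, 32]]
y = [1.2, 2.1, 2.9, 1.8, 3.0, 3.6, 3.4, 2.7]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict over a small candidate grid and pick the best setting.
candidates = [[a, f] for a in (16, 32, 64, 128, 256) for f in (1, 8, 32)]
best = max(candidates, key=lambda c: model.predict([c])[0])
print("predicted-best (aggregators, files):", best)
```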

    Efficient Task-Local I/O Operations of Massively Parallel Applications

    Applications on current large-scale HPC systems use enormous numbers of processing elements for their computation and have access to large amounts of main memory for their data. Nevertheless, they still need file-system access to maintain program and application data persistently. Characteristic I/O patterns that produce a high load on the file system often occur during access to checkpoint and restart files, which have to be stored frequently so that the application can be restarted after program termination or system failure. On large-scale HPC systems with distributed memory, each application task will often perform such I/O individually by creating task-local file objects on the file system. At large scale, these I/O patterns impose substantial stress on the metadata management components of the I/O subsystem. For example, the simultaneous creation of thousands of task-local files in the same directory can cause delays of several minutes. Similar metadata contention also occurs at the startup of dynamically linked applications, while searching for library files, and induces a comparably high metadata load on the file system. In such load scenarios, even mid-scale applications suffer startup delays of ten minutes or more. Therefore, dynamic linking and loading is nowadays not applied on large HPC systems, although dynamic linking has many advantages for managing large code bases. The reason for these limitations is that POSIX I/O and the dynamic loader are implemented as serial components of the operating system and do not take advantage of the parallel nature of the I/O operations. To avoid the above bottlenecks, this work describes two novel approaches for integrating locality awareness (e.g., through aggregation or caching) into the serial I/O operations of parallel applications. The underlying methods are implemented in two tools, SIONlib and Spindle, which exploit knowledge of application parallelism to coordinate access to file-system objects. In addition, the applied methods also use knowledge of the underlying I/O subsystem structure, the parallel file system configuration, and the network between the HPC system and the I/O system to optimize application I/O. Both tools add layers between the parallel application and the POSIX-based standard interfaces of the operating system for I/O and dynamic loading, eliminating the need to modify the underlying system software. SIONlib is already applied in several applications, including PEPC, muphi, and MP2C, to implement efficient checkpointing. In addition, SIONlib is integrated in the performance-analysis tools Scalasca and Score-P to efficiently store and read trace data. Recent benchmarks on the Blue Gene/Q in Jülich demonstrate that SIONlib solves the metadata problem at large scale by running efficiently with up to 1.8 million tasks while maintaining high I/O bandwidths of 60-80% of the file-system peak and negligible file-creation time. The scalability of Spindle was demonstrated by running the Pynamic benchmark, a proxy benchmark for a real application, at large scale on a cluster at Lawrence Livermore National Laboratory. The results show that the startup of dynamically linked applications is now feasible on more than 15,000 tasks, while the overhead of Spindle remains nearly constant and low. With SIONlib and Spindle, this work demonstrates how the scalability of operating system components can be improved without modifying them and without changing the I/O patterns of applications. In this way, SIONlib and Spindle represent prototype implementations of functionality needed by next-generation runtime systems.
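
    The following is a conceptual sketch (not the SIONlib API) of why aggregating task-local data into one shared file relieves the metadata servers: each MPI rank writes its chunk at a rank-derived offset of a single file, so the file system sees one file creation instead of thousands. The chunk size and file name are assumptions for illustration.

```python
# Conceptual sketch of task-local I/O without task-local files: every rank
# writes its own fixed-size chunk into one shared file at a rank-derived
# offset, so only a single file is created on the file system.
# This mimics the idea behind SIONlib, not its actual API.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

CHUNK_BYTES = 1 << 20                      # assumed 1 MiB per task
data = np.full(CHUNK_BYTES, rank, dtype=np.uint8)

fh = MPI.File.Open(comm, "checkpoint.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * CHUNK_BYTES, data)  # collective write at rank offset
fh.Close()
```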

    Evaluation of standards and techniques for retrieval of geospatial raster data : a study for the ICOS Carbon Portal

    Geospatial raster data represent the world as a surface whose geographic information varies continuously. These data can be grid-based data, like Digital Terrain Elevation Data (DTED), and geographic image data, like multispectral images. The Integrated Carbon Observation System (ICOS) European project was launched to measure greenhouse gas emissions. The outputs of these measurements are data in both geospatial vector (raw data) and raster (elaborated data) formats. Using these measurements, scientists create flux maps over Europe. The flux maps are important for many groups, such as researchers, stakeholders and public users. In this regard, the ICOS Carbon Portal (ICOS CP) looks for a suitable way to make the ICOS elaborated data available to all of these groups in an online environment. Among other goals, ICOS CP aims to design a geoportal that lets users download the modelled geospatial raster data in different formats and geographic extents. The Open Geospatial Consortium (OGC) Web Coverage Service (WCS) defines a geospatial web service to deliver geospatial raster data, such as flux maps, in any desired subset in space and time. This study presents two techniques to design a geoportal compatible with WCS. The geoportal should be able to retrieve the ICOS data in both NetCDF and GeoTIFF formats as well as allow retrieval of subsets in time and space. In the first technique, a geospatial raster database (Rasdaman) is used to store the data; the Rasdaman OGC component (Petascope) acts as the server tool that connects the database to the client side through the WCS protocol. In the second technique, an advanced file-based system (NetCDF) is used to maintain the data, and THREDDS acts as the WCS server that ships the data to the client side through the WCS protocol. Both techniques gave good results for downloading the data in the desired formats and subsets.
    Geospatial data refer to objects or phenomena located at a specific place in space, in relation to other objects; they are linked to geometry and topology. Geospatial raster data are a subset of geospatial data that represent the world as a surface whose geographic information varies continuously, for example grid-based data like Digital Terrain Elevation Data (DTED) and geographic image data like multispectral images. The challenges of working with geospatial raster data are related to three important components: (I) storage and management systems, (II) standardized services, and (III) the software interface for geospatial raster data. Each component plays its own part in improving interaction with geospatial raster data. A proper storage and management system makes it easy to classify, search and retrieve the data. A standardized service is needed to unify, download, process and share these data among users. The last challenge is choosing a suitable software interface to support the standardized services on the web. The aim is to enable users to download geospatial raster data in different formats and in any desired space and time subsets. To this end, two different techniques are evaluated to connect the three main components. In the first technique, a geospatial raster database is used to store the data; this database is then connected to the software interface through the standardized service. In the second technique, an advanced file-based system is used to maintain the data, and the server ships the data to the software interface through the standardized service. Although both techniques have their own difficulties, they returned good results: users can download the data in the desired formats on the web, for any specific area and any specific time.
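
    As an illustration of the standardized service side, the sketch below issues a WCS 2.0.1 GetCoverage request asking for a spatial and temporal subset of a coverage in NetCDF. The endpoint URL, coverage identifier, and axis labels are hypothetical placeholders, since they depend on the actual server configuration.

```python
# Sketch: request a space/time subset of a raster coverage through a WCS 2.0.1
# GetCoverage call. The server URL, coverage id, and axis labels are
# hypothetical placeholders; real values depend on the deployed service.
import requests

WCS_ENDPOINT = "https://example.org/rasdaman/ows"   # hypothetical endpoint
params = [
    ("service", "WCS"),
    ("version", "2.0.1"),
    ("request", "GetCoverage"),
    ("coverageId", "eu_co2_flux"),                   # hypothetical coverage id
    ("subset", "Lat(45.0,55.0)"),
    ("subset", "Long(5.0,15.0)"),
    ("subset", 'ansi("2014-01-01","2014-01-31")'),
    ("format", "application/netcdf"),
]

response = requests.get(WCS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()
with open("flux_subset.nc", "wb") as f:
    f.write(response.content)
```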

    Improving the throughput of an atmospheric model using an asynchronous parallel I/O server

    Get PDF
    This master's thesis analyzes the I/O process of IFS. It presents an easy-to-use development that integrates an asynchronous parallel I/O server called XIOS into IFS. Moreover, different optimization techniques are applied in the integration to minimize the I/O overhead of the IFS execution.
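
    To illustrate the idea of an asynchronous I/O server such as XIOS (without reproducing its actual interface), the sketch below splits MPI ranks into compute ranks and one dedicated I/O server rank: compute ranks post non-blocking sends of their output fields and continue immediately, while the server rank receives and writes them. Rank counts, tags, and field sizes are assumptions for illustration.

```python
# Conceptual sketch of an asynchronous I/O server: the last rank only receives
# and writes data, while compute ranks send their fields non-blockingly and
# keep computing. This mimics the idea behind servers like XIOS, not its API.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
IO_SERVER = size - 1                      # assumed: one dedicated I/O rank
FIELD_LEN = 1024                          # assumed field size

if rank == IO_SERVER:
    # Receive one field per compute rank and append it to a file.
    with open("output.bin", "wb") as f:
        for _ in range(size - 1):
            field = np.empty(FIELD_LEN, dtype=np.float64)
            comm.Recv(field, source=MPI.ANY_SOURCE, tag=7)
            f.write(field.tobytes())
else:
    field = np.full(FIELD_LEN, rank, dtype=np.float64)
    req = comm.Isend(field, dest=IO_SERVER, tag=7)  # non-blocking send
    # ... continue computing the next timestep here ...
    req.Wait()                                      # complete before reusing the buffer
```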