MPI-Vector-IO: Parallel I/O and Partitioning for Geospatial Vector Data
Geospatial datasets are growing in size, complexity, and heterogeneity. High-performance systems are needed to analyze such data and produce actionable insights in an efficient manner. For polygonal, a.k.a. vector, datasets, operations such as I/O, data partitioning, communication, and load balancing become challenging in a cluster environment. In this work, we present MPI-Vector-IO, a parallel I/O library that we have designed using MPI-IO specifically for partitioning and reading irregular vector data formats such as Well Known Text. It makes MPI aware of spatial data and spatial primitives, and provides support for spatial data types embedded within collective computation and communication using the MPI message-passing library. These abstractions, along with parallel I/O support, are useful for parallel Geographic Information System (GIS) application development on HPC platforms.
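To make the partitioning idea concrete, here is a minimal mpi4py sketch, not
the library's actual API: each rank collectively reads a byte range of a Well
Known Text file with MPI-IO, then shifts its partition to newline-delimited
record boundaries so that no geometry is split across ranks. The file name,
overlap size, and function name are illustrative assumptions.

from mpi4py import MPI

def read_wkt_partition(path, overlap=4096):
    """Return the WKT records whose first byte falls in this rank's chunk."""
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    fh = MPI.File.Open(comm, path, MPI.MODE_RDONLY)
    fsize = fh.Get_size()
    chunk = fsize // size
    offset = rank * chunk
    # Over-read by `overlap` bytes so the record straddling the chunk
    # boundary can be completed (assumes records shorter than `overlap`).
    length = fsize - offset if rank == size - 1 else min(chunk + overlap,
                                                         fsize - offset)
    buf = bytearray(length)
    fh.Read_at_all(offset, buf)  # collective MPI-IO read of this rank's range
    fh.Close()

    data = bytes(buf)
    # Skip the tail of a record begun in the previous rank's chunk.
    start = 0 if rank == 0 else data.find(b"\n") + 1
    # Include the record that starts inside this chunk but ends beyond it.
    if rank == size - 1:
        end = len(data)
    else:
        nl = data.find(b"\n", chunk)
        end = len(data) if nl == -1 else nl + 1
    return [ln.decode() for ln in data[start:end].splitlines() if ln]

if __name__ == "__main__":
    records = read_wkt_partition("polygons.wkt")  # hypothetical input file
    print(MPI.COMM_WORLD.Get_rank(), "read", len(records), "WKT records")

The overlap read is the usual trick for irregular, newline-delimited formats:
a rank owns every record that starts inside its nominal chunk and completes
the straddling record from the extra bytes.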
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. We also hope that the proposed taxonomy and mapping
will help new practitioners understand this complex area of research.
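As an illustration of how such a taxonomy can drive the "gap analysis"
mentioned above, the following sketch encodes a few simplified facets as
Python enums, maps a hypothetical system onto them, and reports facet values
that no surveyed system occupies; the facet and system names are stand-ins,
not the paper's exact categories.

from enum import Enum

class Transport(Enum):
    GRIDFTP = "GridFTP-style bulk transfer"
    HTTP = "HTTP-based transfer"

class Replication(Enum):
    CENTRALIZED = "centralized replica catalog"
    DECENTRALIZED = "decentralized replica catalog"

class Scheduling(Enum):
    DATA_AWARE = "co-schedules jobs with data"
    COMPUTE_ONLY = "ignores data locality"

# Map each surveyed system onto the taxonomy's facets.
systems = {
    "ExampleDataGrid": {  # hypothetical system name
        "transport": Transport.GRIDFTP,
        "replication": Replication.CENTRALIZED,
        "scheduling": Scheduling.DATA_AWARE,
    },
}

# "Gap analysis": find facet values that no surveyed system occupies.
for facet, enum_cls in [("transport", Transport),
                        ("replication", Replication),
                        ("scheduling", Scheduling)]:
    used = {s[facet] for s in systems.values()}
    for value in enum_cls:
        if value not in used:
            print(f"gap: no surveyed system with {facet} = {value.value}")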
An Overview of a Grid Architecture for Scientific Computing
This document gives an overview of a Grid testbed architecture proposal for
the NorduGrid project. The aim of the project is to establish an inter-Nordic
testbed facility for implementation of wide area computing and data handling.
The architecture is intended to define a Grid system suitable for solving
data-intensive problems at the Large Hadron Collider at CERN. We present the
various architecture components needed for such a system, and then describe
the system's dynamics by showing the task flow.
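A self-contained sketch of the generic task flow such an architecture implies,
from brokering through data staging to output registration, is given below;
all component and function names are illustrative stand-ins, not NorduGrid's
actual interfaces.

from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    inputs: list

@dataclass
class Site:
    name: str
    free_slots: int
    def run(self, job, staged):
        # Stand-in for submission to the site's local batch system.
        return f"result of {job.name} using {staged}"

@dataclass
class ReplicaCatalog:
    replicas: dict = field(default_factory=dict)
    def locate(self, names):
        return [self.replicas[n] for n in names]
    def register(self, name, url):
        self.replicas[name] = url

def submit(job, sites, catalog):
    site = max(sites, key=lambda s: s.free_slots)  # trivial "broker" choice
    staged = catalog.locate(job.inputs)            # find input replicas
    result = site.run(job, staged)                 # execute at the site
    url = f"srm://storage/{job.name}.out"          # hypothetical storage URL
    catalog.register(f"{job.name}.out", url)       # make output discoverable
    return result, url

catalog = ReplicaCatalog({"events.dat": "gsiftp://se1/events.dat"})
sites = [Site("cluster-a", 64), Site("cluster-b", 8)]
print(submit(Job("reco-001", ["events.dat"]), sites, catalog))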
HTC Scientific Computing in a Distributed Cloud Environment
This paper describes the use of a distributed cloud computing system for
high-throughput computing (HTC) scientific applications. The distributed cloud
computing system is composed of a number of separate
Infrastructure-as-a-Service (IaaS) clouds operated as a unified
infrastructure. The distributed cloud has been in production-quality operation
for two years, completing approximately 500,000 jobs; a typical workload
consists of 500 simultaneous embarrassingly parallel jobs that each run for
approximately 12 hours. We review the design and implementation of the system,
which is based on pre-existing components and a number of custom components.
We discuss the operation of the system and describe our plans for expansion
to more sites and increased computing capacity.
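The core scheduling idea, booting worker VMs on whichever IaaS cloud has spare
quota until the queued jobs are covered, can be sketched as follows; class and
function names are illustrative, and the production system composes
pre-existing batch and provisioning components rather than a loop like this.

from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    quota: int        # maximum VMs this cloud may run for us
    running: int = 0
    def spare(self):
        return self.quota - self.running
    def boot_worker(self):
        self.running += 1  # stand-in for an IaaS "launch instance" call
        print(f"booted worker on {self.name} ({self.running}/{self.quota})")

def provision(queued_jobs, clouds, jobs_per_vm=1):
    needed = (queued_jobs + jobs_per_vm - 1) // jobs_per_vm
    # Fill the cloud with the most spare capacity first.
    for cloud in sorted(clouds, key=lambda c: c.spare(), reverse=True):
        while needed > 0 and cloud.spare() > 0:
            cloud.boot_worker()
            needed -= 1
    if needed:
        print(f"{needed} worker(s) still pending; all quotas exhausted")

clouds = [Cloud("cloud-a", quota=3), Cloud("cloud-b", quota=2)]
provision(queued_jobs=4, clouds=clouds)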
A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures
Scientific problems that depend on processing large amounts of data require
overcoming challenges in multiple areas: managing large-scale data
distribution, co-placement and scheduling of data with compute resources, and
storing and transferring large volumes of data. We analyze the ecosystems of
the two prominent paradigms for data-intensive applications, hereafter referred
to as the high-performance computing and the Apache Hadoop paradigms. We
propose a basis of common terminology and functional factors upon which to
analyze the two paradigms. We discuss the concept of "Big Data Ogres"
and their facets as means of understanding and characterizing the most common
application workloads found across the two paradigms. We then discuss the
salient features of the two paradigms, and compare and contrast the two
approaches. Specifically, we examine common implementations and approaches of these
paradigms, shed light upon the reasons for their current "architecture" and
discuss some typical workloads that utilize them. In spite of the significant
software distinctions, we believe there is architectural similarity. We discuss
the potential integration of different implementations, across the different
levels and components. Our comparison progresses from a fully qualitative
examination of the two paradigms to a semi-quantitative methodology. We use a
simple and broadly used Ogre, K-means clustering, and characterize its
performance on a range of representative platforms, covering several
implementations from both paradigms. Our experiments provide insight into the
relative strengths of the two paradigms. We propose that the set of Ogres will
serve as a benchmark for evaluating the two paradigms along different
dimensions.
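For reference, the K-means kernel that the Ogre benchmark exercises can be
sketched in a few lines of NumPy; this single-node version just shows the
assignment and update steps being timed, whereas the paper benchmarks
implementations from both paradigms.

import time
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest center for each point.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: mean of each cluster (keep old center if empty).
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

points = np.random.default_rng(1).normal(size=(100_000, 3))
t0 = time.perf_counter()
centers, labels = kmeans(points, k=8)
print(f"{len(points)} points, 8 clusters: {time.perf_counter() - t0:.2f}s")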
High-Throughput Computing on High-Performance Platforms: A Case Study
The computing systems used by LHC experiments have historically consisted of
the federation of hundreds to thousands of distributed resources, ranging from
small to mid-size. In spite of the impressive scale of the existing
distributed computing solutions, the federation of small to mid-size resources
will be insufficient to meet projected future demands. This paper is a case
study of how the ATLAS experiment has embraced Titan, a DOE leadership
computing facility, in conjunction with traditional distributed
high-throughput computing to reach sustained production scales of
approximately 52M core-hours a year. The three main contributions of this
paper are: (i) a critical evaluation of design and operational considerations
to support the sustained, scalable and production usage of Titan; (ii) a
preliminary characterization of a next-generation executor for PanDA to
support new workloads and advanced execution modes; and (iii) early lessons on
how current and future experimental and observational systems can be
integrated with production supercomputers and other platforms in a general and
extensible manner.
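One simple way HTC payloads can be packed onto a leadership machine, in the
spirit of the executor described here though not its actual design, is an MPI
wrapper in which each rank launches one independent serial payload, so a
single batch allocation services many high-throughput jobs; task names and the
payload command below are stand-ins.

import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

tasks = [f"payload-{i:04d}" for i in range(size)]  # hypothetical task names
task = tasks[rank]

# Each rank launches its own serial payload (stand-in command shown here).
ret = subprocess.run(["echo", f"simulating {task}"]).returncode

# Gather exit codes on rank 0 so the wrapper can report one aggregate status.
codes = comm.gather(ret, root=0)
if rank == 0:
    failed = sum(1 for c in codes if c != 0)
    print(f"{size - failed}/{size} payloads succeeded")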
Any Data, Any Time, Anywhere: Global Data Access for Science
Data access is key to science driven by distributed high-throughput computing
(DHTC), an essential technology for many major research projects such as High
Energy Physics (HEP) experiments. However, achieving efficient data access
becomes quite difficult when many independent storage sites are involved
because users are burdened with learning the intricacies of accessing each
system and keeping careful track of data location. We present an alternative
approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining
several existing software products, AAA presents a global, unified view of
storage systems (a "data federation"), a global filesystem for software
delivery, and a workflow management system. We describe how one HEP
experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA
infrastructure, and present some simple performance metrics.
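The federation idea can be illustrated with the XRootD Python client: a file
is opened by logical name against a global redirector, which locates a site
holding a replica, so the user never tracks data location. The redirector host
and file path below are illustrative, not guaranteed to exist.

from XRootD import client
from XRootD.client.flags import OpenFlags

url = ("root://cms-xrd-global.cern.ch//"     # global redirector (assumed name)
       "store/example/dataset/events.root")  # hypothetical logical file name

f = client.File()
status, _ = f.open(url, OpenFlags.READ)      # redirector resolves a replica
if status.ok:
    status, data = f.read(offset=0, size=1024)  # read the first kilobyte
    print(f"read {len(data)} bytes without knowing which site served them")
    f.close()
else:
    print("open failed:", status.message)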