Scalability analysis of declustering methods for multidimensional range queries
Efficient storage and retrieval of multiattribute data sets has become one of the essential requirements for many data-intensive applications. The Cartesian product file has been known as an effective multiattribute file structure for partial-match and best-match queries. Several heuristic methods have been developed to decluster Cartesian product files across multiple disks to obtain high performance for disk accesses. Although the scalability of the declustering methods becomes increasingly important for systems equipped with a large number of disks, no analytic studies have been done so far. In this paper, we derive formulas describing the scalability of two popular declustering methods, Disk Modulo and Fieldwise Xor, for range queries, which are the most common type of queries. These formulas disclose the limited scalability of the declustering methods, and this is corroborated by extensive simulation experiments. From a practical point of view, the formulas given in this paper provide a simple measure that can be used to predict the response time of a given range query and to guide the selection of a declustering method under various conditions.
Scalability Analysis of Declustering Methods for Cartesian Product Files
Efficient storage and retrieval of multi-attribute datasets has become one of the essential requirements for many data-intensive applications. The Cartesian product file has been known as an effective multi-attribute file structure for partial-match and best-match queries. Several heuristic methods have been developed to decluster Cartesian product files over multiple disks to obtain high performance for disk accesses. Though the scalability of the declustering methods becomes increasingly important for systems equipped with a large number of disks, no analytic studies have been done so far. In this paper we derive formulas describing the scalability of two popular declustering methods, Disk Modulo and Fieldwise Xor, for range queries, which are the most common type of queries. These formulas disclose the limited scalability of the declustering methods and are corroborated by extensive simulation experiments. From a practical point of view, the formulas given in this paper provide a simple measure which can be used to predict the response time of a given range query and to guide the selection of a declustering method under various conditions.
(Also cross-referenced as UMIACS-TR-96-5)
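Both records above describe the same study, and the two mappings it analyzes are standard enough to state concisely. A minimal Python sketch (the range-query example and parameter names are illustrative, not taken from the paper): Disk Modulo sends the bucket with coordinates (i1, ..., id) to disk (i1 + ... + id) mod M, while Fieldwise Xor combines the coordinates with bitwise XOR and takes the result mod M (FX is normally used when M is a power of two).

    from functools import reduce

    def disk_modulo(coords, num_disks):
        # Disk Modulo (DM): bucket (i1, ..., id) -> disk (i1 + ... + id) mod M.
        return sum(coords) % num_disks

    def fieldwise_xor(coords, num_disks):
        # Fieldwise Xor (FX): bitwise XOR of the coordinates, taken mod M;
        # FX is normally used when M is a power of two.
        return reduce(lambda a, b: a ^ b, coords) % num_disks

    # Which disks does the 2-D range query [2,5] x [1,3] touch on M = 8 disks?
    buckets = [(i, j) for i in range(2, 6) for j in range(1, 4)]
    for name, place in (("DM", disk_modulo), ("FX", fieldwise_xor)):
        used = {place(b, 8) for b in buckets}
        print(f"{name}: {len(buckets)} buckets spread over {len(used)} of 8 disks")

The more evenly a query's buckets spread over the disks, the lower its response time; that per-query spread is exactly what the derived scalability formulas are meant to predict.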
Data partitioning and load balancing in parallel disk systems
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent file system that optimizes striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.
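As a hedged sketch of the striping idea referred to above (round-robin block placement; the stripe-unit parameter and function names are illustrative, not the paper's file system):

    def stripe_mapping(block_no, num_disks, stripe_unit=4):
        # Round-robin striping: consecutive runs of `stripe_unit` blocks go to
        # consecutive disks, so one large request spans many disks
        # (intra-request parallelism) while small independent requests can
        # proceed on different disks (inter-request parallelism).
        stripe = block_no // stripe_unit
        disk = stripe % num_disks
        offset = (stripe // num_disks) * stripe_unit + block_no % stripe_unit
        return disk, offset

    # Example: where the first 16 blocks of a file land on 4 disks.
    for b in range(16):
        disk, offset = stripe_mapping(b, num_disks=4)
        print(f"block {b:2d} -> disk {disk}, offset {offset}")

The stripe unit is the tuning knob the abstract alludes to: small units favor intra-request parallelism, large units favor inter-request parallelism.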
Foam: A General-Purpose Cellular Monte Carlo Event Generator
A general-purpose, self-adapting Monte Carlo (MC) event generator (simulator) is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles, or Cartesian products of them. The grid of cells, called a "foam", is produced by binary splits of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by an algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of singularities in the distribution. Like any MC generator, it can also be used for MC integration. On a typical personal computer CPU, the program is able to perform adaptive integration/simulation at a relatively small number of dimensions. With the continuing progress in CPU power, this limit will inevitably shift to ever higher dimensions. Foam is aimed (and already tested) as a component in MC event generators for high energy physics experiments. A few simple examples of related applications are presented. Foam is written in fully object-oriented style, in the C++ language. Two other versions, with slightly limited functionality, are available in the Fortran77 language. The source codes are available from http://jadach.home.cern.ch/jadach
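A one-dimensional caricature of the cellular build-up may make the idea concrete. This is a hedged sketch, not Foam's actual C++ API: the "loss" criterion (sampled variance scaled by cell width) and all parameters are illustrative stand-ins for Foam's weight-optimization rules.

    import random

    def foam_like_integrate(f, lo=0.0, hi=1.0, n_cells=64, n_samples=256):
        # Keep bisecting the cell whose estimated variance contribution
        # ("loss") is largest, then sum the per-cell MC estimates
        # (stratified sampling over the final grid of cells).
        def explore(a, b):
            ys = [f(random.uniform(a, b)) for _ in range(n_samples)]
            mean = sum(ys) / n_samples
            var = sum((y - mean) ** 2 for y in ys) / n_samples
            return {"a": a, "b": b, "est": mean * (b - a),
                    "loss": var * (b - a) ** 2}

        cells = [explore(lo, hi)]
        while len(cells) < n_cells:
            worst = max(cells, key=lambda c: c["loss"])  # next cell to split
            cells.remove(worst)
            m = 0.5 * (worst["a"] + worst["b"])
            cells.append(explore(worst["a"], m))
            cells.append(explore(m, worst["b"]))
        return sum(c["est"] for c in cells)

    # Example: a sharply peaked integrand that defeats naive uniform sampling.
    print(foam_like_integrate(lambda x: 1.0 / (1e-3 + (x - 0.3) ** 2)))

Because the splits concentrate where the sampled variance is largest, the grid adapts to peaks and singular regions automatically, which is the essence of the "foam" construction.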
Solution of the Skyrme-Hartree-Fock-Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VII) HFODD (v2.49t): a new version of the program
We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme Hartree-Fock (HF) or Skyrme Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF+BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special care has been paid to using the code on massively parallel leadership-class computers. For this purpose, the following features are now available in this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors of the previously published version were corrected.
Comment: Accepted for publication in Computer Physics Communications. Program files re-submitted to the Comp. Phys. Comm. Program Library after correction of several minor bugs.
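The Augmented Lagrangian Method named in item (ii) of the numerical list is a standard constrained-minimization technique: minimize f(x) subject to g(x) = 0 by repeatedly minimizing L(x; lam, mu) = f(x) + lam*g(x) + (mu/2)*g(x)^2 and updating the multiplier. A generic textbook sketch on a toy problem follows; nothing here reflects HFODD's actual Fortran implementation.

    def augmented_lagrangian(g, grad_L, x0, lam=0.0, mu=10.0, outer=50):
        # Generic ALM loop: inner gradient descent on the augmented
        # Lagrangian, then the multiplier update lam += mu * g(x).
        x = x0
        for _ in range(outer):
            for _ in range(200):            # inner unconstrained minimization
                x -= 0.01 * grad_L(x, lam, mu)
            lam += mu * g(x)                # push the constraint toward g(x) = 0
        return x, lam

    # Toy problem: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 = 0.
    g = lambda x: x - 1.0
    grad_L = lambda x, lam, mu: 2.0 * (x - 2.0) + lam + mu * (x - 1.0)
    x_star, lam_star = augmented_lagrangian(g, grad_L, x0=0.0)
    print(f"x* = {x_star:.4f} (exact 1), lambda* = {lam_star:.4f} (exact 2)")

In a multi-constraint HFB calculation the same pattern applies with one multiplier per constraint (e.g. on the multipole moments), which is what makes ALM attractive for the large-scale constrained surveys the abstract mentions.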
Scalable Storage for Digital Libraries
I propose a storage system optimised for digital libraries. Its key features are its heterogeneous scalability; its integration and exploitation of rich semantic metadata associated with digital objects; its use of a name space; and its aggressive performance optimisation in the digital library domain.
Numerical aerodynamic simulation facility feasibility study
There were three major issues examined in the feasibility study. First, the ability of the proposed system architecture to support the anticipated workload was evaluated. Second, the throughput of the computational engine (the flow model processor) was studied using real application programs. Third, the availability, reliability, and maintainability of the system were modeled. The evaluations were based on the baseline systems. The results show that the implementation of the Numerical Aerodynamic Simulation Facility, in the form considered, would indeed be a feasible project with an acceptable level of risk. The technology required (both hardware and software) either already exists or, in the case of a few parts, is expected to be announced this year. Facets of the work described include the hardware configuration, software, user language, and fault tolerance.